Feb 13 15:17:45.938253 kernel: Booting Linux on physical CPU 0x0000000000 [0x413fd0c1]
Feb 13 15:17:45.938274 kernel: Linux version 6.6.71-flatcar (build@pony-truck.infra.kinvolk.io) (aarch64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p1) 13.3.1 20240614, GNU ld (Gentoo 2.42 p6) 2.42.0) #1 SMP PREEMPT Thu Feb 13 13:57:00 -00 2025
Feb 13 15:17:45.938284 kernel: KASLR enabled
Feb 13 15:17:45.938290 kernel: efi: EFI v2.7 by EDK II
Feb 13 15:17:45.938296 kernel: efi: SMBIOS 3.0=0xdced0000 MEMATTR=0xdbbbf018 ACPI 2.0=0xd9b43018 RNG=0xd9b43a18 MEMRESERVE=0xd9b40d98
Feb 13 15:17:45.938301 kernel: random: crng init done
Feb 13 15:17:45.938308 kernel: secureboot: Secure boot disabled
Feb 13 15:17:45.938314 kernel: ACPI: Early table checksum verification disabled
Feb 13 15:17:45.938320 kernel: ACPI: RSDP 0x00000000D9B43018 000024 (v02 BOCHS )
Feb 13 15:17:45.938328 kernel: ACPI: XSDT 0x00000000D9B43F18 000064 (v01 BOCHS BXPC 00000001 01000013)
Feb 13 15:17:45.938335 kernel: ACPI: FACP 0x00000000D9B43B18 000114 (v06 BOCHS BXPC 00000001 BXPC 00000001)
Feb 13 15:17:45.938340 kernel: ACPI: DSDT 0x00000000D9B41018 0014A2 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Feb 13 15:17:45.938346 kernel: ACPI: APIC 0x00000000D9B43C98 0001A8 (v04 BOCHS BXPC 00000001 BXPC 00000001)
Feb 13 15:17:45.938352 kernel: ACPI: PPTT 0x00000000D9B43098 00009C (v02 BOCHS BXPC 00000001 BXPC 00000001)
Feb 13 15:17:45.938360 kernel: ACPI: GTDT 0x00000000D9B43818 000060 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Feb 13 15:17:45.938368 kernel: ACPI: MCFG 0x00000000D9B43A98 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001)
Feb 13 15:17:45.938374 kernel: ACPI: SPCR 0x00000000D9B43918 000050 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Feb 13 15:17:45.938381 kernel: ACPI: DBG2 0x00000000D9B43998 000057 (v00 BOCHS BXPC 00000001 BXPC 00000001)
Feb 13 15:17:45.938387 kernel: ACPI: IORT 0x00000000D9B43198 000080 (v03 BOCHS BXPC 00000001 BXPC 00000001)
Feb 13 15:17:45.938393 kernel: ACPI: SPCR: console: pl011,mmio,0x9000000,9600
Feb 13 15:17:45.938399 kernel: NUMA: Failed to initialise from firmware
Feb 13 15:17:45.938405 kernel: NUMA: Faking a node at [mem 0x0000000040000000-0x00000000dcffffff]
Feb 13 15:17:45.938412 kernel: NUMA: NODE_DATA [mem 0xdc958800-0xdc95dfff]
Feb 13 15:17:45.938418 kernel: Zone ranges:
Feb 13 15:17:45.938435 kernel: DMA [mem 0x0000000040000000-0x00000000dcffffff]
Feb 13 15:17:45.938460 kernel: DMA32 empty
Feb 13 15:17:45.938466 kernel: Normal empty
Feb 13 15:17:45.938473 kernel: Movable zone start for each node
Feb 13 15:17:45.938479 kernel: Early memory node ranges
Feb 13 15:17:45.938486 kernel: node 0: [mem 0x0000000040000000-0x00000000d976ffff]
Feb 13 15:17:45.938492 kernel: node 0: [mem 0x00000000d9770000-0x00000000d9b3ffff]
Feb 13 15:17:45.938498 kernel: node 0: [mem 0x00000000d9b40000-0x00000000dce1ffff]
Feb 13 15:17:45.938504 kernel: node 0: [mem 0x00000000dce20000-0x00000000dceaffff]
Feb 13 15:17:45.938510 kernel: node 0: [mem 0x00000000dceb0000-0x00000000dcebffff]
Feb 13 15:17:45.938516 kernel: node 0: [mem 0x00000000dcec0000-0x00000000dcfdffff]
Feb 13 15:17:45.938522 kernel: node 0: [mem 0x00000000dcfe0000-0x00000000dcffffff]
Feb 13 15:17:45.938528 kernel: Initmem setup node 0 [mem 0x0000000040000000-0x00000000dcffffff]
Feb 13 15:17:45.938536 kernel: On node 0, zone DMA: 12288 pages in unavailable ranges
Feb 13 15:17:45.938542 kernel: psci: probing for conduit method from ACPI.
Feb 13 15:17:45.938549 kernel: psci: PSCIv1.1 detected in firmware.
Feb 13 15:17:45.938557 kernel: psci: Using standard PSCI v0.2 function IDs
Feb 13 15:17:45.938564 kernel: psci: Trusted OS migration not required
Feb 13 15:17:45.938570 kernel: psci: SMC Calling Convention v1.1
Feb 13 15:17:45.938579 kernel: smccc: KVM: hypervisor services detected (0x00000000 0x00000000 0x00000000 0x00000003)
Feb 13 15:17:45.938586 kernel: percpu: Embedded 31 pages/cpu s86696 r8192 d32088 u126976
Feb 13 15:17:45.938592 kernel: pcpu-alloc: s86696 r8192 d32088 u126976 alloc=31*4096
Feb 13 15:17:45.938599 kernel: pcpu-alloc: [0] 0 [0] 1 [0] 2 [0] 3
Feb 13 15:17:45.938606 kernel: Detected PIPT I-cache on CPU0
Feb 13 15:17:45.938613 kernel: CPU features: detected: GIC system register CPU interface
Feb 13 15:17:45.938619 kernel: CPU features: detected: Hardware dirty bit management
Feb 13 15:17:45.938626 kernel: CPU features: detected: Spectre-v4
Feb 13 15:17:45.938632 kernel: CPU features: detected: Spectre-BHB
Feb 13 15:17:45.938639 kernel: CPU features: kernel page table isolation forced ON by KASLR
Feb 13 15:17:45.938647 kernel: CPU features: detected: Kernel page table isolation (KPTI)
Feb 13 15:17:45.938653 kernel: CPU features: detected: ARM erratum 1418040
Feb 13 15:17:45.938660 kernel: CPU features: detected: SSBS not fully self-synchronizing
Feb 13 15:17:45.938667 kernel: alternatives: applying boot alternatives
Feb 13 15:17:45.938675 kernel: Kernel command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected acpi=force verity.usrhash=07e9b8867aadd0b2e77ba5338d18cdd10706c658e0d745a78e129bcae9a0e4c6
Feb 13 15:17:45.938682 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Feb 13 15:17:45.938689 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Feb 13 15:17:45.938695 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Feb 13 15:17:45.938702 kernel: Fallback order for Node 0: 0
Feb 13 15:17:45.938708 kernel: Built 1 zonelists, mobility grouping on. Total pages: 633024
Feb 13 15:17:45.938715 kernel: Policy zone: DMA
Feb 13 15:17:45.938722 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Feb 13 15:17:45.938729 kernel: software IO TLB: area num 4.
Feb 13 15:17:45.938736 kernel: software IO TLB: mapped [mem 0x00000000d2e00000-0x00000000d6e00000] (64MB)
Feb 13 15:17:45.938743 kernel: Memory: 2386324K/2572288K available (10240K kernel code, 2184K rwdata, 8096K rodata, 39680K init, 897K bss, 185964K reserved, 0K cma-reserved)
Feb 13 15:17:45.938749 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1
Feb 13 15:17:45.938756 kernel: rcu: Preemptible hierarchical RCU implementation.
Feb 13 15:17:45.938763 kernel: rcu: RCU event tracing is enabled.
Feb 13 15:17:45.938770 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4.
Feb 13 15:17:45.938777 kernel: Trampoline variant of Tasks RCU enabled.
Feb 13 15:17:45.938783 kernel: Tracing variant of Tasks RCU enabled.
Feb 13 15:17:45.938790 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Feb 13 15:17:45.938797 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4
Feb 13 15:17:45.938805 kernel: NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0
Feb 13 15:17:45.938811 kernel: GICv3: 256 SPIs implemented
Feb 13 15:17:45.938818 kernel: GICv3: 0 Extended SPIs implemented
Feb 13 15:17:45.938824 kernel: Root IRQ handler: gic_handle_irq
Feb 13 15:17:45.938831 kernel: GICv3: GICv3 features: 16 PPIs, DirectLPI
Feb 13 15:17:45.938838 kernel: GICv3: CPU0: found redistributor 0 region 0:0x00000000080a0000
Feb 13 15:17:45.938844 kernel: ITS [mem 0x08080000-0x0809ffff]
Feb 13 15:17:45.938851 kernel: ITS@0x0000000008080000: allocated 8192 Devices @400c0000 (indirect, esz 8, psz 64K, shr 1)
Feb 13 15:17:45.938857 kernel: ITS@0x0000000008080000: allocated 8192 Interrupt Collections @400d0000 (flat, esz 8, psz 64K, shr 1)
Feb 13 15:17:45.938864 kernel: GICv3: using LPI property table @0x00000000400f0000
Feb 13 15:17:45.938871 kernel: GICv3: CPU0: using allocated LPI pending table @0x0000000040100000
Feb 13 15:17:45.938879 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Feb 13 15:17:45.938886 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Feb 13 15:17:45.938892 kernel: arch_timer: cp15 timer(s) running at 25.00MHz (virt).
Feb 13 15:17:45.938899 kernel: clocksource: arch_sys_counter: mask: 0xffffffffffffff max_cycles: 0x5c40939b5, max_idle_ns: 440795202646 ns
Feb 13 15:17:45.938905 kernel: sched_clock: 56 bits at 25MHz, resolution 40ns, wraps every 4398046511100ns
Feb 13 15:17:45.938912 kernel: arm-pv: using stolen time PV
Feb 13 15:17:45.938924 kernel: Console: colour dummy device 80x25
Feb 13 15:17:45.938933 kernel: ACPI: Core revision 20230628
Feb 13 15:17:45.938941 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 50.00 BogoMIPS (lpj=25000)
Feb 13 15:17:45.938947 kernel: pid_max: default: 32768 minimum: 301
Feb 13 15:17:45.938957 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity
Feb 13 15:17:45.938964 kernel: landlock: Up and running.
Feb 13 15:17:45.938971 kernel: SELinux: Initializing.
Feb 13 15:17:45.938978 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Feb 13 15:17:45.938985 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Feb 13 15:17:45.938992 kernel: RCU Tasks: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Feb 13 15:17:45.938999 kernel: RCU Tasks Trace: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Feb 13 15:17:45.939006 kernel: rcu: Hierarchical SRCU implementation.
Feb 13 15:17:45.939013 kernel: rcu: Max phase no-delay instances is 400.
Feb 13 15:17:45.939035 kernel: Platform MSI: ITS@0x8080000 domain created
Feb 13 15:17:45.939041 kernel: PCI/MSI: ITS@0x8080000 domain created
Feb 13 15:17:45.939048 kernel: Remapping and enabling EFI services.
Feb 13 15:17:45.939056 kernel: smp: Bringing up secondary CPUs ...
Feb 13 15:17:45.939063 kernel: Detected PIPT I-cache on CPU1
Feb 13 15:17:45.939069 kernel: GICv3: CPU1: found redistributor 1 region 0:0x00000000080c0000
Feb 13 15:17:45.939076 kernel: GICv3: CPU1: using allocated LPI pending table @0x0000000040110000
Feb 13 15:17:45.939083 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Feb 13 15:17:45.939090 kernel: CPU1: Booted secondary processor 0x0000000001 [0x413fd0c1]
Feb 13 15:17:45.939097 kernel: Detected PIPT I-cache on CPU2
Feb 13 15:17:45.939105 kernel: GICv3: CPU2: found redistributor 2 region 0:0x00000000080e0000
Feb 13 15:17:45.939112 kernel: GICv3: CPU2: using allocated LPI pending table @0x0000000040120000
Feb 13 15:17:45.939124 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Feb 13 15:17:45.939132 kernel: CPU2: Booted secondary processor 0x0000000002 [0x413fd0c1]
Feb 13 15:17:45.939139 kernel: Detected PIPT I-cache on CPU3
Feb 13 15:17:45.939146 kernel: GICv3: CPU3: found redistributor 3 region 0:0x0000000008100000
Feb 13 15:17:45.939154 kernel: GICv3: CPU3: using allocated LPI pending table @0x0000000040130000
Feb 13 15:17:45.939161 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Feb 13 15:17:45.939169 kernel: CPU3: Booted secondary processor 0x0000000003 [0x413fd0c1]
Feb 13 15:17:45.939177 kernel: smp: Brought up 1 node, 4 CPUs
Feb 13 15:17:45.939185 kernel: SMP: Total of 4 processors activated.
Feb 13 15:17:45.939192 kernel: CPU features: detected: 32-bit EL0 Support
Feb 13 15:17:45.939199 kernel: CPU features: detected: Data cache clean to the PoU not required for I/D coherence
Feb 13 15:17:45.939207 kernel: CPU features: detected: Common not Private translations
Feb 13 15:17:45.939214 kernel: CPU features: detected: CRC32 instructions
Feb 13 15:17:45.939221 kernel: CPU features: detected: Enhanced Virtualization Traps
Feb 13 15:17:45.939228 kernel: CPU features: detected: RCpc load-acquire (LDAPR)
Feb 13 15:17:45.939237 kernel: CPU features: detected: LSE atomic instructions
Feb 13 15:17:45.939244 kernel: CPU features: detected: Privileged Access Never
Feb 13 15:17:45.939251 kernel: CPU features: detected: RAS Extension Support
Feb 13 15:17:45.939258 kernel: CPU features: detected: Speculative Store Bypassing Safe (SSBS)
Feb 13 15:17:45.939278 kernel: CPU: All CPU(s) started at EL1
Feb 13 15:17:45.939286 kernel: alternatives: applying system-wide alternatives
Feb 13 15:17:45.939294 kernel: devtmpfs: initialized
Feb 13 15:17:45.939301 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Feb 13 15:17:45.939309 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear)
Feb 13 15:17:45.939318 kernel: pinctrl core: initialized pinctrl subsystem
Feb 13 15:17:45.939325 kernel: SMBIOS 3.0.0 present.
Feb 13 15:17:45.939332 kernel: DMI: QEMU KVM Virtual Machine, BIOS unknown 02/02/2022
Feb 13 15:17:45.939340 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Feb 13 15:17:45.939347 kernel: DMA: preallocated 512 KiB GFP_KERNEL pool for atomic allocations
Feb 13 15:17:45.939355 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations
Feb 13 15:17:45.939362 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations
Feb 13 15:17:45.939370 kernel: audit: initializing netlink subsys (disabled)
Feb 13 15:17:45.939377 kernel: audit: type=2000 audit(0.022:1): state=initialized audit_enabled=0 res=1
Feb 13 15:17:45.939386 kernel: thermal_sys: Registered thermal governor 'step_wise'
Feb 13 15:17:45.939393 kernel: cpuidle: using governor menu
Feb 13 15:17:45.939401 kernel: hw-breakpoint: found 6 breakpoint and 4 watchpoint registers.
Feb 13 15:17:45.939408 kernel: ASID allocator initialised with 32768 entries
Feb 13 15:17:45.939416 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Feb 13 15:17:45.939484 kernel: Serial: AMBA PL011 UART driver
Feb 13 15:17:45.939494 kernel: Modules: 2G module region forced by RANDOMIZE_MODULE_REGION_FULL
Feb 13 15:17:45.939501 kernel: Modules: 0 pages in range for non-PLT usage
Feb 13 15:17:45.939509 kernel: Modules: 508960 pages in range for PLT usage
Feb 13 15:17:45.939518 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Feb 13 15:17:45.939526 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 1.00 GiB page
Feb 13 15:17:45.939533 kernel: HugeTLB: registered 32.0 MiB page size, pre-allocated 0 pages
Feb 13 15:17:45.939540 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 32.0 MiB page
Feb 13 15:17:45.939548 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Feb 13 15:17:45.939555 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 2.00 MiB page
Feb 13 15:17:45.939562 kernel: HugeTLB: registered 64.0 KiB page size, pre-allocated 0 pages
Feb 13 15:17:45.939570 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 64.0 KiB page
Feb 13 15:17:45.939577 kernel: ACPI: Added _OSI(Module Device)
Feb 13 15:17:45.939586 kernel: ACPI: Added _OSI(Processor Device)
Feb 13 15:17:45.939593 kernel: ACPI: Added _OSI(3.0 _SCP Extensions)
Feb 13 15:17:45.939601 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Feb 13 15:17:45.939608 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Feb 13 15:17:45.939616 kernel: ACPI: Interpreter enabled
Feb 13 15:17:45.939623 kernel: ACPI: Using GIC for interrupt routing
Feb 13 15:17:45.939631 kernel: ACPI: MCFG table detected, 1 entries
Feb 13 15:17:45.939638 kernel: ARMH0011:00: ttyAMA0 at MMIO 0x9000000 (irq = 12, base_baud = 0) is a SBSA
Feb 13 15:17:45.939645 kernel: printk: console [ttyAMA0] enabled
Feb 13 15:17:45.939654 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Feb 13 15:17:45.939806 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Feb 13 15:17:45.939880 kernel: acpi PNP0A08:00: _OSC: platform does not support [LTR]
Feb 13 15:17:45.939961 kernel: acpi PNP0A08:00: _OSC: OS now controls [PCIeHotplug PME AER PCIeCapability]
Feb 13 15:17:45.940029 kernel: acpi PNP0A08:00: ECAM area [mem 0x4010000000-0x401fffffff] reserved by PNP0C02:00
Feb 13 15:17:45.940105 kernel: acpi PNP0A08:00: ECAM at [mem 0x4010000000-0x401fffffff] for [bus 00-ff]
Feb 13 15:17:45.940115 kernel: ACPI: Remapped I/O 0x000000003eff0000 to [io 0x0000-0xffff window]
Feb 13 15:17:45.940126 kernel: PCI host bridge to bus 0000:00
Feb 13 15:17:45.940211 kernel: pci_bus 0000:00: root bus resource [mem 0x10000000-0x3efeffff window]
Feb 13 15:17:45.940272 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0xffff window]
Feb 13 15:17:45.940331 kernel: pci_bus 0000:00: root bus resource [mem 0x8000000000-0xffffffffff window]
Feb 13 15:17:45.940393 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Feb 13 15:17:45.940514 kernel: pci 0000:00:00.0: [1b36:0008] type 00 class 0x060000
Feb 13 15:17:45.940606 kernel: pci 0000:00:01.0: [1af4:1005] type 00 class 0x00ff00
Feb 13 15:17:45.940680 kernel: pci 0000:00:01.0: reg 0x10: [io 0x0000-0x001f]
Feb 13 15:17:45.940765 kernel: pci 0000:00:01.0: reg 0x14: [mem 0x10000000-0x10000fff]
Feb 13 15:17:45.940831 kernel: pci 0000:00:01.0: reg 0x20: [mem 0x8000000000-0x8000003fff 64bit pref]
Feb 13 15:17:45.940900 kernel: pci 0000:00:01.0: BAR 4: assigned [mem 0x8000000000-0x8000003fff 64bit pref]
Feb 13 15:17:45.940980 kernel: pci 0000:00:01.0: BAR 1: assigned [mem 0x10000000-0x10000fff]
Feb 13 15:17:45.941058 kernel: pci 0000:00:01.0: BAR 0: assigned [io 0x1000-0x101f]
Feb 13 15:17:45.941118 kernel: pci_bus 0000:00: resource 4 [mem 0x10000000-0x3efeffff window]
Feb 13 15:17:45.941178 kernel: pci_bus 0000:00: resource 5 [io 0x0000-0xffff window]
Feb 13 15:17:45.941236 kernel: pci_bus 0000:00: resource 6 [mem 0x8000000000-0xffffffffff window]
Feb 13 15:17:45.941246 kernel: ACPI: PCI: Interrupt link GSI0 configured for IRQ 35
Feb 13 15:17:45.941253 kernel: ACPI: PCI: Interrupt link GSI1 configured for IRQ 36
Feb 13 15:17:45.941260 kernel: ACPI: PCI: Interrupt link GSI2 configured for IRQ 37
Feb 13 15:17:45.941268 kernel: ACPI: PCI: Interrupt link GSI3 configured for IRQ 38
Feb 13 15:17:45.941275 kernel: iommu: Default domain type: Translated
Feb 13 15:17:45.941282 kernel: iommu: DMA domain TLB invalidation policy: strict mode
Feb 13 15:17:45.941291 kernel: efivars: Registered efivars operations
Feb 13 15:17:45.941299 kernel: vgaarb: loaded
Feb 13 15:17:45.941306 kernel: clocksource: Switched to clocksource arch_sys_counter
Feb 13 15:17:45.941313 kernel: VFS: Disk quotas dquot_6.6.0
Feb 13 15:17:45.941321 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Feb 13 15:17:45.941328 kernel: pnp: PnP ACPI init
Feb 13 15:17:45.941400 kernel: system 00:00: [mem 0x4010000000-0x401fffffff window] could not be reserved
Feb 13 15:17:45.941410 kernel: pnp: PnP ACPI: found 1 devices
Feb 13 15:17:45.941419 kernel: NET: Registered PF_INET protocol family
Feb 13 15:17:45.941458 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Feb 13 15:17:45.941466 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Feb 13 15:17:45.941473 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Feb 13 15:17:45.941481 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Feb 13 15:17:45.941488 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
Feb 13 15:17:45.941495 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Feb 13 15:17:45.941503 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Feb 13 15:17:45.941510 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Feb 13 15:17:45.941519 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Feb 13 15:17:45.941526 kernel: PCI: CLS 0 bytes, default 64
Feb 13 15:17:45.941533 kernel: kvm [1]: HYP mode not available
Feb 13 15:17:45.941540 kernel: Initialise system trusted keyrings
Feb 13 15:17:45.941548 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Feb 13 15:17:45.941555 kernel: Key type asymmetric registered
Feb 13 15:17:45.941562 kernel: Asymmetric key parser 'x509' registered
Feb 13 15:17:45.941569 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250)
Feb 13 15:17:45.941576 kernel: io scheduler mq-deadline registered
Feb 13 15:17:45.941584 kernel: io scheduler kyber registered
Feb 13 15:17:45.941592 kernel: io scheduler bfq registered
Feb 13 15:17:45.941599 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0C:00/input/input0
Feb 13 15:17:45.941606 kernel: ACPI: button: Power Button [PWRB]
Feb 13 15:17:45.941613 kernel: ACPI: \_SB_.PCI0.GSI1: Enabled at IRQ 36
Feb 13 15:17:45.941685 kernel: virtio-pci 0000:00:01.0: enabling device (0005 -> 0007)
Feb 13 15:17:45.941713 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Feb 13 15:17:45.941720 kernel: thunder_xcv, ver 1.0
Feb 13 15:17:45.941727 kernel: thunder_bgx, ver 1.0
Feb 13 15:17:45.941736 kernel: nicpf, ver 1.0
Feb 13 15:17:45.941743 kernel: nicvf, ver 1.0
Feb 13 15:17:45.941813 kernel: rtc-efi rtc-efi.0: registered as rtc0
Feb 13 15:17:45.941872 kernel: rtc-efi rtc-efi.0: setting system clock to 2025-02-13T15:17:45 UTC (1739459865)
Feb 13 15:17:45.941881 kernel: hid: raw HID events driver (C) Jiri Kosina
Feb 13 15:17:45.941889 kernel: hw perfevents: enabled with armv8_pmuv3_0 PMU driver, 7 counters available
Feb 13 15:17:45.941896 kernel: watchdog: Delayed init of the lockup detector failed: -19
Feb 13 15:17:45.941903 kernel: watchdog: Hard watchdog permanently disabled
Feb 13 15:17:45.941913 kernel: NET: Registered PF_INET6 protocol family
Feb 13 15:17:45.941927 kernel: Segment Routing with IPv6
Feb 13 15:17:45.941935 kernel: In-situ OAM (IOAM) with IPv6
Feb 13 15:17:45.941943 kernel: NET: Registered PF_PACKET protocol family
Feb 13 15:17:45.941950 kernel: Key type dns_resolver registered
Feb 13 15:17:45.941957 kernel: registered taskstats version 1
Feb 13 15:17:45.941964 kernel: Loading compiled-in X.509 certificates
Feb 13 15:17:45.941972 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.71-flatcar: 4531cdb19689f90a81e7969ac7d8e25a95254f51'
Feb 13 15:17:45.941979 kernel: Key type .fscrypt registered
Feb 13 15:17:45.941987 kernel: Key type fscrypt-provisioning registered
Feb 13 15:17:45.941995 kernel: ima: No TPM chip found, activating TPM-bypass!
Feb 13 15:17:45.942002 kernel: ima: Allocated hash algorithm: sha1
Feb 13 15:17:45.942009 kernel: ima: No architecture policies found
Feb 13 15:17:45.942017 kernel: alg: No test for fips(ansi_cprng) (fips_ansi_cprng)
Feb 13 15:17:45.942024 kernel: clk: Disabling unused clocks
Feb 13 15:17:45.942031 kernel: Freeing unused kernel memory: 39680K
Feb 13 15:17:45.942039 kernel: Run /init as init process
Feb 13 15:17:45.942046 kernel: with arguments:
Feb 13 15:17:45.942054 kernel: /init
Feb 13 15:17:45.942061 kernel: with environment:
Feb 13 15:17:45.942068 kernel: HOME=/
Feb 13 15:17:45.942075 kernel: TERM=linux
Feb 13 15:17:45.942082 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
Feb 13 15:17:45.942091 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Feb 13 15:17:45.942100 systemd[1]: Detected virtualization kvm.
Feb 13 15:17:45.942108 systemd[1]: Detected architecture arm64.
Feb 13 15:17:45.942117 systemd[1]: Running in initrd.
Feb 13 15:17:45.942124 systemd[1]: No hostname configured, using default hostname.
Feb 13 15:17:45.942131 systemd[1]: Hostname set to .
Feb 13 15:17:45.942139 systemd[1]: Initializing machine ID from VM UUID.
Feb 13 15:17:45.942147 systemd[1]: Queued start job for default target initrd.target.
Feb 13 15:17:45.942155 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Feb 13 15:17:45.942162 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Feb 13 15:17:45.942170 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Feb 13 15:17:45.942180 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Feb 13 15:17:45.942188 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Feb 13 15:17:45.942195 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Feb 13 15:17:45.942204 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Feb 13 15:17:45.942212 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Feb 13 15:17:45.942220 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Feb 13 15:17:45.942228 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Feb 13 15:17:45.942237 systemd[1]: Reached target paths.target - Path Units.
Feb 13 15:17:45.942245 systemd[1]: Reached target slices.target - Slice Units.
Feb 13 15:17:45.942253 systemd[1]: Reached target swap.target - Swaps.
Feb 13 15:17:45.942260 systemd[1]: Reached target timers.target - Timer Units.
Feb 13 15:17:45.942268 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Feb 13 15:17:45.942276 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Feb 13 15:17:45.942284 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Feb 13 15:17:45.942292 systemd[1]: Listening on systemd-journald.socket - Journal Socket.
Feb 13 15:17:45.942301 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Feb 13 15:17:45.942309 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Feb 13 15:17:45.942317 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Feb 13 15:17:45.942325 systemd[1]: Reached target sockets.target - Socket Units.
Feb 13 15:17:45.942333 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Feb 13 15:17:45.942341 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Feb 13 15:17:45.942348 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Feb 13 15:17:45.942356 systemd[1]: Starting systemd-fsck-usr.service...
Feb 13 15:17:45.942364 systemd[1]: Starting systemd-journald.service - Journal Service...
Feb 13 15:17:45.942373 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Feb 13 15:17:45.942383 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Feb 13 15:17:45.942401 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Feb 13 15:17:45.942410 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Feb 13 15:17:45.942418 systemd[1]: Finished systemd-fsck-usr.service.
Feb 13 15:17:45.942434 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Feb 13 15:17:45.942445 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Feb 13 15:17:45.942476 systemd-journald[239]: Collecting audit messages is disabled.
Feb 13 15:17:45.942497 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Feb 13 15:17:45.942505 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Feb 13 15:17:45.942514 systemd-journald[239]: Journal started
Feb 13 15:17:45.942532 systemd-journald[239]: Runtime Journal (/run/log/journal/b23277a3e33c426693a70cbfb08c15ab) is 5.9M, max 47.3M, 41.4M free.
Feb 13 15:17:45.928952 systemd-modules-load[240]: Inserted module 'overlay'
Feb 13 15:17:45.947483 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Feb 13 15:17:45.947523 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Feb 13 15:17:45.949437 kernel: Bridge firewalling registered
Feb 13 15:17:45.949441 systemd-modules-load[240]: Inserted module 'br_netfilter'
Feb 13 15:17:45.951458 systemd[1]: Started systemd-journald.service - Journal Service.
Feb 13 15:17:45.951901 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Feb 13 15:17:45.952876 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Feb 13 15:17:45.957353 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Feb 13 15:17:45.958690 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Feb 13 15:17:45.968619 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Feb 13 15:17:45.969879 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Feb 13 15:17:45.971692 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Feb 13 15:17:45.988672 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Feb 13 15:17:45.990910 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Feb 13 15:17:45.999476 dracut-cmdline[275]: dracut-dracut-053
Feb 13 15:17:46.003402 dracut-cmdline[275]: Using kernel command line parameters: rd.driver.pre=btrfs BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected acpi=force verity.usrhash=07e9b8867aadd0b2e77ba5338d18cdd10706c658e0d745a78e129bcae9a0e4c6
Feb 13 15:17:46.018512 systemd-resolved[277]: Positive Trust Anchors:
Feb 13 15:17:46.018587 systemd-resolved[277]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Feb 13 15:17:46.018618 systemd-resolved[277]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Feb 13 15:17:46.024662 systemd-resolved[277]: Defaulting to hostname 'linux'.
Feb 13 15:17:46.025662 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Feb 13 15:17:46.026601 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Feb 13 15:17:46.076471 kernel: SCSI subsystem initialized
Feb 13 15:17:46.081451 kernel: Loading iSCSI transport class v2.0-870.
Feb 13 15:17:46.089467 kernel: iscsi: registered transport (tcp)
Feb 13 15:17:46.103443 kernel: iscsi: registered transport (qla4xxx)
Feb 13 15:17:46.103464 kernel: QLogic iSCSI HBA Driver
Feb 13 15:17:46.148309 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Feb 13 15:17:46.160627 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Feb 13 15:17:46.178827 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Feb 13 15:17:46.178894 kernel: device-mapper: uevent: version 1.0.3
Feb 13 15:17:46.178923 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com
Feb 13 15:17:46.225484 kernel: raid6: neonx8 gen() 15657 MB/s
Feb 13 15:17:46.242479 kernel: raid6: neonx4 gen() 15432 MB/s
Feb 13 15:17:46.263016 kernel: raid6: neonx2 gen() 13236 MB/s
Feb 13 15:17:46.279478 kernel: raid6: neonx1 gen() 10019 MB/s
Feb 13 15:17:46.296560 kernel: raid6: int64x8 gen() 6956 MB/s
Feb 13 15:17:46.313471 kernel: raid6: int64x4 gen() 7352 MB/s
Feb 13 15:17:46.330472 kernel: raid6: int64x2 gen() 6121 MB/s
Feb 13 15:17:46.347475 kernel: raid6: int64x1 gen() 5050 MB/s
Feb 13 15:17:46.347542 kernel: raid6: using algorithm neonx8 gen() 15657 MB/s
Feb 13 15:17:46.364478 kernel: raid6: .... xor() 11928 MB/s, rmw enabled
Feb 13 15:17:46.364541 kernel: raid6: using neon recovery algorithm
Feb 13 15:17:46.369468 kernel: xor: measuring software checksum speed
Feb 13 15:17:46.369524 kernel: 8regs : 19182 MB/sec
Feb 13 15:17:46.370558 kernel: 32regs : 18423 MB/sec
Feb 13 15:17:46.370583 kernel: arm64_neon : 27079 MB/sec
Feb 13 15:17:46.370603 kernel: xor: using function: arm64_neon (27079 MB/sec)
Feb 13 15:17:46.420465 kernel: Btrfs loaded, zoned=no, fsverity=no
Feb 13 15:17:46.433042 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Feb 13 15:17:46.444696 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Feb 13 15:17:46.460551 systemd-udevd[460]: Using default interface naming scheme 'v255'.
Feb 13 15:17:46.463896 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Feb 13 15:17:46.478620 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Feb 13 15:17:46.491522 dracut-pre-trigger[467]: rd.md=0: removing MD RAID activation
Feb 13 15:17:46.520030 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Feb 13 15:17:46.529619 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Feb 13 15:17:46.577024 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Feb 13 15:17:46.588065 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Feb 13 15:17:46.596760 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Feb 13 15:17:46.598403 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Feb 13 15:17:46.601740 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Feb 13 15:17:46.603713 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Feb 13 15:17:46.608658 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Feb 13 15:17:46.620819 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Feb 13 15:17:46.625446 kernel: virtio_blk virtio1: 1/0/0 default/read/poll queues
Feb 13 15:17:46.637856 kernel: virtio_blk virtio1: [vda] 19775488 512-byte logical blocks (10.1 GB/9.43 GiB)
Feb 13 15:17:46.637972 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Feb 13 15:17:46.637984 kernel: GPT:9289727 != 19775487
Feb 13 15:17:46.637993 kernel: GPT:Alternate GPT header not at the end of the disk.
Feb 13 15:17:46.638002 kernel: GPT:9289727 != 19775487
Feb 13 15:17:46.638013 kernel: GPT: Use GNU Parted to correct GPT errors.
Feb 13 15:17:46.638023 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Feb 13 15:17:46.639332 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Feb 13 15:17:46.639464 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Feb 13 15:17:46.642799 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Feb 13 15:17:46.644113 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Feb 13 15:17:46.644254 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Feb 13 15:17:46.646500 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Feb 13 15:17:46.653695 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Feb 13 15:17:46.657456 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/vda6 scanned by (udev-worker) (509)
Feb 13 15:17:46.660464 kernel: BTRFS: device fsid 27ad543d-6fdb-4ace-b8f1-8f50b124bd06 devid 1 transid 41 /dev/vda3 scanned by (udev-worker) (513)
Feb 13 15:17:46.663035 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM.
Feb 13 15:17:46.667721 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Feb 13 15:17:46.678090 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT.
Feb 13 15:17:46.681984 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A.
Feb 13 15:17:46.682937 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132.
Feb 13 15:17:46.688101 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Feb 13 15:17:46.697591 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Feb 13 15:17:46.699171 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Feb 13 15:17:46.705818 disk-uuid[549]: Primary Header is updated.
Feb 13 15:17:46.705818 disk-uuid[549]: Secondary Entries is updated.
Feb 13 15:17:46.705818 disk-uuid[549]: Secondary Header is updated.
Feb 13 15:17:46.708440 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Feb 13 15:17:46.729855 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Feb 13 15:17:47.730458 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Feb 13 15:17:47.730541 disk-uuid[550]: The operation has completed successfully.
Feb 13 15:17:47.758041 systemd[1]: disk-uuid.service: Deactivated successfully.
Feb 13 15:17:47.758160 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Feb 13 15:17:47.779630 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
Feb 13 15:17:47.783371 sh[567]: Success
Feb 13 15:17:47.799448 kernel: device-mapper: verity: sha256 using implementation "sha256-ce"
Feb 13 15:17:47.831799 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
Feb 13 15:17:47.840876 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
Feb 13 15:17:47.844468 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
Feb 13 15:17:47.851880 kernel: BTRFS info (device dm-0): first mount of filesystem 27ad543d-6fdb-4ace-b8f1-8f50b124bd06
Feb 13 15:17:47.851916 kernel: BTRFS info (device dm-0): using crc32c (crc32c-generic) checksum algorithm
Feb 13 15:17:47.851927 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead
Feb 13 15:17:47.853724 kernel: BTRFS info (device dm-0): disabling log replay at mount time
Feb 13 15:17:47.853743 kernel: BTRFS info (device dm-0): using free space tree
Feb 13 15:17:47.857487 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
Feb 13 15:17:47.858301 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Feb 13 15:17:47.865585 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Feb 13 15:17:47.866969 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Feb 13 15:17:47.873819 kernel: BTRFS info (device vda6): first mount of filesystem e9f4fc6e-82c5-478d-829e-7273b573b643
Feb 13 15:17:47.873863 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm
Feb 13 15:17:47.873874 kernel: BTRFS info (device vda6): using free space tree
Feb 13 15:17:47.876457 kernel: BTRFS info (device vda6): auto enabling async discard
Feb 13 15:17:47.884315 systemd[1]: mnt-oem.mount: Deactivated successfully.
Feb 13 15:17:47.885925 kernel: BTRFS info (device vda6): last unmount of filesystem e9f4fc6e-82c5-478d-829e-7273b573b643
Feb 13 15:17:47.891702 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Feb 13 15:17:47.899606 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Feb 13 15:17:47.970094 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Feb 13 15:17:47.988624 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Feb 13 15:17:48.014453 systemd-networkd[759]: lo: Link UP
Feb 13 15:17:48.014465 systemd-networkd[759]: lo: Gained carrier
Feb 13 15:17:48.015314 systemd-networkd[759]: Enumeration completed
Feb 13 15:17:48.015467 systemd[1]: Started systemd-networkd.service - Network Configuration.
Feb 13 15:17:48.015996 systemd-networkd[759]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Feb 13 15:17:48.016000 systemd-networkd[759]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Feb 13 15:17:48.016788 systemd-networkd[759]: eth0: Link UP
Feb 13 15:17:48.016791 systemd-networkd[759]: eth0: Gained carrier
Feb 13 15:17:48.016797 systemd-networkd[759]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Feb 13 15:17:48.016896 systemd[1]: Reached target network.target - Network.
Feb 13 15:17:48.033637 ignition[661]: Ignition 2.20.0
Feb 13 15:17:48.033648 ignition[661]: Stage: fetch-offline
Feb 13 15:17:48.033685 ignition[661]: no configs at "/usr/lib/ignition/base.d"
Feb 13 15:17:48.033694 ignition[661]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Feb 13 15:17:48.035484 systemd-networkd[759]: eth0: DHCPv4 address 10.0.0.50/16, gateway 10.0.0.1 acquired from 10.0.0.1
Feb 13 15:17:48.033892 ignition[661]: parsed url from cmdline: ""
Feb 13 15:17:48.033895 ignition[661]: no config URL provided
Feb 13 15:17:48.033907 ignition[661]: reading system config file "/usr/lib/ignition/user.ign"
Feb 13 15:17:48.033915 ignition[661]: no config at "/usr/lib/ignition/user.ign"
Feb 13 15:17:48.033943 ignition[661]: op(1): [started] loading QEMU firmware config module
Feb 13 15:17:48.033948 ignition[661]: op(1): executing: "modprobe" "qemu_fw_cfg"
Feb 13 15:17:48.048176 ignition[661]: op(1): [finished] loading QEMU firmware config module
Feb 13 15:17:48.069843 ignition[661]: parsing config with SHA512: 89e8b7b71b367285e74bae3452bcdccf481ab180da1674529b0e668a13307366b26d3abed40efa300c622040e2e59bc29ff2755d51a571ee3ea43068e544906c
Feb 13 15:17:48.077450 unknown[661]: fetched base config from "system"
Feb 13 15:17:48.077461 unknown[661]: fetched user config from "qemu"
Feb 13 15:17:48.077934 ignition[661]: fetch-offline: fetch-offline passed
Feb 13 15:17:48.078013 ignition[661]: Ignition finished successfully
Feb 13 15:17:48.080270 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Feb 13 15:17:48.081471 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json).
Feb 13 15:17:48.090584 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Feb 13 15:17:48.101236 ignition[767]: Ignition 2.20.0
Feb 13 15:17:48.101247 ignition[767]: Stage: kargs
Feb 13 15:17:48.101409 ignition[767]: no configs at "/usr/lib/ignition/base.d"
Feb 13 15:17:48.101420 ignition[767]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Feb 13 15:17:48.102344 ignition[767]: kargs: kargs passed
Feb 13 15:17:48.102390 ignition[767]: Ignition finished successfully
Feb 13 15:17:48.104563 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Feb 13 15:17:48.115597 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Feb 13 15:17:48.125570 ignition[776]: Ignition 2.20.0
Feb 13 15:17:48.125581 ignition[776]: Stage: disks
Feb 13 15:17:48.125743 ignition[776]: no configs at "/usr/lib/ignition/base.d"
Feb 13 15:17:48.125752 ignition[776]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Feb 13 15:17:48.126760 ignition[776]: disks: disks passed
Feb 13 15:17:48.126808 ignition[776]: Ignition finished successfully
Feb 13 15:17:48.129487 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Feb 13 15:17:48.131147 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Feb 13 15:17:48.133548 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Feb 13 15:17:48.134400 systemd[1]: Reached target local-fs.target - Local File Systems.
Feb 13 15:17:48.135152 systemd[1]: Reached target sysinit.target - System Initialization.
Feb 13 15:17:48.136500 systemd[1]: Reached target basic.target - Basic System.
Feb 13 15:17:48.147575 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Feb 13 15:17:48.158692 systemd-fsck[787]: ROOT: clean, 14/553520 files, 52654/553472 blocks
Feb 13 15:17:48.162679 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Feb 13 15:17:48.174576 systemd[1]: Mounting sysroot.mount - /sysroot...
Feb 13 15:17:48.222449 kernel: EXT4-fs (vda9): mounted filesystem b8d8a7c2-9667-48db-9266-035fd118dfdf r/w with ordered data mode. Quota mode: none.
Feb 13 15:17:48.223071 systemd[1]: Mounted sysroot.mount - /sysroot.
Feb 13 15:17:48.224289 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Feb 13 15:17:48.237518 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Feb 13 15:17:48.239062 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Feb 13 15:17:48.240226 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met.
Feb 13 15:17:48.240325 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Feb 13 15:17:48.240352 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Feb 13 15:17:48.246623 kernel: BTRFS: device label OEM devid 1 transid 15 /dev/vda6 scanned by mount (796)
Feb 13 15:17:48.246362 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Feb 13 15:17:48.250154 kernel: BTRFS info (device vda6): first mount of filesystem e9f4fc6e-82c5-478d-829e-7273b573b643
Feb 13 15:17:48.250173 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm
Feb 13 15:17:48.250183 kernel: BTRFS info (device vda6): using free space tree
Feb 13 15:17:48.249076 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Feb 13 15:17:48.252519 kernel: BTRFS info (device vda6): auto enabling async discard
Feb 13 15:17:48.254165 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Feb 13 15:17:48.291139 initrd-setup-root[820]: cut: /sysroot/etc/passwd: No such file or directory
Feb 13 15:17:48.295573 initrd-setup-root[827]: cut: /sysroot/etc/group: No such file or directory
Feb 13 15:17:48.299764 initrd-setup-root[834]: cut: /sysroot/etc/shadow: No such file or directory
Feb 13 15:17:48.303300 initrd-setup-root[841]: cut: /sysroot/etc/gshadow: No such file or directory
Feb 13 15:17:48.379596 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Feb 13 15:17:48.388544 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Feb 13 15:17:48.389888 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Feb 13 15:17:48.394447 kernel: BTRFS info (device vda6): last unmount of filesystem e9f4fc6e-82c5-478d-829e-7273b573b643
Feb 13 15:17:48.410591 ignition[909]: INFO : Ignition 2.20.0
Feb 13 15:17:48.410591 ignition[909]: INFO : Stage: mount
Feb 13 15:17:48.413346 ignition[909]: INFO : no configs at "/usr/lib/ignition/base.d"
Feb 13 15:17:48.413346 ignition[909]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Feb 13 15:17:48.413346 ignition[909]: INFO : mount: mount passed
Feb 13 15:17:48.413346 ignition[909]: INFO : Ignition finished successfully
Feb 13 15:17:48.412209 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Feb 13 15:17:48.414322 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Feb 13 15:17:48.426601 systemd[1]: Starting ignition-files.service - Ignition (files)...
Feb 13 15:17:48.851337 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Feb 13 15:17:48.863625 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Feb 13 15:17:48.869767 kernel: BTRFS: device label OEM devid 1 transid 16 /dev/vda6 scanned by mount (922)
Feb 13 15:17:48.869809 kernel: BTRFS info (device vda6): first mount of filesystem e9f4fc6e-82c5-478d-829e-7273b573b643
Feb 13 15:17:48.869821 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm
Feb 13 15:17:48.870451 kernel: BTRFS info (device vda6): using free space tree
Feb 13 15:17:48.875383 kernel: BTRFS info (device vda6): auto enabling async discard
Feb 13 15:17:48.873996 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Feb 13 15:17:48.893635 ignition[939]: INFO : Ignition 2.20.0
Feb 13 15:17:48.893635 ignition[939]: INFO : Stage: files
Feb 13 15:17:48.894950 ignition[939]: INFO : no configs at "/usr/lib/ignition/base.d"
Feb 13 15:17:48.894950 ignition[939]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Feb 13 15:17:48.894950 ignition[939]: DEBUG : files: compiled without relabeling support, skipping
Feb 13 15:17:48.900929 ignition[939]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Feb 13 15:17:48.900929 ignition[939]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Feb 13 15:17:48.903225 ignition[939]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Feb 13 15:17:48.904256 ignition[939]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Feb 13 15:17:48.904256 ignition[939]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Feb 13 15:17:48.903814 unknown[939]: wrote ssh authorized keys file for user: core
Feb 13 15:17:48.906990 ignition[939]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/etc/flatcar-cgroupv1"
Feb 13 15:17:48.906990 ignition[939]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/etc/flatcar-cgroupv1"
Feb 13 15:17:48.906990 ignition[939]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-arm64.tar.gz"
Feb 13 15:17:48.906990 ignition[939]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://get.helm.sh/helm-v3.13.2-linux-arm64.tar.gz: attempt #1
Feb 13 15:17:48.947996 ignition[939]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK
Feb 13 15:17:49.062401 ignition[939]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-arm64.tar.gz"
Feb 13 15:17:49.062401 ignition[939]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/install.sh"
Feb 13 15:17:49.065234 ignition[939]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh"
Feb 13 15:17:49.065234 ignition[939]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nginx.yaml"
Feb 13 15:17:49.065234 ignition[939]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml"
Feb 13 15:17:49.065234 ignition[939]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Feb 13 15:17:49.065234 ignition[939]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Feb 13 15:17:49.065234 ignition[939]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Feb 13 15:17:49.065234 ignition[939]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Feb 13 15:17:49.065234 ignition[939]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf"
Feb 13 15:17:49.065234 ignition[939]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Feb 13 15:17:49.065234 ignition[939]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.29.2-arm64.raw"
Feb 13 15:17:49.065234 ignition[939]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.29.2-arm64.raw"
Feb 13 15:17:49.065234 ignition[939]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.29.2-arm64.raw"
Feb 13 15:17:49.065234 ignition[939]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.29.2-arm64.raw: attempt #1
Feb 13 15:17:49.319254 ignition[939]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET result: OK
Feb 13 15:17:49.557595 ignition[939]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.29.2-arm64.raw"
Feb 13 15:17:49.557595 ignition[939]: INFO : files: op(c): [started] processing unit "containerd.service"
Feb 13 15:17:49.560467 ignition[939]: INFO : files: op(c): op(d): [started] writing systemd drop-in "10-use-cgroupfs.conf" at "/sysroot/etc/systemd/system/containerd.service.d/10-use-cgroupfs.conf"
Feb 13 15:17:49.562347 ignition[939]: INFO : files: op(c): op(d): [finished] writing systemd drop-in "10-use-cgroupfs.conf" at "/sysroot/etc/systemd/system/containerd.service.d/10-use-cgroupfs.conf"
Feb 13 15:17:49.562347 ignition[939]: INFO : files: op(c): [finished] processing unit "containerd.service"
Feb 13 15:17:49.562347 ignition[939]: INFO : files: op(e): [started] processing unit "prepare-helm.service"
Feb 13 15:17:49.562347 ignition[939]: INFO : files: op(e): op(f): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Feb 13 15:17:49.562347 ignition[939]: INFO : files: op(e): op(f): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Feb 13 15:17:49.562347 ignition[939]: INFO : files: op(e): [finished] processing unit "prepare-helm.service"
Feb 13 15:17:49.562347 ignition[939]: INFO : files: op(10): [started] processing unit "coreos-metadata.service"
Feb 13 15:17:49.562347 ignition[939]: INFO : files: op(10): op(11): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Feb 13 15:17:49.562347 ignition[939]: INFO : files: op(10): op(11): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Feb 13 15:17:49.562347 ignition[939]: INFO : files: op(10): [finished] processing unit "coreos-metadata.service"
Feb 13 15:17:49.562347 ignition[939]: INFO : files: op(12): [started] setting preset to disabled for "coreos-metadata.service"
Feb 13 15:17:49.592874 ignition[939]: INFO : files: op(12): op(13): [started] removing enablement symlink(s) for "coreos-metadata.service"
Feb 13 15:17:49.597924 ignition[939]: INFO : files: op(12): op(13): [finished] removing enablement symlink(s) for "coreos-metadata.service"
Feb 13 15:17:49.599306 ignition[939]: INFO : files: op(12): [finished] setting preset to disabled for "coreos-metadata.service"
Feb 13 15:17:49.599306 ignition[939]: INFO : files: op(14): [started] setting preset to enabled for "prepare-helm.service"
Feb 13 15:17:49.599306 ignition[939]: INFO : files: op(14): [finished] setting preset to enabled for "prepare-helm.service"
Feb 13 15:17:49.599306 ignition[939]: INFO : files: createResultFile: createFiles: op(15): [started] writing file "/sysroot/etc/.ignition-result.json"
Feb 13 15:17:49.599306 ignition[939]: INFO : files: createResultFile: createFiles: op(15): [finished] writing file "/sysroot/etc/.ignition-result.json"
Feb 13 15:17:49.599306 ignition[939]: INFO : files: files passed
Feb 13 15:17:49.599306 ignition[939]: INFO : Ignition finished successfully
Feb 13 15:17:49.600690 systemd[1]: Finished ignition-files.service - Ignition (files).
Feb 13 15:17:49.608621 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Feb 13 15:17:49.610380 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Feb 13 15:17:49.613629 systemd[1]: ignition-quench.service: Deactivated successfully.
Feb 13 15:17:49.613721 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Feb 13 15:17:49.618856 initrd-setup-root-after-ignition[967]: grep: /sysroot/oem/oem-release: No such file or directory
Feb 13 15:17:49.622289 initrd-setup-root-after-ignition[969]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Feb 13 15:17:49.622289 initrd-setup-root-after-ignition[969]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Feb 13 15:17:49.624466 initrd-setup-root-after-ignition[973]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Feb 13 15:17:49.627238 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Feb 13 15:17:49.628403 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Feb 13 15:17:49.642619 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Feb 13 15:17:49.665324 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Feb 13 15:17:49.665466 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Feb 13 15:17:49.667278 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Feb 13 15:17:49.668804 systemd[1]: Reached target initrd.target - Initrd Default Target.
Feb 13 15:17:49.670250 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Feb 13 15:17:49.671086 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Feb 13 15:17:49.690518 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Feb 13 15:17:49.692753 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Feb 13 15:17:49.704588 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Feb 13 15:17:49.705588 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Feb 13 15:17:49.707277 systemd[1]: Stopped target timers.target - Timer Units.
Feb 13 15:17:49.708800 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Feb 13 15:17:49.708931 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Feb 13 15:17:49.711089 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Feb 13 15:17:49.712631 systemd[1]: Stopped target basic.target - Basic System.
Feb 13 15:17:49.713980 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Feb 13 15:17:49.715367 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Feb 13 15:17:49.716991 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Feb 13 15:17:49.718504 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Feb 13 15:17:49.719981 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
Feb 13 15:17:49.721505 systemd[1]: Stopped target sysinit.target - System Initialization.
Feb 13 15:17:49.723079 systemd[1]: Stopped target local-fs.target - Local File Systems.
Feb 13 15:17:49.724412 systemd[1]: Stopped target swap.target - Swaps.
Feb 13 15:17:49.725611 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Feb 13 15:17:49.725735 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
Feb 13 15:17:49.727552 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
Feb 13 15:17:49.729065 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Feb 13 15:17:49.730579 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
Feb 13 15:17:49.731514 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Feb 13 15:17:49.732967 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Feb 13 15:17:49.733078 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
Feb 13 15:17:49.735295 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Feb 13 15:17:49.735408 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
Feb 13 15:17:49.736999 systemd[1]: Stopped target paths.target - Path Units.
Feb 13 15:17:49.738180 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Feb 13 15:17:49.742484 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Feb 13 15:17:49.743498 systemd[1]: Stopped target slices.target - Slice Units.
Feb 13 15:17:49.745207 systemd[1]: Stopped target sockets.target - Socket Units.
Feb 13 15:17:49.746463 systemd[1]: iscsid.socket: Deactivated successfully.
Feb 13 15:17:49.746554 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
Feb 13 15:17:49.747775 systemd[1]: iscsiuio.socket: Deactivated successfully.
Feb 13 15:17:49.747852 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Feb 13 15:17:49.749064 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Feb 13 15:17:49.749168 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
Feb 13 15:17:49.750525 systemd[1]: ignition-files.service: Deactivated successfully.
Feb 13 15:17:49.750623 systemd[1]: Stopped ignition-files.service - Ignition (files).
Feb 13 15:17:49.758681 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
Feb 13 15:17:49.761526 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
Feb 13 15:17:49.763001 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Feb 13 15:17:49.764048 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
Feb 13 15:17:49.765077 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Feb 13 15:17:49.765187 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
Feb 13 15:17:49.772015 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Feb 13 15:17:49.772113 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
Feb 13 15:17:49.776507 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Feb 13 15:17:49.780053 ignition[993]: INFO : Ignition 2.20.0
Feb 13 15:17:49.789034 ignition[993]: INFO : Stage: umount
Feb 13 15:17:49.789034 ignition[993]: INFO : no configs at "/usr/lib/ignition/base.d"
Feb 13 15:17:49.789034 ignition[993]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Feb 13 15:17:49.789034 ignition[993]: INFO : umount: umount passed
Feb 13 15:17:49.789034 ignition[993]: INFO : Ignition finished successfully
Feb 13 15:17:49.790417 systemd[1]: ignition-mount.service: Deactivated successfully.
Feb 13 15:17:49.790538 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
Feb 13 15:17:49.792028 systemd[1]: Stopped target network.target - Network.
Feb 13 15:17:49.793072 systemd[1]: ignition-disks.service: Deactivated successfully.
Feb 13 15:17:49.793137 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
Feb 13 15:17:49.793866 systemd[1]: ignition-kargs.service: Deactivated successfully.
Feb 13 15:17:49.793912 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
Feb 13 15:17:49.794728 systemd[1]: ignition-setup.service: Deactivated successfully.
Feb 13 15:17:49.794768 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
Feb 13 15:17:49.796077 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
Feb 13 15:17:49.796119 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
Feb 13 15:17:49.797574 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
Feb 13 15:17:49.799072 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
Feb 13 15:17:49.808598 systemd[1]: systemd-resolved.service: Deactivated successfully.
Feb 13 15:17:49.809462 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
Feb 13 15:17:49.809519 systemd-networkd[759]: eth0: DHCPv6 lease lost
Feb 13 15:17:49.811269 systemd[1]: systemd-networkd.service: Deactivated successfully.
Feb 13 15:17:49.811491 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
Feb 13 15:17:49.813796 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Feb 13 15:17:49.813847 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
Feb 13 15:17:49.832593 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
Feb 13 15:17:49.833259 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Feb 13 15:17:49.833326 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Feb 13 15:17:49.834769 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Feb 13 15:17:49.834810 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Feb 13 15:17:49.836471 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Feb 13 15:17:49.836517 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
Feb 13 15:17:49.838124 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Feb 13 15:17:49.838166 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
Feb 13 15:17:49.839861 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
Feb 13 15:17:49.843026 systemd[1]: sysroot-boot.service: Deactivated successfully.
Feb 13 15:17:49.843119 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
Feb 13 15:17:49.847246 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Feb 13 15:17:49.847350 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
Feb 13 15:17:49.851777 systemd[1]: network-cleanup.service: Deactivated successfully.
Feb 13 15:17:49.851940 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
Feb 13 15:17:49.874255 systemd[1]: systemd-udevd.service: Deactivated successfully.
Feb 13 15:17:49.874441 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
Feb 13 15:17:49.876234 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Feb 13 15:17:49.876276 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
Feb 13 15:17:49.877343 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Feb 13 15:17:49.877374 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
Feb 13 15:17:49.878622 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Feb 13 15:17:49.878667 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
Feb 13 15:17:49.880582 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Feb 13 15:17:49.880624 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
Feb 13 15:17:49.882483 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Feb 13 15:17:49.882525 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Feb 13 15:17:49.899630 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
Feb 13 15:17:49.900455 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Feb 13 15:17:49.900519 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Feb 13 15:17:49.904863 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully.
Feb 13 15:17:49.904935 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Feb 13 15:17:49.906355 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Feb 13 15:17:49.906405 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
Feb 13 15:17:49.907330 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Feb 13 15:17:49.907371 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Feb 13 15:17:49.908528 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Feb 13 15:17:49.909485 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
Feb 13 15:17:49.911219 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
Feb 13 15:17:49.913545 systemd[1]: Starting initrd-switch-root.service - Switch Root...
Feb 13 15:17:49.924135 systemd[1]: Switching root.
Feb 13 15:17:49.956599 systemd-journald[239]: Journal stopped
Feb 13 15:17:50.769820 systemd-journald[239]: Received SIGTERM from PID 1 (systemd).
Feb 13 15:17:50.769883 kernel: SELinux: policy capability network_peer_controls=1
Feb 13 15:17:50.769897 kernel: SELinux: policy capability open_perms=1
Feb 13 15:17:50.769907 kernel: SELinux: policy capability extended_socket_class=1
Feb 13 15:17:50.769920 kernel: SELinux: policy capability always_check_network=0
Feb 13 15:17:50.769930 kernel: SELinux: policy capability cgroup_seclabel=1
Feb 13 15:17:50.769945 kernel: SELinux: policy capability nnp_nosuid_transition=1
Feb 13 15:17:50.769955 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
Feb 13 15:17:50.769964 kernel: SELinux: policy capability ioctl_skip_cloexec=0
Feb 13 15:17:50.769974 kernel: audit: type=1403 audit(1739459870.150:2): auid=4294967295 ses=4294967295 lsm=selinux res=1
Feb 13 15:17:50.769985 systemd[1]: Successfully loaded SELinux policy in 34.503ms.
Feb 13 15:17:50.770001 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 10.458ms.
Feb 13 15:17:50.770014 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Feb 13 15:17:50.770025 systemd[1]: Detected virtualization kvm.
Feb 13 15:17:50.770035 systemd[1]: Detected architecture arm64.
Feb 13 15:17:50.770046 systemd[1]: Detected first boot.
Feb 13 15:17:50.770056 systemd[1]: Initializing machine ID from VM UUID.
Feb 13 15:17:50.770067 zram_generator::config[1056]: No configuration found.
Feb 13 15:17:50.770078 systemd[1]: Populated /etc with preset unit settings.
Feb 13 15:17:50.770089 systemd[1]: Queued start job for default target multi-user.target.
Feb 13 15:17:50.770099 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6.
Feb 13 15:17:50.770113 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
Feb 13 15:17:50.770124 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
Feb 13 15:17:50.770134 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
Feb 13 15:17:50.770144 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
Feb 13 15:17:50.770155 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
Feb 13 15:17:50.770165 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
Feb 13 15:17:50.770177 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
Feb 13 15:17:50.770187 systemd[1]: Created slice user.slice - User and Session Slice.
Feb 13 15:17:50.770199 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Feb 13 15:17:50.770210 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Feb 13 15:17:50.770220 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
Feb 13 15:17:50.770231 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
Feb 13 15:17:50.770241 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
Feb 13 15:17:50.770252 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Feb 13 15:17:50.770262 systemd[1]: Expecting device dev-ttyAMA0.device - /dev/ttyAMA0...
Feb 13 15:17:50.770273 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Feb 13 15:17:50.770283 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
Feb 13 15:17:50.770295 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Feb 13 15:17:50.770305 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Feb 13 15:17:50.770316 systemd[1]: Reached target slices.target - Slice Units.
Feb 13 15:17:50.770327 systemd[1]: Reached target swap.target - Swaps.
Feb 13 15:17:50.770337 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
Feb 13 15:17:50.770348 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
Feb 13 15:17:50.770358 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Feb 13 15:17:50.770368 systemd[1]: Listening on systemd-journald.socket - Journal Socket.
Feb 13 15:17:50.770380 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Feb 13 15:17:50.770391 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Feb 13 15:17:50.770403 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Feb 13 15:17:50.770413 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
Feb 13 15:17:50.770449 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
Feb 13 15:17:50.770464 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
Feb 13 15:17:50.770477 systemd[1]: Mounting media.mount - External Media Directory...
Feb 13 15:17:50.770489 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
Feb 13 15:17:50.770502 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
Feb 13 15:17:50.770516 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
Feb 13 15:17:50.770528 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
Feb 13 15:17:50.770541 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Feb 13 15:17:50.770553 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Feb 13 15:17:50.770565 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
Feb 13 15:17:50.770578 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Feb 13 15:17:50.770590 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Feb 13 15:17:50.770602 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Feb 13 15:17:50.770618 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
Feb 13 15:17:50.770632 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Feb 13 15:17:50.770645 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Feb 13 15:17:50.770657 systemd[1]: systemd-journald.service: unit configures an IP firewall, but the local system does not support BPF/cgroup firewalling.
Feb 13 15:17:50.770670 systemd[1]: systemd-journald.service: (This warning is only shown for the first unit using IP firewalling.)
Feb 13 15:17:50.770682 systemd[1]: Starting systemd-journald.service - Journal Service...
Feb 13 15:17:50.770696 kernel: loop: module loaded
Feb 13 15:17:50.770707 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Feb 13 15:17:50.770719 kernel: fuse: init (API version 7.39)
Feb 13 15:17:50.770731 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Feb 13 15:17:50.770744 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
Feb 13 15:17:50.770757 kernel: ACPI: bus type drm_connector registered
Feb 13 15:17:50.770768 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Feb 13 15:17:50.770780 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
Feb 13 15:17:50.770793 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
Feb 13 15:17:50.770805 systemd[1]: Mounted media.mount - External Media Directory.
Feb 13 15:17:50.770818 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
Feb 13 15:17:50.770830 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System.
Feb 13 15:17:50.770866 systemd-journald[1137]: Collecting audit messages is disabled.
Feb 13 15:17:50.770897 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp.
Feb 13 15:17:50.770910 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Feb 13 15:17:50.770921 systemd-journald[1137]: Journal started
Feb 13 15:17:50.770943 systemd-journald[1137]: Runtime Journal (/run/log/journal/b23277a3e33c426693a70cbfb08c15ab) is 5.9M, max 47.3M, 41.4M free.
Feb 13 15:17:50.773456 systemd[1]: Started systemd-journald.service - Journal Service.
Feb 13 15:17:50.775235 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files.
Feb 13 15:17:50.776391 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Feb 13 15:17:50.776587 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
Feb 13 15:17:50.778150 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Feb 13 15:17:50.778319 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Feb 13 15:17:50.779417 systemd[1]: modprobe@drm.service: Deactivated successfully.
Feb 13 15:17:50.779593 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Feb 13 15:17:50.780657 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Feb 13 15:17:50.780816 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Feb 13 15:17:50.782238 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Feb 13 15:17:50.782405 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse.
Feb 13 15:17:50.783457 systemd[1]: modprobe@loop.service: Deactivated successfully.
Feb 13 15:17:50.783676 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Feb 13 15:17:50.784815 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Feb 13 15:17:50.786408 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Feb 13 15:17:50.787652 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems.
Feb 13 15:17:50.799598 systemd[1]: Reached target network-pre.target - Preparation for Network.
Feb 13 15:17:50.811553 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System...
Feb 13 15:17:50.813792 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System...
Feb 13 15:17:50.814682 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Feb 13 15:17:50.818417 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database...
Feb 13 15:17:50.821202 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage...
Feb 13 15:17:50.822498 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Feb 13 15:17:50.824147 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed...
Feb 13 15:17:50.825064 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Feb 13 15:17:50.826548 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Feb 13 15:17:50.828681 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Feb 13 15:17:50.831205 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Feb 13 15:17:50.832368 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System.
Feb 13 15:17:50.833301 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System.
Feb 13 15:17:50.836184 systemd-journald[1137]: Time spent on flushing to /var/log/journal/b23277a3e33c426693a70cbfb08c15ab is 12.526ms for 848 entries.
Feb 13 15:17:50.836184 systemd-journald[1137]: System Journal (/var/log/journal/b23277a3e33c426693a70cbfb08c15ab) is 8.0M, max 195.6M, 187.6M free.
Feb 13 15:17:50.976624 systemd-journald[1137]: Received client request to flush runtime journal.
Feb 13 15:17:50.843069 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization...
Feb 13 15:17:50.856931 udevadm[1194]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation-early.service, lvm2-activation.service not to pull it in.
Feb 13 15:17:50.860354 systemd-tmpfiles[1190]: ACLs are not supported, ignoring.
Feb 13 15:17:50.860365 systemd-tmpfiles[1190]: ACLs are not supported, ignoring.
Feb 13 15:17:50.861056 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Feb 13 15:17:50.864190 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Feb 13 15:17:50.882721 systemd[1]: Starting systemd-sysusers.service - Create System Users...
Feb 13 15:17:50.902753 systemd[1]: Finished systemd-sysusers.service - Create System Users.
Feb 13 15:17:50.905071 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Feb 13 15:17:50.921616 systemd-tmpfiles[1203]: ACLs are not supported, ignoring.
Feb 13 15:17:50.921628 systemd-tmpfiles[1203]: ACLs are not supported, ignoring.
Feb 13 15:17:50.925761 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Feb 13 15:17:50.929747 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed.
Feb 13 15:17:50.931293 systemd[1]: Reached target first-boot-complete.target - First Boot Complete.
Feb 13 15:17:50.978023 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage.
Feb 13 15:17:51.323820 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database.
Feb 13 15:17:51.340631 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Feb 13 15:17:51.365771 systemd-udevd[1218]: Using default interface naming scheme 'v255'.
Feb 13 15:17:51.382636 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Feb 13 15:17:51.391619 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Feb 13 15:17:51.404631 systemd[1]: Starting systemd-userdbd.service - User Database Manager...
Feb 13 15:17:51.413240 systemd[1]: Found device dev-ttyAMA0.device - /dev/ttyAMA0.
Feb 13 15:17:51.425476 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 41 scanned by (udev-worker) (1226)
Feb 13 15:17:51.468687 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Feb 13 15:17:51.470086 systemd[1]: Started systemd-userdbd.service - User Database Manager.
Feb 13 15:17:51.534530 systemd-networkd[1223]: lo: Link UP
Feb 13 15:17:51.534543 systemd-networkd[1223]: lo: Gained carrier
Feb 13 15:17:51.535334 systemd-networkd[1223]: Enumeration completed
Feb 13 15:17:51.535789 systemd-networkd[1223]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Feb 13 15:17:51.535792 systemd-networkd[1223]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Feb 13 15:17:51.536722 systemd-networkd[1223]: eth0: Link UP
Feb 13 15:17:51.536726 systemd-networkd[1223]: eth0: Gained carrier
Feb 13 15:17:51.536741 systemd-networkd[1223]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Feb 13 15:17:51.545964 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Feb 13 15:17:51.546935 systemd[1]: Started systemd-networkd.service - Network Configuration.
Feb 13 15:17:51.549648 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured...
Feb 13 15:17:51.556523 systemd-networkd[1223]: eth0: DHCPv4 address 10.0.0.50/16, gateway 10.0.0.1 acquired from 10.0.0.1
Feb 13 15:17:51.564493 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization.
Feb 13 15:17:51.577658 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes...
Feb 13 15:17:51.594601 lvm[1257]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Feb 13 15:17:51.595331 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Feb 13 15:17:51.618510 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes.
Feb 13 15:17:51.620267 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Feb 13 15:17:51.630646 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes...
Feb 13 15:17:51.635708 lvm[1264]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Feb 13 15:17:51.666004 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes.
Feb 13 15:17:51.667168 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Feb 13 15:17:51.668133 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Feb 13 15:17:51.668160 systemd[1]: Reached target local-fs.target - Local File Systems.
Feb 13 15:17:51.668917 systemd[1]: Reached target machines.target - Containers.
Feb 13 15:17:51.670677 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink).
Feb 13 15:17:51.684628 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown...
Feb 13 15:17:51.686945 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache...
Feb 13 15:17:51.687823 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Feb 13 15:17:51.689007 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM...
Feb 13 15:17:51.691378 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk...
Feb 13 15:17:51.693990 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/...
Feb 13 15:17:51.695812 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown.
Feb 13 15:17:51.704638 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM.
Feb 13 15:17:51.713491 kernel: loop0: detected capacity change from 0 to 113536
Feb 13 15:17:51.718774 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
Feb 13 15:17:51.719731 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk.
Feb 13 15:17:51.723462 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
Feb 13 15:17:51.763468 kernel: loop1: detected capacity change from 0 to 116808
Feb 13 15:17:51.811482 kernel: loop2: detected capacity change from 0 to 194512
Feb 13 15:17:51.864464 kernel: loop3: detected capacity change from 0 to 113536
Feb 13 15:17:51.873459 kernel: loop4: detected capacity change from 0 to 116808
Feb 13 15:17:51.880469 kernel: loop5: detected capacity change from 0 to 194512
Feb 13 15:17:51.891739 (sd-merge)[1285]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes'.
Feb 13 15:17:51.892303 (sd-merge)[1285]: Merged extensions into '/usr'.
Feb 13 15:17:51.896448 systemd[1]: Reloading requested from client PID 1272 ('systemd-sysext') (unit systemd-sysext.service)...
Feb 13 15:17:51.896467 systemd[1]: Reloading...
Feb 13 15:17:51.938468 zram_generator::config[1313]: No configuration found.
Feb 13 15:17:51.969176 ldconfig[1269]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
Feb 13 15:17:52.041018 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Feb 13 15:17:52.084308 systemd[1]: Reloading finished in 187 ms.
Feb 13 15:17:52.098553 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache.
Feb 13 15:17:52.099784 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/.
Feb 13 15:17:52.119634 systemd[1]: Starting ensure-sysext.service...
Feb 13 15:17:52.121677 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Feb 13 15:17:52.127028 systemd[1]: Reloading requested from client PID 1354 ('systemctl') (unit ensure-sysext.service)...
Feb 13 15:17:52.127044 systemd[1]: Reloading...
Feb 13 15:17:52.142177 systemd-tmpfiles[1355]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
Feb 13 15:17:52.142460 systemd-tmpfiles[1355]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring.
Feb 13 15:17:52.143112 systemd-tmpfiles[1355]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
Feb 13 15:17:52.143331 systemd-tmpfiles[1355]: ACLs are not supported, ignoring.
Feb 13 15:17:52.143387 systemd-tmpfiles[1355]: ACLs are not supported, ignoring.
Feb 13 15:17:52.145956 systemd-tmpfiles[1355]: Detected autofs mount point /boot during canonicalization of boot.
Feb 13 15:17:52.145972 systemd-tmpfiles[1355]: Skipping /boot
Feb 13 15:17:52.153367 systemd-tmpfiles[1355]: Detected autofs mount point /boot during canonicalization of boot.
Feb 13 15:17:52.153381 systemd-tmpfiles[1355]: Skipping /boot
Feb 13 15:17:52.171588 zram_generator::config[1387]: No configuration found.
Feb 13 15:17:52.266724 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Feb 13 15:17:52.311453 systemd[1]: Reloading finished in 184 ms.
Feb 13 15:17:52.326739 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Feb 13 15:17:52.342897 systemd[1]: Starting audit-rules.service - Load Audit Rules...
Feb 13 15:17:52.345590 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs...
Feb 13 15:17:52.348158 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog...
Feb 13 15:17:52.352366 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Feb 13 15:17:52.357555 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP...
Feb 13 15:17:52.362749 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Feb 13 15:17:52.366666 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Feb 13 15:17:52.371759 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Feb 13 15:17:52.379701 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Feb 13 15:17:52.380687 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Feb 13 15:17:52.383523 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Feb 13 15:17:52.383699 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Feb 13 15:17:52.385098 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Feb 13 15:17:52.385249 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Feb 13 15:17:52.390467 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog.
Feb 13 15:17:52.393249 systemd[1]: modprobe@loop.service: Deactivated successfully.
Feb 13 15:17:52.393960 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Feb 13 15:17:52.400625 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP.
Feb 13 15:17:52.407306 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Feb 13 15:17:52.416919 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Feb 13 15:17:52.426953 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Feb 13 15:17:52.431727 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Feb 13 15:17:52.438844 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Feb 13 15:17:52.439801 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Feb 13 15:17:52.442200 systemd[1]: Starting systemd-update-done.service - Update is Completed...
Feb 13 15:17:52.444064 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs.
Feb 13 15:17:52.445731 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Feb 13 15:17:52.445910 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Feb 13 15:17:52.447298 systemd[1]: modprobe@drm.service: Deactivated successfully.
Feb 13 15:17:52.447467 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Feb 13 15:17:52.448743 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Feb 13 15:17:52.448914 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Feb 13 15:17:52.450548 systemd[1]: modprobe@loop.service: Deactivated successfully.
Feb 13 15:17:52.450782 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Feb 13 15:17:52.454950 augenrules[1476]: No rules
Feb 13 15:17:52.455166 systemd[1]: Finished ensure-sysext.service.
Feb 13 15:17:52.456552 systemd[1]: Finished systemd-update-done.service - Update is Completed.
Feb 13 15:17:52.457714 systemd[1]: audit-rules.service: Deactivated successfully.
Feb 13 15:17:52.457982 systemd[1]: Finished audit-rules.service - Load Audit Rules.
Feb 13 15:17:52.463662 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Feb 13 15:17:52.463737 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Feb 13 15:17:52.472280 systemd-resolved[1429]: Positive Trust Anchors:
Feb 13 15:17:52.472355 systemd-resolved[1429]: .
IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Feb 13 15:17:52.472387 systemd-resolved[1429]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Feb 13 15:17:52.474718 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization...
Feb 13 15:17:52.475558 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Feb 13 15:17:52.480004 systemd-resolved[1429]: Defaulting to hostname 'linux'.
Feb 13 15:17:52.484940 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Feb 13 15:17:52.485932 systemd[1]: Reached target network.target - Network.
Feb 13 15:17:52.486799 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Feb 13 15:17:52.519158 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization.
Feb 13 15:17:52.520162 systemd-timesyncd[1492]: Contacted time server 10.0.0.1:123 (10.0.0.1).
Feb 13 15:17:52.520463 systemd[1]: Reached target sysinit.target - System Initialization.
Feb 13 15:17:52.520521 systemd-timesyncd[1492]: Initial clock synchronization to Thu 2025-02-13 15:17:52.495760 UTC.
Feb 13 15:17:52.521284 systemd[1]: Started motdgen.path - Watch for update engine configuration changes.
Feb 13 15:17:52.522226 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data.
Feb 13 15:17:52.523155 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories.
Feb 13 15:17:52.524083 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate).
Feb 13 15:17:52.524116 systemd[1]: Reached target paths.target - Path Units.
Feb 13 15:17:52.524804 systemd[1]: Reached target time-set.target - System Time Set.
Feb 13 15:17:52.525741 systemd[1]: Started logrotate.timer - Daily rotation of log files.
Feb 13 15:17:52.526652 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information..
Feb 13 15:17:52.527629 systemd[1]: Reached target timers.target - Timer Units.
Feb 13 15:17:52.528828 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket.
Feb 13 15:17:52.531330 systemd[1]: Starting docker.socket - Docker Socket for the API...
Feb 13 15:17:52.533225 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket.
Feb 13 15:17:52.543509 systemd[1]: Listening on docker.socket - Docker Socket for the API.
Feb 13 15:17:52.544327 systemd[1]: Reached target sockets.target - Socket Units.
Feb 13 15:17:52.545054 systemd[1]: Reached target basic.target - Basic System.
Feb 13 15:17:52.545906 systemd[1]: System is tainted: cgroupsv1
Feb 13 15:17:52.545954 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met.
Feb 13 15:17:52.545975 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met.
Feb 13 15:17:52.547409 systemd[1]: Starting containerd.service - containerd container runtime...
Feb 13 15:17:52.549639 systemd[1]: Starting dbus.service - D-Bus System Message Bus...
Feb 13 15:17:52.551794 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Feb 13 15:17:52.556637 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Feb 13 15:17:52.557554 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Feb 13 15:17:52.560145 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Feb 13 15:17:52.568589 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Feb 13 15:17:52.572023 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Feb 13 15:17:52.574979 jq[1498]: false Feb 13 15:17:52.576846 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Feb 13 15:17:52.590212 systemd[1]: Starting systemd-logind.service - User Login Management... Feb 13 15:17:52.594006 dbus-daemon[1497]: [system] SELinux support is enabled Feb 13 15:17:52.599909 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). 
Feb 13 15:17:52.600954 extend-filesystems[1500]: Found loop3 Feb 13 15:17:52.600954 extend-filesystems[1500]: Found loop4 Feb 13 15:17:52.600954 extend-filesystems[1500]: Found loop5 Feb 13 15:17:52.600954 extend-filesystems[1500]: Found vda Feb 13 15:17:52.600954 extend-filesystems[1500]: Found vda1 Feb 13 15:17:52.600954 extend-filesystems[1500]: Found vda2 Feb 13 15:17:52.600954 extend-filesystems[1500]: Found vda3 Feb 13 15:17:52.600954 extend-filesystems[1500]: Found usr Feb 13 15:17:52.600954 extend-filesystems[1500]: Found vda4 Feb 13 15:17:52.600954 extend-filesystems[1500]: Found vda6 Feb 13 15:17:52.600954 extend-filesystems[1500]: Found vda7 Feb 13 15:17:52.600954 extend-filesystems[1500]: Found vda9 Feb 13 15:17:52.600954 extend-filesystems[1500]: Checking size of /dev/vda9 Feb 13 15:17:52.632325 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 1864699 blocks Feb 13 15:17:52.612842 systemd[1]: Starting update-engine.service - Update Engine... Feb 13 15:17:52.632525 extend-filesystems[1500]: Resized partition /dev/vda9 Feb 13 15:17:52.617326 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Feb 13 15:17:52.647474 extend-filesystems[1523]: resize2fs 1.47.1 (20-May-2024) Feb 13 15:17:52.653352 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 41 scanned by (udev-worker) (1228) Feb 13 15:17:52.619442 systemd[1]: Started dbus.service - D-Bus System Message Bus. Feb 13 15:17:52.628298 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Feb 13 15:17:52.653658 jq[1525]: true Feb 13 15:17:52.628585 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Feb 13 15:17:52.628883 systemd[1]: motdgen.service: Deactivated successfully. Feb 13 15:17:52.629159 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Feb 13 15:17:52.637415 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. 
Feb 13 15:17:52.637684 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Feb 13 15:17:52.651787 (ntainerd)[1530]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Feb 13 15:17:52.669545 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Feb 13 15:17:52.669632 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Feb 13 15:17:52.673710 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Feb 13 15:17:52.673734 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Feb 13 15:17:52.709674 jq[1531]: true Feb 13 15:17:52.736982 update_engine[1524]: I20250213 15:17:52.735875 1524 main.cc:92] Flatcar Update Engine starting Feb 13 15:17:52.739432 kernel: EXT4-fs (vda9): resized filesystem to 1864699 Feb 13 15:17:52.739691 tar[1527]: linux-arm64/helm Feb 13 15:17:52.740911 systemd[1]: Started update-engine.service - Update Engine. Feb 13 15:17:52.753958 update_engine[1524]: I20250213 15:17:52.740939 1524 update_check_scheduler.cc:74] Next update check in 10m21s Feb 13 15:17:52.742614 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Feb 13 15:17:52.750689 systemd[1]: Started locksmithd.service - Cluster reboot manager. 
Feb 13 15:17:52.754051 systemd-logind[1515]: Watching system buttons on /dev/input/event0 (Power Button) Feb 13 15:17:52.754389 extend-filesystems[1523]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required Feb 13 15:17:52.754389 extend-filesystems[1523]: old_desc_blocks = 1, new_desc_blocks = 1 Feb 13 15:17:52.754389 extend-filesystems[1523]: The filesystem on /dev/vda9 is now 1864699 (4k) blocks long. Feb 13 15:17:52.773337 extend-filesystems[1500]: Resized filesystem in /dev/vda9 Feb 13 15:17:52.759903 systemd[1]: extend-filesystems.service: Deactivated successfully. Feb 13 15:17:52.760166 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Feb 13 15:17:52.760371 systemd-logind[1515]: New seat seat0. Feb 13 15:17:52.771404 systemd[1]: Started systemd-logind.service - User Login Management. Feb 13 15:17:52.799257 bash[1565]: Updated "/home/core/.ssh/authorized_keys" Feb 13 15:17:52.801605 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Feb 13 15:17:52.801824 locksmithd[1545]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Feb 13 15:17:52.803840 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met. Feb 13 15:17:52.881748 systemd-networkd[1223]: eth0: Gained IPv6LL Feb 13 15:17:52.888668 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Feb 13 15:17:52.890815 systemd[1]: Reached target network-online.target - Network is Online. Feb 13 15:17:52.900985 systemd[1]: Starting coreos-metadata.service - QEMU metadata agent... Feb 13 15:17:52.906197 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Feb 13 15:17:52.911100 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... 
Feb 13 15:17:52.911957 containerd[1530]: time="2025-02-13T15:17:52.911115720Z" level=info msg="starting containerd" revision=9b2ad7760328148397346d10c7b2004271249db4 version=v1.7.23 Feb 13 15:17:52.954851 containerd[1530]: time="2025-02-13T15:17:52.954624240Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Feb 13 15:17:52.957238 containerd[1530]: time="2025-02-13T15:17:52.956105560Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.71-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Feb 13 15:17:52.957238 containerd[1530]: time="2025-02-13T15:17:52.956149000Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Feb 13 15:17:52.957238 containerd[1530]: time="2025-02-13T15:17:52.956169160Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Feb 13 15:17:52.957238 containerd[1530]: time="2025-02-13T15:17:52.956350480Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1 Feb 13 15:17:52.957238 containerd[1530]: time="2025-02-13T15:17:52.956371160Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1 Feb 13 15:17:52.957238 containerd[1530]: time="2025-02-13T15:17:52.956455160Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1 Feb 13 15:17:52.957238 containerd[1530]: time="2025-02-13T15:17:52.956472240Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." 
type=io.containerd.snapshotter.v1 Feb 13 15:17:52.957238 containerd[1530]: time="2025-02-13T15:17:52.956703200Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Feb 13 15:17:52.957238 containerd[1530]: time="2025-02-13T15:17:52.956718920Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Feb 13 15:17:52.957238 containerd[1530]: time="2025-02-13T15:17:52.956732680Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1 Feb 13 15:17:52.957238 containerd[1530]: time="2025-02-13T15:17:52.956742480Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Feb 13 15:17:52.956466 systemd[1]: coreos-metadata.service: Deactivated successfully. Feb 13 15:17:52.957676 containerd[1530]: time="2025-02-13T15:17:52.956824680Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Feb 13 15:17:52.957676 containerd[1530]: time="2025-02-13T15:17:52.957042520Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Feb 13 15:17:52.957676 containerd[1530]: time="2025-02-13T15:17:52.957198640Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Feb 13 15:17:52.957676 containerd[1530]: time="2025-02-13T15:17:52.957212440Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." 
type=io.containerd.content.v1 Feb 13 15:17:52.957676 containerd[1530]: time="2025-02-13T15:17:52.957295760Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 Feb 13 15:17:52.957676 containerd[1530]: time="2025-02-13T15:17:52.957334960Z" level=info msg="metadata content store policy set" policy=shared Feb 13 15:17:52.956759 systemd[1]: Finished coreos-metadata.service - QEMU metadata agent. Feb 13 15:17:52.959349 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Feb 13 15:17:52.965037 containerd[1530]: time="2025-02-13T15:17:52.964074640Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Feb 13 15:17:52.965037 containerd[1530]: time="2025-02-13T15:17:52.964160560Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Feb 13 15:17:52.965037 containerd[1530]: time="2025-02-13T15:17:52.964179280Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1 Feb 13 15:17:52.965037 containerd[1530]: time="2025-02-13T15:17:52.964195720Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1 Feb 13 15:17:52.965037 containerd[1530]: time="2025-02-13T15:17:52.964282000Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Feb 13 15:17:52.965037 containerd[1530]: time="2025-02-13T15:17:52.964500320Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Feb 13 15:17:52.967725 containerd[1530]: time="2025-02-13T15:17:52.967684200Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Feb 13 15:17:52.968695 containerd[1530]: time="2025-02-13T15:17:52.968667520Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." 
type=io.containerd.runtime.v2 Feb 13 15:17:52.969412 containerd[1530]: time="2025-02-13T15:17:52.969324120Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1 Feb 13 15:17:52.969412 containerd[1530]: time="2025-02-13T15:17:52.969357200Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1 Feb 13 15:17:52.969412 containerd[1530]: time="2025-02-13T15:17:52.969377360Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Feb 13 15:17:52.969412 containerd[1530]: time="2025-02-13T15:17:52.969391480Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Feb 13 15:17:52.969412 containerd[1530]: time="2025-02-13T15:17:52.969405280Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Feb 13 15:17:52.969412 containerd[1530]: time="2025-02-13T15:17:52.969421240Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Feb 13 15:17:52.969589 containerd[1530]: time="2025-02-13T15:17:52.969447760Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Feb 13 15:17:52.969589 containerd[1530]: time="2025-02-13T15:17:52.969464480Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Feb 13 15:17:52.969589 containerd[1530]: time="2025-02-13T15:17:52.969477920Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Feb 13 15:17:52.969589 containerd[1530]: time="2025-02-13T15:17:52.969491080Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." 
type=io.containerd.service.v1 Feb 13 15:17:52.969589 containerd[1530]: time="2025-02-13T15:17:52.969513240Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Feb 13 15:17:52.969589 containerd[1530]: time="2025-02-13T15:17:52.969527480Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Feb 13 15:17:52.969589 containerd[1530]: time="2025-02-13T15:17:52.969541600Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Feb 13 15:17:52.969589 containerd[1530]: time="2025-02-13T15:17:52.969554840Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Feb 13 15:17:52.969589 containerd[1530]: time="2025-02-13T15:17:52.969566640Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Feb 13 15:17:52.969589 containerd[1530]: time="2025-02-13T15:17:52.969581480Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Feb 13 15:17:52.969760 containerd[1530]: time="2025-02-13T15:17:52.969596040Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Feb 13 15:17:52.969760 containerd[1530]: time="2025-02-13T15:17:52.969610440Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Feb 13 15:17:52.969760 containerd[1530]: time="2025-02-13T15:17:52.969624280Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1 Feb 13 15:17:52.969760 containerd[1530]: time="2025-02-13T15:17:52.969640480Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1 Feb 13 15:17:52.969760 containerd[1530]: time="2025-02-13T15:17:52.969652600Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." 
type=io.containerd.grpc.v1 Feb 13 15:17:52.969760 containerd[1530]: time="2025-02-13T15:17:52.969665920Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1 Feb 13 15:17:52.969760 containerd[1530]: time="2025-02-13T15:17:52.969678600Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Feb 13 15:17:52.969760 containerd[1530]: time="2025-02-13T15:17:52.969693560Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1 Feb 13 15:17:52.969760 containerd[1530]: time="2025-02-13T15:17:52.969718480Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1 Feb 13 15:17:52.969760 containerd[1530]: time="2025-02-13T15:17:52.969731960Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Feb 13 15:17:52.969760 containerd[1530]: time="2025-02-13T15:17:52.969742800Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Feb 13 15:17:52.969950 containerd[1530]: time="2025-02-13T15:17:52.969930120Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Feb 13 15:17:52.969970 containerd[1530]: time="2025-02-13T15:17:52.969951480Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1 Feb 13 15:17:52.969970 containerd[1530]: time="2025-02-13T15:17:52.969962640Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Feb 13 15:17:52.970005 containerd[1530]: time="2025-02-13T15:17:52.969974680Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." 
error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1 Feb 13 15:17:52.970005 containerd[1530]: time="2025-02-13T15:17:52.969983920Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Feb 13 15:17:52.970005 containerd[1530]: time="2025-02-13T15:17:52.970001080Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1 Feb 13 15:17:52.970059 containerd[1530]: time="2025-02-13T15:17:52.970011760Z" level=info msg="NRI interface is disabled by configuration." Feb 13 15:17:52.970059 containerd[1530]: time="2025-02-13T15:17:52.970021720Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1 Feb 13 15:17:52.971083 containerd[1530]: time="2025-02-13T15:17:52.970363040Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:false] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} 
CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:false SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Feb 13 15:17:52.971083 containerd[1530]: time="2025-02-13T15:17:52.970418840Z" level=info msg="Connect containerd service" Feb 13 15:17:52.971083 containerd[1530]: time="2025-02-13T15:17:52.970481640Z" level=info msg="using legacy CRI server" Feb 13 15:17:52.971083 containerd[1530]: time="2025-02-13T15:17:52.970489280Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Feb 13 15:17:52.971083 containerd[1530]: time="2025-02-13T15:17:52.970729240Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Feb 13 15:17:52.972040 containerd[1530]: 
time="2025-02-13T15:17:52.971330840Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Feb 13 15:17:52.975451 containerd[1530]: time="2025-02-13T15:17:52.972876200Z" level=info msg="Start subscribing containerd event" Feb 13 15:17:52.975451 containerd[1530]: time="2025-02-13T15:17:52.972976920Z" level=info msg="Start recovering state" Feb 13 15:17:52.975451 containerd[1530]: time="2025-02-13T15:17:52.973065800Z" level=info msg="Start event monitor" Feb 13 15:17:52.975451 containerd[1530]: time="2025-02-13T15:17:52.973082120Z" level=info msg="Start snapshots syncer" Feb 13 15:17:52.975451 containerd[1530]: time="2025-02-13T15:17:52.973093480Z" level=info msg="Start cni network conf syncer for default" Feb 13 15:17:52.975451 containerd[1530]: time="2025-02-13T15:17:52.973100880Z" level=info msg="Start streaming server" Feb 13 15:17:52.975451 containerd[1530]: time="2025-02-13T15:17:52.973447840Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Feb 13 15:17:52.975451 containerd[1530]: time="2025-02-13T15:17:52.973503240Z" level=info msg=serving... address=/run/containerd/containerd.sock Feb 13 15:17:52.973731 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Feb 13 15:17:52.978340 containerd[1530]: time="2025-02-13T15:17:52.976951200Z" level=info msg="containerd successfully booted in 0.066669s" Feb 13 15:17:52.977018 systemd[1]: Started containerd.service - containerd container runtime. Feb 13 15:17:53.130234 tar[1527]: linux-arm64/LICENSE Feb 13 15:17:53.130836 tar[1527]: linux-arm64/README.md Feb 13 15:17:53.148121 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. 
Feb 13 15:17:53.204157 sshd_keygen[1518]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Feb 13 15:17:53.223980 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Feb 13 15:17:53.233860 systemd[1]: Starting issuegen.service - Generate /run/issue... Feb 13 15:17:53.239869 systemd[1]: issuegen.service: Deactivated successfully. Feb 13 15:17:53.240126 systemd[1]: Finished issuegen.service - Generate /run/issue. Feb 13 15:17:53.243030 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Feb 13 15:17:53.256546 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Feb 13 15:17:53.259305 systemd[1]: Started getty@tty1.service - Getty on tty1. Feb 13 15:17:53.261536 systemd[1]: Started serial-getty@ttyAMA0.service - Serial Getty on ttyAMA0. Feb 13 15:17:53.262905 systemd[1]: Reached target getty.target - Login Prompts. Feb 13 15:17:53.459044 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Feb 13 15:17:53.460401 systemd[1]: Reached target multi-user.target - Multi-User System. Feb 13 15:17:53.461339 systemd[1]: Startup finished in 5.006s (kernel) + 3.348s (userspace) = 8.354s. Feb 13 15:17:53.464277 (kubelet)[1632]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Feb 13 15:17:53.967730 kubelet[1632]: E0213 15:17:53.967636 1632 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Feb 13 15:17:53.970296 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Feb 13 15:17:53.970538 systemd[1]: kubelet.service: Failed with result 'exit-code'. 
Feb 13 15:17:58.120004 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Feb 13 15:17:58.135698 systemd[1]: Started sshd@0-10.0.0.50:22-10.0.0.1:45680.service - OpenSSH per-connection server daemon (10.0.0.1:45680). Feb 13 15:17:58.200866 sshd[1646]: Accepted publickey for core from 10.0.0.1 port 45680 ssh2: RSA SHA256:Nj7eQKGyA0WrBR5yxLJj7i9YHGzEkxm/3KYsW1FrsQ8 Feb 13 15:17:58.202962 sshd-session[1646]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 15:17:58.215519 systemd-logind[1515]: New session 1 of user core. Feb 13 15:17:58.216532 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Feb 13 15:17:58.224675 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Feb 13 15:17:58.234934 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Feb 13 15:17:58.237274 systemd[1]: Starting user@500.service - User Manager for UID 500... Feb 13 15:17:58.243935 (systemd)[1652]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Feb 13 15:17:58.317926 systemd[1652]: Queued start job for default target default.target. Feb 13 15:17:58.318306 systemd[1652]: Created slice app.slice - User Application Slice. Feb 13 15:17:58.318329 systemd[1652]: Reached target paths.target - Paths. Feb 13 15:17:58.318341 systemd[1652]: Reached target timers.target - Timers. Feb 13 15:17:58.332545 systemd[1652]: Starting dbus.socket - D-Bus User Message Bus Socket... Feb 13 15:17:58.338871 systemd[1652]: Listening on dbus.socket - D-Bus User Message Bus Socket. Feb 13 15:17:58.338936 systemd[1652]: Reached target sockets.target - Sockets. Feb 13 15:17:58.338949 systemd[1652]: Reached target basic.target - Basic System. Feb 13 15:17:58.338985 systemd[1652]: Reached target default.target - Main User Target. Feb 13 15:17:58.339011 systemd[1652]: Startup finished in 89ms. 
Feb 13 15:17:58.339329 systemd[1]: Started user@500.service - User Manager for UID 500. Feb 13 15:17:58.341041 systemd[1]: Started session-1.scope - Session 1 of User core. Feb 13 15:17:58.397707 systemd[1]: Started sshd@1-10.0.0.50:22-10.0.0.1:45684.service - OpenSSH per-connection server daemon (10.0.0.1:45684). Feb 13 15:17:58.436864 sshd[1664]: Accepted publickey for core from 10.0.0.1 port 45684 ssh2: RSA SHA256:Nj7eQKGyA0WrBR5yxLJj7i9YHGzEkxm/3KYsW1FrsQ8 Feb 13 15:17:58.438197 sshd-session[1664]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 15:17:58.442452 systemd-logind[1515]: New session 2 of user core. Feb 13 15:17:58.452740 systemd[1]: Started session-2.scope - Session 2 of User core. Feb 13 15:17:58.505400 sshd[1667]: Connection closed by 10.0.0.1 port 45684 Feb 13 15:17:58.505794 sshd-session[1664]: pam_unix(sshd:session): session closed for user core Feb 13 15:17:58.513727 systemd[1]: Started sshd@2-10.0.0.50:22-10.0.0.1:45700.service - OpenSSH per-connection server daemon (10.0.0.1:45700). Feb 13 15:17:58.514127 systemd[1]: sshd@1-10.0.0.50:22-10.0.0.1:45684.service: Deactivated successfully. Feb 13 15:17:58.516099 systemd-logind[1515]: Session 2 logged out. Waiting for processes to exit. Feb 13 15:17:58.516632 systemd[1]: session-2.scope: Deactivated successfully. Feb 13 15:17:58.517576 systemd-logind[1515]: Removed session 2. Feb 13 15:17:58.549512 sshd[1669]: Accepted publickey for core from 10.0.0.1 port 45700 ssh2: RSA SHA256:Nj7eQKGyA0WrBR5yxLJj7i9YHGzEkxm/3KYsW1FrsQ8 Feb 13 15:17:58.550737 sshd-session[1669]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 15:17:58.555051 systemd-logind[1515]: New session 3 of user core. Feb 13 15:17:58.560734 systemd[1]: Started session-3.scope - Session 3 of User core. 
Feb 13 15:17:58.610464 sshd[1675]: Connection closed by 10.0.0.1 port 45700 Feb 13 15:17:58.610970 sshd-session[1669]: pam_unix(sshd:session): session closed for user core Feb 13 15:17:58.620699 systemd[1]: Started sshd@3-10.0.0.50:22-10.0.0.1:45708.service - OpenSSH per-connection server daemon (10.0.0.1:45708). Feb 13 15:17:58.621090 systemd[1]: sshd@2-10.0.0.50:22-10.0.0.1:45700.service: Deactivated successfully. Feb 13 15:17:58.622875 systemd-logind[1515]: Session 3 logged out. Waiting for processes to exit. Feb 13 15:17:58.623460 systemd[1]: session-3.scope: Deactivated successfully. Feb 13 15:17:58.624832 systemd-logind[1515]: Removed session 3. Feb 13 15:17:58.655631 sshd[1677]: Accepted publickey for core from 10.0.0.1 port 45708 ssh2: RSA SHA256:Nj7eQKGyA0WrBR5yxLJj7i9YHGzEkxm/3KYsW1FrsQ8 Feb 13 15:17:58.657282 sshd-session[1677]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 15:17:58.661152 systemd-logind[1515]: New session 4 of user core. Feb 13 15:17:58.672810 systemd[1]: Started session-4.scope - Session 4 of User core. Feb 13 15:17:58.725081 sshd[1683]: Connection closed by 10.0.0.1 port 45708 Feb 13 15:17:58.725446 sshd-session[1677]: pam_unix(sshd:session): session closed for user core Feb 13 15:17:58.737739 systemd[1]: Started sshd@4-10.0.0.50:22-10.0.0.1:45718.service - OpenSSH per-connection server daemon (10.0.0.1:45718). Feb 13 15:17:58.738140 systemd[1]: sshd@3-10.0.0.50:22-10.0.0.1:45708.service: Deactivated successfully. Feb 13 15:17:58.740158 systemd-logind[1515]: Session 4 logged out. Waiting for processes to exit. Feb 13 15:17:58.740658 systemd[1]: session-4.scope: Deactivated successfully. Feb 13 15:17:58.742116 systemd-logind[1515]: Removed session 4. 
Feb 13 15:17:58.773231 sshd[1685]: Accepted publickey for core from 10.0.0.1 port 45718 ssh2: RSA SHA256:Nj7eQKGyA0WrBR5yxLJj7i9YHGzEkxm/3KYsW1FrsQ8 Feb 13 15:17:58.774678 sshd-session[1685]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 15:17:58.778481 systemd-logind[1515]: New session 5 of user core. Feb 13 15:17:58.793761 systemd[1]: Started session-5.scope - Session 5 of User core. Feb 13 15:17:58.860342 sudo[1692]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Feb 13 15:17:58.860647 sudo[1692]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Feb 13 15:17:59.196721 systemd[1]: Starting docker.service - Docker Application Container Engine... Feb 13 15:17:59.196911 (dockerd)[1714]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Feb 13 15:17:59.451381 dockerd[1714]: time="2025-02-13T15:17:59.450971279Z" level=info msg="Starting up" Feb 13 15:17:59.692618 dockerd[1714]: time="2025-02-13T15:17:59.692571557Z" level=info msg="Loading containers: start." Feb 13 15:17:59.843444 kernel: Initializing XFRM netlink socket Feb 13 15:17:59.917688 systemd-networkd[1223]: docker0: Link UP Feb 13 15:17:59.953141 dockerd[1714]: time="2025-02-13T15:17:59.953074914Z" level=info msg="Loading containers: done." 
Feb 13 15:17:59.967729 dockerd[1714]: time="2025-02-13T15:17:59.967670882Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2
Feb 13 15:17:59.967974 dockerd[1714]: time="2025-02-13T15:17:59.967787762Z" level=info msg="Docker daemon" commit=8b539b8df24032dabeaaa099cf1d0535ef0286a3 containerd-snapshotter=false storage-driver=overlay2 version=27.2.1
Feb 13 15:17:59.967974 dockerd[1714]: time="2025-02-13T15:17:59.967905001Z" level=info msg="Daemon has completed initialization"
Feb 13 15:17:59.996636 dockerd[1714]: time="2025-02-13T15:17:59.996516496Z" level=info msg="API listen on /run/docker.sock"
Feb 13 15:17:59.996711 systemd[1]: Started docker.service - Docker Application Container Engine.
Feb 13 15:18:00.520354 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck1388771352-merged.mount: Deactivated successfully.
Feb 13 15:18:00.670732 containerd[1530]: time="2025-02-13T15:18:00.670510200Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.29.14\""
Feb 13 15:18:01.320281 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount121487983.mount: Deactivated successfully.
Feb 13 15:18:02.324218 containerd[1530]: time="2025-02-13T15:18:02.324154525Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.29.14\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Feb 13 15:18:02.324713 containerd[1530]: time="2025-02-13T15:18:02.324666929Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.29.14: active requests=0, bytes read=32205863"
Feb 13 15:18:02.325416 containerd[1530]: time="2025-02-13T15:18:02.325384878Z" level=info msg="ImageCreate event name:\"sha256:c136612236eb39fcac4abea395de985f019cf87f72cc1afd828fb78de88a649f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Feb 13 15:18:02.329239 containerd[1530]: time="2025-02-13T15:18:02.329171776Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:1432b456b21015c99783d2b3a2010873fb67bf946c89d45e6d356449e083dcfb\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Feb 13 15:18:02.330333 containerd[1530]: time="2025-02-13T15:18:02.330257653Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.29.14\" with image id \"sha256:c136612236eb39fcac4abea395de985f019cf87f72cc1afd828fb78de88a649f\", repo tag \"registry.k8s.io/kube-apiserver:v1.29.14\", repo digest \"registry.k8s.io/kube-apiserver@sha256:1432b456b21015c99783d2b3a2010873fb67bf946c89d45e6d356449e083dcfb\", size \"32202661\" in 1.659707729s"
Feb 13 15:18:02.330333 containerd[1530]: time="2025-02-13T15:18:02.330289625Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.29.14\" returns image reference \"sha256:c136612236eb39fcac4abea395de985f019cf87f72cc1afd828fb78de88a649f\""
Feb 13 15:18:02.348543 containerd[1530]: time="2025-02-13T15:18:02.348507647Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.29.14\""
Feb 13 15:18:03.690843 containerd[1530]: time="2025-02-13T15:18:03.690792977Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.29.14\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Feb 13 15:18:03.691808 containerd[1530]: time="2025-02-13T15:18:03.691541660Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.29.14: active requests=0, bytes read=29383093"
Feb 13 15:18:03.692452 containerd[1530]: time="2025-02-13T15:18:03.692412445Z" level=info msg="ImageCreate event name:\"sha256:582085ec6cd04751293bebad40e35d6b2066b81f6e5868a9db60b8127ca7921d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Feb 13 15:18:03.695336 containerd[1530]: time="2025-02-13T15:18:03.695305618Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:23ccdb5e7e2c317f5727652ef7e64ef91ead34a3c73dfa9c3ab23b3a5028e280\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Feb 13 15:18:03.697495 containerd[1530]: time="2025-02-13T15:18:03.697364896Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.29.14\" with image id \"sha256:582085ec6cd04751293bebad40e35d6b2066b81f6e5868a9db60b8127ca7921d\", repo tag \"registry.k8s.io/kube-controller-manager:v1.29.14\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:23ccdb5e7e2c317f5727652ef7e64ef91ead34a3c73dfa9c3ab23b3a5028e280\", size \"30786820\" in 1.348817962s"
Feb 13 15:18:03.697495 containerd[1530]: time="2025-02-13T15:18:03.697400268Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.29.14\" returns image reference \"sha256:582085ec6cd04751293bebad40e35d6b2066b81f6e5868a9db60b8127ca7921d\""
Feb 13 15:18:03.715890 containerd[1530]: time="2025-02-13T15:18:03.715858308Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.29.14\""
Feb 13 15:18:04.220709 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1.
Feb 13 15:18:04.229598 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Feb 13 15:18:04.341264 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Feb 13 15:18:04.345443 (kubelet)[2002]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Feb 13 15:18:04.449217 kubelet[2002]: E0213 15:18:04.449165 2002 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Feb 13 15:18:04.453509 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Feb 13 15:18:04.453698 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Feb 13 15:18:04.680961 containerd[1530]: time="2025-02-13T15:18:04.680837229Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.29.14\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Feb 13 15:18:04.682470 containerd[1530]: time="2025-02-13T15:18:04.682222673Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.29.14: active requests=0, bytes read=15766982"
Feb 13 15:18:04.684182 containerd[1530]: time="2025-02-13T15:18:04.684148913Z" level=info msg="ImageCreate event name:\"sha256:dfb84ea1121ad6a9ceccfe5078af3eee1b27b8d2b2e93d6449d11e1526dbeff8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Feb 13 15:18:04.687421 containerd[1530]: time="2025-02-13T15:18:04.687373622Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:cf0046be3eb6c4831b6b2a1b3e24f18e27778663890144478f11a82622b48c48\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Feb 13 15:18:04.688463 containerd[1530]: time="2025-02-13T15:18:04.688419400Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.29.14\" with image id \"sha256:dfb84ea1121ad6a9ceccfe5078af3eee1b27b8d2b2e93d6449d11e1526dbeff8\", repo tag \"registry.k8s.io/kube-scheduler:v1.29.14\", repo digest \"registry.k8s.io/kube-scheduler@sha256:cf0046be3eb6c4831b6b2a1b3e24f18e27778663890144478f11a82622b48c48\", size \"17170727\" in 972.524961ms"
Feb 13 15:18:04.688510 containerd[1530]: time="2025-02-13T15:18:04.688467364Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.29.14\" returns image reference \"sha256:dfb84ea1121ad6a9ceccfe5078af3eee1b27b8d2b2e93d6449d11e1526dbeff8\""
Feb 13 15:18:04.706455 containerd[1530]: time="2025-02-13T15:18:04.706406633Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.29.14\""
Feb 13 15:18:05.687478 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1740331786.mount: Deactivated successfully.
Feb 13 15:18:06.008029 containerd[1530]: time="2025-02-13T15:18:06.007909654Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.29.14\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Feb 13 15:18:06.008993 containerd[1530]: time="2025-02-13T15:18:06.008843880Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.29.14: active requests=0, bytes read=25273377"
Feb 13 15:18:06.009673 containerd[1530]: time="2025-02-13T15:18:06.009559810Z" level=info msg="ImageCreate event name:\"sha256:8acaac6288aef2fbe5821a7539f95a6043513e648e6ffaf6a545a93fa77fe8c8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Feb 13 15:18:06.012166 containerd[1530]: time="2025-02-13T15:18:06.012101899Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:197988595a902751e4e570a5e4d74182f12d83c1d175c1e79aa020f358f6535b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Feb 13 15:18:06.012830 containerd[1530]: time="2025-02-13T15:18:06.012687115Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.29.14\" with image id \"sha256:8acaac6288aef2fbe5821a7539f95a6043513e648e6ffaf6a545a93fa77fe8c8\", repo tag \"registry.k8s.io/kube-proxy:v1.29.14\", repo digest \"registry.k8s.io/kube-proxy@sha256:197988595a902751e4e570a5e4d74182f12d83c1d175c1e79aa020f358f6535b\", size \"25272394\" in 1.306227361s"
Feb 13 15:18:06.012830 containerd[1530]: time="2025-02-13T15:18:06.012723291Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.29.14\" returns image reference \"sha256:8acaac6288aef2fbe5821a7539f95a6043513e648e6ffaf6a545a93fa77fe8c8\""
Feb 13 15:18:06.032086 containerd[1530]: time="2025-02-13T15:18:06.031845208Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\""
Feb 13 15:18:06.608057 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1357852860.mount: Deactivated successfully.
Feb 13 15:18:07.114346 containerd[1530]: time="2025-02-13T15:18:07.114176367Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Feb 13 15:18:07.115264 containerd[1530]: time="2025-02-13T15:18:07.115234475Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.1: active requests=0, bytes read=16485383"
Feb 13 15:18:07.116044 containerd[1530]: time="2025-02-13T15:18:07.116009198Z" level=info msg="ImageCreate event name:\"sha256:2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Feb 13 15:18:07.119254 containerd[1530]: time="2025-02-13T15:18:07.119203550Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Feb 13 15:18:07.120388 containerd[1530]: time="2025-02-13T15:18:07.120308310Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.1\" with image id \"sha256:2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.1\", repo digest \"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\", size \"16482581\" in 1.088352694s"
Feb 13 15:18:07.120388 containerd[1530]: time="2025-02-13T15:18:07.120355801Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\" returns image reference \"sha256:2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93\""
Feb 13 15:18:07.139661 containerd[1530]: time="2025-02-13T15:18:07.139591833Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\""
Feb 13 15:18:07.569686 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3488449472.mount: Deactivated successfully.
Feb 13 15:18:07.573647 containerd[1530]: time="2025-02-13T15:18:07.573603591Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Feb 13 15:18:07.574526 containerd[1530]: time="2025-02-13T15:18:07.574100205Z" level=info msg="stop pulling image registry.k8s.io/pause:3.9: active requests=0, bytes read=268823"
Feb 13 15:18:07.575040 containerd[1530]: time="2025-02-13T15:18:07.574985340Z" level=info msg="ImageCreate event name:\"sha256:829e9de338bd5fdd3f16f68f83a9fb288fbc8453e881e5d5cfd0f6f2ff72b43e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Feb 13 15:18:07.577067 containerd[1530]: time="2025-02-13T15:18:07.577016729Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Feb 13 15:18:07.577985 containerd[1530]: time="2025-02-13T15:18:07.577912018Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.9\" with image id \"sha256:829e9de338bd5fdd3f16f68f83a9fb288fbc8453e881e5d5cfd0f6f2ff72b43e\", repo tag \"registry.k8s.io/pause:3.9\", repo digest \"registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097\", size \"268051\" in 438.27861ms"
Feb 13 15:18:07.577985 containerd[1530]: time="2025-02-13T15:18:07.577942039Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\" returns image reference \"sha256:829e9de338bd5fdd3f16f68f83a9fb288fbc8453e881e5d5cfd0f6f2ff72b43e\""
Feb 13 15:18:07.596090 containerd[1530]: time="2025-02-13T15:18:07.596042490Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.10-0\""
Feb 13 15:18:08.157322 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2755941056.mount: Deactivated successfully.
Feb 13 15:18:09.457285 containerd[1530]: time="2025-02-13T15:18:09.457233054Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.10-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Feb 13 15:18:09.457815 containerd[1530]: time="2025-02-13T15:18:09.457752612Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.10-0: active requests=0, bytes read=65200788"
Feb 13 15:18:09.458618 containerd[1530]: time="2025-02-13T15:18:09.458584242Z" level=info msg="ImageCreate event name:\"sha256:79f8d13ae8b8839cadfb2f83416935f5184206d386028e2d1263577f0ab3620b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Feb 13 15:18:09.461835 containerd[1530]: time="2025-02-13T15:18:09.461795584Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:22f892d7672adc0b9c86df67792afdb8b2dc08880f49f669eaaa59c47d7908c2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Feb 13 15:18:09.463173 containerd[1530]: time="2025-02-13T15:18:09.463135699Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.10-0\" with image id \"sha256:79f8d13ae8b8839cadfb2f83416935f5184206d386028e2d1263577f0ab3620b\", repo tag \"registry.k8s.io/etcd:3.5.10-0\", repo digest \"registry.k8s.io/etcd@sha256:22f892d7672adc0b9c86df67792afdb8b2dc08880f49f669eaaa59c47d7908c2\", size \"65198393\" in 1.867062226s"
Feb 13 15:18:09.463206 containerd[1530]: time="2025-02-13T15:18:09.463172519Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.10-0\" returns image reference \"sha256:79f8d13ae8b8839cadfb2f83416935f5184206d386028e2d1263577f0ab3620b\""
Feb 13 15:18:14.390258 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Feb 13 15:18:14.401674 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Feb 13 15:18:14.418273 systemd[1]: Reloading requested from client PID 2228 ('systemctl') (unit session-5.scope)...
Feb 13 15:18:14.418292 systemd[1]: Reloading...
Feb 13 15:18:14.479464 zram_generator::config[2274]: No configuration found.
Feb 13 15:18:14.571236 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Feb 13 15:18:14.621363 systemd[1]: Reloading finished in 202 ms.
Feb 13 15:18:14.660888 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM
Feb 13 15:18:14.660954 systemd[1]: kubelet.service: Failed with result 'signal'.
Feb 13 15:18:14.661220 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Feb 13 15:18:14.663676 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Feb 13 15:18:14.753480 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Feb 13 15:18:14.757742 (kubelet)[2325]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS
Feb 13 15:18:14.821784 kubelet[2325]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Feb 13 15:18:14.821784 kubelet[2325]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI.
Feb 13 15:18:14.821784 kubelet[2325]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Feb 13 15:18:14.821784 kubelet[2325]: I0213 15:18:14.819470 2325 server.go:204] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Feb 13 15:18:15.502940 kubelet[2325]: I0213 15:18:15.502890 2325 server.go:487] "Kubelet version" kubeletVersion="v1.29.2"
Feb 13 15:18:15.502940 kubelet[2325]: I0213 15:18:15.502921 2325 server.go:489] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Feb 13 15:18:15.503183 kubelet[2325]: I0213 15:18:15.503153 2325 server.go:919] "Client rotation is on, will bootstrap in background"
Feb 13 15:18:15.541717 kubelet[2325]: I0213 15:18:15.541573 2325 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Feb 13 15:18:15.541717 kubelet[2325]: E0213 15:18:15.541640 2325 certificate_manager.go:562] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post "https://10.0.0.50:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 10.0.0.50:6443: connect: connection refused
Feb 13 15:18:15.548587 kubelet[2325]: I0213 15:18:15.548543 2325 server.go:745] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /"
Feb 13 15:18:15.549544 kubelet[2325]: I0213 15:18:15.549114 2325 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Feb 13 15:18:15.549544 kubelet[2325]: I0213 15:18:15.549298 2325 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"cgroupfs","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null}
Feb 13 15:18:15.549544 kubelet[2325]: I0213 15:18:15.549318 2325 topology_manager.go:138] "Creating topology manager with none policy"
Feb 13 15:18:15.549544 kubelet[2325]: I0213 15:18:15.549326 2325 container_manager_linux.go:301] "Creating device plugin manager"
Feb 13 15:18:15.550621 kubelet[2325]: I0213 15:18:15.550595 2325 state_mem.go:36] "Initialized new in-memory state store"
Feb 13 15:18:15.554720 kubelet[2325]: I0213 15:18:15.554698 2325 kubelet.go:396] "Attempting to sync node with API server"
Feb 13 15:18:15.554834 kubelet[2325]: I0213 15:18:15.554823 2325 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests"
Feb 13 15:18:15.555022 kubelet[2325]: I0213 15:18:15.555007 2325 kubelet.go:312] "Adding apiserver pod source"
Feb 13 15:18:15.555094 kubelet[2325]: I0213 15:18:15.555085 2325 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Feb 13 15:18:15.555860 kubelet[2325]: W0213 15:18:15.555813 2325 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Service: Get "https://10.0.0.50:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.0.0.50:6443: connect: connection refused
Feb 13 15:18:15.557281 kubelet[2325]: W0213 15:18:15.555985 2325 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Node: Get "https://10.0.0.50:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.50:6443: connect: connection refused
Feb 13 15:18:15.557281 kubelet[2325]: E0213 15:18:15.557279 2325 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://10.0.0.50:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.50:6443: connect: connection refused
Feb 13 15:18:15.557666 kubelet[2325]: E0213 15:18:15.557238 2325 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.0.0.50:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.0.0.50:6443: connect: connection refused
Feb 13 15:18:15.558075 kubelet[2325]: I0213 15:18:15.558054 2325 kuberuntime_manager.go:258] "Container runtime initialized" containerRuntime="containerd" version="v1.7.23" apiVersion="v1"
Feb 13 15:18:15.558767 kubelet[2325]: I0213 15:18:15.558739 2325 kubelet.go:809] "Not starting ClusterTrustBundle informer because we are in static kubelet mode"
Feb 13 15:18:15.559634 kubelet[2325]: W0213 15:18:15.559605 2325 probe.go:268] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating.
Feb 13 15:18:15.560782 kubelet[2325]: I0213 15:18:15.560757 2325 server.go:1256] "Started kubelet"
Feb 13 15:18:15.562726 kubelet[2325]: I0213 15:18:15.562690 2325 server.go:162] "Starting to listen" address="0.0.0.0" port=10250
Feb 13 15:18:15.563245 kubelet[2325]: I0213 15:18:15.563001 2325 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Feb 13 15:18:15.563447 kubelet[2325]: I0213 15:18:15.563407 2325 server.go:233] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Feb 13 15:18:15.565443 kubelet[2325]: I0213 15:18:15.565397 2325 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Feb 13 15:18:15.566776 kubelet[2325]: I0213 15:18:15.566747 2325 volume_manager.go:291] "Starting Kubelet Volume Manager"
Feb 13 15:18:15.566851 kubelet[2325]: I0213 15:18:15.566840 2325 desired_state_of_world_populator.go:151] "Desired state populator starts to run"
Feb 13 15:18:15.566916 kubelet[2325]: I0213 15:18:15.566898 2325 reconciler_new.go:29] "Reconciler: start to sync state"
Feb 13 15:18:15.567459 kubelet[2325]: I0213 15:18:15.567375 2325 server.go:461] "Adding debug handlers to kubelet server"
Feb 13 15:18:15.567682 kubelet[2325]: W0213 15:18:15.567634 2325 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.CSIDriver: Get "https://10.0.0.50:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.50:6443: connect: connection refused
Feb 13 15:18:15.567710 kubelet[2325]: E0213 15:18:15.567697 2325 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://10.0.0.50:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.50:6443: connect: connection refused
Feb 13 15:18:15.569123 kubelet[2325]: E0213 15:18:15.568568 2325 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.50:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.50:6443: connect: connection refused" interval="200ms"
Feb 13 15:18:15.569374 kubelet[2325]: E0213 15:18:15.569351 2325 event.go:355] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.50:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.50:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.1823cd8f17ef6d58 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-02-13 15:18:15.560727896 +0000 UTC m=+0.799568594,LastTimestamp:2025-02-13 15:18:15.560727896 +0000 UTC m=+0.799568594,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}"
Feb 13 15:18:15.570015 kubelet[2325]: I0213 15:18:15.569985 2325 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
Feb 13 15:18:15.571238 kubelet[2325]: E0213 15:18:15.570765 2325 kubelet.go:1462] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem"
Feb 13 15:18:15.572346 kubelet[2325]: I0213 15:18:15.572325 2325 factory.go:221] Registration of the containerd container factory successfully
Feb 13 15:18:15.572346 kubelet[2325]: I0213 15:18:15.572344 2325 factory.go:221] Registration of the systemd container factory successfully
Feb 13 15:18:15.578988 kubelet[2325]: I0213 15:18:15.578833 2325 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4"
Feb 13 15:18:15.580131 kubelet[2325]: I0213 15:18:15.579782 2325 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6"
Feb 13 15:18:15.580131 kubelet[2325]: I0213 15:18:15.579803 2325 status_manager.go:217] "Starting to sync pod status with apiserver"
Feb 13 15:18:15.580131 kubelet[2325]: I0213 15:18:15.579821 2325 kubelet.go:2329] "Starting kubelet main sync loop"
Feb 13 15:18:15.580131 kubelet[2325]: E0213 15:18:15.579872 2325 kubelet.go:2353] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
Feb 13 15:18:15.585717 kubelet[2325]: W0213 15:18:15.585676 2325 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.RuntimeClass: Get "https://10.0.0.50:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.50:6443: connect: connection refused
Feb 13 15:18:15.585850 kubelet[2325]: E0213 15:18:15.585836 2325 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://10.0.0.50:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.50:6443: connect: connection refused
Feb 13 15:18:15.591897 kubelet[2325]: I0213 15:18:15.591868 2325 cpu_manager.go:214] "Starting CPU manager" policy="none"
Feb 13 15:18:15.591897 kubelet[2325]: I0213 15:18:15.591895 2325 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s"
Feb 13 15:18:15.592036 kubelet[2325]: I0213 15:18:15.591912 2325 state_mem.go:36] "Initialized new in-memory state store"
Feb 13 15:18:15.668461 kubelet[2325]: I0213 15:18:15.668420 2325 kubelet_node_status.go:73] "Attempting to register node" node="localhost"
Feb 13 15:18:15.668914 kubelet[2325]: E0213 15:18:15.668884 2325 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.0.0.50:6443/api/v1/nodes\": dial tcp 10.0.0.50:6443: connect: connection refused" node="localhost"
Feb 13 15:18:15.680330 kubelet[2325]: E0213 15:18:15.680306 2325 kubelet.go:2353] "Skipping pod synchronization" err="container runtime status check may not have completed yet"
Feb 13 15:18:15.682341 kubelet[2325]: I0213 15:18:15.682305 2325 policy_none.go:49] "None policy: Start"
Feb 13 15:18:15.683190 kubelet[2325]: I0213 15:18:15.683170 2325 memory_manager.go:170] "Starting memorymanager" policy="None"
Feb 13 15:18:15.683249 kubelet[2325]: I0213 15:18:15.683216 2325 state_mem.go:35] "Initializing new in-memory state store"
Feb 13 15:18:15.689443 kubelet[2325]: I0213 15:18:15.688981 2325 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found"
Feb 13 15:18:15.689443 kubelet[2325]: I0213 15:18:15.689242 2325 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
Feb 13 15:18:15.690777 kubelet[2325]: E0213 15:18:15.690760 2325 eviction_manager.go:282] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found"
Feb 13 15:18:15.769557 kubelet[2325]: E0213 15:18:15.769461 2325 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.50:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.50:6443: connect: connection refused" interval="400ms"
Feb 13 15:18:15.870829 kubelet[2325]: I0213 15:18:15.870758 2325 kubelet_node_status.go:73] "Attempting to register node" node="localhost"
Feb 13 15:18:15.871159 kubelet[2325]: E0213 15:18:15.871142 2325 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.0.0.50:6443/api/v1/nodes\": dial tcp 10.0.0.50:6443: connect: connection refused" node="localhost"
Feb 13 15:18:15.881284 kubelet[2325]: I0213 15:18:15.881246 2325 topology_manager.go:215] "Topology Admit Handler" podUID="34a43d8200b04e3b81251db6a65bc0ce" podNamespace="kube-system" podName="kube-scheduler-localhost"
Feb 13 15:18:15.882434 kubelet[2325]: I0213 15:18:15.882410 2325 topology_manager.go:215] "Topology Admit Handler" podUID="d2a7a6ffe7973b741dbee045c49c1e53" podNamespace="kube-system" podName="kube-apiserver-localhost"
Feb 13 15:18:15.883404 kubelet[2325]: I0213 15:18:15.883382 2325 topology_manager.go:215] "Topology Admit Handler" podUID="8dd79284f50d348595750c57a6b03620" podNamespace="kube-system" podName="kube-controller-manager-localhost"
Feb 13 15:18:15.969110 kubelet[2325]: I0213 15:18:15.969069 2325 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/d2a7a6ffe7973b741dbee045c49c1e53-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"d2a7a6ffe7973b741dbee045c49c1e53\") " pod="kube-system/kube-apiserver-localhost"
Feb 13 15:18:15.969110 kubelet[2325]: I0213 15:18:15.969116 2325 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/8dd79284f50d348595750c57a6b03620-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"8dd79284f50d348595750c57a6b03620\") " pod="kube-system/kube-controller-manager-localhost"
Feb 13 15:18:15.969256 kubelet[2325]: I0213 15:18:15.969150 2325 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/8dd79284f50d348595750c57a6b03620-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"8dd79284f50d348595750c57a6b03620\") " pod="kube-system/kube-controller-manager-localhost"
Feb 13 15:18:15.969256 kubelet[2325]: I0213 15:18:15.969173 2325 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/34a43d8200b04e3b81251db6a65bc0ce-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"34a43d8200b04e3b81251db6a65bc0ce\") " pod="kube-system/kube-scheduler-localhost"
Feb 13 15:18:15.969256 kubelet[2325]: I0213 15:18:15.969192 2325 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/d2a7a6ffe7973b741dbee045c49c1e53-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"d2a7a6ffe7973b741dbee045c49c1e53\") " pod="kube-system/kube-apiserver-localhost"
Feb 13 15:18:15.969256 kubelet[2325]: I0213 15:18:15.969244 2325 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/8dd79284f50d348595750c57a6b03620-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"8dd79284f50d348595750c57a6b03620\") " pod="kube-system/kube-controller-manager-localhost"
Feb 13 15:18:15.969348 kubelet[2325]: I0213 15:18:15.969292 2325 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/8dd79284f50d348595750c57a6b03620-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"8dd79284f50d348595750c57a6b03620\") " pod="kube-system/kube-controller-manager-localhost"
Feb 13 15:18:15.969348 kubelet[2325]: I0213 15:18:15.969316 2325 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/8dd79284f50d348595750c57a6b03620-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"8dd79284f50d348595750c57a6b03620\") " pod="kube-system/kube-controller-manager-localhost"
Feb 13 15:18:15.969385 kubelet[2325]: I0213 15:18:15.969357 2325 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/d2a7a6ffe7973b741dbee045c49c1e53-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"d2a7a6ffe7973b741dbee045c49c1e53\") " pod="kube-system/kube-apiserver-localhost"
Feb 13 15:18:16.170320 kubelet[2325]: E0213 15:18:16.170285 2325 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.50:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.50:6443: connect: connection refused" interval="800ms"
Feb 13 15:18:16.187524 kubelet[2325]: E0213 15:18:16.187486 2325 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 13 15:18:16.188150 kubelet[2325]: E0213 15:18:16.187909 2325 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 13 15:18:16.188253 containerd[1530]: time="2025-02-13T15:18:16.188210296Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:d2a7a6ffe7973b741dbee045c49c1e53,Namespace:kube-system,Attempt:0,}"
Feb 13 15:18:16.188631 containerd[1530]: time="2025-02-13T15:18:16.188249963Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:34a43d8200b04e3b81251db6a65bc0ce,Namespace:kube-system,Attempt:0,}"
Feb 13 15:18:16.188688 kubelet[2325]: E0213 15:18:16.188664 2325 dns.go:153] "Nameserver limits exceeded"
err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 15:18:16.189028 containerd[1530]: time="2025-02-13T15:18:16.189001824Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:8dd79284f50d348595750c57a6b03620,Namespace:kube-system,Attempt:0,}" Feb 13 15:18:16.272502 kubelet[2325]: I0213 15:18:16.272413 2325 kubelet_node_status.go:73] "Attempting to register node" node="localhost" Feb 13 15:18:16.272765 kubelet[2325]: E0213 15:18:16.272732 2325 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.0.0.50:6443/api/v1/nodes\": dial tcp 10.0.0.50:6443: connect: connection refused" node="localhost" Feb 13 15:18:16.636490 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount429301726.mount: Deactivated successfully. Feb 13 15:18:16.644174 containerd[1530]: time="2025-02-13T15:18:16.644123886Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Feb 13 15:18:16.646698 containerd[1530]: time="2025-02-13T15:18:16.646644178Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Feb 13 15:18:16.647965 containerd[1530]: time="2025-02-13T15:18:16.647660188Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Feb 13 15:18:16.648960 containerd[1530]: time="2025-02-13T15:18:16.648919834Z" level=info msg="ImageCreate event name:\"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Feb 13 15:18:16.649743 containerd[1530]: time="2025-02-13T15:18:16.649691968Z" 
level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=269175" Feb 13 15:18:16.654275 containerd[1530]: time="2025-02-13T15:18:16.654237362Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Feb 13 15:18:16.656569 containerd[1530]: time="2025-02-13T15:18:16.656534171Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Feb 13 15:18:16.657339 containerd[1530]: time="2025-02-13T15:18:16.657283753Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Feb 13 15:18:16.658674 containerd[1530]: time="2025-02-13T15:18:16.658638006Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 469.577203ms" Feb 13 15:18:16.661828 containerd[1530]: time="2025-02-13T15:18:16.661780604Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 473.498853ms" Feb 13 15:18:16.662760 containerd[1530]: time="2025-02-13T15:18:16.662536544Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest 
\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 474.22992ms" Feb 13 15:18:16.785336 kubelet[2325]: W0213 15:18:16.785252 2325 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.RuntimeClass: Get "https://10.0.0.50:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.50:6443: connect: connection refused Feb 13 15:18:16.785336 kubelet[2325]: E0213 15:18:16.785322 2325 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://10.0.0.50:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.50:6443: connect: connection refused Feb 13 15:18:16.870602 containerd[1530]: time="2025-02-13T15:18:16.870407817Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 15:18:16.870602 containerd[1530]: time="2025-02-13T15:18:16.870512381Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 15:18:16.870602 containerd[1530]: time="2025-02-13T15:18:16.870536733Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 15:18:16.870859 containerd[1530]: time="2025-02-13T15:18:16.870617625Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 15:18:16.870859 containerd[1530]: time="2025-02-13T15:18:16.870735385Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 15:18:16.870945 containerd[1530]: time="2025-02-13T15:18:16.870795404Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 15:18:16.870945 containerd[1530]: time="2025-02-13T15:18:16.870812718Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 15:18:16.871051 containerd[1530]: time="2025-02-13T15:18:16.871014568Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 15:18:16.871732 containerd[1530]: time="2025-02-13T15:18:16.870539292Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 15:18:16.872011 containerd[1530]: time="2025-02-13T15:18:16.871869554Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 15:18:16.872011 containerd[1530]: time="2025-02-13T15:18:16.871886388Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 15:18:16.872528 containerd[1530]: time="2025-02-13T15:18:16.872467828Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 15:18:16.920856 kubelet[2325]: W0213 15:18:16.920672 2325 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Service: Get "https://10.0.0.50:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.0.0.50:6443: connect: connection refused Feb 13 15:18:16.920856 kubelet[2325]: E0213 15:18:16.920744 2325 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.0.0.50:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.0.0.50:6443: connect: connection refused Feb 13 15:18:16.933129 containerd[1530]: time="2025-02-13T15:18:16.932636741Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:d2a7a6ffe7973b741dbee045c49c1e53,Namespace:kube-system,Attempt:0,} returns sandbox id \"89c6889706305b8080aa67f822e9e735fd3290af3761d263a4370bb323cd65f8\"" Feb 13 15:18:16.933129 containerd[1530]: time="2025-02-13T15:18:16.932804363Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:8dd79284f50d348595750c57a6b03620,Namespace:kube-system,Attempt:0,} returns sandbox id \"ecdb0583b5c6116f5bcdabff4e4992517a3ea9651106f4d636feca72023076ff\"" Feb 13 15:18:16.933628 containerd[1530]: time="2025-02-13T15:18:16.933099622Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:34a43d8200b04e3b81251db6a65bc0ce,Namespace:kube-system,Attempt:0,} returns sandbox id \"e70b53662afc0984b938040ee23ed787f964689865217973bcc22b208d4c8005\"" Feb 13 15:18:16.934736 kubelet[2325]: E0213 15:18:16.934698 2325 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 15:18:16.934974 kubelet[2325]: E0213 15:18:16.934840 2325 dns.go:153] "Nameserver limits exceeded" err="Nameserver 
limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 15:18:16.934974 kubelet[2325]: E0213 15:18:16.934867 2325 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 15:18:16.938407 containerd[1530]: time="2025-02-13T15:18:16.938229695Z" level=info msg="CreateContainer within sandbox \"ecdb0583b5c6116f5bcdabff4e4992517a3ea9651106f4d636feca72023076ff\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Feb 13 15:18:16.938407 containerd[1530]: time="2025-02-13T15:18:16.938289754Z" level=info msg="CreateContainer within sandbox \"e70b53662afc0984b938040ee23ed787f964689865217973bcc22b208d4c8005\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Feb 13 15:18:16.938765 containerd[1530]: time="2025-02-13T15:18:16.938591410Z" level=info msg="CreateContainer within sandbox \"89c6889706305b8080aa67f822e9e735fd3290af3761d263a4370bb323cd65f8\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Feb 13 15:18:16.959166 containerd[1530]: time="2025-02-13T15:18:16.959115100Z" level=info msg="CreateContainer within sandbox \"e70b53662afc0984b938040ee23ed787f964689865217973bcc22b208d4c8005\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"6f68e68ba23111450b6532950848c2a533fab03eb3bc2e03716d17130a1f1f68\"" Feb 13 15:18:16.960094 containerd[1530]: time="2025-02-13T15:18:16.960064253Z" level=info msg="StartContainer for \"6f68e68ba23111450b6532950848c2a533fab03eb3bc2e03716d17130a1f1f68\"" Feb 13 15:18:16.970728 kubelet[2325]: E0213 15:18:16.970675 2325 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.50:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.50:6443: connect: connection refused" interval="1.6s" Feb 13 
15:18:16.974678 containerd[1530]: time="2025-02-13T15:18:16.974519754Z" level=info msg="CreateContainer within sandbox \"ecdb0583b5c6116f5bcdabff4e4992517a3ea9651106f4d636feca72023076ff\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"e621e7246775200b31a455219f9017f2552ea34e8ce807057cb5a3bd251427c8\"" Feb 13 15:18:16.975519 containerd[1530]: time="2025-02-13T15:18:16.975486940Z" level=info msg="StartContainer for \"e621e7246775200b31a455219f9017f2552ea34e8ce807057cb5a3bd251427c8\"" Feb 13 15:18:16.978492 containerd[1530]: time="2025-02-13T15:18:16.978416091Z" level=info msg="CreateContainer within sandbox \"89c6889706305b8080aa67f822e9e735fd3290af3761d263a4370bb323cd65f8\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"ae9f5047666e1a98617e2d05333f88b84a14d7835cb8565f112df258732a68fb\"" Feb 13 15:18:16.981861 containerd[1530]: time="2025-02-13T15:18:16.980151893Z" level=info msg="StartContainer for \"ae9f5047666e1a98617e2d05333f88b84a14d7835cb8565f112df258732a68fb\"" Feb 13 15:18:17.026655 containerd[1530]: time="2025-02-13T15:18:17.026600395Z" level=info msg="StartContainer for \"6f68e68ba23111450b6532950848c2a533fab03eb3bc2e03716d17130a1f1f68\" returns successfully" Feb 13 15:18:17.033647 kubelet[2325]: W0213 15:18:17.031079 2325 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Node: Get "https://10.0.0.50:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.50:6443: connect: connection refused Feb 13 15:18:17.033647 kubelet[2325]: E0213 15:18:17.031144 2325 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://10.0.0.50:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.50:6443: connect: connection refused Feb 13 15:18:17.059592 kubelet[2325]: W0213 15:18:17.059529 2325 
reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.CSIDriver: Get "https://10.0.0.50:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.50:6443: connect: connection refused Feb 13 15:18:17.060054 kubelet[2325]: E0213 15:18:17.059870 2325 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://10.0.0.50:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.50:6443: connect: connection refused Feb 13 15:18:17.074212 kubelet[2325]: I0213 15:18:17.074097 2325 kubelet_node_status.go:73] "Attempting to register node" node="localhost" Feb 13 15:18:17.074715 kubelet[2325]: E0213 15:18:17.074668 2325 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.0.0.50:6443/api/v1/nodes\": dial tcp 10.0.0.50:6443: connect: connection refused" node="localhost" Feb 13 15:18:17.083048 containerd[1530]: time="2025-02-13T15:18:17.083005940Z" level=info msg="StartContainer for \"ae9f5047666e1a98617e2d05333f88b84a14d7835cb8565f112df258732a68fb\" returns successfully" Feb 13 15:18:17.083449 containerd[1530]: time="2025-02-13T15:18:17.083181443Z" level=info msg="StartContainer for \"e621e7246775200b31a455219f9017f2552ea34e8ce807057cb5a3bd251427c8\" returns successfully" Feb 13 15:18:17.593761 kubelet[2325]: E0213 15:18:17.593641 2325 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 15:18:17.597141 kubelet[2325]: E0213 15:18:17.597007 2325 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 15:18:17.599768 kubelet[2325]: E0213 15:18:17.599692 2325 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were 
exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 15:18:18.607466 kubelet[2325]: E0213 15:18:18.603569 2325 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 15:18:18.677207 kubelet[2325]: I0213 15:18:18.677176 2325 kubelet_node_status.go:73] "Attempting to register node" node="localhost" Feb 13 15:18:18.901554 kubelet[2325]: E0213 15:18:18.901423 2325 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 15:18:19.219341 kubelet[2325]: E0213 15:18:19.219239 2325 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"localhost\" not found" node="localhost" Feb 13 15:18:19.269663 kubelet[2325]: I0213 15:18:19.269624 2325 kubelet_node_status.go:76] "Successfully registered node" node="localhost" Feb 13 15:18:19.286709 kubelet[2325]: E0213 15:18:19.286673 2325 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" Feb 13 15:18:19.387567 kubelet[2325]: E0213 15:18:19.387521 2325 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" Feb 13 15:18:19.474320 kubelet[2325]: E0213 15:18:19.474215 2325 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 15:18:19.487621 kubelet[2325]: E0213 15:18:19.487588 2325 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" Feb 13 15:18:19.588633 kubelet[2325]: E0213 15:18:19.588592 2325 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" Feb 13 
15:18:19.689173 kubelet[2325]: E0213 15:18:19.689126 2325 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" Feb 13 15:18:19.790270 kubelet[2325]: E0213 15:18:19.790151 2325 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" Feb 13 15:18:19.890853 kubelet[2325]: E0213 15:18:19.890810 2325 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" Feb 13 15:18:20.557662 kubelet[2325]: I0213 15:18:20.557601 2325 apiserver.go:52] "Watching apiserver" Feb 13 15:18:20.567644 kubelet[2325]: I0213 15:18:20.567605 2325 desired_state_of_world_populator.go:159] "Finished populating initial desired state of world" Feb 13 15:18:21.926164 systemd[1]: Reloading requested from client PID 2607 ('systemctl') (unit session-5.scope)... Feb 13 15:18:21.926178 systemd[1]: Reloading... Feb 13 15:18:21.985453 zram_generator::config[2649]: No configuration found. Feb 13 15:18:22.146986 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Feb 13 15:18:22.202169 systemd[1]: Reloading finished in 275 ms. Feb 13 15:18:22.231941 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Feb 13 15:18:22.240554 systemd[1]: kubelet.service: Deactivated successfully. Feb 13 15:18:22.240865 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Feb 13 15:18:22.250632 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Feb 13 15:18:22.334957 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Feb 13 15:18:22.338872 (kubelet)[2698]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Feb 13 15:18:22.383479 kubelet[2698]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Feb 13 15:18:22.383479 kubelet[2698]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Feb 13 15:18:22.383479 kubelet[2698]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Feb 13 15:18:22.383827 kubelet[2698]: I0213 15:18:22.383515 2698 server.go:204] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Feb 13 15:18:22.387332 kubelet[2698]: I0213 15:18:22.387271 2698 server.go:487] "Kubelet version" kubeletVersion="v1.29.2" Feb 13 15:18:22.387332 kubelet[2698]: I0213 15:18:22.387298 2698 server.go:489] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Feb 13 15:18:22.387562 kubelet[2698]: I0213 15:18:22.387479 2698 server.go:919] "Client rotation is on, will bootstrap in background" Feb 13 15:18:22.389303 kubelet[2698]: I0213 15:18:22.389268 2698 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Feb 13 15:18:22.391118 kubelet[2698]: I0213 15:18:22.391030 2698 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Feb 13 15:18:22.398073 kubelet[2698]: I0213 15:18:22.398054 2698 server.go:745] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Feb 13 15:18:22.398500 kubelet[2698]: I0213 15:18:22.398454 2698 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Feb 13 15:18:22.398638 kubelet[2698]: I0213 15:18:22.398616 2698 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"cgroupfs","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null} Feb 13 15:18:22.398793 kubelet[2698]: I0213 15:18:22.398641 2698 topology_manager.go:138] "Creating topology manager with none policy" Feb 13 15:18:22.398793 kubelet[2698]: I0213 15:18:22.398650 2698 container_manager_linux.go:301] "Creating device plugin manager" Feb 13 15:18:22.398793 kubelet[2698]: 
I0213 15:18:22.398689 2698 state_mem.go:36] "Initialized new in-memory state store" Feb 13 15:18:22.398793 kubelet[2698]: I0213 15:18:22.398779 2698 kubelet.go:396] "Attempting to sync node with API server" Feb 13 15:18:22.398793 kubelet[2698]: I0213 15:18:22.398792 2698 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests" Feb 13 15:18:22.399002 kubelet[2698]: I0213 15:18:22.398812 2698 kubelet.go:312] "Adding apiserver pod source" Feb 13 15:18:22.399002 kubelet[2698]: I0213 15:18:22.398822 2698 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Feb 13 15:18:22.406435 kubelet[2698]: I0213 15:18:22.402769 2698 kuberuntime_manager.go:258] "Container runtime initialized" containerRuntime="containerd" version="v1.7.23" apiVersion="v1" Feb 13 15:18:22.406435 kubelet[2698]: I0213 15:18:22.402994 2698 kubelet.go:809] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Feb 13 15:18:22.406435 kubelet[2698]: I0213 15:18:22.403374 2698 server.go:1256] "Started kubelet" Feb 13 15:18:22.406435 kubelet[2698]: I0213 15:18:22.405148 2698 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Feb 13 15:18:22.406787 kubelet[2698]: I0213 15:18:22.406764 2698 server.go:162] "Starting to listen" address="0.0.0.0" port=10250 Feb 13 15:18:22.407563 kubelet[2698]: I0213 15:18:22.407545 2698 server.go:461] "Adding debug handlers to kubelet server" Feb 13 15:18:22.408477 kubelet[2698]: I0213 15:18:22.408459 2698 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Feb 13 15:18:22.408633 kubelet[2698]: I0213 15:18:22.408619 2698 server.go:233] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Feb 13 15:18:22.409803 kubelet[2698]: I0213 15:18:22.409786 2698 volume_manager.go:291] "Starting Kubelet Volume Manager" Feb 13 15:18:22.409902 kubelet[2698]: I0213 15:18:22.409872 2698 desired_state_of_world_populator.go:151] 
"Desired state populator starts to run" Feb 13 15:18:22.410029 kubelet[2698]: I0213 15:18:22.410017 2698 reconciler_new.go:29] "Reconciler: start to sync state" Feb 13 15:18:22.411045 kubelet[2698]: I0213 15:18:22.411023 2698 factory.go:221] Registration of the systemd container factory successfully Feb 13 15:18:22.411136 kubelet[2698]: I0213 15:18:22.411114 2698 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Feb 13 15:18:22.411780 kubelet[2698]: E0213 15:18:22.411751 2698 kubelet.go:1462] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Feb 13 15:18:22.411973 kubelet[2698]: I0213 15:18:22.411955 2698 factory.go:221] Registration of the containerd container factory successfully Feb 13 15:18:22.427536 kubelet[2698]: I0213 15:18:22.427513 2698 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Feb 13 15:18:22.430529 kubelet[2698]: I0213 15:18:22.430218 2698 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Feb 13 15:18:22.430529 kubelet[2698]: I0213 15:18:22.430235 2698 status_manager.go:217] "Starting to sync pod status with apiserver" Feb 13 15:18:22.430529 kubelet[2698]: I0213 15:18:22.430251 2698 kubelet.go:2329] "Starting kubelet main sync loop" Feb 13 15:18:22.430529 kubelet[2698]: E0213 15:18:22.430300 2698 kubelet.go:2353] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Feb 13 15:18:22.457479 kubelet[2698]: I0213 15:18:22.457164 2698 cpu_manager.go:214] "Starting CPU manager" policy="none" Feb 13 15:18:22.457574 kubelet[2698]: I0213 15:18:22.457476 2698 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Feb 13 15:18:22.457739 kubelet[2698]: I0213 15:18:22.457700 2698 state_mem.go:36] "Initialized new in-memory state store" Feb 13 15:18:22.458184 kubelet[2698]: I0213 15:18:22.458146 2698 state_mem.go:88] "Updated default CPUSet" cpuSet="" Feb 13 15:18:22.458357 kubelet[2698]: I0213 15:18:22.458263 2698 state_mem.go:96] "Updated CPUSet assignments" assignments={} Feb 13 15:18:22.458357 kubelet[2698]: I0213 15:18:22.458349 2698 policy_none.go:49] "None policy: Start" Feb 13 15:18:22.459709 kubelet[2698]: I0213 15:18:22.459686 2698 memory_manager.go:170] "Starting memorymanager" policy="None" Feb 13 15:18:22.459770 kubelet[2698]: I0213 15:18:22.459722 2698 state_mem.go:35] "Initializing new in-memory state store" Feb 13 15:18:22.459921 kubelet[2698]: I0213 15:18:22.459906 2698 state_mem.go:75] "Updated machine memory state" Feb 13 15:18:22.461917 kubelet[2698]: I0213 15:18:22.461096 2698 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Feb 13 15:18:22.461917 kubelet[2698]: I0213 15:18:22.461339 2698 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Feb 13 15:18:22.513407 kubelet[2698]: I0213 15:18:22.513368 2698 kubelet_node_status.go:73] "Attempting to register 
node" node="localhost" Feb 13 15:18:22.530435 kubelet[2698]: I0213 15:18:22.530391 2698 topology_manager.go:215] "Topology Admit Handler" podUID="d2a7a6ffe7973b741dbee045c49c1e53" podNamespace="kube-system" podName="kube-apiserver-localhost" Feb 13 15:18:22.530508 kubelet[2698]: I0213 15:18:22.530493 2698 topology_manager.go:215] "Topology Admit Handler" podUID="8dd79284f50d348595750c57a6b03620" podNamespace="kube-system" podName="kube-controller-manager-localhost" Feb 13 15:18:22.530553 kubelet[2698]: I0213 15:18:22.530545 2698 topology_manager.go:215] "Topology Admit Handler" podUID="34a43d8200b04e3b81251db6a65bc0ce" podNamespace="kube-system" podName="kube-scheduler-localhost" Feb 13 15:18:22.632729 kubelet[2698]: I0213 15:18:22.632691 2698 kubelet_node_status.go:112] "Node was previously registered" node="localhost" Feb 13 15:18:22.632869 kubelet[2698]: I0213 15:18:22.632790 2698 kubelet_node_status.go:76] "Successfully registered node" node="localhost" Feb 13 15:18:22.711222 kubelet[2698]: I0213 15:18:22.711103 2698 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/d2a7a6ffe7973b741dbee045c49c1e53-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"d2a7a6ffe7973b741dbee045c49c1e53\") " pod="kube-system/kube-apiserver-localhost" Feb 13 15:18:22.711222 kubelet[2698]: I0213 15:18:22.711185 2698 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/8dd79284f50d348595750c57a6b03620-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"8dd79284f50d348595750c57a6b03620\") " pod="kube-system/kube-controller-manager-localhost" Feb 13 15:18:22.711222 kubelet[2698]: I0213 15:18:22.711219 2698 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: 
\"kubernetes.io/host-path/8dd79284f50d348595750c57a6b03620-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"8dd79284f50d348595750c57a6b03620\") " pod="kube-system/kube-controller-manager-localhost" Feb 13 15:18:22.711353 kubelet[2698]: I0213 15:18:22.711254 2698 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/d2a7a6ffe7973b741dbee045c49c1e53-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"d2a7a6ffe7973b741dbee045c49c1e53\") " pod="kube-system/kube-apiserver-localhost" Feb 13 15:18:22.711353 kubelet[2698]: I0213 15:18:22.711276 2698 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/d2a7a6ffe7973b741dbee045c49c1e53-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"d2a7a6ffe7973b741dbee045c49c1e53\") " pod="kube-system/kube-apiserver-localhost" Feb 13 15:18:22.711353 kubelet[2698]: I0213 15:18:22.711305 2698 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/8dd79284f50d348595750c57a6b03620-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"8dd79284f50d348595750c57a6b03620\") " pod="kube-system/kube-controller-manager-localhost" Feb 13 15:18:22.711353 kubelet[2698]: I0213 15:18:22.711322 2698 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/8dd79284f50d348595750c57a6b03620-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"8dd79284f50d348595750c57a6b03620\") " pod="kube-system/kube-controller-manager-localhost" Feb 13 15:18:22.711353 kubelet[2698]: I0213 15:18:22.711341 2698 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: 
\"kubernetes.io/host-path/8dd79284f50d348595750c57a6b03620-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"8dd79284f50d348595750c57a6b03620\") " pod="kube-system/kube-controller-manager-localhost" Feb 13 15:18:22.711496 kubelet[2698]: I0213 15:18:22.711364 2698 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/34a43d8200b04e3b81251db6a65bc0ce-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"34a43d8200b04e3b81251db6a65bc0ce\") " pod="kube-system/kube-scheduler-localhost" Feb 13 15:18:22.923610 kubelet[2698]: E0213 15:18:22.923528 2698 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 15:18:22.925433 kubelet[2698]: E0213 15:18:22.925396 2698 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 15:18:22.925588 kubelet[2698]: E0213 15:18:22.925551 2698 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 15:18:23.399866 kubelet[2698]: I0213 15:18:23.399822 2698 apiserver.go:52] "Watching apiserver" Feb 13 15:18:23.410594 kubelet[2698]: I0213 15:18:23.410549 2698 desired_state_of_world_populator.go:159] "Finished populating initial desired state of world" Feb 13 15:18:23.440796 kubelet[2698]: E0213 15:18:23.440766 2698 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 15:18:23.441452 kubelet[2698]: E0213 15:18:23.441412 2698 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied 
nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 15:18:23.442477 kubelet[2698]: E0213 15:18:23.441629 2698 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 15:18:23.483455 kubelet[2698]: I0213 15:18:23.482485 2698 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=1.48241932 podStartE2EDuration="1.48241932s" podCreationTimestamp="2025-02-13 15:18:22 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-02-13 15:18:23.467400373 +0000 UTC m=+1.125081089" watchObservedRunningTime="2025-02-13 15:18:23.48241932 +0000 UTC m=+1.140100036" Feb 13 15:18:23.499085 kubelet[2698]: I0213 15:18:23.498939 2698 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=1.498898467 podStartE2EDuration="1.498898467s" podCreationTimestamp="2025-02-13 15:18:22 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-02-13 15:18:23.48460716 +0000 UTC m=+1.142287916" watchObservedRunningTime="2025-02-13 15:18:23.498898467 +0000 UTC m=+1.156579183" Feb 13 15:18:23.499085 kubelet[2698]: I0213 15:18:23.499011 2698 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=1.498996486 podStartE2EDuration="1.498996486s" podCreationTimestamp="2025-02-13 15:18:22 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-02-13 15:18:23.498857156 +0000 UTC m=+1.156537872" watchObservedRunningTime="2025-02-13 15:18:23.498996486 +0000 UTC m=+1.156677202" Feb 13 15:18:23.531491 sudo[1692]: 
pam_unix(sudo:session): session closed for user root Feb 13 15:18:23.532754 sshd[1691]: Connection closed by 10.0.0.1 port 45718 Feb 13 15:18:23.533188 sshd-session[1685]: pam_unix(sshd:session): session closed for user core Feb 13 15:18:23.536778 systemd[1]: sshd@4-10.0.0.50:22-10.0.0.1:45718.service: Deactivated successfully. Feb 13 15:18:23.538675 systemd-logind[1515]: Session 5 logged out. Waiting for processes to exit. Feb 13 15:18:23.538681 systemd[1]: session-5.scope: Deactivated successfully. Feb 13 15:18:23.540046 systemd-logind[1515]: Removed session 5. Feb 13 15:18:24.444270 kubelet[2698]: E0213 15:18:24.444234 2698 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 15:18:25.446025 kubelet[2698]: E0213 15:18:25.445699 2698 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 15:18:26.001332 kubelet[2698]: E0213 15:18:26.001295 2698 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 15:18:30.662086 kubelet[2698]: E0213 15:18:30.662016 2698 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 15:18:31.454757 kubelet[2698]: E0213 15:18:31.454721 2698 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 15:18:34.099064 kubelet[2698]: E0213 15:18:34.098979 2698 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 
15:18:34.459931 kubelet[2698]: E0213 15:18:34.459813 2698 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 15:18:36.009977 kubelet[2698]: E0213 15:18:36.009943 2698 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 15:18:36.365257 kubelet[2698]: I0213 15:18:36.365098 2698 kuberuntime_manager.go:1529] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Feb 13 15:18:36.365477 containerd[1530]: time="2025-02-13T15:18:36.365419641Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Feb 13 15:18:36.368807 kubelet[2698]: I0213 15:18:36.368780 2698 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Feb 13 15:18:36.934018 kubelet[2698]: I0213 15:18:36.933981 2698 topology_manager.go:215] "Topology Admit Handler" podUID="1febdd6f-d674-4b63-838f-8702c4c70432" podNamespace="kube-system" podName="kube-proxy-gpsrh" Feb 13 15:18:36.941160 kubelet[2698]: I0213 15:18:36.940648 2698 topology_manager.go:215] "Topology Admit Handler" podUID="c27b2f6e-f7c1-497a-b61a-7d77d00828be" podNamespace="kube-flannel" podName="kube-flannel-ds-89k6z" Feb 13 15:18:37.107490 kubelet[2698]: I0213 15:18:37.107389 2698 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flannel-cfg\" (UniqueName: \"kubernetes.io/configmap/c27b2f6e-f7c1-497a-b61a-7d77d00828be-flannel-cfg\") pod \"kube-flannel-ds-89k6z\" (UID: \"c27b2f6e-f7c1-497a-b61a-7d77d00828be\") " pod="kube-flannel/kube-flannel-ds-89k6z" Feb 13 15:18:37.107490 kubelet[2698]: I0213 15:18:37.107455 2698 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nxnpb\" (UniqueName: 
\"kubernetes.io/projected/c27b2f6e-f7c1-497a-b61a-7d77d00828be-kube-api-access-nxnpb\") pod \"kube-flannel-ds-89k6z\" (UID: \"c27b2f6e-f7c1-497a-b61a-7d77d00828be\") " pod="kube-flannel/kube-flannel-ds-89k6z" Feb 13 15:18:37.107490 kubelet[2698]: I0213 15:18:37.107483 2698 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/1febdd6f-d674-4b63-838f-8702c4c70432-kube-proxy\") pod \"kube-proxy-gpsrh\" (UID: \"1febdd6f-d674-4b63-838f-8702c4c70432\") " pod="kube-system/kube-proxy-gpsrh" Feb 13 15:18:37.107490 kubelet[2698]: I0213 15:18:37.107506 2698 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/1febdd6f-d674-4b63-838f-8702c4c70432-xtables-lock\") pod \"kube-proxy-gpsrh\" (UID: \"1febdd6f-d674-4b63-838f-8702c4c70432\") " pod="kube-system/kube-proxy-gpsrh" Feb 13 15:18:37.107950 kubelet[2698]: I0213 15:18:37.107535 2698 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/1febdd6f-d674-4b63-838f-8702c4c70432-lib-modules\") pod \"kube-proxy-gpsrh\" (UID: \"1febdd6f-d674-4b63-838f-8702c4c70432\") " pod="kube-system/kube-proxy-gpsrh" Feb 13 15:18:37.107950 kubelet[2698]: I0213 15:18:37.107562 2698 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4wxrb\" (UniqueName: \"kubernetes.io/projected/1febdd6f-d674-4b63-838f-8702c4c70432-kube-api-access-4wxrb\") pod \"kube-proxy-gpsrh\" (UID: \"1febdd6f-d674-4b63-838f-8702c4c70432\") " pod="kube-system/kube-proxy-gpsrh" Feb 13 15:18:37.107950 kubelet[2698]: I0213 15:18:37.107591 2698 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run\" (UniqueName: \"kubernetes.io/host-path/c27b2f6e-f7c1-497a-b61a-7d77d00828be-run\") pod 
\"kube-flannel-ds-89k6z\" (UID: \"c27b2f6e-f7c1-497a-b61a-7d77d00828be\") " pod="kube-flannel/kube-flannel-ds-89k6z" Feb 13 15:18:37.107950 kubelet[2698]: I0213 15:18:37.107609 2698 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-plugin\" (UniqueName: \"kubernetes.io/host-path/c27b2f6e-f7c1-497a-b61a-7d77d00828be-cni-plugin\") pod \"kube-flannel-ds-89k6z\" (UID: \"c27b2f6e-f7c1-497a-b61a-7d77d00828be\") " pod="kube-flannel/kube-flannel-ds-89k6z" Feb 13 15:18:37.107950 kubelet[2698]: I0213 15:18:37.107626 2698 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni\" (UniqueName: \"kubernetes.io/host-path/c27b2f6e-f7c1-497a-b61a-7d77d00828be-cni\") pod \"kube-flannel-ds-89k6z\" (UID: \"c27b2f6e-f7c1-497a-b61a-7d77d00828be\") " pod="kube-flannel/kube-flannel-ds-89k6z" Feb 13 15:18:37.108053 kubelet[2698]: I0213 15:18:37.107645 2698 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/c27b2f6e-f7c1-497a-b61a-7d77d00828be-xtables-lock\") pod \"kube-flannel-ds-89k6z\" (UID: \"c27b2f6e-f7c1-497a-b61a-7d77d00828be\") " pod="kube-flannel/kube-flannel-ds-89k6z" Feb 13 15:18:37.216787 kubelet[2698]: E0213 15:18:37.216685 2698 projected.go:294] Couldn't get configMap kube-flannel/kube-root-ca.crt: configmap "kube-root-ca.crt" not found Feb 13 15:18:37.216787 kubelet[2698]: E0213 15:18:37.216717 2698 projected.go:200] Error preparing data for projected volume kube-api-access-nxnpb for pod kube-flannel/kube-flannel-ds-89k6z: configmap "kube-root-ca.crt" not found Feb 13 15:18:37.216787 kubelet[2698]: E0213 15:18:37.216774 2698 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/c27b2f6e-f7c1-497a-b61a-7d77d00828be-kube-api-access-nxnpb podName:c27b2f6e-f7c1-497a-b61a-7d77d00828be nodeName:}" failed. 
No retries permitted until 2025-02-13 15:18:37.716754515 +0000 UTC m=+15.374435231 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-nxnpb" (UniqueName: "kubernetes.io/projected/c27b2f6e-f7c1-497a-b61a-7d77d00828be-kube-api-access-nxnpb") pod "kube-flannel-ds-89k6z" (UID: "c27b2f6e-f7c1-497a-b61a-7d77d00828be") : configmap "kube-root-ca.crt" not found Feb 13 15:18:37.217365 kubelet[2698]: E0213 15:18:37.217348 2698 projected.go:294] Couldn't get configMap kube-system/kube-root-ca.crt: configmap "kube-root-ca.crt" not found Feb 13 15:18:37.217398 kubelet[2698]: E0213 15:18:37.217374 2698 projected.go:200] Error preparing data for projected volume kube-api-access-4wxrb for pod kube-system/kube-proxy-gpsrh: configmap "kube-root-ca.crt" not found Feb 13 15:18:37.217459 kubelet[2698]: E0213 15:18:37.217440 2698 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/1febdd6f-d674-4b63-838f-8702c4c70432-kube-api-access-4wxrb podName:1febdd6f-d674-4b63-838f-8702c4c70432 nodeName:}" failed. No retries permitted until 2025-02-13 15:18:37.717408977 +0000 UTC m=+15.375089693 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "kube-api-access-4wxrb" (UniqueName: "kubernetes.io/projected/1febdd6f-d674-4b63-838f-8702c4c70432-kube-api-access-4wxrb") pod "kube-proxy-gpsrh" (UID: "1febdd6f-d674-4b63-838f-8702c4c70432") : configmap "kube-root-ca.crt" not found Feb 13 15:18:37.839420 kubelet[2698]: E0213 15:18:37.836807 2698 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 15:18:37.839590 containerd[1530]: time="2025-02-13T15:18:37.838760757Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-gpsrh,Uid:1febdd6f-d674-4b63-838f-8702c4c70432,Namespace:kube-system,Attempt:0,}" Feb 13 15:18:37.848283 kubelet[2698]: E0213 15:18:37.847973 2698 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 15:18:37.848989 containerd[1530]: time="2025-02-13T15:18:37.848940733Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-flannel-ds-89k6z,Uid:c27b2f6e-f7c1-497a-b61a-7d77d00828be,Namespace:kube-flannel,Attempt:0,}" Feb 13 15:18:38.105378 containerd[1530]: time="2025-02-13T15:18:38.105024726Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 15:18:38.105378 containerd[1530]: time="2025-02-13T15:18:38.105088681Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 15:18:38.105378 containerd[1530]: time="2025-02-13T15:18:38.105115279Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 15:18:38.105378 containerd[1530]: time="2025-02-13T15:18:38.105276065Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 15:18:38.113588 update_engine[1524]: I20250213 15:18:38.113534 1524 update_attempter.cc:509] Updating boot flags... Feb 13 15:18:38.141801 containerd[1530]: time="2025-02-13T15:18:38.141708992Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 15:18:38.141801 containerd[1530]: time="2025-02-13T15:18:38.141759188Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 15:18:38.141801 containerd[1530]: time="2025-02-13T15:18:38.141769707Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 15:18:38.142577 containerd[1530]: time="2025-02-13T15:18:38.141839581Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 15:18:38.152267 containerd[1530]: time="2025-02-13T15:18:38.152204718Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-gpsrh,Uid:1febdd6f-d674-4b63-838f-8702c4c70432,Namespace:kube-system,Attempt:0,} returns sandbox id \"11f0aa0afed9afe200349111a926902d370b7b33cef6ea3a6b3a3e4faeae3d69\"" Feb 13 15:18:38.153347 kubelet[2698]: E0213 15:18:38.153323 2698 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 15:18:38.157138 containerd[1530]: time="2025-02-13T15:18:38.157100271Z" level=info msg="CreateContainer within sandbox \"11f0aa0afed9afe200349111a926902d370b7b33cef6ea3a6b3a3e4faeae3d69\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Feb 13 15:18:38.196664 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 41 scanned by (udev-worker) (2846) Feb 13 15:18:38.203668 containerd[1530]: 
time="2025-02-13T15:18:38.203620438Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-flannel-ds-89k6z,Uid:c27b2f6e-f7c1-497a-b61a-7d77d00828be,Namespace:kube-flannel,Attempt:0,} returns sandbox id \"656a0dd47bbcfd292ca7ed939dd0fae97d8ce57d64b05dc14ca4c4333006d562\"" Feb 13 15:18:38.204444 kubelet[2698]: E0213 15:18:38.204402 2698 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 15:18:38.205591 containerd[1530]: time="2025-02-13T15:18:38.205525439Z" level=info msg="PullImage \"docker.io/flannel/flannel-cni-plugin:v1.1.2\"" Feb 13 15:18:38.270707 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 41 scanned by (udev-worker) (2846) Feb 13 15:18:38.297463 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 41 scanned by (udev-worker) (2846) Feb 13 15:18:38.326622 containerd[1530]: time="2025-02-13T15:18:38.326579361Z" level=info msg="CreateContainer within sandbox \"11f0aa0afed9afe200349111a926902d370b7b33cef6ea3a6b3a3e4faeae3d69\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"7972baa7763a860e155f69399d2328c932920c6d2380626f7423aa711a7aa45e\"" Feb 13 15:18:38.327200 containerd[1530]: time="2025-02-13T15:18:38.327159912Z" level=info msg="StartContainer for \"7972baa7763a860e155f69399d2328c932920c6d2380626f7423aa711a7aa45e\"" Feb 13 15:18:38.387109 containerd[1530]: time="2025-02-13T15:18:38.386996931Z" level=info msg="StartContainer for \"7972baa7763a860e155f69399d2328c932920c6d2380626f7423aa711a7aa45e\" returns successfully" Feb 13 15:18:38.473401 kubelet[2698]: E0213 15:18:38.469120 2698 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 15:18:38.489968 kubelet[2698]: I0213 15:18:38.489928 2698 pod_startup_latency_tracker.go:102] 
"Observed pod startup duration" pod="kube-system/kube-proxy-gpsrh" podStartSLOduration=2.489883925 podStartE2EDuration="2.489883925s" podCreationTimestamp="2025-02-13 15:18:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-02-13 15:18:38.488161548 +0000 UTC m=+16.145842224" watchObservedRunningTime="2025-02-13 15:18:38.489883925 +0000 UTC m=+16.147564641" Feb 13 15:18:39.572955 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2099951187.mount: Deactivated successfully. Feb 13 15:18:39.597072 containerd[1530]: time="2025-02-13T15:18:39.597022252Z" level=info msg="ImageCreate event name:\"docker.io/flannel/flannel-cni-plugin:v1.1.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 15:18:39.597983 containerd[1530]: time="2025-02-13T15:18:39.597478536Z" level=info msg="stop pulling image docker.io/flannel/flannel-cni-plugin:v1.1.2: active requests=0, bytes read=3673532" Feb 13 15:18:39.598506 containerd[1530]: time="2025-02-13T15:18:39.598478458Z" level=info msg="ImageCreate event name:\"sha256:b45062ceea496fc421523388cb91166abc7715a15c2e2cbab4e6f8c9d5dc0ab8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 15:18:39.601219 containerd[1530]: time="2025-02-13T15:18:39.600833394Z" level=info msg="ImageCreate event name:\"docker.io/flannel/flannel-cni-plugin@sha256:bf4b62b131666d040f35a327d906ee5a3418280b68a88d9b9c7e828057210443\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 15:18:39.601946 containerd[1530]: time="2025-02-13T15:18:39.601813358Z" level=info msg="Pulled image \"docker.io/flannel/flannel-cni-plugin:v1.1.2\" with image id \"sha256:b45062ceea496fc421523388cb91166abc7715a15c2e2cbab4e6f8c9d5dc0ab8\", repo tag \"docker.io/flannel/flannel-cni-plugin:v1.1.2\", repo digest \"docker.io/flannel/flannel-cni-plugin@sha256:bf4b62b131666d040f35a327d906ee5a3418280b68a88d9b9c7e828057210443\", size \"3662650\" in 
1.396215405s" Feb 13 15:18:39.601946 containerd[1530]: time="2025-02-13T15:18:39.601850795Z" level=info msg="PullImage \"docker.io/flannel/flannel-cni-plugin:v1.1.2\" returns image reference \"sha256:b45062ceea496fc421523388cb91166abc7715a15c2e2cbab4e6f8c9d5dc0ab8\"" Feb 13 15:18:39.603663 containerd[1530]: time="2025-02-13T15:18:39.603499786Z" level=info msg="CreateContainer within sandbox \"656a0dd47bbcfd292ca7ed939dd0fae97d8ce57d64b05dc14ca4c4333006d562\" for container &ContainerMetadata{Name:install-cni-plugin,Attempt:0,}" Feb 13 15:18:39.612417 containerd[1530]: time="2025-02-13T15:18:39.611922969Z" level=info msg="CreateContainer within sandbox \"656a0dd47bbcfd292ca7ed939dd0fae97d8ce57d64b05dc14ca4c4333006d562\" for &ContainerMetadata{Name:install-cni-plugin,Attempt:0,} returns container id \"d16dde555d788ddc38de663b8c3bce7d31ec23e50aa468d8b101bd576b8ab4fe\"" Feb 13 15:18:39.612617 containerd[1530]: time="2025-02-13T15:18:39.612583357Z" level=info msg="StartContainer for \"d16dde555d788ddc38de663b8c3bce7d31ec23e50aa468d8b101bd576b8ab4fe\"" Feb 13 15:18:39.613826 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount381739893.mount: Deactivated successfully. 
Feb 13 15:18:39.658955 containerd[1530]: time="2025-02-13T15:18:39.658901702Z" level=info msg="StartContainer for \"d16dde555d788ddc38de663b8c3bce7d31ec23e50aa468d8b101bd576b8ab4fe\" returns successfully" Feb 13 15:18:39.704843 containerd[1530]: time="2025-02-13T15:18:39.698883501Z" level=info msg="shim disconnected" id=d16dde555d788ddc38de663b8c3bce7d31ec23e50aa468d8b101bd576b8ab4fe namespace=k8s.io Feb 13 15:18:39.704843 containerd[1530]: time="2025-02-13T15:18:39.704839476Z" level=warning msg="cleaning up after shim disconnected" id=d16dde555d788ddc38de663b8c3bce7d31ec23e50aa468d8b101bd576b8ab4fe namespace=k8s.io Feb 13 15:18:39.704843 containerd[1530]: time="2025-02-13T15:18:39.704852235Z" level=info msg="cleaning up dead shim" namespace=k8s.io Feb 13 15:18:40.476746 kubelet[2698]: E0213 15:18:40.476706 2698 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 15:18:40.477839 containerd[1530]: time="2025-02-13T15:18:40.477804388Z" level=info msg="PullImage \"docker.io/flannel/flannel:v0.22.0\"" Feb 13 15:18:41.818248 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4022668325.mount: Deactivated successfully. 
Feb 13 15:18:42.410855 containerd[1530]: time="2025-02-13T15:18:42.410797507Z" level=info msg="ImageCreate event name:\"docker.io/flannel/flannel:v0.22.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 15:18:42.412039 containerd[1530]: time="2025-02-13T15:18:42.411968831Z" level=info msg="stop pulling image docker.io/flannel/flannel:v0.22.0: active requests=0, bytes read=26874261" Feb 13 15:18:42.412944 containerd[1530]: time="2025-02-13T15:18:42.412900492Z" level=info msg="ImageCreate event name:\"sha256:b3d1319ea6da12d4a1dd21a923f6a71f942a7b6e2c4763b8a3cca0725fb8aadf\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 15:18:42.416313 containerd[1530]: time="2025-02-13T15:18:42.416269395Z" level=info msg="ImageCreate event name:\"docker.io/flannel/flannel@sha256:5f83f1243057458e27249157394e3859cf31cc075354af150d497f2ebc8b54db\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 15:18:42.417589 containerd[1530]: time="2025-02-13T15:18:42.417551672Z" level=info msg="Pulled image \"docker.io/flannel/flannel:v0.22.0\" with image id \"sha256:b3d1319ea6da12d4a1dd21a923f6a71f942a7b6e2c4763b8a3cca0725fb8aadf\", repo tag \"docker.io/flannel/flannel:v0.22.0\", repo digest \"docker.io/flannel/flannel@sha256:5f83f1243057458e27249157394e3859cf31cc075354af150d497f2ebc8b54db\", size \"26863435\" in 1.939703888s" Feb 13 15:18:42.417589 containerd[1530]: time="2025-02-13T15:18:42.417586990Z" level=info msg="PullImage \"docker.io/flannel/flannel:v0.22.0\" returns image reference \"sha256:b3d1319ea6da12d4a1dd21a923f6a71f942a7b6e2c4763b8a3cca0725fb8aadf\"" Feb 13 15:18:42.436285 containerd[1530]: time="2025-02-13T15:18:42.436108719Z" level=info msg="CreateContainer within sandbox \"656a0dd47bbcfd292ca7ed939dd0fae97d8ce57d64b05dc14ca4c4333006d562\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Feb 13 15:18:42.451772 containerd[1530]: time="2025-02-13T15:18:42.451724435Z" level=info msg="CreateContainer within 
sandbox \"656a0dd47bbcfd292ca7ed939dd0fae97d8ce57d64b05dc14ca4c4333006d562\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"7be7b87cd03351770d6ad011c3fcecba2c05c8f75e71191708516f57a25cacd0\"" Feb 13 15:18:42.452236 containerd[1530]: time="2025-02-13T15:18:42.452206604Z" level=info msg="StartContainer for \"7be7b87cd03351770d6ad011c3fcecba2c05c8f75e71191708516f57a25cacd0\"" Feb 13 15:18:42.522737 containerd[1530]: time="2025-02-13T15:18:42.522678312Z" level=info msg="StartContainer for \"7be7b87cd03351770d6ad011c3fcecba2c05c8f75e71191708516f57a25cacd0\" returns successfully" Feb 13 15:18:42.542387 containerd[1530]: time="2025-02-13T15:18:42.542303489Z" level=info msg="shim disconnected" id=7be7b87cd03351770d6ad011c3fcecba2c05c8f75e71191708516f57a25cacd0 namespace=k8s.io Feb 13 15:18:42.542387 containerd[1530]: time="2025-02-13T15:18:42.542398643Z" level=warning msg="cleaning up after shim disconnected" id=7be7b87cd03351770d6ad011c3fcecba2c05c8f75e71191708516f57a25cacd0 namespace=k8s.io Feb 13 15:18:42.542387 containerd[1530]: time="2025-02-13T15:18:42.542410203Z" level=info msg="cleaning up dead shim" namespace=k8s.io Feb 13 15:18:42.551789 containerd[1530]: time="2025-02-13T15:18:42.551740922Z" level=warning msg="cleanup warnings time=\"2025-02-13T15:18:42Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io Feb 13 15:18:42.627013 kubelet[2698]: I0213 15:18:42.626971 2698 kubelet_node_status.go:497] "Fast updating node status as it just became ready" Feb 13 15:18:42.661316 kubelet[2698]: I0213 15:18:42.661047 2698 topology_manager.go:215] "Topology Admit Handler" podUID="62df62ac-88fa-472d-bccc-430f984cd97c" podNamespace="kube-system" podName="coredns-76f75df574-c4cql" Feb 13 15:18:42.661316 kubelet[2698]: I0213 15:18:42.661207 2698 topology_manager.go:215] "Topology Admit Handler" 
podUID="2dbbdd0c-4d9f-4481-8c2d-3c9a03235138" podNamespace="kube-system" podName="coredns-76f75df574-pgwlz" Feb 13 15:18:42.737507 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-7be7b87cd03351770d6ad011c3fcecba2c05c8f75e71191708516f57a25cacd0-rootfs.mount: Deactivated successfully. Feb 13 15:18:42.764157 kubelet[2698]: I0213 15:18:42.764118 2698 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/2dbbdd0c-4d9f-4481-8c2d-3c9a03235138-config-volume\") pod \"coredns-76f75df574-pgwlz\" (UID: \"2dbbdd0c-4d9f-4481-8c2d-3c9a03235138\") " pod="kube-system/coredns-76f75df574-pgwlz" Feb 13 15:18:42.764157 kubelet[2698]: I0213 15:18:42.764171 2698 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/62df62ac-88fa-472d-bccc-430f984cd97c-config-volume\") pod \"coredns-76f75df574-c4cql\" (UID: \"62df62ac-88fa-472d-bccc-430f984cd97c\") " pod="kube-system/coredns-76f75df574-c4cql" Feb 13 15:18:42.764302 kubelet[2698]: I0213 15:18:42.764195 2698 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4t5xs\" (UniqueName: \"kubernetes.io/projected/62df62ac-88fa-472d-bccc-430f984cd97c-kube-api-access-4t5xs\") pod \"coredns-76f75df574-c4cql\" (UID: \"62df62ac-88fa-472d-bccc-430f984cd97c\") " pod="kube-system/coredns-76f75df574-c4cql" Feb 13 15:18:42.764302 kubelet[2698]: I0213 15:18:42.764220 2698 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nhlwb\" (UniqueName: \"kubernetes.io/projected/2dbbdd0c-4d9f-4481-8c2d-3c9a03235138-kube-api-access-nhlwb\") pod \"coredns-76f75df574-pgwlz\" (UID: \"2dbbdd0c-4d9f-4481-8c2d-3c9a03235138\") " pod="kube-system/coredns-76f75df574-pgwlz" Feb 13 15:18:42.965520 kubelet[2698]: E0213 15:18:42.965368 2698 dns.go:153] 
"Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 15:18:42.966180 containerd[1530]: time="2025-02-13T15:18:42.966143752Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-c4cql,Uid:62df62ac-88fa-472d-bccc-430f984cd97c,Namespace:kube-system,Attempt:0,}" Feb 13 15:18:42.966271 kubelet[2698]: E0213 15:18:42.966224 2698 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 15:18:42.966969 containerd[1530]: time="2025-02-13T15:18:42.966860426Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-pgwlz,Uid:2dbbdd0c-4d9f-4481-8c2d-3c9a03235138,Namespace:kube-system,Attempt:0,}" Feb 13 15:18:43.045515 systemd[1]: run-netns-cni\x2d0a388b5d\x2d5d31\x2d3de0\x2d9f3b\x2d5aae1e8d158c.mount: Deactivated successfully. Feb 13 15:18:43.045670 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-1f468b98dee897ad0a8856f71599b729173256428719e7ccac1dd24bc8a80b51-shm.mount: Deactivated successfully. 
Feb 13 15:18:43.047457 containerd[1530]: time="2025-02-13T15:18:43.047192723Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-pgwlz,Uid:2dbbdd0c-4d9f-4481-8c2d-3c9a03235138,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"1f468b98dee897ad0a8856f71599b729173256428719e7ccac1dd24bc8a80b51\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory" Feb 13 15:18:43.047647 kubelet[2698]: E0213 15:18:43.047623 2698 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1f468b98dee897ad0a8856f71599b729173256428719e7ccac1dd24bc8a80b51\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory" Feb 13 15:18:43.047697 kubelet[2698]: E0213 15:18:43.047683 2698 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1f468b98dee897ad0a8856f71599b729173256428719e7ccac1dd24bc8a80b51\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory" pod="kube-system/coredns-76f75df574-pgwlz" Feb 13 15:18:43.047724 kubelet[2698]: E0213 15:18:43.047702 2698 kuberuntime_manager.go:1172] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1f468b98dee897ad0a8856f71599b729173256428719e7ccac1dd24bc8a80b51\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory" pod="kube-system/coredns-76f75df574-pgwlz" Feb 13 15:18:43.047781 kubelet[2698]: E0213 15:18:43.047762 2698 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for 
\"coredns-76f75df574-pgwlz_kube-system(2dbbdd0c-4d9f-4481-8c2d-3c9a03235138)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-76f75df574-pgwlz_kube-system(2dbbdd0c-4d9f-4481-8c2d-3c9a03235138)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"1f468b98dee897ad0a8856f71599b729173256428719e7ccac1dd24bc8a80b51\\\": plugin type=\\\"flannel\\\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory\"" pod="kube-system/coredns-76f75df574-pgwlz" podUID="2dbbdd0c-4d9f-4481-8c2d-3c9a03235138" Feb 13 15:18:43.050015 containerd[1530]: time="2025-02-13T15:18:43.049948477Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-c4cql,Uid:62df62ac-88fa-472d-bccc-430f984cd97c,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"7cafcfbff0ae15797b41db881ae0f2f6fb9b5723f166d27c75901128edacfbf7\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory" Feb 13 15:18:43.050213 kubelet[2698]: E0213 15:18:43.050184 2698 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7cafcfbff0ae15797b41db881ae0f2f6fb9b5723f166d27c75901128edacfbf7\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory" Feb 13 15:18:43.050309 kubelet[2698]: E0213 15:18:43.050234 2698 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7cafcfbff0ae15797b41db881ae0f2f6fb9b5723f166d27c75901128edacfbf7\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory" pod="kube-system/coredns-76f75df574-c4cql" Feb 13 15:18:43.050309 kubelet[2698]: E0213 15:18:43.050253 2698 
kuberuntime_manager.go:1172] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7cafcfbff0ae15797b41db881ae0f2f6fb9b5723f166d27c75901128edacfbf7\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory" pod="kube-system/coredns-76f75df574-c4cql" Feb 13 15:18:43.050309 kubelet[2698]: E0213 15:18:43.050295 2698 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-76f75df574-c4cql_kube-system(62df62ac-88fa-472d-bccc-430f984cd97c)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-76f75df574-c4cql_kube-system(62df62ac-88fa-472d-bccc-430f984cd97c)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"7cafcfbff0ae15797b41db881ae0f2f6fb9b5723f166d27c75901128edacfbf7\\\": plugin type=\\\"flannel\\\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory\"" pod="kube-system/coredns-76f75df574-c4cql" podUID="62df62ac-88fa-472d-bccc-430f984cd97c" Feb 13 15:18:43.495828 kubelet[2698]: E0213 15:18:43.495798 2698 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 15:18:43.498132 containerd[1530]: time="2025-02-13T15:18:43.498076858Z" level=info msg="CreateContainer within sandbox \"656a0dd47bbcfd292ca7ed939dd0fae97d8ce57d64b05dc14ca4c4333006d562\" for container &ContainerMetadata{Name:kube-flannel,Attempt:0,}" Feb 13 15:18:43.517483 containerd[1530]: time="2025-02-13T15:18:43.517027316Z" level=info msg="CreateContainer within sandbox \"656a0dd47bbcfd292ca7ed939dd0fae97d8ce57d64b05dc14ca4c4333006d562\" for &ContainerMetadata{Name:kube-flannel,Attempt:0,} returns container id \"fe9344d5ee35ecd1d2b3c5fa2911214be9ec266d9f6900219e9d96128f705752\"" Feb 13 15:18:43.522876 containerd[1530]: 
time="2025-02-13T15:18:43.522818727Z" level=info msg="StartContainer for \"fe9344d5ee35ecd1d2b3c5fa2911214be9ec266d9f6900219e9d96128f705752\"" Feb 13 15:18:43.572992 containerd[1530]: time="2025-02-13T15:18:43.572935785Z" level=info msg="StartContainer for \"fe9344d5ee35ecd1d2b3c5fa2911214be9ec266d9f6900219e9d96128f705752\" returns successfully" Feb 13 15:18:43.738397 systemd[1]: run-netns-cni\x2d51938437\x2d6089\x2de246\x2dfacf\x2dc37431655fac.mount: Deactivated successfully. Feb 13 15:18:43.738575 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-7cafcfbff0ae15797b41db881ae0f2f6fb9b5723f166d27c75901128edacfbf7-shm.mount: Deactivated successfully. Feb 13 15:18:44.500460 kubelet[2698]: E0213 15:18:44.499315 2698 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 15:18:44.699719 systemd-networkd[1223]: flannel.1: Link UP Feb 13 15:18:44.699726 systemd-networkd[1223]: flannel.1: Gained carrier Feb 13 15:18:45.500897 kubelet[2698]: E0213 15:18:45.500870 2698 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 15:18:45.809583 systemd-networkd[1223]: flannel.1: Gained IPv6LL Feb 13 15:18:46.994380 systemd[1]: Started sshd@5-10.0.0.50:22-10.0.0.1:50534.service - OpenSSH per-connection server daemon (10.0.0.1:50534). Feb 13 15:18:47.037670 sshd[3342]: Accepted publickey for core from 10.0.0.1 port 50534 ssh2: RSA SHA256:Nj7eQKGyA0WrBR5yxLJj7i9YHGzEkxm/3KYsW1FrsQ8 Feb 13 15:18:47.039115 sshd-session[3342]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 15:18:47.042935 systemd-logind[1515]: New session 6 of user core. Feb 13 15:18:47.060829 systemd[1]: Started session-6.scope - Session 6 of User core. 
Feb 13 15:18:47.185376 sshd[3345]: Connection closed by 10.0.0.1 port 50534 Feb 13 15:18:47.185833 sshd-session[3342]: pam_unix(sshd:session): session closed for user core Feb 13 15:18:47.189181 systemd[1]: sshd@5-10.0.0.50:22-10.0.0.1:50534.service: Deactivated successfully. Feb 13 15:18:47.192338 systemd-logind[1515]: Session 6 logged out. Waiting for processes to exit. Feb 13 15:18:47.193075 systemd[1]: session-6.scope: Deactivated successfully. Feb 13 15:18:47.195307 systemd-logind[1515]: Removed session 6. Feb 13 15:18:52.195681 systemd[1]: Started sshd@6-10.0.0.50:22-10.0.0.1:50544.service - OpenSSH per-connection server daemon (10.0.0.1:50544). Feb 13 15:18:52.230999 sshd[3379]: Accepted publickey for core from 10.0.0.1 port 50544 ssh2: RSA SHA256:Nj7eQKGyA0WrBR5yxLJj7i9YHGzEkxm/3KYsW1FrsQ8 Feb 13 15:18:52.232166 sshd-session[3379]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 15:18:52.236384 systemd-logind[1515]: New session 7 of user core. Feb 13 15:18:52.251714 systemd[1]: Started session-7.scope - Session 7 of User core. Feb 13 15:18:52.365847 sshd[3382]: Connection closed by 10.0.0.1 port 50544 Feb 13 15:18:52.365730 sshd-session[3379]: pam_unix(sshd:session): session closed for user core Feb 13 15:18:52.369298 systemd[1]: sshd@6-10.0.0.50:22-10.0.0.1:50544.service: Deactivated successfully. Feb 13 15:18:52.371309 systemd[1]: session-7.scope: Deactivated successfully. Feb 13 15:18:52.371341 systemd-logind[1515]: Session 7 logged out. Waiting for processes to exit. Feb 13 15:18:52.374267 systemd-logind[1515]: Removed session 7. 
Feb 13 15:18:54.431216 kubelet[2698]: E0213 15:18:54.431172 2698 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 15:18:54.432690 containerd[1530]: time="2025-02-13T15:18:54.432566331Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-c4cql,Uid:62df62ac-88fa-472d-bccc-430f984cd97c,Namespace:kube-system,Attempt:0,}" Feb 13 15:18:54.487845 systemd-networkd[1223]: cni0: Link UP Feb 13 15:18:54.487851 systemd-networkd[1223]: cni0: Gained carrier Feb 13 15:18:54.491470 systemd-networkd[1223]: cni0: Lost carrier Feb 13 15:18:54.496561 systemd-networkd[1223]: veth41a80ecf: Link UP Feb 13 15:18:54.497597 kernel: cni0: port 1(veth41a80ecf) entered blocking state Feb 13 15:18:54.497680 kernel: cni0: port 1(veth41a80ecf) entered disabled state Feb 13 15:18:54.497704 kernel: veth41a80ecf: entered allmulticast mode Feb 13 15:18:54.498578 kernel: veth41a80ecf: entered promiscuous mode Feb 13 15:18:54.499746 kernel: cni0: port 1(veth41a80ecf) entered blocking state Feb 13 15:18:54.499784 kernel: cni0: port 1(veth41a80ecf) entered forwarding state Feb 13 15:18:54.501625 kernel: cni0: port 1(veth41a80ecf) entered disabled state Feb 13 15:18:54.510667 kernel: cni0: port 1(veth41a80ecf) entered blocking state Feb 13 15:18:54.510743 kernel: cni0: port 1(veth41a80ecf) entered forwarding state Feb 13 15:18:54.510136 systemd-networkd[1223]: veth41a80ecf: Gained carrier Feb 13 15:18:54.510554 systemd-networkd[1223]: cni0: Gained carrier Feb 13 15:18:54.512961 containerd[1530]: map[string]interface {}{"cniVersion":"0.3.1", "hairpinMode":true, "ipMasq":false, "ipam":map[string]interface {}{"ranges":[][]map[string]interface {}{[]map[string]interface {}{map[string]interface {}{"subnet":"192.168.0.0/24"}}}, "routes":[]types.Route{types.Route{Dst:net.IPNet{IP:net.IP{0xc0, 0xa8, 0x0, 0x0}, Mask:net.IPMask{0xff, 0xff, 0x80, 0x0}}, 
GW:net.IP(nil)}}, "type":"host-local"}, "isDefaultGateway":true, "isGateway":true, "mtu":(*uint)(0x4000014938), "name":"cbr0", "type":"bridge"} Feb 13 15:18:54.512961 containerd[1530]: delegateAdd: netconf sent to delegate plugin: Feb 13 15:18:54.537043 containerd[1530]: {"cniVersion":"0.3.1","hairpinMode":true,"ipMasq":false,"ipam":{"ranges":[[{"subnet":"192.168.0.0/24"}]],"routes":[{"dst":"192.168.0.0/17"}],"type":"host-local"},"isDefaultGateway":true,"isGateway":true,"mtu":1450,"name":"cbr0","type":"bridge"}time="2025-02-13T15:18:54.536286616Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 15:18:54.537043 containerd[1530]: time="2025-02-13T15:18:54.536844240Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 15:18:54.537043 containerd[1530]: time="2025-02-13T15:18:54.536867319Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 15:18:54.537043 containerd[1530]: time="2025-02-13T15:18:54.536970716Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 15:18:54.559672 systemd-resolved[1429]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Feb 13 15:18:54.578043 containerd[1530]: time="2025-02-13T15:18:54.577964981Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-c4cql,Uid:62df62ac-88fa-472d-bccc-430f984cd97c,Namespace:kube-system,Attempt:0,} returns sandbox id \"1b8bb10b2219b447b2ac174b345fec18f4c461d210ba85e040ee751b9ca6d33c\"" Feb 13 15:18:54.578673 kubelet[2698]: E0213 15:18:54.578654 2698 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 15:18:54.580338 containerd[1530]: time="2025-02-13T15:18:54.580280392Z" level=info msg="CreateContainer within sandbox \"1b8bb10b2219b447b2ac174b345fec18f4c461d210ba85e040ee751b9ca6d33c\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Feb 13 15:18:54.592548 containerd[1530]: time="2025-02-13T15:18:54.592492950Z" level=info msg="CreateContainer within sandbox \"1b8bb10b2219b447b2ac174b345fec18f4c461d210ba85e040ee751b9ca6d33c\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"69b50ae3bdfcb44cc86a3e4cf276eb3da37295d31d9931525cc2fdcc297e2134\"" Feb 13 15:18:54.593154 containerd[1530]: time="2025-02-13T15:18:54.593087253Z" level=info msg="StartContainer for \"69b50ae3bdfcb44cc86a3e4cf276eb3da37295d31d9931525cc2fdcc297e2134\"" Feb 13 15:18:54.648113 containerd[1530]: time="2025-02-13T15:18:54.648062823Z" level=info msg="StartContainer for \"69b50ae3bdfcb44cc86a3e4cf276eb3da37295d31d9931525cc2fdcc297e2134\" returns successfully" Feb 13 15:18:55.522240 kubelet[2698]: E0213 15:18:55.522205 2698 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 
15:18:55.533845 kubelet[2698]: I0213 15:18:55.533644 2698 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-flannel/kube-flannel-ds-89k6z" podStartSLOduration=15.320669758 podStartE2EDuration="19.533594801s" podCreationTimestamp="2025-02-13 15:18:36 +0000 UTC" firstStartedPulling="2025-02-13 15:18:38.204892052 +0000 UTC m=+15.862572768" lastFinishedPulling="2025-02-13 15:18:42.417817095 +0000 UTC m=+20.075497811" observedRunningTime="2025-02-13 15:18:44.510177357 +0000 UTC m=+22.167858073" watchObservedRunningTime="2025-02-13 15:18:55.533594801 +0000 UTC m=+33.191275517" Feb 13 15:18:55.533845 kubelet[2698]: I0213 15:18:55.533758 2698 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/coredns-76f75df574-c4cql" podStartSLOduration=18.533740437 podStartE2EDuration="18.533740437s" podCreationTimestamp="2025-02-13 15:18:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-02-13 15:18:55.533555082 +0000 UTC m=+33.191235798" watchObservedRunningTime="2025-02-13 15:18:55.533740437 +0000 UTC m=+33.191421233" Feb 13 15:18:55.537770 systemd-networkd[1223]: cni0: Gained IPv6LL Feb 13 15:18:55.793632 systemd-networkd[1223]: veth41a80ecf: Gained IPv6LL Feb 13 15:18:56.524049 kubelet[2698]: E0213 15:18:56.523993 2698 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 15:18:57.384705 systemd[1]: Started sshd@7-10.0.0.50:22-10.0.0.1:42408.service - OpenSSH per-connection server daemon (10.0.0.1:42408). 
Feb 13 15:18:57.421637 sshd[3542]: Accepted publickey for core from 10.0.0.1 port 42408 ssh2: RSA SHA256:Nj7eQKGyA0WrBR5yxLJj7i9YHGzEkxm/3KYsW1FrsQ8 Feb 13 15:18:57.423010 sshd-session[3542]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 15:18:57.427020 systemd-logind[1515]: New session 8 of user core. Feb 13 15:18:57.439894 systemd[1]: Started session-8.scope - Session 8 of User core. Feb 13 15:18:57.554550 sshd[3545]: Connection closed by 10.0.0.1 port 42408 Feb 13 15:18:57.555062 sshd-session[3542]: pam_unix(sshd:session): session closed for user core Feb 13 15:18:57.568698 systemd[1]: Started sshd@8-10.0.0.50:22-10.0.0.1:42416.service - OpenSSH per-connection server daemon (10.0.0.1:42416). Feb 13 15:18:57.569082 systemd[1]: sshd@7-10.0.0.50:22-10.0.0.1:42408.service: Deactivated successfully. Feb 13 15:18:57.571625 systemd[1]: session-8.scope: Deactivated successfully. Feb 13 15:18:57.574274 systemd-logind[1515]: Session 8 logged out. Waiting for processes to exit. Feb 13 15:18:57.576003 systemd-logind[1515]: Removed session 8. Feb 13 15:18:57.604804 sshd[3555]: Accepted publickey for core from 10.0.0.1 port 42416 ssh2: RSA SHA256:Nj7eQKGyA0WrBR5yxLJj7i9YHGzEkxm/3KYsW1FrsQ8 Feb 13 15:18:57.606122 sshd-session[3555]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 15:18:57.611284 systemd-logind[1515]: New session 9 of user core. Feb 13 15:18:57.617732 systemd[1]: Started session-9.scope - Session 9 of User core. Feb 13 15:18:57.787762 sshd[3561]: Connection closed by 10.0.0.1 port 42416 Feb 13 15:18:57.783018 sshd-session[3555]: pam_unix(sshd:session): session closed for user core Feb 13 15:18:57.813967 systemd[1]: sshd@8-10.0.0.50:22-10.0.0.1:42416.service: Deactivated successfully. Feb 13 15:18:57.817789 systemd[1]: session-9.scope: Deactivated successfully. Feb 13 15:18:57.820340 systemd-logind[1515]: Session 9 logged out. Waiting for processes to exit. 
Feb 13 15:18:57.826728 systemd[1]: Started sshd@9-10.0.0.50:22-10.0.0.1:42418.service - OpenSSH per-connection server daemon (10.0.0.1:42418). Feb 13 15:18:57.827979 systemd-logind[1515]: Removed session 9. Feb 13 15:18:57.866939 sshd[3572]: Accepted publickey for core from 10.0.0.1 port 42418 ssh2: RSA SHA256:Nj7eQKGyA0WrBR5yxLJj7i9YHGzEkxm/3KYsW1FrsQ8 Feb 13 15:18:57.868413 sshd-session[3572]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 15:18:57.872765 systemd-logind[1515]: New session 10 of user core. Feb 13 15:18:57.879712 systemd[1]: Started session-10.scope - Session 10 of User core. Feb 13 15:18:57.991323 sshd[3575]: Connection closed by 10.0.0.1 port 42418 Feb 13 15:18:57.992692 sshd-session[3572]: pam_unix(sshd:session): session closed for user core Feb 13 15:18:57.996745 systemd[1]: sshd@9-10.0.0.50:22-10.0.0.1:42418.service: Deactivated successfully. Feb 13 15:18:57.998813 systemd-logind[1515]: Session 10 logged out. Waiting for processes to exit. Feb 13 15:18:57.998837 systemd[1]: session-10.scope: Deactivated successfully. Feb 13 15:18:58.000282 systemd-logind[1515]: Removed session 10. 
Feb 13 15:18:58.431370 kubelet[2698]: E0213 15:18:58.431226 2698 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 15:18:58.431875 containerd[1530]: time="2025-02-13T15:18:58.431675046Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-pgwlz,Uid:2dbbdd0c-4d9f-4481-8c2d-3c9a03235138,Namespace:kube-system,Attempt:0,}" Feb 13 15:18:58.465089 systemd-networkd[1223]: vethb2485fe1: Link UP Feb 13 15:18:58.466703 kernel: cni0: port 2(vethb2485fe1) entered blocking state Feb 13 15:18:58.466758 kernel: cni0: port 2(vethb2485fe1) entered disabled state Feb 13 15:18:58.466780 kernel: vethb2485fe1: entered allmulticast mode Feb 13 15:18:58.466792 kernel: vethb2485fe1: entered promiscuous mode Feb 13 15:18:58.471501 kernel: cni0: port 2(vethb2485fe1) entered blocking state Feb 13 15:18:58.471576 kernel: cni0: port 2(vethb2485fe1) entered forwarding state Feb 13 15:18:58.471259 systemd-networkd[1223]: vethb2485fe1: Gained carrier Feb 13 15:18:58.474505 containerd[1530]: map[string]interface {}{"cniVersion":"0.3.1", "hairpinMode":true, "ipMasq":false, "ipam":map[string]interface {}{"ranges":[][]map[string]interface {}{[]map[string]interface {}{map[string]interface {}{"subnet":"192.168.0.0/24"}}}, "routes":[]types.Route{types.Route{Dst:net.IPNet{IP:net.IP{0xc0, 0xa8, 0x0, 0x0}, Mask:net.IPMask{0xff, 0xff, 0x80, 0x0}}, GW:net.IP(nil)}}, "type":"host-local"}, "isDefaultGateway":true, "isGateway":true, "mtu":(*uint)(0x4000014938), "name":"cbr0", "type":"bridge"} Feb 13 15:18:58.474505 containerd[1530]: delegateAdd: netconf sent to delegate plugin: Feb 13 15:18:58.491215 containerd[1530]: 
{"cniVersion":"0.3.1","hairpinMode":true,"ipMasq":false,"ipam":{"ranges":[[{"subnet":"192.168.0.0/24"}]],"routes":[{"dst":"192.168.0.0/17"}],"type":"host-local"},"isDefaultGateway":true,"isGateway":true,"mtu":1450,"name":"cbr0","type":"bridge"}time="2025-02-13T15:18:58.490916082Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 15:18:58.491215 containerd[1530]: time="2025-02-13T15:18:58.490968402Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 15:18:58.491215 containerd[1530]: time="2025-02-13T15:18:58.490979482Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 15:18:58.491215 containerd[1530]: time="2025-02-13T15:18:58.491062682Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 15:18:58.515419 systemd-resolved[1429]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Feb 13 15:18:58.534719 containerd[1530]: time="2025-02-13T15:18:58.534645325Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-pgwlz,Uid:2dbbdd0c-4d9f-4481-8c2d-3c9a03235138,Namespace:kube-system,Attempt:0,} returns sandbox id \"5d90f30222d574163142674580d81627c28f4938932785326ec147110b87cbc3\"" Feb 13 15:18:58.535738 kubelet[2698]: E0213 15:18:58.535702 2698 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 15:18:58.539733 containerd[1530]: time="2025-02-13T15:18:58.539696389Z" level=info msg="CreateContainer within sandbox \"5d90f30222d574163142674580d81627c28f4938932785326ec147110b87cbc3\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" 
Feb 13 15:18:58.555607 containerd[1530]: time="2025-02-13T15:18:58.555470182Z" level=info msg="CreateContainer within sandbox \"5d90f30222d574163142674580d81627c28f4938932785326ec147110b87cbc3\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"b3ac22a9121aaefcc8c10c6c5b38eee18249edc6b57833cb26469d7751c7a240\"" Feb 13 15:18:58.557508 containerd[1530]: time="2025-02-13T15:18:58.555991145Z" level=info msg="StartContainer for \"b3ac22a9121aaefcc8c10c6c5b38eee18249edc6b57833cb26469d7751c7a240\"" Feb 13 15:18:58.603027 containerd[1530]: time="2025-02-13T15:18:58.602984963Z" level=info msg="StartContainer for \"b3ac22a9121aaefcc8c10c6c5b38eee18249edc6b57833cb26469d7751c7a240\" returns successfully" Feb 13 15:18:59.534101 kubelet[2698]: E0213 15:18:59.533862 2698 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 15:18:59.545504 kubelet[2698]: I0213 15:18:59.544355 2698 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/coredns-76f75df574-pgwlz" podStartSLOduration=22.544317553 podStartE2EDuration="22.544317553s" podCreationTimestamp="2025-02-13 15:18:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-02-13 15:18:59.544143953 +0000 UTC m=+37.201824629" watchObservedRunningTime="2025-02-13 15:18:59.544317553 +0000 UTC m=+37.201998269" Feb 13 15:18:59.761612 systemd-networkd[1223]: vethb2485fe1: Gained IPv6LL Feb 13 15:19:00.535496 kubelet[2698]: E0213 15:19:00.535102 2698 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 15:19:01.537483 kubelet[2698]: E0213 15:19:01.537452 2698 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers 
have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 15:19:03.018748 systemd[1]: Started sshd@10-10.0.0.50:22-10.0.0.1:45526.service - OpenSSH per-connection server daemon (10.0.0.1:45526). Feb 13 15:19:03.063215 sshd[3727]: Accepted publickey for core from 10.0.0.1 port 45526 ssh2: RSA SHA256:Nj7eQKGyA0WrBR5yxLJj7i9YHGzEkxm/3KYsW1FrsQ8 Feb 13 15:19:03.064670 sshd-session[3727]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 15:19:03.070516 systemd-logind[1515]: New session 11 of user core. Feb 13 15:19:03.080792 systemd[1]: Started session-11.scope - Session 11 of User core. Feb 13 15:19:03.196905 sshd[3730]: Connection closed by 10.0.0.1 port 45526 Feb 13 15:19:03.197485 sshd-session[3727]: pam_unix(sshd:session): session closed for user core Feb 13 15:19:03.205712 systemd[1]: Started sshd@11-10.0.0.50:22-10.0.0.1:45542.service - OpenSSH per-connection server daemon (10.0.0.1:45542). Feb 13 15:19:03.206114 systemd[1]: sshd@10-10.0.0.50:22-10.0.0.1:45526.service: Deactivated successfully. Feb 13 15:19:03.209226 systemd-logind[1515]: Session 11 logged out. Waiting for processes to exit. Feb 13 15:19:03.209327 systemd[1]: session-11.scope: Deactivated successfully. Feb 13 15:19:03.210474 systemd-logind[1515]: Removed session 11. Feb 13 15:19:03.249515 sshd[3740]: Accepted publickey for core from 10.0.0.1 port 45542 ssh2: RSA SHA256:Nj7eQKGyA0WrBR5yxLJj7i9YHGzEkxm/3KYsW1FrsQ8 Feb 13 15:19:03.251044 sshd-session[3740]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 15:19:03.254883 systemd-logind[1515]: New session 12 of user core. Feb 13 15:19:03.275763 systemd[1]: Started session-12.scope - Session 12 of User core. 
Feb 13 15:19:03.524942 sshd[3746]: Connection closed by 10.0.0.1 port 45542 Feb 13 15:19:03.525634 sshd-session[3740]: pam_unix(sshd:session): session closed for user core Feb 13 15:19:03.536680 systemd[1]: Started sshd@12-10.0.0.50:22-10.0.0.1:45552.service - OpenSSH per-connection server daemon (10.0.0.1:45552). Feb 13 15:19:03.537081 systemd[1]: sshd@11-10.0.0.50:22-10.0.0.1:45542.service: Deactivated successfully. Feb 13 15:19:03.540116 systemd[1]: session-12.scope: Deactivated successfully. Feb 13 15:19:03.541332 systemd-logind[1515]: Session 12 logged out. Waiting for processes to exit. Feb 13 15:19:03.542815 systemd-logind[1515]: Removed session 12. Feb 13 15:19:03.579613 sshd[3753]: Accepted publickey for core from 10.0.0.1 port 45552 ssh2: RSA SHA256:Nj7eQKGyA0WrBR5yxLJj7i9YHGzEkxm/3KYsW1FrsQ8 Feb 13 15:19:03.580993 sshd-session[3753]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 15:19:03.585320 systemd-logind[1515]: New session 13 of user core. Feb 13 15:19:03.601756 systemd[1]: Started session-13.scope - Session 13 of User core. Feb 13 15:19:04.713285 sshd[3759]: Connection closed by 10.0.0.1 port 45552 Feb 13 15:19:04.714092 sshd-session[3753]: pam_unix(sshd:session): session closed for user core Feb 13 15:19:04.723816 systemd[1]: Started sshd@13-10.0.0.50:22-10.0.0.1:45560.service - OpenSSH per-connection server daemon (10.0.0.1:45560). Feb 13 15:19:04.724685 systemd[1]: sshd@12-10.0.0.50:22-10.0.0.1:45552.service: Deactivated successfully. Feb 13 15:19:04.726837 systemd[1]: session-13.scope: Deactivated successfully. Feb 13 15:19:04.729149 systemd-logind[1515]: Session 13 logged out. Waiting for processes to exit. Feb 13 15:19:04.732099 systemd-logind[1515]: Removed session 13. 
Feb 13 15:19:04.770743 sshd[3776]: Accepted publickey for core from 10.0.0.1 port 45560 ssh2: RSA SHA256:Nj7eQKGyA0WrBR5yxLJj7i9YHGzEkxm/3KYsW1FrsQ8 Feb 13 15:19:04.772211 sshd-session[3776]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 15:19:04.777501 systemd-logind[1515]: New session 14 of user core. Feb 13 15:19:04.786766 systemd[1]: Started session-14.scope - Session 14 of User core. Feb 13 15:19:05.041592 sshd[3783]: Connection closed by 10.0.0.1 port 45560 Feb 13 15:19:05.042338 sshd-session[3776]: pam_unix(sshd:session): session closed for user core Feb 13 15:19:05.051854 systemd[1]: Started sshd@14-10.0.0.50:22-10.0.0.1:45564.service - OpenSSH per-connection server daemon (10.0.0.1:45564). Feb 13 15:19:05.052557 systemd[1]: sshd@13-10.0.0.50:22-10.0.0.1:45560.service: Deactivated successfully. Feb 13 15:19:05.057084 systemd-logind[1515]: Session 14 logged out. Waiting for processes to exit. Feb 13 15:19:05.057956 systemd[1]: session-14.scope: Deactivated successfully. Feb 13 15:19:05.059970 systemd-logind[1515]: Removed session 14. Feb 13 15:19:05.092660 sshd[3811]: Accepted publickey for core from 10.0.0.1 port 45564 ssh2: RSA SHA256:Nj7eQKGyA0WrBR5yxLJj7i9YHGzEkxm/3KYsW1FrsQ8 Feb 13 15:19:05.094222 sshd-session[3811]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 15:19:05.099006 systemd-logind[1515]: New session 15 of user core. Feb 13 15:19:05.109828 systemd[1]: Started session-15.scope - Session 15 of User core. Feb 13 15:19:05.230632 sshd[3817]: Connection closed by 10.0.0.1 port 45564 Feb 13 15:19:05.233478 sshd-session[3811]: pam_unix(sshd:session): session closed for user core Feb 13 15:19:05.237356 systemd[1]: sshd@14-10.0.0.50:22-10.0.0.1:45564.service: Deactivated successfully. Feb 13 15:19:05.239352 systemd[1]: session-15.scope: Deactivated successfully. Feb 13 15:19:05.239359 systemd-logind[1515]: Session 15 logged out. Waiting for processes to exit. 
Feb 13 15:19:05.240700 systemd-logind[1515]: Removed session 15. Feb 13 15:19:10.246782 systemd[1]: Started sshd@15-10.0.0.50:22-10.0.0.1:45580.service - OpenSSH per-connection server daemon (10.0.0.1:45580). Feb 13 15:19:10.293883 sshd[3855]: Accepted publickey for core from 10.0.0.1 port 45580 ssh2: RSA SHA256:Nj7eQKGyA0WrBR5yxLJj7i9YHGzEkxm/3KYsW1FrsQ8 Feb 13 15:19:10.296021 sshd-session[3855]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 15:19:10.304073 systemd-logind[1515]: New session 16 of user core. Feb 13 15:19:10.311781 systemd[1]: Started session-16.scope - Session 16 of User core. Feb 13 15:19:10.426740 sshd[3858]: Connection closed by 10.0.0.1 port 45580 Feb 13 15:19:10.427096 sshd-session[3855]: pam_unix(sshd:session): session closed for user core Feb 13 15:19:10.430177 systemd[1]: sshd@15-10.0.0.50:22-10.0.0.1:45580.service: Deactivated successfully. Feb 13 15:19:10.432093 systemd-logind[1515]: Session 16 logged out. Waiting for processes to exit. Feb 13 15:19:10.432178 systemd[1]: session-16.scope: Deactivated successfully. Feb 13 15:19:10.434780 systemd-logind[1515]: Removed session 16. Feb 13 15:19:15.437681 systemd[1]: Started sshd@16-10.0.0.50:22-10.0.0.1:52334.service - OpenSSH per-connection server daemon (10.0.0.1:52334). Feb 13 15:19:15.474410 sshd[3891]: Accepted publickey for core from 10.0.0.1 port 52334 ssh2: RSA SHA256:Nj7eQKGyA0WrBR5yxLJj7i9YHGzEkxm/3KYsW1FrsQ8 Feb 13 15:19:15.473438 sshd-session[3891]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 15:19:15.478008 systemd-logind[1515]: New session 17 of user core. Feb 13 15:19:15.487718 systemd[1]: Started session-17.scope - Session 17 of User core. 
Feb 13 15:19:15.603414 sshd[3894]: Connection closed by 10.0.0.1 port 52334 Feb 13 15:19:15.603801 sshd-session[3891]: pam_unix(sshd:session): session closed for user core Feb 13 15:19:15.607679 systemd[1]: sshd@16-10.0.0.50:22-10.0.0.1:52334.service: Deactivated successfully. Feb 13 15:19:15.610096 systemd-logind[1515]: Session 17 logged out. Waiting for processes to exit. Feb 13 15:19:15.610249 systemd[1]: session-17.scope: Deactivated successfully. Feb 13 15:19:15.611699 systemd-logind[1515]: Removed session 17. Feb 13 15:19:20.619775 systemd[1]: Started sshd@17-10.0.0.50:22-10.0.0.1:52344.service - OpenSSH per-connection server daemon (10.0.0.1:52344). Feb 13 15:19:20.668272 sshd[3928]: Accepted publickey for core from 10.0.0.1 port 52344 ssh2: RSA SHA256:Nj7eQKGyA0WrBR5yxLJj7i9YHGzEkxm/3KYsW1FrsQ8 Feb 13 15:19:20.670227 sshd-session[3928]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 15:19:20.675870 systemd-logind[1515]: New session 18 of user core. Feb 13 15:19:20.689854 systemd[1]: Started session-18.scope - Session 18 of User core. Feb 13 15:19:20.815442 sshd[3931]: Connection closed by 10.0.0.1 port 52344 Feb 13 15:19:20.817278 sshd-session[3928]: pam_unix(sshd:session): session closed for user core Feb 13 15:19:20.820887 systemd-logind[1515]: Session 18 logged out. Waiting for processes to exit. Feb 13 15:19:20.821535 systemd[1]: sshd@17-10.0.0.50:22-10.0.0.1:52344.service: Deactivated successfully. Feb 13 15:19:20.824615 systemd[1]: session-18.scope: Deactivated successfully. Feb 13 15:19:20.825953 systemd-logind[1515]: Removed session 18.