Jan 13 21:06:47.899982 kernel: Booting Linux on physical CPU 0x0000000000 [0x413fd0c1]
Jan 13 21:06:47.900003 kernel: Linux version 6.6.71-flatcar (build@pony-truck.infra.kinvolk.io) (aarch64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p17) 13.3.1 20240614, GNU ld (Gentoo 2.42 p3) 2.42.0) #1 SMP PREEMPT Mon Jan 13 19:43:39 -00 2025
Jan 13 21:06:47.900012 kernel: KASLR enabled
Jan 13 21:06:47.900018 kernel: efi: EFI v2.7 by EDK II
Jan 13 21:06:47.900024 kernel: efi: SMBIOS 3.0=0xdced0000 MEMATTR=0xdba86018 ACPI 2.0=0xd9710018 RNG=0xd971e498 MEMRESERVE=0xd9b43d18
Jan 13 21:06:47.900029 kernel: random: crng init done
Jan 13 21:06:47.900036 kernel: ACPI: Early table checksum verification disabled
Jan 13 21:06:47.900042 kernel: ACPI: RSDP 0x00000000D9710018 000024 (v02 BOCHS )
Jan 13 21:06:47.900048 kernel: ACPI: XSDT 0x00000000D971FE98 000064 (v01 BOCHS BXPC 00000001 01000013)
Jan 13 21:06:47.900056 kernel: ACPI: FACP 0x00000000D971FA98 000114 (v06 BOCHS BXPC 00000001 BXPC 00000001)
Jan 13 21:06:47.900061 kernel: ACPI: DSDT 0x00000000D9717518 0014A2 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Jan 13 21:06:47.900068 kernel: ACPI: APIC 0x00000000D971FC18 0001A8 (v04 BOCHS BXPC 00000001 BXPC 00000001)
Jan 13 21:06:47.900074 kernel: ACPI: PPTT 0x00000000D971D898 00009C (v02 BOCHS BXPC 00000001 BXPC 00000001)
Jan 13 21:06:47.900081 kernel: ACPI: GTDT 0x00000000D971E818 000060 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Jan 13 21:06:47.900088 kernel: ACPI: MCFG 0x00000000D971E918 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jan 13 21:06:47.900096 kernel: ACPI: SPCR 0x00000000D971FF98 000050 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Jan 13 21:06:47.900103 kernel: ACPI: DBG2 0x00000000D971E418 000057 (v00 BOCHS BXPC 00000001 BXPC 00000001)
Jan 13 21:06:47.900109 kernel: ACPI: IORT 0x00000000D971E718 000080 (v03 BOCHS BXPC 00000001 BXPC 00000001)
Jan 13 21:06:47.900116 kernel: ACPI: SPCR: console: pl011,mmio,0x9000000,9600
Jan 13 21:06:47.900122 kernel: NUMA: Failed to initialise from firmware
Jan 13 21:06:47.900129 kernel: NUMA: Faking a node at [mem 0x0000000040000000-0x00000000dcffffff]
Jan 13 21:06:47.900135 kernel: NUMA: NODE_DATA [mem 0xdc959800-0xdc95efff]
Jan 13 21:06:47.900141 kernel: Zone ranges:
Jan 13 21:06:47.900148 kernel: DMA [mem 0x0000000040000000-0x00000000dcffffff]
Jan 13 21:06:47.900154 kernel: DMA32 empty
Jan 13 21:06:47.900162 kernel: Normal empty
Jan 13 21:06:47.900168 kernel: Movable zone start for each node
Jan 13 21:06:47.900251 kernel: Early memory node ranges
Jan 13 21:06:47.900260 kernel: node 0: [mem 0x0000000040000000-0x00000000d976ffff]
Jan 13 21:06:47.900267 kernel: node 0: [mem 0x00000000d9770000-0x00000000d9b3ffff]
Jan 13 21:06:47.900273 kernel: node 0: [mem 0x00000000d9b40000-0x00000000dce1ffff]
Jan 13 21:06:47.900279 kernel: node 0: [mem 0x00000000dce20000-0x00000000dceaffff]
Jan 13 21:06:47.900285 kernel: node 0: [mem 0x00000000dceb0000-0x00000000dcebffff]
Jan 13 21:06:47.900292 kernel: node 0: [mem 0x00000000dcec0000-0x00000000dcfdffff]
Jan 13 21:06:47.900298 kernel: node 0: [mem 0x00000000dcfe0000-0x00000000dcffffff]
Jan 13 21:06:47.900304 kernel: Initmem setup node 0 [mem 0x0000000040000000-0x00000000dcffffff]
Jan 13 21:06:47.900310 kernel: On node 0, zone DMA: 12288 pages in unavailable ranges
Jan 13 21:06:47.900320 kernel: psci: probing for conduit method from ACPI.
Jan 13 21:06:47.900326 kernel: psci: PSCIv1.1 detected in firmware.
Jan 13 21:06:47.900339 kernel: psci: Using standard PSCI v0.2 function IDs
Jan 13 21:06:47.900362 kernel: psci: Trusted OS migration not required
Jan 13 21:06:47.900369 kernel: psci: SMC Calling Convention v1.1
Jan 13 21:06:47.900376 kernel: smccc: KVM: hypervisor services detected (0x00000000 0x00000000 0x00000000 0x00000003)
Jan 13 21:06:47.900384 kernel: percpu: Embedded 31 pages/cpu s86696 r8192 d32088 u126976
Jan 13 21:06:47.900390 kernel: pcpu-alloc: s86696 r8192 d32088 u126976 alloc=31*4096
Jan 13 21:06:47.900397 kernel: pcpu-alloc: [0] 0 [0] 1 [0] 2 [0] 3
Jan 13 21:06:47.900404 kernel: Detected PIPT I-cache on CPU0
Jan 13 21:06:47.900410 kernel: CPU features: detected: GIC system register CPU interface
Jan 13 21:06:47.900417 kernel: CPU features: detected: Hardware dirty bit management
Jan 13 21:06:47.900424 kernel: CPU features: detected: Spectre-v4
Jan 13 21:06:47.900430 kernel: CPU features: detected: Spectre-BHB
Jan 13 21:06:47.900437 kernel: CPU features: kernel page table isolation forced ON by KASLR
Jan 13 21:06:47.900444 kernel: CPU features: detected: Kernel page table isolation (KPTI)
Jan 13 21:06:47.900452 kernel: CPU features: detected: ARM erratum 1418040
Jan 13 21:06:47.900459 kernel: CPU features: detected: SSBS not fully self-synchronizing
Jan 13 21:06:47.900465 kernel: alternatives: applying boot alternatives
Jan 13 21:06:47.900473 kernel: Kernel command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected acpi=force verity.usrhash=c6a3a48cbc65bf640516dc59d6b026e304001b7b3125ecbabbbe9ce0bd8888f0
Jan 13 21:06:47.900480 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Jan 13 21:06:47.900487 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Jan 13 21:06:47.900494 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Jan 13 21:06:47.900500 kernel: Fallback order for Node 0: 0
Jan 13 21:06:47.900507 kernel: Built 1 zonelists, mobility grouping on. Total pages: 633024
Jan 13 21:06:47.900513 kernel: Policy zone: DMA
Jan 13 21:06:47.900520 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Jan 13 21:06:47.900528 kernel: software IO TLB: area num 4.
Jan 13 21:06:47.900535 kernel: software IO TLB: mapped [mem 0x00000000d2e00000-0x00000000d6e00000] (64MB)
Jan 13 21:06:47.900542 kernel: Memory: 2386536K/2572288K available (10240K kernel code, 2184K rwdata, 8096K rodata, 39360K init, 897K bss, 185752K reserved, 0K cma-reserved)
Jan 13 21:06:47.900549 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1
Jan 13 21:06:47.900556 kernel: rcu: Preemptible hierarchical RCU implementation.
Jan 13 21:06:47.900563 kernel: rcu: RCU event tracing is enabled.
Jan 13 21:06:47.900569 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4.
Jan 13 21:06:47.900576 kernel: Trampoline variant of Tasks RCU enabled.
Jan 13 21:06:47.900583 kernel: Tracing variant of Tasks RCU enabled.
Jan 13 21:06:47.900590 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Jan 13 21:06:47.900596 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4
Jan 13 21:06:47.900603 kernel: NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0
Jan 13 21:06:47.900611 kernel: GICv3: 256 SPIs implemented
Jan 13 21:06:47.900617 kernel: GICv3: 0 Extended SPIs implemented
Jan 13 21:06:47.900624 kernel: Root IRQ handler: gic_handle_irq
Jan 13 21:06:47.900631 kernel: GICv3: GICv3 features: 16 PPIs, DirectLPI
Jan 13 21:06:47.900637 kernel: GICv3: CPU0: found redistributor 0 region 0:0x00000000080a0000
Jan 13 21:06:47.900654 kernel: ITS [mem 0x08080000-0x0809ffff]
Jan 13 21:06:47.900661 kernel: ITS@0x0000000008080000: allocated 8192 Devices @400c0000 (indirect, esz 8, psz 64K, shr 1)
Jan 13 21:06:47.900668 kernel: ITS@0x0000000008080000: allocated 8192 Interrupt Collections @400d0000 (flat, esz 8, psz 64K, shr 1)
Jan 13 21:06:47.900675 kernel: GICv3: using LPI property table @0x00000000400f0000
Jan 13 21:06:47.900681 kernel: GICv3: CPU0: using allocated LPI pending table @0x0000000040100000
Jan 13 21:06:47.900688 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Jan 13 21:06:47.900697 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Jan 13 21:06:47.900703 kernel: arch_timer: cp15 timer(s) running at 25.00MHz (virt).
Jan 13 21:06:47.900710 kernel: clocksource: arch_sys_counter: mask: 0xffffffffffffff max_cycles: 0x5c40939b5, max_idle_ns: 440795202646 ns
Jan 13 21:06:47.900717 kernel: sched_clock: 56 bits at 25MHz, resolution 40ns, wraps every 4398046511100ns
Jan 13 21:06:47.900724 kernel: arm-pv: using stolen time PV
Jan 13 21:06:47.900731 kernel: Console: colour dummy device 80x25
Jan 13 21:06:47.900738 kernel: ACPI: Core revision 20230628
Jan 13 21:06:47.900745 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 50.00 BogoMIPS (lpj=25000)
Jan 13 21:06:47.900751 kernel: pid_max: default: 32768 minimum: 301
Jan 13 21:06:47.900758 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity
Jan 13 21:06:47.900766 kernel: landlock: Up and running.
Jan 13 21:06:47.900773 kernel: SELinux: Initializing.
Jan 13 21:06:47.900780 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Jan 13 21:06:47.900786 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Jan 13 21:06:47.900793 kernel: RCU Tasks: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Jan 13 21:06:47.900800 kernel: RCU Tasks Trace: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Jan 13 21:06:47.900807 kernel: rcu: Hierarchical SRCU implementation.
Jan 13 21:06:47.900814 kernel: rcu: Max phase no-delay instances is 400.
Jan 13 21:06:47.900821 kernel: Platform MSI: ITS@0x8080000 domain created
Jan 13 21:06:47.900829 kernel: PCI/MSI: ITS@0x8080000 domain created
Jan 13 21:06:47.900835 kernel: Remapping and enabling EFI services.
Jan 13 21:06:47.900842 kernel: smp: Bringing up secondary CPUs ...
Jan 13 21:06:47.900849 kernel: Detected PIPT I-cache on CPU1
Jan 13 21:06:47.900856 kernel: GICv3: CPU1: found redistributor 1 region 0:0x00000000080c0000
Jan 13 21:06:47.900863 kernel: GICv3: CPU1: using allocated LPI pending table @0x0000000040110000
Jan 13 21:06:47.900870 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Jan 13 21:06:47.900877 kernel: CPU1: Booted secondary processor 0x0000000001 [0x413fd0c1]
Jan 13 21:06:47.900883 kernel: Detected PIPT I-cache on CPU2
Jan 13 21:06:47.900890 kernel: GICv3: CPU2: found redistributor 2 region 0:0x00000000080e0000
Jan 13 21:06:47.900898 kernel: GICv3: CPU2: using allocated LPI pending table @0x0000000040120000
Jan 13 21:06:47.900905 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Jan 13 21:06:47.900917 kernel: CPU2: Booted secondary processor 0x0000000002 [0x413fd0c1]
Jan 13 21:06:47.900925 kernel: Detected PIPT I-cache on CPU3
Jan 13 21:06:47.900932 kernel: GICv3: CPU3: found redistributor 3 region 0:0x0000000008100000
Jan 13 21:06:47.900939 kernel: GICv3: CPU3: using allocated LPI pending table @0x0000000040130000
Jan 13 21:06:47.900947 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Jan 13 21:06:47.900954 kernel: CPU3: Booted secondary processor 0x0000000003 [0x413fd0c1]
Jan 13 21:06:47.900961 kernel: smp: Brought up 1 node, 4 CPUs
Jan 13 21:06:47.900969 kernel: SMP: Total of 4 processors activated.
Jan 13 21:06:47.900976 kernel: CPU features: detected: 32-bit EL0 Support
Jan 13 21:06:47.900984 kernel: CPU features: detected: Data cache clean to the PoU not required for I/D coherence
Jan 13 21:06:47.900991 kernel: CPU features: detected: Common not Private translations
Jan 13 21:06:47.900998 kernel: CPU features: detected: CRC32 instructions
Jan 13 21:06:47.901005 kernel: CPU features: detected: Enhanced Virtualization Traps
Jan 13 21:06:47.901012 kernel: CPU features: detected: RCpc load-acquire (LDAPR)
Jan 13 21:06:47.901019 kernel: CPU features: detected: LSE atomic instructions
Jan 13 21:06:47.901028 kernel: CPU features: detected: Privileged Access Never
Jan 13 21:06:47.901035 kernel: CPU features: detected: RAS Extension Support
Jan 13 21:06:47.901042 kernel: CPU features: detected: Speculative Store Bypassing Safe (SSBS)
Jan 13 21:06:47.901049 kernel: CPU: All CPU(s) started at EL1
Jan 13 21:06:47.901056 kernel: alternatives: applying system-wide alternatives
Jan 13 21:06:47.901063 kernel: devtmpfs: initialized
Jan 13 21:06:47.901070 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Jan 13 21:06:47.901078 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear)
Jan 13 21:06:47.901085 kernel: pinctrl core: initialized pinctrl subsystem
Jan 13 21:06:47.901093 kernel: SMBIOS 3.0.0 present.
Jan 13 21:06:47.901101 kernel: DMI: QEMU KVM Virtual Machine, BIOS edk2-20230524-3.fc38 05/24/2023
Jan 13 21:06:47.901108 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Jan 13 21:06:47.901115 kernel: DMA: preallocated 512 KiB GFP_KERNEL pool for atomic allocations
Jan 13 21:06:47.901122 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations
Jan 13 21:06:47.901129 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations
Jan 13 21:06:47.901137 kernel: audit: initializing netlink subsys (disabled)
Jan 13 21:06:47.901144 kernel: audit: type=2000 audit(0.023:1): state=initialized audit_enabled=0 res=1
Jan 13 21:06:47.901151 kernel: thermal_sys: Registered thermal governor 'step_wise'
Jan 13 21:06:47.901159 kernel: cpuidle: using governor menu
Jan 13 21:06:47.901167 kernel: hw-breakpoint: found 6 breakpoint and 4 watchpoint registers.
Jan 13 21:06:47.901178 kernel: ASID allocator initialised with 32768 entries
Jan 13 21:06:47.901187 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Jan 13 21:06:47.901194 kernel: Serial: AMBA PL011 UART driver
Jan 13 21:06:47.901209 kernel: Modules: 2G module region forced by RANDOMIZE_MODULE_REGION_FULL
Jan 13 21:06:47.901217 kernel: Modules: 0 pages in range for non-PLT usage
Jan 13 21:06:47.901224 kernel: Modules: 509040 pages in range for PLT usage
Jan 13 21:06:47.901231 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Jan 13 21:06:47.901240 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 1.00 GiB page
Jan 13 21:06:47.901247 kernel: HugeTLB: registered 32.0 MiB page size, pre-allocated 0 pages
Jan 13 21:06:47.901254 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 32.0 MiB page
Jan 13 21:06:47.901261 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Jan 13 21:06:47.901269 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 2.00 MiB page
Jan 13 21:06:47.901276 kernel: HugeTLB: registered 64.0 KiB page size, pre-allocated 0 pages
Jan 13 21:06:47.901283 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 64.0 KiB page
Jan 13 21:06:47.901290 kernel: ACPI: Added _OSI(Module Device)
Jan 13 21:06:47.901297 kernel: ACPI: Added _OSI(Processor Device)
Jan 13 21:06:47.901305 kernel: ACPI: Added _OSI(3.0 _SCP Extensions)
Jan 13 21:06:47.901313 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Jan 13 21:06:47.901320 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Jan 13 21:06:47.901327 kernel: ACPI: Interpreter enabled
Jan 13 21:06:47.901340 kernel: ACPI: Using GIC for interrupt routing
Jan 13 21:06:47.901348 kernel: ACPI: MCFG table detected, 1 entries
Jan 13 21:06:47.901355 kernel: ARMH0011:00: ttyAMA0 at MMIO 0x9000000 (irq = 12, base_baud = 0) is a SBSA
Jan 13 21:06:47.901363 kernel: printk: console [ttyAMA0] enabled
Jan 13 21:06:47.901370 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Jan 13 21:06:47.901503 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Jan 13 21:06:47.901576 kernel: acpi PNP0A08:00: _OSC: platform does not support [LTR]
Jan 13 21:06:47.901641 kernel: acpi PNP0A08:00: _OSC: OS now controls [PCIeHotplug PME AER PCIeCapability]
Jan 13 21:06:47.901702 kernel: acpi PNP0A08:00: ECAM area [mem 0x4010000000-0x401fffffff] reserved by PNP0C02:00
Jan 13 21:06:47.901764 kernel: acpi PNP0A08:00: ECAM at [mem 0x4010000000-0x401fffffff] for [bus 00-ff]
Jan 13 21:06:47.901773 kernel: ACPI: Remapped I/O 0x000000003eff0000 to [io 0x0000-0xffff window]
Jan 13 21:06:47.901781 kernel: PCI host bridge to bus 0000:00
Jan 13 21:06:47.901851 kernel: pci_bus 0000:00: root bus resource [mem 0x10000000-0x3efeffff window]
Jan 13 21:06:47.901908 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0xffff window]
Jan 13 21:06:47.901965 kernel: pci_bus 0000:00: root bus resource [mem 0x8000000000-0xffffffffff window]
Jan 13 21:06:47.902020 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Jan 13 21:06:47.902096 kernel: pci 0000:00:00.0: [1b36:0008] type 00 class 0x060000
Jan 13 21:06:47.902193 kernel: pci 0000:00:01.0: [1af4:1005] type 00 class 0x00ff00
Jan 13 21:06:47.902266 kernel: pci 0000:00:01.0: reg 0x10: [io 0x0000-0x001f]
Jan 13 21:06:47.902329 kernel: pci 0000:00:01.0: reg 0x14: [mem 0x10000000-0x10000fff]
Jan 13 21:06:47.902407 kernel: pci 0000:00:01.0: reg 0x20: [mem 0x8000000000-0x8000003fff 64bit pref]
Jan 13 21:06:47.902470 kernel: pci 0000:00:01.0: BAR 4: assigned [mem 0x8000000000-0x8000003fff 64bit pref]
Jan 13 21:06:47.902535 kernel: pci 0000:00:01.0: BAR 1: assigned [mem 0x10000000-0x10000fff]
Jan 13 21:06:47.902613 kernel: pci 0000:00:01.0: BAR 0: assigned [io 0x1000-0x101f]
Jan 13 21:06:47.902674 kernel: pci_bus 0000:00: resource 4 [mem 0x10000000-0x3efeffff window]
Jan 13 21:06:47.902730 kernel: pci_bus 0000:00: resource 5 [io 0x0000-0xffff window]
Jan 13 21:06:47.902790 kernel: pci_bus 0000:00: resource 6 [mem 0x8000000000-0xffffffffff window]
Jan 13 21:06:47.902800 kernel: ACPI: PCI: Interrupt link GSI0 configured for IRQ 35
Jan 13 21:06:47.902808 kernel: ACPI: PCI: Interrupt link GSI1 configured for IRQ 36
Jan 13 21:06:47.902815 kernel: ACPI: PCI: Interrupt link GSI2 configured for IRQ 37
Jan 13 21:06:47.902822 kernel: ACPI: PCI: Interrupt link GSI3 configured for IRQ 38
Jan 13 21:06:47.902829 kernel: iommu: Default domain type: Translated
Jan 13 21:06:47.902837 kernel: iommu: DMA domain TLB invalidation policy: strict mode
Jan 13 21:06:47.902844 kernel: efivars: Registered efivars operations
Jan 13 21:06:47.902853 kernel: vgaarb: loaded
Jan 13 21:06:47.902860 kernel: clocksource: Switched to clocksource arch_sys_counter
Jan 13 21:06:47.902867 kernel: VFS: Disk quotas dquot_6.6.0
Jan 13 21:06:47.902875 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Jan 13 21:06:47.902882 kernel: pnp: PnP ACPI init
Jan 13 21:06:47.902952 kernel: system 00:00: [mem 0x4010000000-0x401fffffff window] could not be reserved
Jan 13 21:06:47.902962 kernel: pnp: PnP ACPI: found 1 devices
Jan 13 21:06:47.902970 kernel: NET: Registered PF_INET protocol family
Jan 13 21:06:47.902979 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Jan 13 21:06:47.902986 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Jan 13 21:06:47.902994 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Jan 13 21:06:47.903001 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Jan 13 21:06:47.903008 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
Jan 13 21:06:47.903016 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Jan 13 21:06:47.903023 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Jan 13 21:06:47.903030 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Jan 13 21:06:47.903037 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Jan 13 21:06:47.903045 kernel: PCI: CLS 0 bytes, default 64
Jan 13 21:06:47.903053 kernel: kvm [1]: HYP mode not available
Jan 13 21:06:47.903060 kernel: Initialise system trusted keyrings
Jan 13 21:06:47.903067 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Jan 13 21:06:47.903074 kernel: Key type asymmetric registered
Jan 13 21:06:47.903081 kernel: Asymmetric key parser 'x509' registered
Jan 13 21:06:47.903088 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250)
Jan 13 21:06:47.903096 kernel: io scheduler mq-deadline registered
Jan 13 21:06:47.903103 kernel: io scheduler kyber registered
Jan 13 21:06:47.903111 kernel: io scheduler bfq registered
Jan 13 21:06:47.903118 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0C:00/input/input0
Jan 13 21:06:47.903126 kernel: ACPI: button: Power Button [PWRB]
Jan 13 21:06:47.903133 kernel: ACPI: \_SB_.PCI0.GSI1: Enabled at IRQ 36
Jan 13 21:06:47.903216 kernel: virtio-pci 0000:00:01.0: enabling device (0005 -> 0007)
Jan 13 21:06:47.903227 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Jan 13 21:06:47.903234 kernel: thunder_xcv, ver 1.0
Jan 13 21:06:47.903241 kernel: thunder_bgx, ver 1.0
Jan 13 21:06:47.903249 kernel: nicpf, ver 1.0
Jan 13 21:06:47.903258 kernel: nicvf, ver 1.0
Jan 13 21:06:47.903329 kernel: rtc-efi rtc-efi.0: registered as rtc0
Jan 13 21:06:47.903402 kernel: rtc-efi rtc-efi.0: setting system clock to 2025-01-13T21:06:47 UTC (1736802407)
Jan 13 21:06:47.903412 kernel: hid: raw HID events driver (C) Jiri Kosina
Jan 13 21:06:47.903420 kernel: hw perfevents: enabled with armv8_pmuv3_0 PMU driver, 7 counters available
Jan 13 21:06:47.903427 kernel: watchdog: Delayed init of the lockup detector failed: -19
Jan 13 21:06:47.903434 kernel: watchdog: Hard watchdog permanently disabled
Jan 13 21:06:47.903442 kernel: NET: Registered PF_INET6 protocol family
Jan 13 21:06:47.903451 kernel: Segment Routing with IPv6
Jan 13 21:06:47.903458 kernel: In-situ OAM (IOAM) with IPv6
Jan 13 21:06:47.903465 kernel: NET: Registered PF_PACKET protocol family
Jan 13 21:06:47.903473 kernel: Key type dns_resolver registered
Jan 13 21:06:47.903480 kernel: registered taskstats version 1
Jan 13 21:06:47.903487 kernel: Loading compiled-in X.509 certificates
Jan 13 21:06:47.903494 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.71-flatcar: 4d59b6166d6886703230c188f8df863190489638'
Jan 13 21:06:47.903502 kernel: Key type .fscrypt registered
Jan 13 21:06:47.903509 kernel: Key type fscrypt-provisioning registered
Jan 13 21:06:47.903517 kernel: ima: No TPM chip found, activating TPM-bypass!
Jan 13 21:06:47.903525 kernel: ima: Allocated hash algorithm: sha1
Jan 13 21:06:47.903532 kernel: ima: No architecture policies found
Jan 13 21:06:47.903539 kernel: alg: No test for fips(ansi_cprng) (fips_ansi_cprng)
Jan 13 21:06:47.903546 kernel: clk: Disabling unused clocks
Jan 13 21:06:47.903553 kernel: Freeing unused kernel memory: 39360K
Jan 13 21:06:47.903561 kernel: Run /init as init process
Jan 13 21:06:47.903568 kernel: with arguments:
Jan 13 21:06:47.903575 kernel: /init
Jan 13 21:06:47.903583 kernel: with environment:
Jan 13 21:06:47.903590 kernel: HOME=/
Jan 13 21:06:47.903597 kernel: TERM=linux
Jan 13 21:06:47.903604 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
Jan 13 21:06:47.903613 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Jan 13 21:06:47.903622 systemd[1]: Detected virtualization kvm.
Jan 13 21:06:47.903630 systemd[1]: Detected architecture arm64.
Jan 13 21:06:47.903638 systemd[1]: Running in initrd.
Jan 13 21:06:47.903647 systemd[1]: No hostname configured, using default hostname.
Jan 13 21:06:47.903654 systemd[1]: Hostname set to .
Jan 13 21:06:47.903662 systemd[1]: Initializing machine ID from VM UUID.
Jan 13 21:06:47.903670 systemd[1]: Queued start job for default target initrd.target.
Jan 13 21:06:47.903678 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Jan 13 21:06:47.903685 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Jan 13 21:06:47.903694 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Jan 13 21:06:47.903702 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Jan 13 21:06:47.903711 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Jan 13 21:06:47.903719 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Jan 13 21:06:47.903728 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Jan 13 21:06:47.903736 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Jan 13 21:06:47.903744 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Jan 13 21:06:47.903752 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Jan 13 21:06:47.903761 systemd[1]: Reached target paths.target - Path Units.
Jan 13 21:06:47.903769 systemd[1]: Reached target slices.target - Slice Units.
Jan 13 21:06:47.903776 systemd[1]: Reached target swap.target - Swaps.
Jan 13 21:06:47.903784 systemd[1]: Reached target timers.target - Timer Units.
Jan 13 21:06:47.903792 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Jan 13 21:06:47.903799 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Jan 13 21:06:47.903807 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Jan 13 21:06:47.903815 systemd[1]: Listening on systemd-journald.socket - Journal Socket.
Jan 13 21:06:47.903823 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Jan 13 21:06:47.903832 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Jan 13 21:06:47.903840 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Jan 13 21:06:47.903848 systemd[1]: Reached target sockets.target - Socket Units.
Jan 13 21:06:47.903855 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Jan 13 21:06:47.903863 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Jan 13 21:06:47.903871 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Jan 13 21:06:47.903879 systemd[1]: Starting systemd-fsck-usr.service...
Jan 13 21:06:47.903886 systemd[1]: Starting systemd-journald.service - Journal Service...
Jan 13 21:06:47.903894 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Jan 13 21:06:47.903903 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jan 13 21:06:47.903911 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Jan 13 21:06:47.903919 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Jan 13 21:06:47.903926 systemd[1]: Finished systemd-fsck-usr.service.
Jan 13 21:06:47.903935 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Jan 13 21:06:47.903957 systemd-journald[239]: Collecting audit messages is disabled.
Jan 13 21:06:47.903976 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Jan 13 21:06:47.903985 systemd-journald[239]: Journal started
Jan 13 21:06:47.904004 systemd-journald[239]: Runtime Journal (/run/log/journal/7a28643ead184fab8ddf2fb2bc492ed9) is 5.9M, max 47.3M, 41.4M free.
Jan 13 21:06:47.896877 systemd-modules-load[240]: Inserted module 'overlay'
Jan 13 21:06:47.912286 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Jan 13 21:06:47.912304 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Jan 13 21:06:47.913837 systemd-modules-load[240]: Inserted module 'br_netfilter'
Jan 13 21:06:47.915427 kernel: Bridge firewalling registered
Jan 13 21:06:47.915444 systemd[1]: Started systemd-journald.service - Journal Service.
Jan 13 21:06:47.917212 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Jan 13 21:06:47.919155 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Jan 13 21:06:47.928371 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Jan 13 21:06:47.929920 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Jan 13 21:06:47.934208 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Jan 13 21:06:47.935477 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jan 13 21:06:47.938004 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Jan 13 21:06:47.943202 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Jan 13 21:06:47.956303 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Jan 13 21:06:47.957450 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Jan 13 21:06:47.960841 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Jan 13 21:06:47.969801 dracut-cmdline[276]: dracut-dracut-053
Jan 13 21:06:47.971815 dracut-cmdline[276]: Using kernel command line parameters: rd.driver.pre=btrfs BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected acpi=force verity.usrhash=c6a3a48cbc65bf640516dc59d6b026e304001b7b3125ecbabbbe9ce0bd8888f0
Jan 13 21:06:47.985205 systemd-resolved[280]: Positive Trust Anchors:
Jan 13 21:06:47.985223 systemd-resolved[280]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Jan 13 21:06:47.985256 systemd-resolved[280]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Jan 13 21:06:47.989873 systemd-resolved[280]: Defaulting to hostname 'linux'.
Jan 13 21:06:47.993344 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Jan 13 21:06:47.994469 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Jan 13 21:06:48.029206 kernel: SCSI subsystem initialized
Jan 13 21:06:48.033191 kernel: Loading iSCSI transport class v2.0-870.
Jan 13 21:06:48.041202 kernel: iscsi: registered transport (tcp)
Jan 13 21:06:48.053532 kernel: iscsi: registered transport (qla4xxx)
Jan 13 21:06:48.053557 kernel: QLogic iSCSI HBA Driver
Jan 13 21:06:48.089832 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Jan 13 21:06:48.097321 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Jan 13 21:06:48.113434 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Jan 13 21:06:48.113472 kernel: device-mapper: uevent: version 1.0.3
Jan 13 21:06:48.115046 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com
Jan 13 21:06:48.163199 kernel: raid6: neonx8 gen() 15788 MB/s
Jan 13 21:06:48.180194 kernel: raid6: neonx4 gen() 15657 MB/s
Jan 13 21:06:48.197194 kernel: raid6: neonx2 gen() 13277 MB/s
Jan 13 21:06:48.214187 kernel: raid6: neonx1 gen() 10492 MB/s
Jan 13 21:06:48.231187 kernel: raid6: int64x8 gen() 6950 MB/s
Jan 13 21:06:48.248188 kernel: raid6: int64x4 gen() 7356 MB/s
Jan 13 21:06:48.265189 kernel: raid6: int64x2 gen() 6133 MB/s
Jan 13 21:06:48.282284 kernel: raid6: int64x1 gen() 5063 MB/s
Jan 13 21:06:48.282309 kernel: raid6: using algorithm neonx8 gen() 15788 MB/s
Jan 13 21:06:48.300265 kernel: raid6: .... xor() 11928 MB/s, rmw enabled
Jan 13 21:06:48.300281 kernel: raid6: using neon recovery algorithm
Jan 13 21:06:48.305571 kernel: xor: measuring software checksum speed
Jan 13 21:06:48.305586 kernel: 8regs : 19783 MB/sec
Jan 13 21:06:48.306226 kernel: 32regs : 19622 MB/sec
Jan 13 21:06:48.307440 kernel: arm64_neon : 26005 MB/sec
Jan 13 21:06:48.307453 kernel: xor: using function: arm64_neon (26005 MB/sec)
Jan 13 21:06:48.357196 kernel: Btrfs loaded, zoned=no, fsverity=no
Jan 13 21:06:48.367408 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Jan 13 21:06:48.383321 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Jan 13 21:06:48.394289 systemd-udevd[462]: Using default interface naming scheme 'v255'.
Jan 13 21:06:48.397354 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Jan 13 21:06:48.416337 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Jan 13 21:06:48.427558 dracut-pre-trigger[474]: rd.md=0: removing MD RAID activation
Jan 13 21:06:48.453156 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Jan 13 21:06:48.463386 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Jan 13 21:06:48.501684 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Jan 13 21:06:48.510401 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Jan 13 21:06:48.523857 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Jan 13 21:06:48.525687 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Jan 13 21:06:48.529290 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Jan 13 21:06:48.530442 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Jan 13 21:06:48.536243 kernel: virtio_blk virtio1: 1/0/0 default/read/poll queues
Jan 13 21:06:48.555742 kernel: virtio_blk virtio1: [vda] 19775488 512-byte logical blocks (10.1 GB/9.43 GiB)
Jan 13 21:06:48.555841 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Jan 13 21:06:48.555853 kernel: GPT:9289727 != 19775487
Jan 13 21:06:48.555862 kernel: GPT:Alternate GPT header not at the end of the disk.
Jan 13 21:06:48.555872 kernel: GPT:9289727 != 19775487
Jan 13 21:06:48.555881 kernel: GPT: Use GNU Parted to correct GPT errors.
Jan 13 21:06:48.555890 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Jan 13 21:06:48.539377 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Jan 13 21:06:48.554437 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Jan 13 21:06:48.557442 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Jan 13 21:06:48.557559 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jan 13 21:06:48.561374 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Jan 13 21:06:48.562624 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Jan 13 21:06:48.562829 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Jan 13 21:06:48.564680 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Jan 13 21:06:48.578409 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jan 13 21:06:48.584191 kernel: BTRFS: device fsid 475b4555-939b-441c-9b47-b8244f532234 devid 1 transid 39 /dev/vda3 scanned by (udev-worker) (515)
Jan 13 21:06:48.584227 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/vda6 scanned by (udev-worker) (531)
Jan 13 21:06:48.593601 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT.
Jan 13 21:06:48.595943 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Jan 13 21:06:48.605966 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM.
Jan 13 21:06:48.612322 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132.
Jan 13 21:06:48.613551 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A.
Jan 13 21:06:48.619684 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Jan 13 21:06:48.635375 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Jan 13 21:06:48.637755 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Jan 13 21:06:48.641647 disk-uuid[556]: Primary Header is updated.
Jan 13 21:06:48.641647 disk-uuid[556]: Secondary Entries is updated.
Jan 13 21:06:48.641647 disk-uuid[556]: Secondary Header is updated.
Jan 13 21:06:48.645195 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Jan 13 21:06:48.660849 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jan 13 21:06:49.655220 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Jan 13 21:06:49.655882 disk-uuid[557]: The operation has completed successfully.
Jan 13 21:06:49.678777 systemd[1]: disk-uuid.service: Deactivated successfully.
Jan 13 21:06:49.678877 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Jan 13 21:06:49.699373 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
Jan 13 21:06:49.701949 sh[579]: Success
Jan 13 21:06:49.716337 kernel: device-mapper: verity: sha256 using implementation "sha256-ce"
Jan 13 21:06:49.743377 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
Jan 13 21:06:49.761410 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
Jan 13 21:06:49.763765 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
Jan 13 21:06:49.772560 kernel: BTRFS info (device dm-0): first mount of filesystem 475b4555-939b-441c-9b47-b8244f532234
Jan 13 21:06:49.772591 kernel: BTRFS info (device dm-0): using crc32c (crc32c-generic) checksum algorithm
Jan 13 21:06:49.772602 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead
Jan 13 21:06:49.773610 kernel: BTRFS info (device dm-0): disabling log replay at mount time
Jan 13 21:06:49.775182 kernel: BTRFS info (device dm-0): using free space tree
Jan 13 21:06:49.777979 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
Jan 13 21:06:49.779348 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Jan 13 21:06:49.789367 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Jan 13 21:06:49.790883 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Jan 13 21:06:49.799599 kernel: BTRFS info (device vda6): first mount of filesystem 1a82fd1a-1cbb-4d3a-bbb2-d4650cd9e9cd
Jan 13 21:06:49.799634 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm
Jan 13 21:06:49.799649 kernel: BTRFS info (device vda6): using free space tree
Jan 13 21:06:49.802191 kernel: BTRFS info (device vda6): auto enabling async discard
Jan 13 21:06:49.809249 systemd[1]: mnt-oem.mount: Deactivated successfully.
Jan 13 21:06:49.810915 kernel: BTRFS info (device vda6): last unmount of filesystem 1a82fd1a-1cbb-4d3a-bbb2-d4650cd9e9cd
Jan 13 21:06:49.817205 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Jan 13 21:06:49.822313 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Jan 13 21:06:49.878004 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Jan 13 21:06:49.884321 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Jan 13 21:06:49.913796 ignition[677]: Ignition 2.19.0
Jan 13 21:06:49.913805 ignition[677]: Stage: fetch-offline
Jan 13 21:06:49.913839 ignition[677]: no configs at "/usr/lib/ignition/base.d"
Jan 13 21:06:49.913847 ignition[677]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Jan 13 21:06:49.913998 ignition[677]: parsed url from cmdline: ""
Jan 13 21:06:49.916816 systemd-networkd[771]: lo: Link UP
Jan 13 21:06:49.914002 ignition[677]: no config URL provided
Jan 13 21:06:49.916819 systemd-networkd[771]: lo: Gained carrier
Jan 13 21:06:49.914006 ignition[677]: reading system config file "/usr/lib/ignition/user.ign"
Jan 13 21:06:49.917641 systemd-networkd[771]: Enumeration completed
Jan 13 21:06:49.914012 ignition[677]: no config at "/usr/lib/ignition/user.ign"
Jan 13 21:06:49.918038 systemd[1]: Started systemd-networkd.service - Network Configuration.
Jan 13 21:06:49.914036 ignition[677]: op(1): [started] loading QEMU firmware config module
Jan 13 21:06:49.918106 systemd-networkd[771]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jan 13 21:06:49.914043 ignition[677]: op(1): executing: "modprobe" "qemu_fw_cfg"
Jan 13 21:06:49.918109 systemd-networkd[771]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Jan 13 21:06:49.918841 systemd-networkd[771]: eth0: Link UP
Jan 13 21:06:49.918844 systemd-networkd[771]: eth0: Gained carrier
Jan 13 21:06:49.931504 ignition[677]: op(1): [finished] loading QEMU firmware config module
Jan 13 21:06:49.918850 systemd-networkd[771]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jan 13 21:06:49.919552 systemd[1]: Reached target network.target - Network.
Jan 13 21:06:49.942237 systemd-networkd[771]: eth0: DHCPv4 address 10.0.0.15/16, gateway 10.0.0.1 acquired from 10.0.0.1
Jan 13 21:06:49.956598 ignition[677]: parsing config with SHA512: c6aa08946ddea280428998eb0bc80e1c327ee50397989d7818c34dab6556c8b6430d73628327b6b773c4bfe5eacc2da93f86c7cbf813e1c64b51a4ded44faeb8
Jan 13 21:06:49.960605 unknown[677]: fetched base config from "system"
Jan 13 21:06:49.960615 unknown[677]: fetched user config from "qemu"
Jan 13 21:06:49.961007 ignition[677]: fetch-offline: fetch-offline passed
Jan 13 21:06:49.962444 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Jan 13 21:06:49.961065 ignition[677]: Ignition finished successfully
Jan 13 21:06:49.964546 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json).
Jan 13 21:06:49.975356 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Jan 13 21:06:49.985004 ignition[777]: Ignition 2.19.0
Jan 13 21:06:49.985014 ignition[777]: Stage: kargs
Jan 13 21:06:49.985169 ignition[777]: no configs at "/usr/lib/ignition/base.d"
Jan 13 21:06:49.987466 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Jan 13 21:06:49.985195 ignition[777]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Jan 13 21:06:49.986002 ignition[777]: kargs: kargs passed
Jan 13 21:06:49.990051 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Jan 13 21:06:49.986041 ignition[777]: Ignition finished successfully
Jan 13 21:06:50.002593 ignition[784]: Ignition 2.19.0
Jan 13 21:06:50.002603 ignition[784]: Stage: disks
Jan 13 21:06:50.002752 ignition[784]: no configs at "/usr/lib/ignition/base.d"
Jan 13 21:06:50.002761 ignition[784]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Jan 13 21:06:50.003581 ignition[784]: disks: disks passed
Jan 13 21:06:50.005383 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Jan 13 21:06:50.003620 ignition[784]: Ignition finished successfully
Jan 13 21:06:50.006791 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Jan 13 21:06:50.008166 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Jan 13 21:06:50.010109 systemd[1]: Reached target local-fs.target - Local File Systems.
Jan 13 21:06:50.011643 systemd[1]: Reached target sysinit.target - System Initialization.
Jan 13 21:06:50.013453 systemd[1]: Reached target basic.target - Basic System.
Jan 13 21:06:50.021294 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Jan 13 21:06:50.031026 systemd-fsck[795]: ROOT: clean, 14/553520 files, 52654/553472 blocks
Jan 13 21:06:50.034415 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Jan 13 21:06:50.039010 systemd[1]: Mounting sysroot.mount - /sysroot...
Jan 13 21:06:50.081053 systemd[1]: Mounted sysroot.mount - /sysroot.
Jan 13 21:06:50.082541 kernel: EXT4-fs (vda9): mounted filesystem 238cddae-3c4d-4696-a666-660fd149aa3e r/w with ordered data mode. Quota mode: none.
Jan 13 21:06:50.082216 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Jan 13 21:06:50.094309 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Jan 13 21:06:50.096047 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Jan 13 21:06:50.097224 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met.
Jan 13 21:06:50.097295 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Jan 13 21:06:50.104665 kernel: BTRFS: device label OEM devid 1 transid 13 /dev/vda6 scanned by mount (803)
Jan 13 21:06:50.097344 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Jan 13 21:06:50.109234 kernel: BTRFS info (device vda6): first mount of filesystem 1a82fd1a-1cbb-4d3a-bbb2-d4650cd9e9cd
Jan 13 21:06:50.109261 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm
Jan 13 21:06:50.109272 kernel: BTRFS info (device vda6): using free space tree
Jan 13 21:06:50.101418 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Jan 13 21:06:50.103431 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Jan 13 21:06:50.113208 kernel: BTRFS info (device vda6): auto enabling async discard
Jan 13 21:06:50.114139 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Jan 13 21:06:50.146475 initrd-setup-root[827]: cut: /sysroot/etc/passwd: No such file or directory
Jan 13 21:06:50.150158 initrd-setup-root[834]: cut: /sysroot/etc/group: No such file or directory
Jan 13 21:06:50.153780 initrd-setup-root[841]: cut: /sysroot/etc/shadow: No such file or directory
Jan 13 21:06:50.157501 initrd-setup-root[848]: cut: /sysroot/etc/gshadow: No such file or directory
Jan 13 21:06:50.224604 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Jan 13 21:06:50.237271 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Jan 13 21:06:50.239643 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Jan 13 21:06:50.245188 kernel: BTRFS info (device vda6): last unmount of filesystem 1a82fd1a-1cbb-4d3a-bbb2-d4650cd9e9cd
Jan 13 21:06:50.260250 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Jan 13 21:06:50.262153 ignition[917]: INFO : Ignition 2.19.0
Jan 13 21:06:50.262153 ignition[917]: INFO : Stage: mount
Jan 13 21:06:50.263640 ignition[917]: INFO : no configs at "/usr/lib/ignition/base.d"
Jan 13 21:06:50.263640 ignition[917]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Jan 13 21:06:50.263640 ignition[917]: INFO : mount: mount passed
Jan 13 21:06:50.263640 ignition[917]: INFO : Ignition finished successfully
Jan 13 21:06:50.265264 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Jan 13 21:06:50.277304 systemd[1]: Starting ignition-files.service - Ignition (files)...
Jan 13 21:06:50.771450 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Jan 13 21:06:50.779332 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Jan 13 21:06:50.786758 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/vda6 scanned by mount (930)
Jan 13 21:06:50.786800 kernel: BTRFS info (device vda6): first mount of filesystem 1a82fd1a-1cbb-4d3a-bbb2-d4650cd9e9cd
Jan 13 21:06:50.786820 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm
Jan 13 21:06:50.788277 kernel: BTRFS info (device vda6): using free space tree
Jan 13 21:06:50.790200 kernel: BTRFS info (device vda6): auto enabling async discard
Jan 13 21:06:50.791410 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Jan 13 21:06:50.805891 ignition[947]: INFO : Ignition 2.19.0
Jan 13 21:06:50.805891 ignition[947]: INFO : Stage: files
Jan 13 21:06:50.807530 ignition[947]: INFO : no configs at "/usr/lib/ignition/base.d"
Jan 13 21:06:50.807530 ignition[947]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Jan 13 21:06:50.807530 ignition[947]: DEBUG : files: compiled without relabeling support, skipping
Jan 13 21:06:50.810946 ignition[947]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Jan 13 21:06:50.810946 ignition[947]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Jan 13 21:06:50.810946 ignition[947]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Jan 13 21:06:50.810946 ignition[947]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Jan 13 21:06:50.810946 ignition[947]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Jan 13 21:06:50.810137 unknown[947]: wrote ssh authorized keys file for user: core
Jan 13 21:06:50.818262 ignition[947]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-arm64.tar.gz"
Jan 13 21:06:50.818262 ignition[947]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.13.2-linux-arm64.tar.gz: attempt #1
Jan 13 21:06:50.894474 ignition[947]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
Jan 13 21:06:51.056894 ignition[947]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-arm64.tar.gz"
Jan 13 21:06:51.056894 ignition[947]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh"
Jan 13 21:06:51.060609 ignition[947]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh"
Jan 13 21:06:51.060609 ignition[947]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml"
Jan 13 21:06:51.060609 ignition[947]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml"
Jan 13 21:06:51.060609 ignition[947]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Jan 13 21:06:51.060609 ignition[947]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Jan 13 21:06:51.060609 ignition[947]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Jan 13 21:06:51.060609 ignition[947]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Jan 13 21:06:51.060609 ignition[947]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf"
Jan 13 21:06:51.060609 ignition[947]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Jan 13 21:06:51.060609 ignition[947]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.30.1-arm64.raw"
Jan 13 21:06:51.060609 ignition[947]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.30.1-arm64.raw"
Jan 13 21:06:51.060609 ignition[947]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.30.1-arm64.raw"
Jan 13 21:06:51.060609 ignition[947]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.30.1-arm64.raw: attempt #1
Jan 13 21:06:51.410519 systemd-networkd[771]: eth0: Gained IPv6LL
Jan 13 21:06:51.443289 ignition[947]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK
Jan 13 21:06:52.035492 ignition[947]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.30.1-arm64.raw"
Jan 13 21:06:52.037974 ignition[947]: INFO : files: op(b): [started] processing unit "prepare-helm.service"
Jan 13 21:06:52.037974 ignition[947]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Jan 13 21:06:52.037974 ignition[947]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Jan 13 21:06:52.037974 ignition[947]: INFO : files: op(b): [finished] processing unit "prepare-helm.service"
Jan 13 21:06:52.037974 ignition[947]: INFO : files: op(d): [started] processing unit "coreos-metadata.service"
Jan 13 21:06:52.037974 ignition[947]: INFO : files: op(d): op(e): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Jan 13 21:06:52.037974 ignition[947]: INFO : files: op(d): op(e): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Jan 13 21:06:52.037974 ignition[947]: INFO : files: op(d): [finished] processing unit "coreos-metadata.service"
Jan 13 21:06:52.037974 ignition[947]: INFO : files: op(f): [started] setting preset to disabled for "coreos-metadata.service"
Jan 13 21:06:52.060389 ignition[947]: INFO : files: op(f): op(10): [started] removing enablement symlink(s) for "coreos-metadata.service"
Jan 13 21:06:52.063828 ignition[947]: INFO : files: op(f): op(10): [finished] removing enablement symlink(s) for "coreos-metadata.service"
Jan 13 21:06:52.065465 ignition[947]: INFO : files: op(f): [finished] setting preset to disabled for "coreos-metadata.service"
Jan 13 21:06:52.065465 ignition[947]: INFO : files: op(11): [started] setting preset to enabled for "prepare-helm.service"
Jan 13 21:06:52.065465 ignition[947]: INFO : files: op(11): [finished] setting preset to enabled for "prepare-helm.service"
Jan 13 21:06:52.065465 ignition[947]: INFO : files: createResultFile: createFiles: op(12): [started] writing file "/sysroot/etc/.ignition-result.json"
Jan 13 21:06:52.065465 ignition[947]: INFO : files: createResultFile: createFiles: op(12): [finished] writing file "/sysroot/etc/.ignition-result.json"
Jan 13 21:06:52.065465 ignition[947]: INFO : files: files passed
Jan 13 21:06:52.065465 ignition[947]: INFO : Ignition finished successfully
Jan 13 21:06:52.065942 systemd[1]: Finished ignition-files.service - Ignition (files).
Jan 13 21:06:52.082345 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Jan 13 21:06:52.084138 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Jan 13 21:06:52.086476 systemd[1]: ignition-quench.service: Deactivated successfully.
Jan 13 21:06:52.087223 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Jan 13 21:06:52.092692 initrd-setup-root-after-ignition[975]: grep: /sysroot/oem/oem-release: No such file or directory
Jan 13 21:06:52.095424 initrd-setup-root-after-ignition[977]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Jan 13 21:06:52.095424 initrd-setup-root-after-ignition[977]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Jan 13 21:06:52.098611 initrd-setup-root-after-ignition[981]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Jan 13 21:06:52.098131 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Jan 13 21:06:52.100143 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Jan 13 21:06:52.111311 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Jan 13 21:06:52.130627 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Jan 13 21:06:52.130749 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Jan 13 21:06:52.132998 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Jan 13 21:06:52.134903 systemd[1]: Reached target initrd.target - Initrd Default Target.
Jan 13 21:06:52.136726 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Jan 13 21:06:52.137577 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Jan 13 21:06:52.153896 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Jan 13 21:06:52.170312 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Jan 13 21:06:52.177916 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Jan 13 21:06:52.179246 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Jan 13 21:06:52.181459 systemd[1]: Stopped target timers.target - Timer Units.
Jan 13 21:06:52.183280 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Jan 13 21:06:52.183395 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Jan 13 21:06:52.186063 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Jan 13 21:06:52.187226 systemd[1]: Stopped target basic.target - Basic System.
Jan 13 21:06:52.189163 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Jan 13 21:06:52.191146 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Jan 13 21:06:52.193056 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Jan 13 21:06:52.194935 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Jan 13 21:06:52.196759 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
Jan 13 21:06:52.198755 systemd[1]: Stopped target sysinit.target - System Initialization.
Jan 13 21:06:52.200458 systemd[1]: Stopped target local-fs.target - Local File Systems.
Jan 13 21:06:52.202287 systemd[1]: Stopped target swap.target - Swaps.
Jan 13 21:06:52.203806 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Jan 13 21:06:52.203935 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
Jan 13 21:06:52.206183 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
Jan 13 21:06:52.208012 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Jan 13 21:06:52.209871 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
Jan 13 21:06:52.209945 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Jan 13 21:06:52.211792 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Jan 13 21:06:52.211903 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
Jan 13 21:06:52.214376 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Jan 13 21:06:52.214480 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
Jan 13 21:06:52.216871 systemd[1]: Stopped target paths.target - Path Units.
Jan 13 21:06:52.218411 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Jan 13 21:06:52.222206 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Jan 13 21:06:52.224402 systemd[1]: Stopped target slices.target - Slice Units.
Jan 13 21:06:52.226417 systemd[1]: Stopped target sockets.target - Socket Units.
Jan 13 21:06:52.228029 systemd[1]: iscsid.socket: Deactivated successfully.
Jan 13 21:06:52.228115 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
Jan 13 21:06:52.229694 systemd[1]: iscsiuio.socket: Deactivated successfully.
Jan 13 21:06:52.229807 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Jan 13 21:06:52.231418 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Jan 13 21:06:52.231525 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
Jan 13 21:06:52.233319 systemd[1]: ignition-files.service: Deactivated successfully.
Jan 13 21:06:52.233414 systemd[1]: Stopped ignition-files.service - Ignition (files).
Jan 13 21:06:52.250375 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
Jan 13 21:06:52.251315 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Jan 13 21:06:52.251479 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
Jan 13 21:06:52.257375 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
Jan 13 21:06:52.258302 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Jan 13 21:06:52.258435 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
Jan 13 21:06:52.263248 ignition[1001]: INFO : Ignition 2.19.0
Jan 13 21:06:52.263248 ignition[1001]: INFO : Stage: umount
Jan 13 21:06:52.263248 ignition[1001]: INFO : no configs at "/usr/lib/ignition/base.d"
Jan 13 21:06:52.263248 ignition[1001]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Jan 13 21:06:52.263248 ignition[1001]: INFO : umount: umount passed
Jan 13 21:06:52.263248 ignition[1001]: INFO : Ignition finished successfully
Jan 13 21:06:52.260368 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Jan 13 21:06:52.260466 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
Jan 13 21:06:52.264746 systemd[1]: ignition-mount.service: Deactivated successfully.
Jan 13 21:06:52.264844 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
Jan 13 21:06:52.266715 systemd[1]: Stopped target network.target - Network.
Jan 13 21:06:52.267872 systemd[1]: ignition-disks.service: Deactivated successfully.
Jan 13 21:06:52.267932 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
Jan 13 21:06:52.270110 systemd[1]: ignition-kargs.service: Deactivated successfully.
Jan 13 21:06:52.270165 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
Jan 13 21:06:52.271990 systemd[1]: ignition-setup.service: Deactivated successfully.
Jan 13 21:06:52.272034 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
Jan 13 21:06:52.273878 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
Jan 13 21:06:52.273920 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
Jan 13 21:06:52.276094 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
Jan 13 21:06:52.278082 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
Jan 13 21:06:52.280741 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Jan 13 21:06:52.281414 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Jan 13 21:06:52.281506 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
Jan 13 21:06:52.285253 systemd-networkd[771]: eth0: DHCPv6 lease lost
Jan 13 21:06:52.287605 systemd[1]: systemd-networkd.service: Deactivated successfully.
Jan 13 21:06:52.287709 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
Jan 13 21:06:52.289390 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Jan 13 21:06:52.289419 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
Jan 13 21:06:52.298262 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
Jan 13 21:06:52.299594 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Jan 13 21:06:52.299655 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Jan 13 21:06:52.301788 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
Jan 13 21:06:52.304520 systemd[1]: systemd-resolved.service: Deactivated successfully.
Jan 13 21:06:52.305588 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
Jan 13 21:06:52.312554 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Jan 13 21:06:52.312608 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Jan 13 21:06:52.314567 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Jan 13 21:06:52.314613 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
Jan 13 21:06:52.316751 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Jan 13 21:06:52.316792 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
Jan 13 21:06:52.320259 systemd[1]: systemd-udevd.service: Deactivated successfully.
Jan 13 21:06:52.321996 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
Jan 13 21:06:52.323635 systemd[1]: network-cleanup.service: Deactivated successfully.
Jan 13 21:06:52.323715 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
Jan 13 21:06:52.326011 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Jan 13 21:06:52.326078 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
Jan 13 21:06:52.328263 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Jan 13 21:06:52.328298 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
Jan 13 21:06:52.329473 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Jan 13 21:06:52.329518 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
Jan 13 21:06:52.333398 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Jan 13 21:06:52.333439 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Jan 13 21:06:52.335293 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Jan 13 21:06:52.335334 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jan 13 21:06:52.343318 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Jan 13 21:06:52.344893 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Jan 13 21:06:52.344943 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jan 13 21:06:52.346929 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jan 13 21:06:52.346971 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jan 13 21:06:52.349161 systemd[1]: sysroot-boot.service: Deactivated successfully. Jan 13 21:06:52.349252 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Jan 13 21:06:52.350822 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Jan 13 21:06:52.350885 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Jan 13 21:06:52.353996 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Jan 13 21:06:52.355265 systemd[1]: initrd-setup-root.service: Deactivated successfully. Jan 13 21:06:52.355334 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Jan 13 21:06:52.358107 systemd[1]: Starting initrd-switch-root.service - Switch Root... Jan 13 21:06:52.367707 systemd[1]: Switching root. Jan 13 21:06:52.394463 systemd-journald[239]: Journal stopped Jan 13 21:06:53.071326 systemd-journald[239]: Received SIGTERM from PID 1 (systemd). 
Jan 13 21:06:53.071380 kernel: SELinux: policy capability network_peer_controls=1
Jan 13 21:06:53.071392 kernel: SELinux: policy capability open_perms=1
Jan 13 21:06:53.071402 kernel: SELinux: policy capability extended_socket_class=1
Jan 13 21:06:53.071416 kernel: SELinux: policy capability always_check_network=0
Jan 13 21:06:53.071428 kernel: SELinux: policy capability cgroup_seclabel=1
Jan 13 21:06:53.071438 kernel: SELinux: policy capability nnp_nosuid_transition=1
Jan 13 21:06:53.071448 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
Jan 13 21:06:53.071458 kernel: SELinux: policy capability ioctl_skip_cloexec=0
Jan 13 21:06:53.071467 kernel: audit: type=1403 audit(1736802412.538:2): auid=4294967295 ses=4294967295 lsm=selinux res=1
Jan 13 21:06:53.071481 systemd[1]: Successfully loaded SELinux policy in 31.408ms.
Jan 13 21:06:53.071498 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 9.140ms.
Jan 13 21:06:53.071510 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Jan 13 21:06:53.071521 systemd[1]: Detected virtualization kvm.
Jan 13 21:06:53.071533 systemd[1]: Detected architecture arm64.
Jan 13 21:06:53.071543 systemd[1]: Detected first boot.
Jan 13 21:06:53.071553 systemd[1]: Initializing machine ID from VM UUID.
Jan 13 21:06:53.071564 zram_generator::config[1047]: No configuration found.
Jan 13 21:06:53.071575 systemd[1]: Populated /etc with preset unit settings.
Jan 13 21:06:53.071586 systemd[1]: initrd-switch-root.service: Deactivated successfully.
Jan 13 21:06:53.071596 systemd[1]: Stopped initrd-switch-root.service - Switch Root.
Jan 13 21:06:53.071607 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
Jan 13 21:06:53.071620 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
Jan 13 21:06:53.071630 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
Jan 13 21:06:53.071641 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
Jan 13 21:06:53.071651 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
Jan 13 21:06:53.071662 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
Jan 13 21:06:53.071673 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
Jan 13 21:06:53.071684 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
Jan 13 21:06:53.071694 systemd[1]: Created slice user.slice - User and Session Slice.
Jan 13 21:06:53.071706 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Jan 13 21:06:53.071718 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Jan 13 21:06:53.071729 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
Jan 13 21:06:53.071740 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
Jan 13 21:06:53.071751 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
Jan 13 21:06:53.071762 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Jan 13 21:06:53.071773 systemd[1]: Expecting device dev-ttyAMA0.device - /dev/ttyAMA0...
Jan 13 21:06:53.071783 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Jan 13 21:06:53.071794 systemd[1]: Stopped target initrd-switch-root.target - Switch Root.
Jan 13 21:06:53.071807 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems.
Jan 13 21:06:53.071818 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System.
Jan 13 21:06:53.071829 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
Jan 13 21:06:53.071840 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Jan 13 21:06:53.071850 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Jan 13 21:06:53.071861 systemd[1]: Reached target slices.target - Slice Units.
Jan 13 21:06:53.071872 systemd[1]: Reached target swap.target - Swaps.
Jan 13 21:06:53.071883 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
Jan 13 21:06:53.071895 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
Jan 13 21:06:53.071905 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Jan 13 21:06:53.071916 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Jan 13 21:06:53.071926 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Jan 13 21:06:53.071937 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
Jan 13 21:06:53.071948 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
Jan 13 21:06:53.071959 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
Jan 13 21:06:53.071969 systemd[1]: Mounting media.mount - External Media Directory...
Jan 13 21:06:53.071979 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
Jan 13 21:06:53.071991 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
Jan 13 21:06:53.072002 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
Jan 13 21:06:53.072013 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Jan 13 21:06:53.072024 systemd[1]: Reached target machines.target - Containers.
Jan 13 21:06:53.072035 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
Jan 13 21:06:53.072045 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Jan 13 21:06:53.072056 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Jan 13 21:06:53.072067 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
Jan 13 21:06:53.072077 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Jan 13 21:06:53.072089 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Jan 13 21:06:53.072099 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Jan 13 21:06:53.072113 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
Jan 13 21:06:53.072130 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Jan 13 21:06:53.072141 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Jan 13 21:06:53.072151 systemd[1]: systemd-fsck-root.service: Deactivated successfully.
Jan 13 21:06:53.072162 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device.
Jan 13 21:06:53.072172 kernel: fuse: init (API version 7.39)
Jan 13 21:06:53.072193 systemd[1]: systemd-fsck-usr.service: Deactivated successfully.
Jan 13 21:06:53.072208 systemd[1]: Stopped systemd-fsck-usr.service.
Jan 13 21:06:53.072218 systemd[1]: Starting systemd-journald.service - Journal Service...
Jan 13 21:06:53.072229 kernel: loop: module loaded
Jan 13 21:06:53.072239 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Jan 13 21:06:53.072250 kernel: ACPI: bus type drm_connector registered
Jan 13 21:06:53.072260 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Jan 13 21:06:53.072271 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
Jan 13 21:06:53.072281 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Jan 13 21:06:53.072293 systemd[1]: verity-setup.service: Deactivated successfully.
Jan 13 21:06:53.072319 systemd-journald[1128]: Collecting audit messages is disabled.
Jan 13 21:06:53.072341 systemd[1]: Stopped verity-setup.service.
Jan 13 21:06:53.072352 systemd-journald[1128]: Journal started
Jan 13 21:06:53.072373 systemd-journald[1128]: Runtime Journal (/run/log/journal/7a28643ead184fab8ddf2fb2bc492ed9) is 5.9M, max 47.3M, 41.4M free.
Jan 13 21:06:52.870732 systemd[1]: Queued start job for default target multi-user.target.
Jan 13 21:06:52.888725 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6.
Jan 13 21:06:52.889051 systemd[1]: systemd-journald.service: Deactivated successfully.
Jan 13 21:06:53.074991 systemd[1]: Started systemd-journald.service - Journal Service.
Jan 13 21:06:53.075662 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
Jan 13 21:06:53.076842 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
Jan 13 21:06:53.078026 systemd[1]: Mounted media.mount - External Media Directory.
Jan 13 21:06:53.079169 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
Jan 13 21:06:53.080315 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System.
Jan 13 21:06:53.081492 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp.
Jan 13 21:06:53.082656 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files.
Jan 13 21:06:53.084002 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Jan 13 21:06:53.085445 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Jan 13 21:06:53.085576 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
Jan 13 21:06:53.087012 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Jan 13 21:06:53.087150 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Jan 13 21:06:53.089496 systemd[1]: modprobe@drm.service: Deactivated successfully.
Jan 13 21:06:53.089626 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Jan 13 21:06:53.091028 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Jan 13 21:06:53.091184 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Jan 13 21:06:53.092594 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Jan 13 21:06:53.092716 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse.
Jan 13 21:06:53.094004 systemd[1]: modprobe@loop.service: Deactivated successfully.
Jan 13 21:06:53.094141 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Jan 13 21:06:53.095543 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Jan 13 21:06:53.096846 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Jan 13 21:06:53.098300 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems.
Jan 13 21:06:53.109648 systemd[1]: Reached target network-pre.target - Preparation for Network.
Jan 13 21:06:53.122284 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System...
Jan 13 21:06:53.124220 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System...
Jan 13 21:06:53.125262 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Jan 13 21:06:53.125296 systemd[1]: Reached target local-fs.target - Local File Systems.
Jan 13 21:06:53.127104 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink).
Jan 13 21:06:53.129162 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown...
Jan 13 21:06:53.131227 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache...
Jan 13 21:06:53.132312 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Jan 13 21:06:53.133712 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database...
Jan 13 21:06:53.135558 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage...
Jan 13 21:06:53.136899 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Jan 13 21:06:53.140362 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed...
Jan 13 21:06:53.141568 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Jan 13 21:06:53.142623 systemd-journald[1128]: Time spent on flushing to /var/log/journal/7a28643ead184fab8ddf2fb2bc492ed9 is 22.675ms for 851 entries.
Jan 13 21:06:53.142623 systemd-journald[1128]: System Journal (/var/log/journal/7a28643ead184fab8ddf2fb2bc492ed9) is 8.0M, max 195.6M, 187.6M free.
Jan 13 21:06:53.170939 systemd-journald[1128]: Received client request to flush runtime journal.
Jan 13 21:06:53.170982 kernel: loop0: detected capacity change from 0 to 194096
Jan 13 21:06:53.144750 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Jan 13 21:06:53.146813 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/...
Jan 13 21:06:53.150452 systemd[1]: Starting systemd-sysusers.service - Create System Users...
Jan 13 21:06:53.154802 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Jan 13 21:06:53.156416 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System.
Jan 13 21:06:53.157905 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System.
Jan 13 21:06:53.159615 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown.
Jan 13 21:06:53.161867 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed.
Jan 13 21:06:53.166314 systemd[1]: Reached target first-boot-complete.target - First Boot Complete.
Jan 13 21:06:53.176340 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk...
Jan 13 21:06:53.179367 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization...
Jan 13 21:06:53.182236 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage.
Jan 13 21:06:53.184196 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
Jan 13 21:06:53.189353 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Jan 13 21:06:53.199247 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
Jan 13 21:06:53.200228 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk.
Jan 13 21:06:53.209370 udevadm[1171]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation-early.service, lvm2-activation.service not to pull it in.
Jan 13 21:06:53.212357 systemd[1]: Finished systemd-sysusers.service - Create System Users.
Jan 13 21:06:53.214191 kernel: loop1: detected capacity change from 0 to 114432
Jan 13 21:06:53.229306 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Jan 13 21:06:53.244219 kernel: loop2: detected capacity change from 0 to 114328
Jan 13 21:06:53.246704 systemd-tmpfiles[1179]: ACLs are not supported, ignoring.
Jan 13 21:06:53.246721 systemd-tmpfiles[1179]: ACLs are not supported, ignoring.
Jan 13 21:06:53.253599 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Jan 13 21:06:53.288209 kernel: loop3: detected capacity change from 0 to 194096
Jan 13 21:06:53.293199 kernel: loop4: detected capacity change from 0 to 114432
Jan 13 21:06:53.297198 kernel: loop5: detected capacity change from 0 to 114328
Jan 13 21:06:53.299915 (sd-merge)[1185]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes'.
Jan 13 21:06:53.300334 (sd-merge)[1185]: Merged extensions into '/usr'.
Jan 13 21:06:53.304020 systemd[1]: Reloading requested from client PID 1158 ('systemd-sysext') (unit systemd-sysext.service)...
Jan 13 21:06:53.304033 systemd[1]: Reloading...
Jan 13 21:06:53.352199 zram_generator::config[1208]: No configuration found.
Jan 13 21:06:53.414052 ldconfig[1153]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
Jan 13 21:06:53.442747 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Jan 13 21:06:53.477919 systemd[1]: Reloading finished in 173 ms.
Jan 13 21:06:53.515027 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache.
Jan 13 21:06:53.516460 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/.
Jan 13 21:06:53.535341 systemd[1]: Starting ensure-sysext.service...
Jan 13 21:06:53.537250 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Jan 13 21:06:53.542152 systemd[1]: Reloading requested from client PID 1245 ('systemctl') (unit ensure-sysext.service)...
Jan 13 21:06:53.542163 systemd[1]: Reloading...
Jan 13 21:06:53.554091 systemd-tmpfiles[1246]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
Jan 13 21:06:53.554403 systemd-tmpfiles[1246]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring.
Jan 13 21:06:53.555036 systemd-tmpfiles[1246]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
Jan 13 21:06:53.555349 systemd-tmpfiles[1246]: ACLs are not supported, ignoring.
Jan 13 21:06:53.555407 systemd-tmpfiles[1246]: ACLs are not supported, ignoring.
Jan 13 21:06:53.557614 systemd-tmpfiles[1246]: Detected autofs mount point /boot during canonicalization of boot.
Jan 13 21:06:53.557626 systemd-tmpfiles[1246]: Skipping /boot
Jan 13 21:06:53.565327 systemd-tmpfiles[1246]: Detected autofs mount point /boot during canonicalization of boot.
Jan 13 21:06:53.565340 systemd-tmpfiles[1246]: Skipping /boot
Jan 13 21:06:53.587269 zram_generator::config[1275]: No configuration found.
Jan 13 21:06:53.668449 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Jan 13 21:06:53.703511 systemd[1]: Reloading finished in 161 ms.
Jan 13 21:06:53.717211 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database.
Jan 13 21:06:53.724652 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Jan 13 21:06:53.731911 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules...
Jan 13 21:06:53.734547 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs...
Jan 13 21:06:53.736722 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog...
Jan 13 21:06:53.741475 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Jan 13 21:06:53.748473 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Jan 13 21:06:53.752634 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP...
Jan 13 21:06:53.756494 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Jan 13 21:06:53.758264 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Jan 13 21:06:53.761821 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Jan 13 21:06:53.766586 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Jan 13 21:06:53.767894 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Jan 13 21:06:53.772010 systemd[1]: Starting systemd-userdbd.service - User Database Manager...
Jan 13 21:06:53.776444 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog.
Jan 13 21:06:53.778271 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Jan 13 21:06:53.778423 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Jan 13 21:06:53.780305 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Jan 13 21:06:53.780439 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Jan 13 21:06:53.782504 systemd[1]: modprobe@loop.service: Deactivated successfully.
Jan 13 21:06:53.782641 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Jan 13 21:06:53.784865 systemd-udevd[1318]: Using default interface naming scheme 'v255'.
Jan 13 21:06:53.788763 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Jan 13 21:06:53.788991 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Jan 13 21:06:53.798521 systemd[1]: Starting systemd-update-done.service - Update is Completed...
Jan 13 21:06:53.800247 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs.
Jan 13 21:06:53.802140 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP.
Jan 13 21:06:53.803814 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Jan 13 21:06:53.810987 systemd[1]: Finished systemd-update-done.service - Update is Completed.
Jan 13 21:06:53.812523 systemd[1]: Started systemd-userdbd.service - User Database Manager.
Jan 13 21:06:53.815505 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Jan 13 21:06:53.824396 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Jan 13 21:06:53.828191 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Jan 13 21:06:53.831052 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Jan 13 21:06:53.832574 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Jan 13 21:06:53.834556 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Jan 13 21:06:53.836651 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Jan 13 21:06:53.837516 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Jan 13 21:06:53.837640 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Jan 13 21:06:53.845215 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Jan 13 21:06:53.845345 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Jan 13 21:06:53.847748 systemd[1]: modprobe@loop.service: Deactivated successfully.
Jan 13 21:06:53.847864 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Jan 13 21:06:53.859272 systemd[1]: Finished ensure-sysext.service.
Jan 13 21:06:53.862960 systemd[1]: Condition check resulted in dev-ttyAMA0.device - /dev/ttyAMA0 being skipped.
Jan 13 21:06:53.865362 augenrules[1373]: No rules
Jan 13 21:06:53.866490 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Jan 13 21:06:53.874225 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 39 scanned by (udev-worker) (1357)
Jan 13 21:06:53.880464 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Jan 13 21:06:53.884844 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Jan 13 21:06:53.886400 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Jan 13 21:06:53.886447 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Jan 13 21:06:53.894511 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization...
Jan 13 21:06:53.896124 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Jan 13 21:06:53.896543 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules.
Jan 13 21:06:53.898115 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Jan 13 21:06:53.898444 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Jan 13 21:06:53.901416 systemd[1]: modprobe@drm.service: Deactivated successfully.
Jan 13 21:06:53.901550 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Jan 13 21:06:53.910593 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Jan 13 21:06:53.923350 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM...
Jan 13 21:06:53.924621 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Jan 13 21:06:53.928934 systemd-resolved[1315]: Positive Trust Anchors:
Jan 13 21:06:53.928960 systemd-resolved[1315]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Jan 13 21:06:53.928992 systemd-resolved[1315]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Jan 13 21:06:53.937743 systemd-resolved[1315]: Defaulting to hostname 'linux'.
Jan 13 21:06:53.945650 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Jan 13 21:06:53.947658 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM.
Jan 13 21:06:53.949133 systemd-networkd[1371]: lo: Link UP
Jan 13 21:06:53.949356 systemd-networkd[1371]: lo: Gained carrier
Jan 13 21:06:53.950023 systemd-networkd[1371]: Enumeration completed
Jan 13 21:06:53.950394 systemd[1]: Started systemd-networkd.service - Network Configuration.
Jan 13 21:06:53.954487 systemd-networkd[1371]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jan 13 21:06:53.954560 systemd-networkd[1371]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Jan 13 21:06:53.954927 systemd[1]: Reached target network.target - Network.
Jan 13 21:06:53.955245 systemd-networkd[1371]: eth0: Link UP
Jan 13 21:06:53.955348 systemd-networkd[1371]: eth0: Gained carrier
Jan 13 21:06:53.955400 systemd-networkd[1371]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jan 13 21:06:53.958356 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Jan 13 21:06:53.966448 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured...
Jan 13 21:06:53.968685 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jan 13 21:06:53.970132 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization.
Jan 13 21:06:53.971661 systemd[1]: Reached target time-set.target - System Time Set.
Jan 13 21:06:53.974272 systemd-networkd[1371]: eth0: DHCPv4 address 10.0.0.15/16, gateway 10.0.0.1 acquired from 10.0.0.1
Jan 13 21:06:53.974688 systemd-timesyncd[1391]: Network configuration changed, trying to establish connection.
Jan 13 21:06:53.975886 systemd-timesyncd[1391]: Contacted time server 10.0.0.1:123 (10.0.0.1).
Jan 13 21:06:53.975941 systemd-timesyncd[1391]: Initial clock synchronization to Mon 2025-01-13 21:06:53.902887 UTC.
Jan 13 21:06:53.980207 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization.
Jan 13 21:06:53.995419 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes...
Jan 13 21:06:54.011216 lvm[1406]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Jan 13 21:06:54.013045 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jan 13 21:06:54.044542 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes. Jan 13 21:06:54.045941 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Jan 13 21:06:54.047021 systemd[1]: Reached target sysinit.target - System Initialization. Jan 13 21:06:54.048144 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Jan 13 21:06:54.049324 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Jan 13 21:06:54.050643 systemd[1]: Started logrotate.timer - Daily rotation of log files. Jan 13 21:06:54.051767 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Jan 13 21:06:54.052952 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Jan 13 21:06:54.054128 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Jan 13 21:06:54.054164 systemd[1]: Reached target paths.target - Path Units. Jan 13 21:06:54.055003 systemd[1]: Reached target timers.target - Timer Units. Jan 13 21:06:54.056644 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Jan 13 21:06:54.058802 systemd[1]: Starting docker.socket - Docker Socket for the API... Jan 13 21:06:54.070925 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Jan 13 21:06:54.072909 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes... Jan 13 21:06:54.074420 systemd[1]: Listening on docker.socket - Docker Socket for the API. Jan 13 21:06:54.075554 systemd[1]: Reached target sockets.target - Socket Units. Jan 13 21:06:54.076462 systemd[1]: Reached target basic.target - Basic System. 
Jan 13 21:06:54.077395 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met.
Jan 13 21:06:54.077420 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met.
Jan 13 21:06:54.078170 systemd[1]: Starting containerd.service - containerd container runtime...
Jan 13 21:06:54.080014 systemd[1]: Starting dbus.service - D-Bus System Message Bus...
Jan 13 21:06:54.081293 lvm[1413]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Jan 13 21:06:54.082702 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit...
Jan 13 21:06:54.086213 systemd[1]: Starting extend-filesystems.service - Extend Filesystems...
Jan 13 21:06:54.087322 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment).
Jan 13 21:06:54.093013 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd...
Jan 13 21:06:54.093336 jq[1416]: false
Jan 13 21:06:54.097460 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin...
Jan 13 21:06:54.098134 extend-filesystems[1417]: Found loop3
Jan 13 21:06:54.098134 extend-filesystems[1417]: Found loop4
Jan 13 21:06:54.098134 extend-filesystems[1417]: Found loop5
Jan 13 21:06:54.098134 extend-filesystems[1417]: Found vda
Jan 13 21:06:54.098134 extend-filesystems[1417]: Found vda1
Jan 13 21:06:54.107601 extend-filesystems[1417]: Found vda2
Jan 13 21:06:54.107601 extend-filesystems[1417]: Found vda3
Jan 13 21:06:54.107601 extend-filesystems[1417]: Found usr
Jan 13 21:06:54.107601 extend-filesystems[1417]: Found vda4
Jan 13 21:06:54.107601 extend-filesystems[1417]: Found vda6
Jan 13 21:06:54.107601 extend-filesystems[1417]: Found vda7
Jan 13 21:06:54.107601 extend-filesystems[1417]: Found vda9
Jan 13 21:06:54.107601 extend-filesystems[1417]: Checking size of /dev/vda9
Jan 13 21:06:54.102525 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline...
Jan 13 21:06:54.114071 dbus-daemon[1415]: [system] SELinux support is enabled
Jan 13 21:06:54.105503 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys...
Jan 13 21:06:54.112323 systemd[1]: Starting systemd-logind.service - User Login Management...
Jan 13 21:06:54.122222 extend-filesystems[1417]: Resized partition /dev/vda9
Jan 13 21:06:54.124530 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 39 scanned by (udev-worker) (1348)
Jan 13 21:06:54.122341 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0).
Jan 13 21:06:54.122799 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details.
Jan 13 21:06:54.131238 extend-filesystems[1435]: resize2fs 1.47.1 (20-May-2024)
Jan 13 21:06:54.138552 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 1864699 blocks
Jan 13 21:06:54.133461 systemd[1]: Starting update-engine.service - Update Engine...
Jan 13 21:06:54.138451 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition...
Jan 13 21:06:54.140670 systemd[1]: Started dbus.service - D-Bus System Message Bus.
Jan 13 21:06:54.145513 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes.
Jan 13 21:06:54.148454 jq[1439]: true
Jan 13 21:06:54.150545 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'.
Jan 13 21:06:54.150731 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped.
Jan 13 21:06:54.150988 systemd[1]: motdgen.service: Deactivated successfully.
Jan 13 21:06:54.151128 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd.
Jan 13 21:06:54.153561 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully.
Jan 13 21:06:54.153718 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline.
Jan 13 21:06:54.171365 (ntainerd)[1444]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR
Jan 13 21:06:54.178206 jq[1442]: true
Jan 13 21:06:54.183695 systemd-logind[1431]: Watching system buttons on /dev/input/event0 (Power Button)
Jan 13 21:06:54.183884 systemd-logind[1431]: New seat seat0.
Jan 13 21:06:54.185092 systemd[1]: Started systemd-logind.service - User Login Management.
Jan 13 21:06:54.188545 update_engine[1436]: I20250113 21:06:54.188326 1436 main.cc:92] Flatcar Update Engine starting
Jan 13 21:06:54.192056 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml).
Jan 13 21:06:54.192216 systemd[1]: Reached target system-config.target - Load system-provided cloud configs.
Jan 13 21:06:54.193769 update_engine[1436]: I20250113 21:06:54.193718 1436 update_check_scheduler.cc:74] Next update check in 7m49s
Jan 13 21:06:54.194186 kernel: EXT4-fs (vda9): resized filesystem to 1864699
Jan 13 21:06:54.194214 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url).
Jan 13 21:06:54.194494 systemd[1]: Reached target user-config.target - Load user-provided cloud configs.
Jan 13 21:06:54.197908 systemd[1]: Started update-engine.service - Update Engine.
Jan 13 21:06:54.200580 systemd[1]: Started locksmithd.service - Cluster reboot manager.
Jan 13 21:06:54.203466 extend-filesystems[1435]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required
Jan 13 21:06:54.203466 extend-filesystems[1435]: old_desc_blocks = 1, new_desc_blocks = 1
Jan 13 21:06:54.203466 extend-filesystems[1435]: The filesystem on /dev/vda9 is now 1864699 (4k) blocks long.
Jan 13 21:06:54.208253 extend-filesystems[1417]: Resized filesystem in /dev/vda9
Jan 13 21:06:54.206642 systemd[1]: extend-filesystems.service: Deactivated successfully.
Jan 13 21:06:54.211266 tar[1441]: linux-arm64/helm
Jan 13 21:06:54.208217 systemd[1]: Finished extend-filesystems.service - Extend Filesystems.
Jan 13 21:06:54.248477 bash[1470]: Updated "/home/core/.ssh/authorized_keys"
Jan 13 21:06:54.250099 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition.
Jan 13 21:06:54.251930 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met.
Jan 13 21:06:54.285331 locksmithd[1455]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot"
Jan 13 21:06:54.379237 containerd[1444]: time="2025-01-13T21:06:54.379060929Z" level=info msg="starting containerd" revision=174e0d1785eeda18dc2beba45e1d5a188771636b version=v1.7.21
Jan 13 21:06:54.403560 containerd[1444]: time="2025-01-13T21:06:54.403505990Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
Jan 13 21:06:54.404857 containerd[1444]: time="2025-01-13T21:06:54.404824876Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.71-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1
Jan 13 21:06:54.405255 containerd[1444]: time="2025-01-13T21:06:54.404928250Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
Jan 13 21:06:54.405255 containerd[1444]: time="2025-01-13T21:06:54.404950883Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
Jan 13 21:06:54.405255 containerd[1444]: time="2025-01-13T21:06:54.405096510Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
Jan 13 21:06:54.405255 containerd[1444]: time="2025-01-13T21:06:54.405113316Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
Jan 13 21:06:54.405255 containerd[1444]: time="2025-01-13T21:06:54.405162188Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
Jan 13 21:06:54.405255 containerd[1444]: time="2025-01-13T21:06:54.405193422Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
Jan 13 21:06:54.405694 containerd[1444]: time="2025-01-13T21:06:54.405623684Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Jan 13 21:06:54.405767 containerd[1444]: time="2025-01-13T21:06:54.405751077Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
Jan 13 21:06:54.405883 containerd[1444]: time="2025-01-13T21:06:54.405866223Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1
Jan 13 21:06:54.406671 containerd[1444]: time="2025-01-13T21:06:54.405973957Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
Jan 13 21:06:54.406671 containerd[1444]: time="2025-01-13T21:06:54.406082166Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
Jan 13 21:06:54.406671 containerd[1444]: time="2025-01-13T21:06:54.406291530Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
Jan 13 21:06:54.406671 containerd[1444]: time="2025-01-13T21:06:54.406400849Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Jan 13 21:06:54.406671 containerd[1444]: time="2025-01-13T21:06:54.406415277Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
Jan 13 21:06:54.406671 containerd[1444]: time="2025-01-13T21:06:54.406494195Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
Jan 13 21:06:54.406671 containerd[1444]: time="2025-01-13T21:06:54.406541640Z" level=info msg="metadata content store policy set" policy=shared
Jan 13 21:06:54.409671 containerd[1444]: time="2025-01-13T21:06:54.409646576Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
Jan 13 21:06:54.409789 containerd[1444]: time="2025-01-13T21:06:54.409772543Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
Jan 13 21:06:54.409898 containerd[1444]: time="2025-01-13T21:06:54.409882259Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
Jan 13 21:06:54.410008 containerd[1444]: time="2025-01-13T21:06:54.409946590Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
Jan 13 21:06:54.410068 containerd[1444]: time="2025-01-13T21:06:54.410055711Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
Jan 13 21:06:54.410305 containerd[1444]: time="2025-01-13T21:06:54.410284259Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
Jan 13 21:06:54.410720 containerd[1444]: time="2025-01-13T21:06:54.410699696Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
Jan 13 21:06:54.411166 containerd[1444]: time="2025-01-13T21:06:54.411139747Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
Jan 13 21:06:54.411853 containerd[1444]: time="2025-01-13T21:06:54.411801926Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
Jan 13 21:06:54.411999 containerd[1444]: time="2025-01-13T21:06:54.411980689Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
Jan 13 21:06:54.412118 containerd[1444]: time="2025-01-13T21:06:54.412102098Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
Jan 13 21:06:54.412227 containerd[1444]: time="2025-01-13T21:06:54.412211734Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
Jan 13 21:06:54.412363 containerd[1444]: time="2025-01-13T21:06:54.412346025Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
Jan 13 21:06:54.412447 containerd[1444]: time="2025-01-13T21:06:54.412432790Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
Jan 13 21:06:54.412612 containerd[1444]: time="2025-01-13T21:06:54.412546153Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
Jan 13 21:06:54.412612 containerd[1444]: time="2025-01-13T21:06:54.412567121Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
Jan 13 21:06:54.412612 containerd[1444]: time="2025-01-13T21:06:54.412580042Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
Jan 13 21:06:54.412612 containerd[1444]: time="2025-01-13T21:06:54.412590982Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
Jan 13 21:06:54.412788 containerd[1444]: time="2025-01-13T21:06:54.412764950Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
Jan 13 21:06:54.412900 containerd[1444]: time="2025-01-13T21:06:54.412838516Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
Jan 13 21:06:54.412900 containerd[1444]: time="2025-01-13T21:06:54.412856789Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
Jan 13 21:06:54.412900 containerd[1444]: time="2025-01-13T21:06:54.412878074Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
Jan 13 21:06:54.413097 containerd[1444]: time="2025-01-13T21:06:54.412891590Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
Jan 13 21:06:54.413097 containerd[1444]: time="2025-01-13T21:06:54.413054261Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
Jan 13 21:06:54.413097 containerd[1444]: time="2025-01-13T21:06:54.413069165Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
Jan 13 21:06:54.413097 containerd[1444]: time="2025-01-13T21:06:54.413081968Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
Jan 13 21:06:54.413399 containerd[1444]: time="2025-01-13T21:06:54.413308335Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
Jan 13 21:06:54.413399 containerd[1444]: time="2025-01-13T21:06:54.413348369Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
Jan 13 21:06:54.413399 containerd[1444]: time="2025-01-13T21:06:54.413362163Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
Jan 13 21:06:54.413399 containerd[1444]: time="2025-01-13T21:06:54.413373618Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
Jan 13 21:06:54.413399 containerd[1444]: time="2025-01-13T21:06:54.413385667Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
Jan 13 21:06:54.413645 containerd[1444]: time="2025-01-13T21:06:54.413584408Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
Jan 13 21:06:54.413645 containerd[1444]: time="2025-01-13T21:06:54.413618377Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
Jan 13 21:06:54.413731 containerd[1444]: time="2025-01-13T21:06:54.413631338Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
Jan 13 21:06:54.413905 containerd[1444]: time="2025-01-13T21:06:54.413812837Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
Jan 13 21:06:54.414646 containerd[1444]: time="2025-01-13T21:06:54.414034250Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
Jan 13 21:06:54.414646 containerd[1444]: time="2025-01-13T21:06:54.414110234Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1
Jan 13 21:06:54.414646 containerd[1444]: time="2025-01-13T21:06:54.414123077Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
Jan 13 21:06:54.414646 containerd[1444]: time="2025-01-13T21:06:54.414135563Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1
Jan 13 21:06:54.414646 containerd[1444]: time="2025-01-13T21:06:54.414154271Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
Jan 13 21:06:54.414646 containerd[1444]: time="2025-01-13T21:06:54.414170721Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
Jan 13 21:06:54.414646 containerd[1444]: time="2025-01-13T21:06:54.414197079Z" level=info msg="NRI interface is disabled by configuration."
Jan 13 21:06:54.414646 containerd[1444]: time="2025-01-13T21:06:54.414210160Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1
Jan 13 21:06:54.414843 containerd[1444]: time="2025-01-13T21:06:54.414533797Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}"
Jan 13 21:06:54.414843 containerd[1444]: time="2025-01-13T21:06:54.414587426Z" level=info msg="Connect containerd service"
Jan 13 21:06:54.414843 containerd[1444]: time="2025-01-13T21:06:54.414612239Z" level=info msg="using legacy CRI server"
Jan 13 21:06:54.414843 containerd[1444]: time="2025-01-13T21:06:54.414618779Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this"
Jan 13 21:06:54.414843 containerd[1444]: time="2025-01-13T21:06:54.414698014Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\""
Jan 13 21:06:54.415498 containerd[1444]: time="2025-01-13T21:06:54.415351432Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Jan 13 21:06:54.415684 containerd[1444]: time="2025-01-13T21:06:54.415573440Z" level=info msg="Start subscribing containerd event"
Jan 13 21:06:54.415684 containerd[1444]: time="2025-01-13T21:06:54.415636185Z" level=info msg="Start recovering state"
Jan 13 21:06:54.415740 containerd[1444]: time="2025-01-13T21:06:54.415703925Z" level=info msg="Start event monitor"
Jan 13 21:06:54.415740 containerd[1444]: time="2025-01-13T21:06:54.415716886Z" level=info msg="Start snapshots syncer"
Jan 13 21:06:54.415740 containerd[1444]: time="2025-01-13T21:06:54.415725924Z" level=info msg="Start cni network conf syncer for default"
Jan 13 21:06:54.415740 containerd[1444]: time="2025-01-13T21:06:54.415732900Z" level=info msg="Start streaming server"
Jan 13 21:06:54.415952 containerd[1444]: time="2025-01-13T21:06:54.415797588Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc
Jan 13 21:06:54.415952 containerd[1444]: time="2025-01-13T21:06:54.415833618Z" level=info msg=serving... address=/run/containerd/containerd.sock
Jan 13 21:06:54.415948 systemd[1]: Started containerd.service - containerd container runtime.
Jan 13 21:06:54.417615 containerd[1444]: time="2025-01-13T21:06:54.417591961Z" level=info msg="containerd successfully booted in 0.039251s"
Jan 13 21:06:54.523929 tar[1441]: linux-arm64/LICENSE
Jan 13 21:06:54.524008 tar[1441]: linux-arm64/README.md
Jan 13 21:06:54.536441 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin.
Jan 13 21:06:55.634312 systemd-networkd[1371]: eth0: Gained IPv6LL
Jan 13 21:06:55.638738 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured.
Jan 13 21:06:55.640874 systemd[1]: Reached target network-online.target - Network is Online.
Jan 13 21:06:55.656350 sshd_keygen[1437]: ssh-keygen: generating new host keys: RSA ECDSA ED25519
Jan 13 21:06:55.658430 systemd[1]: Starting coreos-metadata.service - QEMU metadata agent...
Jan 13 21:06:55.660636 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jan 13 21:06:55.662899 systemd[1]: Starting nvidia.service - NVIDIA Configure Service...
Jan 13 21:06:55.679068 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys.
Jan 13 21:06:55.680731 systemd[1]: coreos-metadata.service: Deactivated successfully.
Jan 13 21:06:55.680881 systemd[1]: Finished coreos-metadata.service - QEMU metadata agent.
Jan 13 21:06:55.682351 systemd[1]: Finished nvidia.service - NVIDIA Configure Service.
Jan 13 21:06:55.685327 systemd[1]: Starting issuegen.service - Generate /run/issue...
Jan 13 21:06:55.686351 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met.
Jan 13 21:06:55.692997 systemd[1]: issuegen.service: Deactivated successfully.
Jan 13 21:06:55.693163 systemd[1]: Finished issuegen.service - Generate /run/issue.
Jan 13 21:06:55.696025 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions...
Jan 13 21:06:55.706951 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions.
Jan 13 21:06:55.709548 systemd[1]: Started getty@tty1.service - Getty on tty1.
Jan 13 21:06:55.711560 systemd[1]: Started serial-getty@ttyAMA0.service - Serial Getty on ttyAMA0.
Jan 13 21:06:55.712963 systemd[1]: Reached target getty.target - Login Prompts.
Jan 13 21:06:56.179490 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Jan 13 21:06:56.181024 systemd[1]: Reached target multi-user.target - Multi-User System.
Jan 13 21:06:56.183033 (kubelet)[1528]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Jan 13 21:06:56.183397 systemd[1]: Startup finished in 577ms (kernel) + 4.838s (initrd) + 3.677s (userspace) = 9.093s.
Jan 13 21:06:56.633872 kubelet[1528]: E0113 21:06:56.633822 1528 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Jan 13 21:06:56.636246 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Jan 13 21:06:56.636405 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Jan 13 21:07:00.408862 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd.
Jan 13 21:07:00.409891 systemd[1]: Started sshd@0-10.0.0.15:22-10.0.0.1:50294.service - OpenSSH per-connection server daemon (10.0.0.1:50294).
Jan 13 21:07:00.465497 sshd[1542]: Accepted publickey for core from 10.0.0.1 port 50294 ssh2: RSA SHA256:yd4gyStb+mhc+KSvOhXa4vXVFZWeTXZvH887VPDApJg
Jan 13 21:07:00.469018 sshd[1542]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 13 21:07:00.476887 systemd-logind[1431]: New session 1 of user core.
Jan 13 21:07:00.477829 systemd[1]: Created slice user-500.slice - User Slice of UID 500.
Jan 13 21:07:00.496486 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500...
Jan 13 21:07:00.506211 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500.
Jan 13 21:07:00.508190 systemd[1]: Starting user@500.service - User Manager for UID 500...
Jan 13 21:07:00.514601 (systemd)[1546]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0)
Jan 13 21:07:00.582821 systemd[1546]: Queued start job for default target default.target.
Jan 13 21:07:00.591098 systemd[1546]: Created slice app.slice - User Application Slice.
Jan 13 21:07:00.591142 systemd[1546]: Reached target paths.target - Paths.
Jan 13 21:07:00.591154 systemd[1546]: Reached target timers.target - Timers.
Jan 13 21:07:00.592342 systemd[1546]: Starting dbus.socket - D-Bus User Message Bus Socket...
Jan 13 21:07:00.601597 systemd[1546]: Listening on dbus.socket - D-Bus User Message Bus Socket.
Jan 13 21:07:00.601661 systemd[1546]: Reached target sockets.target - Sockets.
Jan 13 21:07:00.601673 systemd[1546]: Reached target basic.target - Basic System.
Jan 13 21:07:00.601708 systemd[1546]: Reached target default.target - Main User Target.
Jan 13 21:07:00.601733 systemd[1546]: Startup finished in 82ms.
Jan 13 21:07:00.601934 systemd[1]: Started user@500.service - User Manager for UID 500.
Jan 13 21:07:00.603144 systemd[1]: Started session-1.scope - Session 1 of User core.
Jan 13 21:07:00.666666 systemd[1]: Started sshd@1-10.0.0.15:22-10.0.0.1:50306.service - OpenSSH per-connection server daemon (10.0.0.1:50306).
Jan 13 21:07:00.706686 sshd[1557]: Accepted publickey for core from 10.0.0.1 port 50306 ssh2: RSA SHA256:yd4gyStb+mhc+KSvOhXa4vXVFZWeTXZvH887VPDApJg
Jan 13 21:07:00.708206 sshd[1557]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 13 21:07:00.712228 systemd-logind[1431]: New session 2 of user core.
Jan 13 21:07:00.721346 systemd[1]: Started session-2.scope - Session 2 of User core.
Jan 13 21:07:00.770977 sshd[1557]: pam_unix(sshd:session): session closed for user core
Jan 13 21:07:00.782455 systemd[1]: sshd@1-10.0.0.15:22-10.0.0.1:50306.service: Deactivated successfully.
Jan 13 21:07:00.783771 systemd[1]: session-2.scope: Deactivated successfully.
Jan 13 21:07:00.785164 systemd-logind[1431]: Session 2 logged out. Waiting for processes to exit.
Jan 13 21:07:00.786231 systemd[1]: Started sshd@2-10.0.0.15:22-10.0.0.1:50312.service - OpenSSH per-connection server daemon (10.0.0.1:50312).
Jan 13 21:07:00.786767 systemd-logind[1431]: Removed session 2.
Jan 13 21:07:00.817813 sshd[1564]: Accepted publickey for core from 10.0.0.1 port 50312 ssh2: RSA SHA256:yd4gyStb+mhc+KSvOhXa4vXVFZWeTXZvH887VPDApJg
Jan 13 21:07:00.818930 sshd[1564]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 13 21:07:00.822194 systemd-logind[1431]: New session 3 of user core.
Jan 13 21:07:00.828301 systemd[1]: Started session-3.scope - Session 3 of User core.
Jan 13 21:07:00.874522 sshd[1564]: pam_unix(sshd:session): session closed for user core
Jan 13 21:07:00.886524 systemd[1]: sshd@2-10.0.0.15:22-10.0.0.1:50312.service: Deactivated successfully.
Jan 13 21:07:00.887884 systemd[1]: session-3.scope: Deactivated successfully.
Jan 13 21:07:00.889031 systemd-logind[1431]: Session 3 logged out. Waiting for processes to exit.
Jan 13 21:07:00.890274 systemd[1]: Started sshd@3-10.0.0.15:22-10.0.0.1:50322.service - OpenSSH per-connection server daemon (10.0.0.1:50322).
Jan 13 21:07:00.890976 systemd-logind[1431]: Removed session 3.
Jan 13 21:07:00.926693 sshd[1571]: Accepted publickey for core from 10.0.0.1 port 50322 ssh2: RSA SHA256:yd4gyStb+mhc+KSvOhXa4vXVFZWeTXZvH887VPDApJg
Jan 13 21:07:00.927937 sshd[1571]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 13 21:07:00.931656 systemd-logind[1431]: New session 4 of user core.
Jan 13 21:07:00.946304 systemd[1]: Started session-4.scope - Session 4 of User core.
Jan 13 21:07:00.996801 sshd[1571]: pam_unix(sshd:session): session closed for user core
Jan 13 21:07:01.010448 systemd[1]: sshd@3-10.0.0.15:22-10.0.0.1:50322.service: Deactivated successfully.
Jan 13 21:07:01.013422 systemd[1]: session-4.scope: Deactivated successfully.
Jan 13 21:07:01.014491 systemd-logind[1431]: Session 4 logged out. Waiting for processes to exit.
Jan 13 21:07:01.015560 systemd[1]: Started sshd@4-10.0.0.15:22-10.0.0.1:50326.service - OpenSSH per-connection server daemon (10.0.0.1:50326).
Jan 13 21:07:01.016288 systemd-logind[1431]: Removed session 4.
Jan 13 21:07:01.046847 sshd[1578]: Accepted publickey for core from 10.0.0.1 port 50326 ssh2: RSA SHA256:yd4gyStb+mhc+KSvOhXa4vXVFZWeTXZvH887VPDApJg
Jan 13 21:07:01.048034 sshd[1578]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 13 21:07:01.051344 systemd-logind[1431]: New session 5 of user core.
Jan 13 21:07:01.057300 systemd[1]: Started session-5.scope - Session 5 of User core.
Jan 13 21:07:01.114019 sudo[1581]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh
Jan 13 21:07:01.114305 sudo[1581]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Jan 13 21:07:01.410396 systemd[1]: Starting docker.service - Docker Application Container Engine...
Jan 13 21:07:01.410527 (dockerd)[1599]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU
Jan 13 21:07:01.660367 dockerd[1599]: time="2025-01-13T21:07:01.660304220Z" level=info msg="Starting up"
Jan 13 21:07:01.795497 dockerd[1599]: time="2025-01-13T21:07:01.795137230Z" level=info msg="Loading containers: start."
Jan 13 21:07:01.872221 kernel: Initializing XFRM netlink socket
Jan 13 21:07:01.929410 systemd-networkd[1371]: docker0: Link UP
Jan 13 21:07:01.945349 dockerd[1599]: time="2025-01-13T21:07:01.945247543Z" level=info msg="Loading containers: done."
Jan 13 21:07:01.959259 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck1098179501-merged.mount: Deactivated successfully.
Jan 13 21:07:01.960705 dockerd[1599]: time="2025-01-13T21:07:01.960348716Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2
Jan 13 21:07:01.960705 dockerd[1599]: time="2025-01-13T21:07:01.960438674Z" level=info msg="Docker daemon" commit=061aa95809be396a6b5542618d8a34b02a21ff77 containerd-snapshotter=false storage-driver=overlay2 version=26.1.0
Jan 13 21:07:01.960705 dockerd[1599]: time="2025-01-13T21:07:01.960530267Z" level=info msg="Daemon has completed initialization"
Jan 13 21:07:01.985199 dockerd[1599]: time="2025-01-13T21:07:01.985065633Z" level=info msg="API listen on /run/docker.sock"
Jan 13 21:07:01.985445 systemd[1]: Started docker.service - Docker Application Container Engine.
Jan 13 21:07:02.770830 containerd[1444]: time="2025-01-13T21:07:02.770786981Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.30.8\""
Jan 13 21:07:03.474350 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3222064559.mount: Deactivated successfully.
Jan 13 21:07:05.551353 containerd[1444]: time="2025-01-13T21:07:05.551293219Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.30.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 13 21:07:05.551851 containerd[1444]: time="2025-01-13T21:07:05.551815407Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.30.8: active requests=0, bytes read=29864012"
Jan 13 21:07:05.554164 containerd[1444]: time="2025-01-13T21:07:05.552570506Z" level=info msg="ImageCreate event name:\"sha256:8202e87ffef091fe4f11dd113ff6f2ab16c70279775d224ddd8aa95e2dd0b966\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 13 21:07:05.556104 containerd[1444]: time="2025-01-13T21:07:05.556071501Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:f0e1b3de0c2e98e6c6abd73edf9d3b8e4d44460656cde0ebb92e2d9206961fcb\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 13 21:07:05.557206 containerd[1444]: time="2025-01-13T21:07:05.557166050Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.30.8\" with image id \"sha256:8202e87ffef091fe4f11dd113ff6f2ab16c70279775d224ddd8aa95e2dd0b966\", repo tag \"registry.k8s.io/kube-apiserver:v1.30.8\", repo digest \"registry.k8s.io/kube-apiserver@sha256:f0e1b3de0c2e98e6c6abd73edf9d3b8e4d44460656cde0ebb92e2d9206961fcb\", size \"29860810\" in 2.786280328s"
Jan 13 21:07:05.557279 containerd[1444]: time="2025-01-13T21:07:05.557209759Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.30.8\" returns image reference \"sha256:8202e87ffef091fe4f11dd113ff6f2ab16c70279775d224ddd8aa95e2dd0b966\""
Jan 13 21:07:05.575529 containerd[1444]: time="2025-01-13T21:07:05.575423486Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.30.8\""
Jan 13 21:07:06.842293 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1.
Jan 13 21:07:06.851330 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jan 13 21:07:06.946777 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Jan 13 21:07:06.950309 (kubelet)[1823]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Jan 13 21:07:06.988138 kubelet[1823]: E0113 21:07:06.988090 1823 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Jan 13 21:07:06.991036 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Jan 13 21:07:06.991189 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Jan 13 21:07:08.544150 containerd[1444]: time="2025-01-13T21:07:08.544097227Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.30.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 13 21:07:08.544683 containerd[1444]: time="2025-01-13T21:07:08.544639707Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.30.8: active requests=0, bytes read=26900696"
Jan 13 21:07:08.545421 containerd[1444]: time="2025-01-13T21:07:08.545374039Z" level=info msg="ImageCreate event name:\"sha256:4b2191aa4d4d6ca9fbd7704b35401bfa6b0b90de75db22c425053e97fd5c8338\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 13 21:07:08.548996 containerd[1444]: time="2025-01-13T21:07:08.548952705Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:124f66b7e877eb5a80a40503057299bb60e6a5f2130905f4e3293dabf194c397\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 13 21:07:08.550140 containerd[1444]: time="2025-01-13T21:07:08.550113999Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.30.8\" with image id \"sha256:4b2191aa4d4d6ca9fbd7704b35401bfa6b0b90de75db22c425053e97fd5c8338\", repo tag \"registry.k8s.io/kube-controller-manager:v1.30.8\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:124f66b7e877eb5a80a40503057299bb60e6a5f2130905f4e3293dabf194c397\", size \"28303015\" in 2.974652904s"
Jan 13 21:07:08.550140 containerd[1444]: time="2025-01-13T21:07:08.550143158Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.30.8\" returns image reference \"sha256:4b2191aa4d4d6ca9fbd7704b35401bfa6b0b90de75db22c425053e97fd5c8338\""
Jan 13 21:07:08.568657 containerd[1444]: time="2025-01-13T21:07:08.568622671Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.30.8\""
Jan 13 21:07:10.553772 containerd[1444]: time="2025-01-13T21:07:10.553723798Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.30.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 13 21:07:10.554725 containerd[1444]: time="2025-01-13T21:07:10.554698273Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.30.8: active requests=0, bytes read=16164334"
Jan 13 21:07:10.555519 containerd[1444]: time="2025-01-13T21:07:10.555290917Z" level=info msg="ImageCreate event name:\"sha256:d43326c1723208785a33cdc1507082792eb041ca0d789c103c90180e31f65ca8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 13 21:07:10.558748 containerd[1444]: time="2025-01-13T21:07:10.558711889Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:c8bdeac2590c99c1a77e33995423ddb6633ff90a82a2aa455442e0a8079ef8c7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 13 21:07:10.559985 containerd[1444]: time="2025-01-13T21:07:10.559914360Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.30.8\" with image id \"sha256:d43326c1723208785a33cdc1507082792eb041ca0d789c103c90180e31f65ca8\", repo tag \"registry.k8s.io/kube-scheduler:v1.30.8\", repo digest \"registry.k8s.io/kube-scheduler@sha256:c8bdeac2590c99c1a77e33995423ddb6633ff90a82a2aa455442e0a8079ef8c7\", size \"17566671\" in 1.991252941s"
Jan 13 21:07:10.559985 containerd[1444]: time="2025-01-13T21:07:10.559952080Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.30.8\" returns image reference \"sha256:d43326c1723208785a33cdc1507082792eb041ca0d789c103c90180e31f65ca8\""
Jan 13 21:07:10.578400 containerd[1444]: time="2025-01-13T21:07:10.578355108Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.30.8\""
Jan 13 21:07:11.692614 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3677932491.mount: Deactivated successfully.
Jan 13 21:07:12.119454 containerd[1444]: time="2025-01-13T21:07:12.119407617Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.30.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 13 21:07:12.120358 containerd[1444]: time="2025-01-13T21:07:12.120198968Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.30.8: active requests=0, bytes read=25662013"
Jan 13 21:07:12.121286 containerd[1444]: time="2025-01-13T21:07:12.121251424Z" level=info msg="ImageCreate event name:\"sha256:4612aebc0675831aedbbde7cd56b85db91f1fdcf05ef923072961538ec497adb\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 13 21:07:12.123426 containerd[1444]: time="2025-01-13T21:07:12.123379478Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:f6d6be9417e22af78905000ac4fd134896bacd2188ea63c7cac8edd7a5d7e9b5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 13 21:07:12.124211 containerd[1444]: time="2025-01-13T21:07:12.123991216Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.30.8\" with image id \"sha256:4612aebc0675831aedbbde7cd56b85db91f1fdcf05ef923072961538ec497adb\", repo tag \"registry.k8s.io/kube-proxy:v1.30.8\", repo digest \"registry.k8s.io/kube-proxy@sha256:f6d6be9417e22af78905000ac4fd134896bacd2188ea63c7cac8edd7a5d7e9b5\", size \"25661030\" in 1.545598744s"
Jan 13 21:07:12.124211 containerd[1444]: time="2025-01-13T21:07:12.124032982Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.30.8\" returns image reference \"sha256:4612aebc0675831aedbbde7cd56b85db91f1fdcf05ef923072961538ec497adb\""
Jan 13 21:07:12.142544 containerd[1444]: time="2025-01-13T21:07:12.142513335Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\""
Jan 13 21:07:12.756693 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount670257160.mount: Deactivated successfully.
Jan 13 21:07:14.070816 containerd[1444]: time="2025-01-13T21:07:14.070601587Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 13 21:07:14.071659 containerd[1444]: time="2025-01-13T21:07:14.071390411Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.1: active requests=0, bytes read=16485383"
Jan 13 21:07:14.072281 containerd[1444]: time="2025-01-13T21:07:14.072236999Z" level=info msg="ImageCreate event name:\"sha256:2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 13 21:07:14.075109 containerd[1444]: time="2025-01-13T21:07:14.075055229Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 13 21:07:14.076320 containerd[1444]: time="2025-01-13T21:07:14.076291612Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.1\" with image id \"sha256:2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.1\", repo digest \"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\", size \"16482581\" in 1.933739667s"
Jan 13 21:07:14.076371 containerd[1444]: time="2025-01-13T21:07:14.076327910Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\" returns image reference \"sha256:2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93\""
Jan 13 21:07:14.094717 containerd[1444]: time="2025-01-13T21:07:14.094638846Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\""
Jan 13 21:07:14.576419 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1305852114.mount: Deactivated successfully.
Jan 13 21:07:14.580493 containerd[1444]: time="2025-01-13T21:07:14.580445497Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 13 21:07:14.581515 containerd[1444]: time="2025-01-13T21:07:14.581489162Z" level=info msg="stop pulling image registry.k8s.io/pause:3.9: active requests=0, bytes read=268823"
Jan 13 21:07:14.582671 containerd[1444]: time="2025-01-13T21:07:14.582630285Z" level=info msg="ImageCreate event name:\"sha256:829e9de338bd5fdd3f16f68f83a9fb288fbc8453e881e5d5cfd0f6f2ff72b43e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 13 21:07:14.584586 containerd[1444]: time="2025-01-13T21:07:14.584533649Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 13 21:07:14.585623 containerd[1444]: time="2025-01-13T21:07:14.585494365Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.9\" with image id \"sha256:829e9de338bd5fdd3f16f68f83a9fb288fbc8453e881e5d5cfd0f6f2ff72b43e\", repo tag \"registry.k8s.io/pause:3.9\", repo digest \"registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097\", size \"268051\" in 490.82154ms"
Jan 13 21:07:14.585623 containerd[1444]: time="2025-01-13T21:07:14.585525226Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\" returns image reference \"sha256:829e9de338bd5fdd3f16f68f83a9fb288fbc8453e881e5d5cfd0f6f2ff72b43e\""
Jan 13 21:07:14.602979 containerd[1444]: time="2025-01-13T21:07:14.602945083Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.12-0\""
Jan 13 21:07:15.167518 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3218097122.mount: Deactivated successfully.
Jan 13 21:07:17.092438 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2.
Jan 13 21:07:17.105418 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jan 13 21:07:17.194064 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Jan 13 21:07:17.197480 (kubelet)[1976]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Jan 13 21:07:17.232720 kubelet[1976]: E0113 21:07:17.232631 1976 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Jan 13 21:07:17.235317 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Jan 13 21:07:17.235467 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Jan 13 21:07:18.790861 containerd[1444]: time="2025-01-13T21:07:18.790809874Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.12-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 13 21:07:18.791852 containerd[1444]: time="2025-01-13T21:07:18.791531448Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.12-0: active requests=0, bytes read=66191474"
Jan 13 21:07:18.793221 containerd[1444]: time="2025-01-13T21:07:18.792237149Z" level=info msg="ImageCreate event name:\"sha256:014faa467e29798aeef733fe6d1a3b5e382688217b053ad23410e6cccd5d22fd\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 13 21:07:18.795267 containerd[1444]: time="2025-01-13T21:07:18.795216732Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:44a8e24dcbba3470ee1fee21d5e88d128c936e9b55d4bc51fbef8086f8ed123b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 13 21:07:18.796484 containerd[1444]: time="2025-01-13T21:07:18.796450318Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.12-0\" with image id \"sha256:014faa467e29798aeef733fe6d1a3b5e382688217b053ad23410e6cccd5d22fd\", repo tag \"registry.k8s.io/etcd:3.5.12-0\", repo digest \"registry.k8s.io/etcd@sha256:44a8e24dcbba3470ee1fee21d5e88d128c936e9b55d4bc51fbef8086f8ed123b\", size \"66189079\" in 4.193471974s"
Jan 13 21:07:18.796528 containerd[1444]: time="2025-01-13T21:07:18.796485705Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.12-0\" returns image reference \"sha256:014faa467e29798aeef733fe6d1a3b5e382688217b053ad23410e6cccd5d22fd\""
Jan 13 21:07:23.387758 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Jan 13 21:07:23.399486 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jan 13 21:07:23.416452 systemd[1]: Reloading requested from client PID 2071 ('systemctl') (unit session-5.scope)...
Jan 13 21:07:23.416472 systemd[1]: Reloading...
Jan 13 21:07:23.483235 zram_generator::config[2108]: No configuration found.
Jan 13 21:07:23.605054 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Jan 13 21:07:23.660490 systemd[1]: Reloading finished in 243 ms.
Jan 13 21:07:23.700965 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
Jan 13 21:07:23.703157 systemd[1]: kubelet.service: Deactivated successfully.
Jan 13 21:07:23.703410 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Jan 13 21:07:23.718504 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jan 13 21:07:23.811720 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Jan 13 21:07:23.815778 (kubelet)[2157]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS
Jan 13 21:07:23.849518 kubelet[2157]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Jan 13 21:07:23.849518 kubelet[2157]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI.
Jan 13 21:07:23.849518 kubelet[2157]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Jan 13 21:07:23.849834 kubelet[2157]: I0113 21:07:23.849553 2157 server.go:205] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Jan 13 21:07:24.838141 kubelet[2157]: I0113 21:07:24.838082 2157 server.go:484] "Kubelet version" kubeletVersion="v1.30.1"
Jan 13 21:07:24.838141 kubelet[2157]: I0113 21:07:24.838114 2157 server.go:486] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Jan 13 21:07:24.838380 kubelet[2157]: I0113 21:07:24.838353 2157 server.go:927] "Client rotation is on, will bootstrap in background"
Jan 13 21:07:24.876772 kubelet[2157]: I0113 21:07:24.876716 2157 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Jan 13 21:07:24.878761 kubelet[2157]: E0113 21:07:24.878732 2157 certificate_manager.go:562] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post "https://10.0.0.15:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 10.0.0.15:6443: connect: connection refused
Jan 13 21:07:24.886429 kubelet[2157]: I0113 21:07:24.886402 2157 server.go:742] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /"
Jan 13 21:07:24.944332 kubelet[2157]: I0113 21:07:24.944265 2157 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Jan 13 21:07:24.944506 kubelet[2157]: I0113 21:07:24.944332 2157 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null}
Jan 13 21:07:24.944587 kubelet[2157]: I0113 21:07:24.944572 2157 topology_manager.go:138] "Creating topology manager with none policy"
Jan 13 21:07:24.944587 kubelet[2157]: I0113 21:07:24.944582 2157 container_manager_linux.go:301] "Creating device plugin manager"
Jan 13 21:07:24.944878 kubelet[2157]: I0113 21:07:24.944846 2157 state_mem.go:36] "Initialized new in-memory state store"
Jan 13 21:07:24.945861 kubelet[2157]: I0113 21:07:24.945834 2157 kubelet.go:400] "Attempting to sync node with API server"
Jan 13 21:07:24.945861 kubelet[2157]: I0113 21:07:24.945859 2157 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests"
Jan 13 21:07:24.946181 kubelet[2157]: I0113 21:07:24.946160 2157 kubelet.go:312] "Adding apiserver pod source"
Jan 13 21:07:24.946349 kubelet[2157]: I0113 21:07:24.946331 2157 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Jan 13 21:07:24.946699 kubelet[2157]: W0113 21:07:24.946646 2157 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.15:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.15:6443: connect: connection refused
Jan 13 21:07:24.946727 kubelet[2157]: E0113 21:07:24.946712 2157 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://10.0.0.15:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.15:6443: connect: connection refused
Jan 13 21:07:24.946986 kubelet[2157]: W0113 21:07:24.946941 2157 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.15:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.0.0.15:6443: connect: connection refused
Jan 13 21:07:24.947015 kubelet[2157]: E0113 21:07:24.946992 2157 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.0.0.15:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.0.0.15:6443: connect: connection refused
Jan 13 21:07:24.948079 kubelet[2157]: I0113 21:07:24.948053 2157 kuberuntime_manager.go:261] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1"
Jan 13 21:07:24.948457 kubelet[2157]: I0113 21:07:24.948439 2157 kubelet.go:815] "Not starting ClusterTrustBundle informer because we are in static kubelet mode"
Jan 13 21:07:24.948594 kubelet[2157]: W0113 21:07:24.948577 2157 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating.
Jan 13 21:07:24.949447 kubelet[2157]: I0113 21:07:24.949427 2157 server.go:1264] "Started kubelet"
Jan 13 21:07:24.950655 kubelet[2157]: I0113 21:07:24.950627 2157 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Jan 13 21:07:24.953854 kubelet[2157]: E0113 21:07:24.952556 2157 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.15:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.15:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.181a5c9f283d4165 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-01-13 21:07:24.949406053 +0000 UTC m=+1.130639325,LastTimestamp:2025-01-13 21:07:24.949406053 +0000 UTC m=+1.130639325,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}"
Jan 13 21:07:24.953854 kubelet[2157]: I0113 21:07:24.953734 2157 server.go:163] "Starting to listen" address="0.0.0.0" port=10250
Jan 13 21:07:24.954317 kubelet[2157]: I0113 21:07:24.954255 2157 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Jan 13 21:07:24.954590 kubelet[2157]: I0113 21:07:24.954570 2157 server.go:227] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Jan 13 21:07:24.955683 kubelet[2157]: I0113 21:07:24.955665 2157 server.go:455] "Adding debug handlers to kubelet server"
Jan 13 21:07:24.956396 kubelet[2157]: I0113 21:07:24.956370 2157 volume_manager.go:291] "Starting Kubelet Volume Manager"
Jan 13 21:07:24.956804 kubelet[2157]: I0113 21:07:24.956779 2157 desired_state_of_world_populator.go:149] "Desired state populator starts to run"
Jan 13 21:07:24.957707 kubelet[2157]: E0113 21:07:24.957676 2157 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.15:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.15:6443: connect: connection refused" interval="200ms"
Jan 13 21:07:24.958056 kubelet[2157]: W0113 21:07:24.958021 2157 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.15:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.15:6443: connect: connection refused
Jan 13 21:07:24.958142 kubelet[2157]: E0113 21:07:24.958132 2157 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://10.0.0.15:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.15:6443: connect: connection refused
Jan 13 21:07:24.958402 kubelet[2157]: I0113 21:07:24.958390 2157 reconciler.go:26] "Reconciler: start to sync state"
Jan 13 21:07:24.959939 kubelet[2157]: E0113 21:07:24.959911 2157 kubelet.go:1467] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem"
Jan 13 21:07:24.960536 kubelet[2157]: I0113 21:07:24.960509 2157 factory.go:221] Registration of the containerd container factory successfully
Jan 13 21:07:24.960536 kubelet[2157]: I0113 21:07:24.960531 2157 factory.go:221] Registration of the systemd container factory successfully
Jan 13 21:07:24.960619 kubelet[2157]: I0113 21:07:24.960604 2157 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
Jan 13 21:07:24.965606 kubelet[2157]: I0113 21:07:24.965494 2157 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4"
Jan 13 21:07:24.966902 kubelet[2157]: I0113 21:07:24.966485 2157 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6"
Jan 13 21:07:24.966902 kubelet[2157]: I0113 21:07:24.966579 2157 status_manager.go:217] "Starting to sync pod status with apiserver"
Jan 13 21:07:24.966902 kubelet[2157]: I0113 21:07:24.966593 2157 kubelet.go:2337] "Starting kubelet main sync loop"
Jan 13 21:07:24.966902 kubelet[2157]: E0113 21:07:24.966628 2157 kubelet.go:2361] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
Jan 13 21:07:24.970762 kubelet[2157]: W0113 21:07:24.970719 2157 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.15:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.15:6443: connect: connection refused
Jan 13 21:07:24.970867 kubelet[2157]: E0113 21:07:24.970847 2157 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://10.0.0.15:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.15:6443: connect: connection refused
Jan 13 21:07:24.972979 kubelet[2157]: I0113 21:07:24.972949 2157 cpu_manager.go:214] "Starting CPU manager" policy="none"
Jan 13 21:07:24.972979 kubelet[2157]: I0113 21:07:24.972964 2157 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s"
Jan 13 21:07:24.973069 kubelet[2157]: I0113 21:07:24.972992 2157 state_mem.go:36] "Initialized new in-memory state store"
Jan 13 21:07:24.975086 kubelet[2157]: I0113 21:07:24.975048 2157 policy_none.go:49] "None policy: Start"
Jan 13 21:07:24.975619 kubelet[2157]: I0113 21:07:24.975590 2157 memory_manager.go:170] "Starting memorymanager" policy="None"
Jan 13 21:07:24.975619 kubelet[2157]: I0113 21:07:24.975609 2157 state_mem.go:35] "Initializing new in-memory state store"
Jan 13 21:07:24.981859 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice.
Jan 13 21:07:24.992849 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice.
Jan 13 21:07:25.001672 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice.
Jan 13 21:07:25.002650 kubelet[2157]: I0113 21:07:25.002583 2157 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found"
Jan 13 21:07:25.003248 kubelet[2157]: I0113 21:07:25.002750 2157 container_log_manager.go:186] "Initializing container log rotate workers" workers=1 monitorPeriod="10s"
Jan 13 21:07:25.003248 kubelet[2157]: I0113 21:07:25.002843 2157 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
Jan 13 21:07:25.004115 kubelet[2157]: E0113 21:07:25.004086 2157 eviction_manager.go:282] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found"
Jan 13 21:07:25.058320 kubelet[2157]: I0113 21:07:25.058283 2157 kubelet_node_status.go:73] "Attempting to register node" node="localhost"
Jan 13 21:07:25.058650 kubelet[2157]: E0113 21:07:25.058622 2157 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.0.0.15:6443/api/v1/nodes\": dial tcp 10.0.0.15:6443: connect: connection refused" node="localhost"
Jan 13 21:07:25.066776 kubelet[2157]: I0113 21:07:25.066733 2157 topology_manager.go:215] "Topology Admit Handler" podUID="8a50003978138b3ab9890682eff4eae8" podNamespace="kube-system" podName="kube-controller-manager-localhost"
Jan 13 21:07:25.067865 kubelet[2157]: I0113 21:07:25.067774 2157 topology_manager.go:215] "Topology Admit Handler" podUID="b107a98bcf27297d642d248711a3fc70" podNamespace="kube-system" podName="kube-scheduler-localhost"
Jan 13 21:07:25.068622 kubelet[2157]: I0113 21:07:25.068583 2157 topology_manager.go:215] "Topology Admit Handler" podUID="0826945936ddb19870c287f9904f0737" podNamespace="kube-system" podName="kube-apiserver-localhost"
Jan 13 21:07:25.073391 systemd[1]: Created slice kubepods-burstable-pod8a50003978138b3ab9890682eff4eae8.slice - libcontainer container kubepods-burstable-pod8a50003978138b3ab9890682eff4eae8.slice.
Jan 13 21:07:25.097450 systemd[1]: Created slice kubepods-burstable-podb107a98bcf27297d642d248711a3fc70.slice - libcontainer container kubepods-burstable-podb107a98bcf27297d642d248711a3fc70.slice. Jan 13 21:07:25.110781 systemd[1]: Created slice kubepods-burstable-pod0826945936ddb19870c287f9904f0737.slice - libcontainer container kubepods-burstable-pod0826945936ddb19870c287f9904f0737.slice. Jan 13 21:07:25.158626 kubelet[2157]: E0113 21:07:25.158585 2157 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.15:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.15:6443: connect: connection refused" interval="400ms" Jan 13 21:07:25.160739 kubelet[2157]: I0113 21:07:25.160694 2157 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/8a50003978138b3ab9890682eff4eae8-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"8a50003978138b3ab9890682eff4eae8\") " pod="kube-system/kube-controller-manager-localhost" Jan 13 21:07:25.160739 kubelet[2157]: I0113 21:07:25.160731 2157 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/0826945936ddb19870c287f9904f0737-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"0826945936ddb19870c287f9904f0737\") " pod="kube-system/kube-apiserver-localhost" Jan 13 21:07:25.160861 kubelet[2157]: I0113 21:07:25.160752 2157 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/8a50003978138b3ab9890682eff4eae8-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"8a50003978138b3ab9890682eff4eae8\") " pod="kube-system/kube-controller-manager-localhost" Jan 13 21:07:25.160861 kubelet[2157]: I0113 21:07:25.160775 
2157 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/8a50003978138b3ab9890682eff4eae8-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"8a50003978138b3ab9890682eff4eae8\") " pod="kube-system/kube-controller-manager-localhost" Jan 13 21:07:25.160861 kubelet[2157]: I0113 21:07:25.160792 2157 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/b107a98bcf27297d642d248711a3fc70-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"b107a98bcf27297d642d248711a3fc70\") " pod="kube-system/kube-scheduler-localhost" Jan 13 21:07:25.160861 kubelet[2157]: I0113 21:07:25.160806 2157 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/0826945936ddb19870c287f9904f0737-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"0826945936ddb19870c287f9904f0737\") " pod="kube-system/kube-apiserver-localhost" Jan 13 21:07:25.160861 kubelet[2157]: I0113 21:07:25.160820 2157 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/0826945936ddb19870c287f9904f0737-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"0826945936ddb19870c287f9904f0737\") " pod="kube-system/kube-apiserver-localhost" Jan 13 21:07:25.160990 kubelet[2157]: I0113 21:07:25.160836 2157 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/8a50003978138b3ab9890682eff4eae8-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"8a50003978138b3ab9890682eff4eae8\") " pod="kube-system/kube-controller-manager-localhost" Jan 13 21:07:25.160990 kubelet[2157]: I0113 21:07:25.160852 2157 reconciler_common.go:247] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/8a50003978138b3ab9890682eff4eae8-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"8a50003978138b3ab9890682eff4eae8\") " pod="kube-system/kube-controller-manager-localhost" Jan 13 21:07:25.259978 kubelet[2157]: I0113 21:07:25.259947 2157 kubelet_node_status.go:73] "Attempting to register node" node="localhost" Jan 13 21:07:25.260285 kubelet[2157]: E0113 21:07:25.260256 2157 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.0.0.15:6443/api/v1/nodes\": dial tcp 10.0.0.15:6443: connect: connection refused" node="localhost" Jan 13 21:07:25.395198 kubelet[2157]: E0113 21:07:25.395099 2157 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 21:07:25.395784 containerd[1444]: time="2025-01-13T21:07:25.395719855Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:8a50003978138b3ab9890682eff4eae8,Namespace:kube-system,Attempt:0,}" Jan 13 21:07:25.410126 kubelet[2157]: E0113 21:07:25.410096 2157 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 21:07:25.410622 containerd[1444]: time="2025-01-13T21:07:25.410595785Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:b107a98bcf27297d642d248711a3fc70,Namespace:kube-system,Attempt:0,}" Jan 13 21:07:25.413207 kubelet[2157]: E0113 21:07:25.413162 2157 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 21:07:25.413491 containerd[1444]: 
time="2025-01-13T21:07:25.413467210Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:0826945936ddb19870c287f9904f0737,Namespace:kube-system,Attempt:0,}" Jan 13 21:07:25.560001 kubelet[2157]: E0113 21:07:25.559955 2157 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.15:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.15:6443: connect: connection refused" interval="800ms" Jan 13 21:07:25.661551 kubelet[2157]: I0113 21:07:25.661444 2157 kubelet_node_status.go:73] "Attempting to register node" node="localhost" Jan 13 21:07:25.661806 kubelet[2157]: E0113 21:07:25.661752 2157 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.0.0.15:6443/api/v1/nodes\": dial tcp 10.0.0.15:6443: connect: connection refused" node="localhost" Jan 13 21:07:25.950618 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2805244568.mount: Deactivated successfully. 
Jan 13 21:07:25.956590 containerd[1444]: time="2025-01-13T21:07:25.956537235Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 13 21:07:25.957420 containerd[1444]: time="2025-01-13T21:07:25.957388352Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 13 21:07:25.957550 containerd[1444]: time="2025-01-13T21:07:25.957525012Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Jan 13 21:07:25.958496 containerd[1444]: time="2025-01-13T21:07:25.958458117Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 13 21:07:25.959145 containerd[1444]: time="2025-01-13T21:07:25.959114982Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Jan 13 21:07:25.959903 containerd[1444]: time="2025-01-13T21:07:25.959861835Z" level=info msg="ImageCreate event name:\"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 13 21:07:25.960330 containerd[1444]: time="2025-01-13T21:07:25.960298731Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=269175" Jan 13 21:07:25.963736 containerd[1444]: time="2025-01-13T21:07:25.963701560Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 13 21:07:25.965125 
containerd[1444]: time="2025-01-13T21:07:25.965087639Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 551.565478ms" Jan 13 21:07:25.965802 containerd[1444]: time="2025-01-13T21:07:25.965691672Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 569.893748ms" Jan 13 21:07:25.967716 containerd[1444]: time="2025-01-13T21:07:25.967684584Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 557.019369ms" Jan 13 21:07:26.077296 containerd[1444]: time="2025-01-13T21:07:26.076985316Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 13 21:07:26.077621 containerd[1444]: time="2025-01-13T21:07:26.077406063Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 13 21:07:26.077621 containerd[1444]: time="2025-01-13T21:07:26.077579561Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 13 21:07:26.077987 containerd[1444]: time="2025-01-13T21:07:26.077912439Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 13 21:07:26.078845 containerd[1444]: time="2025-01-13T21:07:26.078767731Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 13 21:07:26.078845 containerd[1444]: time="2025-01-13T21:07:26.078827403Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 13 21:07:26.079464 containerd[1444]: time="2025-01-13T21:07:26.079260229Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 13 21:07:26.079464 containerd[1444]: time="2025-01-13T21:07:26.079308102Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 13 21:07:26.079464 containerd[1444]: time="2025-01-13T21:07:26.079318981Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 13 21:07:26.079464 containerd[1444]: time="2025-01-13T21:07:26.079394452Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 13 21:07:26.079464 containerd[1444]: time="2025-01-13T21:07:26.078844441Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 13 21:07:26.079464 containerd[1444]: time="2025-01-13T21:07:26.078914992Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 13 21:07:26.101374 systemd[1]: Started cri-containerd-a0940d222c98f6845d1345f2afd6944b81e5f0135d68b752bd64c78d20f5fe8b.scope - libcontainer container a0940d222c98f6845d1345f2afd6944b81e5f0135d68b752bd64c78d20f5fe8b. 
Jan 13 21:07:26.105566 systemd[1]: Started cri-containerd-004c7735595f5edfbe2ed9671fa5a57f5a6e7b2e43872798de426afef786a97a.scope - libcontainer container 004c7735595f5edfbe2ed9671fa5a57f5a6e7b2e43872798de426afef786a97a. Jan 13 21:07:26.106558 systemd[1]: Started cri-containerd-6934bee4812ae16c16ceaf10bf752d40216e8e9860b51c8fe816aa4dbcd7b21c.scope - libcontainer container 6934bee4812ae16c16ceaf10bf752d40216e8e9860b51c8fe816aa4dbcd7b21c. Jan 13 21:07:26.133249 containerd[1444]: time="2025-01-13T21:07:26.133218905Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:b107a98bcf27297d642d248711a3fc70,Namespace:kube-system,Attempt:0,} returns sandbox id \"a0940d222c98f6845d1345f2afd6944b81e5f0135d68b752bd64c78d20f5fe8b\"" Jan 13 21:07:26.134900 kubelet[2157]: E0113 21:07:26.134879 2157 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 21:07:26.141864 containerd[1444]: time="2025-01-13T21:07:26.137959531Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:0826945936ddb19870c287f9904f0737,Namespace:kube-system,Attempt:0,} returns sandbox id \"6934bee4812ae16c16ceaf10bf752d40216e8e9860b51c8fe816aa4dbcd7b21c\"" Jan 13 21:07:26.141864 containerd[1444]: time="2025-01-13T21:07:26.139388718Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:8a50003978138b3ab9890682eff4eae8,Namespace:kube-system,Attempt:0,} returns sandbox id \"004c7735595f5edfbe2ed9671fa5a57f5a6e7b2e43872798de426afef786a97a\"" Jan 13 21:07:26.141976 kubelet[2157]: E0113 21:07:26.138996 2157 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 21:07:26.141976 kubelet[2157]: E0113 21:07:26.139844 2157 dns.go:153] "Nameserver limits exceeded" 
err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 21:07:26.142302 containerd[1444]: time="2025-01-13T21:07:26.142272809Z" level=info msg="CreateContainer within sandbox \"a0940d222c98f6845d1345f2afd6944b81e5f0135d68b752bd64c78d20f5fe8b\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Jan 13 21:07:26.147092 containerd[1444]: time="2025-01-13T21:07:26.147067069Z" level=info msg="CreateContainer within sandbox \"6934bee4812ae16c16ceaf10bf752d40216e8e9860b51c8fe816aa4dbcd7b21c\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Jan 13 21:07:26.147829 containerd[1444]: time="2025-01-13T21:07:26.147805220Z" level=info msg="CreateContainer within sandbox \"004c7735595f5edfbe2ed9671fa5a57f5a6e7b2e43872798de426afef786a97a\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Jan 13 21:07:26.164078 containerd[1444]: time="2025-01-13T21:07:26.163880475Z" level=info msg="CreateContainer within sandbox \"a0940d222c98f6845d1345f2afd6944b81e5f0135d68b752bd64c78d20f5fe8b\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"508a1da8ec32579ae23adef47155f63c8927cfb4df663b295245f6ba989b3468\"" Jan 13 21:07:26.165013 containerd[1444]: time="2025-01-13T21:07:26.164635264Z" level=info msg="StartContainer for \"508a1da8ec32579ae23adef47155f63c8927cfb4df663b295245f6ba989b3468\"" Jan 13 21:07:26.169449 containerd[1444]: time="2025-01-13T21:07:26.169413166Z" level=info msg="CreateContainer within sandbox \"6934bee4812ae16c16ceaf10bf752d40216e8e9860b51c8fe816aa4dbcd7b21c\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"409f5a88f6ee40bad25f8329492cfe9db9160695408fc7c5ecc1653572436a94\"" Jan 13 21:07:26.169930 containerd[1444]: time="2025-01-13T21:07:26.169906386Z" level=info msg="StartContainer for \"409f5a88f6ee40bad25f8329492cfe9db9160695408fc7c5ecc1653572436a94\"" Jan 13 21:07:26.170021 
containerd[1444]: time="2025-01-13T21:07:26.169904467Z" level=info msg="CreateContainer within sandbox \"004c7735595f5edfbe2ed9671fa5a57f5a6e7b2e43872798de426afef786a97a\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"8f05bbbf172f723529970c5f5c953ca8a84ae40189a23fe61f03039663066ad0\"" Jan 13 21:07:26.170280 containerd[1444]: time="2025-01-13T21:07:26.170253744Z" level=info msg="StartContainer for \"8f05bbbf172f723529970c5f5c953ca8a84ae40189a23fe61f03039663066ad0\"" Jan 13 21:07:26.194387 systemd[1]: Started cri-containerd-508a1da8ec32579ae23adef47155f63c8927cfb4df663b295245f6ba989b3468.scope - libcontainer container 508a1da8ec32579ae23adef47155f63c8927cfb4df663b295245f6ba989b3468. Jan 13 21:07:26.198128 systemd[1]: Started cri-containerd-409f5a88f6ee40bad25f8329492cfe9db9160695408fc7c5ecc1653572436a94.scope - libcontainer container 409f5a88f6ee40bad25f8329492cfe9db9160695408fc7c5ecc1653572436a94. Jan 13 21:07:26.198951 systemd[1]: Started cri-containerd-8f05bbbf172f723529970c5f5c953ca8a84ae40189a23fe61f03039663066ad0.scope - libcontainer container 8f05bbbf172f723529970c5f5c953ca8a84ae40189a23fe61f03039663066ad0. 
Jan 13 21:07:26.229480 containerd[1444]: time="2025-01-13T21:07:26.228073670Z" level=info msg="StartContainer for \"508a1da8ec32579ae23adef47155f63c8927cfb4df663b295245f6ba989b3468\" returns successfully" Jan 13 21:07:26.257848 containerd[1444]: time="2025-01-13T21:07:26.257731122Z" level=info msg="StartContainer for \"409f5a88f6ee40bad25f8329492cfe9db9160695408fc7c5ecc1653572436a94\" returns successfully" Jan 13 21:07:26.257848 containerd[1444]: time="2025-01-13T21:07:26.257808593Z" level=info msg="StartContainer for \"8f05bbbf172f723529970c5f5c953ca8a84ae40189a23fe61f03039663066ad0\" returns successfully" Jan 13 21:07:26.345248 kubelet[2157]: W0113 21:07:26.342952 2157 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.15:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.0.0.15:6443: connect: connection refused Jan 13 21:07:26.345248 kubelet[2157]: E0113 21:07:26.343024 2157 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.0.0.15:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.0.0.15:6443: connect: connection refused Jan 13 21:07:26.361272 kubelet[2157]: E0113 21:07:26.360625 2157 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.15:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.15:6443: connect: connection refused" interval="1.6s" Jan 13 21:07:26.394479 kubelet[2157]: W0113 21:07:26.394440 2157 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.15:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.15:6443: connect: connection refused Jan 13 21:07:26.394637 kubelet[2157]: E0113 21:07:26.394625 2157 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to 
list *v1.RuntimeClass: Get "https://10.0.0.15:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.15:6443: connect: connection refused Jan 13 21:07:26.463841 kubelet[2157]: I0113 21:07:26.463310 2157 kubelet_node_status.go:73] "Attempting to register node" node="localhost" Jan 13 21:07:26.976234 kubelet[2157]: E0113 21:07:26.975168 2157 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 21:07:26.976698 kubelet[2157]: E0113 21:07:26.976684 2157 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 21:07:26.978563 kubelet[2157]: E0113 21:07:26.978542 2157 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 21:07:27.982292 kubelet[2157]: E0113 21:07:27.982224 2157 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 21:07:28.633903 kubelet[2157]: E0113 21:07:28.633869 2157 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"localhost\" not found" node="localhost" Jan 13 21:07:28.735403 kubelet[2157]: I0113 21:07:28.735360 2157 kubelet_node_status.go:76] "Successfully registered node" node="localhost" Jan 13 21:07:28.949073 kubelet[2157]: I0113 21:07:28.948978 2157 apiserver.go:52] "Watching apiserver" Jan 13 21:07:28.957302 kubelet[2157]: I0113 21:07:28.957277 2157 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world" Jan 13 21:07:30.378422 kubelet[2157]: E0113 21:07:30.378351 2157 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were 
exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 21:07:30.805549 systemd[1]: Reloading requested from client PID 2434 ('systemctl') (unit session-5.scope)... Jan 13 21:07:30.805564 systemd[1]: Reloading... Jan 13 21:07:30.863216 zram_generator::config[2476]: No configuration found. Jan 13 21:07:30.926845 kubelet[2157]: E0113 21:07:30.926768 2157 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 21:07:30.938262 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jan 13 21:07:30.985152 kubelet[2157]: E0113 21:07:30.985116 2157 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 21:07:30.985520 kubelet[2157]: E0113 21:07:30.985483 2157 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 21:07:31.000188 systemd[1]: Reloading finished in 194 ms. Jan 13 21:07:31.035479 kubelet[2157]: I0113 21:07:31.035424 2157 dynamic_cafile_content.go:171] "Shutting down controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jan 13 21:07:31.035523 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Jan 13 21:07:31.050010 systemd[1]: kubelet.service: Deactivated successfully. Jan 13 21:07:31.052228 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jan 13 21:07:31.052282 systemd[1]: kubelet.service: Consumed 1.455s CPU time, 116.6M memory peak, 0B memory swap peak. 
Jan 13 21:07:31.062414 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 13 21:07:31.150142 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 13 21:07:31.153566 (kubelet)[2515]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jan 13 21:07:31.186708 kubelet[2515]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 13 21:07:31.186708 kubelet[2515]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Jan 13 21:07:31.186708 kubelet[2515]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 13 21:07:31.187035 kubelet[2515]: I0113 21:07:31.186744 2515 server.go:205] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jan 13 21:07:31.192032 kubelet[2515]: I0113 21:07:31.191270 2515 server.go:484] "Kubelet version" kubeletVersion="v1.30.1" Jan 13 21:07:31.192032 kubelet[2515]: I0113 21:07:31.191294 2515 server.go:486] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jan 13 21:07:31.192032 kubelet[2515]: I0113 21:07:31.191470 2515 server.go:927] "Client rotation is on, will bootstrap in background" Jan 13 21:07:31.192928 kubelet[2515]: I0113 21:07:31.192902 2515 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". 
Jan 13 21:07:31.195108 kubelet[2515]: I0113 21:07:31.194538 2515 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jan 13 21:07:31.201872 kubelet[2515]: I0113 21:07:31.201849 2515 server.go:742] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Jan 13 21:07:31.203285 kubelet[2515]: I0113 21:07:31.202086 2515 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jan 13 21:07:31.203402 kubelet[2515]: I0113 21:07:31.203222 2515 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManager
ReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null} Jan 13 21:07:31.203402 kubelet[2515]: I0113 21:07:31.203401 2515 topology_manager.go:138] "Creating topology manager with none policy" Jan 13 21:07:31.203509 kubelet[2515]: I0113 21:07:31.203410 2515 container_manager_linux.go:301] "Creating device plugin manager" Jan 13 21:07:31.203509 kubelet[2515]: I0113 21:07:31.203444 2515 state_mem.go:36] "Initialized new in-memory state store" Jan 13 21:07:31.203813 kubelet[2515]: I0113 21:07:31.203780 2515 kubelet.go:400] "Attempting to sync node with API server" Jan 13 21:07:31.203813 kubelet[2515]: I0113 21:07:31.203807 2515 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests" Jan 13 21:07:31.203875 kubelet[2515]: I0113 21:07:31.203844 2515 kubelet.go:312] "Adding apiserver pod source" Jan 13 21:07:31.203875 kubelet[2515]: I0113 21:07:31.203863 2515 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jan 13 21:07:31.205305 kubelet[2515]: I0113 21:07:31.205280 2515 kuberuntime_manager.go:261] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Jan 13 21:07:31.205934 kubelet[2515]: I0113 21:07:31.205913 2515 kubelet.go:815] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Jan 13 21:07:31.210910 kubelet[2515]: I0113 21:07:31.206937 2515 server.go:1264] "Started kubelet" Jan 13 21:07:31.210910 kubelet[2515]: I0113 21:07:31.207010 2515 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Jan 13 21:07:31.210910 kubelet[2515]: I0113 21:07:31.207299 2515 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jan 13 21:07:31.210910 kubelet[2515]: I0113 21:07:31.207531 2515 server.go:227] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jan 13 
21:07:31.210910 kubelet[2515]: I0113 21:07:31.209484 2515 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Jan 13 21:07:31.211405 kubelet[2515]: I0113 21:07:31.211359 2515 server.go:455] "Adding debug handlers to kubelet server"
Jan 13 21:07:31.212346 kubelet[2515]: E0113 21:07:31.212209 2515 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found"
Jan 13 21:07:31.212346 kubelet[2515]: I0113 21:07:31.212253 2515 volume_manager.go:291] "Starting Kubelet Volume Manager"
Jan 13 21:07:31.212545 kubelet[2515]: I0113 21:07:31.212531 2515 desired_state_of_world_populator.go:149] "Desired state populator starts to run"
Jan 13 21:07:31.212904 kubelet[2515]: I0113 21:07:31.212889 2515 reconciler.go:26] "Reconciler: start to sync state"
Jan 13 21:07:31.221216 kubelet[2515]: I0113 21:07:31.218950 2515 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4"
Jan 13 21:07:31.221216 kubelet[2515]: I0113 21:07:31.219853 2515 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6"
Jan 13 21:07:31.221216 kubelet[2515]: I0113 21:07:31.219883 2515 status_manager.go:217] "Starting to sync pod status with apiserver"
Jan 13 21:07:31.221216 kubelet[2515]: I0113 21:07:31.219897 2515 kubelet.go:2337] "Starting kubelet main sync loop"
Jan 13 21:07:31.221216 kubelet[2515]: E0113 21:07:31.219935 2515 kubelet.go:2361] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
Jan 13 21:07:31.229866 kubelet[2515]: E0113 21:07:31.229718 2515 kubelet.go:1467] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem"
Jan 13 21:07:31.231152 kubelet[2515]: I0113 21:07:31.230721 2515 factory.go:221] Registration of the systemd container factory successfully
Jan 13 21:07:31.231152 kubelet[2515]: I0113 21:07:31.230853 2515 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
Jan 13 21:07:31.232040 kubelet[2515]: I0113 21:07:31.232019 2515 factory.go:221] Registration of the containerd container factory successfully
Jan 13 21:07:31.257669 kubelet[2515]: I0113 21:07:31.257637 2515 cpu_manager.go:214] "Starting CPU manager" policy="none"
Jan 13 21:07:31.257669 kubelet[2515]: I0113 21:07:31.257665 2515 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s"
Jan 13 21:07:31.257789 kubelet[2515]: I0113 21:07:31.257685 2515 state_mem.go:36] "Initialized new in-memory state store"
Jan 13 21:07:31.257837 kubelet[2515]: I0113 21:07:31.257823 2515 state_mem.go:88] "Updated default CPUSet" cpuSet=""
Jan 13 21:07:31.257874 kubelet[2515]: I0113 21:07:31.257837 2515 state_mem.go:96] "Updated CPUSet assignments" assignments={}
Jan 13 21:07:31.257874 kubelet[2515]: I0113 21:07:31.257853 2515 policy_none.go:49] "None policy: Start"
Jan 13 21:07:31.258438 kubelet[2515]: I0113 21:07:31.258403 2515 memory_manager.go:170] "Starting memorymanager" policy="None"
Jan 13 21:07:31.258438 kubelet[2515]: I0113 21:07:31.258431 2515 state_mem.go:35] "Initializing new in-memory state store"
Jan 13 21:07:31.258589 kubelet[2515]: I0113 21:07:31.258575 2515 state_mem.go:75] "Updated machine memory state"
Jan 13 21:07:31.262850 kubelet[2515]: I0113 21:07:31.262601 2515 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found"
Jan 13 21:07:31.262850 kubelet[2515]: I0113 21:07:31.262782 2515 container_log_manager.go:186] "Initializing container log rotate workers" workers=1 monitorPeriod="10s"
Jan 13 21:07:31.262966 kubelet[2515]: I0113 21:07:31.262918 2515 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
Jan 13 21:07:31.316321 kubelet[2515]: I0113 21:07:31.316099 2515 kubelet_node_status.go:73] "Attempting to register node" node="localhost"
Jan 13 21:07:31.320549 kubelet[2515]: I0113 21:07:31.320436 2515 topology_manager.go:215] "Topology Admit Handler" podUID="0826945936ddb19870c287f9904f0737" podNamespace="kube-system" podName="kube-apiserver-localhost"
Jan 13 21:07:31.320632 kubelet[2515]: I0113 21:07:31.320546 2515 topology_manager.go:215] "Topology Admit Handler" podUID="8a50003978138b3ab9890682eff4eae8" podNamespace="kube-system" podName="kube-controller-manager-localhost"
Jan 13 21:07:31.321094 kubelet[2515]: I0113 21:07:31.320697 2515 topology_manager.go:215] "Topology Admit Handler" podUID="b107a98bcf27297d642d248711a3fc70" podNamespace="kube-system" podName="kube-scheduler-localhost"
Jan 13 21:07:31.326018 kubelet[2515]: I0113 21:07:31.325999 2515 kubelet_node_status.go:112] "Node was previously registered" node="localhost"
Jan 13 21:07:31.326190 kubelet[2515]: I0113 21:07:31.326110 2515 kubelet_node_status.go:76] "Successfully registered node" node="localhost"
Jan 13 21:07:31.327286 kubelet[2515]: E0113 21:07:31.327220 2515 kubelet.go:1928] "Failed creating a mirror pod for" err="pods \"kube-scheduler-localhost\" already exists" pod="kube-system/kube-scheduler-localhost"
Jan 13 21:07:31.327537 kubelet[2515]: E0113 21:07:31.327457 2515 kubelet.go:1928] "Failed creating a mirror pod for" err="pods \"kube-controller-manager-localhost\" already exists" pod="kube-system/kube-controller-manager-localhost"
Jan 13 21:07:31.514595 kubelet[2515]: I0113 21:07:31.514559 2515 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/0826945936ddb19870c287f9904f0737-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"0826945936ddb19870c287f9904f0737\") " pod="kube-system/kube-apiserver-localhost"
Jan 13 21:07:31.514595 kubelet[2515]: I0113 21:07:31.514598 2515 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/8a50003978138b3ab9890682eff4eae8-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"8a50003978138b3ab9890682eff4eae8\") " pod="kube-system/kube-controller-manager-localhost"
Jan 13 21:07:31.514776 kubelet[2515]: I0113 21:07:31.514625 2515 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/8a50003978138b3ab9890682eff4eae8-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"8a50003978138b3ab9890682eff4eae8\") " pod="kube-system/kube-controller-manager-localhost"
Jan 13 21:07:31.514776 kubelet[2515]: I0113 21:07:31.514646 2515 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/0826945936ddb19870c287f9904f0737-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"0826945936ddb19870c287f9904f0737\") " pod="kube-system/kube-apiserver-localhost"
Jan 13 21:07:31.514776 kubelet[2515]: I0113 21:07:31.514672 2515 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/0826945936ddb19870c287f9904f0737-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"0826945936ddb19870c287f9904f0737\") " pod="kube-system/kube-apiserver-localhost"
Jan 13 21:07:31.514776 kubelet[2515]: I0113 21:07:31.514687 2515 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/8a50003978138b3ab9890682eff4eae8-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"8a50003978138b3ab9890682eff4eae8\") " pod="kube-system/kube-controller-manager-localhost"
Jan 13 21:07:31.514776 kubelet[2515]: I0113 21:07:31.514701 2515 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/8a50003978138b3ab9890682eff4eae8-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"8a50003978138b3ab9890682eff4eae8\") " pod="kube-system/kube-controller-manager-localhost"
Jan 13 21:07:31.514880 kubelet[2515]: I0113 21:07:31.514716 2515 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/8a50003978138b3ab9890682eff4eae8-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"8a50003978138b3ab9890682eff4eae8\") " pod="kube-system/kube-controller-manager-localhost"
Jan 13 21:07:31.514880 kubelet[2515]: I0113 21:07:31.514730 2515 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/b107a98bcf27297d642d248711a3fc70-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"b107a98bcf27297d642d248711a3fc70\") " pod="kube-system/kube-scheduler-localhost"
Jan 13 21:07:31.626901 kubelet[2515]: E0113 21:07:31.626774 2515 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 13 21:07:31.629833 kubelet[2515]: E0113 21:07:31.628454 2515 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 13 21:07:31.630210 kubelet[2515]: E0113 21:07:31.630191 2515 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 13 21:07:32.204909 kubelet[2515]: I0113 21:07:32.204869 2515 apiserver.go:52] "Watching apiserver"
Jan 13 21:07:32.213720 kubelet[2515]: I0113 21:07:32.213673 2515 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world"
Jan 13 21:07:32.245727 kubelet[2515]: E0113 21:07:32.245697 2515 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 13 21:07:32.250999 kubelet[2515]: E0113 21:07:32.250959 2515 kubelet.go:1928] "Failed creating a mirror pod for" err="pods \"kube-controller-manager-localhost\" already exists" pod="kube-system/kube-controller-manager-localhost"
Jan 13 21:07:32.252683 kubelet[2515]: E0113 21:07:32.252604 2515 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 13 21:07:32.254315 kubelet[2515]: E0113 21:07:32.253827 2515 kubelet.go:1928] "Failed creating a mirror pod for" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost"
Jan 13 21:07:32.254315 kubelet[2515]: E0113 21:07:32.254259 2515 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 13 21:07:32.276935 kubelet[2515]: I0113 21:07:32.276718 2515 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=1.276477065 podStartE2EDuration="1.276477065s" podCreationTimestamp="2025-01-13 21:07:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-13 21:07:32.275195857 +0000 UTC m=+1.118872532" watchObservedRunningTime="2025-01-13 21:07:32.276477065 +0000 UTC m=+1.120153700"
Jan 13 21:07:32.277867 kubelet[2515]: I0113 21:07:32.277608 2515 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=2.277594787 podStartE2EDuration="2.277594787s" podCreationTimestamp="2025-01-13 21:07:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-13 21:07:32.264954835 +0000 UTC m=+1.108631470" watchObservedRunningTime="2025-01-13 21:07:32.277594787 +0000 UTC m=+1.121271502"
Jan 13 21:07:32.282822 kubelet[2515]: I0113 21:07:32.282760 2515 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=2.282749859 podStartE2EDuration="2.282749859s" podCreationTimestamp="2025-01-13 21:07:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-13 21:07:32.282595453 +0000 UTC m=+1.126272128" watchObservedRunningTime="2025-01-13 21:07:32.282749859 +0000 UTC m=+1.126426534"
Jan 13 21:07:32.568271 sudo[1581]: pam_unix(sudo:session): session closed for user root
Jan 13 21:07:32.569990 sshd[1578]: pam_unix(sshd:session): session closed for user core
Jan 13 21:07:32.573494 systemd[1]: sshd@4-10.0.0.15:22-10.0.0.1:50326.service: Deactivated successfully.
Jan 13 21:07:32.575046 systemd[1]: session-5.scope: Deactivated successfully.
Jan 13 21:07:32.575763 systemd[1]: session-5.scope: Consumed 5.870s CPU time, 191.5M memory peak, 0B memory swap peak.
Jan 13 21:07:32.576688 systemd-logind[1431]: Session 5 logged out. Waiting for processes to exit.
Jan 13 21:07:32.577982 systemd-logind[1431]: Removed session 5.
Jan 13 21:07:33.247417 kubelet[2515]: E0113 21:07:33.247258 2515 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 13 21:07:33.247940 kubelet[2515]: E0113 21:07:33.247907 2515 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 13 21:07:37.002278 kubelet[2515]: E0113 21:07:37.000603 2515 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 13 21:07:37.259322 kubelet[2515]: E0113 21:07:37.258919 2515 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 13 21:07:39.097623 update_engine[1436]: I20250113 21:07:39.097520 1436 update_attempter.cc:509] Updating boot flags...
Jan 13 21:07:39.116208 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 39 scanned by (udev-worker) (2590)
Jan 13 21:07:39.150253 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 39 scanned by (udev-worker) (2590)
Jan 13 21:07:41.615848 kubelet[2515]: E0113 21:07:41.615531 2515 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 13 21:07:41.806549 kubelet[2515]: E0113 21:07:41.806514 2515 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 13 21:07:42.266167 kubelet[2515]: E0113 21:07:42.266130 2515 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 13 21:07:44.842373 kubelet[2515]: I0113 21:07:44.842103 2515 kuberuntime_manager.go:1523] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24"
Jan 13 21:07:44.842719 containerd[1444]: time="2025-01-13T21:07:44.842414364Z" level=info msg="No cni config template is specified, wait for other system components to drop the config."
Jan 13 21:07:44.843032 kubelet[2515]: I0113 21:07:44.843005 2515 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24"
Jan 13 21:07:45.640646 kubelet[2515]: I0113 21:07:45.640598 2515 topology_manager.go:215] "Topology Admit Handler" podUID="4682656b-169c-46d3-ba28-ccf35bbd2edf" podNamespace="kube-system" podName="kube-proxy-47lw7"
Jan 13 21:07:45.644103 kubelet[2515]: I0113 21:07:45.644051 2515 topology_manager.go:215] "Topology Admit Handler" podUID="dd920cd7-056b-4ab7-b618-a4b8210d03cf" podNamespace="kube-flannel" podName="kube-flannel-ds-m6t2c"
Jan 13 21:07:45.650216 systemd[1]: Created slice kubepods-besteffort-pod4682656b_169c_46d3_ba28_ccf35bbd2edf.slice - libcontainer container kubepods-besteffort-pod4682656b_169c_46d3_ba28_ccf35bbd2edf.slice.
Jan 13 21:07:45.663984 systemd[1]: Created slice kubepods-burstable-poddd920cd7_056b_4ab7_b618_a4b8210d03cf.slice - libcontainer container kubepods-burstable-poddd920cd7_056b_4ab7_b618_a4b8210d03cf.slice.
Jan 13 21:07:45.715627 kubelet[2515]: I0113 21:07:45.715582 2515 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/4682656b-169c-46d3-ba28-ccf35bbd2edf-lib-modules\") pod \"kube-proxy-47lw7\" (UID: \"4682656b-169c-46d3-ba28-ccf35bbd2edf\") " pod="kube-system/kube-proxy-47lw7"
Jan 13 21:07:45.715627 kubelet[2515]: I0113 21:07:45.715622 2515 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run\" (UniqueName: \"kubernetes.io/host-path/dd920cd7-056b-4ab7-b618-a4b8210d03cf-run\") pod \"kube-flannel-ds-m6t2c\" (UID: \"dd920cd7-056b-4ab7-b618-a4b8210d03cf\") " pod="kube-flannel/kube-flannel-ds-m6t2c"
Jan 13 21:07:45.715790 kubelet[2515]: I0113 21:07:45.715645 2515 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/dd920cd7-056b-4ab7-b618-a4b8210d03cf-xtables-lock\") pod \"kube-flannel-ds-m6t2c\" (UID: \"dd920cd7-056b-4ab7-b618-a4b8210d03cf\") " pod="kube-flannel/kube-flannel-ds-m6t2c"
Jan 13 21:07:45.715790 kubelet[2515]: I0113 21:07:45.715662 2515 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j5bcl\" (UniqueName: \"kubernetes.io/projected/dd920cd7-056b-4ab7-b618-a4b8210d03cf-kube-api-access-j5bcl\") pod \"kube-flannel-ds-m6t2c\" (UID: \"dd920cd7-056b-4ab7-b618-a4b8210d03cf\") " pod="kube-flannel/kube-flannel-ds-m6t2c"
Jan 13 21:07:45.715790 kubelet[2515]: I0113 21:07:45.715682 2515 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/4682656b-169c-46d3-ba28-ccf35bbd2edf-kube-proxy\") pod \"kube-proxy-47lw7\" (UID: \"4682656b-169c-46d3-ba28-ccf35bbd2edf\") " pod="kube-system/kube-proxy-47lw7"
Jan 13 21:07:45.715790 kubelet[2515]: I0113 21:07:45.715699 2515 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lnx7z\" (UniqueName: \"kubernetes.io/projected/4682656b-169c-46d3-ba28-ccf35bbd2edf-kube-api-access-lnx7z\") pod \"kube-proxy-47lw7\" (UID: \"4682656b-169c-46d3-ba28-ccf35bbd2edf\") " pod="kube-system/kube-proxy-47lw7"
Jan 13 21:07:45.715790 kubelet[2515]: I0113 21:07:45.715714 2515 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-plugin\" (UniqueName: \"kubernetes.io/host-path/dd920cd7-056b-4ab7-b618-a4b8210d03cf-cni-plugin\") pod \"kube-flannel-ds-m6t2c\" (UID: \"dd920cd7-056b-4ab7-b618-a4b8210d03cf\") " pod="kube-flannel/kube-flannel-ds-m6t2c"
Jan 13 21:07:45.715922 kubelet[2515]: I0113 21:07:45.715730 2515 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flannel-cfg\" (UniqueName: \"kubernetes.io/configmap/dd920cd7-056b-4ab7-b618-a4b8210d03cf-flannel-cfg\") pod \"kube-flannel-ds-m6t2c\" (UID: \"dd920cd7-056b-4ab7-b618-a4b8210d03cf\") " pod="kube-flannel/kube-flannel-ds-m6t2c"
Jan 13 21:07:45.715922 kubelet[2515]: I0113 21:07:45.715746 2515 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/4682656b-169c-46d3-ba28-ccf35bbd2edf-xtables-lock\") pod \"kube-proxy-47lw7\" (UID: \"4682656b-169c-46d3-ba28-ccf35bbd2edf\") " pod="kube-system/kube-proxy-47lw7"
Jan 13 21:07:45.715922 kubelet[2515]: I0113 21:07:45.715763 2515 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni\" (UniqueName: \"kubernetes.io/host-path/dd920cd7-056b-4ab7-b618-a4b8210d03cf-cni\") pod \"kube-flannel-ds-m6t2c\" (UID: \"dd920cd7-056b-4ab7-b618-a4b8210d03cf\") " pod="kube-flannel/kube-flannel-ds-m6t2c"
Jan 13 21:07:45.959621 kubelet[2515]: E0113 21:07:45.959516 2515 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 13 21:07:45.960346 containerd[1444]: time="2025-01-13T21:07:45.960305397Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-47lw7,Uid:4682656b-169c-46d3-ba28-ccf35bbd2edf,Namespace:kube-system,Attempt:0,}"
Jan 13 21:07:45.966421 kubelet[2515]: E0113 21:07:45.966379 2515 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 13 21:07:45.967574 containerd[1444]: time="2025-01-13T21:07:45.967540377Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-flannel-ds-m6t2c,Uid:dd920cd7-056b-4ab7-b618-a4b8210d03cf,Namespace:kube-flannel,Attempt:0,}"
Jan 13 21:07:45.986789 containerd[1444]: time="2025-01-13T21:07:45.986646785Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jan 13 21:07:45.986789 containerd[1444]: time="2025-01-13T21:07:45.986707386Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jan 13 21:07:45.986789 containerd[1444]: time="2025-01-13T21:07:45.986718706Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 13 21:07:45.986942 containerd[1444]: time="2025-01-13T21:07:45.986795348Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 13 21:07:45.993940 containerd[1444]: time="2025-01-13T21:07:45.993631999Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jan 13 21:07:45.993940 containerd[1444]: time="2025-01-13T21:07:45.993683000Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jan 13 21:07:45.993940 containerd[1444]: time="2025-01-13T21:07:45.993707761Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 13 21:07:45.993940 containerd[1444]: time="2025-01-13T21:07:45.993774122Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 13 21:07:46.004353 systemd[1]: Started cri-containerd-be5b2a815a8e79d76a2ae824b43213ca378fbeb87cdeecb288f250be08eb753f.scope - libcontainer container be5b2a815a8e79d76a2ae824b43213ca378fbeb87cdeecb288f250be08eb753f.
Jan 13 21:07:46.007777 systemd[1]: Started cri-containerd-886f24f92c07da32a3375bef6112796983206721d0932730bb1b2350a01b2262.scope - libcontainer container 886f24f92c07da32a3375bef6112796983206721d0932730bb1b2350a01b2262.
Jan 13 21:07:46.037343 containerd[1444]: time="2025-01-13T21:07:46.037297929Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-47lw7,Uid:4682656b-169c-46d3-ba28-ccf35bbd2edf,Namespace:kube-system,Attempt:0,} returns sandbox id \"be5b2a815a8e79d76a2ae824b43213ca378fbeb87cdeecb288f250be08eb753f\""
Jan 13 21:07:46.038078 kubelet[2515]: E0113 21:07:46.038049 2515 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 13 21:07:46.041116 containerd[1444]: time="2025-01-13T21:07:46.041053598Z" level=info msg="CreateContainer within sandbox \"be5b2a815a8e79d76a2ae824b43213ca378fbeb87cdeecb288f250be08eb753f\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}"
Jan 13 21:07:46.045726 containerd[1444]: time="2025-01-13T21:07:46.045683843Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-flannel-ds-m6t2c,Uid:dd920cd7-056b-4ab7-b618-a4b8210d03cf,Namespace:kube-flannel,Attempt:0,} returns sandbox id \"886f24f92c07da32a3375bef6112796983206721d0932730bb1b2350a01b2262\""
Jan 13 21:07:46.046485 kubelet[2515]: E0113 21:07:46.046411 2515 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 13 21:07:46.047268 containerd[1444]: time="2025-01-13T21:07:46.047235792Z" level=info msg="PullImage \"docker.io/flannel/flannel-cni-plugin:v1.1.2\""
Jan 13 21:07:46.059163 containerd[1444]: time="2025-01-13T21:07:46.059108610Z" level=info msg="CreateContainer within sandbox \"be5b2a815a8e79d76a2ae824b43213ca378fbeb87cdeecb288f250be08eb753f\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"00170606aa1463238fc763f4757bcf1897eb4c2d60f0b248e1a4bf93288d8e80\""
Jan 13 21:07:46.059618 containerd[1444]: time="2025-01-13T21:07:46.059581259Z" level=info msg="StartContainer for \"00170606aa1463238fc763f4757bcf1897eb4c2d60f0b248e1a4bf93288d8e80\""
Jan 13 21:07:46.084371 systemd[1]: Started cri-containerd-00170606aa1463238fc763f4757bcf1897eb4c2d60f0b248e1a4bf93288d8e80.scope - libcontainer container 00170606aa1463238fc763f4757bcf1897eb4c2d60f0b248e1a4bf93288d8e80.
Jan 13 21:07:46.107075 containerd[1444]: time="2025-01-13T21:07:46.107029212Z" level=info msg="StartContainer for \"00170606aa1463238fc763f4757bcf1897eb4c2d60f0b248e1a4bf93288d8e80\" returns successfully"
Jan 13 21:07:46.277522 kubelet[2515]: E0113 21:07:46.276565 2515 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 13 21:07:46.286885 kubelet[2515]: I0113 21:07:46.286434 2515 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-47lw7" podStartSLOduration=1.2864179519999999 podStartE2EDuration="1.286417952s" podCreationTimestamp="2025-01-13 21:07:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-13 21:07:46.285794861 +0000 UTC m=+15.129471536" watchObservedRunningTime="2025-01-13 21:07:46.286417952 +0000 UTC m=+15.130094587"
Jan 13 21:07:47.143400 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount857349437.mount: Deactivated successfully.
Jan 13 21:07:47.186909 containerd[1444]: time="2025-01-13T21:07:47.186852929Z" level=info msg="ImageCreate event name:\"docker.io/flannel/flannel-cni-plugin:v1.1.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 13 21:07:47.188125 containerd[1444]: time="2025-01-13T21:07:47.188079870Z" level=info msg="stop pulling image docker.io/flannel/flannel-cni-plugin:v1.1.2: active requests=0, bytes read=3673532"
Jan 13 21:07:47.189205 containerd[1444]: time="2025-01-13T21:07:47.188903245Z" level=info msg="ImageCreate event name:\"sha256:b45062ceea496fc421523388cb91166abc7715a15c2e2cbab4e6f8c9d5dc0ab8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 13 21:07:47.192287 containerd[1444]: time="2025-01-13T21:07:47.191999139Z" level=info msg="ImageCreate event name:\"docker.io/flannel/flannel-cni-plugin@sha256:bf4b62b131666d040f35a327d906ee5a3418280b68a88d9b9c7e828057210443\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 13 21:07:47.193218 containerd[1444]: time="2025-01-13T21:07:47.192818314Z" level=info msg="Pulled image \"docker.io/flannel/flannel-cni-plugin:v1.1.2\" with image id \"sha256:b45062ceea496fc421523388cb91166abc7715a15c2e2cbab4e6f8c9d5dc0ab8\", repo tag \"docker.io/flannel/flannel-cni-plugin:v1.1.2\", repo digest \"docker.io/flannel/flannel-cni-plugin@sha256:bf4b62b131666d040f35a327d906ee5a3418280b68a88d9b9c7e828057210443\", size \"3662650\" in 1.145515921s"
Jan 13 21:07:47.193218 containerd[1444]: time="2025-01-13T21:07:47.192849514Z" level=info msg="PullImage \"docker.io/flannel/flannel-cni-plugin:v1.1.2\" returns image reference \"sha256:b45062ceea496fc421523388cb91166abc7715a15c2e2cbab4e6f8c9d5dc0ab8\""
Jan 13 21:07:47.196034 containerd[1444]: time="2025-01-13T21:07:47.195980369Z" level=info msg="CreateContainer within sandbox \"886f24f92c07da32a3375bef6112796983206721d0932730bb1b2350a01b2262\" for container &ContainerMetadata{Name:install-cni-plugin,Attempt:0,}"
Jan 13 21:07:47.211987 containerd[1444]: time="2025-01-13T21:07:47.211934970Z" level=info msg="CreateContainer within sandbox \"886f24f92c07da32a3375bef6112796983206721d0932730bb1b2350a01b2262\" for &ContainerMetadata{Name:install-cni-plugin,Attempt:0,} returns container id \"860b639a877e55a2809e9688b99e09c49da7199cd8850d79ce83e288cc8b2a0b\""
Jan 13 21:07:47.212506 containerd[1444]: time="2025-01-13T21:07:47.212457699Z" level=info msg="StartContainer for \"860b639a877e55a2809e9688b99e09c49da7199cd8850d79ce83e288cc8b2a0b\""
Jan 13 21:07:47.241351 systemd[1]: Started cri-containerd-860b639a877e55a2809e9688b99e09c49da7199cd8850d79ce83e288cc8b2a0b.scope - libcontainer container 860b639a877e55a2809e9688b99e09c49da7199cd8850d79ce83e288cc8b2a0b.
Jan 13 21:07:47.261136 containerd[1444]: time="2025-01-13T21:07:47.261092315Z" level=info msg="StartContainer for \"860b639a877e55a2809e9688b99e09c49da7199cd8850d79ce83e288cc8b2a0b\" returns successfully"
Jan 13 21:07:47.267583 systemd[1]: cri-containerd-860b639a877e55a2809e9688b99e09c49da7199cd8850d79ce83e288cc8b2a0b.scope: Deactivated successfully.
Jan 13 21:07:47.280282 kubelet[2515]: E0113 21:07:47.280250 2515 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 13 21:07:47.305771 containerd[1444]: time="2025-01-13T21:07:47.305703460Z" level=info msg="shim disconnected" id=860b639a877e55a2809e9688b99e09c49da7199cd8850d79ce83e288cc8b2a0b namespace=k8s.io
Jan 13 21:07:47.305771 containerd[1444]: time="2025-01-13T21:07:47.305755980Z" level=warning msg="cleaning up after shim disconnected" id=860b639a877e55a2809e9688b99e09c49da7199cd8850d79ce83e288cc8b2a0b namespace=k8s.io
Jan 13 21:07:47.305771 containerd[1444]: time="2025-01-13T21:07:47.305764061Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jan 13 21:07:48.283698 kubelet[2515]: E0113 21:07:48.283405 2515 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 13 21:07:48.285278 containerd[1444]: time="2025-01-13T21:07:48.285224956Z" level=info msg="PullImage \"docker.io/flannel/flannel:v0.22.0\""
Jan 13 21:07:49.415463 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3574143915.mount: Deactivated successfully.
Jan 13 21:07:50.068156 containerd[1444]: time="2025-01-13T21:07:50.068098571Z" level=info msg="ImageCreate event name:\"docker.io/flannel/flannel:v0.22.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 13 21:07:50.069225 containerd[1444]: time="2025-01-13T21:07:50.068941064Z" level=info msg="stop pulling image docker.io/flannel/flannel:v0.22.0: active requests=0, bytes read=26874261"
Jan 13 21:07:50.070201 containerd[1444]: time="2025-01-13T21:07:50.069805118Z" level=info msg="ImageCreate event name:\"sha256:b3d1319ea6da12d4a1dd21a923f6a71f942a7b6e2c4763b8a3cca0725fb8aadf\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 13 21:07:50.073091 containerd[1444]: time="2025-01-13T21:07:50.072899526Z" level=info msg="ImageCreate event name:\"docker.io/flannel/flannel@sha256:5f83f1243057458e27249157394e3859cf31cc075354af150d497f2ebc8b54db\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 13 21:07:50.074145 containerd[1444]: time="2025-01-13T21:07:50.074110944Z" level=info msg="Pulled image \"docker.io/flannel/flannel:v0.22.0\" with image id \"sha256:b3d1319ea6da12d4a1dd21a923f6a71f942a7b6e2c4763b8a3cca0725fb8aadf\", repo tag \"docker.io/flannel/flannel:v0.22.0\", repo digest \"docker.io/flannel/flannel@sha256:5f83f1243057458e27249157394e3859cf31cc075354af150d497f2ebc8b54db\", size \"26863435\" in 1.788841907s"
Jan 13 21:07:50.074145 containerd[1444]: time="2025-01-13T21:07:50.074148345Z" level=info msg="PullImage \"docker.io/flannel/flannel:v0.22.0\" returns image reference \"sha256:b3d1319ea6da12d4a1dd21a923f6a71f942a7b6e2c4763b8a3cca0725fb8aadf\""
Jan 13 21:07:50.078399 containerd[1444]: time="2025-01-13T21:07:50.078350970Z" level=info msg="CreateContainer within sandbox \"886f24f92c07da32a3375bef6112796983206721d0932730bb1b2350a01b2262\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}"
Jan 13 21:07:50.093536 containerd[1444]: time="2025-01-13T21:07:50.093490484Z" level=info msg="CreateContainer within sandbox \"886f24f92c07da32a3375bef6112796983206721d0932730bb1b2350a01b2262\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"41bd58b126b86fd62514a122d509a144db8cfd006e5db1d4124b87293d75c536\""
Jan 13 21:07:50.094009 containerd[1444]: time="2025-01-13T21:07:50.093983412Z" level=info msg="StartContainer for \"41bd58b126b86fd62514a122d509a144db8cfd006e5db1d4124b87293d75c536\""
Jan 13 21:07:50.126385 systemd[1]: Started cri-containerd-41bd58b126b86fd62514a122d509a144db8cfd006e5db1d4124b87293d75c536.scope - libcontainer container 41bd58b126b86fd62514a122d509a144db8cfd006e5db1d4124b87293d75c536.
Jan 13 21:07:50.165957 systemd[1]: cri-containerd-41bd58b126b86fd62514a122d509a144db8cfd006e5db1d4124b87293d75c536.scope: Deactivated successfully.
Jan 13 21:07:50.187591 containerd[1444]: time="2025-01-13T21:07:50.187459257Z" level=info msg="StartContainer for \"41bd58b126b86fd62514a122d509a144db8cfd006e5db1d4124b87293d75c536\" returns successfully"
Jan 13 21:07:50.189533 containerd[1444]: time="2025-01-13T21:07:50.189439528Z" level=info msg="shim disconnected" id=41bd58b126b86fd62514a122d509a144db8cfd006e5db1d4124b87293d75c536 namespace=k8s.io
Jan 13 21:07:50.189533 containerd[1444]: time="2025-01-13T21:07:50.189484608Z" level=warning msg="cleaning up after shim disconnected" id=41bd58b126b86fd62514a122d509a144db8cfd006e5db1d4124b87293d75c536 namespace=k8s.io
Jan 13 21:07:50.189533 containerd[1444]: time="2025-01-13T21:07:50.189493048Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jan 13 21:07:50.198484 kubelet[2515]: I0113 21:07:50.198455 2515 kubelet_node_status.go:497] "Fast updating node status as it just became ready"
Jan 13 21:07:50.221726 kubelet[2515]: I0113 21:07:50.221661 2515 topology_manager.go:215] "Topology Admit Handler" podUID="bb43b65e-b5d6-4c09-9070-f3de94c96e53" podNamespace="kube-system" podName="coredns-7db6d8ff4d-rcwm5"
Jan 13 21:07:50.221875 kubelet[2515]: I0113 21:07:50.221849 2515 topology_manager.go:215] "Topology Admit Handler" podUID="700d6560-c6f9-43d3-9ec8-822420f74812" podNamespace="kube-system" podName="coredns-7db6d8ff4d-l8cnf"
Jan 13 21:07:50.229536 systemd[1]: Created slice kubepods-burstable-pod700d6560_c6f9_43d3_9ec8_822420f74812.slice - libcontainer container kubepods-burstable-pod700d6560_c6f9_43d3_9ec8_822420f74812.slice.
Jan 13 21:07:50.235806 systemd[1]: Created slice kubepods-burstable-podbb43b65e_b5d6_4c09_9070_f3de94c96e53.slice - libcontainer container kubepods-burstable-podbb43b65e_b5d6_4c09_9070_f3de94c96e53.slice.
Jan 13 21:07:50.245359 kubelet[2515]: I0113 21:07:50.245319 2515 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/700d6560-c6f9-43d3-9ec8-822420f74812-config-volume\") pod \"coredns-7db6d8ff4d-l8cnf\" (UID: \"700d6560-c6f9-43d3-9ec8-822420f74812\") " pod="kube-system/coredns-7db6d8ff4d-l8cnf"
Jan 13 21:07:50.245359 kubelet[2515]: I0113 21:07:50.245360 2515 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/bb43b65e-b5d6-4c09-9070-f3de94c96e53-config-volume\") pod \"coredns-7db6d8ff4d-rcwm5\" (UID: \"bb43b65e-b5d6-4c09-9070-f3de94c96e53\") " pod="kube-system/coredns-7db6d8ff4d-rcwm5"
Jan 13 21:07:50.245518 kubelet[2515]: I0113 21:07:50.245380 2515 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xrkg7\" (UniqueName: \"kubernetes.io/projected/bb43b65e-b5d6-4c09-9070-f3de94c96e53-kube-api-access-xrkg7\") pod \"coredns-7db6d8ff4d-rcwm5\" (UID: \"bb43b65e-b5d6-4c09-9070-f3de94c96e53\") " pod="kube-system/coredns-7db6d8ff4d-rcwm5"
Jan 13 21:07:50.245518 kubelet[2515]: I0113 21:07:50.245411 2515 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hbd75\" (UniqueName: \"kubernetes.io/projected/700d6560-c6f9-43d3-9ec8-822420f74812-kube-api-access-hbd75\") pod \"coredns-7db6d8ff4d-l8cnf\" (UID: \"700d6560-c6f9-43d3-9ec8-822420f74812\") " pod="kube-system/coredns-7db6d8ff4d-l8cnf"
Jan 13 21:07:50.290337 kubelet[2515]: E0113 21:07:50.290258 2515 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 13 21:07:50.292330 containerd[1444]: time="2025-01-13T21:07:50.292078355Z" level=info msg="CreateContainer within sandbox \"886f24f92c07da32a3375bef6112796983206721d0932730bb1b2350a01b2262\" for container &ContainerMetadata{Name:kube-flannel,Attempt:0,}"
Jan 13 21:07:50.304131 containerd[1444]: time="2025-01-13T21:07:50.304087700Z" level=info msg="CreateContainer within sandbox \"886f24f92c07da32a3375bef6112796983206721d0932730bb1b2350a01b2262\" for &ContainerMetadata{Name:kube-flannel,Attempt:0,} returns container id \"e09cdbbd0072986681ac05ec88cdcf46c1cb114c129159d9b3b402710dd1d280\""
Jan 13 21:07:50.304566 containerd[1444]: time="2025-01-13T21:07:50.304541467Z" level=info msg="StartContainer for \"e09cdbbd0072986681ac05ec88cdcf46c1cb114c129159d9b3b402710dd1d280\""
Jan 13 21:07:50.329358 systemd[1]: Started cri-containerd-e09cdbbd0072986681ac05ec88cdcf46c1cb114c129159d9b3b402710dd1d280.scope - libcontainer container e09cdbbd0072986681ac05ec88cdcf46c1cb114c129159d9b3b402710dd1d280.
Jan 13 21:07:50.331585 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-41bd58b126b86fd62514a122d509a144db8cfd006e5db1d4124b87293d75c536-rootfs.mount: Deactivated successfully.
Jan 13 21:07:50.353764 containerd[1444]: time="2025-01-13T21:07:50.353657867Z" level=info msg="StartContainer for \"e09cdbbd0072986681ac05ec88cdcf46c1cb114c129159d9b3b402710dd1d280\" returns successfully" Jan 13 21:07:50.534714 kubelet[2515]: E0113 21:07:50.534670 2515 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 21:07:50.535435 containerd[1444]: time="2025-01-13T21:07:50.535275915Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-l8cnf,Uid:700d6560-c6f9-43d3-9ec8-822420f74812,Namespace:kube-system,Attempt:0,}" Jan 13 21:07:50.539162 kubelet[2515]: E0113 21:07:50.539132 2515 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 21:07:50.540016 containerd[1444]: time="2025-01-13T21:07:50.539947627Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-rcwm5,Uid:bb43b65e-b5d6-4c09-9070-f3de94c96e53,Namespace:kube-system,Attempt:0,}" Jan 13 21:07:50.603049 containerd[1444]: time="2025-01-13T21:07:50.602960082Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-rcwm5,Uid:bb43b65e-b5d6-4c09-9070-f3de94c96e53,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"391abedba1923749e1830fe7a246520eca856524e576f37901cce081c1dd07e9\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory" Jan 13 21:07:50.603327 kubelet[2515]: E0113 21:07:50.603272 2515 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"391abedba1923749e1830fe7a246520eca856524e576f37901cce081c1dd07e9\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open 
/run/flannel/subnet.env: no such file or directory" Jan 13 21:07:50.603388 kubelet[2515]: E0113 21:07:50.603349 2515 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"391abedba1923749e1830fe7a246520eca856524e576f37901cce081c1dd07e9\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory" pod="kube-system/coredns-7db6d8ff4d-rcwm5" Jan 13 21:07:50.603388 kubelet[2515]: E0113 21:07:50.603369 2515 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"391abedba1923749e1830fe7a246520eca856524e576f37901cce081c1dd07e9\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory" pod="kube-system/coredns-7db6d8ff4d-rcwm5" Jan 13 21:07:50.603610 kubelet[2515]: E0113 21:07:50.603431 2515 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7db6d8ff4d-rcwm5_kube-system(bb43b65e-b5d6-4c09-9070-f3de94c96e53)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7db6d8ff4d-rcwm5_kube-system(bb43b65e-b5d6-4c09-9070-f3de94c96e53)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"391abedba1923749e1830fe7a246520eca856524e576f37901cce081c1dd07e9\\\": plugin type=\\\"flannel\\\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory\"" pod="kube-system/coredns-7db6d8ff4d-rcwm5" podUID="bb43b65e-b5d6-4c09-9070-f3de94c96e53" Jan 13 21:07:50.604527 containerd[1444]: time="2025-01-13T21:07:50.604474425Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-l8cnf,Uid:700d6560-c6f9-43d3-9ec8-822420f74812,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox 
\"a1ad2bfb3918b9fcab90270338c826cf0356d87c699df682ec69e79c6edd6db6\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory" Jan 13 21:07:50.604678 kubelet[2515]: E0113 21:07:50.604642 2515 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a1ad2bfb3918b9fcab90270338c826cf0356d87c699df682ec69e79c6edd6db6\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory" Jan 13 21:07:50.604712 kubelet[2515]: E0113 21:07:50.604690 2515 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a1ad2bfb3918b9fcab90270338c826cf0356d87c699df682ec69e79c6edd6db6\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory" pod="kube-system/coredns-7db6d8ff4d-l8cnf" Jan 13 21:07:50.604712 kubelet[2515]: E0113 21:07:50.604706 2515 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a1ad2bfb3918b9fcab90270338c826cf0356d87c699df682ec69e79c6edd6db6\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory" pod="kube-system/coredns-7db6d8ff4d-l8cnf" Jan 13 21:07:50.604804 kubelet[2515]: E0113 21:07:50.604739 2515 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7db6d8ff4d-l8cnf_kube-system(700d6560-c6f9-43d3-9ec8-822420f74812)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7db6d8ff4d-l8cnf_kube-system(700d6560-c6f9-43d3-9ec8-822420f74812)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"a1ad2bfb3918b9fcab90270338c826cf0356d87c699df682ec69e79c6edd6db6\\\": 
plugin type=\\\"flannel\\\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory\"" pod="kube-system/coredns-7db6d8ff4d-l8cnf" podUID="700d6560-c6f9-43d3-9ec8-822420f74812" Jan 13 21:07:51.300518 kubelet[2515]: E0113 21:07:51.300488 2515 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 21:07:51.310241 kubelet[2515]: I0113 21:07:51.310192 2515 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-flannel/kube-flannel-ds-m6t2c" podStartSLOduration=2.280747791 podStartE2EDuration="6.310157184s" podCreationTimestamp="2025-01-13 21:07:45 +0000 UTC" firstStartedPulling="2025-01-13 21:07:46.046819104 +0000 UTC m=+14.890495779" lastFinishedPulling="2025-01-13 21:07:50.076228497 +0000 UTC m=+18.919905172" observedRunningTime="2025-01-13 21:07:51.309378773 +0000 UTC m=+20.153055488" watchObservedRunningTime="2025-01-13 21:07:51.310157184 +0000 UTC m=+20.153833859" Jan 13 21:07:51.330826 systemd[1]: run-netns-cni\x2ddf33355a\x2d9dfd\x2d5b1d\x2d226a\x2d9799221d6151.mount: Deactivated successfully. Jan 13 21:07:51.330920 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-391abedba1923749e1830fe7a246520eca856524e576f37901cce081c1dd07e9-shm.mount: Deactivated successfully. Jan 13 21:07:51.330973 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-a1ad2bfb3918b9fcab90270338c826cf0356d87c699df682ec69e79c6edd6db6-shm.mount: Deactivated successfully. 
Jan 13 21:07:51.484017 systemd-networkd[1371]: flannel.1: Link UP Jan 13 21:07:51.484029 systemd-networkd[1371]: flannel.1: Gained carrier Jan 13 21:07:52.302240 kubelet[2515]: E0113 21:07:52.302201 2515 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 21:07:53.362636 systemd-networkd[1371]: flannel.1: Gained IPv6LL Jan 13 21:07:59.576759 systemd[1]: Started sshd@5-10.0.0.15:22-10.0.0.1:32922.service - OpenSSH per-connection server daemon (10.0.0.1:32922). Jan 13 21:07:59.614543 sshd[3185]: Accepted publickey for core from 10.0.0.1 port 32922 ssh2: RSA SHA256:yd4gyStb+mhc+KSvOhXa4vXVFZWeTXZvH887VPDApJg Jan 13 21:07:59.614846 sshd[3185]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 21:07:59.618567 systemd-logind[1431]: New session 6 of user core. Jan 13 21:07:59.632341 systemd[1]: Started session-6.scope - Session 6 of User core. Jan 13 21:07:59.752955 sshd[3185]: pam_unix(sshd:session): session closed for user core Jan 13 21:07:59.756110 systemd[1]: sshd@5-10.0.0.15:22-10.0.0.1:32922.service: Deactivated successfully. Jan 13 21:07:59.757815 systemd[1]: session-6.scope: Deactivated successfully. Jan 13 21:07:59.759883 systemd-logind[1431]: Session 6 logged out. Waiting for processes to exit. Jan 13 21:07:59.760891 systemd-logind[1431]: Removed session 6. 
Jan 13 21:08:03.220789 kubelet[2515]: E0113 21:08:03.220597 2515 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 21:08:03.221227 containerd[1444]: time="2025-01-13T21:08:03.220946615Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-rcwm5,Uid:bb43b65e-b5d6-4c09-9070-f3de94c96e53,Namespace:kube-system,Attempt:0,}" Jan 13 21:08:03.256354 systemd-networkd[1371]: cni0: Link UP Jan 13 21:08:03.256361 systemd-networkd[1371]: cni0: Gained carrier Jan 13 21:08:03.256658 systemd-networkd[1371]: cni0: Lost carrier Jan 13 21:08:03.261844 systemd-networkd[1371]: vethd2480ec9: Link UP Jan 13 21:08:03.264808 kernel: cni0: port 1(vethd2480ec9) entered blocking state Jan 13 21:08:03.264888 kernel: cni0: port 1(vethd2480ec9) entered disabled state Jan 13 21:08:03.264906 kernel: vethd2480ec9: entered allmulticast mode Jan 13 21:08:03.265699 kernel: vethd2480ec9: entered promiscuous mode Jan 13 21:08:03.266579 kernel: cni0: port 1(vethd2480ec9) entered blocking state Jan 13 21:08:03.266623 kernel: cni0: port 1(vethd2480ec9) entered forwarding state Jan 13 21:08:03.268578 kernel: cni0: port 1(vethd2480ec9) entered disabled state Jan 13 21:08:03.275882 kernel: cni0: port 1(vethd2480ec9) entered blocking state Jan 13 21:08:03.275960 kernel: cni0: port 1(vethd2480ec9) entered forwarding state Jan 13 21:08:03.275756 systemd-networkd[1371]: vethd2480ec9: Gained carrier Jan 13 21:08:03.275983 systemd-networkd[1371]: cni0: Gained carrier Jan 13 21:08:03.278187 containerd[1444]: map[string]interface {}{"cniVersion":"0.3.1", "hairpinMode":true, "ipMasq":false, "ipam":map[string]interface {}{"ranges":[][]map[string]interface {}{[]map[string]interface {}{map[string]interface {}{"subnet":"192.168.0.0/24"}}}, "routes":[]types.Route{types.Route{Dst:net.IPNet{IP:net.IP{0xc0, 0xa8, 0x0, 0x0}, Mask:net.IPMask{0xff, 0xff, 0x80, 0x0}}, 
GW:net.IP(nil)}}, "type":"host-local"}, "isDefaultGateway":true, "isGateway":true, "mtu":(*uint)(0x4000018938), "name":"cbr0", "type":"bridge"} Jan 13 21:08:03.278187 containerd[1444]: delegateAdd: netconf sent to delegate plugin: Jan 13 21:08:03.295269 containerd[1444]: {"cniVersion":"0.3.1","hairpinMode":true,"ipMasq":false,"ipam":{"ranges":[[{"subnet":"192.168.0.0/24"}]],"routes":[{"dst":"192.168.0.0/17"}],"type":"host-local"},"isDefaultGateway":true,"isGateway":true,"mtu":1450,"name":"cbr0","type":"bridge"}time="2025-01-13T21:08:03.295034902Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 13 21:08:03.295269 containerd[1444]: time="2025-01-13T21:08:03.295105422Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 13 21:08:03.295269 containerd[1444]: time="2025-01-13T21:08:03.295123303Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 13 21:08:03.295269 containerd[1444]: time="2025-01-13T21:08:03.295230864Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 13 21:08:03.328379 systemd[1]: Started cri-containerd-d7e1e1e2457499a4edc23c301d90293d9c47ee22df863462c80f0cae9a778ace.scope - libcontainer container d7e1e1e2457499a4edc23c301d90293d9c47ee22df863462c80f0cae9a778ace. 
Jan 13 21:08:03.338918 systemd-resolved[1315]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jan 13 21:08:03.355274 containerd[1444]: time="2025-01-13T21:08:03.355235492Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-rcwm5,Uid:bb43b65e-b5d6-4c09-9070-f3de94c96e53,Namespace:kube-system,Attempt:0,} returns sandbox id \"d7e1e1e2457499a4edc23c301d90293d9c47ee22df863462c80f0cae9a778ace\"" Jan 13 21:08:03.356468 kubelet[2515]: E0113 21:08:03.355927 2515 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 21:08:03.359313 containerd[1444]: time="2025-01-13T21:08:03.358787607Z" level=info msg="CreateContainer within sandbox \"d7e1e1e2457499a4edc23c301d90293d9c47ee22df863462c80f0cae9a778ace\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jan 13 21:08:03.385090 containerd[1444]: time="2025-01-13T21:08:03.385032464Z" level=info msg="CreateContainer within sandbox \"d7e1e1e2457499a4edc23c301d90293d9c47ee22df863462c80f0cae9a778ace\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"edb87026acb7e47267d0ac723a18eda9b1a72364b6d2d391fa098d8bde19aa11\"" Jan 13 21:08:03.385587 containerd[1444]: time="2025-01-13T21:08:03.385566549Z" level=info msg="StartContainer for \"edb87026acb7e47267d0ac723a18eda9b1a72364b6d2d391fa098d8bde19aa11\"" Jan 13 21:08:03.416340 systemd[1]: Started cri-containerd-edb87026acb7e47267d0ac723a18eda9b1a72364b6d2d391fa098d8bde19aa11.scope - libcontainer container edb87026acb7e47267d0ac723a18eda9b1a72364b6d2d391fa098d8bde19aa11. 
Jan 13 21:08:03.440875 containerd[1444]: time="2025-01-13T21:08:03.440834491Z" level=info msg="StartContainer for \"edb87026acb7e47267d0ac723a18eda9b1a72364b6d2d391fa098d8bde19aa11\" returns successfully" Jan 13 21:08:04.327223 kubelet[2515]: E0113 21:08:04.325705 2515 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 21:08:04.336965 kubelet[2515]: I0113 21:08:04.336909 2515 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7db6d8ff4d-rcwm5" podStartSLOduration=19.336895945 podStartE2EDuration="19.336895945s" podCreationTimestamp="2025-01-13 21:07:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-13 21:08:04.336716063 +0000 UTC m=+33.180392738" watchObservedRunningTime="2025-01-13 21:08:04.336895945 +0000 UTC m=+33.180572620" Jan 13 21:08:04.766106 systemd[1]: Started sshd@6-10.0.0.15:22-10.0.0.1:38224.service - OpenSSH per-connection server daemon (10.0.0.1:38224). Jan 13 21:08:04.808479 sshd[3346]: Accepted publickey for core from 10.0.0.1 port 38224 ssh2: RSA SHA256:yd4gyStb+mhc+KSvOhXa4vXVFZWeTXZvH887VPDApJg Jan 13 21:08:04.811475 sshd[3346]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 21:08:04.819449 systemd-logind[1431]: New session 7 of user core. Jan 13 21:08:04.828345 systemd[1]: Started session-7.scope - Session 7 of User core. Jan 13 21:08:04.938968 sshd[3346]: pam_unix(sshd:session): session closed for user core Jan 13 21:08:04.942030 systemd[1]: session-7.scope: Deactivated successfully. Jan 13 21:08:04.943305 systemd[1]: sshd@6-10.0.0.15:22-10.0.0.1:38224.service: Deactivated successfully. Jan 13 21:08:04.946083 systemd-logind[1431]: Session 7 logged out. Waiting for processes to exit. Jan 13 21:08:04.947085 systemd-logind[1431]: Removed session 7. 
Jan 13 21:08:05.010397 systemd-networkd[1371]: vethd2480ec9: Gained IPv6LL Jan 13 21:08:05.202354 systemd-networkd[1371]: cni0: Gained IPv6LL Jan 13 21:08:05.327712 kubelet[2515]: E0113 21:08:05.327382 2515 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 21:08:06.223744 kubelet[2515]: E0113 21:08:06.223708 2515 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 21:08:06.224108 containerd[1444]: time="2025-01-13T21:08:06.224061817Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-l8cnf,Uid:700d6560-c6f9-43d3-9ec8-822420f74812,Namespace:kube-system,Attempt:0,}" Jan 13 21:08:06.243264 systemd-networkd[1371]: vethfd1d7b0d: Link UP Jan 13 21:08:06.243836 kernel: cni0: port 2(vethfd1d7b0d) entered blocking state Jan 13 21:08:06.243902 kernel: cni0: port 2(vethfd1d7b0d) entered disabled state Jan 13 21:08:06.243922 kernel: vethfd1d7b0d: entered allmulticast mode Jan 13 21:08:06.245252 kernel: vethfd1d7b0d: entered promiscuous mode Jan 13 21:08:06.249221 kernel: cni0: port 2(vethfd1d7b0d) entered blocking state Jan 13 21:08:06.249263 kernel: cni0: port 2(vethfd1d7b0d) entered forwarding state Jan 13 21:08:06.249344 systemd-networkd[1371]: vethfd1d7b0d: Gained carrier Jan 13 21:08:06.254145 containerd[1444]: map[string]interface {}{"cniVersion":"0.3.1", "hairpinMode":true, "ipMasq":false, "ipam":map[string]interface {}{"ranges":[][]map[string]interface {}{[]map[string]interface {}{map[string]interface {}{"subnet":"192.168.0.0/24"}}}, "routes":[]types.Route{types.Route{Dst:net.IPNet{IP:net.IP{0xc0, 0xa8, 0x0, 0x0}, Mask:net.IPMask{0xff, 0xff, 0x80, 0x0}}, GW:net.IP(nil)}}, "type":"host-local"}, "isDefaultGateway":true, "isGateway":true, "mtu":(*uint)(0x4000018938), "name":"cbr0", 
"type":"bridge"} Jan 13 21:08:06.254145 containerd[1444]: delegateAdd: netconf sent to delegate plugin: Jan 13 21:08:06.270765 containerd[1444]: {"cniVersion":"0.3.1","hairpinMode":true,"ipMasq":false,"ipam":{"ranges":[[{"subnet":"192.168.0.0/24"}]],"routes":[{"dst":"192.168.0.0/17"}],"type":"host-local"},"isDefaultGateway":true,"isGateway":true,"mtu":1450,"name":"cbr0","type":"bridge"}time="2025-01-13T21:08:06.270281435Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 13 21:08:06.270929 containerd[1444]: time="2025-01-13T21:08:06.270726799Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 13 21:08:06.270929 containerd[1444]: time="2025-01-13T21:08:06.270815240Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 13 21:08:06.271020 containerd[1444]: time="2025-01-13T21:08:06.270974441Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 13 21:08:06.293338 systemd[1]: Started cri-containerd-de1c29c1f271290b603433db1c95f9472a55a9a54924909287d737b6d4f85691.scope - libcontainer container de1c29c1f271290b603433db1c95f9472a55a9a54924909287d737b6d4f85691. 
Jan 13 21:08:06.301926 systemd-resolved[1315]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jan 13 21:08:06.316442 containerd[1444]: time="2025-01-13T21:08:06.316407412Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-l8cnf,Uid:700d6560-c6f9-43d3-9ec8-822420f74812,Namespace:kube-system,Attempt:0,} returns sandbox id \"de1c29c1f271290b603433db1c95f9472a55a9a54924909287d737b6d4f85691\"" Jan 13 21:08:06.317015 kubelet[2515]: E0113 21:08:06.316991 2515 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 21:08:06.321534 containerd[1444]: time="2025-01-13T21:08:06.320496369Z" level=info msg="CreateContainer within sandbox \"de1c29c1f271290b603433db1c95f9472a55a9a54924909287d737b6d4f85691\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jan 13 21:08:06.336815 kubelet[2515]: E0113 21:08:06.336781 2515 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 21:08:06.342715 containerd[1444]: time="2025-01-13T21:08:06.342681250Z" level=info msg="CreateContainer within sandbox \"de1c29c1f271290b603433db1c95f9472a55a9a54924909287d737b6d4f85691\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"ebba04d85b8fcc1694361eeb6c33a9303cc4851127a33048a8bd560ec1399d2f\"" Jan 13 21:08:06.344299 containerd[1444]: time="2025-01-13T21:08:06.344273184Z" level=info msg="StartContainer for \"ebba04d85b8fcc1694361eeb6c33a9303cc4851127a33048a8bd560ec1399d2f\"" Jan 13 21:08:06.368336 systemd[1]: Started cri-containerd-ebba04d85b8fcc1694361eeb6c33a9303cc4851127a33048a8bd560ec1399d2f.scope - libcontainer container ebba04d85b8fcc1694361eeb6c33a9303cc4851127a33048a8bd560ec1399d2f. 
Jan 13 21:08:06.386610 containerd[1444]: time="2025-01-13T21:08:06.386562927Z" level=info msg="StartContainer for \"ebba04d85b8fcc1694361eeb6c33a9303cc4851127a33048a8bd560ec1399d2f\" returns successfully" Jan 13 21:08:07.339752 kubelet[2515]: E0113 21:08:07.339721 2515 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 21:08:07.357522 kubelet[2515]: I0113 21:08:07.357302 2515 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7db6d8ff4d-l8cnf" podStartSLOduration=22.357283788 podStartE2EDuration="22.357283788s" podCreationTimestamp="2025-01-13 21:07:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-13 21:08:07.348724673 +0000 UTC m=+36.192401308" watchObservedRunningTime="2025-01-13 21:08:07.357283788 +0000 UTC m=+36.200960463" Jan 13 21:08:08.274348 systemd-networkd[1371]: vethfd1d7b0d: Gained IPv6LL Jan 13 21:08:08.346569 kubelet[2515]: E0113 21:08:08.346497 2515 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 21:08:09.348459 kubelet[2515]: E0113 21:08:09.348428 2515 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 21:08:09.949666 systemd[1]: Started sshd@7-10.0.0.15:22-10.0.0.1:38236.service - OpenSSH per-connection server daemon (10.0.0.1:38236). 
Jan 13 21:08:09.985426 sshd[3495]: Accepted publickey for core from 10.0.0.1 port 38236 ssh2: RSA SHA256:yd4gyStb+mhc+KSvOhXa4vXVFZWeTXZvH887VPDApJg Jan 13 21:08:09.986767 sshd[3495]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 21:08:09.990344 systemd-logind[1431]: New session 8 of user core. Jan 13 21:08:09.999312 systemd[1]: Started session-8.scope - Session 8 of User core. Jan 13 21:08:10.111330 sshd[3495]: pam_unix(sshd:session): session closed for user core Jan 13 21:08:10.119622 systemd[1]: sshd@7-10.0.0.15:22-10.0.0.1:38236.service: Deactivated successfully. Jan 13 21:08:10.121048 systemd[1]: session-8.scope: Deactivated successfully. Jan 13 21:08:10.122901 systemd-logind[1431]: Session 8 logged out. Waiting for processes to exit. Jan 13 21:08:10.130485 systemd[1]: Started sshd@8-10.0.0.15:22-10.0.0.1:38252.service - OpenSSH per-connection server daemon (10.0.0.1:38252). Jan 13 21:08:10.131414 systemd-logind[1431]: Removed session 8. Jan 13 21:08:10.160682 sshd[3510]: Accepted publickey for core from 10.0.0.1 port 38252 ssh2: RSA SHA256:yd4gyStb+mhc+KSvOhXa4vXVFZWeTXZvH887VPDApJg Jan 13 21:08:10.162223 sshd[3510]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 21:08:10.166686 systemd-logind[1431]: New session 9 of user core. Jan 13 21:08:10.172351 systemd[1]: Started session-9.scope - Session 9 of User core. Jan 13 21:08:10.310310 sshd[3510]: pam_unix(sshd:session): session closed for user core Jan 13 21:08:10.320838 systemd[1]: sshd@8-10.0.0.15:22-10.0.0.1:38252.service: Deactivated successfully. Jan 13 21:08:10.323739 systemd[1]: session-9.scope: Deactivated successfully. Jan 13 21:08:10.327583 systemd-logind[1431]: Session 9 logged out. Waiting for processes to exit. Jan 13 21:08:10.349992 systemd[1]: Started sshd@9-10.0.0.15:22-10.0.0.1:38264.service - OpenSSH per-connection server daemon (10.0.0.1:38264). Jan 13 21:08:10.353110 systemd-logind[1431]: Removed session 9. 
Jan 13 21:08:10.383823 sshd[3523]: Accepted publickey for core from 10.0.0.1 port 38264 ssh2: RSA SHA256:yd4gyStb+mhc+KSvOhXa4vXVFZWeTXZvH887VPDApJg Jan 13 21:08:10.385235 sshd[3523]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 21:08:10.388832 systemd-logind[1431]: New session 10 of user core. Jan 13 21:08:10.399424 systemd[1]: Started session-10.scope - Session 10 of User core. Jan 13 21:08:10.508428 sshd[3523]: pam_unix(sshd:session): session closed for user core Jan 13 21:08:10.511914 systemd[1]: sshd@9-10.0.0.15:22-10.0.0.1:38264.service: Deactivated successfully. Jan 13 21:08:10.513566 systemd[1]: session-10.scope: Deactivated successfully. Jan 13 21:08:10.514212 systemd-logind[1431]: Session 10 logged out. Waiting for processes to exit. Jan 13 21:08:10.514913 systemd-logind[1431]: Removed session 10. Jan 13 21:08:15.518821 systemd[1]: Started sshd@10-10.0.0.15:22-10.0.0.1:53356.service - OpenSSH per-connection server daemon (10.0.0.1:53356). Jan 13 21:08:15.551684 sshd[3560]: Accepted publickey for core from 10.0.0.1 port 53356 ssh2: RSA SHA256:yd4gyStb+mhc+KSvOhXa4vXVFZWeTXZvH887VPDApJg Jan 13 21:08:15.552935 sshd[3560]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 21:08:15.556868 systemd-logind[1431]: New session 11 of user core. Jan 13 21:08:15.569325 systemd[1]: Started session-11.scope - Session 11 of User core. Jan 13 21:08:15.677756 sshd[3560]: pam_unix(sshd:session): session closed for user core Jan 13 21:08:15.689696 systemd[1]: sshd@10-10.0.0.15:22-10.0.0.1:53356.service: Deactivated successfully. Jan 13 21:08:15.691136 systemd[1]: session-11.scope: Deactivated successfully. Jan 13 21:08:15.694244 systemd-logind[1431]: Session 11 logged out. Waiting for processes to exit. Jan 13 21:08:15.699437 systemd[1]: Started sshd@11-10.0.0.15:22-10.0.0.1:53362.service - OpenSSH per-connection server daemon (10.0.0.1:53362). 
Jan 13 21:08:15.700522 systemd-logind[1431]: Removed session 11. Jan 13 21:08:15.728024 sshd[3574]: Accepted publickey for core from 10.0.0.1 port 53362 ssh2: RSA SHA256:yd4gyStb+mhc+KSvOhXa4vXVFZWeTXZvH887VPDApJg Jan 13 21:08:15.729225 sshd[3574]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 21:08:15.732451 systemd-logind[1431]: New session 12 of user core. Jan 13 21:08:15.742297 systemd[1]: Started session-12.scope - Session 12 of User core. Jan 13 21:08:15.963707 sshd[3574]: pam_unix(sshd:session): session closed for user core Jan 13 21:08:15.972558 systemd[1]: sshd@11-10.0.0.15:22-10.0.0.1:53362.service: Deactivated successfully. Jan 13 21:08:15.975621 systemd[1]: session-12.scope: Deactivated successfully. Jan 13 21:08:15.976952 systemd-logind[1431]: Session 12 logged out. Waiting for processes to exit. Jan 13 21:08:15.983424 systemd[1]: Started sshd@12-10.0.0.15:22-10.0.0.1:53374.service - OpenSSH per-connection server daemon (10.0.0.1:53374). Jan 13 21:08:15.984731 systemd-logind[1431]: Removed session 12. Jan 13 21:08:16.015836 sshd[3587]: Accepted publickey for core from 10.0.0.1 port 53374 ssh2: RSA SHA256:yd4gyStb+mhc+KSvOhXa4vXVFZWeTXZvH887VPDApJg Jan 13 21:08:16.017006 sshd[3587]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 21:08:16.020400 systemd-logind[1431]: New session 13 of user core. Jan 13 21:08:16.028321 systemd[1]: Started session-13.scope - Session 13 of User core. Jan 13 21:08:17.241355 sshd[3587]: pam_unix(sshd:session): session closed for user core Jan 13 21:08:17.248435 systemd[1]: sshd@12-10.0.0.15:22-10.0.0.1:53374.service: Deactivated successfully. Jan 13 21:08:17.253457 systemd[1]: session-13.scope: Deactivated successfully. Jan 13 21:08:17.256831 systemd-logind[1431]: Session 13 logged out. Waiting for processes to exit. 
Jan 13 21:08:17.263767 systemd[1]: Started sshd@13-10.0.0.15:22-10.0.0.1:53378.service - OpenSSH per-connection server daemon (10.0.0.1:53378).
Jan 13 21:08:17.265825 systemd-logind[1431]: Removed session 13.
Jan 13 21:08:17.297653 sshd[3633]: Accepted publickey for core from 10.0.0.1 port 53378 ssh2: RSA SHA256:yd4gyStb+mhc+KSvOhXa4vXVFZWeTXZvH887VPDApJg
Jan 13 21:08:17.299095 sshd[3633]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 13 21:08:17.302901 systemd-logind[1431]: New session 14 of user core.
Jan 13 21:08:17.313338 systemd[1]: Started session-14.scope - Session 14 of User core.
Jan 13 21:08:17.528909 sshd[3633]: pam_unix(sshd:session): session closed for user core
Jan 13 21:08:17.540686 systemd[1]: sshd@13-10.0.0.15:22-10.0.0.1:53378.service: Deactivated successfully.
Jan 13 21:08:17.543330 systemd[1]: session-14.scope: Deactivated successfully.
Jan 13 21:08:17.544658 systemd-logind[1431]: Session 14 logged out. Waiting for processes to exit.
Jan 13 21:08:17.556825 systemd[1]: Started sshd@14-10.0.0.15:22-10.0.0.1:53384.service - OpenSSH per-connection server daemon (10.0.0.1:53384).
Jan 13 21:08:17.558726 systemd-logind[1431]: Removed session 14.
Jan 13 21:08:17.585810 sshd[3647]: Accepted publickey for core from 10.0.0.1 port 53384 ssh2: RSA SHA256:yd4gyStb+mhc+KSvOhXa4vXVFZWeTXZvH887VPDApJg
Jan 13 21:08:17.587160 sshd[3647]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 13 21:08:17.590979 systemd-logind[1431]: New session 15 of user core.
Jan 13 21:08:17.601369 systemd[1]: Started session-15.scope - Session 15 of User core.
Jan 13 21:08:17.716579 sshd[3647]: pam_unix(sshd:session): session closed for user core
Jan 13 21:08:17.719861 systemd[1]: sshd@14-10.0.0.15:22-10.0.0.1:53384.service: Deactivated successfully.
Jan 13 21:08:17.721731 systemd[1]: session-15.scope: Deactivated successfully.
Jan 13 21:08:17.722412 systemd-logind[1431]: Session 15 logged out. Waiting for processes to exit.
Jan 13 21:08:17.723543 systemd-logind[1431]: Removed session 15.
Jan 13 21:08:22.740510 systemd[1]: Started sshd@15-10.0.0.15:22-10.0.0.1:40774.service - OpenSSH per-connection server daemon (10.0.0.1:40774).
Jan 13 21:08:22.770480 sshd[3686]: Accepted publickey for core from 10.0.0.1 port 40774 ssh2: RSA SHA256:yd4gyStb+mhc+KSvOhXa4vXVFZWeTXZvH887VPDApJg
Jan 13 21:08:22.770929 sshd[3686]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 13 21:08:22.774413 systemd-logind[1431]: New session 16 of user core.
Jan 13 21:08:22.794326 systemd[1]: Started session-16.scope - Session 16 of User core.
Jan 13 21:08:22.900907 sshd[3686]: pam_unix(sshd:session): session closed for user core
Jan 13 21:08:22.903934 systemd[1]: sshd@15-10.0.0.15:22-10.0.0.1:40774.service: Deactivated successfully.
Jan 13 21:08:22.905951 systemd[1]: session-16.scope: Deactivated successfully.
Jan 13 21:08:22.907747 systemd-logind[1431]: Session 16 logged out. Waiting for processes to exit.
Jan 13 21:08:22.908653 systemd-logind[1431]: Removed session 16.
Jan 13 21:08:27.912544 systemd[1]: Started sshd@16-10.0.0.15:22-10.0.0.1:40778.service - OpenSSH per-connection server daemon (10.0.0.1:40778).
Jan 13 21:08:27.945185 sshd[3722]: Accepted publickey for core from 10.0.0.1 port 40778 ssh2: RSA SHA256:yd4gyStb+mhc+KSvOhXa4vXVFZWeTXZvH887VPDApJg
Jan 13 21:08:27.946654 sshd[3722]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 13 21:08:27.950090 systemd-logind[1431]: New session 17 of user core.
Jan 13 21:08:27.961329 systemd[1]: Started session-17.scope - Session 17 of User core.
Jan 13 21:08:28.062845 sshd[3722]: pam_unix(sshd:session): session closed for user core
Jan 13 21:08:28.066508 systemd[1]: sshd@16-10.0.0.15:22-10.0.0.1:40778.service: Deactivated successfully.
Jan 13 21:08:28.068165 systemd[1]: session-17.scope: Deactivated successfully.
Jan 13 21:08:28.068940 systemd-logind[1431]: Session 17 logged out. Waiting for processes to exit.
Jan 13 21:08:28.069716 systemd-logind[1431]: Removed session 17.
Jan 13 21:08:33.073724 systemd[1]: Started sshd@17-10.0.0.15:22-10.0.0.1:42580.service - OpenSSH per-connection server daemon (10.0.0.1:42580).
Jan 13 21:08:33.107130 sshd[3759]: Accepted publickey for core from 10.0.0.1 port 42580 ssh2: RSA SHA256:yd4gyStb+mhc+KSvOhXa4vXVFZWeTXZvH887VPDApJg
Jan 13 21:08:33.108528 sshd[3759]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 13 21:08:33.111921 systemd-logind[1431]: New session 18 of user core.
Jan 13 21:08:33.117359 systemd[1]: Started session-18.scope - Session 18 of User core.
Jan 13 21:08:33.222431 sshd[3759]: pam_unix(sshd:session): session closed for user core
Jan 13 21:08:33.226232 systemd[1]: sshd@17-10.0.0.15:22-10.0.0.1:42580.service: Deactivated successfully.
Jan 13 21:08:33.227972 systemd[1]: session-18.scope: Deactivated successfully.
Jan 13 21:08:33.228670 systemd-logind[1431]: Session 18 logged out. Waiting for processes to exit.
Jan 13 21:08:33.229428 systemd-logind[1431]: Removed session 18.
Jan 13 21:08:38.233344 systemd[1]: Started sshd@18-10.0.0.15:22-10.0.0.1:42594.service - OpenSSH per-connection server daemon (10.0.0.1:42594).
Jan 13 21:08:38.267189 sshd[3794]: Accepted publickey for core from 10.0.0.1 port 42594 ssh2: RSA SHA256:yd4gyStb+mhc+KSvOhXa4vXVFZWeTXZvH887VPDApJg
Jan 13 21:08:38.267675 sshd[3794]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 13 21:08:38.273536 systemd-logind[1431]: New session 19 of user core.
Jan 13 21:08:38.280378 systemd[1]: Started session-19.scope - Session 19 of User core.
Jan 13 21:08:38.390304 sshd[3794]: pam_unix(sshd:session): session closed for user core
Jan 13 21:08:38.393101 systemd[1]: sshd@18-10.0.0.15:22-10.0.0.1:42594.service: Deactivated successfully.
Jan 13 21:08:38.394733 systemd[1]: session-19.scope: Deactivated successfully.
Jan 13 21:08:38.396238 systemd-logind[1431]: Session 19 logged out. Waiting for processes to exit.
Jan 13 21:08:38.397026 systemd-logind[1431]: Removed session 19.