Mar 17 17:35:01.902662 kernel: Booting Linux on physical CPU 0x0000000000 [0x413fd0c1]
Mar 17 17:35:01.902683 kernel: Linux version 6.6.83-flatcar (build@pony-truck.infra.kinvolk.io) (aarch64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p1) 13.3.1 20240614, GNU ld (Gentoo 2.42 p6) 2.42.0) #1 SMP PREEMPT Mon Mar 17 16:05:23 -00 2025
Mar 17 17:35:01.902693 kernel: KASLR enabled
Mar 17 17:35:01.902699 kernel: efi: EFI v2.7 by EDK II
Mar 17 17:35:01.902705 kernel: efi: SMBIOS 3.0=0xdced0000 MEMATTR=0xdbbbf018 ACPI 2.0=0xd9b43018 RNG=0xd9b43a18 MEMRESERVE=0xd9b40d98
Mar 17 17:35:01.902711 kernel: random: crng init done
Mar 17 17:35:01.902718 kernel: secureboot: Secure boot disabled
Mar 17 17:35:01.902724 kernel: ACPI: Early table checksum verification disabled
Mar 17 17:35:01.902730 kernel: ACPI: RSDP 0x00000000D9B43018 000024 (v02 BOCHS )
Mar 17 17:35:01.902738 kernel: ACPI: XSDT 0x00000000D9B43F18 000064 (v01 BOCHS BXPC 00000001 01000013)
Mar 17 17:35:01.902744 kernel: ACPI: FACP 0x00000000D9B43B18 000114 (v06 BOCHS BXPC 00000001 BXPC 00000001)
Mar 17 17:35:01.902751 kernel: ACPI: DSDT 0x00000000D9B41018 0014A2 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Mar 17 17:35:01.902757 kernel: ACPI: APIC 0x00000000D9B43C98 0001A8 (v04 BOCHS BXPC 00000001 BXPC 00000001)
Mar 17 17:35:01.902763 kernel: ACPI: PPTT 0x00000000D9B43098 00009C (v02 BOCHS BXPC 00000001 BXPC 00000001)
Mar 17 17:35:01.902771 kernel: ACPI: GTDT 0x00000000D9B43818 000060 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Mar 17 17:35:01.902779 kernel: ACPI: MCFG 0x00000000D9B43A98 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001)
Mar 17 17:35:01.902785 kernel: ACPI: SPCR 0x00000000D9B43918 000050 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Mar 17 17:35:01.902792 kernel: ACPI: DBG2 0x00000000D9B43998 000057 (v00 BOCHS BXPC 00000001 BXPC 00000001)
Mar 17 17:35:01.902798 kernel: ACPI: IORT 0x00000000D9B43198 000080 (v03 BOCHS BXPC 00000001 BXPC 00000001)
Mar 17 17:35:01.902804 kernel: ACPI: SPCR: console: pl011,mmio,0x9000000,9600
Mar 17 17:35:01.902811 kernel: NUMA: Failed to initialise from firmware
Mar 17 17:35:01.902822 kernel: NUMA: Faking a node at [mem 0x0000000040000000-0x00000000dcffffff]
Mar 17 17:35:01.902832 kernel: NUMA: NODE_DATA [mem 0xdc958800-0xdc95dfff]
Mar 17 17:35:01.902838 kernel: Zone ranges:
Mar 17 17:35:01.902844 kernel: DMA [mem 0x0000000040000000-0x00000000dcffffff]
Mar 17 17:35:01.902852 kernel: DMA32 empty
Mar 17 17:35:01.902859 kernel: Normal empty
Mar 17 17:35:01.902865 kernel: Movable zone start for each node
Mar 17 17:35:01.902872 kernel: Early memory node ranges
Mar 17 17:35:01.902878 kernel: node 0: [mem 0x0000000040000000-0x00000000d976ffff]
Mar 17 17:35:01.902884 kernel: node 0: [mem 0x00000000d9770000-0x00000000d9b3ffff]
Mar 17 17:35:01.902891 kernel: node 0: [mem 0x00000000d9b40000-0x00000000dce1ffff]
Mar 17 17:35:01.902897 kernel: node 0: [mem 0x00000000dce20000-0x00000000dceaffff]
Mar 17 17:35:01.902904 kernel: node 0: [mem 0x00000000dceb0000-0x00000000dcebffff]
Mar 17 17:35:01.902910 kernel: node 0: [mem 0x00000000dcec0000-0x00000000dcfdffff]
Mar 17 17:35:01.902916 kernel: node 0: [mem 0x00000000dcfe0000-0x00000000dcffffff]
Mar 17 17:35:01.902922 kernel: Initmem setup node 0 [mem 0x0000000040000000-0x00000000dcffffff]
Mar 17 17:35:01.902930 kernel: On node 0, zone DMA: 12288 pages in unavailable ranges
Mar 17 17:35:01.902937 kernel: psci: probing for conduit method from ACPI.
Mar 17 17:35:01.902943 kernel: psci: PSCIv1.1 detected in firmware.
Mar 17 17:35:01.902952 kernel: psci: Using standard PSCI v0.2 function IDs
Mar 17 17:35:01.902959 kernel: psci: Trusted OS migration not required
Mar 17 17:35:01.902966 kernel: psci: SMC Calling Convention v1.1
Mar 17 17:35:01.902974 kernel: smccc: KVM: hypervisor services detected (0x00000000 0x00000000 0x00000000 0x00000003)
Mar 17 17:35:01.902981 kernel: percpu: Embedded 31 pages/cpu s86696 r8192 d32088 u126976
Mar 17 17:35:01.902988 kernel: pcpu-alloc: s86696 r8192 d32088 u126976 alloc=31*4096
Mar 17 17:35:01.902995 kernel: pcpu-alloc: [0] 0 [0] 1 [0] 2 [0] 3
Mar 17 17:35:01.903003 kernel: Detected PIPT I-cache on CPU0
Mar 17 17:35:01.903009 kernel: CPU features: detected: GIC system register CPU interface
Mar 17 17:35:01.903016 kernel: CPU features: detected: Hardware dirty bit management
Mar 17 17:35:01.903023 kernel: CPU features: detected: Spectre-v4
Mar 17 17:35:01.903029 kernel: CPU features: detected: Spectre-BHB
Mar 17 17:35:01.903036 kernel: CPU features: kernel page table isolation forced ON by KASLR
Mar 17 17:35:01.903044 kernel: CPU features: detected: Kernel page table isolation (KPTI)
Mar 17 17:35:01.903051 kernel: CPU features: detected: ARM erratum 1418040
Mar 17 17:35:01.903058 kernel: CPU features: detected: SSBS not fully self-synchronizing
Mar 17 17:35:01.903065 kernel: alternatives: applying boot alternatives
Mar 17 17:35:01.903073 kernel: Kernel command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected acpi=force verity.usrhash=31b104f73129b84fa679201ebe02fbfd197d071bbf0576d6ccc5c5442bcbb405
Mar 17 17:35:01.903080 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Mar 17 17:35:01.903087 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Mar 17 17:35:01.903094 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Mar 17 17:35:01.903100 kernel: Fallback order for Node 0: 0
Mar 17 17:35:01.903107 kernel: Built 1 zonelists, mobility grouping on. Total pages: 633024
Mar 17 17:35:01.903114 kernel: Policy zone: DMA
Mar 17 17:35:01.903122 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Mar 17 17:35:01.903128 kernel: software IO TLB: area num 4.
Mar 17 17:35:01.903135 kernel: software IO TLB: mapped [mem 0x00000000d2e00000-0x00000000d6e00000] (64MB)
Mar 17 17:35:01.903142 kernel: Memory: 2386260K/2572288K available (10240K kernel code, 2186K rwdata, 8100K rodata, 39744K init, 897K bss, 186028K reserved, 0K cma-reserved)
Mar 17 17:35:01.903149 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1
Mar 17 17:35:01.903156 kernel: rcu: Preemptible hierarchical RCU implementation.
Mar 17 17:35:01.903163 kernel: rcu: RCU event tracing is enabled.
Mar 17 17:35:01.903170 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4.
Mar 17 17:35:01.903258 kernel: Trampoline variant of Tasks RCU enabled.
Mar 17 17:35:01.903266 kernel: Tracing variant of Tasks RCU enabled.
Mar 17 17:35:01.903273 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Mar 17 17:35:01.903280 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4
Mar 17 17:35:01.903290 kernel: NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0
Mar 17 17:35:01.903296 kernel: GICv3: 256 SPIs implemented
Mar 17 17:35:01.903303 kernel: GICv3: 0 Extended SPIs implemented
Mar 17 17:35:01.903310 kernel: Root IRQ handler: gic_handle_irq
Mar 17 17:35:01.903317 kernel: GICv3: GICv3 features: 16 PPIs, DirectLPI
Mar 17 17:35:01.903323 kernel: GICv3: CPU0: found redistributor 0 region 0:0x00000000080a0000
Mar 17 17:35:01.903330 kernel: ITS [mem 0x08080000-0x0809ffff]
Mar 17 17:35:01.903337 kernel: ITS@0x0000000008080000: allocated 8192 Devices @400c0000 (indirect, esz 8, psz 64K, shr 1)
Mar 17 17:35:01.903344 kernel: ITS@0x0000000008080000: allocated 8192 Interrupt Collections @400d0000 (flat, esz 8, psz 64K, shr 1)
Mar 17 17:35:01.903351 kernel: GICv3: using LPI property table @0x00000000400f0000
Mar 17 17:35:01.903358 kernel: GICv3: CPU0: using allocated LPI pending table @0x0000000040100000
Mar 17 17:35:01.903366 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Mar 17 17:35:01.903373 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Mar 17 17:35:01.903380 kernel: arch_timer: cp15 timer(s) running at 25.00MHz (virt).
Mar 17 17:35:01.903387 kernel: clocksource: arch_sys_counter: mask: 0xffffffffffffff max_cycles: 0x5c40939b5, max_idle_ns: 440795202646 ns
Mar 17 17:35:01.903394 kernel: sched_clock: 56 bits at 25MHz, resolution 40ns, wraps every 4398046511100ns
Mar 17 17:35:01.903401 kernel: arm-pv: using stolen time PV
Mar 17 17:35:01.903408 kernel: Console: colour dummy device 80x25
Mar 17 17:35:01.903415 kernel: ACPI: Core revision 20230628
Mar 17 17:35:01.903430 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 50.00 BogoMIPS (lpj=25000)
Mar 17 17:35:01.903438 kernel: pid_max: default: 32768 minimum: 301
Mar 17 17:35:01.903447 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity
Mar 17 17:35:01.903454 kernel: landlock: Up and running.
Mar 17 17:35:01.903461 kernel: SELinux: Initializing.
Mar 17 17:35:01.903468 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Mar 17 17:35:01.903475 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Mar 17 17:35:01.903482 kernel: RCU Tasks: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Mar 17 17:35:01.903489 kernel: RCU Tasks Trace: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Mar 17 17:35:01.903496 kernel: rcu: Hierarchical SRCU implementation.
Mar 17 17:35:01.903503 kernel: rcu: Max phase no-delay instances is 400.
Mar 17 17:35:01.903511 kernel: Platform MSI: ITS@0x8080000 domain created
Mar 17 17:35:01.903518 kernel: PCI/MSI: ITS@0x8080000 domain created
Mar 17 17:35:01.903525 kernel: Remapping and enabling EFI services.
Mar 17 17:35:01.903532 kernel: smp: Bringing up secondary CPUs ...
Mar 17 17:35:01.903539 kernel: Detected PIPT I-cache on CPU1
Mar 17 17:35:01.903546 kernel: GICv3: CPU1: found redistributor 1 region 0:0x00000000080c0000
Mar 17 17:35:01.903553 kernel: GICv3: CPU1: using allocated LPI pending table @0x0000000040110000
Mar 17 17:35:01.903560 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Mar 17 17:35:01.903567 kernel: CPU1: Booted secondary processor 0x0000000001 [0x413fd0c1]
Mar 17 17:35:01.903574 kernel: Detected PIPT I-cache on CPU2
Mar 17 17:35:01.903583 kernel: GICv3: CPU2: found redistributor 2 region 0:0x00000000080e0000
Mar 17 17:35:01.903590 kernel: GICv3: CPU2: using allocated LPI pending table @0x0000000040120000
Mar 17 17:35:01.903601 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Mar 17 17:35:01.903610 kernel: CPU2: Booted secondary processor 0x0000000002 [0x413fd0c1]
Mar 17 17:35:01.903617 kernel: Detected PIPT I-cache on CPU3
Mar 17 17:35:01.903624 kernel: GICv3: CPU3: found redistributor 3 region 0:0x0000000008100000
Mar 17 17:35:01.903631 kernel: GICv3: CPU3: using allocated LPI pending table @0x0000000040130000
Mar 17 17:35:01.903639 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Mar 17 17:35:01.903646 kernel: CPU3: Booted secondary processor 0x0000000003 [0x413fd0c1]
Mar 17 17:35:01.903655 kernel: smp: Brought up 1 node, 4 CPUs
Mar 17 17:35:01.903662 kernel: SMP: Total of 4 processors activated.
Mar 17 17:35:01.903670 kernel: CPU features: detected: 32-bit EL0 Support
Mar 17 17:35:01.903680 kernel: CPU features: detected: Data cache clean to the PoU not required for I/D coherence
Mar 17 17:35:01.903687 kernel: CPU features: detected: Common not Private translations
Mar 17 17:35:01.903695 kernel: CPU features: detected: CRC32 instructions
Mar 17 17:35:01.903702 kernel: CPU features: detected: Enhanced Virtualization Traps
Mar 17 17:35:01.903709 kernel: CPU features: detected: RCpc load-acquire (LDAPR)
Mar 17 17:35:01.903718 kernel: CPU features: detected: LSE atomic instructions
Mar 17 17:35:01.903726 kernel: CPU features: detected: Privileged Access Never
Mar 17 17:35:01.903733 kernel: CPU features: detected: RAS Extension Support
Mar 17 17:35:01.903741 kernel: CPU features: detected: Speculative Store Bypassing Safe (SSBS)
Mar 17 17:35:01.903748 kernel: CPU: All CPU(s) started at EL1
Mar 17 17:35:01.903755 kernel: alternatives: applying system-wide alternatives
Mar 17 17:35:01.903763 kernel: devtmpfs: initialized
Mar 17 17:35:01.903770 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Mar 17 17:35:01.903778 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear)
Mar 17 17:35:01.903786 kernel: pinctrl core: initialized pinctrl subsystem
Mar 17 17:35:01.903793 kernel: SMBIOS 3.0.0 present.
Mar 17 17:35:01.903801 kernel: DMI: QEMU KVM Virtual Machine, BIOS unknown 02/02/2022
Mar 17 17:35:01.903808 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Mar 17 17:35:01.903815 kernel: DMA: preallocated 512 KiB GFP_KERNEL pool for atomic allocations
Mar 17 17:35:01.903823 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations
Mar 17 17:35:01.903830 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations
Mar 17 17:35:01.903838 kernel: audit: initializing netlink subsys (disabled)
Mar 17 17:35:01.903845 kernel: audit: type=2000 audit(0.018:1): state=initialized audit_enabled=0 res=1
Mar 17 17:35:01.903854 kernel: thermal_sys: Registered thermal governor 'step_wise'
Mar 17 17:35:01.903861 kernel: cpuidle: using governor menu
Mar 17 17:35:01.903869 kernel: hw-breakpoint: found 6 breakpoint and 4 watchpoint registers.
Mar 17 17:35:01.903876 kernel: ASID allocator initialised with 32768 entries
Mar 17 17:35:01.903884 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Mar 17 17:35:01.903891 kernel: Serial: AMBA PL011 UART driver
Mar 17 17:35:01.903898 kernel: Modules: 2G module region forced by RANDOMIZE_MODULE_REGION_FULL
Mar 17 17:35:01.903906 kernel: Modules: 0 pages in range for non-PLT usage
Mar 17 17:35:01.903913 kernel: Modules: 508944 pages in range for PLT usage
Mar 17 17:35:01.903921 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Mar 17 17:35:01.903929 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 1.00 GiB page
Mar 17 17:35:01.903936 kernel: HugeTLB: registered 32.0 MiB page size, pre-allocated 0 pages
Mar 17 17:35:01.903944 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 32.0 MiB page
Mar 17 17:35:01.903951 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Mar 17 17:35:01.903958 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 2.00 MiB page
Mar 17 17:35:01.903966 kernel: HugeTLB: registered 64.0 KiB page size, pre-allocated 0 pages
Mar 17 17:35:01.903973 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 64.0 KiB page
Mar 17 17:35:01.903980 kernel: ACPI: Added _OSI(Module Device)
Mar 17 17:35:01.903989 kernel: ACPI: Added _OSI(Processor Device)
Mar 17 17:35:01.903996 kernel: ACPI: Added _OSI(3.0 _SCP Extensions)
Mar 17 17:35:01.904004 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Mar 17 17:35:01.904011 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Mar 17 17:35:01.904018 kernel: ACPI: Interpreter enabled
Mar 17 17:35:01.904025 kernel: ACPI: Using GIC for interrupt routing
Mar 17 17:35:01.904033 kernel: ACPI: MCFG table detected, 1 entries
Mar 17 17:35:01.904040 kernel: ARMH0011:00: ttyAMA0 at MMIO 0x9000000 (irq = 12, base_baud = 0) is a SBSA
Mar 17 17:35:01.904048 kernel: printk: console [ttyAMA0] enabled
Mar 17 17:35:01.904057 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Mar 17 17:35:01.904230 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Mar 17 17:35:01.904311 kernel: acpi PNP0A08:00: _OSC: platform does not support [LTR]
Mar 17 17:35:01.904382 kernel: acpi PNP0A08:00: _OSC: OS now controls [PCIeHotplug PME AER PCIeCapability]
Mar 17 17:35:01.904461 kernel: acpi PNP0A08:00: ECAM area [mem 0x4010000000-0x401fffffff] reserved by PNP0C02:00
Mar 17 17:35:01.904532 kernel: acpi PNP0A08:00: ECAM at [mem 0x4010000000-0x401fffffff] for [bus 00-ff]
Mar 17 17:35:01.904541 kernel: ACPI: Remapped I/O 0x000000003eff0000 to [io 0x0000-0xffff window]
Mar 17 17:35:01.904552 kernel: PCI host bridge to bus 0000:00
Mar 17 17:35:01.904626 kernel: pci_bus 0000:00: root bus resource [mem 0x10000000-0x3efeffff window]
Mar 17 17:35:01.904689 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0xffff window]
Mar 17 17:35:01.904749 kernel: pci_bus 0000:00: root bus resource [mem 0x8000000000-0xffffffffff window]
Mar 17 17:35:01.904809 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Mar 17 17:35:01.904894 kernel: pci 0000:00:00.0: [1b36:0008] type 00 class 0x060000
Mar 17 17:35:01.904973 kernel: pci 0000:00:01.0: [1af4:1005] type 00 class 0x00ff00
Mar 17 17:35:01.905046 kernel: pci 0000:00:01.0: reg 0x10: [io 0x0000-0x001f]
Mar 17 17:35:01.905115 kernel: pci 0000:00:01.0: reg 0x14: [mem 0x10000000-0x10000fff]
Mar 17 17:35:01.905197 kernel: pci 0000:00:01.0: reg 0x20: [mem 0x8000000000-0x8000003fff 64bit pref]
Mar 17 17:35:01.905271 kernel: pci 0000:00:01.0: BAR 4: assigned [mem 0x8000000000-0x8000003fff 64bit pref]
Mar 17 17:35:01.905341 kernel: pci 0000:00:01.0: BAR 1: assigned [mem 0x10000000-0x10000fff]
Mar 17 17:35:01.905413 kernel: pci 0000:00:01.0: BAR 0: assigned [io 0x1000-0x101f]
Mar 17 17:35:01.905486 kernel: pci_bus 0000:00: resource 4 [mem 0x10000000-0x3efeffff window]
Mar 17 17:35:01.905550 kernel: pci_bus 0000:00: resource 5 [io 0x0000-0xffff window]
Mar 17 17:35:01.905610 kernel: pci_bus 0000:00: resource 6 [mem 0x8000000000-0xffffffffff window]
Mar 17 17:35:01.905620 kernel: ACPI: PCI: Interrupt link GSI0 configured for IRQ 35
Mar 17 17:35:01.905628 kernel: ACPI: PCI: Interrupt link GSI1 configured for IRQ 36
Mar 17 17:35:01.905635 kernel: ACPI: PCI: Interrupt link GSI2 configured for IRQ 37
Mar 17 17:35:01.905642 kernel: ACPI: PCI: Interrupt link GSI3 configured for IRQ 38
Mar 17 17:35:01.905650 kernel: iommu: Default domain type: Translated
Mar 17 17:35:01.905657 kernel: iommu: DMA domain TLB invalidation policy: strict mode
Mar 17 17:35:01.905666 kernel: efivars: Registered efivars operations
Mar 17 17:35:01.905674 kernel: vgaarb: loaded
Mar 17 17:35:01.905681 kernel: clocksource: Switched to clocksource arch_sys_counter
Mar 17 17:35:01.905689 kernel: VFS: Disk quotas dquot_6.6.0
Mar 17 17:35:01.905696 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Mar 17 17:35:01.905704 kernel: pnp: PnP ACPI init
Mar 17 17:35:01.905781 kernel: system 00:00: [mem 0x4010000000-0x401fffffff window] could not be reserved
Mar 17 17:35:01.905792 kernel: pnp: PnP ACPI: found 1 devices
Mar 17 17:35:01.905801 kernel: NET: Registered PF_INET protocol family
Mar 17 17:35:01.905809 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Mar 17 17:35:01.905817 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Mar 17 17:35:01.905824 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Mar 17 17:35:01.905832 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Mar 17 17:35:01.905839 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
Mar 17 17:35:01.905847 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Mar 17 17:35:01.905854 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Mar 17 17:35:01.905862 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Mar 17 17:35:01.905871 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Mar 17 17:35:01.905878 kernel: PCI: CLS 0 bytes, default 64
Mar 17 17:35:01.905886 kernel: kvm [1]: HYP mode not available
Mar 17 17:35:01.905893 kernel: Initialise system trusted keyrings
Mar 17 17:35:01.905900 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Mar 17 17:35:01.905908 kernel: Key type asymmetric registered
Mar 17 17:35:01.905915 kernel: Asymmetric key parser 'x509' registered
Mar 17 17:35:01.905923 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250)
Mar 17 17:35:01.905930 kernel: io scheduler mq-deadline registered
Mar 17 17:35:01.905939 kernel: io scheduler kyber registered
Mar 17 17:35:01.905946 kernel: io scheduler bfq registered
Mar 17 17:35:01.905953 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0C:00/input/input0
Mar 17 17:35:01.905961 kernel: ACPI: button: Power Button [PWRB]
Mar 17 17:35:01.905969 kernel: ACPI: \_SB_.PCI0.GSI1: Enabled at IRQ 36
Mar 17 17:35:01.906042 kernel: virtio-pci 0000:00:01.0: enabling device (0005 -> 0007)
Mar 17 17:35:01.906052 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Mar 17 17:35:01.906060 kernel: thunder_xcv, ver 1.0
Mar 17 17:35:01.906067 kernel: thunder_bgx, ver 1.0
Mar 17 17:35:01.906076 kernel: nicpf, ver 1.0
Mar 17 17:35:01.906083 kernel: nicvf, ver 1.0
Mar 17 17:35:01.906156 kernel: rtc-efi rtc-efi.0: registered as rtc0
Mar 17 17:35:01.906260 kernel: rtc-efi rtc-efi.0: setting system clock to 2025-03-17T17:35:01 UTC (1742232901)
Mar 17 17:35:01.906271 kernel: hid: raw HID events driver (C) Jiri Kosina
Mar 17 17:35:01.906279 kernel: hw perfevents: enabled with armv8_pmuv3_0 PMU driver, 7 counters available
Mar 17 17:35:01.906286 kernel: watchdog: Delayed init of the lockup detector failed: -19
Mar 17 17:35:01.906294 kernel: watchdog: Hard watchdog permanently disabled
Mar 17 17:35:01.906304 kernel: NET: Registered PF_INET6 protocol family
Mar 17 17:35:01.906312 kernel: Segment Routing with IPv6
Mar 17 17:35:01.906319 kernel: In-situ OAM (IOAM) with IPv6
Mar 17 17:35:01.906326 kernel: NET: Registered PF_PACKET protocol family
Mar 17 17:35:01.906333 kernel: Key type dns_resolver registered
Mar 17 17:35:01.906342 kernel: registered taskstats version 1
Mar 17 17:35:01.906349 kernel: Loading compiled-in X.509 certificates
Mar 17 17:35:01.906356 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.83-flatcar: 74c9b4f5dfad711856d7363c976664fc02c1e24c'
Mar 17 17:35:01.906364 kernel: Key type .fscrypt registered
Mar 17 17:35:01.906372 kernel: Key type fscrypt-provisioning registered
Mar 17 17:35:01.906380 kernel: ima: No TPM chip found, activating TPM-bypass!
Mar 17 17:35:01.906387 kernel: ima: Allocated hash algorithm: sha1
Mar 17 17:35:01.906394 kernel: ima: No architecture policies found
Mar 17 17:35:01.906401 kernel: alg: No test for fips(ansi_cprng) (fips_ansi_cprng)
Mar 17 17:35:01.906408 kernel: clk: Disabling unused clocks
Mar 17 17:35:01.906415 kernel: Freeing unused kernel memory: 39744K
Mar 17 17:35:01.906430 kernel: Run /init as init process
Mar 17 17:35:01.906437 kernel: with arguments:
Mar 17 17:35:01.906446 kernel: /init
Mar 17 17:35:01.906453 kernel: with environment:
Mar 17 17:35:01.906461 kernel: HOME=/
Mar 17 17:35:01.906468 kernel: TERM=linux
Mar 17 17:35:01.906475 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
Mar 17 17:35:01.906484 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Mar 17 17:35:01.906493 systemd[1]: Detected virtualization kvm.
Mar 17 17:35:01.906501 systemd[1]: Detected architecture arm64.
Mar 17 17:35:01.906510 systemd[1]: Running in initrd.
Mar 17 17:35:01.906517 systemd[1]: No hostname configured, using default hostname.
Mar 17 17:35:01.906525 systemd[1]: Hostname set to .
Mar 17 17:35:01.906533 systemd[1]: Initializing machine ID from VM UUID.
Mar 17 17:35:01.906540 systemd[1]: Queued start job for default target initrd.target.
Mar 17 17:35:01.906548 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Mar 17 17:35:01.906556 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Mar 17 17:35:01.906564 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Mar 17 17:35:01.906574 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Mar 17 17:35:01.906582 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Mar 17 17:35:01.906590 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Mar 17 17:35:01.906599 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Mar 17 17:35:01.906607 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Mar 17 17:35:01.906615 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Mar 17 17:35:01.906622 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Mar 17 17:35:01.906631 systemd[1]: Reached target paths.target - Path Units.
Mar 17 17:35:01.906639 systemd[1]: Reached target slices.target - Slice Units.
Mar 17 17:35:01.906647 systemd[1]: Reached target swap.target - Swaps.
Mar 17 17:35:01.906655 systemd[1]: Reached target timers.target - Timer Units.
Mar 17 17:35:01.906662 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Mar 17 17:35:01.906670 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Mar 17 17:35:01.906678 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Mar 17 17:35:01.906686 systemd[1]: Listening on systemd-journald.socket - Journal Socket.
Mar 17 17:35:01.906695 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Mar 17 17:35:01.906703 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Mar 17 17:35:01.906710 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Mar 17 17:35:01.906718 systemd[1]: Reached target sockets.target - Socket Units.
Mar 17 17:35:01.906726 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Mar 17 17:35:01.906734 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Mar 17 17:35:01.906742 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Mar 17 17:35:01.906750 systemd[1]: Starting systemd-fsck-usr.service...
Mar 17 17:35:01.906758 systemd[1]: Starting systemd-journald.service - Journal Service...
Mar 17 17:35:01.906767 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Mar 17 17:35:01.906776 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Mar 17 17:35:01.906783 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Mar 17 17:35:01.906794 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Mar 17 17:35:01.906803 systemd[1]: Finished systemd-fsck-usr.service.
Mar 17 17:35:01.906811 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Mar 17 17:35:01.906837 systemd-journald[239]: Collecting audit messages is disabled.
Mar 17 17:35:01.906856 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Mar 17 17:35:01.906866 systemd-journald[239]: Journal started
Mar 17 17:35:01.906885 systemd-journald[239]: Runtime Journal (/run/log/journal/1151b1fe32884c008377f9f3a8ee5927) is 5.9M, max 47.3M, 41.4M free.
Mar 17 17:35:01.901474 systemd-modules-load[240]: Inserted module 'overlay'
Mar 17 17:35:01.908829 systemd[1]: Started systemd-journald.service - Journal Service.
Mar 17 17:35:01.911201 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Mar 17 17:35:01.919189 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Mar 17 17:35:01.920327 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Mar 17 17:35:01.922364 kernel: Bridge firewalling registered
Mar 17 17:35:01.920505 systemd-modules-load[240]: Inserted module 'br_netfilter'
Mar 17 17:35:01.922012 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Mar 17 17:35:01.924632 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Mar 17 17:35:01.925934 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Mar 17 17:35:01.929224 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Mar 17 17:35:01.933253 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Mar 17 17:35:01.936974 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Mar 17 17:35:01.945382 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Mar 17 17:35:01.946348 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Mar 17 17:35:01.947987 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Mar 17 17:35:01.951681 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Mar 17 17:35:01.955937 dracut-cmdline[275]: dracut-dracut-053
Mar 17 17:35:01.958613 dracut-cmdline[275]: Using kernel command line parameters: rd.driver.pre=btrfs BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected acpi=force verity.usrhash=31b104f73129b84fa679201ebe02fbfd197d071bbf0576d6ccc5c5442bcbb405
Mar 17 17:35:01.979813 systemd-resolved[283]: Positive Trust Anchors:
Mar 17 17:35:01.979887 systemd-resolved[283]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Mar 17 17:35:01.979917 systemd-resolved[283]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Mar 17 17:35:01.984548 systemd-resolved[283]: Defaulting to hostname 'linux'.
Mar 17 17:35:01.985510 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Mar 17 17:35:01.987268 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Mar 17 17:35:02.027208 kernel: SCSI subsystem initialized
Mar 17 17:35:02.032195 kernel: Loading iSCSI transport class v2.0-870.
Mar 17 17:35:02.040203 kernel: iscsi: registered transport (tcp)
Mar 17 17:35:02.053193 kernel: iscsi: registered transport (qla4xxx)
Mar 17 17:35:02.053219 kernel: QLogic iSCSI HBA Driver
Mar 17 17:35:02.100814 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Mar 17 17:35:02.108346 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Mar 17 17:35:02.123207 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Mar 17 17:35:02.123239 kernel: device-mapper: uevent: version 1.0.3
Mar 17 17:35:02.124202 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com
Mar 17 17:35:02.173208 kernel: raid6: neonx8 gen() 15779 MB/s
Mar 17 17:35:02.190193 kernel: raid6: neonx4 gen() 15628 MB/s
Mar 17 17:35:02.207209 kernel: raid6: neonx2 gen() 13227 MB/s
Mar 17 17:35:02.224212 kernel: raid6: neonx1 gen() 10492 MB/s
Mar 17 17:35:02.241212 kernel: raid6: int64x8 gen() 6959 MB/s
Mar 17 17:35:02.258204 kernel: raid6: int64x4 gen() 7346 MB/s
Mar 17 17:35:02.275196 kernel: raid6: int64x2 gen() 6128 MB/s
Mar 17 17:35:02.292211 kernel: raid6: int64x1 gen() 5053 MB/s
Mar 17 17:35:02.292250 kernel: raid6: using algorithm neonx8 gen() 15779 MB/s
Mar 17 17:35:02.309210 kernel: raid6: .... xor() 11915 MB/s, rmw enabled
Mar 17 17:35:02.309230 kernel: raid6: using neon recovery algorithm
Mar 17 17:35:02.314224 kernel: xor: measuring software checksum speed
Mar 17 17:35:02.314243 kernel: 8regs : 19773 MB/sec
Mar 17 17:35:02.315244 kernel: 32regs : 19655 MB/sec
Mar 17 17:35:02.315261 kernel: arm64_neon : 27043 MB/sec
Mar 17 17:35:02.315279 kernel: xor: using function: arm64_neon (27043 MB/sec)
Mar 17 17:35:02.365216 kernel: Btrfs loaded, zoned=no, fsverity=no
Mar 17 17:35:02.375888 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Mar 17 17:35:02.387380 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Mar 17 17:35:02.400222 systemd-udevd[465]: Using default interface naming scheme 'v255'.
Mar 17 17:35:02.403344 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Mar 17 17:35:02.419399 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Mar 17 17:35:02.430226 dracut-pre-trigger[472]: rd.md=0: removing MD RAID activation
Mar 17 17:35:02.454603 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Mar 17 17:35:02.467319 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Mar 17 17:35:02.505116 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Mar 17 17:35:02.514334 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Mar 17 17:35:02.524970 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Mar 17 17:35:02.527658 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Mar 17 17:35:02.528579 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Mar 17 17:35:02.530067 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Mar 17 17:35:02.536318 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Mar 17 17:35:02.546408 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Mar 17 17:35:02.549914 kernel: virtio_blk virtio1: 1/0/0 default/read/poll queues
Mar 17 17:35:02.557416 kernel: virtio_blk virtio1: [vda] 19775488 512-byte logical blocks (10.1 GB/9.43 GiB)
Mar 17 17:35:02.557535 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Mar 17 17:35:02.557547 kernel: GPT:9289727 != 19775487
Mar 17 17:35:02.557556 kernel: GPT:Alternate GPT header not at the end of the disk.
Mar 17 17:35:02.557572 kernel: GPT:9289727 != 19775487
Mar 17 17:35:02.557580 kernel: GPT: Use GNU Parted to correct GPT errors.
Mar 17 17:35:02.557589 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Mar 17 17:35:02.555270 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Mar 17 17:35:02.555378 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Mar 17 17:35:02.560467 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Mar 17 17:35:02.563107 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Mar 17 17:35:02.563323 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Mar 17 17:35:02.564899 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Mar 17 17:35:02.575512 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Mar 17 17:35:02.582230 kernel: BTRFS: device fsid c0c482e3-6885-4a4e-b31c-6bc8f8c403e7 devid 1 transid 40 /dev/vda3 scanned by (udev-worker) (515)
Mar 17 17:35:02.584203 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/vda6 scanned by (udev-worker) (512)
Mar 17 17:35:02.587210 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Mar 17 17:35:02.591669 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT.
Mar 17 17:35:02.596259 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM.
Mar 17 17:35:02.603064 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A.
Mar 17 17:35:02.603987 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132.
Mar 17 17:35:02.608908 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Mar 17 17:35:02.619316 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Mar 17 17:35:02.620831 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Mar 17 17:35:02.626894 disk-uuid[557]: Primary Header is updated.
Mar 17 17:35:02.626894 disk-uuid[557]: Secondary Entries is updated.
Mar 17 17:35:02.626894 disk-uuid[557]: Secondary Header is updated.
Mar 17 17:35:02.630199 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Mar 17 17:35:02.646666 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Mar 17 17:35:03.646193 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Mar 17 17:35:03.647284 disk-uuid[558]: The operation has completed successfully.
Mar 17 17:35:03.664536 systemd[1]: disk-uuid.service: Deactivated successfully.
Mar 17 17:35:03.664633 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Mar 17 17:35:03.688327 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
Mar 17 17:35:03.692020 sh[576]: Success
Mar 17 17:35:03.705194 kernel: device-mapper: verity: sha256 using implementation "sha256-ce"
Mar 17 17:35:03.732747 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
Mar 17 17:35:03.747552 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
Mar 17 17:35:03.750216 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
Mar 17 17:35:03.758691 kernel: BTRFS info (device dm-0): first mount of filesystem c0c482e3-6885-4a4e-b31c-6bc8f8c403e7
Mar 17 17:35:03.758725 kernel: BTRFS info (device dm-0): using crc32c (crc32c-generic) checksum algorithm
Mar 17 17:35:03.758736 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead
Mar 17 17:35:03.759451 kernel: BTRFS info (device dm-0): disabling log replay at mount time
Mar 17 17:35:03.760451 kernel: BTRFS info (device dm-0): using free space tree
Mar 17 17:35:03.763620 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
Mar 17 17:35:03.764726 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Mar 17 17:35:03.765450 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Mar 17 17:35:03.767985 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Mar 17 17:35:03.778728 kernel: BTRFS info (device vda6): first mount of filesystem 3dbd9b64-bd31-4292-be10-51551993b53f
Mar 17 17:35:03.778770 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm
Mar 17 17:35:03.779297 kernel: BTRFS info (device vda6): using free space tree
Mar 17 17:35:03.781190 kernel: BTRFS info (device vda6): auto enabling async discard
Mar 17 17:35:03.788468 systemd[1]: mnt-oem.mount: Deactivated successfully.
Mar 17 17:35:03.789837 kernel: BTRFS info (device vda6): last unmount of filesystem 3dbd9b64-bd31-4292-be10-51551993b53f
Mar 17 17:35:03.794747 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Mar 17 17:35:03.803568 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Mar 17 17:35:03.862692 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Mar 17 17:35:03.871373 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Mar 17 17:35:03.909206 systemd-networkd[768]: lo: Link UP
Mar 17 17:35:03.909218 systemd-networkd[768]: lo: Gained carrier
Mar 17 17:35:03.909994 systemd-networkd[768]: Enumeration completed
Mar 17 17:35:03.910508 systemd-networkd[768]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Mar 17 17:35:03.910511 systemd-networkd[768]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Mar 17 17:35:03.911881 ignition[671]: Ignition 2.20.0
Mar 17 17:35:03.911328 systemd-networkd[768]: eth0: Link UP
Mar 17 17:35:03.911888 ignition[671]: Stage: fetch-offline
Mar 17 17:35:03.911331 systemd-networkd[768]: eth0: Gained carrier
Mar 17 17:35:03.911920 ignition[671]: no configs at "/usr/lib/ignition/base.d"
Mar 17 17:35:03.911338 systemd-networkd[768]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Mar 17 17:35:03.911928 ignition[671]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Mar 17 17:35:03.913008 systemd[1]: Started systemd-networkd.service - Network Configuration.
Mar 17 17:35:03.912074 ignition[671]: parsed url from cmdline: ""
Mar 17 17:35:03.914214 systemd[1]: Reached target network.target - Network.
Mar 17 17:35:03.912077 ignition[671]: no config URL provided
Mar 17 17:35:03.912082 ignition[671]: reading system config file "/usr/lib/ignition/user.ign"
Mar 17 17:35:03.912088 ignition[671]: no config at "/usr/lib/ignition/user.ign"
Mar 17 17:35:03.912113 ignition[671]: op(1): [started] loading QEMU firmware config module
Mar 17 17:35:03.912118 ignition[671]: op(1): executing: "modprobe" "qemu_fw_cfg"
Mar 17 17:35:03.923246 ignition[671]: op(1): [finished] loading QEMU firmware config module
Mar 17 17:35:03.923266 ignition[671]: QEMU firmware config was not found. Ignoring...
Mar 17 17:35:03.940219 systemd-networkd[768]: eth0: DHCPv4 address 10.0.0.106/16, gateway 10.0.0.1 acquired from 10.0.0.1
Mar 17 17:35:03.948891 ignition[671]: parsing config with SHA512: e595913f3135fba4ead1fbebbdcb34fe9cd3e90953c6d15b38b065700371c730f615c741bc610b54dae271ee5959f7c4e792b56a0cde1c98469694578c11cea9
Mar 17 17:35:03.953258 unknown[671]: fetched base config from "system"
Mar 17 17:35:03.953268 unknown[671]: fetched user config from "qemu"
Mar 17 17:35:03.953655 ignition[671]: fetch-offline: fetch-offline passed
Mar 17 17:35:03.953724 ignition[671]: Ignition finished successfully
Mar 17 17:35:03.955359 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Mar 17 17:35:03.957128 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json).
Mar 17 17:35:03.970340 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Mar 17 17:35:03.980212 ignition[774]: Ignition 2.20.0
Mar 17 17:35:03.980224 ignition[774]: Stage: kargs
Mar 17 17:35:03.980381 ignition[774]: no configs at "/usr/lib/ignition/base.d"
Mar 17 17:35:03.980390 ignition[774]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Mar 17 17:35:03.981227 ignition[774]: kargs: kargs passed
Mar 17 17:35:03.981274 ignition[774]: Ignition finished successfully
Mar 17 17:35:03.984244 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Mar 17 17:35:03.991341 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Mar 17 17:35:03.994034 systemd-resolved[283]: Detected conflict on linux IN A 10.0.0.106
Mar 17 17:35:03.994048 systemd-resolved[283]: Hostname conflict, changing published hostname from 'linux' to 'linux4'.
Mar 17 17:35:04.001522 ignition[783]: Ignition 2.20.0
Mar 17 17:35:04.001533 ignition[783]: Stage: disks
Mar 17 17:35:04.001696 ignition[783]: no configs at "/usr/lib/ignition/base.d"
Mar 17 17:35:04.001707 ignition[783]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Mar 17 17:35:04.003928 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Mar 17 17:35:04.002583 ignition[783]: disks: disks passed
Mar 17 17:35:04.005337 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Mar 17 17:35:04.002629 ignition[783]: Ignition finished successfully
Mar 17 17:35:04.006873 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Mar 17 17:35:04.008344 systemd[1]: Reached target local-fs.target - Local File Systems.
Mar 17 17:35:04.009964 systemd[1]: Reached target sysinit.target - System Initialization.
Mar 17 17:35:04.011369 systemd[1]: Reached target basic.target - Basic System.
Mar 17 17:35:04.017299 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Mar 17 17:35:04.027019 systemd-fsck[793]: ROOT: clean, 14/553520 files, 52654/553472 blocks
Mar 17 17:35:04.030571 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Mar 17 17:35:04.032696 systemd[1]: Mounting sysroot.mount - /sysroot...
Mar 17 17:35:04.078313 kernel: EXT4-fs (vda9): mounted filesystem 6b579bf2-7716-4d59-98eb-b92ea668693e r/w with ordered data mode. Quota mode: none.
Mar 17 17:35:04.078733 systemd[1]: Mounted sysroot.mount - /sysroot.
Mar 17 17:35:04.079992 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Mar 17 17:35:04.088256 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Mar 17 17:35:04.090302 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Mar 17 17:35:04.091356 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met.
Mar 17 17:35:04.091396 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Mar 17 17:35:04.091418 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Mar 17 17:35:04.097647 kernel: BTRFS: device label OEM devid 1 transid 15 /dev/vda6 scanned by mount (801)
Mar 17 17:35:04.098251 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Mar 17 17:35:04.101042 kernel: BTRFS info (device vda6): first mount of filesystem 3dbd9b64-bd31-4292-be10-51551993b53f
Mar 17 17:35:04.101061 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm
Mar 17 17:35:04.101070 kernel: BTRFS info (device vda6): using free space tree
Mar 17 17:35:04.101087 kernel: BTRFS info (device vda6): auto enabling async discard
Mar 17 17:35:04.102765 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Mar 17 17:35:04.112346 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Mar 17 17:35:04.151037 initrd-setup-root[825]: cut: /sysroot/etc/passwd: No such file or directory
Mar 17 17:35:04.154793 initrd-setup-root[832]: cut: /sysroot/etc/group: No such file or directory
Mar 17 17:35:04.158590 initrd-setup-root[839]: cut: /sysroot/etc/shadow: No such file or directory
Mar 17 17:35:04.161951 initrd-setup-root[846]: cut: /sysroot/etc/gshadow: No such file or directory
Mar 17 17:35:04.234315 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Mar 17 17:35:04.247369 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Mar 17 17:35:04.249751 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Mar 17 17:35:04.254204 kernel: BTRFS info (device vda6): last unmount of filesystem 3dbd9b64-bd31-4292-be10-51551993b53f
Mar 17 17:35:04.268240 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Mar 17 17:35:04.271739 ignition[913]: INFO : Ignition 2.20.0
Mar 17 17:35:04.271739 ignition[913]: INFO : Stage: mount
Mar 17 17:35:04.273020 ignition[913]: INFO : no configs at "/usr/lib/ignition/base.d"
Mar 17 17:35:04.273020 ignition[913]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Mar 17 17:35:04.273020 ignition[913]: INFO : mount: mount passed
Mar 17 17:35:04.273020 ignition[913]: INFO : Ignition finished successfully
Mar 17 17:35:04.274399 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Mar 17 17:35:04.284355 systemd[1]: Starting ignition-files.service - Ignition (files)...
Mar 17 17:35:04.758172 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Mar 17 17:35:04.767419 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Mar 17 17:35:04.773552 kernel: BTRFS: device label OEM devid 1 transid 16 /dev/vda6 scanned by mount (928)
Mar 17 17:35:04.773579 kernel: BTRFS info (device vda6): first mount of filesystem 3dbd9b64-bd31-4292-be10-51551993b53f
Mar 17 17:35:04.773590 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm
Mar 17 17:35:04.774294 kernel: BTRFS info (device vda6): using free space tree
Mar 17 17:35:04.777203 kernel: BTRFS info (device vda6): auto enabling async discard
Mar 17 17:35:04.777812 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Mar 17 17:35:04.793668 ignition[945]: INFO : Ignition 2.20.0
Mar 17 17:35:04.793668 ignition[945]: INFO : Stage: files
Mar 17 17:35:04.795080 ignition[945]: INFO : no configs at "/usr/lib/ignition/base.d"
Mar 17 17:35:04.795080 ignition[945]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Mar 17 17:35:04.795080 ignition[945]: DEBUG : files: compiled without relabeling support, skipping
Mar 17 17:35:04.797838 ignition[945]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Mar 17 17:35:04.797838 ignition[945]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Mar 17 17:35:04.797838 ignition[945]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Mar 17 17:35:04.797838 ignition[945]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Mar 17 17:35:04.797838 ignition[945]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Mar 17 17:35:04.797788 unknown[945]: wrote ssh authorized keys file for user: core
Mar 17 17:35:04.803612 ignition[945]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-arm64.tar.gz"
Mar 17 17:35:04.803612 ignition[945]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.13.2-linux-arm64.tar.gz: attempt #1
Mar 17 17:35:04.839463 ignition[945]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
Mar 17 17:35:05.007393 ignition[945]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-arm64.tar.gz"
Mar 17 17:35:05.008895 ignition[945]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh"
Mar 17 17:35:05.008895 ignition[945]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh"
Mar 17 17:35:05.008895 ignition[945]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml"
Mar 17 17:35:05.008895 ignition[945]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml"
Mar 17 17:35:05.008895 ignition[945]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Mar 17 17:35:05.008895 ignition[945]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Mar 17 17:35:05.008895 ignition[945]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Mar 17 17:35:05.008895 ignition[945]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Mar 17 17:35:05.008895 ignition[945]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf"
Mar 17 17:35:05.008895 ignition[945]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Mar 17 17:35:05.008895 ignition[945]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.0-arm64.raw"
Mar 17 17:35:05.008895 ignition[945]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.0-arm64.raw"
Mar 17 17:35:05.008895 ignition[945]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.0-arm64.raw"
Mar 17 17:35:05.008895 ignition[945]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.31.0-arm64.raw: attempt #1
Mar 17 17:35:05.186434 systemd-networkd[768]: eth0: Gained IPv6LL
Mar 17 17:35:05.244431 ignition[945]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK
Mar 17 17:35:05.476303 ignition[945]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.0-arm64.raw"
Mar 17 17:35:05.476303 ignition[945]: INFO : files: op(b): [started] processing unit "prepare-helm.service"
Mar 17 17:35:05.478942 ignition[945]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Mar 17 17:35:05.478942 ignition[945]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Mar 17 17:35:05.478942 ignition[945]: INFO : files: op(b): [finished] processing unit "prepare-helm.service"
Mar 17 17:35:05.478942 ignition[945]: INFO : files: op(d): [started] processing unit "coreos-metadata.service"
Mar 17 17:35:05.478942 ignition[945]: INFO : files: op(d): op(e): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Mar 17 17:35:05.478942 ignition[945]: INFO : files: op(d): op(e): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Mar 17 17:35:05.478942 ignition[945]: INFO : files: op(d): [finished] processing unit "coreos-metadata.service"
Mar 17 17:35:05.478942 ignition[945]: INFO : files: op(f): [started] setting preset to disabled for "coreos-metadata.service"
Mar 17 17:35:05.501457 ignition[945]: INFO : files: op(f): op(10): [started] removing enablement symlink(s) for "coreos-metadata.service"
Mar 17 17:35:05.505157 ignition[945]: INFO : files: op(f): op(10): [finished] removing enablement symlink(s) for "coreos-metadata.service"
Mar 17 17:35:05.507305 ignition[945]: INFO : files: op(f): [finished] setting preset to disabled for "coreos-metadata.service"
Mar 17 17:35:05.507305 ignition[945]: INFO : files: op(11): [started] setting preset to enabled for "prepare-helm.service"
Mar 17 17:35:05.507305 ignition[945]: INFO : files: op(11): [finished] setting preset to enabled for "prepare-helm.service"
Mar 17 17:35:05.507305 ignition[945]: INFO : files: createResultFile: createFiles: op(12): [started] writing file "/sysroot/etc/.ignition-result.json"
Mar 17 17:35:05.507305 ignition[945]: INFO : files: createResultFile: createFiles: op(12): [finished] writing file "/sysroot/etc/.ignition-result.json"
Mar 17 17:35:05.507305 ignition[945]: INFO : files: files passed
Mar 17 17:35:05.507305 ignition[945]: INFO : Ignition finished successfully
Mar 17 17:35:05.507778 systemd[1]: Finished ignition-files.service - Ignition (files).
Mar 17 17:35:05.516377 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Mar 17 17:35:05.517836 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Mar 17 17:35:05.521122 systemd[1]: ignition-quench.service: Deactivated successfully.
Mar 17 17:35:05.521959 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Mar 17 17:35:05.525443 initrd-setup-root-after-ignition[974]: grep: /sysroot/oem/oem-release: No such file or directory
Mar 17 17:35:05.527404 initrd-setup-root-after-ignition[976]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Mar 17 17:35:05.527404 initrd-setup-root-after-ignition[976]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Mar 17 17:35:05.529822 initrd-setup-root-after-ignition[980]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Mar 17 17:35:05.529680 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Mar 17 17:35:05.532107 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Mar 17 17:35:05.542340 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Mar 17 17:35:05.561884 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Mar 17 17:35:05.561999 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Mar 17 17:35:05.563781 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Mar 17 17:35:05.565150 systemd[1]: Reached target initrd.target - Initrd Default Target.
Mar 17 17:35:05.566590 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Mar 17 17:35:05.567438 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Mar 17 17:35:05.583503 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Mar 17 17:35:05.595340 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Mar 17 17:35:05.603290 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Mar 17 17:35:05.604241 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Mar 17 17:35:05.605884 systemd[1]: Stopped target timers.target - Timer Units.
Mar 17 17:35:05.607298 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Mar 17 17:35:05.607417 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Mar 17 17:35:05.609315 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Mar 17 17:35:05.610825 systemd[1]: Stopped target basic.target - Basic System.
Mar 17 17:35:05.612036 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Mar 17 17:35:05.613362 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Mar 17 17:35:05.614835 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Mar 17 17:35:05.616301 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Mar 17 17:35:05.617698 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
Mar 17 17:35:05.619140 systemd[1]: Stopped target sysinit.target - System Initialization.
Mar 17 17:35:05.620754 systemd[1]: Stopped target local-fs.target - Local File Systems.
Mar 17 17:35:05.622050 systemd[1]: Stopped target swap.target - Swaps.
Mar 17 17:35:05.623215 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Mar 17 17:35:05.623343 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
Mar 17 17:35:05.625069 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
Mar 17 17:35:05.626507 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Mar 17 17:35:05.627965 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Mar 17 17:35:05.631262 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Mar 17 17:35:05.632227 systemd[1]: dracut-initqueue.service: Deactivated successfully. Mar 17 17:35:05.632341 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Mar 17 17:35:05.634582 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Mar 17 17:35:05.634701 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Mar 17 17:35:05.636251 systemd[1]: Stopped target paths.target - Path Units. Mar 17 17:35:05.637505 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Mar 17 17:35:05.641264 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Mar 17 17:35:05.643202 systemd[1]: Stopped target slices.target - Slice Units. Mar 17 17:35:05.643915 systemd[1]: Stopped target sockets.target - Socket Units. Mar 17 17:35:05.645261 systemd[1]: iscsid.socket: Deactivated successfully. Mar 17 17:35:05.645394 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Mar 17 17:35:05.646519 systemd[1]: iscsiuio.socket: Deactivated successfully. Mar 17 17:35:05.646652 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Mar 17 17:35:05.647755 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Mar 17 17:35:05.647909 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Mar 17 17:35:05.649217 systemd[1]: ignition-files.service: Deactivated successfully. Mar 17 17:35:05.649361 systemd[1]: Stopped ignition-files.service - Ignition (files). Mar 17 17:35:05.660442 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Mar 17 17:35:05.662589 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Mar 17 17:35:05.663245 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Mar 17 17:35:05.663419 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Mar 17 17:35:05.664892 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Mar 17 17:35:05.665044 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Mar 17 17:35:05.671458 systemd[1]: initrd-cleanup.service: Deactivated successfully. Mar 17 17:35:05.671556 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Mar 17 17:35:05.673713 ignition[1000]: INFO : Ignition 2.20.0 Mar 17 17:35:05.673713 ignition[1000]: INFO : Stage: umount Mar 17 17:35:05.673713 ignition[1000]: INFO : no configs at "/usr/lib/ignition/base.d" Mar 17 17:35:05.673713 ignition[1000]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Mar 17 17:35:05.680735 ignition[1000]: INFO : umount: umount passed Mar 17 17:35:05.680735 ignition[1000]: INFO : Ignition finished successfully Mar 17 17:35:05.675605 systemd[1]: ignition-mount.service: Deactivated successfully. Mar 17 17:35:05.675689 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Mar 17 17:35:05.677753 systemd[1]: sysroot-boot.mount: Deactivated successfully. Mar 17 17:35:05.678559 systemd[1]: Stopped target network.target - Network. Mar 17 17:35:05.679605 systemd[1]: ignition-disks.service: Deactivated successfully. 
Mar 17 17:35:05.679676 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Mar 17 17:35:05.681525 systemd[1]: ignition-kargs.service: Deactivated successfully. Mar 17 17:35:05.681579 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Mar 17 17:35:05.682905 systemd[1]: ignition-setup.service: Deactivated successfully. Mar 17 17:35:05.682944 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Mar 17 17:35:05.684503 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Mar 17 17:35:05.684544 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Mar 17 17:35:05.686322 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Mar 17 17:35:05.689420 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Mar 17 17:35:05.694227 systemd-networkd[768]: eth0: DHCPv6 lease lost Mar 17 17:35:05.696041 systemd[1]: systemd-networkd.service: Deactivated successfully. Mar 17 17:35:05.696910 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Mar 17 17:35:05.698898 systemd[1]: systemd-resolved.service: Deactivated successfully. Mar 17 17:35:05.698998 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Mar 17 17:35:05.701734 systemd[1]: systemd-networkd.socket: Deactivated successfully. Mar 17 17:35:05.701774 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Mar 17 17:35:05.717303 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Mar 17 17:35:05.718221 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Mar 17 17:35:05.718295 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Mar 17 17:35:05.720229 systemd[1]: systemd-sysctl.service: Deactivated successfully. Mar 17 17:35:05.720274 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Mar 17 17:35:05.721889 systemd[1]: systemd-modules-load.service: Deactivated successfully. Mar 17 17:35:05.721938 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Mar 17 17:35:05.724168 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Mar 17 17:35:05.724229 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Mar 17 17:35:05.726075 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Mar 17 17:35:05.736119 systemd[1]: network-cleanup.service: Deactivated successfully. Mar 17 17:35:05.736911 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Mar 17 17:35:05.738485 systemd[1]: sysroot-boot.service: Deactivated successfully. Mar 17 17:35:05.738598 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Mar 17 17:35:05.740811 systemd[1]: systemd-udevd.service: Deactivated successfully. Mar 17 17:35:05.740936 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Mar 17 17:35:05.742610 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Mar 17 17:35:05.742652 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Mar 17 17:35:05.744043 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Mar 17 17:35:05.744073 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Mar 17 17:35:05.745450 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Mar 17 17:35:05.745493 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. 
Mar 17 17:35:05.747452 systemd[1]: dracut-cmdline.service: Deactivated successfully. Mar 17 17:35:05.747494 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Mar 17 17:35:05.749498 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Mar 17 17:35:05.749539 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Mar 17 17:35:05.751733 systemd[1]: initrd-setup-root.service: Deactivated successfully. Mar 17 17:35:05.751779 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Mar 17 17:35:05.770349 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Mar 17 17:35:05.771195 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Mar 17 17:35:05.771258 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Mar 17 17:35:05.772889 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully. Mar 17 17:35:05.772930 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Mar 17 17:35:05.774403 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Mar 17 17:35:05.774438 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Mar 17 17:35:05.776150 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Mar 17 17:35:05.776209 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Mar 17 17:35:05.778022 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Mar 17 17:35:05.779204 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Mar 17 17:35:05.780775 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Mar 17 17:35:05.782750 systemd[1]: Starting initrd-switch-root.service - Switch Root... Mar 17 17:35:05.792442 systemd[1]: Switching root. Mar 17 17:35:05.823210 systemd-journald[239]: Journal stopped Mar 17 17:35:06.534898 systemd-journald[239]: Received SIGTERM from PID 1 (systemd). Mar 17 17:35:06.534950 kernel: SELinux: policy capability network_peer_controls=1 Mar 17 17:35:06.534962 kernel: SELinux: policy capability open_perms=1 Mar 17 17:35:06.534972 kernel: SELinux: policy capability extended_socket_class=1 Mar 17 17:35:06.534981 kernel: SELinux: policy capability always_check_network=0 Mar 17 17:35:06.534991 kernel: SELinux: policy capability cgroup_seclabel=1 Mar 17 17:35:06.535006 kernel: SELinux: policy capability nnp_nosuid_transition=1 Mar 17 17:35:06.535019 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Mar 17 17:35:06.535029 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Mar 17 17:35:06.535038 kernel: audit: type=1403 audit(1742232905.965:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 Mar 17 17:35:06.535049 systemd[1]: Successfully loaded SELinux policy in 32.705ms. Mar 17 17:35:06.535069 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 9.711ms. Mar 17 17:35:06.535081 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Mar 17 17:35:06.535092 systemd[1]: Detected virtualization kvm. Mar 17 17:35:06.535102 systemd[1]: Detected architecture arm64. 
Mar 17 17:35:06.535114 systemd[1]: Detected first boot. Mar 17 17:35:06.535124 systemd[1]: Initializing machine ID from VM UUID. Mar 17 17:35:06.535137 zram_generator::config[1045]: No configuration found. Mar 17 17:35:06.535148 systemd[1]: Populated /etc with preset unit settings. Mar 17 17:35:06.535158 systemd[1]: initrd-switch-root.service: Deactivated successfully. Mar 17 17:35:06.535169 systemd[1]: Stopped initrd-switch-root.service - Switch Root. Mar 17 17:35:06.535192 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Mar 17 17:35:06.535203 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Mar 17 17:35:06.535216 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Mar 17 17:35:06.535226 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Mar 17 17:35:06.535237 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Mar 17 17:35:06.535248 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Mar 17 17:35:06.535258 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Mar 17 17:35:06.535268 kernel: hrtimer: interrupt took 6699040 ns Mar 17 17:35:06.535278 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Mar 17 17:35:06.535289 systemd[1]: Created slice user.slice - User and Session Slice. Mar 17 17:35:06.535299 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Mar 17 17:35:06.535312 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Mar 17 17:35:06.535323 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Mar 17 17:35:06.535336 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Mar 17 17:35:06.535347 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. Mar 17 17:35:06.535358 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Mar 17 17:35:06.535369 systemd[1]: Expecting device dev-ttyAMA0.device - /dev/ttyAMA0... Mar 17 17:35:06.535381 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Mar 17 17:35:06.535392 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. Mar 17 17:35:06.535402 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. Mar 17 17:35:06.535415 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. Mar 17 17:35:06.535426 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Mar 17 17:35:06.535436 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Mar 17 17:35:06.535447 systemd[1]: Reached target remote-fs.target - Remote File Systems. Mar 17 17:35:06.535459 systemd[1]: Reached target slices.target - Slice Units. Mar 17 17:35:06.535469 systemd[1]: Reached target swap.target - Swaps. Mar 17 17:35:06.535480 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Mar 17 17:35:06.535491 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Mar 17 17:35:06.535502 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Mar 17 17:35:06.535513 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. 
Mar 17 17:35:06.535524 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Mar 17 17:35:06.535534 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Mar 17 17:35:06.535544 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Mar 17 17:35:06.535555 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Mar 17 17:35:06.535565 systemd[1]: Mounting media.mount - External Media Directory... Mar 17 17:35:06.535576 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Mar 17 17:35:06.535593 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Mar 17 17:35:06.535606 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... Mar 17 17:35:06.535618 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Mar 17 17:35:06.535629 systemd[1]: Reached target machines.target - Containers. Mar 17 17:35:06.535639 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Mar 17 17:35:06.535650 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Mar 17 17:35:06.535660 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Mar 17 17:35:06.535671 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Mar 17 17:35:06.535681 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Mar 17 17:35:06.535693 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Mar 17 17:35:06.535703 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Mar 17 17:35:06.535713 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Mar 17 17:35:06.535723 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Mar 17 17:35:06.535734 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Mar 17 17:35:06.535744 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Mar 17 17:35:06.535754 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. Mar 17 17:35:06.535764 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Mar 17 17:35:06.535776 systemd[1]: Stopped systemd-fsck-usr.service. Mar 17 17:35:06.535786 systemd[1]: Starting systemd-journald.service - Journal Service... Mar 17 17:35:06.535796 kernel: loop: module loaded Mar 17 17:35:06.535805 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Mar 17 17:35:06.535815 kernel: fuse: init (API version 7.39) Mar 17 17:35:06.535882 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Mar 17 17:35:06.535899 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Mar 17 17:35:06.535910 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Mar 17 17:35:06.535920 systemd[1]: verity-setup.service: Deactivated successfully. Mar 17 17:35:06.535930 systemd[1]: Stopped verity-setup.service. Mar 17 17:35:06.535943 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. 
Mar 17 17:35:06.535954 kernel: ACPI: bus type drm_connector registered Mar 17 17:35:06.535963 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Mar 17 17:35:06.535999 systemd-journald[1109]: Collecting audit messages is disabled. Mar 17 17:35:06.536021 systemd[1]: Mounted media.mount - External Media Directory. Mar 17 17:35:06.536032 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Mar 17 17:35:06.536042 systemd-journald[1109]: Journal started Mar 17 17:35:06.536065 systemd-journald[1109]: Runtime Journal (/run/log/journal/1151b1fe32884c008377f9f3a8ee5927) is 5.9M, max 47.3M, 41.4M free. Mar 17 17:35:06.319513 systemd[1]: Queued start job for default target multi-user.target. Mar 17 17:35:06.337140 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6. Mar 17 17:35:06.337474 systemd[1]: systemd-journald.service: Deactivated successfully. Mar 17 17:35:06.537863 systemd[1]: Started systemd-journald.service - Journal Service. Mar 17 17:35:06.538465 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Mar 17 17:35:06.539506 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Mar 17 17:35:06.540453 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Mar 17 17:35:06.541756 systemd[1]: modprobe@configfs.service: Deactivated successfully. Mar 17 17:35:06.541895 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Mar 17 17:35:06.543096 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Mar 17 17:35:06.544275 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Mar 17 17:35:06.544405 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Mar 17 17:35:06.545687 systemd[1]: modprobe@drm.service: Deactivated successfully. Mar 17 17:35:06.545826 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Mar 17 17:35:06.546856 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Mar 17 17:35:06.546983 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Mar 17 17:35:06.548217 systemd[1]: modprobe@fuse.service: Deactivated successfully. Mar 17 17:35:06.548355 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Mar 17 17:35:06.549409 systemd[1]: modprobe@loop.service: Deactivated successfully. Mar 17 17:35:06.549546 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Mar 17 17:35:06.550616 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Mar 17 17:35:06.551723 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Mar 17 17:35:06.552896 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Mar 17 17:35:06.565736 systemd[1]: Reached target network-pre.target - Preparation for Network. Mar 17 17:35:06.580301 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Mar 17 17:35:06.582204 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Mar 17 17:35:06.583056 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Mar 17 17:35:06.583094 systemd[1]: Reached target local-fs.target - Local File Systems. Mar 17 17:35:06.584964 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink). 
Mar 17 17:35:06.586892 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Mar 17 17:35:06.588750 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Mar 17 17:35:06.589664 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Mar 17 17:35:06.591132 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Mar 17 17:35:06.592875 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Mar 17 17:35:06.593766 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Mar 17 17:35:06.597352 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Mar 17 17:35:06.598345 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Mar 17 17:35:06.600887 systemd-journald[1109]: Time spent on flushing to /var/log/journal/1151b1fe32884c008377f9f3a8ee5927 is 14.240ms for 858 entries. Mar 17 17:35:06.600887 systemd-journald[1109]: System Journal (/var/log/journal/1151b1fe32884c008377f9f3a8ee5927) is 8.0M, max 195.6M, 187.6M free. Mar 17 17:35:06.620094 systemd-journald[1109]: Received client request to flush runtime journal. Mar 17 17:35:06.601411 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Mar 17 17:35:06.604404 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Mar 17 17:35:06.607343 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Mar 17 17:35:06.609804 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Mar 17 17:35:06.611465 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Mar 17 17:35:06.612591 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Mar 17 17:35:06.624614 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Mar 17 17:35:06.630045 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Mar 17 17:35:06.631661 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Mar 17 17:35:06.637212 kernel: loop0: detected capacity change from 0 to 116808 Mar 17 17:35:06.638363 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Mar 17 17:35:06.641932 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Mar 17 17:35:06.648500 systemd-tmpfiles[1157]: ACLs are not supported, ignoring. Mar 17 17:35:06.660296 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Mar 17 17:35:06.648517 systemd-tmpfiles[1157]: ACLs are not supported, ignoring. Mar 17 17:35:06.654504 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk... Mar 17 17:35:06.659380 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization... Mar 17 17:35:06.660563 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Mar 17 17:35:06.667823 systemd[1]: Starting systemd-sysusers.service - Create System Users... Mar 17 17:35:06.672885 udevadm[1172]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation.service, lvm2-activation-early.service not to pull it in. 
Mar 17 17:35:06.681092 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Mar 17 17:35:06.681699 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk. Mar 17 17:35:06.693238 kernel: loop1: detected capacity change from 0 to 189592 Mar 17 17:35:06.698220 systemd[1]: Finished systemd-sysusers.service - Create System Users. Mar 17 17:35:06.705352 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Mar 17 17:35:06.718311 kernel: loop2: detected capacity change from 0 to 113536 Mar 17 17:35:06.719220 systemd-tmpfiles[1180]: ACLs are not supported, ignoring. Mar 17 17:35:06.719236 systemd-tmpfiles[1180]: ACLs are not supported, ignoring. Mar 17 17:35:06.723903 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Mar 17 17:35:06.751197 kernel: loop3: detected capacity change from 0 to 116808 Mar 17 17:35:06.757202 kernel: loop4: detected capacity change from 0 to 189592 Mar 17 17:35:06.766203 kernel: loop5: detected capacity change from 0 to 113536 Mar 17 17:35:06.769536 (sd-merge)[1185]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes'. Mar 17 17:35:06.769948 (sd-merge)[1185]: Merged extensions into '/usr'. Mar 17 17:35:06.776478 systemd[1]: Reloading requested from client PID 1156 ('systemd-sysext') (unit systemd-sysext.service)... Mar 17 17:35:06.776495 systemd[1]: Reloading... Mar 17 17:35:06.828198 zram_generator::config[1214]: No configuration found. Mar 17 17:35:06.857263 ldconfig[1151]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Mar 17 17:35:06.914002 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Mar 17 17:35:06.949224 systemd[1]: Reloading finished in 172 ms. Mar 17 17:35:06.979763 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Mar 17 17:35:06.980982 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Mar 17 17:35:06.995343 systemd[1]: Starting ensure-sysext.service... Mar 17 17:35:06.997002 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Mar 17 17:35:07.009904 systemd[1]: Reloading requested from client PID 1246 ('systemctl') (unit ensure-sysext.service)... Mar 17 17:35:07.009924 systemd[1]: Reloading... Mar 17 17:35:07.017922 systemd-tmpfiles[1247]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Mar 17 17:35:07.018190 systemd-tmpfiles[1247]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Mar 17 17:35:07.018822 systemd-tmpfiles[1247]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Mar 17 17:35:07.019030 systemd-tmpfiles[1247]: ACLs are not supported, ignoring. Mar 17 17:35:07.019083 systemd-tmpfiles[1247]: ACLs are not supported, ignoring. Mar 17 17:35:07.021032 systemd-tmpfiles[1247]: Detected autofs mount point /boot during canonicalization of boot. Mar 17 17:35:07.021044 systemd-tmpfiles[1247]: Skipping /boot Mar 17 17:35:07.027694 systemd-tmpfiles[1247]: Detected autofs mount point /boot during canonicalization of boot. Mar 17 17:35:07.027709 systemd-tmpfiles[1247]: Skipping /boot Mar 17 17:35:07.057217 zram_generator::config[1274]: No configuration found. 
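The "(sd-merge)" messages above show systemd-sysext merging the containerd-flatcar, docker-flatcar and kubernetes images into /usr. Before merging, sysext checks that each image is applicable to the host by comparing the extension-release file shipped inside the image with the host's os-release. The sketch below is a simplified, unofficial reimplementation of that check, assuming the conventional file locations; it is not systemd's code and reduces the matching rules to the common ID / SYSEXT_LEVEL / VERSION_ID cases.

    # Simplified sketch of the sysext applicability check (not systemd's implementation).
    from pathlib import Path

    def parse_release(path: Path) -> dict:
        """Parse a KEY=value os-release style file, ignoring comments and blank lines."""
        data = {}
        for raw in path.read_text().splitlines():
            line = raw.strip()
            if line and not line.startswith("#") and "=" in line:
                key, _, value = line.partition("=")
                data[key] = value.strip('"')
        return data

    def extension_applies(host_release: Path, ext_release: Path) -> bool:
        host, ext = parse_release(host_release), parse_release(ext_release)
        ext_id = ext.get("ID")
        if ext_id not in (None, "_any") and ext_id != host.get("ID"):
            return False
        if "SYSEXT_LEVEL" in ext:
            return ext["SYSEXT_LEVEL"] == host.get("SYSEXT_LEVEL", host.get("VERSION_ID"))
        if "VERSION_ID" in ext:
            return ext["VERSION_ID"] == host.get("VERSION_ID")
        return True

    # Example (paths illustrative): an image named kubernetes.raw carries
    # usr/lib/extension-release.d/extension-release.kubernetes inside it, e.g.
    # extension_applies(Path("/etc/os-release"),
    #     Path("/run/sysext/kubernetes/usr/lib/extension-release.d/extension-release.kubernetes"))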
Mar 17 17:35:07.138965 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Mar 17 17:35:07.174127 systemd[1]: Reloading finished in 163 ms. Mar 17 17:35:07.188831 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Mar 17 17:35:07.204620 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Mar 17 17:35:07.214111 systemd[1]: Starting audit-rules.service - Load Audit Rules... Mar 17 17:35:07.216519 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Mar 17 17:35:07.218491 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Mar 17 17:35:07.224616 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Mar 17 17:35:07.234376 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Mar 17 17:35:07.238486 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Mar 17 17:35:07.244156 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Mar 17 17:35:07.256901 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Mar 17 17:35:07.259032 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Mar 17 17:35:07.262548 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Mar 17 17:35:07.266336 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Mar 17 17:35:07.268016 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Mar 17 17:35:07.269675 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Mar 17 17:35:07.271965 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Mar 17 17:35:07.272103 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Mar 17 17:35:07.273578 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Mar 17 17:35:07.273708 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Mar 17 17:35:07.274039 systemd-udevd[1321]: Using default interface naming scheme 'v255'. Mar 17 17:35:07.278942 systemd[1]: modprobe@loop.service: Deactivated successfully. Mar 17 17:35:07.279090 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Mar 17 17:35:07.282754 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Mar 17 17:35:07.292518 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Mar 17 17:35:07.295734 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Mar 17 17:35:07.298453 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Mar 17 17:35:07.301791 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Mar 17 17:35:07.305866 systemd[1]: Starting systemd-update-done.service - Update is Completed... Mar 17 17:35:07.307703 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Mar 17 17:35:07.312482 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. 
Mar 17 17:35:07.314429 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Mar 17 17:35:07.314565 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Mar 17 17:35:07.316909 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Mar 17 17:35:07.318215 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Mar 17 17:35:07.321052 systemd[1]: Finished systemd-update-done.service - Update is Completed. Mar 17 17:35:07.322479 systemd[1]: Started systemd-userdbd.service - User Database Manager. Mar 17 17:35:07.324407 augenrules[1369]: No rules Mar 17 17:35:07.324761 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Mar 17 17:35:07.326152 systemd[1]: audit-rules.service: Deactivated successfully. Mar 17 17:35:07.326327 systemd[1]: Finished audit-rules.service - Load Audit Rules. Mar 17 17:35:07.327593 systemd[1]: modprobe@loop.service: Deactivated successfully. Mar 17 17:35:07.327786 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Mar 17 17:35:07.346843 systemd[1]: Finished ensure-sysext.service. Mar 17 17:35:07.363329 systemd[1]: Starting audit-rules.service - Load Audit Rules... Mar 17 17:35:07.364185 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Mar 17 17:35:07.366355 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Mar 17 17:35:07.369038 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Mar 17 17:35:07.369191 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 40 scanned by (udev-worker) (1358) Mar 17 17:35:07.370757 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Mar 17 17:35:07.375361 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Mar 17 17:35:07.376240 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Mar 17 17:35:07.378482 systemd[1]: Starting systemd-networkd.service - Network Configuration... Mar 17 17:35:07.384477 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization... Mar 17 17:35:07.387325 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Mar 17 17:35:07.387804 systemd[1]: modprobe@drm.service: Deactivated successfully. Mar 17 17:35:07.387949 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Mar 17 17:35:07.389023 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Mar 17 17:35:07.389151 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Mar 17 17:35:07.391373 systemd[1]: Condition check resulted in dev-ttyAMA0.device - /dev/ttyAMA0 being skipped. Mar 17 17:35:07.401352 augenrules[1385]: /sbin/augenrules: No change Mar 17 17:35:07.402108 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Mar 17 17:35:07.404126 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Mar 17 17:35:07.404311 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Mar 17 17:35:07.411980 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... 
Mar 17 17:35:07.414256 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Mar 17 17:35:07.414550 systemd[1]: modprobe@loop.service: Deactivated successfully. Mar 17 17:35:07.414686 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Mar 17 17:35:07.416165 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Mar 17 17:35:07.418301 augenrules[1418]: No rules Mar 17 17:35:07.419460 systemd[1]: audit-rules.service: Deactivated successfully. Mar 17 17:35:07.419655 systemd[1]: Finished audit-rules.service - Load Audit Rules. Mar 17 17:35:07.442165 systemd-resolved[1313]: Positive Trust Anchors: Mar 17 17:35:07.442597 systemd-resolved[1313]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Mar 17 17:35:07.442688 systemd-resolved[1313]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Mar 17 17:35:07.445689 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Mar 17 17:35:07.455307 systemd-resolved[1313]: Defaulting to hostname 'linux'. Mar 17 17:35:07.460906 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization. Mar 17 17:35:07.462189 systemd[1]: Reached target time-set.target - System Time Set. Mar 17 17:35:07.471895 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Mar 17 17:35:07.473059 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Mar 17 17:35:07.481667 systemd-networkd[1398]: lo: Link UP Mar 17 17:35:07.481677 systemd-networkd[1398]: lo: Gained carrier Mar 17 17:35:07.482526 systemd-networkd[1398]: Enumeration completed Mar 17 17:35:07.482645 systemd[1]: Started systemd-networkd.service - Network Configuration. Mar 17 17:35:07.483819 systemd[1]: Reached target network.target - Network. Mar 17 17:35:07.486362 systemd-networkd[1398]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Mar 17 17:35:07.486375 systemd-networkd[1398]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Mar 17 17:35:07.487027 systemd-networkd[1398]: eth0: Link UP Mar 17 17:35:07.487034 systemd-networkd[1398]: eth0: Gained carrier Mar 17 17:35:07.487048 systemd-networkd[1398]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Mar 17 17:35:07.491342 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Mar 17 17:35:07.496087 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Mar 17 17:35:07.501803 systemd-networkd[1398]: eth0: DHCPv4 address 10.0.0.106/16, gateway 10.0.0.1 acquired from 10.0.0.1 Mar 17 17:35:07.502405 systemd-timesyncd[1402]: Network configuration changed, trying to establish connection. 
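systemd-networkd reports a DHCPv4 lease of 10.0.0.106/16 with gateway 10.0.0.1 on eth0. A quick sanity check of those values with Python's ipaddress module, using only the numbers from the log line above:

    # Sanity-check the DHCPv4 lease parameters reported by systemd-networkd above.
    import ipaddress

    iface = ipaddress.ip_interface("10.0.0.106/16")
    gateway = ipaddress.ip_address("10.0.0.1")

    print(iface.network)                 # 10.0.0.0/16
    print(iface.network.num_addresses)   # 65536 addresses in the /16
    print(gateway in iface.network)      # True: the gateway is on-link for eth0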
Mar 17 17:35:07.505213 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization. Mar 17 17:35:07.506622 systemd-timesyncd[1402]: Contacted time server 10.0.0.1:123 (10.0.0.1). Mar 17 17:35:07.506671 systemd-timesyncd[1402]: Initial clock synchronization to Mon 2025-03-17 17:35:07.273420 UTC. Mar 17 17:35:07.508343 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes... Mar 17 17:35:07.527829 lvm[1435]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Mar 17 17:35:07.547290 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Mar 17 17:35:07.558846 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes. Mar 17 17:35:07.560865 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Mar 17 17:35:07.562205 systemd[1]: Reached target sysinit.target - System Initialization. Mar 17 17:35:07.563462 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Mar 17 17:35:07.564900 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Mar 17 17:35:07.566504 systemd[1]: Started logrotate.timer - Daily rotation of log files. Mar 17 17:35:07.567968 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Mar 17 17:35:07.569334 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Mar 17 17:35:07.570723 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Mar 17 17:35:07.570840 systemd[1]: Reached target paths.target - Path Units. Mar 17 17:35:07.571879 systemd[1]: Reached target timers.target - Timer Units. Mar 17 17:35:07.573886 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Mar 17 17:35:07.576324 systemd[1]: Starting docker.socket - Docker Socket for the API... Mar 17 17:35:07.585191 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Mar 17 17:35:07.588076 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes... Mar 17 17:35:07.589814 systemd[1]: Listening on docker.socket - Docker Socket for the API. Mar 17 17:35:07.591015 systemd[1]: Reached target sockets.target - Socket Units. Mar 17 17:35:07.592010 systemd[1]: Reached target basic.target - Basic System. Mar 17 17:35:07.593073 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Mar 17 17:35:07.593106 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Mar 17 17:35:07.594071 systemd[1]: Starting containerd.service - containerd container runtime... Mar 17 17:35:07.597214 lvm[1443]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Mar 17 17:35:07.596220 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Mar 17 17:35:07.600322 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Mar 17 17:35:07.603363 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Mar 17 17:35:07.604371 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Mar 17 17:35:07.608370 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... 
Mar 17 17:35:07.611368 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Mar 17 17:35:07.614537 jq[1446]: false Mar 17 17:35:07.614527 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Mar 17 17:35:07.616491 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Mar 17 17:35:07.622404 systemd[1]: Starting systemd-logind.service - User Login Management... Mar 17 17:35:07.625108 extend-filesystems[1447]: Found loop3 Mar 17 17:35:07.625882 extend-filesystems[1447]: Found loop4 Mar 17 17:35:07.625882 extend-filesystems[1447]: Found loop5 Mar 17 17:35:07.625882 extend-filesystems[1447]: Found vda Mar 17 17:35:07.625882 extend-filesystems[1447]: Found vda1 Mar 17 17:35:07.625882 extend-filesystems[1447]: Found vda2 Mar 17 17:35:07.625882 extend-filesystems[1447]: Found vda3 Mar 17 17:35:07.625882 extend-filesystems[1447]: Found usr Mar 17 17:35:07.625882 extend-filesystems[1447]: Found vda4 Mar 17 17:35:07.625882 extend-filesystems[1447]: Found vda6 Mar 17 17:35:07.625882 extend-filesystems[1447]: Found vda7 Mar 17 17:35:07.625882 extend-filesystems[1447]: Found vda9 Mar 17 17:35:07.625882 extend-filesystems[1447]: Checking size of /dev/vda9 Mar 17 17:35:07.643768 extend-filesystems[1447]: Resized partition /dev/vda9 Mar 17 17:35:07.627821 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Mar 17 17:35:07.628849 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Mar 17 17:35:07.646220 jq[1465]: true Mar 17 17:35:07.630463 systemd[1]: Starting update-engine.service - Update Engine... Mar 17 17:35:07.633732 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Mar 17 17:35:07.637215 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes. Mar 17 17:35:07.641583 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Mar 17 17:35:07.641835 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Mar 17 17:35:07.643700 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Mar 17 17:35:07.646535 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Mar 17 17:35:07.652335 dbus-daemon[1445]: [system] SELinux support is enabled Mar 17 17:35:07.651528 systemd[1]: motdgen.service: Deactivated successfully. Mar 17 17:35:07.651712 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Mar 17 17:35:07.652824 systemd[1]: Started dbus.service - D-Bus System Message Bus. Mar 17 17:35:07.666195 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 40 scanned by (udev-worker) (1357) Mar 17 17:35:07.669949 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Mar 17 17:35:07.669980 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Mar 17 17:35:07.672558 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). 
Mar 17 17:35:07.672583 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Mar 17 17:35:07.677095 tar[1469]: linux-arm64/helm Mar 17 17:35:07.677797 jq[1471]: true Mar 17 17:35:07.678114 extend-filesystems[1470]: resize2fs 1.47.1 (20-May-2024) Mar 17 17:35:07.689198 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 1864699 blocks Mar 17 17:35:07.689992 (ntainerd)[1473]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Mar 17 17:35:07.705873 update_engine[1459]: I20250317 17:35:07.705721 1459 main.cc:92] Flatcar Update Engine starting Mar 17 17:35:07.712485 systemd[1]: Started update-engine.service - Update Engine. Mar 17 17:35:07.717075 update_engine[1459]: I20250317 17:35:07.715533 1459 update_check_scheduler.cc:74] Next update check in 10m0s Mar 17 17:35:07.719459 systemd[1]: Started locksmithd.service - Cluster reboot manager. Mar 17 17:35:07.734705 kernel: EXT4-fs (vda9): resized filesystem to 1864699 Mar 17 17:35:07.751801 systemd-logind[1453]: Watching system buttons on /dev/input/event0 (Power Button) Mar 17 17:35:07.760711 systemd-logind[1453]: New seat seat0. Mar 17 17:35:07.765358 extend-filesystems[1470]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required Mar 17 17:35:07.765358 extend-filesystems[1470]: old_desc_blocks = 1, new_desc_blocks = 1 Mar 17 17:35:07.765358 extend-filesystems[1470]: The filesystem on /dev/vda9 is now 1864699 (4k) blocks long. Mar 17 17:35:07.776996 extend-filesystems[1447]: Resized filesystem in /dev/vda9 Mar 17 17:35:07.779526 bash[1499]: Updated "/home/core/.ssh/authorized_keys" Mar 17 17:35:07.766099 systemd[1]: extend-filesystems.service: Deactivated successfully. Mar 17 17:35:07.766293 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Mar 17 17:35:07.773169 systemd[1]: Started systemd-logind.service - User Login Management. Mar 17 17:35:07.779570 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Mar 17 17:35:07.781672 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met. Mar 17 17:35:07.797934 locksmithd[1489]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Mar 17 17:35:07.912256 containerd[1473]: time="2025-03-17T17:35:07.912021760Z" level=info msg="starting containerd" revision=9b2ad7760328148397346d10c7b2004271249db4 version=v1.7.23 Mar 17 17:35:07.942823 containerd[1473]: time="2025-03-17T17:35:07.942654600Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Mar 17 17:35:07.944158 containerd[1473]: time="2025-03-17T17:35:07.944116120Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.83-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Mar 17 17:35:07.944158 containerd[1473]: time="2025-03-17T17:35:07.944150280Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Mar 17 17:35:07.944251 containerd[1473]: time="2025-03-17T17:35:07.944168240Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Mar 17 17:35:07.944522 containerd[1473]: time="2025-03-17T17:35:07.944334440Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." 
type=io.containerd.warning.v1 Mar 17 17:35:07.944522 containerd[1473]: time="2025-03-17T17:35:07.944359280Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1 Mar 17 17:35:07.944522 containerd[1473]: time="2025-03-17T17:35:07.944415760Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1 Mar 17 17:35:07.944522 containerd[1473]: time="2025-03-17T17:35:07.944427840Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Mar 17 17:35:07.944606 containerd[1473]: time="2025-03-17T17:35:07.944577800Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Mar 17 17:35:07.944606 containerd[1473]: time="2025-03-17T17:35:07.944591720Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Mar 17 17:35:07.944651 containerd[1473]: time="2025-03-17T17:35:07.944604920Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1 Mar 17 17:35:07.944651 containerd[1473]: time="2025-03-17T17:35:07.944614320Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Mar 17 17:35:07.945146 containerd[1473]: time="2025-03-17T17:35:07.944692040Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Mar 17 17:35:07.945146 containerd[1473]: time="2025-03-17T17:35:07.944887360Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Mar 17 17:35:07.945146 containerd[1473]: time="2025-03-17T17:35:07.944977680Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Mar 17 17:35:07.945146 containerd[1473]: time="2025-03-17T17:35:07.944991320Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Mar 17 17:35:07.945146 containerd[1473]: time="2025-03-17T17:35:07.945064640Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 Mar 17 17:35:07.945146 containerd[1473]: time="2025-03-17T17:35:07.945102080Z" level=info msg="metadata content store policy set" policy=shared Mar 17 17:35:07.948320 containerd[1473]: time="2025-03-17T17:35:07.948288560Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Mar 17 17:35:07.948389 containerd[1473]: time="2025-03-17T17:35:07.948342000Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Mar 17 17:35:07.948389 containerd[1473]: time="2025-03-17T17:35:07.948358640Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1 Mar 17 17:35:07.948389 containerd[1473]: time="2025-03-17T17:35:07.948375400Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." 
type=io.containerd.streaming.v1 Mar 17 17:35:07.948440 containerd[1473]: time="2025-03-17T17:35:07.948392400Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Mar 17 17:35:07.948543 containerd[1473]: time="2025-03-17T17:35:07.948518160Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Mar 17 17:35:07.948760 containerd[1473]: time="2025-03-17T17:35:07.948736440Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Mar 17 17:35:07.949102 containerd[1473]: time="2025-03-17T17:35:07.948848560Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2 Mar 17 17:35:07.949102 containerd[1473]: time="2025-03-17T17:35:07.948870240Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1 Mar 17 17:35:07.949102 containerd[1473]: time="2025-03-17T17:35:07.948886960Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1 Mar 17 17:35:07.949102 containerd[1473]: time="2025-03-17T17:35:07.948900240Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Mar 17 17:35:07.949102 containerd[1473]: time="2025-03-17T17:35:07.948913080Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Mar 17 17:35:07.949102 containerd[1473]: time="2025-03-17T17:35:07.948924840Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Mar 17 17:35:07.949102 containerd[1473]: time="2025-03-17T17:35:07.948937880Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Mar 17 17:35:07.949102 containerd[1473]: time="2025-03-17T17:35:07.948951840Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Mar 17 17:35:07.949102 containerd[1473]: time="2025-03-17T17:35:07.948963760Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Mar 17 17:35:07.949102 containerd[1473]: time="2025-03-17T17:35:07.948974960Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Mar 17 17:35:07.949102 containerd[1473]: time="2025-03-17T17:35:07.948986320Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Mar 17 17:35:07.949102 containerd[1473]: time="2025-03-17T17:35:07.949005840Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Mar 17 17:35:07.949102 containerd[1473]: time="2025-03-17T17:35:07.949019840Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Mar 17 17:35:07.949102 containerd[1473]: time="2025-03-17T17:35:07.949032680Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Mar 17 17:35:07.949381 containerd[1473]: time="2025-03-17T17:35:07.949045160Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Mar 17 17:35:07.949381 containerd[1473]: time="2025-03-17T17:35:07.949056120Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." 
type=io.containerd.grpc.v1 Mar 17 17:35:07.949381 containerd[1473]: time="2025-03-17T17:35:07.949069360Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Mar 17 17:35:07.949381 containerd[1473]: time="2025-03-17T17:35:07.949080080Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Mar 17 17:35:07.949381 containerd[1473]: time="2025-03-17T17:35:07.949091440Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Mar 17 17:35:07.949381 containerd[1473]: time="2025-03-17T17:35:07.949102640Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1 Mar 17 17:35:07.949381 containerd[1473]: time="2025-03-17T17:35:07.949116360Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1 Mar 17 17:35:07.949381 containerd[1473]: time="2025-03-17T17:35:07.949127480Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Mar 17 17:35:07.949381 containerd[1473]: time="2025-03-17T17:35:07.949140160Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1 Mar 17 17:35:07.949381 containerd[1473]: time="2025-03-17T17:35:07.949152280Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Mar 17 17:35:07.949381 containerd[1473]: time="2025-03-17T17:35:07.949170720Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1 Mar 17 17:35:07.949381 containerd[1473]: time="2025-03-17T17:35:07.949209200Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1 Mar 17 17:35:07.949381 containerd[1473]: time="2025-03-17T17:35:07.949222720Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Mar 17 17:35:07.949381 containerd[1473]: time="2025-03-17T17:35:07.949233560Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Mar 17 17:35:07.949607 containerd[1473]: time="2025-03-17T17:35:07.949398120Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Mar 17 17:35:07.949607 containerd[1473]: time="2025-03-17T17:35:07.949414040Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1 Mar 17 17:35:07.949607 containerd[1473]: time="2025-03-17T17:35:07.949423680Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Mar 17 17:35:07.949607 containerd[1473]: time="2025-03-17T17:35:07.949435560Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1 Mar 17 17:35:07.949607 containerd[1473]: time="2025-03-17T17:35:07.949444680Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Mar 17 17:35:07.949607 containerd[1473]: time="2025-03-17T17:35:07.949456720Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1 Mar 17 17:35:07.949607 containerd[1473]: time="2025-03-17T17:35:07.949466800Z" level=info msg="NRI interface is disabled by configuration." 
Mar 17 17:35:07.949607 containerd[1473]: time="2025-03-17T17:35:07.949477760Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1 Mar 17 17:35:07.950398 containerd[1473]: time="2025-03-17T17:35:07.949737920Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Mar 17 17:35:07.950398 containerd[1473]: time="2025-03-17T17:35:07.949789880Z" level=info msg="Connect containerd service" Mar 17 17:35:07.950398 containerd[1473]: time="2025-03-17T17:35:07.949821120Z" level=info msg="using legacy CRI server" Mar 17 17:35:07.950398 containerd[1473]: time="2025-03-17T17:35:07.949827240Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Mar 17 17:35:07.950398 containerd[1473]: time="2025-03-17T17:35:07.950052000Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Mar 17 17:35:07.953817 containerd[1473]: time="2025-03-17T17:35:07.952557400Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" 
error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Mar 17 17:35:07.953817 containerd[1473]: time="2025-03-17T17:35:07.952751120Z" level=info msg="Start subscribing containerd event" Mar 17 17:35:07.953817 containerd[1473]: time="2025-03-17T17:35:07.952796840Z" level=info msg="Start recovering state" Mar 17 17:35:07.953817 containerd[1473]: time="2025-03-17T17:35:07.952855280Z" level=info msg="Start event monitor" Mar 17 17:35:07.953817 containerd[1473]: time="2025-03-17T17:35:07.952866720Z" level=info msg="Start snapshots syncer" Mar 17 17:35:07.953817 containerd[1473]: time="2025-03-17T17:35:07.952875600Z" level=info msg="Start cni network conf syncer for default" Mar 17 17:35:07.953817 containerd[1473]: time="2025-03-17T17:35:07.952883560Z" level=info msg="Start streaming server" Mar 17 17:35:07.953817 containerd[1473]: time="2025-03-17T17:35:07.953769560Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Mar 17 17:35:07.953817 containerd[1473]: time="2025-03-17T17:35:07.953813800Z" level=info msg=serving... address=/run/containerd/containerd.sock Mar 17 17:35:07.954917 containerd[1473]: time="2025-03-17T17:35:07.953873920Z" level=info msg="containerd successfully booted in 0.045298s" Mar 17 17:35:07.953954 systemd[1]: Started containerd.service - containerd container runtime. Mar 17 17:35:08.044614 sshd_keygen[1460]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Mar 17 17:35:08.057728 tar[1469]: linux-arm64/LICENSE Mar 17 17:35:08.057808 tar[1469]: linux-arm64/README.md Mar 17 17:35:08.082017 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Mar 17 17:35:08.084758 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Mar 17 17:35:08.091452 systemd[1]: Starting issuegen.service - Generate /run/issue... Mar 17 17:35:08.099291 systemd[1]: issuegen.service: Deactivated successfully. Mar 17 17:35:08.100271 systemd[1]: Finished issuegen.service - Generate /run/issue. Mar 17 17:35:08.105439 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Mar 17 17:35:08.115963 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Mar 17 17:35:08.119085 systemd[1]: Started getty@tty1.service - Getty on tty1. Mar 17 17:35:08.121317 systemd[1]: Started serial-getty@ttyAMA0.service - Serial Getty on ttyAMA0. Mar 17 17:35:08.122682 systemd[1]: Reached target getty.target - Login Prompts. Mar 17 17:35:09.090292 systemd-networkd[1398]: eth0: Gained IPv6LL Mar 17 17:35:09.092851 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Mar 17 17:35:09.094642 systemd[1]: Reached target network-online.target - Network is Online. Mar 17 17:35:09.110731 systemd[1]: Starting coreos-metadata.service - QEMU metadata agent... Mar 17 17:35:09.113051 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Mar 17 17:35:09.115294 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Mar 17 17:35:09.132369 systemd[1]: coreos-metadata.service: Deactivated successfully. Mar 17 17:35:09.133750 systemd[1]: Finished coreos-metadata.service - QEMU metadata agent. Mar 17 17:35:09.135552 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Mar 17 17:35:09.138246 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. 
Mar 17 17:35:09.591731 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Mar 17 17:35:09.593024 systemd[1]: Reached target multi-user.target - Multi-User System. Mar 17 17:35:09.596140 (kubelet)[1557]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Mar 17 17:35:09.596299 systemd[1]: Startup finished in 550ms (kernel) + 4.252s (initrd) + 3.670s (userspace) = 8.473s. Mar 17 17:35:09.993242 kubelet[1557]: E0317 17:35:09.993114 1557 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Mar 17 17:35:09.995789 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Mar 17 17:35:09.995936 systemd[1]: kubelet.service: Failed with result 'exit-code'. Mar 17 17:35:14.225870 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Mar 17 17:35:14.226945 systemd[1]: Started sshd@0-10.0.0.106:22-10.0.0.1:48282.service - OpenSSH per-connection server daemon (10.0.0.1:48282). Mar 17 17:35:14.282799 sshd[1570]: Accepted publickey for core from 10.0.0.1 port 48282 ssh2: RSA SHA256:XEsN/dc1y+7MY2pZiPvPM9E3FANLWuBR2AC7g0KqjmQ Mar 17 17:35:14.284171 sshd-session[1570]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 17 17:35:14.295207 systemd-logind[1453]: New session 1 of user core. Mar 17 17:35:14.296223 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Mar 17 17:35:14.306435 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Mar 17 17:35:14.317211 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Mar 17 17:35:14.319423 systemd[1]: Starting user@500.service - User Manager for UID 500... Mar 17 17:35:14.325821 (systemd)[1574]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Mar 17 17:35:14.392360 systemd[1574]: Queued start job for default target default.target. Mar 17 17:35:14.401040 systemd[1574]: Created slice app.slice - User Application Slice. Mar 17 17:35:14.401083 systemd[1574]: Reached target paths.target - Paths. Mar 17 17:35:14.401095 systemd[1574]: Reached target timers.target - Timers. Mar 17 17:35:14.402324 systemd[1574]: Starting dbus.socket - D-Bus User Message Bus Socket... Mar 17 17:35:14.412416 systemd[1574]: Listening on dbus.socket - D-Bus User Message Bus Socket. Mar 17 17:35:14.412475 systemd[1574]: Reached target sockets.target - Sockets. Mar 17 17:35:14.412487 systemd[1574]: Reached target basic.target - Basic System. Mar 17 17:35:14.412524 systemd[1574]: Reached target default.target - Main User Target. Mar 17 17:35:14.412550 systemd[1574]: Startup finished in 81ms. Mar 17 17:35:14.412881 systemd[1]: Started user@500.service - User Manager for UID 500. Mar 17 17:35:14.414386 systemd[1]: Started session-1.scope - Session 1 of User core. Mar 17 17:35:14.475838 systemd[1]: Started sshd@1-10.0.0.106:22-10.0.0.1:48298.service - OpenSSH per-connection server daemon (10.0.0.1:48298). 
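The kubelet exit above (open /var/lib/kubelet/config.yaml: no such file or directory) is the normal pre-bootstrap state: the unit is started before kubeadm has generated the kubelet's config file, and systemd keeps restarting it until that file exists, which is why the restart counter climbs later in this log. Only as a sketch of that file's general shape, assuming a kubeadm-style node (all values are illustrative, nothing here is read from this host):

    /var/lib/kubelet/config.yaml  (normally generated by kubeadm; hypothetical sketch)
    apiVersion: kubelet.config.k8s.io/v1beta1
    kind: KubeletConfiguration
    cgroupDriver: systemd                 # consistent with SystemdCgroup:true in the containerd runc options above
    staticPodPath: /etc/kubernetes/manifests
    authentication:
      x509:
        clientCAFile: /etc/kubernetes/pki/ca.crt
    clusterDomain: cluster.local
    clusterDNS:
      - 10.96.0.10                        # assumed cluster DNS service IP, not observed in this log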
Mar 17 17:35:14.513779 sshd[1585]: Accepted publickey for core from 10.0.0.1 port 48298 ssh2: RSA SHA256:XEsN/dc1y+7MY2pZiPvPM9E3FANLWuBR2AC7g0KqjmQ Mar 17 17:35:14.514884 sshd-session[1585]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 17 17:35:14.519881 systemd-logind[1453]: New session 2 of user core. Mar 17 17:35:14.532395 systemd[1]: Started session-2.scope - Session 2 of User core. Mar 17 17:35:14.583510 sshd[1587]: Connection closed by 10.0.0.1 port 48298 Mar 17 17:35:14.584362 sshd-session[1585]: pam_unix(sshd:session): session closed for user core Mar 17 17:35:14.592494 systemd[1]: sshd@1-10.0.0.106:22-10.0.0.1:48298.service: Deactivated successfully. Mar 17 17:35:14.593856 systemd[1]: session-2.scope: Deactivated successfully. Mar 17 17:35:14.596305 systemd-logind[1453]: Session 2 logged out. Waiting for processes to exit. Mar 17 17:35:14.603495 systemd[1]: Started sshd@2-10.0.0.106:22-10.0.0.1:48300.service - OpenSSH per-connection server daemon (10.0.0.1:48300). Mar 17 17:35:14.604489 systemd-logind[1453]: Removed session 2. Mar 17 17:35:14.638204 sshd[1592]: Accepted publickey for core from 10.0.0.1 port 48300 ssh2: RSA SHA256:XEsN/dc1y+7MY2pZiPvPM9E3FANLWuBR2AC7g0KqjmQ Mar 17 17:35:14.638837 sshd-session[1592]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 17 17:35:14.643072 systemd-logind[1453]: New session 3 of user core. Mar 17 17:35:14.652351 systemd[1]: Started session-3.scope - Session 3 of User core. Mar 17 17:35:14.700528 sshd[1595]: Connection closed by 10.0.0.1 port 48300 Mar 17 17:35:14.700996 sshd-session[1592]: pam_unix(sshd:session): session closed for user core Mar 17 17:35:14.713565 systemd[1]: sshd@2-10.0.0.106:22-10.0.0.1:48300.service: Deactivated successfully. Mar 17 17:35:14.714932 systemd[1]: session-3.scope: Deactivated successfully. Mar 17 17:35:14.717533 systemd-logind[1453]: Session 3 logged out. Waiting for processes to exit. Mar 17 17:35:14.718280 systemd[1]: Started sshd@3-10.0.0.106:22-10.0.0.1:48310.service - OpenSSH per-connection server daemon (10.0.0.1:48310). Mar 17 17:35:14.719411 systemd-logind[1453]: Removed session 3. Mar 17 17:35:14.756598 sshd[1600]: Accepted publickey for core from 10.0.0.1 port 48310 ssh2: RSA SHA256:XEsN/dc1y+7MY2pZiPvPM9E3FANLWuBR2AC7g0KqjmQ Mar 17 17:35:14.757979 sshd-session[1600]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 17 17:35:14.761517 systemd-logind[1453]: New session 4 of user core. Mar 17 17:35:14.773324 systemd[1]: Started session-4.scope - Session 4 of User core. Mar 17 17:35:14.823774 sshd[1602]: Connection closed by 10.0.0.1 port 48310 Mar 17 17:35:14.824388 sshd-session[1600]: pam_unix(sshd:session): session closed for user core Mar 17 17:35:14.833608 systemd[1]: sshd@3-10.0.0.106:22-10.0.0.1:48310.service: Deactivated successfully. Mar 17 17:35:14.835081 systemd[1]: session-4.scope: Deactivated successfully. Mar 17 17:35:14.836234 systemd-logind[1453]: Session 4 logged out. Waiting for processes to exit. Mar 17 17:35:14.837478 systemd[1]: Started sshd@4-10.0.0.106:22-10.0.0.1:48318.service - OpenSSH per-connection server daemon (10.0.0.1:48318). Mar 17 17:35:14.838269 systemd-logind[1453]: Removed session 4. 
Mar 17 17:35:14.875669 sshd[1607]: Accepted publickey for core from 10.0.0.1 port 48318 ssh2: RSA SHA256:XEsN/dc1y+7MY2pZiPvPM9E3FANLWuBR2AC7g0KqjmQ Mar 17 17:35:14.876945 sshd-session[1607]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 17 17:35:14.882544 systemd-logind[1453]: New session 5 of user core. Mar 17 17:35:14.895391 systemd[1]: Started session-5.scope - Session 5 of User core. Mar 17 17:35:14.964391 sudo[1610]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Mar 17 17:35:14.966721 sudo[1610]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Mar 17 17:35:15.280484 (dockerd)[1631]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Mar 17 17:35:15.281074 systemd[1]: Starting docker.service - Docker Application Container Engine... Mar 17 17:35:15.537335 dockerd[1631]: time="2025-03-17T17:35:15.537216842Z" level=info msg="Starting up" Mar 17 17:35:15.799842 dockerd[1631]: time="2025-03-17T17:35:15.799734310Z" level=info msg="Loading containers: start." Mar 17 17:35:15.918194 kernel: Initializing XFRM netlink socket Mar 17 17:35:15.988679 systemd-networkd[1398]: docker0: Link UP Mar 17 17:35:16.020466 dockerd[1631]: time="2025-03-17T17:35:16.020401540Z" level=info msg="Loading containers: done." Mar 17 17:35:16.036476 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck976268483-merged.mount: Deactivated successfully. Mar 17 17:35:16.038022 dockerd[1631]: time="2025-03-17T17:35:16.037513438Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Mar 17 17:35:16.038022 dockerd[1631]: time="2025-03-17T17:35:16.037611051Z" level=info msg="Docker daemon" commit=8b539b8df24032dabeaaa099cf1d0535ef0286a3 containerd-snapshotter=false storage-driver=overlay2 version=27.2.1 Mar 17 17:35:16.038022 dockerd[1631]: time="2025-03-17T17:35:16.037710683Z" level=info msg="Daemon has completed initialization" Mar 17 17:35:16.067730 dockerd[1631]: time="2025-03-17T17:35:16.067596987Z" level=info msg="API listen on /run/docker.sock" Mar 17 17:35:16.067801 systemd[1]: Started docker.service - Docker Application Container Engine. Mar 17 17:35:16.641004 containerd[1473]: time="2025-03-17T17:35:16.640956657Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.31.7\"" Mar 17 17:35:17.234678 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1345556053.mount: Deactivated successfully. 
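dockerd above auto-detects its storage driver (overlay2, with a warning that native diff is disabled because the kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled) and listens on /run/docker.sock; the DOCKER_* environment variables referenced by the unit are unset, so everything runs on defaults. Where those choices need to be pinned explicitly rather than auto-detected, the usual mechanism is /etc/docker/daemon.json; the snippet below is only an illustrative sketch, none of it is taken from this machine:

    /etc/docker/daemon.json  (hypothetical example)
    {
      "storage-driver": "overlay2",
      "exec-opts": ["native.cgroupdriver=systemd"],
      "log-driver": "json-file",
      "log-opts": { "max-size": "100m" }
    }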
Mar 17 17:35:18.243206 containerd[1473]: time="2025-03-17T17:35:18.240530981Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.31.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 17 17:35:18.243206 containerd[1473]: time="2025-03-17T17:35:18.240885760Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.31.7: active requests=0, bytes read=25552768" Mar 17 17:35:18.243206 containerd[1473]: time="2025-03-17T17:35:18.241737024Z" level=info msg="ImageCreate event name:\"sha256:26ae5fde2308729bfda71fa20aa73cb5a1a4490f107f62dc7e1c4c49823cc084\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 17 17:35:18.245841 containerd[1473]: time="2025-03-17T17:35:18.245536712Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:22c19cc70fe5806d0a2cb28a6b6b33fd34e6f9e50616bdf6d53649bcfafbc277\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 17 17:35:18.247191 containerd[1473]: time="2025-03-17T17:35:18.246694211Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.31.7\" with image id \"sha256:26ae5fde2308729bfda71fa20aa73cb5a1a4490f107f62dc7e1c4c49823cc084\", repo tag \"registry.k8s.io/kube-apiserver:v1.31.7\", repo digest \"registry.k8s.io/kube-apiserver@sha256:22c19cc70fe5806d0a2cb28a6b6b33fd34e6f9e50616bdf6d53649bcfafbc277\", size \"25549566\" in 1.605697025s" Mar 17 17:35:18.247191 containerd[1473]: time="2025-03-17T17:35:18.246741367Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.31.7\" returns image reference \"sha256:26ae5fde2308729bfda71fa20aa73cb5a1a4490f107f62dc7e1c4c49823cc084\"" Mar 17 17:35:18.247686 containerd[1473]: time="2025-03-17T17:35:18.247661062Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.31.7\"" Mar 17 17:35:19.557050 containerd[1473]: time="2025-03-17T17:35:19.556997147Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.31.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 17 17:35:19.558070 containerd[1473]: time="2025-03-17T17:35:19.558010821Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.31.7: active requests=0, bytes read=22458980" Mar 17 17:35:19.558259 containerd[1473]: time="2025-03-17T17:35:19.558228869Z" level=info msg="ImageCreate event name:\"sha256:3f2886c2c7c101461e78c37591f8beb12ac073f8dcf5e32c95da9e9689d0c1d3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 17 17:35:19.560986 containerd[1473]: time="2025-03-17T17:35:19.560943220Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:6abe7a0accecf29db6ebab18a10f844678ffed693d79e2e51a18a6f2b4530cbb\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 17 17:35:19.563142 containerd[1473]: time="2025-03-17T17:35:19.563002315Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.31.7\" with image id \"sha256:3f2886c2c7c101461e78c37591f8beb12ac073f8dcf5e32c95da9e9689d0c1d3\", repo tag \"registry.k8s.io/kube-controller-manager:v1.31.7\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:6abe7a0accecf29db6ebab18a10f844678ffed693d79e2e51a18a6f2b4530cbb\", size \"23899774\" in 1.315309689s" Mar 17 17:35:19.563142 containerd[1473]: time="2025-03-17T17:35:19.563050470Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.31.7\" returns image reference \"sha256:3f2886c2c7c101461e78c37591f8beb12ac073f8dcf5e32c95da9e9689d0c1d3\"" Mar 17 17:35:19.563592 
containerd[1473]: time="2025-03-17T17:35:19.563574134Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.31.7\"" Mar 17 17:35:20.246227 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Mar 17 17:35:20.256349 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Mar 17 17:35:20.344967 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Mar 17 17:35:20.348376 (kubelet)[1892]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Mar 17 17:35:20.389793 kubelet[1892]: E0317 17:35:20.389739 1892 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Mar 17 17:35:20.392714 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Mar 17 17:35:20.392853 systemd[1]: kubelet.service: Failed with result 'exit-code'. Mar 17 17:35:20.854718 containerd[1473]: time="2025-03-17T17:35:20.854661908Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.31.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 17 17:35:20.855206 containerd[1473]: time="2025-03-17T17:35:20.855151737Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.31.7: active requests=0, bytes read=17125831" Mar 17 17:35:20.856101 containerd[1473]: time="2025-03-17T17:35:20.856045301Z" level=info msg="ImageCreate event name:\"sha256:3dd474fdc8c0d007008dd47bafecdd344fbdace928731ae8b09f58f633f4a30f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 17 17:35:20.859074 containerd[1473]: time="2025-03-17T17:35:20.859001051Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:fb80249bcb77ee72b1c9fa5b70bc28a83ed107c9ca71957841ad91db379963bf\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 17 17:35:20.860255 containerd[1473]: time="2025-03-17T17:35:20.860225662Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.31.7\" with image id \"sha256:3dd474fdc8c0d007008dd47bafecdd344fbdace928731ae8b09f58f633f4a30f\", repo tag \"registry.k8s.io/kube-scheduler:v1.31.7\", repo digest \"registry.k8s.io/kube-scheduler@sha256:fb80249bcb77ee72b1c9fa5b70bc28a83ed107c9ca71957841ad91db379963bf\", size \"18566643\" in 1.296566455s" Mar 17 17:35:20.860304 containerd[1473]: time="2025-03-17T17:35:20.860256559Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.31.7\" returns image reference \"sha256:3dd474fdc8c0d007008dd47bafecdd344fbdace928731ae8b09f58f633f4a30f\"" Mar 17 17:35:20.860725 containerd[1473]: time="2025-03-17T17:35:20.860681570Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.7\"" Mar 17 17:35:21.797856 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2216576401.mount: Deactivated successfully. 
Mar 17 17:35:22.097261 containerd[1473]: time="2025-03-17T17:35:22.097169659Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.31.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 17 17:35:22.098115 containerd[1473]: time="2025-03-17T17:35:22.098068920Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.31.7: active requests=0, bytes read=26871917" Mar 17 17:35:22.099248 containerd[1473]: time="2025-03-17T17:35:22.098664233Z" level=info msg="ImageCreate event name:\"sha256:939054a0dc9c7c1596b061fc2380758139ce62751b44a0b21b3afc7abd7eb3ff\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 17 17:35:22.100507 containerd[1473]: time="2025-03-17T17:35:22.100473865Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:e5839270c96c3ad1bea1dce4935126d3281297527f3655408d2970aa4b5cf178\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 17 17:35:22.101331 containerd[1473]: time="2025-03-17T17:35:22.101299378Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.31.7\" with image id \"sha256:939054a0dc9c7c1596b061fc2380758139ce62751b44a0b21b3afc7abd7eb3ff\", repo tag \"registry.k8s.io/kube-proxy:v1.31.7\", repo digest \"registry.k8s.io/kube-proxy@sha256:e5839270c96c3ad1bea1dce4935126d3281297527f3655408d2970aa4b5cf178\", size \"26870934\" in 1.24058157s" Mar 17 17:35:22.101331 containerd[1473]: time="2025-03-17T17:35:22.101328527Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.7\" returns image reference \"sha256:939054a0dc9c7c1596b061fc2380758139ce62751b44a0b21b3afc7abd7eb3ff\"" Mar 17 17:35:22.101898 containerd[1473]: time="2025-03-17T17:35:22.101779850Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\"" Mar 17 17:35:22.635116 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2079771249.mount: Deactivated successfully. 
Mar 17 17:35:23.212455 containerd[1473]: time="2025-03-17T17:35:23.212398262Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 17 17:35:23.214359 containerd[1473]: time="2025-03-17T17:35:23.214310355Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.1: active requests=0, bytes read=16485383" Mar 17 17:35:23.214978 containerd[1473]: time="2025-03-17T17:35:23.214941503Z" level=info msg="ImageCreate event name:\"sha256:2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 17 17:35:23.219008 containerd[1473]: time="2025-03-17T17:35:23.218968488Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 17 17:35:23.219898 containerd[1473]: time="2025-03-17T17:35:23.219710957Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.1\" with image id \"sha256:2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.1\", repo digest \"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\", size \"16482581\" in 1.117802044s" Mar 17 17:35:23.219898 containerd[1473]: time="2025-03-17T17:35:23.219755979Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\" returns image reference \"sha256:2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93\"" Mar 17 17:35:23.220408 containerd[1473]: time="2025-03-17T17:35:23.220385415Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" Mar 17 17:35:23.746149 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2021090865.mount: Deactivated successfully. 
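The pause:3.10 image requested here is pulled alongside the control-plane images; the containerd CRI plugin's own sandbox image, per the config dump at containerd startup, is SandboxImage registry.k8s.io/pause:3.8, which is pulled separately further down when the first pod sandboxes are created. The values visible in that dump (overlayfs snapshotter, runc via io.containerd.runc.v2 with SystemdCgroup:true, CNI dirs /opt/cni/bin and /etc/cni/net.d) correspond to /etc/containerd/config.toml stanzas roughly like the following; this is a sketch, not a copy of this host's file:

    version = 2
    [plugins."io.containerd.grpc.v1.cri"]
      sandbox_image = "registry.k8s.io/pause:3.8"
      [plugins."io.containerd.grpc.v1.cri".cni]
        bin_dir = "/opt/cni/bin"
        conf_dir = "/etc/cni/net.d"
      [plugins."io.containerd.grpc.v1.cri".containerd]
        snapshotter = "overlayfs"
        default_runtime_name = "runc"
        [plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc]
          runtime_type = "io.containerd.runc.v2"
          [plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc.options]
            SystemdCgroup = true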
Mar 17 17:35:23.751024 containerd[1473]: time="2025-03-17T17:35:23.750979163Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 17 17:35:23.751472 containerd[1473]: time="2025-03-17T17:35:23.751427035Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=268705" Mar 17 17:35:23.752221 containerd[1473]: time="2025-03-17T17:35:23.752160301Z" level=info msg="ImageCreate event name:\"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 17 17:35:23.754205 containerd[1473]: time="2025-03-17T17:35:23.754164071Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 17 17:35:23.755333 containerd[1473]: time="2025-03-17T17:35:23.755299509Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"267933\" in 534.883774ms" Mar 17 17:35:23.755370 containerd[1473]: time="2025-03-17T17:35:23.755334093Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\"" Mar 17 17:35:23.756088 containerd[1473]: time="2025-03-17T17:35:23.756052457Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.15-0\"" Mar 17 17:35:24.217734 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2261955887.mount: Deactivated successfully. Mar 17 17:35:25.767187 containerd[1473]: time="2025-03-17T17:35:25.767134032Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.15-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 17 17:35:25.767963 containerd[1473]: time="2025-03-17T17:35:25.767702077Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.15-0: active requests=0, bytes read=66406427" Mar 17 17:35:25.768625 containerd[1473]: time="2025-03-17T17:35:25.768590435Z" level=info msg="ImageCreate event name:\"sha256:27e3830e1402783674d8b594038967deea9d51f0d91b34c93c8f39d2f68af7da\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 17 17:35:25.772046 containerd[1473]: time="2025-03-17T17:35:25.772009910Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 17 17:35:25.773372 containerd[1473]: time="2025-03-17T17:35:25.773324620Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.15-0\" with image id \"sha256:27e3830e1402783674d8b594038967deea9d51f0d91b34c93c8f39d2f68af7da\", repo tag \"registry.k8s.io/etcd:3.5.15-0\", repo digest \"registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a\", size \"66535646\" in 2.017232312s" Mar 17 17:35:25.773372 containerd[1473]: time="2025-03-17T17:35:25.773363383Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.15-0\" returns image reference \"sha256:27e3830e1402783674d8b594038967deea9d51f0d91b34c93c8f39d2f68af7da\"" Mar 17 17:35:30.571782 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. 
Mar 17 17:35:30.581357 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Mar 17 17:35:30.668826 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Mar 17 17:35:30.672980 (kubelet)[2046]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Mar 17 17:35:30.775574 kubelet[2046]: E0317 17:35:30.775520 2046 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Mar 17 17:35:30.777723 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Mar 17 17:35:30.777843 systemd[1]: kubelet.service: Failed with result 'exit-code'. Mar 17 17:35:30.945272 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Mar 17 17:35:30.953404 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Mar 17 17:35:30.973471 systemd[1]: Reloading requested from client PID 2063 ('systemctl') (unit session-5.scope)... Mar 17 17:35:30.973491 systemd[1]: Reloading... Mar 17 17:35:31.050228 zram_generator::config[2105]: No configuration found. Mar 17 17:35:31.154631 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Mar 17 17:35:31.206837 systemd[1]: Reloading finished in 233 ms. Mar 17 17:35:31.256017 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Mar 17 17:35:31.259592 systemd[1]: kubelet.service: Deactivated successfully. Mar 17 17:35:31.259942 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Mar 17 17:35:31.262372 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Mar 17 17:35:31.355840 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Mar 17 17:35:31.360035 (kubelet)[2149]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Mar 17 17:35:31.396279 kubelet[2149]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Mar 17 17:35:31.396279 kubelet[2149]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Mar 17 17:35:31.396279 kubelet[2149]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
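The warnings here about KUBELET_EXTRA_ARGS and KUBELET_KUBEADM_ARGS being referenced but unset, together with the deprecation notices steering flags toward the file given by --config, come from the systemd drop-in that launches the kubelet. The kubeadm-distributed drop-in typically looks like the sketch below; unit and binary paths vary by distribution (on Flatcar the kubelet binary is commonly installed outside /usr/bin), so treat this as illustrative only. Note that for the final start ([2149]) only KUBELET_EXTRA_ARGS is reported unset, suggesting kubeadm has by now written kubeadm-flags.env with the deprecated runtime flags seen just above.

    /etc/systemd/system/kubelet.service.d/10-kubeadm.conf  (typical kubeadm drop-in, hypothetical for this host)
    [Service]
    Environment="KUBELET_KUBECONFIG_ARGS=--bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --kubeconfig=/etc/kubernetes/kubelet.conf"
    Environment="KUBELET_CONFIG_ARGS=--config=/var/lib/kubelet/config.yaml"
    # kubeadm writes KUBELET_KUBEADM_ARGS here at init/join time; empty until then, hence the warning above
    EnvironmentFile=-/var/lib/kubelet/kubeadm-flags.env
    # the administrator may set KUBELET_EXTRA_ARGS here; also optional
    EnvironmentFile=-/etc/default/kubelet
    ExecStart=
    ExecStart=/usr/bin/kubelet $KUBELET_KUBECONFIG_ARGS $KUBELET_CONFIG_ARGS $KUBELET_KUBEADM_ARGS $KUBELET_EXTRA_ARGS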
Mar 17 17:35:31.396597 kubelet[2149]: I0317 17:35:31.396385 2149 server.go:206] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Mar 17 17:35:31.912930 kubelet[2149]: I0317 17:35:31.912880 2149 server.go:486] "Kubelet version" kubeletVersion="v1.31.0" Mar 17 17:35:31.912930 kubelet[2149]: I0317 17:35:31.912914 2149 server.go:488] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Mar 17 17:35:31.913184 kubelet[2149]: I0317 17:35:31.913151 2149 server.go:929] "Client rotation is on, will bootstrap in background" Mar 17 17:35:31.948979 kubelet[2149]: E0317 17:35:31.948919 2149 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://10.0.0.106:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.106:6443: connect: connection refused" logger="UnhandledError" Mar 17 17:35:31.949623 kubelet[2149]: I0317 17:35:31.949607 2149 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Mar 17 17:35:31.957062 kubelet[2149]: E0317 17:35:31.957016 2149 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Mar 17 17:35:31.957062 kubelet[2149]: I0317 17:35:31.957049 2149 server.go:1403] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Mar 17 17:35:31.960337 kubelet[2149]: I0317 17:35:31.960303 2149 server.go:744] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Mar 17 17:35:31.961187 kubelet[2149]: I0317 17:35:31.961154 2149 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority" Mar 17 17:35:31.961342 kubelet[2149]: I0317 17:35:31.961310 2149 container_manager_linux.go:264] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Mar 17 17:35:31.961510 kubelet[2149]: I0317 17:35:31.961333 2149 container_manager_linux.go:269] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Mar 17 17:35:31.961648 kubelet[2149]: I0317 17:35:31.961628 2149 topology_manager.go:138] "Creating topology manager with none policy" Mar 17 17:35:31.961648 kubelet[2149]: I0317 17:35:31.961639 2149 container_manager_linux.go:300] "Creating device plugin manager" Mar 17 17:35:31.961832 kubelet[2149]: I0317 17:35:31.961811 2149 state_mem.go:36] "Initialized new in-memory state store" Mar 17 17:35:31.963715 kubelet[2149]: I0317 17:35:31.963683 2149 kubelet.go:408] "Attempting to sync node with API server" Mar 17 17:35:31.963763 kubelet[2149]: I0317 17:35:31.963720 2149 kubelet.go:303] "Adding static pod path" path="/etc/kubernetes/manifests" Mar 17 17:35:31.963763 kubelet[2149]: I0317 17:35:31.963754 2149 kubelet.go:314] "Adding apiserver pod source" Mar 17 17:35:31.963804 kubelet[2149]: I0317 17:35:31.963766 2149 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Mar 17 17:35:31.965048 kubelet[2149]: W0317 17:35:31.964946 2149 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.106:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.106:6443: connect: connection refused Mar 17 17:35:31.965048 kubelet[2149]: W0317 17:35:31.964968 2149 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.106:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.0.0.106:6443: connect: 
connection refused Mar 17 17:35:31.965048 kubelet[2149]: E0317 17:35:31.965005 2149 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.0.0.106:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.106:6443: connect: connection refused" logger="UnhandledError" Mar 17 17:35:31.965048 kubelet[2149]: E0317 17:35:31.965011 2149 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.0.0.106:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.106:6443: connect: connection refused" logger="UnhandledError" Mar 17 17:35:31.967915 kubelet[2149]: I0317 17:35:31.967679 2149 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="containerd" version="v1.7.23" apiVersion="v1" Mar 17 17:35:31.969605 kubelet[2149]: I0317 17:35:31.969587 2149 kubelet.go:837] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Mar 17 17:35:31.970386 kubelet[2149]: W0317 17:35:31.970363 2149 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Mar 17 17:35:31.971515 kubelet[2149]: I0317 17:35:31.971495 2149 server.go:1269] "Started kubelet" Mar 17 17:35:31.974246 kubelet[2149]: I0317 17:35:31.973166 2149 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Mar 17 17:35:31.974246 kubelet[2149]: I0317 17:35:31.974063 2149 server.go:460] "Adding debug handlers to kubelet server" Mar 17 17:35:31.974361 kubelet[2149]: I0317 17:35:31.972710 2149 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Mar 17 17:35:31.977975 kubelet[2149]: I0317 17:35:31.972803 2149 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Mar 17 17:35:31.978462 kubelet[2149]: I0317 17:35:31.978401 2149 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Mar 17 17:35:31.978650 kubelet[2149]: I0317 17:35:31.978623 2149 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Mar 17 17:35:31.978828 kubelet[2149]: I0317 17:35:31.978811 2149 volume_manager.go:289] "Starting Kubelet Volume Manager" Mar 17 17:35:31.978939 kubelet[2149]: I0317 17:35:31.978922 2149 desired_state_of_world_populator.go:146] "Desired state populator starts to run" Mar 17 17:35:31.978988 kubelet[2149]: I0317 17:35:31.978971 2149 reconciler.go:26] "Reconciler: start to sync state" Mar 17 17:35:31.979316 kubelet[2149]: W0317 17:35:31.979265 2149 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.106:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.106:6443: connect: connection refused Mar 17 17:35:31.979372 kubelet[2149]: E0317 17:35:31.979319 2149 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.0.0.106:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.106:6443: connect: connection refused" logger="UnhandledError" Mar 17 17:35:31.979518 kubelet[2149]: E0317 17:35:31.979478 2149 
kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Mar 17 17:35:31.979565 kubelet[2149]: E0317 17:35:31.979556 2149 kubelet.go:1478] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Mar 17 17:35:31.979714 kubelet[2149]: I0317 17:35:31.979681 2149 factory.go:221] Registration of the systemd container factory successfully Mar 17 17:35:31.979762 kubelet[2149]: I0317 17:35:31.979753 2149 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Mar 17 17:35:31.979925 kubelet[2149]: E0317 17:35:31.979665 2149 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.106:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.106:6443: connect: connection refused" interval="200ms" Mar 17 17:35:31.980028 kubelet[2149]: E0317 17:35:31.978861 2149 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.106:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.106:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.182da79ef225a2d8 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-03-17 17:35:31.971474136 +0000 UTC m=+0.608331437,LastTimestamp:2025-03-17 17:35:31.971474136 +0000 UTC m=+0.608331437,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Mar 17 17:35:31.982566 kubelet[2149]: I0317 17:35:31.982535 2149 factory.go:221] Registration of the containerd container factory successfully Mar 17 17:35:31.990631 kubelet[2149]: I0317 17:35:31.989154 2149 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Mar 17 17:35:31.990631 kubelet[2149]: I0317 17:35:31.990150 2149 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Mar 17 17:35:31.990631 kubelet[2149]: I0317 17:35:31.990200 2149 status_manager.go:217] "Starting to sync pod status with apiserver" Mar 17 17:35:31.990631 kubelet[2149]: I0317 17:35:31.990215 2149 kubelet.go:2321] "Starting kubelet main sync loop" Mar 17 17:35:31.990631 kubelet[2149]: E0317 17:35:31.990264 2149 kubelet.go:2345] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Mar 17 17:35:31.991094 kubelet[2149]: W0317 17:35:31.990986 2149 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.106:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.106:6443: connect: connection refused Mar 17 17:35:31.991094 kubelet[2149]: E0317 17:35:31.991029 2149 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.0.0.106:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.106:6443: connect: connection refused" logger="UnhandledError" Mar 17 17:35:31.996259 kubelet[2149]: I0317 17:35:31.996234 2149 cpu_manager.go:214] "Starting CPU manager" policy="none" Mar 17 17:35:31.996259 kubelet[2149]: I0317 17:35:31.996252 2149 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Mar 17 17:35:31.996356 kubelet[2149]: I0317 17:35:31.996269 2149 state_mem.go:36] "Initialized new in-memory state store" Mar 17 17:35:32.056820 kubelet[2149]: I0317 17:35:32.056776 2149 policy_none.go:49] "None policy: Start" Mar 17 17:35:32.057598 kubelet[2149]: I0317 17:35:32.057572 2149 memory_manager.go:170] "Starting memorymanager" policy="None" Mar 17 17:35:32.057710 kubelet[2149]: I0317 17:35:32.057606 2149 state_mem.go:35] "Initializing new in-memory state store" Mar 17 17:35:32.063134 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Mar 17 17:35:32.076562 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Mar 17 17:35:32.079167 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. Mar 17 17:35:32.079626 kubelet[2149]: E0317 17:35:32.079595 2149 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Mar 17 17:35:32.086982 kubelet[2149]: I0317 17:35:32.086955 2149 manager.go:510] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Mar 17 17:35:32.087161 kubelet[2149]: I0317 17:35:32.087138 2149 eviction_manager.go:189] "Eviction manager: starting control loop" Mar 17 17:35:32.087209 kubelet[2149]: I0317 17:35:32.087154 2149 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Mar 17 17:35:32.087466 kubelet[2149]: I0317 17:35:32.087450 2149 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Mar 17 17:35:32.088979 kubelet[2149]: E0317 17:35:32.088948 2149 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Mar 17 17:35:32.098276 systemd[1]: Created slice kubepods-burstable-pod9c941da516f522e3e32dcbfbb687a302.slice - libcontainer container kubepods-burstable-pod9c941da516f522e3e32dcbfbb687a302.slice. 
Mar 17 17:35:32.112589 systemd[1]: Created slice kubepods-burstable-pod60762308083b5ef6c837b1be48ec53d6.slice - libcontainer container kubepods-burstable-pod60762308083b5ef6c837b1be48ec53d6.slice. Mar 17 17:35:32.135127 systemd[1]: Created slice kubepods-burstable-pod6f32907a07e55aea05abdc5cd284a8d5.slice - libcontainer container kubepods-burstable-pod6f32907a07e55aea05abdc5cd284a8d5.slice. Mar 17 17:35:32.179346 kubelet[2149]: I0317 17:35:32.179230 2149 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/9c941da516f522e3e32dcbfbb687a302-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"9c941da516f522e3e32dcbfbb687a302\") " pod="kube-system/kube-apiserver-localhost" Mar 17 17:35:32.180885 kubelet[2149]: E0317 17:35:32.180849 2149 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.106:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.106:6443: connect: connection refused" interval="400ms" Mar 17 17:35:32.188949 kubelet[2149]: I0317 17:35:32.188926 2149 kubelet_node_status.go:72] "Attempting to register node" node="localhost" Mar 17 17:35:32.189410 kubelet[2149]: E0317 17:35:32.189386 2149 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.0.0.106:6443/api/v1/nodes\": dial tcp 10.0.0.106:6443: connect: connection refused" node="localhost" Mar 17 17:35:32.279653 kubelet[2149]: I0317 17:35:32.279583 2149 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/60762308083b5ef6c837b1be48ec53d6-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"60762308083b5ef6c837b1be48ec53d6\") " pod="kube-system/kube-controller-manager-localhost" Mar 17 17:35:32.279653 kubelet[2149]: I0317 17:35:32.279628 2149 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/60762308083b5ef6c837b1be48ec53d6-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"60762308083b5ef6c837b1be48ec53d6\") " pod="kube-system/kube-controller-manager-localhost" Mar 17 17:35:32.279653 kubelet[2149]: I0317 17:35:32.279650 2149 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/6f32907a07e55aea05abdc5cd284a8d5-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"6f32907a07e55aea05abdc5cd284a8d5\") " pod="kube-system/kube-scheduler-localhost" Mar 17 17:35:32.279914 kubelet[2149]: I0317 17:35:32.279667 2149 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/9c941da516f522e3e32dcbfbb687a302-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"9c941da516f522e3e32dcbfbb687a302\") " pod="kube-system/kube-apiserver-localhost" Mar 17 17:35:32.279914 kubelet[2149]: I0317 17:35:32.279683 2149 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/9c941da516f522e3e32dcbfbb687a302-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"9c941da516f522e3e32dcbfbb687a302\") " pod="kube-system/kube-apiserver-localhost" Mar 17 17:35:32.279914 kubelet[2149]: I0317 
17:35:32.279698 2149 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/60762308083b5ef6c837b1be48ec53d6-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"60762308083b5ef6c837b1be48ec53d6\") " pod="kube-system/kube-controller-manager-localhost" Mar 17 17:35:32.279914 kubelet[2149]: I0317 17:35:32.279714 2149 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/60762308083b5ef6c837b1be48ec53d6-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"60762308083b5ef6c837b1be48ec53d6\") " pod="kube-system/kube-controller-manager-localhost" Mar 17 17:35:32.279914 kubelet[2149]: I0317 17:35:32.279734 2149 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/60762308083b5ef6c837b1be48ec53d6-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"60762308083b5ef6c837b1be48ec53d6\") " pod="kube-system/kube-controller-manager-localhost" Mar 17 17:35:32.390762 kubelet[2149]: I0317 17:35:32.390722 2149 kubelet_node_status.go:72] "Attempting to register node" node="localhost" Mar 17 17:35:32.391058 kubelet[2149]: E0317 17:35:32.391023 2149 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.0.0.106:6443/api/v1/nodes\": dial tcp 10.0.0.106:6443: connect: connection refused" node="localhost" Mar 17 17:35:32.411170 kubelet[2149]: E0317 17:35:32.411141 2149 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 17 17:35:32.411997 containerd[1473]: time="2025-03-17T17:35:32.411952632Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:9c941da516f522e3e32dcbfbb687a302,Namespace:kube-system,Attempt:0,}" Mar 17 17:35:32.433194 kubelet[2149]: E0317 17:35:32.433068 2149 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 17 17:35:32.433679 containerd[1473]: time="2025-03-17T17:35:32.433639366Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:60762308083b5ef6c837b1be48ec53d6,Namespace:kube-system,Attempt:0,}" Mar 17 17:35:32.438148 kubelet[2149]: E0317 17:35:32.438081 2149 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 17 17:35:32.438524 containerd[1473]: time="2025-03-17T17:35:32.438496737Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:6f32907a07e55aea05abdc5cd284a8d5,Namespace:kube-system,Attempt:0,}" Mar 17 17:35:32.582241 kubelet[2149]: E0317 17:35:32.582155 2149 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.106:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.106:6443: connect: connection refused" interval="800ms" Mar 17 17:35:32.792348 kubelet[2149]: I0317 17:35:32.792230 2149 kubelet_node_status.go:72] "Attempting to register node" node="localhost" Mar 17 17:35:32.792703 kubelet[2149]: E0317 17:35:32.792663 2149 kubelet_node_status.go:95] "Unable to register node 
with API server" err="Post \"https://10.0.0.106:6443/api/v1/nodes\": dial tcp 10.0.0.106:6443: connect: connection refused" node="localhost" Mar 17 17:35:32.849858 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3708666526.mount: Deactivated successfully. Mar 17 17:35:32.855152 containerd[1473]: time="2025-03-17T17:35:32.854664994Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Mar 17 17:35:32.857081 containerd[1473]: time="2025-03-17T17:35:32.856997473Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=269175" Mar 17 17:35:32.857829 containerd[1473]: time="2025-03-17T17:35:32.857797047Z" level=info msg="ImageCreate event name:\"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Mar 17 17:35:32.861191 containerd[1473]: time="2025-03-17T17:35:32.860950115Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Mar 17 17:35:32.861922 containerd[1473]: time="2025-03-17T17:35:32.861883011Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Mar 17 17:35:32.863276 containerd[1473]: time="2025-03-17T17:35:32.863243042Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Mar 17 17:35:32.863947 containerd[1473]: time="2025-03-17T17:35:32.863846408Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Mar 17 17:35:32.866580 containerd[1473]: time="2025-03-17T17:35:32.866550088Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Mar 17 17:35:32.867601 containerd[1473]: time="2025-03-17T17:35:32.867253975Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 455.21812ms" Mar 17 17:35:32.870821 containerd[1473]: time="2025-03-17T17:35:32.870787433Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 432.226092ms" Mar 17 17:35:32.872158 containerd[1473]: time="2025-03-17T17:35:32.872119736Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 438.398587ms" Mar 17 17:35:32.965125 kubelet[2149]: W0317 17:35:32.965051 2149 
reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.106:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.106:6443: connect: connection refused Mar 17 17:35:32.965297 kubelet[2149]: E0317 17:35:32.965124 2149 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.0.0.106:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.106:6443: connect: connection refused" logger="UnhandledError" Mar 17 17:35:33.009926 containerd[1473]: time="2025-03-17T17:35:33.009805141Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 17 17:35:33.009926 containerd[1473]: time="2025-03-17T17:35:33.009892251Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 17 17:35:33.009926 containerd[1473]: time="2025-03-17T17:35:33.009908314Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 17 17:35:33.010292 containerd[1473]: time="2025-03-17T17:35:33.009964735Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 17 17:35:33.010292 containerd[1473]: time="2025-03-17T17:35:33.009991588Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 17 17:35:33.010292 containerd[1473]: time="2025-03-17T17:35:33.010008410Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 17 17:35:33.010292 containerd[1473]: time="2025-03-17T17:35:33.009828956Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 17 17:35:33.010292 containerd[1473]: time="2025-03-17T17:35:33.010056880Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 17 17:35:33.010292 containerd[1473]: time="2025-03-17T17:35:33.010077379Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 17 17:35:33.010292 containerd[1473]: time="2025-03-17T17:35:33.010236814Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 17 17:35:33.011245 containerd[1473]: time="2025-03-17T17:35:33.010781170Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 17 17:35:33.011245 containerd[1473]: time="2025-03-17T17:35:33.010876511Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 17 17:35:33.035388 systemd[1]: Started cri-containerd-db4d951cf324b7160cfeee001b11260a2785e6f5285f919b69b96cddfc7f3472.scope - libcontainer container db4d951cf324b7160cfeee001b11260a2785e6f5285f919b69b96cddfc7f3472. 
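Annotation: the repeated "dial tcp 10.0.0.106:6443: connect: connection refused" failures above (node registration, lease controller, client-go reflectors) simply persist until the static kube-apiserver container started below begins serving. The following is a minimal Go sketch of polling that endpoint's /healthz while watching this phase of boot; it is not kubelet code, and the probe interval, timeout, and the InsecureSkipVerify shortcut are illustrative assumptions.

// healthwait.go - poll the apiserver /healthz until it answers, mirroring the
// retry behaviour visible in the kubelet entries above. Interval, timeout and
// the InsecureSkipVerify shortcut are assumptions for illustration.
package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

func main() {
	client := &http.Client{
		Timeout: 2 * time.Second,
		// The bootstrap apiserver serves a cert this probe does not trust yet;
		// skip verification only for this local health check.
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	url := "https://10.0.0.106:6443/healthz" // address taken from the log above

	for {
		resp, err := client.Get(url)
		if err != nil {
			// Matches the "connect: connection refused" lines until the
			// static kube-apiserver pod is running.
			fmt.Println("apiserver not ready:", err)
			time.Sleep(time.Second)
			continue
		}
		resp.Body.Close()
		fmt.Println("apiserver answered:", resp.Status)
		return
	}
}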
Mar 17 17:35:33.039183 systemd[1]: Started cri-containerd-9293a3341c8edc0d6ec441d4b0577f36d3cf987af60d387c24dbc8ada045a736.scope - libcontainer container 9293a3341c8edc0d6ec441d4b0577f36d3cf987af60d387c24dbc8ada045a736. Mar 17 17:35:33.040987 systemd[1]: Started cri-containerd-ce1c2b0ce477d36502cfa23087870fa9c195fc16df17f8d0b0478407283cc63e.scope - libcontainer container ce1c2b0ce477d36502cfa23087870fa9c195fc16df17f8d0b0478407283cc63e. Mar 17 17:35:33.072089 containerd[1473]: time="2025-03-17T17:35:33.072054886Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:60762308083b5ef6c837b1be48ec53d6,Namespace:kube-system,Attempt:0,} returns sandbox id \"9293a3341c8edc0d6ec441d4b0577f36d3cf987af60d387c24dbc8ada045a736\"" Mar 17 17:35:33.073652 kubelet[2149]: E0317 17:35:33.073444 2149 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 17 17:35:33.076728 kubelet[2149]: W0317 17:35:33.074332 2149 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.106:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.106:6443: connect: connection refused Mar 17 17:35:33.076728 kubelet[2149]: E0317 17:35:33.074469 2149 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.0.0.106:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.106:6443: connect: connection refused" logger="UnhandledError" Mar 17 17:35:33.078125 containerd[1473]: time="2025-03-17T17:35:33.078060787Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:9c941da516f522e3e32dcbfbb687a302,Namespace:kube-system,Attempt:0,} returns sandbox id \"db4d951cf324b7160cfeee001b11260a2785e6f5285f919b69b96cddfc7f3472\"" Mar 17 17:35:33.078311 containerd[1473]: time="2025-03-17T17:35:33.078246115Z" level=info msg="CreateContainer within sandbox \"9293a3341c8edc0d6ec441d4b0577f36d3cf987af60d387c24dbc8ada045a736\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Mar 17 17:35:33.080369 containerd[1473]: time="2025-03-17T17:35:33.080341466Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:6f32907a07e55aea05abdc5cd284a8d5,Namespace:kube-system,Attempt:0,} returns sandbox id \"ce1c2b0ce477d36502cfa23087870fa9c195fc16df17f8d0b0478407283cc63e\"" Mar 17 17:35:33.080689 kubelet[2149]: E0317 17:35:33.080668 2149 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 17 17:35:33.081005 kubelet[2149]: E0317 17:35:33.080980 2149 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 17 17:35:33.083009 containerd[1473]: time="2025-03-17T17:35:33.082979934Z" level=info msg="CreateContainer within sandbox \"ce1c2b0ce477d36502cfa23087870fa9c195fc16df17f8d0b0478407283cc63e\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Mar 17 17:35:33.084009 containerd[1473]: time="2025-03-17T17:35:33.083986451Z" level=info msg="CreateContainer within sandbox \"db4d951cf324b7160cfeee001b11260a2785e6f5285f919b69b96cddfc7f3472\" for 
container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Mar 17 17:35:33.101740 containerd[1473]: time="2025-03-17T17:35:33.101689441Z" level=info msg="CreateContainer within sandbox \"9293a3341c8edc0d6ec441d4b0577f36d3cf987af60d387c24dbc8ada045a736\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"f976c7cc465099d5696cab1e7984f92aa5f87fa9f9060aac3b75813b40e727d2\"" Mar 17 17:35:33.102334 containerd[1473]: time="2025-03-17T17:35:33.102304804Z" level=info msg="StartContainer for \"f976c7cc465099d5696cab1e7984f92aa5f87fa9f9060aac3b75813b40e727d2\"" Mar 17 17:35:33.102777 containerd[1473]: time="2025-03-17T17:35:33.102747306Z" level=info msg="CreateContainer within sandbox \"ce1c2b0ce477d36502cfa23087870fa9c195fc16df17f8d0b0478407283cc63e\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"958ade7fd578236734530e54e496c1a1bb1afc9d9afb94ffcc1968f0b6ce3c7a\"" Mar 17 17:35:33.103146 containerd[1473]: time="2025-03-17T17:35:33.103118202Z" level=info msg="StartContainer for \"958ade7fd578236734530e54e496c1a1bb1afc9d9afb94ffcc1968f0b6ce3c7a\"" Mar 17 17:35:33.104384 containerd[1473]: time="2025-03-17T17:35:33.104282077Z" level=info msg="CreateContainer within sandbox \"db4d951cf324b7160cfeee001b11260a2785e6f5285f919b69b96cddfc7f3472\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"8323feb1f8aa0e2fb9aaf90cb5bc635deb37b04416df32bc110b7323a880249f\"" Mar 17 17:35:33.104655 containerd[1473]: time="2025-03-17T17:35:33.104628958Z" level=info msg="StartContainer for \"8323feb1f8aa0e2fb9aaf90cb5bc635deb37b04416df32bc110b7323a880249f\"" Mar 17 17:35:33.136367 systemd[1]: Started cri-containerd-8323feb1f8aa0e2fb9aaf90cb5bc635deb37b04416df32bc110b7323a880249f.scope - libcontainer container 8323feb1f8aa0e2fb9aaf90cb5bc635deb37b04416df32bc110b7323a880249f. Mar 17 17:35:33.137741 systemd[1]: Started cri-containerd-958ade7fd578236734530e54e496c1a1bb1afc9d9afb94ffcc1968f0b6ce3c7a.scope - libcontainer container 958ade7fd578236734530e54e496c1a1bb1afc9d9afb94ffcc1968f0b6ce3c7a. Mar 17 17:35:33.138919 systemd[1]: Started cri-containerd-f976c7cc465099d5696cab1e7984f92aa5f87fa9f9060aac3b75813b40e727d2.scope - libcontainer container f976c7cc465099d5696cab1e7984f92aa5f87fa9f9060aac3b75813b40e727d2. 
Mar 17 17:35:33.215444 containerd[1473]: time="2025-03-17T17:35:33.212451915Z" level=info msg="StartContainer for \"958ade7fd578236734530e54e496c1a1bb1afc9d9afb94ffcc1968f0b6ce3c7a\" returns successfully" Mar 17 17:35:33.215444 containerd[1473]: time="2025-03-17T17:35:33.212619302Z" level=info msg="StartContainer for \"f976c7cc465099d5696cab1e7984f92aa5f87fa9f9060aac3b75813b40e727d2\" returns successfully" Mar 17 17:35:33.215444 containerd[1473]: time="2025-03-17T17:35:33.212644755Z" level=info msg="StartContainer for \"8323feb1f8aa0e2fb9aaf90cb5bc635deb37b04416df32bc110b7323a880249f\" returns successfully" Mar 17 17:35:33.304216 kubelet[2149]: W0317 17:35:33.303779 2149 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.106:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.106:6443: connect: connection refused Mar 17 17:35:33.304216 kubelet[2149]: E0317 17:35:33.303851 2149 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.0.0.106:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.106:6443: connect: connection refused" logger="UnhandledError" Mar 17 17:35:33.307555 kubelet[2149]: W0317 17:35:33.307483 2149 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.106:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.0.0.106:6443: connect: connection refused Mar 17 17:35:33.307555 kubelet[2149]: E0317 17:35:33.307533 2149 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.0.0.106:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.106:6443: connect: connection refused" logger="UnhandledError" Mar 17 17:35:33.383540 kubelet[2149]: E0317 17:35:33.383407 2149 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.106:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.106:6443: connect: connection refused" interval="1.6s" Mar 17 17:35:33.596244 kubelet[2149]: I0317 17:35:33.596210 2149 kubelet_node_status.go:72] "Attempting to register node" node="localhost" Mar 17 17:35:34.000961 kubelet[2149]: E0317 17:35:34.000888 2149 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 17 17:35:34.001955 kubelet[2149]: E0317 17:35:34.001934 2149 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 17 17:35:34.003170 kubelet[2149]: E0317 17:35:34.003146 2149 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 17 17:35:35.008186 kubelet[2149]: E0317 17:35:35.006103 2149 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 17 17:35:35.008186 kubelet[2149]: E0317 17:35:35.006471 2149 dns.go:153] "Nameserver limits exceeded" 
err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 17 17:35:35.090810 kubelet[2149]: E0317 17:35:35.090771 2149 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"localhost\" not found" node="localhost" Mar 17 17:35:35.163651 kubelet[2149]: I0317 17:35:35.163607 2149 kubelet_node_status.go:75] "Successfully registered node" node="localhost" Mar 17 17:35:35.163651 kubelet[2149]: E0317 17:35:35.163644 2149 kubelet_node_status.go:535] "Error updating node status, will retry" err="error getting node \"localhost\": node \"localhost\" not found" Mar 17 17:35:35.173227 kubelet[2149]: E0317 17:35:35.173195 2149 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Mar 17 17:35:35.273933 kubelet[2149]: E0317 17:35:35.273408 2149 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Mar 17 17:35:35.373536 kubelet[2149]: E0317 17:35:35.373501 2149 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Mar 17 17:35:35.474479 kubelet[2149]: E0317 17:35:35.474436 2149 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Mar 17 17:35:35.574940 kubelet[2149]: E0317 17:35:35.574897 2149 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Mar 17 17:35:35.675367 kubelet[2149]: E0317 17:35:35.675305 2149 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Mar 17 17:35:35.967717 kubelet[2149]: I0317 17:35:35.967581 2149 apiserver.go:52] "Watching apiserver" Mar 17 17:35:35.979605 kubelet[2149]: I0317 17:35:35.979541 2149 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world" Mar 17 17:35:36.817558 systemd[1]: Reloading requested from client PID 2431 ('systemctl') (unit session-5.scope)... Mar 17 17:35:36.817574 systemd[1]: Reloading... Mar 17 17:35:36.873224 zram_generator::config[2476]: No configuration found. Mar 17 17:35:37.012700 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Mar 17 17:35:37.074975 systemd[1]: Reloading finished in 257 ms. Mar 17 17:35:37.111989 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Mar 17 17:35:37.126547 systemd[1]: kubelet.service: Deactivated successfully. Mar 17 17:35:37.126732 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Mar 17 17:35:37.138429 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Mar 17 17:35:37.229184 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Mar 17 17:35:37.235136 (kubelet)[2512]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Mar 17 17:35:37.272931 kubelet[2512]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Mar 17 17:35:37.272931 kubelet[2512]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. 
Image garbage collector will get sandbox image information from CRI. Mar 17 17:35:37.272931 kubelet[2512]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Mar 17 17:35:37.273369 kubelet[2512]: I0317 17:35:37.272979 2512 server.go:206] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Mar 17 17:35:37.281185 kubelet[2512]: I0317 17:35:37.281139 2512 server.go:486] "Kubelet version" kubeletVersion="v1.31.0" Mar 17 17:35:37.281185 kubelet[2512]: I0317 17:35:37.281190 2512 server.go:488] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Mar 17 17:35:37.281425 kubelet[2512]: I0317 17:35:37.281399 2512 server.go:929] "Client rotation is on, will bootstrap in background" Mar 17 17:35:37.282723 kubelet[2512]: I0317 17:35:37.282698 2512 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Mar 17 17:35:37.285682 kubelet[2512]: I0317 17:35:37.285461 2512 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Mar 17 17:35:37.288757 kubelet[2512]: E0317 17:35:37.288731 2512 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Mar 17 17:35:37.288935 kubelet[2512]: I0317 17:35:37.288922 2512 server.go:1403] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Mar 17 17:35:37.291325 kubelet[2512]: I0317 17:35:37.291307 2512 server.go:744] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Mar 17 17:35:37.291536 kubelet[2512]: I0317 17:35:37.291522 2512 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority" Mar 17 17:35:37.291739 kubelet[2512]: I0317 17:35:37.291713 2512 container_manager_linux.go:264] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Mar 17 17:35:37.292094 kubelet[2512]: I0317 17:35:37.291792 2512 container_manager_linux.go:269] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Mar 17 17:35:37.292094 kubelet[2512]: I0317 17:35:37.291962 2512 topology_manager.go:138] "Creating topology manager with none policy" Mar 17 17:35:37.292094 kubelet[2512]: I0317 17:35:37.291973 2512 container_manager_linux.go:300] "Creating device plugin manager" Mar 17 17:35:37.292094 kubelet[2512]: I0317 17:35:37.292003 2512 state_mem.go:36] "Initialized new in-memory state store" Mar 17 17:35:37.292351 kubelet[2512]: I0317 17:35:37.292337 2512 kubelet.go:408] "Attempting to sync node with API server" Mar 17 17:35:37.292892 kubelet[2512]: I0317 17:35:37.292874 2512 kubelet.go:303] "Adding static pod path" path="/etc/kubernetes/manifests" Mar 17 17:35:37.293028 kubelet[2512]: I0317 17:35:37.293017 2512 kubelet.go:314] "Adding apiserver pod source" Mar 17 17:35:37.293353 kubelet[2512]: I0317 17:35:37.293078 2512 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Mar 17 17:35:37.293791 kubelet[2512]: I0317 17:35:37.293773 2512 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="containerd" version="v1.7.23" apiVersion="v1" Mar 17 17:35:37.296755 kubelet[2512]: I0317 17:35:37.296729 2512 kubelet.go:837] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Mar 17 17:35:37.298193 kubelet[2512]: I0317 17:35:37.297378 2512 server.go:1269] "Started kubelet" Mar 17 17:35:37.299102 kubelet[2512]: I0317 17:35:37.297694 2512 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Mar 17 17:35:37.301874 kubelet[2512]: I0317 
17:35:37.301849 2512 server.go:460] "Adding debug handlers to kubelet server" Mar 17 17:35:37.303776 kubelet[2512]: I0317 17:35:37.299000 2512 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Mar 17 17:35:37.303776 kubelet[2512]: I0317 17:35:37.298859 2512 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Mar 17 17:35:37.308757 kubelet[2512]: I0317 17:35:37.298310 2512 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Mar 17 17:35:37.314235 kubelet[2512]: I0317 17:35:37.309359 2512 volume_manager.go:289] "Starting Kubelet Volume Manager" Mar 17 17:35:37.314235 kubelet[2512]: E0317 17:35:37.309531 2512 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Mar 17 17:35:37.314235 kubelet[2512]: I0317 17:35:37.309861 2512 desired_state_of_world_populator.go:146] "Desired state populator starts to run" Mar 17 17:35:37.314235 kubelet[2512]: I0317 17:35:37.309985 2512 reconciler.go:26] "Reconciler: start to sync state" Mar 17 17:35:37.314235 kubelet[2512]: I0317 17:35:37.310407 2512 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Mar 17 17:35:37.320472 kubelet[2512]: I0317 17:35:37.320436 2512 factory.go:221] Registration of the systemd container factory successfully Mar 17 17:35:37.320559 kubelet[2512]: I0317 17:35:37.320537 2512 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Mar 17 17:35:37.321610 kubelet[2512]: E0317 17:35:37.321581 2512 kubelet.go:1478] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Mar 17 17:35:37.321874 kubelet[2512]: I0317 17:35:37.321840 2512 factory.go:221] Registration of the containerd container factory successfully Mar 17 17:35:37.322111 kubelet[2512]: I0317 17:35:37.321978 2512 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Mar 17 17:35:37.323113 kubelet[2512]: I0317 17:35:37.323093 2512 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Mar 17 17:35:37.323485 kubelet[2512]: I0317 17:35:37.323217 2512 status_manager.go:217] "Starting to sync pod status with apiserver" Mar 17 17:35:37.323485 kubelet[2512]: I0317 17:35:37.323251 2512 kubelet.go:2321] "Starting kubelet main sync loop" Mar 17 17:35:37.323485 kubelet[2512]: E0317 17:35:37.323294 2512 kubelet.go:2345] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Mar 17 17:35:37.356255 kubelet[2512]: I0317 17:35:37.356162 2512 cpu_manager.go:214] "Starting CPU manager" policy="none" Mar 17 17:35:37.356255 kubelet[2512]: I0317 17:35:37.356214 2512 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Mar 17 17:35:37.356255 kubelet[2512]: I0317 17:35:37.356235 2512 state_mem.go:36] "Initialized new in-memory state store" Mar 17 17:35:37.356402 kubelet[2512]: I0317 17:35:37.356391 2512 state_mem.go:88] "Updated default CPUSet" cpuSet="" Mar 17 17:35:37.356426 kubelet[2512]: I0317 17:35:37.356402 2512 state_mem.go:96] "Updated CPUSet assignments" assignments={} Mar 17 17:35:37.356426 kubelet[2512]: I0317 17:35:37.356420 2512 policy_none.go:49] "None policy: Start" Mar 17 17:35:37.359237 kubelet[2512]: I0317 17:35:37.359217 2512 memory_manager.go:170] "Starting memorymanager" policy="None" Mar 17 17:35:37.359237 kubelet[2512]: I0317 17:35:37.359240 2512 state_mem.go:35] "Initializing new in-memory state store" Mar 17 17:35:37.359438 kubelet[2512]: I0317 17:35:37.359421 2512 state_mem.go:75] "Updated machine memory state" Mar 17 17:35:37.363750 kubelet[2512]: I0317 17:35:37.363329 2512 manager.go:510] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Mar 17 17:35:37.363750 kubelet[2512]: I0317 17:35:37.363488 2512 eviction_manager.go:189] "Eviction manager: starting control loop" Mar 17 17:35:37.363750 kubelet[2512]: I0317 17:35:37.363498 2512 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Mar 17 17:35:37.363750 kubelet[2512]: I0317 17:35:37.363697 2512 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Mar 17 17:35:37.468593 kubelet[2512]: I0317 17:35:37.467157 2512 kubelet_node_status.go:72] "Attempting to register node" node="localhost" Mar 17 17:35:37.473980 kubelet[2512]: I0317 17:35:37.473953 2512 kubelet_node_status.go:111] "Node was previously registered" node="localhost" Mar 17 17:35:37.474551 kubelet[2512]: I0317 17:35:37.474155 2512 kubelet_node_status.go:75] "Successfully registered node" node="localhost" Mar 17 17:35:37.511393 kubelet[2512]: I0317 17:35:37.511348 2512 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/60762308083b5ef6c837b1be48ec53d6-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"60762308083b5ef6c837b1be48ec53d6\") " pod="kube-system/kube-controller-manager-localhost" Mar 17 17:35:37.511508 kubelet[2512]: I0317 17:35:37.511411 2512 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/60762308083b5ef6c837b1be48ec53d6-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"60762308083b5ef6c837b1be48ec53d6\") " pod="kube-system/kube-controller-manager-localhost" Mar 17 17:35:37.511508 kubelet[2512]: I0317 17:35:37.511482 2512 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/60762308083b5ef6c837b1be48ec53d6-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"60762308083b5ef6c837b1be48ec53d6\") " pod="kube-system/kube-controller-manager-localhost" Mar 17 17:35:37.511554 kubelet[2512]: I0317 17:35:37.511506 2512 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/9c941da516f522e3e32dcbfbb687a302-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"9c941da516f522e3e32dcbfbb687a302\") " pod="kube-system/kube-apiserver-localhost" Mar 17 17:35:37.511554 kubelet[2512]: I0317 17:35:37.511528 2512 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/9c941da516f522e3e32dcbfbb687a302-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"9c941da516f522e3e32dcbfbb687a302\") " pod="kube-system/kube-apiserver-localhost" Mar 17 17:35:37.511554 kubelet[2512]: I0317 17:35:37.511544 2512 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/9c941da516f522e3e32dcbfbb687a302-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"9c941da516f522e3e32dcbfbb687a302\") " pod="kube-system/kube-apiserver-localhost" Mar 17 17:35:37.511622 kubelet[2512]: I0317 17:35:37.511570 2512 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/60762308083b5ef6c837b1be48ec53d6-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"60762308083b5ef6c837b1be48ec53d6\") " pod="kube-system/kube-controller-manager-localhost" Mar 17 17:35:37.511622 kubelet[2512]: I0317 17:35:37.511587 2512 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/60762308083b5ef6c837b1be48ec53d6-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"60762308083b5ef6c837b1be48ec53d6\") " pod="kube-system/kube-controller-manager-localhost" Mar 17 17:35:37.511622 kubelet[2512]: I0317 17:35:37.511603 2512 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/6f32907a07e55aea05abdc5cd284a8d5-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"6f32907a07e55aea05abdc5cd284a8d5\") " pod="kube-system/kube-scheduler-localhost" Mar 17 17:35:37.732811 kubelet[2512]: E0317 17:35:37.732621 2512 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 17 17:35:37.732811 kubelet[2512]: E0317 17:35:37.732634 2512 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 17 17:35:37.732811 kubelet[2512]: E0317 17:35:37.732699 2512 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 17 17:35:38.294183 kubelet[2512]: I0317 17:35:38.294122 2512 apiserver.go:52] "Watching apiserver" Mar 17 17:35:38.310090 kubelet[2512]: I0317 17:35:38.310053 2512 
desired_state_of_world_populator.go:154] "Finished populating initial desired state of world" Mar 17 17:35:38.339714 kubelet[2512]: E0317 17:35:38.339677 2512 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 17 17:35:38.340998 kubelet[2512]: E0317 17:35:38.340455 2512 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 17 17:35:38.345398 kubelet[2512]: E0317 17:35:38.345369 2512 kubelet.go:1915] "Failed creating a mirror pod for" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost" Mar 17 17:35:38.346336 kubelet[2512]: E0317 17:35:38.345824 2512 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 17 17:35:38.366791 kubelet[2512]: I0317 17:35:38.366206 2512 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=1.366191244 podStartE2EDuration="1.366191244s" podCreationTimestamp="2025-03-17 17:35:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-03-17 17:35:38.358904912 +0000 UTC m=+1.120773105" watchObservedRunningTime="2025-03-17 17:35:38.366191244 +0000 UTC m=+1.128059437" Mar 17 17:35:38.375885 kubelet[2512]: I0317 17:35:38.375830 2512 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=1.37580726 podStartE2EDuration="1.37580726s" podCreationTimestamp="2025-03-17 17:35:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-03-17 17:35:38.366364552 +0000 UTC m=+1.128232745" watchObservedRunningTime="2025-03-17 17:35:38.37580726 +0000 UTC m=+1.137675413" Mar 17 17:35:38.571949 sudo[1610]: pam_unix(sudo:session): session closed for user root Mar 17 17:35:38.573574 sshd[1609]: Connection closed by 10.0.0.1 port 48318 Mar 17 17:35:38.574084 sshd-session[1607]: pam_unix(sshd:session): session closed for user core Mar 17 17:35:38.577652 systemd[1]: sshd@4-10.0.0.106:22-10.0.0.1:48318.service: Deactivated successfully. Mar 17 17:35:38.580511 systemd-logind[1453]: Session 5 logged out. Waiting for processes to exit. Mar 17 17:35:38.580796 systemd[1]: session-5.scope: Deactivated successfully. Mar 17 17:35:38.581002 systemd[1]: session-5.scope: Consumed 6.278s CPU time, 157.6M memory peak, 0B memory swap peak. Mar 17 17:35:38.581986 systemd-logind[1453]: Removed session 5. 
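Annotation: the recurring dns.go:153 "Nameserver limits exceeded" warnings above occur because the host resolv.conf lists more nameservers than the classic three-entry resolver limit the kubelet honours; the applied line in the log keeps exactly three (1.1.1.1 1.0.0.1 8.8.8.8). Below is a simplified Go stand-in for that check, not the kubelet's actual resolv.conf handling; the file path and the limit of three are the only inputs, with the limit stated in the log behaviour itself.

// resolvtrim.go - read /etc/resolv.conf and keep at most three nameservers,
// a simplified stand-in for the check behind the "Nameserver limits exceeded"
// warnings in the kubelet log above.
package main

import (
	"bufio"
	"fmt"
	"os"
	"strings"
)

const maxNameservers = 3

func main() {
	f, err := os.Open("/etc/resolv.conf")
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	defer f.Close()

	var nameservers []string
	scanner := bufio.NewScanner(f)
	for scanner.Scan() {
		fields := strings.Fields(scanner.Text())
		if len(fields) >= 2 && fields[0] == "nameserver" {
			nameservers = append(nameservers, fields[1])
		}
	}
	if len(nameservers) > maxNameservers {
		fmt.Printf("nameserver limits exceeded: %d found, keeping first %d\n",
			len(nameservers), maxNameservers)
		nameservers = nameservers[:maxNameservers]
	}
	fmt.Println("applied nameservers:", nameservers)
}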
Mar 17 17:35:39.341810 kubelet[2512]: E0317 17:35:39.341706 2512 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 17 17:35:41.334598 kubelet[2512]: E0317 17:35:41.334568 2512 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 17 17:35:41.903579 kubelet[2512]: E0317 17:35:41.903535 2512 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 17 17:35:42.487638 kubelet[2512]: E0317 17:35:42.487603 2512 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 17 17:35:43.413574 kubelet[2512]: I0317 17:35:43.413545 2512 kuberuntime_manager.go:1633] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Mar 17 17:35:43.414097 containerd[1473]: time="2025-03-17T17:35:43.414061083Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Mar 17 17:35:43.414522 kubelet[2512]: I0317 17:35:43.414501 2512 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Mar 17 17:35:44.272076 kubelet[2512]: I0317 17:35:44.271916 2512 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=7.271899748 podStartE2EDuration="7.271899748s" podCreationTimestamp="2025-03-17 17:35:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-03-17 17:35:38.376047732 +0000 UTC m=+1.137916045" watchObservedRunningTime="2025-03-17 17:35:44.271899748 +0000 UTC m=+7.033767901" Mar 17 17:35:44.282220 systemd[1]: Created slice kubepods-besteffort-podb8b4289e_17fb_4eb1_b778_a504d8ec496b.slice - libcontainer container kubepods-besteffort-podb8b4289e_17fb_4eb1_b778_a504d8ec496b.slice. Mar 17 17:35:44.295103 systemd[1]: Created slice kubepods-burstable-pod5688443c_7076_49d6_9542_08a69dc408c4.slice - libcontainer container kubepods-burstable-pod5688443c_7076_49d6_9542_08a69dc408c4.slice. 
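Annotation: the kuberuntime_manager.go entry above shows the kubelet handing PodCIDR 192.168.0.0/24 to the runtime once the node object exists; flannel later carves its per-node subnet out of that range. A minimal Go sketch validating such a CIDR follows; it is purely illustrative (the kubelet does this through the CRI runtime-config update logged above, not a standalone program), and only the CIDR value is taken from the log.

// podcidr.go - validate the PodCIDR value the kubelet passed to the runtime
// in the entry above. Illustrative only.
package main

import (
	"fmt"
	"net"
)

func main() {
	podCIDR := "192.168.0.0/24" // value from the kubelet_network.go entry above

	ip, ipnet, err := net.ParseCIDR(podCIDR)
	if err != nil {
		fmt.Println("invalid PodCIDR:", err)
		return
	}
	ones, bits := ipnet.Mask.Size()
	fmt.Printf("PodCIDR %s: network %s, base IP %s, /%d of a %d-bit space\n",
		podCIDR, ipnet, ip, ones, bits)
}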
Mar 17 17:35:44.351213 kubelet[2512]: I0317 17:35:44.351141 2512 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-plugin\" (UniqueName: \"kubernetes.io/host-path/5688443c-7076-49d6-9542-08a69dc408c4-cni-plugin\") pod \"kube-flannel-ds-5zkw5\" (UID: \"5688443c-7076-49d6-9542-08a69dc408c4\") " pod="kube-flannel/kube-flannel-ds-5zkw5" Mar 17 17:35:44.351213 kubelet[2512]: I0317 17:35:44.351198 2512 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/b8b4289e-17fb-4eb1-b778-a504d8ec496b-xtables-lock\") pod \"kube-proxy-52lpm\" (UID: \"b8b4289e-17fb-4eb1-b778-a504d8ec496b\") " pod="kube-system/kube-proxy-52lpm" Mar 17 17:35:44.351428 kubelet[2512]: I0317 17:35:44.351284 2512 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run\" (UniqueName: \"kubernetes.io/host-path/5688443c-7076-49d6-9542-08a69dc408c4-run\") pod \"kube-flannel-ds-5zkw5\" (UID: \"5688443c-7076-49d6-9542-08a69dc408c4\") " pod="kube-flannel/kube-flannel-ds-5zkw5" Mar 17 17:35:44.351428 kubelet[2512]: I0317 17:35:44.351321 2512 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni\" (UniqueName: \"kubernetes.io/host-path/5688443c-7076-49d6-9542-08a69dc408c4-cni\") pod \"kube-flannel-ds-5zkw5\" (UID: \"5688443c-7076-49d6-9542-08a69dc408c4\") " pod="kube-flannel/kube-flannel-ds-5zkw5" Mar 17 17:35:44.351428 kubelet[2512]: I0317 17:35:44.351337 2512 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cf86m\" (UniqueName: \"kubernetes.io/projected/5688443c-7076-49d6-9542-08a69dc408c4-kube-api-access-cf86m\") pod \"kube-flannel-ds-5zkw5\" (UID: \"5688443c-7076-49d6-9542-08a69dc408c4\") " pod="kube-flannel/kube-flannel-ds-5zkw5" Mar 17 17:35:44.351428 kubelet[2512]: I0317 17:35:44.351357 2512 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/b8b4289e-17fb-4eb1-b778-a504d8ec496b-kube-proxy\") pod \"kube-proxy-52lpm\" (UID: \"b8b4289e-17fb-4eb1-b778-a504d8ec496b\") " pod="kube-system/kube-proxy-52lpm" Mar 17 17:35:44.351428 kubelet[2512]: I0317 17:35:44.351373 2512 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/b8b4289e-17fb-4eb1-b778-a504d8ec496b-lib-modules\") pod \"kube-proxy-52lpm\" (UID: \"b8b4289e-17fb-4eb1-b778-a504d8ec496b\") " pod="kube-system/kube-proxy-52lpm" Mar 17 17:35:44.351638 kubelet[2512]: I0317 17:35:44.351387 2512 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flannel-cfg\" (UniqueName: \"kubernetes.io/configmap/5688443c-7076-49d6-9542-08a69dc408c4-flannel-cfg\") pod \"kube-flannel-ds-5zkw5\" (UID: \"5688443c-7076-49d6-9542-08a69dc408c4\") " pod="kube-flannel/kube-flannel-ds-5zkw5" Mar 17 17:35:44.351638 kubelet[2512]: I0317 17:35:44.351406 2512 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xfglf\" (UniqueName: \"kubernetes.io/projected/b8b4289e-17fb-4eb1-b778-a504d8ec496b-kube-api-access-xfglf\") pod \"kube-proxy-52lpm\" (UID: \"b8b4289e-17fb-4eb1-b778-a504d8ec496b\") " pod="kube-system/kube-proxy-52lpm" Mar 17 17:35:44.351638 kubelet[2512]: I0317 17:35:44.351422 2512 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/5688443c-7076-49d6-9542-08a69dc408c4-xtables-lock\") pod \"kube-flannel-ds-5zkw5\" (UID: \"5688443c-7076-49d6-9542-08a69dc408c4\") " pod="kube-flannel/kube-flannel-ds-5zkw5" Mar 17 17:35:44.460309 kubelet[2512]: E0317 17:35:44.460227 2512 projected.go:288] Couldn't get configMap kube-system/kube-root-ca.crt: configmap "kube-root-ca.crt" not found Mar 17 17:35:44.460584 kubelet[2512]: E0317 17:35:44.460430 2512 projected.go:194] Error preparing data for projected volume kube-api-access-xfglf for pod kube-system/kube-proxy-52lpm: configmap "kube-root-ca.crt" not found Mar 17 17:35:44.460584 kubelet[2512]: E0317 17:35:44.460505 2512 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/b8b4289e-17fb-4eb1-b778-a504d8ec496b-kube-api-access-xfglf podName:b8b4289e-17fb-4eb1-b778-a504d8ec496b nodeName:}" failed. No retries permitted until 2025-03-17 17:35:44.960471876 +0000 UTC m=+7.722340029 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-xfglf" (UniqueName: "kubernetes.io/projected/b8b4289e-17fb-4eb1-b778-a504d8ec496b-kube-api-access-xfglf") pod "kube-proxy-52lpm" (UID: "b8b4289e-17fb-4eb1-b778-a504d8ec496b") : configmap "kube-root-ca.crt" not found Mar 17 17:35:44.599228 kubelet[2512]: E0317 17:35:44.598881 2512 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 17 17:35:44.599647 containerd[1473]: time="2025-03-17T17:35:44.599598042Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-flannel-ds-5zkw5,Uid:5688443c-7076-49d6-9542-08a69dc408c4,Namespace:kube-flannel,Attempt:0,}" Mar 17 17:35:44.619327 containerd[1473]: time="2025-03-17T17:35:44.619223840Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 17 17:35:44.619623 containerd[1473]: time="2025-03-17T17:35:44.619589508Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 17 17:35:44.619623 containerd[1473]: time="2025-03-17T17:35:44.619606507Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 17 17:35:44.619712 containerd[1473]: time="2025-03-17T17:35:44.619683224Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 17 17:35:44.645324 systemd[1]: Started cri-containerd-28d2dc6a7628bc823504b52b629bc6efca896b77dc0959cadd618b39b4a8147a.scope - libcontainer container 28d2dc6a7628bc823504b52b629bc6efca896b77dc0959cadd618b39b4a8147a. 
Mar 17 17:35:44.669410 containerd[1473]: time="2025-03-17T17:35:44.669374298Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-flannel-ds-5zkw5,Uid:5688443c-7076-49d6-9542-08a69dc408c4,Namespace:kube-flannel,Attempt:0,} returns sandbox id \"28d2dc6a7628bc823504b52b629bc6efca896b77dc0959cadd618b39b4a8147a\"" Mar 17 17:35:44.669974 kubelet[2512]: E0317 17:35:44.669950 2512 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 17 17:35:44.670897 containerd[1473]: time="2025-03-17T17:35:44.670865326Z" level=info msg="PullImage \"docker.io/flannel/flannel-cni-plugin:v1.1.2\"" Mar 17 17:35:45.192670 kubelet[2512]: E0317 17:35:45.192621 2512 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 17 17:35:45.193225 containerd[1473]: time="2025-03-17T17:35:45.193090380Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-52lpm,Uid:b8b4289e-17fb-4eb1-b778-a504d8ec496b,Namespace:kube-system,Attempt:0,}" Mar 17 17:35:45.210365 containerd[1473]: time="2025-03-17T17:35:45.210285175Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 17 17:35:45.210540 containerd[1473]: time="2025-03-17T17:35:45.210337813Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 17 17:35:45.210648 containerd[1473]: time="2025-03-17T17:35:45.210532286Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 17 17:35:45.211122 containerd[1473]: time="2025-03-17T17:35:45.211050229Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 17 17:35:45.232420 systemd[1]: Started cri-containerd-b7a1587ccbc943c036b6bf1713c0487d11911fc3973375d447f54421e47f7fd0.scope - libcontainer container b7a1587ccbc943c036b6bf1713c0487d11911fc3973375d447f54421e47f7fd0. 
Mar 17 17:35:45.251933 containerd[1473]: time="2025-03-17T17:35:45.251898886Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-52lpm,Uid:b8b4289e-17fb-4eb1-b778-a504d8ec496b,Namespace:kube-system,Attempt:0,} returns sandbox id \"b7a1587ccbc943c036b6bf1713c0487d11911fc3973375d447f54421e47f7fd0\"" Mar 17 17:35:45.252533 kubelet[2512]: E0317 17:35:45.252514 2512 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 17 17:35:45.254596 containerd[1473]: time="2025-03-17T17:35:45.254535280Z" level=info msg="CreateContainer within sandbox \"b7a1587ccbc943c036b6bf1713c0487d11911fc3973375d447f54421e47f7fd0\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Mar 17 17:35:45.272232 containerd[1473]: time="2025-03-17T17:35:45.272170300Z" level=info msg="CreateContainer within sandbox \"b7a1587ccbc943c036b6bf1713c0487d11911fc3973375d447f54421e47f7fd0\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"e1ed307839a4347482a98525e558d5afaa87c5c5aad6dedd5f1040742c31923b\"" Mar 17 17:35:45.272946 containerd[1473]: time="2025-03-17T17:35:45.272722762Z" level=info msg="StartContainer for \"e1ed307839a4347482a98525e558d5afaa87c5c5aad6dedd5f1040742c31923b\"" Mar 17 17:35:45.300337 systemd[1]: Started cri-containerd-e1ed307839a4347482a98525e558d5afaa87c5c5aad6dedd5f1040742c31923b.scope - libcontainer container e1ed307839a4347482a98525e558d5afaa87c5c5aad6dedd5f1040742c31923b. Mar 17 17:35:45.329414 containerd[1473]: time="2025-03-17T17:35:45.329380139Z" level=info msg="StartContainer for \"e1ed307839a4347482a98525e558d5afaa87c5c5aad6dedd5f1040742c31923b\" returns successfully" Mar 17 17:35:45.353649 kubelet[2512]: E0317 17:35:45.353610 2512 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 17 17:35:45.966369 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3535334454.mount: Deactivated successfully. 
Mar 17 17:35:45.996807 containerd[1473]: time="2025-03-17T17:35:45.996268734Z" level=info msg="ImageCreate event name:\"docker.io/flannel/flannel-cni-plugin:v1.1.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 17 17:35:45.997762 containerd[1473]: time="2025-03-17T17:35:45.997713647Z" level=info msg="stop pulling image docker.io/flannel/flannel-cni-plugin:v1.1.2: active requests=0, bytes read=3673531" Mar 17 17:35:45.998627 containerd[1473]: time="2025-03-17T17:35:45.998600658Z" level=info msg="ImageCreate event name:\"sha256:b45062ceea496fc421523388cb91166abc7715a15c2e2cbab4e6f8c9d5dc0ab8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 17 17:35:46.000792 containerd[1473]: time="2025-03-17T17:35:46.000762427Z" level=info msg="ImageCreate event name:\"docker.io/flannel/flannel-cni-plugin@sha256:bf4b62b131666d040f35a327d906ee5a3418280b68a88d9b9c7e828057210443\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 17 17:35:46.002616 containerd[1473]: time="2025-03-17T17:35:46.002565050Z" level=info msg="Pulled image \"docker.io/flannel/flannel-cni-plugin:v1.1.2\" with image id \"sha256:b45062ceea496fc421523388cb91166abc7715a15c2e2cbab4e6f8c9d5dc0ab8\", repo tag \"docker.io/flannel/flannel-cni-plugin:v1.1.2\", repo digest \"docker.io/flannel/flannel-cni-plugin@sha256:bf4b62b131666d040f35a327d906ee5a3418280b68a88d9b9c7e828057210443\", size \"3662650\" in 1.331662965s" Mar 17 17:35:46.002616 containerd[1473]: time="2025-03-17T17:35:46.002607209Z" level=info msg="PullImage \"docker.io/flannel/flannel-cni-plugin:v1.1.2\" returns image reference \"sha256:b45062ceea496fc421523388cb91166abc7715a15c2e2cbab4e6f8c9d5dc0ab8\"" Mar 17 17:35:46.004696 containerd[1473]: time="2025-03-17T17:35:46.004575027Z" level=info msg="CreateContainer within sandbox \"28d2dc6a7628bc823504b52b629bc6efca896b77dc0959cadd618b39b4a8147a\" for container &ContainerMetadata{Name:install-cni-plugin,Attempt:0,}" Mar 17 17:35:46.016224 containerd[1473]: time="2025-03-17T17:35:46.016066390Z" level=info msg="CreateContainer within sandbox \"28d2dc6a7628bc823504b52b629bc6efca896b77dc0959cadd618b39b4a8147a\" for &ContainerMetadata{Name:install-cni-plugin,Attempt:0,} returns container id \"7cd39b349b920d74f4ac9d1b5adf1d6e21a0a2aa96ca45cf8c166e1e860d9dfb\"" Mar 17 17:35:46.016934 containerd[1473]: time="2025-03-17T17:35:46.016912923Z" level=info msg="StartContainer for \"7cd39b349b920d74f4ac9d1b5adf1d6e21a0a2aa96ca45cf8c166e1e860d9dfb\"" Mar 17 17:35:46.044391 systemd[1]: Started cri-containerd-7cd39b349b920d74f4ac9d1b5adf1d6e21a0a2aa96ca45cf8c166e1e860d9dfb.scope - libcontainer container 7cd39b349b920d74f4ac9d1b5adf1d6e21a0a2aa96ca45cf8c166e1e860d9dfb. Mar 17 17:35:46.069238 containerd[1473]: time="2025-03-17T17:35:46.067524188Z" level=info msg="StartContainer for \"7cd39b349b920d74f4ac9d1b5adf1d6e21a0a2aa96ca45cf8c166e1e860d9dfb\" returns successfully" Mar 17 17:35:46.075378 systemd[1]: cri-containerd-7cd39b349b920d74f4ac9d1b5adf1d6e21a0a2aa96ca45cf8c166e1e860d9dfb.scope: Deactivated successfully. 
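Annotation: the cri-containerd-7cd39b34... scope deactivation above is the normal teardown of the install-cni-plugin init container, which runs to completion after placing the flannel CNI binary on the host; the "shim disconnected" messages that follow are the matching cleanup. The Go sketch below shows roughly that copy-and-exit step; the /flannel source and /opt/cni/bin/flannel destination paths are assumptions based on the standard flannel manifests, not taken from this log.

// installcni.go - roughly what the install-cni-plugin init container does:
// copy the flannel CNI binary into the host CNI bin directory and exit.
// Paths are assumptions, not values from this log.
package main

import (
	"fmt"
	"io"
	"os"
)

func main() {
	src, dst := "/flannel", "/opt/cni/bin/flannel"

	in, err := os.Open(src)
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	defer in.Close()

	// 0755 so the container runtime can execute the plugin later.
	out, err := os.OpenFile(dst, os.O_WRONLY|os.O_CREATE|os.O_TRUNC, 0o755)
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	defer out.Close()

	if _, err := io.Copy(out, in); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	fmt.Println("installed", dst)
}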
Mar 17 17:35:46.113182 containerd[1473]: time="2025-03-17T17:35:46.113112849Z" level=info msg="shim disconnected" id=7cd39b349b920d74f4ac9d1b5adf1d6e21a0a2aa96ca45cf8c166e1e860d9dfb namespace=k8s.io Mar 17 17:35:46.113370 containerd[1473]: time="2025-03-17T17:35:46.113211126Z" level=warning msg="cleaning up after shim disconnected" id=7cd39b349b920d74f4ac9d1b5adf1d6e21a0a2aa96ca45cf8c166e1e860d9dfb namespace=k8s.io Mar 17 17:35:46.113370 containerd[1473]: time="2025-03-17T17:35:46.113220406Z" level=info msg="cleaning up dead shim" namespace=k8s.io Mar 17 17:35:46.358198 kubelet[2512]: E0317 17:35:46.358145 2512 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 17 17:35:46.359844 containerd[1473]: time="2025-03-17T17:35:46.359797651Z" level=info msg="PullImage \"docker.io/flannel/flannel:v0.22.0\"" Mar 17 17:35:46.369272 kubelet[2512]: I0317 17:35:46.369162 2512 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-52lpm" podStartSLOduration=2.36914684 podStartE2EDuration="2.36914684s" podCreationTimestamp="2025-03-17 17:35:44 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-03-17 17:35:45.362101263 +0000 UTC m=+8.123969456" watchObservedRunningTime="2025-03-17 17:35:46.36914684 +0000 UTC m=+9.131014993" Mar 17 17:35:47.481921 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1256053892.mount: Deactivated successfully. Mar 17 17:35:47.963596 containerd[1473]: time="2025-03-17T17:35:47.963541638Z" level=info msg="ImageCreate event name:\"docker.io/flannel/flannel:v0.22.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 17 17:35:47.965484 containerd[1473]: time="2025-03-17T17:35:47.965410103Z" level=info msg="stop pulling image docker.io/flannel/flannel:v0.22.0: active requests=0, bytes read=26874261" Mar 17 17:35:47.966568 containerd[1473]: time="2025-03-17T17:35:47.966530470Z" level=info msg="ImageCreate event name:\"sha256:b3d1319ea6da12d4a1dd21a923f6a71f942a7b6e2c4763b8a3cca0725fb8aadf\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 17 17:35:47.970286 containerd[1473]: time="2025-03-17T17:35:47.970243920Z" level=info msg="ImageCreate event name:\"docker.io/flannel/flannel@sha256:5f83f1243057458e27249157394e3859cf31cc075354af150d497f2ebc8b54db\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 17 17:35:47.971621 containerd[1473]: time="2025-03-17T17:35:47.971581201Z" level=info msg="Pulled image \"docker.io/flannel/flannel:v0.22.0\" with image id \"sha256:b3d1319ea6da12d4a1dd21a923f6a71f942a7b6e2c4763b8a3cca0725fb8aadf\", repo tag \"docker.io/flannel/flannel:v0.22.0\", repo digest \"docker.io/flannel/flannel@sha256:5f83f1243057458e27249157394e3859cf31cc075354af150d497f2ebc8b54db\", size \"26863435\" in 1.611728671s" Mar 17 17:35:47.971668 containerd[1473]: time="2025-03-17T17:35:47.971628959Z" level=info msg="PullImage \"docker.io/flannel/flannel:v0.22.0\" returns image reference \"sha256:b3d1319ea6da12d4a1dd21a923f6a71f942a7b6e2c4763b8a3cca0725fb8aadf\"" Mar 17 17:35:47.973758 containerd[1473]: time="2025-03-17T17:35:47.973728457Z" level=info msg="CreateContainer within sandbox \"28d2dc6a7628bc823504b52b629bc6efca896b77dc0959cadd618b39b4a8147a\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Mar 17 17:35:47.988147 containerd[1473]: 
time="2025-03-17T17:35:47.988094794Z" level=info msg="CreateContainer within sandbox \"28d2dc6a7628bc823504b52b629bc6efca896b77dc0959cadd618b39b4a8147a\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"4fb1646ff46c8eb4598a8055cf94572be6a67d88597fb6b415e9bf54728b84c6\"" Mar 17 17:35:47.988843 containerd[1473]: time="2025-03-17T17:35:47.988795173Z" level=info msg="StartContainer for \"4fb1646ff46c8eb4598a8055cf94572be6a67d88597fb6b415e9bf54728b84c6\"" Mar 17 17:35:48.017352 systemd[1]: Started cri-containerd-4fb1646ff46c8eb4598a8055cf94572be6a67d88597fb6b415e9bf54728b84c6.scope - libcontainer container 4fb1646ff46c8eb4598a8055cf94572be6a67d88597fb6b415e9bf54728b84c6. Mar 17 17:35:48.051127 systemd[1]: cri-containerd-4fb1646ff46c8eb4598a8055cf94572be6a67d88597fb6b415e9bf54728b84c6.scope: Deactivated successfully. Mar 17 17:35:48.056496 containerd[1473]: time="2025-03-17T17:35:48.056455944Z" level=info msg="StartContainer for \"4fb1646ff46c8eb4598a8055cf94572be6a67d88597fb6b415e9bf54728b84c6\" returns successfully" Mar 17 17:35:48.119458 kubelet[2512]: I0317 17:35:48.119424 2512 kubelet_node_status.go:488] "Fast updating node status as it just became ready" Mar 17 17:35:48.166907 containerd[1473]: time="2025-03-17T17:35:48.166646465Z" level=info msg="shim disconnected" id=4fb1646ff46c8eb4598a8055cf94572be6a67d88597fb6b415e9bf54728b84c6 namespace=k8s.io Mar 17 17:35:48.166907 containerd[1473]: time="2025-03-17T17:35:48.166736342Z" level=warning msg="cleaning up after shim disconnected" id=4fb1646ff46c8eb4598a8055cf94572be6a67d88597fb6b415e9bf54728b84c6 namespace=k8s.io Mar 17 17:35:48.166907 containerd[1473]: time="2025-03-17T17:35:48.166746062Z" level=info msg="cleaning up dead shim" namespace=k8s.io Mar 17 17:35:48.175158 systemd[1]: Created slice kubepods-burstable-podf52e9b4c_5ac1_4aac_94d8_d479ae3c7443.slice - libcontainer container kubepods-burstable-podf52e9b4c_5ac1_4aac_94d8_d479ae3c7443.slice. 
Mar 17 17:35:48.176921 kubelet[2512]: I0317 17:35:48.175958 2512 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/f52e9b4c-5ac1-4aac-94d8-d479ae3c7443-config-volume\") pod \"coredns-6f6b679f8f-rn7qm\" (UID: \"f52e9b4c-5ac1-4aac-94d8-d479ae3c7443\") " pod="kube-system/coredns-6f6b679f8f-rn7qm" Mar 17 17:35:48.176921 kubelet[2512]: I0317 17:35:48.176000 2512 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qm422\" (UniqueName: \"kubernetes.io/projected/f52e9b4c-5ac1-4aac-94d8-d479ae3c7443-kube-api-access-qm422\") pod \"coredns-6f6b679f8f-rn7qm\" (UID: \"f52e9b4c-5ac1-4aac-94d8-d479ae3c7443\") " pod="kube-system/coredns-6f6b679f8f-rn7qm" Mar 17 17:35:48.176921 kubelet[2512]: I0317 17:35:48.176020 2512 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/f5a5497a-dc33-4335-bee5-d04489f63a8b-config-volume\") pod \"coredns-6f6b679f8f-xfrs2\" (UID: \"f5a5497a-dc33-4335-bee5-d04489f63a8b\") " pod="kube-system/coredns-6f6b679f8f-xfrs2" Mar 17 17:35:48.176921 kubelet[2512]: I0317 17:35:48.176062 2512 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5jw46\" (UniqueName: \"kubernetes.io/projected/f5a5497a-dc33-4335-bee5-d04489f63a8b-kube-api-access-5jw46\") pod \"coredns-6f6b679f8f-xfrs2\" (UID: \"f5a5497a-dc33-4335-bee5-d04489f63a8b\") " pod="kube-system/coredns-6f6b679f8f-xfrs2" Mar 17 17:35:48.183772 systemd[1]: Created slice kubepods-burstable-podf5a5497a_dc33_4335_bee5_d04489f63a8b.slice - libcontainer container kubepods-burstable-podf5a5497a_dc33_4335_bee5_d04489f63a8b.slice. Mar 17 17:35:48.361700 kubelet[2512]: E0317 17:35:48.361554 2512 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 17 17:35:48.364716 containerd[1473]: time="2025-03-17T17:35:48.364425459Z" level=info msg="CreateContainer within sandbox \"28d2dc6a7628bc823504b52b629bc6efca896b77dc0959cadd618b39b4a8147a\" for container &ContainerMetadata{Name:kube-flannel,Attempt:0,}" Mar 17 17:35:48.380697 containerd[1473]: time="2025-03-17T17:35:48.380639806Z" level=info msg="CreateContainer within sandbox \"28d2dc6a7628bc823504b52b629bc6efca896b77dc0959cadd618b39b4a8147a\" for &ContainerMetadata{Name:kube-flannel,Attempt:0,} returns container id \"49ae4f05cf9d4e45f01973e5a39c627741a40a52c0558dfb02766d4598aa0d5b\"" Mar 17 17:35:48.382374 containerd[1473]: time="2025-03-17T17:35:48.382340798Z" level=info msg="StartContainer for \"49ae4f05cf9d4e45f01973e5a39c627741a40a52c0558dfb02766d4598aa0d5b\"" Mar 17 17:35:48.407372 systemd[1]: Started cri-containerd-49ae4f05cf9d4e45f01973e5a39c627741a40a52c0558dfb02766d4598aa0d5b.scope - libcontainer container 49ae4f05cf9d4e45f01973e5a39c627741a40a52c0558dfb02766d4598aa0d5b. 
Mar 17 17:35:48.439585 containerd[1473]: time="2025-03-17T17:35:48.439531440Z" level=info msg="StartContainer for \"49ae4f05cf9d4e45f01973e5a39c627741a40a52c0558dfb02766d4598aa0d5b\" returns successfully" Mar 17 17:35:48.483576 kubelet[2512]: E0317 17:35:48.483301 2512 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 17 17:35:48.483854 containerd[1473]: time="2025-03-17T17:35:48.483817123Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-rn7qm,Uid:f52e9b4c-5ac1-4aac-94d8-d479ae3c7443,Namespace:kube-system,Attempt:0,}" Mar 17 17:35:48.485951 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-4fb1646ff46c8eb4598a8055cf94572be6a67d88597fb6b415e9bf54728b84c6-rootfs.mount: Deactivated successfully. Mar 17 17:35:48.487211 kubelet[2512]: E0317 17:35:48.487184 2512 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 17 17:35:48.490017 containerd[1473]: time="2025-03-17T17:35:48.489980791Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-xfrs2,Uid:f5a5497a-dc33-4335-bee5-d04489f63a8b,Namespace:kube-system,Attempt:0,}" Mar 17 17:35:48.557951 containerd[1473]: time="2025-03-17T17:35:48.557895013Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-xfrs2,Uid:f5a5497a-dc33-4335-bee5-d04489f63a8b,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"6d220c7c4734840f46111cda9b33dbf52d35ce2d63aaffb0f99a8edae0ae128c\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory" Mar 17 17:35:48.558392 kubelet[2512]: E0317 17:35:48.558358 2512 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6d220c7c4734840f46111cda9b33dbf52d35ce2d63aaffb0f99a8edae0ae128c\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory" Mar 17 17:35:48.558834 kubelet[2512]: E0317 17:35:48.558593 2512 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6d220c7c4734840f46111cda9b33dbf52d35ce2d63aaffb0f99a8edae0ae128c\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory" pod="kube-system/coredns-6f6b679f8f-xfrs2" Mar 17 17:35:48.558834 kubelet[2512]: E0317 17:35:48.558618 2512 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6d220c7c4734840f46111cda9b33dbf52d35ce2d63aaffb0f99a8edae0ae128c\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory" pod="kube-system/coredns-6f6b679f8f-xfrs2" Mar 17 17:35:48.558834 kubelet[2512]: E0317 17:35:48.558682 2512 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-6f6b679f8f-xfrs2_kube-system(f5a5497a-dc33-4335-bee5-d04489f63a8b)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-6f6b679f8f-xfrs2_kube-system(f5a5497a-dc33-4335-bee5-d04489f63a8b)\\\": rpc error: code = Unknown desc = failed to setup network for 
sandbox \\\"6d220c7c4734840f46111cda9b33dbf52d35ce2d63aaffb0f99a8edae0ae128c\\\": plugin type=\\\"flannel\\\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory\"" pod="kube-system/coredns-6f6b679f8f-xfrs2" podUID="f5a5497a-dc33-4335-bee5-d04489f63a8b" Mar 17 17:35:48.559667 containerd[1473]: time="2025-03-17T17:35:48.559154818Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-rn7qm,Uid:f52e9b4c-5ac1-4aac-94d8-d479ae3c7443,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"ed1bc5d3f0827bb6831e56baa7d408e8e3f0e5065b6583fa9aa7383041094a56\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory" Mar 17 17:35:48.559739 kubelet[2512]: E0317 17:35:48.559476 2512 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ed1bc5d3f0827bb6831e56baa7d408e8e3f0e5065b6583fa9aa7383041094a56\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory" Mar 17 17:35:48.559739 kubelet[2512]: E0317 17:35:48.559523 2512 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ed1bc5d3f0827bb6831e56baa7d408e8e3f0e5065b6583fa9aa7383041094a56\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory" pod="kube-system/coredns-6f6b679f8f-rn7qm" Mar 17 17:35:48.559739 kubelet[2512]: E0317 17:35:48.559545 2512 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ed1bc5d3f0827bb6831e56baa7d408e8e3f0e5065b6583fa9aa7383041094a56\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory" pod="kube-system/coredns-6f6b679f8f-rn7qm" Mar 17 17:35:48.559739 kubelet[2512]: E0317 17:35:48.559577 2512 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-6f6b679f8f-rn7qm_kube-system(f52e9b4c-5ac1-4aac-94d8-d479ae3c7443)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-6f6b679f8f-rn7qm_kube-system(f52e9b4c-5ac1-4aac-94d8-d479ae3c7443)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"ed1bc5d3f0827bb6831e56baa7d408e8e3f0e5065b6583fa9aa7383041094a56\\\": plugin type=\\\"flannel\\\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory\"" pod="kube-system/coredns-6f6b679f8f-rn7qm" podUID="f52e9b4c-5ac1-4aac-94d8-d479ae3c7443" Mar 17 17:35:49.365502 kubelet[2512]: E0317 17:35:49.365373 2512 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 17 17:35:49.483938 systemd[1]: run-netns-cni\x2dc088dae0\x2db667\x2dd986\x2d18d0\x2d21f32b3bfd14.mount: Deactivated successfully. Mar 17 17:35:49.484049 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-6d220c7c4734840f46111cda9b33dbf52d35ce2d63aaffb0f99a8edae0ae128c-shm.mount: Deactivated successfully. Mar 17 17:35:49.484100 systemd[1]: run-netns-cni\x2d57caccff\x2d59e1\x2d684d\x2df040\x2db9605eae860f.mount: Deactivated successfully. 
Mar 17 17:35:49.484393 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-ed1bc5d3f0827bb6831e56baa7d408e8e3f0e5065b6583fa9aa7383041094a56-shm.mount: Deactivated successfully. Mar 17 17:35:49.543759 systemd-networkd[1398]: flannel.1: Link UP Mar 17 17:35:49.543765 systemd-networkd[1398]: flannel.1: Gained carrier Mar 17 17:35:50.367008 kubelet[2512]: E0317 17:35:50.366972 2512 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 17 17:35:51.202343 systemd-networkd[1398]: flannel.1: Gained IPv6LL Mar 17 17:35:51.342113 kubelet[2512]: E0317 17:35:51.341809 2512 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 17 17:35:51.350544 kubelet[2512]: I0317 17:35:51.350255 2512 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-flannel/kube-flannel-ds-5zkw5" podStartSLOduration=4.048269531 podStartE2EDuration="7.350237007s" podCreationTimestamp="2025-03-17 17:35:44 +0000 UTC" firstStartedPulling="2025-03-17 17:35:44.670486939 +0000 UTC m=+7.432355132" lastFinishedPulling="2025-03-17 17:35:47.972454455 +0000 UTC m=+10.734322608" observedRunningTime="2025-03-17 17:35:49.37909013 +0000 UTC m=+12.140958403" watchObservedRunningTime="2025-03-17 17:35:51.350237007 +0000 UTC m=+14.112105200" Mar 17 17:35:51.912127 kubelet[2512]: E0317 17:35:51.911978 2512 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 17 17:35:52.503976 kubelet[2512]: E0317 17:35:52.503928 2512 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 17 17:35:53.128829 update_engine[1459]: I20250317 17:35:53.128294 1459 update_attempter.cc:509] Updating boot flags... 
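Once flannel.1 is up, kubelet records startup latency for the flannel DaemonSet pod. The two durations in the pod_startup_latency_tracker entry above are consistent with each other: podStartE2EDuration is the time from pod creation (17:35:44) to the watch-observed running time (17:35:51.350), and podStartSLOduration additionally excludes the image-pull window, which can be checked against the monotonic m=+ offsets in the same entry:

    pull window         = 10.734322608 s - 7.432355132 s = 3.301967476 s
    podStartE2EDuration = 7.350237007 s
    podStartSLOduration = 7.350237007 s - 3.301967476 s  = 4.048269531 s   (matches the logged value)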
Mar 17 17:35:53.157220 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 40 scanned by (udev-worker) (3158) Mar 17 17:35:53.196408 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 40 scanned by (udev-worker) (3161) Mar 17 17:36:01.325038 kubelet[2512]: E0317 17:36:01.324547 2512 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 17 17:36:01.326158 containerd[1473]: time="2025-03-17T17:36:01.325979931Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-rn7qm,Uid:f52e9b4c-5ac1-4aac-94d8-d479ae3c7443,Namespace:kube-system,Attempt:0,}" Mar 17 17:36:01.329263 kubelet[2512]: E0317 17:36:01.329188 2512 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 17 17:36:01.330088 containerd[1473]: time="2025-03-17T17:36:01.330003431Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-xfrs2,Uid:f5a5497a-dc33-4335-bee5-d04489f63a8b,Namespace:kube-system,Attempt:0,}" Mar 17 17:36:01.371037 systemd-networkd[1398]: cni0: Link UP Mar 17 17:36:01.371985 systemd-networkd[1398]: cni0: Gained carrier Mar 17 17:36:01.372424 systemd-networkd[1398]: cni0: Lost carrier Mar 17 17:36:01.382781 systemd-networkd[1398]: veth6761cfb9: Link UP Mar 17 17:36:01.383477 systemd-networkd[1398]: vethb2dcdd08: Link UP Mar 17 17:36:01.385918 kernel: cni0: port 1(vethb2dcdd08) entered blocking state Mar 17 17:36:01.385969 kernel: cni0: port 1(vethb2dcdd08) entered disabled state Mar 17 17:36:01.386009 kernel: vethb2dcdd08: entered allmulticast mode Mar 17 17:36:01.386038 kernel: vethb2dcdd08: entered promiscuous mode Mar 17 17:36:01.386055 kernel: cni0: port 1(vethb2dcdd08) entered blocking state Mar 17 17:36:01.386720 kernel: cni0: port 1(vethb2dcdd08) entered forwarding state Mar 17 17:36:01.388920 kernel: cni0: port 1(vethb2dcdd08) entered disabled state Mar 17 17:36:01.388971 kernel: cni0: port 2(veth6761cfb9) entered blocking state Mar 17 17:36:01.390216 kernel: cni0: port 2(veth6761cfb9) entered disabled state Mar 17 17:36:01.391615 kernel: veth6761cfb9: entered allmulticast mode Mar 17 17:36:01.392324 kernel: veth6761cfb9: entered promiscuous mode Mar 17 17:36:01.397303 kernel: cni0: port 2(veth6761cfb9) entered blocking state Mar 17 17:36:01.397363 kernel: cni0: port 2(veth6761cfb9) entered forwarding state Mar 17 17:36:01.397465 systemd-networkd[1398]: veth6761cfb9: Gained carrier Mar 17 17:36:01.397655 systemd-networkd[1398]: cni0: Gained carrier Mar 17 17:36:01.399262 kernel: cni0: port 1(vethb2dcdd08) entered blocking state Mar 17 17:36:01.399312 kernel: cni0: port 1(vethb2dcdd08) entered forwarding state Mar 17 17:36:01.399343 systemd-networkd[1398]: vethb2dcdd08: Gained carrier Mar 17 17:36:01.402102 containerd[1473]: map[string]interface {}{"cniVersion":"0.3.1", "hairpinMode":true, "ipMasq":false, "ipam":map[string]interface {}{"ranges":[][]map[string]interface {}{[]map[string]interface {}{map[string]interface {}{"subnet":"192.168.0.0/24"}}}, "routes":[]types.Route{types.Route{Dst:net.IPNet{IP:net.IP{0xc0, 0xa8, 0x0, 0x0}, Mask:net.IPMask{0xff, 0xff, 0x80, 0x0}}, GW:net.IP(nil)}}, "type":"host-local"}, "isDefaultGateway":true, "isGateway":true, "mtu":(*uint)(0x400001c938), "name":"cbr0", "type":"bridge"} Mar 17 17:36:01.402102 containerd[1473]: delegateAdd: netconf sent to delegate 
plugin: Mar 17 17:36:01.402803 containerd[1473]: {"cniVersion":"0.3.1","hairpinMode":true,"ipMasq":false,"ipam":{"ranges":[[{"subnet":"192.168.0.0/24"}]],"routes":[{"dst":"192.168.0.0/17"}],"type":"host-local"},"isDefaultGateway":true,"isGateway":true,"mtu":1450,"name":"cbr0","type":"bridge"} Mar 17 17:36:01.402803 containerd[1473]: map[string]interface {}{"cniVersion":"0.3.1", "hairpinMode":true, "ipMasq":false, "ipam":map[string]interface {}{"ranges":[][]map[string]interface {}{[]map[string]interface {}{map[string]interface {}{"subnet":"192.168.0.0/24"}}}, "routes":[]types.Route{types.Route{Dst:net.IPNet{IP:net.IP{0xc0, 0xa8, 0x0, 0x0}, Mask:net.IPMask{0xff, 0xff, 0x80, 0x0}}, GW:net.IP(nil)}}, "type":"host-local"}, "isDefaultGateway":true, "isGateway":true, "mtu":(*uint)(0x400001a938), "name":"cbr0", "type":"bridge"} Mar 17 17:36:01.402803 containerd[1473]: delegateAdd: netconf sent to delegate plugin: Mar 17 17:36:01.417921 containerd[1473]: {"cniVersion":"0.3.1","hairpinMode":true,"ipMasq":false,"ipam":{"ranges":[[{"subnet":"192.168.0.0/24"}]],"routes":[{"dst":"192.168.0.0/17"}],"type":"host-local"},"isDefaultGateway":true,"isGateway":true,"mtu":1450,"name":"cbr0","type":"bridge"}time="2025-03-17T17:36:01.417793130Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 17 17:36:01.418047 containerd[1473]: time="2025-03-17T17:36:01.417905248Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 17 17:36:01.418047 containerd[1473]: time="2025-03-17T17:36:01.417922248Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 17 17:36:01.418788 containerd[1473]: time="2025-03-17T17:36:01.418713556Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 17 17:36:01.418788 containerd[1473]: time="2025-03-17T17:36:01.418767555Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 17 17:36:01.418866 containerd[1473]: time="2025-03-17T17:36:01.418779035Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 17 17:36:01.418866 containerd[1473]: time="2025-03-17T17:36:01.418851914Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 17 17:36:01.419133 containerd[1473]: time="2025-03-17T17:36:01.419066911Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 17 17:36:01.446335 systemd[1]: Started cri-containerd-2f947e3016b016f48439122597949273ba779ef8e645e8d935aa606524000b76.scope - libcontainer container 2f947e3016b016f48439122597949273ba779ef8e645e8d935aa606524000b76. Mar 17 17:36:01.448899 systemd[1]: Started cri-containerd-76cd217773db26f802e6f95e0c18a25b07beeec268dcc5ad40c532ca33c82de2.scope - libcontainer container 76cd217773db26f802e6f95e0c18a25b07beeec268dcc5ad40c532ca33c82de2. 
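The delegateAdd messages above show flannel acting as a meta-plugin: for each sandbox it reads subnet.env and hands a generated bridge/host-local configuration to the delegate plugin. Reformatted for readability, the netconf sent to the bridge plugin is:

    {
      "cniVersion": "0.3.1",
      "name": "cbr0",
      "type": "bridge",
      "isGateway": true,
      "isDefaultGateway": true,
      "hairpinMode": true,
      "ipMasq": false,
      "mtu": 1450,
      "ipam": {
        "type": "host-local",
        "ranges": [[ { "subnet": "192.168.0.0/24" } ]],
        "routes": [ { "dst": "192.168.0.0/17" } ]
      }
    }

The preceding kernel messages about cni0 ports entering blocking and forwarding state, and the veth devices entering promiscuous and allmulticast mode, are the bridge plugin wiring each pod's veth pair into the cni0 bridge; on the node this can be inspected with standard iproute2 commands such as "bridge link show" and "ip -d link show flannel.1" (illustrative commands, not taken from the log).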
Mar 17 17:36:01.460486 systemd-resolved[1313]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Mar 17 17:36:01.462001 systemd-resolved[1313]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Mar 17 17:36:01.475217 systemd[1]: Started sshd@5-10.0.0.106:22-10.0.0.1:39488.service - OpenSSH per-connection server daemon (10.0.0.1:39488). Mar 17 17:36:01.488514 containerd[1473]: time="2025-03-17T17:36:01.488483122Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-xfrs2,Uid:f5a5497a-dc33-4335-bee5-d04489f63a8b,Namespace:kube-system,Attempt:0,} returns sandbox id \"2f947e3016b016f48439122597949273ba779ef8e645e8d935aa606524000b76\"" Mar 17 17:36:01.488687 containerd[1473]: time="2025-03-17T17:36:01.488668639Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-rn7qm,Uid:f52e9b4c-5ac1-4aac-94d8-d479ae3c7443,Namespace:kube-system,Attempt:0,} returns sandbox id \"76cd217773db26f802e6f95e0c18a25b07beeec268dcc5ad40c532ca33c82de2\"" Mar 17 17:36:01.489597 kubelet[2512]: E0317 17:36:01.489575 2512 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 17 17:36:01.491933 kubelet[2512]: E0317 17:36:01.491781 2512 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 17 17:36:01.493511 containerd[1473]: time="2025-03-17T17:36:01.493398049Z" level=info msg="CreateContainer within sandbox \"76cd217773db26f802e6f95e0c18a25b07beeec268dcc5ad40c532ca33c82de2\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Mar 17 17:36:01.496815 containerd[1473]: time="2025-03-17T17:36:01.496777959Z" level=info msg="CreateContainer within sandbox \"2f947e3016b016f48439122597949273ba779ef8e645e8d935aa606524000b76\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Mar 17 17:36:01.512005 containerd[1473]: time="2025-03-17T17:36:01.511970534Z" level=info msg="CreateContainer within sandbox \"2f947e3016b016f48439122597949273ba779ef8e645e8d935aa606524000b76\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"b13417a81d296cb0a43b3a482eb02d8572ca1a708780d9188496a326247bbef1\"" Mar 17 17:36:01.512499 containerd[1473]: time="2025-03-17T17:36:01.512463367Z" level=info msg="StartContainer for \"b13417a81d296cb0a43b3a482eb02d8572ca1a708780d9188496a326247bbef1\"" Mar 17 17:36:01.514912 containerd[1473]: time="2025-03-17T17:36:01.514618815Z" level=info msg="CreateContainer within sandbox \"76cd217773db26f802e6f95e0c18a25b07beeec268dcc5ad40c532ca33c82de2\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"31626d9491701276041fcba64e84eedb83d98e5b7bbdfffe8dac83af432867c2\"" Mar 17 17:36:01.515060 containerd[1473]: time="2025-03-17T17:36:01.515010489Z" level=info msg="StartContainer for \"31626d9491701276041fcba64e84eedb83d98e5b7bbdfffe8dac83af432867c2\"" Mar 17 17:36:01.526102 sshd[3347]: Accepted publickey for core from 10.0.0.1 port 39488 ssh2: RSA SHA256:XEsN/dc1y+7MY2pZiPvPM9E3FANLWuBR2AC7g0KqjmQ Mar 17 17:36:01.527748 sshd-session[3347]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 17 17:36:01.532403 systemd-logind[1453]: New session 6 of user core. 
Mar 17 17:36:01.546391 systemd[1]: Started cri-containerd-b13417a81d296cb0a43b3a482eb02d8572ca1a708780d9188496a326247bbef1.scope - libcontainer container b13417a81d296cb0a43b3a482eb02d8572ca1a708780d9188496a326247bbef1. Mar 17 17:36:01.547453 systemd[1]: Started session-6.scope - Session 6 of User core. Mar 17 17:36:01.550928 systemd[1]: Started cri-containerd-31626d9491701276041fcba64e84eedb83d98e5b7bbdfffe8dac83af432867c2.scope - libcontainer container 31626d9491701276041fcba64e84eedb83d98e5b7bbdfffe8dac83af432867c2. Mar 17 17:36:01.568925 containerd[1473]: time="2025-03-17T17:36:01.568888970Z" level=info msg="StartContainer for \"b13417a81d296cb0a43b3a482eb02d8572ca1a708780d9188496a326247bbef1\" returns successfully" Mar 17 17:36:01.583033 containerd[1473]: time="2025-03-17T17:36:01.582916442Z" level=info msg="StartContainer for \"31626d9491701276041fcba64e84eedb83d98e5b7bbdfffe8dac83af432867c2\" returns successfully" Mar 17 17:36:01.690509 sshd[3407]: Connection closed by 10.0.0.1 port 39488 Mar 17 17:36:01.691254 sshd-session[3347]: pam_unix(sshd:session): session closed for user core Mar 17 17:36:01.693819 systemd[1]: sshd@5-10.0.0.106:22-10.0.0.1:39488.service: Deactivated successfully. Mar 17 17:36:01.695576 systemd[1]: session-6.scope: Deactivated successfully. Mar 17 17:36:01.697019 systemd-logind[1453]: Session 6 logged out. Waiting for processes to exit. Mar 17 17:36:01.698078 systemd-logind[1453]: Removed session 6. Mar 17 17:36:02.390388 kubelet[2512]: E0317 17:36:02.390228 2512 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 17 17:36:02.395817 kubelet[2512]: E0317 17:36:02.395718 2512 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 17 17:36:02.414367 kubelet[2512]: I0317 17:36:02.414312 2512 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-6f6b679f8f-xfrs2" podStartSLOduration=18.414267816 podStartE2EDuration="18.414267816s" podCreationTimestamp="2025-03-17 17:35:44 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-03-17 17:36:02.404732951 +0000 UTC m=+25.166601104" watchObservedRunningTime="2025-03-17 17:36:02.414267816 +0000 UTC m=+25.176136009" Mar 17 17:36:02.423048 kubelet[2512]: I0317 17:36:02.422916 2512 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-6f6b679f8f-rn7qm" podStartSLOduration=18.422902533 podStartE2EDuration="18.422902533s" podCreationTimestamp="2025-03-17 17:35:44 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-03-17 17:36:02.414563012 +0000 UTC m=+25.176431205" watchObservedRunningTime="2025-03-17 17:36:02.422902533 +0000 UTC m=+25.184770726" Mar 17 17:36:02.530295 systemd-networkd[1398]: cni0: Gained IPv6LL Mar 17 17:36:02.850300 systemd-networkd[1398]: veth6761cfb9: Gained IPv6LL Mar 17 17:36:03.298308 systemd-networkd[1398]: vethb2dcdd08: Gained IPv6LL Mar 17 17:36:03.397248 kubelet[2512]: E0317 17:36:03.397135 2512 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 17 17:36:03.397248 kubelet[2512]: E0317 
17:36:03.397219 2512 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 17 17:36:04.398674 kubelet[2512]: E0317 17:36:04.398645 2512 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 17 17:36:04.399004 kubelet[2512]: E0317 17:36:04.398691 2512 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 17 17:36:06.701721 systemd[1]: Started sshd@6-10.0.0.106:22-10.0.0.1:51008.service - OpenSSH per-connection server daemon (10.0.0.1:51008). Mar 17 17:36:06.739710 sshd[3483]: Accepted publickey for core from 10.0.0.1 port 51008 ssh2: RSA SHA256:XEsN/dc1y+7MY2pZiPvPM9E3FANLWuBR2AC7g0KqjmQ Mar 17 17:36:06.740779 sshd-session[3483]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 17 17:36:06.744609 systemd-logind[1453]: New session 7 of user core. Mar 17 17:36:06.750310 systemd[1]: Started session-7.scope - Session 7 of User core. Mar 17 17:36:06.857195 sshd[3485]: Connection closed by 10.0.0.1 port 51008 Mar 17 17:36:06.857621 sshd-session[3483]: pam_unix(sshd:session): session closed for user core Mar 17 17:36:06.860905 systemd[1]: sshd@6-10.0.0.106:22-10.0.0.1:51008.service: Deactivated successfully. Mar 17 17:36:06.862737 systemd[1]: session-7.scope: Deactivated successfully. Mar 17 17:36:06.863319 systemd-logind[1453]: Session 7 logged out. Waiting for processes to exit. Mar 17 17:36:06.864081 systemd-logind[1453]: Removed session 7. Mar 17 17:36:11.873380 systemd[1]: Started sshd@7-10.0.0.106:22-10.0.0.1:51010.service - OpenSSH per-connection server daemon (10.0.0.1:51010). Mar 17 17:36:11.917949 sshd[3520]: Accepted publickey for core from 10.0.0.1 port 51010 ssh2: RSA SHA256:XEsN/dc1y+7MY2pZiPvPM9E3FANLWuBR2AC7g0KqjmQ Mar 17 17:36:11.919048 sshd-session[3520]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 17 17:36:11.922428 systemd-logind[1453]: New session 8 of user core. Mar 17 17:36:11.934326 systemd[1]: Started session-8.scope - Session 8 of User core. Mar 17 17:36:12.046197 sshd[3522]: Connection closed by 10.0.0.1 port 51010 Mar 17 17:36:12.046702 sshd-session[3520]: pam_unix(sshd:session): session closed for user core Mar 17 17:36:12.066656 systemd[1]: sshd@7-10.0.0.106:22-10.0.0.1:51010.service: Deactivated successfully. Mar 17 17:36:12.068456 systemd[1]: session-8.scope: Deactivated successfully. Mar 17 17:36:12.069869 systemd-logind[1453]: Session 8 logged out. Waiting for processes to exit. Mar 17 17:36:12.071469 systemd[1]: Started sshd@8-10.0.0.106:22-10.0.0.1:51016.service - OpenSSH per-connection server daemon (10.0.0.1:51016). Mar 17 17:36:12.072519 systemd-logind[1453]: Removed session 8. Mar 17 17:36:12.110182 sshd[3535]: Accepted publickey for core from 10.0.0.1 port 51016 ssh2: RSA SHA256:XEsN/dc1y+7MY2pZiPvPM9E3FANLWuBR2AC7g0KqjmQ Mar 17 17:36:12.112129 sshd-session[3535]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 17 17:36:12.115440 systemd-logind[1453]: New session 9 of user core. Mar 17 17:36:12.125327 systemd[1]: Started session-9.scope - Session 9 of User core. 
Mar 17 17:36:12.265858 sshd[3537]: Connection closed by 10.0.0.1 port 51016 Mar 17 17:36:12.266317 sshd-session[3535]: pam_unix(sshd:session): session closed for user core Mar 17 17:36:12.274959 systemd[1]: sshd@8-10.0.0.106:22-10.0.0.1:51016.service: Deactivated successfully. Mar 17 17:36:12.277890 systemd[1]: session-9.scope: Deactivated successfully. Mar 17 17:36:12.278754 systemd-logind[1453]: Session 9 logged out. Waiting for processes to exit. Mar 17 17:36:12.292607 systemd[1]: Started sshd@9-10.0.0.106:22-10.0.0.1:51022.service - OpenSSH per-connection server daemon (10.0.0.1:51022). Mar 17 17:36:12.293972 systemd-logind[1453]: Removed session 9. Mar 17 17:36:12.335735 sshd[3547]: Accepted publickey for core from 10.0.0.1 port 51022 ssh2: RSA SHA256:XEsN/dc1y+7MY2pZiPvPM9E3FANLWuBR2AC7g0KqjmQ Mar 17 17:36:12.337083 sshd-session[3547]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 17 17:36:12.341391 systemd-logind[1453]: New session 10 of user core. Mar 17 17:36:12.353437 systemd[1]: Started session-10.scope - Session 10 of User core. Mar 17 17:36:12.465979 sshd[3550]: Connection closed by 10.0.0.1 port 51022 Mar 17 17:36:12.466395 sshd-session[3547]: pam_unix(sshd:session): session closed for user core Mar 17 17:36:12.469557 systemd[1]: sshd@9-10.0.0.106:22-10.0.0.1:51022.service: Deactivated successfully. Mar 17 17:36:12.472456 systemd[1]: session-10.scope: Deactivated successfully. Mar 17 17:36:12.473420 systemd-logind[1453]: Session 10 logged out. Waiting for processes to exit. Mar 17 17:36:12.474322 systemd-logind[1453]: Removed session 10. Mar 17 17:36:17.476559 systemd[1]: Started sshd@10-10.0.0.106:22-10.0.0.1:43166.service - OpenSSH per-connection server daemon (10.0.0.1:43166). Mar 17 17:36:17.514967 sshd[3586]: Accepted publickey for core from 10.0.0.1 port 43166 ssh2: RSA SHA256:XEsN/dc1y+7MY2pZiPvPM9E3FANLWuBR2AC7g0KqjmQ Mar 17 17:36:17.516125 sshd-session[3586]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 17 17:36:17.520296 systemd-logind[1453]: New session 11 of user core. Mar 17 17:36:17.533352 systemd[1]: Started session-11.scope - Session 11 of User core. Mar 17 17:36:17.639859 sshd[3588]: Connection closed by 10.0.0.1 port 43166 Mar 17 17:36:17.640300 sshd-session[3586]: pam_unix(sshd:session): session closed for user core Mar 17 17:36:17.650627 systemd[1]: sshd@10-10.0.0.106:22-10.0.0.1:43166.service: Deactivated successfully. Mar 17 17:36:17.652908 systemd[1]: session-11.scope: Deactivated successfully. Mar 17 17:36:17.654681 systemd-logind[1453]: Session 11 logged out. Waiting for processes to exit. Mar 17 17:36:17.664470 systemd[1]: Started sshd@11-10.0.0.106:22-10.0.0.1:43182.service - OpenSSH per-connection server daemon (10.0.0.1:43182). Mar 17 17:36:17.665973 systemd-logind[1453]: Removed session 11. Mar 17 17:36:17.699836 sshd[3600]: Accepted publickey for core from 10.0.0.1 port 43182 ssh2: RSA SHA256:XEsN/dc1y+7MY2pZiPvPM9E3FANLWuBR2AC7g0KqjmQ Mar 17 17:36:17.701129 sshd-session[3600]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 17 17:36:17.705728 systemd-logind[1453]: New session 12 of user core. Mar 17 17:36:17.713357 systemd[1]: Started session-12.scope - Session 12 of User core. 
Mar 17 17:36:17.891375 sshd[3602]: Connection closed by 10.0.0.1 port 43182 Mar 17 17:36:17.892057 sshd-session[3600]: pam_unix(sshd:session): session closed for user core Mar 17 17:36:17.903901 systemd[1]: sshd@11-10.0.0.106:22-10.0.0.1:43182.service: Deactivated successfully. Mar 17 17:36:17.905615 systemd[1]: session-12.scope: Deactivated successfully. Mar 17 17:36:17.907008 systemd-logind[1453]: Session 12 logged out. Waiting for processes to exit. Mar 17 17:36:17.908442 systemd[1]: Started sshd@12-10.0.0.106:22-10.0.0.1:43194.service - OpenSSH per-connection server daemon (10.0.0.1:43194). Mar 17 17:36:17.909178 systemd-logind[1453]: Removed session 12. Mar 17 17:36:17.962742 sshd[3612]: Accepted publickey for core from 10.0.0.1 port 43194 ssh2: RSA SHA256:XEsN/dc1y+7MY2pZiPvPM9E3FANLWuBR2AC7g0KqjmQ Mar 17 17:36:17.963984 sshd-session[3612]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 17 17:36:17.968009 systemd-logind[1453]: New session 13 of user core. Mar 17 17:36:17.978360 systemd[1]: Started session-13.scope - Session 13 of User core. Mar 17 17:36:19.105402 sshd[3614]: Connection closed by 10.0.0.1 port 43194 Mar 17 17:36:19.105965 sshd-session[3612]: pam_unix(sshd:session): session closed for user core Mar 17 17:36:19.113696 systemd[1]: sshd@12-10.0.0.106:22-10.0.0.1:43194.service: Deactivated successfully. Mar 17 17:36:19.116588 systemd[1]: session-13.scope: Deactivated successfully. Mar 17 17:36:19.120397 systemd-logind[1453]: Session 13 logged out. Waiting for processes to exit. Mar 17 17:36:19.129969 systemd[1]: Started sshd@13-10.0.0.106:22-10.0.0.1:43202.service - OpenSSH per-connection server daemon (10.0.0.1:43202). Mar 17 17:36:19.131815 systemd-logind[1453]: Removed session 13. Mar 17 17:36:19.164626 sshd[3631]: Accepted publickey for core from 10.0.0.1 port 43202 ssh2: RSA SHA256:XEsN/dc1y+7MY2pZiPvPM9E3FANLWuBR2AC7g0KqjmQ Mar 17 17:36:19.165799 sshd-session[3631]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 17 17:36:19.169322 systemd-logind[1453]: New session 14 of user core. Mar 17 17:36:19.179349 systemd[1]: Started session-14.scope - Session 14 of User core. Mar 17 17:36:19.414783 sshd[3633]: Connection closed by 10.0.0.1 port 43202 Mar 17 17:36:19.414929 sshd-session[3631]: pam_unix(sshd:session): session closed for user core Mar 17 17:36:19.423162 systemd[1]: sshd@13-10.0.0.106:22-10.0.0.1:43202.service: Deactivated successfully. Mar 17 17:36:19.424671 systemd[1]: session-14.scope: Deactivated successfully. Mar 17 17:36:19.426550 systemd-logind[1453]: Session 14 logged out. Waiting for processes to exit. Mar 17 17:36:19.441131 systemd[1]: Started sshd@14-10.0.0.106:22-10.0.0.1:43208.service - OpenSSH per-connection server daemon (10.0.0.1:43208). Mar 17 17:36:19.442870 systemd-logind[1453]: Removed session 14. Mar 17 17:36:19.477656 sshd[3644]: Accepted publickey for core from 10.0.0.1 port 43208 ssh2: RSA SHA256:XEsN/dc1y+7MY2pZiPvPM9E3FANLWuBR2AC7g0KqjmQ Mar 17 17:36:19.479108 sshd-session[3644]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 17 17:36:19.483238 systemd-logind[1453]: New session 15 of user core. Mar 17 17:36:19.493373 systemd[1]: Started session-15.scope - Session 15 of User core. 
Mar 17 17:36:19.603720 sshd[3646]: Connection closed by 10.0.0.1 port 43208 Mar 17 17:36:19.604042 sshd-session[3644]: pam_unix(sshd:session): session closed for user core Mar 17 17:36:19.607436 systemd[1]: sshd@14-10.0.0.106:22-10.0.0.1:43208.service: Deactivated successfully. Mar 17 17:36:19.609060 systemd[1]: session-15.scope: Deactivated successfully. Mar 17 17:36:19.609683 systemd-logind[1453]: Session 15 logged out. Waiting for processes to exit. Mar 17 17:36:19.610536 systemd-logind[1453]: Removed session 15. Mar 17 17:36:24.615702 systemd[1]: Started sshd@15-10.0.0.106:22-10.0.0.1:51794.service - OpenSSH per-connection server daemon (10.0.0.1:51794). Mar 17 17:36:24.661039 sshd[3682]: Accepted publickey for core from 10.0.0.1 port 51794 ssh2: RSA SHA256:XEsN/dc1y+7MY2pZiPvPM9E3FANLWuBR2AC7g0KqjmQ Mar 17 17:36:24.662403 sshd-session[3682]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 17 17:36:24.666989 systemd-logind[1453]: New session 16 of user core. Mar 17 17:36:24.678403 systemd[1]: Started session-16.scope - Session 16 of User core. Mar 17 17:36:24.805136 sshd[3690]: Connection closed by 10.0.0.1 port 51794 Mar 17 17:36:24.805502 sshd-session[3682]: pam_unix(sshd:session): session closed for user core Mar 17 17:36:24.808926 systemd[1]: sshd@15-10.0.0.106:22-10.0.0.1:51794.service: Deactivated successfully. Mar 17 17:36:24.810580 systemd[1]: session-16.scope: Deactivated successfully. Mar 17 17:36:24.811267 systemd-logind[1453]: Session 16 logged out. Waiting for processes to exit. Mar 17 17:36:24.812347 systemd-logind[1453]: Removed session 16. Mar 17 17:36:29.815643 systemd[1]: Started sshd@16-10.0.0.106:22-10.0.0.1:51806.service - OpenSSH per-connection server daemon (10.0.0.1:51806). Mar 17 17:36:29.854226 sshd[3739]: Accepted publickey for core from 10.0.0.1 port 51806 ssh2: RSA SHA256:XEsN/dc1y+7MY2pZiPvPM9E3FANLWuBR2AC7g0KqjmQ Mar 17 17:36:29.855385 sshd-session[3739]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 17 17:36:29.859461 systemd-logind[1453]: New session 17 of user core. Mar 17 17:36:29.866340 systemd[1]: Started session-17.scope - Session 17 of User core. Mar 17 17:36:29.974051 sshd[3741]: Connection closed by 10.0.0.1 port 51806 Mar 17 17:36:29.974379 sshd-session[3739]: pam_unix(sshd:session): session closed for user core Mar 17 17:36:29.976963 systemd[1]: sshd@16-10.0.0.106:22-10.0.0.1:51806.service: Deactivated successfully. Mar 17 17:36:29.978691 systemd[1]: session-17.scope: Deactivated successfully. Mar 17 17:36:29.981312 systemd-logind[1453]: Session 17 logged out. Waiting for processes to exit. Mar 17 17:36:29.982238 systemd-logind[1453]: Removed session 17. Mar 17 17:36:34.988788 systemd[1]: Started sshd@17-10.0.0.106:22-10.0.0.1:50190.service - OpenSSH per-connection server daemon (10.0.0.1:50190). Mar 17 17:36:35.027917 sshd[3776]: Accepted publickey for core from 10.0.0.1 port 50190 ssh2: RSA SHA256:XEsN/dc1y+7MY2pZiPvPM9E3FANLWuBR2AC7g0KqjmQ Mar 17 17:36:35.029289 sshd-session[3776]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 17 17:36:35.033156 systemd-logind[1453]: New session 18 of user core. Mar 17 17:36:35.040347 systemd[1]: Started session-18.scope - Session 18 of User core. 
Mar 17 17:36:35.155803 sshd[3778]: Connection closed by 10.0.0.1 port 50190 Mar 17 17:36:35.156142 sshd-session[3776]: pam_unix(sshd:session): session closed for user core Mar 17 17:36:35.159266 systemd[1]: sshd@17-10.0.0.106:22-10.0.0.1:50190.service: Deactivated successfully. Mar 17 17:36:35.161069 systemd[1]: session-18.scope: Deactivated successfully. Mar 17 17:36:35.161817 systemd-logind[1453]: Session 18 logged out. Waiting for processes to exit. Mar 17 17:36:35.162711 systemd-logind[1453]: Removed session 18.
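The sshd@5-10.0.0.106:22-10.0.0.1:39488.service style unit names throughout this section indicate per-connection socket activation: systemd listens on port 22 and spawns one transient sshd instance per incoming connection, which is why every login is preceded by a "Started sshd@..." line and the corresponding service is deactivated as soon as the connection closes. A minimal sketch of that pattern, assuming unit names sshd.socket and sshd@.service (the actual Flatcar units may differ in detail):

    # sshd.socket (sketch)
    [Socket]
    ListenStream=22
    Accept=yes

    [Install]
    WantedBy=sockets.target

    # sshd@.service (sketch)
    [Service]
    ExecStart=-/usr/sbin/sshd -i
    StandardInput=socket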