Mar 17 17:40:57.888391 kernel: Booting Linux on physical CPU 0x0000000000 [0x413fd0c1] Mar 17 17:40:57.888411 kernel: Linux version 6.6.83-flatcar (build@pony-truck.infra.kinvolk.io) (aarch64-cros-linux-gnu-gcc (Gentoo Hardened 14.2.1_p20241221 p7) 14.2.1 20241221, GNU ld (Gentoo 2.43 p3) 2.43.1) #1 SMP PREEMPT Mon Mar 17 16:11:40 -00 2025 Mar 17 17:40:57.888422 kernel: KASLR enabled Mar 17 17:40:57.888427 kernel: efi: EFI v2.7 by EDK II Mar 17 17:40:57.888433 kernel: efi: SMBIOS 3.0=0xdced0000 MEMATTR=0xdbbae018 ACPI 2.0=0xd9b43018 RNG=0xd9b43a18 MEMRESERVE=0xd9b40218 Mar 17 17:40:57.888439 kernel: random: crng init done Mar 17 17:40:57.888446 kernel: secureboot: Secure boot disabled Mar 17 17:40:57.888452 kernel: ACPI: Early table checksum verification disabled Mar 17 17:40:57.888458 kernel: ACPI: RSDP 0x00000000D9B43018 000024 (v02 BOCHS ) Mar 17 17:40:57.888465 kernel: ACPI: XSDT 0x00000000D9B43F18 000064 (v01 BOCHS BXPC 00000001 01000013) Mar 17 17:40:57.888471 kernel: ACPI: FACP 0x00000000D9B43B18 000114 (v06 BOCHS BXPC 00000001 BXPC 00000001) Mar 17 17:40:57.888477 kernel: ACPI: DSDT 0x00000000D9B41018 0014A2 (v02 BOCHS BXPC 00000001 BXPC 00000001) Mar 17 17:40:57.888483 kernel: ACPI: APIC 0x00000000D9B43C98 0001A8 (v04 BOCHS BXPC 00000001 BXPC 00000001) Mar 17 17:40:57.888489 kernel: ACPI: PPTT 0x00000000D9B43098 00009C (v02 BOCHS BXPC 00000001 BXPC 00000001) Mar 17 17:40:57.888496 kernel: ACPI: GTDT 0x00000000D9B43818 000060 (v02 BOCHS BXPC 00000001 BXPC 00000001) Mar 17 17:40:57.888504 kernel: ACPI: MCFG 0x00000000D9B43A98 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001) Mar 17 17:40:57.888510 kernel: ACPI: SPCR 0x00000000D9B43918 000050 (v02 BOCHS BXPC 00000001 BXPC 00000001) Mar 17 17:40:57.888516 kernel: ACPI: DBG2 0x00000000D9B43998 000057 (v00 BOCHS BXPC 00000001 BXPC 00000001) Mar 17 17:40:57.888523 kernel: ACPI: IORT 0x00000000D9B43198 000080 (v03 BOCHS BXPC 00000001 BXPC 00000001) Mar 17 17:40:57.888528 kernel: ACPI: SPCR: console: pl011,mmio,0x9000000,9600 Mar 17 17:40:57.888535 kernel: NUMA: Failed to initialise from firmware Mar 17 17:40:57.888541 kernel: NUMA: Faking a node at [mem 0x0000000040000000-0x00000000dcffffff] Mar 17 17:40:57.888547 kernel: NUMA: NODE_DATA [mem 0xdc958800-0xdc95dfff] Mar 17 17:40:57.888553 kernel: Zone ranges: Mar 17 17:40:57.888559 kernel: DMA [mem 0x0000000040000000-0x00000000dcffffff] Mar 17 17:40:57.888566 kernel: DMA32 empty Mar 17 17:40:57.888572 kernel: Normal empty Mar 17 17:40:57.888579 kernel: Movable zone start for each node Mar 17 17:40:57.888585 kernel: Early memory node ranges Mar 17 17:40:57.888591 kernel: node 0: [mem 0x0000000040000000-0x00000000d967ffff] Mar 17 17:40:57.888597 kernel: node 0: [mem 0x00000000d9680000-0x00000000d968ffff] Mar 17 17:40:57.888618 kernel: node 0: [mem 0x00000000d9690000-0x00000000d976ffff] Mar 17 17:40:57.888624 kernel: node 0: [mem 0x00000000d9770000-0x00000000d9b3ffff] Mar 17 17:40:57.888631 kernel: node 0: [mem 0x00000000d9b40000-0x00000000dce1ffff] Mar 17 17:40:57.888637 kernel: node 0: [mem 0x00000000dce20000-0x00000000dceaffff] Mar 17 17:40:57.888643 kernel: node 0: [mem 0x00000000dceb0000-0x00000000dcebffff] Mar 17 17:40:57.888650 kernel: node 0: [mem 0x00000000dcec0000-0x00000000dcfdffff] Mar 17 17:40:57.888657 kernel: node 0: [mem 0x00000000dcfe0000-0x00000000dcffffff] Mar 17 17:40:57.888663 kernel: Initmem setup node 0 [mem 0x0000000040000000-0x00000000dcffffff] Mar 17 17:40:57.888670 kernel: On node 0, zone DMA: 12288 pages in unavailable ranges Mar 17 17:40:57.888678 kernel: psci: 
probing for conduit method from ACPI. Mar 17 17:40:57.888685 kernel: psci: PSCIv1.1 detected in firmware. Mar 17 17:40:57.888692 kernel: psci: Using standard PSCI v0.2 function IDs Mar 17 17:40:57.888700 kernel: psci: Trusted OS migration not required Mar 17 17:40:57.888706 kernel: psci: SMC Calling Convention v1.1 Mar 17 17:40:57.888713 kernel: smccc: KVM: hypervisor services detected (0x00000000 0x00000000 0x00000000 0x00000003) Mar 17 17:40:57.888719 kernel: percpu: Embedded 31 pages/cpu s86696 r8192 d32088 u126976 Mar 17 17:40:57.888726 kernel: pcpu-alloc: s86696 r8192 d32088 u126976 alloc=31*4096 Mar 17 17:40:57.888732 kernel: pcpu-alloc: [0] 0 [0] 1 [0] 2 [0] 3 Mar 17 17:40:57.888739 kernel: Detected PIPT I-cache on CPU0 Mar 17 17:40:57.888745 kernel: CPU features: detected: GIC system register CPU interface Mar 17 17:40:57.888752 kernel: CPU features: detected: Hardware dirty bit management Mar 17 17:40:57.888758 kernel: CPU features: detected: Spectre-v4 Mar 17 17:40:57.888766 kernel: CPU features: detected: Spectre-BHB Mar 17 17:40:57.888772 kernel: CPU features: kernel page table isolation forced ON by KASLR Mar 17 17:40:57.888779 kernel: CPU features: detected: Kernel page table isolation (KPTI) Mar 17 17:40:57.888785 kernel: CPU features: detected: ARM erratum 1418040 Mar 17 17:40:57.888791 kernel: CPU features: detected: SSBS not fully self-synchronizing Mar 17 17:40:57.888798 kernel: alternatives: applying boot alternatives Mar 17 17:40:57.888805 kernel: Kernel command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected acpi=force verity.usrhash=f8298a09e890fc732131b7281e24befaf65b596eb5216e969c8eca4cab4a2b3a Mar 17 17:40:57.888812 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space. Mar 17 17:40:57.888818 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear) Mar 17 17:40:57.888824 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear) Mar 17 17:40:57.888831 kernel: Fallback order for Node 0: 0 Mar 17 17:40:57.888838 kernel: Built 1 zonelists, mobility grouping on. Total pages: 633024 Mar 17 17:40:57.888856 kernel: Policy zone: DMA Mar 17 17:40:57.888864 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off Mar 17 17:40:57.888870 kernel: software IO TLB: area num 4. Mar 17 17:40:57.888876 kernel: software IO TLB: mapped [mem 0x00000000d2e00000-0x00000000d6e00000] (64MB) Mar 17 17:40:57.888883 kernel: Memory: 2387540K/2572288K available (10304K kernel code, 2186K rwdata, 8096K rodata, 38336K init, 897K bss, 184748K reserved, 0K cma-reserved) Mar 17 17:40:57.888890 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1 Mar 17 17:40:57.888896 kernel: rcu: Preemptible hierarchical RCU implementation. Mar 17 17:40:57.888903 kernel: rcu: RCU event tracing is enabled. Mar 17 17:40:57.888910 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4. Mar 17 17:40:57.888916 kernel: Trampoline variant of Tasks RCU enabled. Mar 17 17:40:57.888923 kernel: Tracing variant of Tasks RCU enabled. Mar 17 17:40:57.888931 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. 
Mar 17 17:40:57.888938 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4 Mar 17 17:40:57.888944 kernel: NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0 Mar 17 17:40:57.888950 kernel: GICv3: 256 SPIs implemented Mar 17 17:40:57.888956 kernel: GICv3: 0 Extended SPIs implemented Mar 17 17:40:57.888963 kernel: Root IRQ handler: gic_handle_irq Mar 17 17:40:57.888969 kernel: GICv3: GICv3 features: 16 PPIs, DirectLPI Mar 17 17:40:57.888976 kernel: GICv3: CPU0: found redistributor 0 region 0:0x00000000080a0000 Mar 17 17:40:57.888982 kernel: ITS [mem 0x08080000-0x0809ffff] Mar 17 17:40:57.888989 kernel: ITS@0x0000000008080000: allocated 8192 Devices @400c0000 (indirect, esz 8, psz 64K, shr 1) Mar 17 17:40:57.888995 kernel: ITS@0x0000000008080000: allocated 8192 Interrupt Collections @400d0000 (flat, esz 8, psz 64K, shr 1) Mar 17 17:40:57.889003 kernel: GICv3: using LPI property table @0x00000000400f0000 Mar 17 17:40:57.889009 kernel: GICv3: CPU0: using allocated LPI pending table @0x0000000040100000 Mar 17 17:40:57.889016 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention. Mar 17 17:40:57.889022 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Mar 17 17:40:57.889028 kernel: arch_timer: cp15 timer(s) running at 25.00MHz (virt). Mar 17 17:40:57.889035 kernel: clocksource: arch_sys_counter: mask: 0xffffffffffffff max_cycles: 0x5c40939b5, max_idle_ns: 440795202646 ns Mar 17 17:40:57.889042 kernel: sched_clock: 56 bits at 25MHz, resolution 40ns, wraps every 4398046511100ns Mar 17 17:40:57.889048 kernel: arm-pv: using stolen time PV Mar 17 17:40:57.889055 kernel: Console: colour dummy device 80x25 Mar 17 17:40:57.889061 kernel: ACPI: Core revision 20230628 Mar 17 17:40:57.889068 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 50.00 BogoMIPS (lpj=25000) Mar 17 17:40:57.889076 kernel: pid_max: default: 32768 minimum: 301 Mar 17 17:40:57.889083 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity Mar 17 17:40:57.889089 kernel: landlock: Up and running. Mar 17 17:40:57.889095 kernel: SELinux: Initializing. Mar 17 17:40:57.889102 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Mar 17 17:40:57.889108 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Mar 17 17:40:57.889115 kernel: RCU Tasks: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4. Mar 17 17:40:57.889122 kernel: RCU Tasks Trace: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4. Mar 17 17:40:57.889128 kernel: rcu: Hierarchical SRCU implementation. Mar 17 17:40:57.889136 kernel: rcu: Max phase no-delay instances is 400. Mar 17 17:40:57.889143 kernel: Platform MSI: ITS@0x8080000 domain created Mar 17 17:40:57.889149 kernel: PCI/MSI: ITS@0x8080000 domain created Mar 17 17:40:57.889156 kernel: Remapping and enabling EFI services. Mar 17 17:40:57.889162 kernel: smp: Bringing up secondary CPUs ... 
Mar 17 17:40:57.889169 kernel: Detected PIPT I-cache on CPU1 Mar 17 17:40:57.889175 kernel: GICv3: CPU1: found redistributor 1 region 0:0x00000000080c0000 Mar 17 17:40:57.889182 kernel: GICv3: CPU1: using allocated LPI pending table @0x0000000040110000 Mar 17 17:40:57.889189 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Mar 17 17:40:57.889196 kernel: CPU1: Booted secondary processor 0x0000000001 [0x413fd0c1] Mar 17 17:40:57.889203 kernel: Detected PIPT I-cache on CPU2 Mar 17 17:40:57.889214 kernel: GICv3: CPU2: found redistributor 2 region 0:0x00000000080e0000 Mar 17 17:40:57.889222 kernel: GICv3: CPU2: using allocated LPI pending table @0x0000000040120000 Mar 17 17:40:57.889229 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Mar 17 17:40:57.889236 kernel: CPU2: Booted secondary processor 0x0000000002 [0x413fd0c1] Mar 17 17:40:57.889243 kernel: Detected PIPT I-cache on CPU3 Mar 17 17:40:57.889250 kernel: GICv3: CPU3: found redistributor 3 region 0:0x0000000008100000 Mar 17 17:40:57.889257 kernel: GICv3: CPU3: using allocated LPI pending table @0x0000000040130000 Mar 17 17:40:57.889265 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Mar 17 17:40:57.889272 kernel: CPU3: Booted secondary processor 0x0000000003 [0x413fd0c1] Mar 17 17:40:57.889286 kernel: smp: Brought up 1 node, 4 CPUs Mar 17 17:40:57.889293 kernel: SMP: Total of 4 processors activated. Mar 17 17:40:57.889300 kernel: CPU features: detected: 32-bit EL0 Support Mar 17 17:40:57.889312 kernel: CPU features: detected: Data cache clean to the PoU not required for I/D coherence Mar 17 17:40:57.889319 kernel: CPU features: detected: Common not Private translations Mar 17 17:40:57.889326 kernel: CPU features: detected: CRC32 instructions Mar 17 17:40:57.889334 kernel: CPU features: detected: Enhanced Virtualization Traps Mar 17 17:40:57.889341 kernel: CPU features: detected: RCpc load-acquire (LDAPR) Mar 17 17:40:57.889348 kernel: CPU features: detected: LSE atomic instructions Mar 17 17:40:57.889355 kernel: CPU features: detected: Privileged Access Never Mar 17 17:40:57.889362 kernel: CPU features: detected: RAS Extension Support Mar 17 17:40:57.889369 kernel: CPU features: detected: Speculative Store Bypassing Safe (SSBS) Mar 17 17:40:57.889376 kernel: CPU: All CPU(s) started at EL1 Mar 17 17:40:57.889383 kernel: alternatives: applying system-wide alternatives Mar 17 17:40:57.889390 kernel: devtmpfs: initialized Mar 17 17:40:57.889400 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns Mar 17 17:40:57.889407 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear) Mar 17 17:40:57.889415 kernel: pinctrl core: initialized pinctrl subsystem Mar 17 17:40:57.889421 kernel: SMBIOS 3.0.0 present. 
Mar 17 17:40:57.889428 kernel: DMI: QEMU KVM Virtual Machine, BIOS unknown 02/02/2022 Mar 17 17:40:57.889435 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family Mar 17 17:40:57.889442 kernel: DMA: preallocated 512 KiB GFP_KERNEL pool for atomic allocations Mar 17 17:40:57.889449 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations Mar 17 17:40:57.889456 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations Mar 17 17:40:57.889464 kernel: audit: initializing netlink subsys (disabled) Mar 17 17:40:57.889472 kernel: audit: type=2000 audit(0.018:1): state=initialized audit_enabled=0 res=1 Mar 17 17:40:57.889478 kernel: thermal_sys: Registered thermal governor 'step_wise' Mar 17 17:40:57.889485 kernel: cpuidle: using governor menu Mar 17 17:40:57.889492 kernel: hw-breakpoint: found 6 breakpoint and 4 watchpoint registers. Mar 17 17:40:57.889499 kernel: ASID allocator initialised with 32768 entries Mar 17 17:40:57.889506 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 Mar 17 17:40:57.889513 kernel: Serial: AMBA PL011 UART driver Mar 17 17:40:57.889519 kernel: Modules: 2G module region forced by RANDOMIZE_MODULE_REGION_FULL Mar 17 17:40:57.889528 kernel: Modules: 0 pages in range for non-PLT usage Mar 17 17:40:57.889535 kernel: Modules: 509280 pages in range for PLT usage Mar 17 17:40:57.889542 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages Mar 17 17:40:57.889548 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 1.00 GiB page Mar 17 17:40:57.889555 kernel: HugeTLB: registered 32.0 MiB page size, pre-allocated 0 pages Mar 17 17:40:57.889562 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 32.0 MiB page Mar 17 17:40:57.889569 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages Mar 17 17:40:57.889576 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 2.00 MiB page Mar 17 17:40:57.889583 kernel: HugeTLB: registered 64.0 KiB page size, pre-allocated 0 pages Mar 17 17:40:57.889597 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 64.0 KiB page Mar 17 17:40:57.889604 kernel: ACPI: Added _OSI(Module Device) Mar 17 17:40:57.889611 kernel: ACPI: Added _OSI(Processor Device) Mar 17 17:40:57.889618 kernel: ACPI: Added _OSI(3.0 _SCP Extensions) Mar 17 17:40:57.889624 kernel: ACPI: Added _OSI(Processor Aggregator Device) Mar 17 17:40:57.889632 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded Mar 17 17:40:57.889638 kernel: ACPI: Interpreter enabled Mar 17 17:40:57.889645 kernel: ACPI: Using GIC for interrupt routing Mar 17 17:40:57.889652 kernel: ACPI: MCFG table detected, 1 entries Mar 17 17:40:57.889659 kernel: ARMH0011:00: ttyAMA0 at MMIO 0x9000000 (irq = 12, base_baud = 0) is a SBSA Mar 17 17:40:57.889667 kernel: printk: console [ttyAMA0] enabled Mar 17 17:40:57.889674 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff]) Mar 17 17:40:57.889819 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3] Mar 17 17:40:57.889912 kernel: acpi PNP0A08:00: _OSC: platform does not support [LTR] Mar 17 17:40:57.889980 kernel: acpi PNP0A08:00: _OSC: OS now controls [PCIeHotplug PME AER PCIeCapability] Mar 17 17:40:57.890043 kernel: acpi PNP0A08:00: ECAM area [mem 0x4010000000-0x401fffffff] reserved by PNP0C02:00 Mar 17 17:40:57.890104 kernel: acpi PNP0A08:00: ECAM at [mem 0x4010000000-0x401fffffff] for [bus 00-ff] Mar 17 17:40:57.890115 kernel: ACPI: Remapped I/O 0x000000003eff0000 to [io 0x0000-0xffff window] Mar 17 17:40:57.890122 
kernel: PCI host bridge to bus 0000:00 Mar 17 17:40:57.890190 kernel: pci_bus 0000:00: root bus resource [mem 0x10000000-0x3efeffff window] Mar 17 17:40:57.890250 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0xffff window] Mar 17 17:40:57.890319 kernel: pci_bus 0000:00: root bus resource [mem 0x8000000000-0xffffffffff window] Mar 17 17:40:57.890378 kernel: pci_bus 0000:00: root bus resource [bus 00-ff] Mar 17 17:40:57.890459 kernel: pci 0000:00:00.0: [1b36:0008] type 00 class 0x060000 Mar 17 17:40:57.890538 kernel: pci 0000:00:01.0: [1af4:1005] type 00 class 0x00ff00 Mar 17 17:40:57.890605 kernel: pci 0000:00:01.0: reg 0x10: [io 0x0000-0x001f] Mar 17 17:40:57.890669 kernel: pci 0000:00:01.0: reg 0x14: [mem 0x10000000-0x10000fff] Mar 17 17:40:57.890733 kernel: pci 0000:00:01.0: reg 0x20: [mem 0x8000000000-0x8000003fff 64bit pref] Mar 17 17:40:57.890797 kernel: pci 0000:00:01.0: BAR 4: assigned [mem 0x8000000000-0x8000003fff 64bit pref] Mar 17 17:40:57.890875 kernel: pci 0000:00:01.0: BAR 1: assigned [mem 0x10000000-0x10000fff] Mar 17 17:40:57.890942 kernel: pci 0000:00:01.0: BAR 0: assigned [io 0x1000-0x101f] Mar 17 17:40:57.891003 kernel: pci_bus 0000:00: resource 4 [mem 0x10000000-0x3efeffff window] Mar 17 17:40:57.891060 kernel: pci_bus 0000:00: resource 5 [io 0x0000-0xffff window] Mar 17 17:40:57.891119 kernel: pci_bus 0000:00: resource 6 [mem 0x8000000000-0xffffffffff window] Mar 17 17:40:57.891129 kernel: ACPI: PCI: Interrupt link GSI0 configured for IRQ 35 Mar 17 17:40:57.891136 kernel: ACPI: PCI: Interrupt link GSI1 configured for IRQ 36 Mar 17 17:40:57.891143 kernel: ACPI: PCI: Interrupt link GSI2 configured for IRQ 37 Mar 17 17:40:57.891150 kernel: ACPI: PCI: Interrupt link GSI3 configured for IRQ 38 Mar 17 17:40:57.891159 kernel: iommu: Default domain type: Translated Mar 17 17:40:57.891166 kernel: iommu: DMA domain TLB invalidation policy: strict mode Mar 17 17:40:57.891173 kernel: efivars: Registered efivars operations Mar 17 17:40:57.891180 kernel: vgaarb: loaded Mar 17 17:40:57.891187 kernel: clocksource: Switched to clocksource arch_sys_counter Mar 17 17:40:57.891194 kernel: VFS: Disk quotas dquot_6.6.0 Mar 17 17:40:57.891201 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) Mar 17 17:40:57.891208 kernel: pnp: PnP ACPI init Mar 17 17:40:57.891283 kernel: system 00:00: [mem 0x4010000000-0x401fffffff window] could not be reserved Mar 17 17:40:57.891296 kernel: pnp: PnP ACPI: found 1 devices Mar 17 17:40:57.891303 kernel: NET: Registered PF_INET protocol family Mar 17 17:40:57.891310 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear) Mar 17 17:40:57.891317 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear) Mar 17 17:40:57.891324 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) Mar 17 17:40:57.891331 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear) Mar 17 17:40:57.891338 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear) Mar 17 17:40:57.891345 kernel: TCP: Hash tables configured (established 32768 bind 32768) Mar 17 17:40:57.891354 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear) Mar 17 17:40:57.891361 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear) Mar 17 17:40:57.891368 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family Mar 17 17:40:57.891375 kernel: PCI: CLS 0 bytes, default 64 Mar 17 17:40:57.891382 kernel: kvm [1]: HYP mode not available 
Mar 17 17:40:57.891389 kernel: Initialise system trusted keyrings Mar 17 17:40:57.891396 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0 Mar 17 17:40:57.891403 kernel: Key type asymmetric registered Mar 17 17:40:57.891409 kernel: Asymmetric key parser 'x509' registered Mar 17 17:40:57.891416 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250) Mar 17 17:40:57.891425 kernel: io scheduler mq-deadline registered Mar 17 17:40:57.891431 kernel: io scheduler kyber registered Mar 17 17:40:57.891438 kernel: io scheduler bfq registered Mar 17 17:40:57.891445 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0C:00/input/input0 Mar 17 17:40:57.891452 kernel: ACPI: button: Power Button [PWRB] Mar 17 17:40:57.891459 kernel: ACPI: \_SB_.PCI0.GSI1: Enabled at IRQ 36 Mar 17 17:40:57.891526 kernel: virtio-pci 0000:00:01.0: enabling device (0005 -> 0007) Mar 17 17:40:57.891536 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Mar 17 17:40:57.891543 kernel: thunder_xcv, ver 1.0 Mar 17 17:40:57.891551 kernel: thunder_bgx, ver 1.0 Mar 17 17:40:57.891558 kernel: nicpf, ver 1.0 Mar 17 17:40:57.891565 kernel: nicvf, ver 1.0 Mar 17 17:40:57.891636 kernel: rtc-efi rtc-efi.0: registered as rtc0 Mar 17 17:40:57.891697 kernel: rtc-efi rtc-efi.0: setting system clock to 2025-03-17T17:40:57 UTC (1742233257) Mar 17 17:40:57.891706 kernel: hid: raw HID events driver (C) Jiri Kosina Mar 17 17:40:57.891713 kernel: hw perfevents: enabled with armv8_pmuv3_0 PMU driver, 7 counters available Mar 17 17:40:57.891720 kernel: watchdog: Delayed init of the lockup detector failed: -19 Mar 17 17:40:57.891730 kernel: watchdog: Hard watchdog permanently disabled Mar 17 17:40:57.891737 kernel: NET: Registered PF_INET6 protocol family Mar 17 17:40:57.891744 kernel: Segment Routing with IPv6 Mar 17 17:40:57.891750 kernel: In-situ OAM (IOAM) with IPv6 Mar 17 17:40:57.891757 kernel: NET: Registered PF_PACKET protocol family Mar 17 17:40:57.891764 kernel: Key type dns_resolver registered Mar 17 17:40:57.891771 kernel: registered taskstats version 1 Mar 17 17:40:57.891778 kernel: Loading compiled-in X.509 certificates Mar 17 17:40:57.891785 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.83-flatcar: f4ff2820cf7379ce82b759137d15b536f0a99b51' Mar 17 17:40:57.891793 kernel: Key type .fscrypt registered Mar 17 17:40:57.891800 kernel: Key type fscrypt-provisioning registered Mar 17 17:40:57.891806 kernel: ima: No TPM chip found, activating TPM-bypass! Mar 17 17:40:57.891813 kernel: ima: Allocated hash algorithm: sha1 Mar 17 17:40:57.891820 kernel: ima: No architecture policies found Mar 17 17:40:57.891827 kernel: alg: No test for fips(ansi_cprng) (fips_ansi_cprng) Mar 17 17:40:57.891834 kernel: clk: Disabling unused clocks Mar 17 17:40:57.891840 kernel: Freeing unused kernel memory: 38336K Mar 17 17:40:57.891904 kernel: Run /init as init process Mar 17 17:40:57.891916 kernel: with arguments: Mar 17 17:40:57.891922 kernel: /init Mar 17 17:40:57.891929 kernel: with environment: Mar 17 17:40:57.891936 kernel: HOME=/ Mar 17 17:40:57.891942 kernel: TERM=linux Mar 17 17:40:57.891949 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a Mar 17 17:40:57.891957 systemd[1]: Successfully made /usr/ read-only. 
Mar 17 17:40:57.891967 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE) Mar 17 17:40:57.891976 systemd[1]: Detected virtualization kvm. Mar 17 17:40:57.891983 systemd[1]: Detected architecture arm64. Mar 17 17:40:57.891991 systemd[1]: Running in initrd. Mar 17 17:40:57.891998 systemd[1]: No hostname configured, using default hostname. Mar 17 17:40:57.892005 systemd[1]: Hostname set to . Mar 17 17:40:57.892013 systemd[1]: Initializing machine ID from VM UUID. Mar 17 17:40:57.892020 systemd[1]: Queued start job for default target initrd.target. Mar 17 17:40:57.892027 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Mar 17 17:40:57.892036 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Mar 17 17:40:57.892044 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... Mar 17 17:40:57.892052 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Mar 17 17:40:57.892059 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... Mar 17 17:40:57.892067 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... Mar 17 17:40:57.892076 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132... Mar 17 17:40:57.892085 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr... Mar 17 17:40:57.892093 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Mar 17 17:40:57.892100 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Mar 17 17:40:57.892107 systemd[1]: Reached target paths.target - Path Units. Mar 17 17:40:57.892115 systemd[1]: Reached target slices.target - Slice Units. Mar 17 17:40:57.892122 systemd[1]: Reached target swap.target - Swaps. Mar 17 17:40:57.892130 systemd[1]: Reached target timers.target - Timer Units. Mar 17 17:40:57.892137 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. Mar 17 17:40:57.892144 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. Mar 17 17:40:57.892153 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Mar 17 17:40:57.892161 systemd[1]: Listening on systemd-journald.socket - Journal Sockets. Mar 17 17:40:57.892168 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Mar 17 17:40:57.892176 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Mar 17 17:40:57.892183 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Mar 17 17:40:57.892191 systemd[1]: Reached target sockets.target - Socket Units. Mar 17 17:40:57.892198 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... Mar 17 17:40:57.892205 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Mar 17 17:40:57.892214 systemd[1]: Finished network-cleanup.service - Network Cleanup. Mar 17 17:40:57.892222 systemd[1]: Starting systemd-fsck-usr.service... 
Mar 17 17:40:57.892229 systemd[1]: Starting systemd-journald.service - Journal Service... Mar 17 17:40:57.892236 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Mar 17 17:40:57.892244 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Mar 17 17:40:57.892251 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Mar 17 17:40:57.892259 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. Mar 17 17:40:57.892268 systemd[1]: Finished systemd-fsck-usr.service. Mar 17 17:40:57.892282 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Mar 17 17:40:57.892290 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Mar 17 17:40:57.892318 systemd-journald[237]: Collecting audit messages is disabled. Mar 17 17:40:57.892338 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Mar 17 17:40:57.892346 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Mar 17 17:40:57.892355 systemd-journald[237]: Journal started Mar 17 17:40:57.892372 systemd-journald[237]: Runtime Journal (/run/log/journal/1b93171f428346ad949c254c8f5aeb5f) is 5.9M, max 47.3M, 41.4M free. Mar 17 17:40:57.881266 systemd-modules-load[239]: Inserted module 'overlay' Mar 17 17:40:57.897507 systemd[1]: Started systemd-journald.service - Journal Service. Mar 17 17:40:57.897539 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Mar 17 17:40:57.899610 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Mar 17 17:40:57.902065 kernel: Bridge firewalling registered Mar 17 17:40:57.899739 systemd-modules-load[239]: Inserted module 'br_netfilter' Mar 17 17:40:57.902918 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Mar 17 17:40:57.904560 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Mar 17 17:40:57.907485 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Mar 17 17:40:57.908750 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Mar 17 17:40:57.911138 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Mar 17 17:40:57.913974 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... Mar 17 17:40:57.915671 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Mar 17 17:40:57.916805 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Mar 17 17:40:57.920114 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Mar 17 17:40:57.927511 dracut-cmdline[274]: dracut-dracut-053 Mar 17 17:40:57.929825 dracut-cmdline[274]: Using kernel command line parameters: rd.driver.pre=btrfs BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected acpi=force verity.usrhash=f8298a09e890fc732131b7281e24befaf65b596eb5216e969c8eca4cab4a2b3a Mar 17 17:40:57.953325 systemd-resolved[279]: Positive Trust Anchors: Mar 17 17:40:57.953341 systemd-resolved[279]: . 
IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Mar 17 17:40:57.953371 systemd-resolved[279]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Mar 17 17:40:57.957951 systemd-resolved[279]: Defaulting to hostname 'linux'. Mar 17 17:40:57.958952 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Mar 17 17:40:57.960717 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Mar 17 17:40:57.994869 kernel: SCSI subsystem initialized Mar 17 17:40:57.998862 kernel: Loading iSCSI transport class v2.0-870. Mar 17 17:40:58.006892 kernel: iscsi: registered transport (tcp) Mar 17 17:40:58.019041 kernel: iscsi: registered transport (qla4xxx) Mar 17 17:40:58.019063 kernel: QLogic iSCSI HBA Driver Mar 17 17:40:58.062100 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. Mar 17 17:40:58.074982 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... Mar 17 17:40:58.091168 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. Mar 17 17:40:58.091200 kernel: device-mapper: uevent: version 1.0.3 Mar 17 17:40:58.092875 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com Mar 17 17:40:58.139869 kernel: raid6: neonx8 gen() 15791 MB/s Mar 17 17:40:58.156860 kernel: raid6: neonx4 gen() 15832 MB/s Mar 17 17:40:58.173862 kernel: raid6: neonx2 gen() 13223 MB/s Mar 17 17:40:58.190860 kernel: raid6: neonx1 gen() 10483 MB/s Mar 17 17:40:58.207861 kernel: raid6: int64x8 gen() 6788 MB/s Mar 17 17:40:58.224872 kernel: raid6: int64x4 gen() 7352 MB/s Mar 17 17:40:58.241860 kernel: raid6: int64x2 gen() 6115 MB/s Mar 17 17:40:58.258860 kernel: raid6: int64x1 gen() 5056 MB/s Mar 17 17:40:58.258874 kernel: raid6: using algorithm neonx4 gen() 15832 MB/s Mar 17 17:40:58.275866 kernel: raid6: .... xor() 12432 MB/s, rmw enabled Mar 17 17:40:58.275878 kernel: raid6: using neon recovery algorithm Mar 17 17:40:58.281141 kernel: xor: measuring software checksum speed Mar 17 17:40:58.281156 kernel: 8regs : 21590 MB/sec Mar 17 17:40:58.281166 kernel: 32regs : 21693 MB/sec Mar 17 17:40:58.282087 kernel: arm64_neon : 27955 MB/sec Mar 17 17:40:58.282103 kernel: xor: using function: arm64_neon (27955 MB/sec) Mar 17 17:40:58.332885 kernel: Btrfs loaded, zoned=no, fsverity=no Mar 17 17:40:58.342886 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Mar 17 17:40:58.353028 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Mar 17 17:40:58.365705 systemd-udevd[462]: Using default interface naming scheme 'v255'. Mar 17 17:40:58.369404 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Mar 17 17:40:58.371756 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... 
Mar 17 17:40:58.386217 dracut-pre-trigger[470]: rd.md=0: removing MD RAID activation Mar 17 17:40:58.410779 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. Mar 17 17:40:58.419003 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Mar 17 17:40:58.457931 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Mar 17 17:40:58.470001 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... Mar 17 17:40:58.481411 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Mar 17 17:40:58.483547 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. Mar 17 17:40:58.484984 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Mar 17 17:40:58.487431 systemd[1]: Reached target remote-fs.target - Remote File Systems. Mar 17 17:40:58.493997 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Mar 17 17:40:58.506241 kernel: virtio_blk virtio1: 1/0/0 default/read/poll queues Mar 17 17:40:58.519053 kernel: virtio_blk virtio1: [vda] 19775488 512-byte logical blocks (10.1 GB/9.43 GiB) Mar 17 17:40:58.519193 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. Mar 17 17:40:58.519213 kernel: GPT:9289727 != 19775487 Mar 17 17:40:58.519223 kernel: GPT:Alternate GPT header not at the end of the disk. Mar 17 17:40:58.519232 kernel: GPT:9289727 != 19775487 Mar 17 17:40:58.519240 kernel: GPT: Use GNU Parted to correct GPT errors. Mar 17 17:40:58.519250 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Mar 17 17:40:58.508319 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Mar 17 17:40:58.523341 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Mar 17 17:40:58.523470 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Mar 17 17:40:58.527297 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Mar 17 17:40:58.528549 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Mar 17 17:40:58.528734 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Mar 17 17:40:58.532190 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Mar 17 17:40:58.538604 kernel: BTRFS: device fsid 5ecee764-de70-4de1-8711-3798360e0d13 devid 1 transid 39 /dev/vda3 scanned by (udev-worker) (521) Mar 17 17:40:58.538633 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/vda6 scanned by (udev-worker) (529) Mar 17 17:40:58.541077 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Mar 17 17:40:58.556251 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM. Mar 17 17:40:58.557425 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Mar 17 17:40:58.575009 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT. Mar 17 17:40:58.581055 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A. Mar 17 17:40:58.582013 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132. Mar 17 17:40:58.590144 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Mar 17 17:40:58.601990 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... 
Mar 17 17:40:58.603562 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Mar 17 17:40:58.608080 disk-uuid[555]: Primary Header is updated. Mar 17 17:40:58.608080 disk-uuid[555]: Secondary Entries is updated. Mar 17 17:40:58.608080 disk-uuid[555]: Secondary Header is updated. Mar 17 17:40:58.611869 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Mar 17 17:40:58.629841 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Mar 17 17:40:59.622012 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Mar 17 17:40:59.622709 disk-uuid[556]: The operation has completed successfully. Mar 17 17:40:59.647199 systemd[1]: disk-uuid.service: Deactivated successfully. Mar 17 17:40:59.647289 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Mar 17 17:40:59.679035 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... Mar 17 17:40:59.681460 sh[576]: Success Mar 17 17:40:59.692862 kernel: device-mapper: verity: sha256 using implementation "sha256-ce" Mar 17 17:40:59.718877 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. Mar 17 17:40:59.730616 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... Mar 17 17:40:59.732623 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. Mar 17 17:40:59.742327 kernel: BTRFS info (device dm-0): first mount of filesystem 5ecee764-de70-4de1-8711-3798360e0d13 Mar 17 17:40:59.742358 kernel: BTRFS info (device dm-0): using crc32c (crc32c-generic) checksum algorithm Mar 17 17:40:59.742368 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead Mar 17 17:40:59.742378 kernel: BTRFS info (device dm-0): disabling log replay at mount time Mar 17 17:40:59.743859 kernel: BTRFS info (device dm-0): using free space tree Mar 17 17:40:59.746611 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. Mar 17 17:40:59.747642 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Mar 17 17:40:59.756968 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Mar 17 17:40:59.758301 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... Mar 17 17:40:59.766236 kernel: BTRFS info (device vda6): first mount of filesystem 8369c249-c0a6-415d-8511-1f18dbf3bf45 Mar 17 17:40:59.766269 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm Mar 17 17:40:59.766280 kernel: BTRFS info (device vda6): using free space tree Mar 17 17:40:59.768893 kernel: BTRFS info (device vda6): auto enabling async discard Mar 17 17:40:59.776874 kernel: BTRFS info (device vda6): last unmount of filesystem 8369c249-c0a6-415d-8511-1f18dbf3bf45 Mar 17 17:40:59.781629 systemd[1]: Finished ignition-setup.service - Ignition (setup). Mar 17 17:40:59.786011 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... Mar 17 17:40:59.842890 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Mar 17 17:40:59.856030 systemd[1]: Starting systemd-networkd.service - Network Configuration... Mar 17 17:40:59.871128 systemd[1]: mnt-oem.mount: Deactivated successfully. 
Mar 17 17:40:59.879736 ignition[674]: Ignition 2.20.0 Mar 17 17:40:59.879744 ignition[674]: Stage: fetch-offline Mar 17 17:40:59.879775 ignition[674]: no configs at "/usr/lib/ignition/base.d" Mar 17 17:40:59.879783 ignition[674]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Mar 17 17:40:59.879937 ignition[674]: parsed url from cmdline: "" Mar 17 17:40:59.879941 ignition[674]: no config URL provided Mar 17 17:40:59.879945 ignition[674]: reading system config file "/usr/lib/ignition/user.ign" Mar 17 17:40:59.879952 ignition[674]: no config at "/usr/lib/ignition/user.ign" Mar 17 17:40:59.879973 ignition[674]: op(1): [started] loading QEMU firmware config module Mar 17 17:40:59.884433 systemd-networkd[770]: lo: Link UP Mar 17 17:40:59.879978 ignition[674]: op(1): executing: "modprobe" "qemu_fw_cfg" Mar 17 17:40:59.884436 systemd-networkd[770]: lo: Gained carrier Mar 17 17:40:59.888700 systemd-networkd[770]: Enumeration completed Mar 17 17:40:59.888830 ignition[674]: op(1): [finished] loading QEMU firmware config module Mar 17 17:40:59.888978 systemd[1]: Started systemd-networkd.service - Network Configuration. Mar 17 17:40:59.889094 systemd-networkd[770]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Mar 17 17:40:59.889098 systemd-networkd[770]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Mar 17 17:40:59.889614 systemd-networkd[770]: eth0: Link UP Mar 17 17:40:59.889617 systemd-networkd[770]: eth0: Gained carrier Mar 17 17:40:59.889623 systemd-networkd[770]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Mar 17 17:40:59.891119 systemd[1]: Reached target network.target - Network. Mar 17 17:40:59.911893 systemd-networkd[770]: eth0: DHCPv4 address 10.0.0.85/16, gateway 10.0.0.1 acquired from 10.0.0.1 Mar 17 17:40:59.917910 ignition[674]: parsing config with SHA512: 393787365f90c3c246f4331513454cf28b0362bb5c44906b16b8feb456bff4f73f79c1d13b8669724ae1255480c88be8f8a411d27c7dd114a781c7b5a39bb589 Mar 17 17:40:59.922743 unknown[674]: fetched base config from "system" Mar 17 17:40:59.922757 unknown[674]: fetched user config from "qemu" Mar 17 17:40:59.923325 ignition[674]: fetch-offline: fetch-offline passed Mar 17 17:40:59.923650 ignition[674]: Ignition finished successfully Mar 17 17:40:59.926814 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). Mar 17 17:40:59.928220 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json). Mar 17 17:40:59.936074 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... Mar 17 17:40:59.948693 ignition[778]: Ignition 2.20.0 Mar 17 17:40:59.948703 ignition[778]: Stage: kargs Mar 17 17:40:59.948879 ignition[778]: no configs at "/usr/lib/ignition/base.d" Mar 17 17:40:59.948889 ignition[778]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Mar 17 17:40:59.949702 ignition[778]: kargs: kargs passed Mar 17 17:40:59.949745 ignition[778]: Ignition finished successfully Mar 17 17:40:59.951969 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). Mar 17 17:40:59.961064 systemd[1]: Starting ignition-disks.service - Ignition (disks)... 
Mar 17 17:40:59.970372 ignition[787]: Ignition 2.20.0 Mar 17 17:40:59.970381 ignition[787]: Stage: disks Mar 17 17:40:59.970533 ignition[787]: no configs at "/usr/lib/ignition/base.d" Mar 17 17:40:59.972740 systemd[1]: Finished ignition-disks.service - Ignition (disks). Mar 17 17:40:59.970543 ignition[787]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Mar 17 17:40:59.976163 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Mar 17 17:40:59.971400 ignition[787]: disks: disks passed Mar 17 17:40:59.977556 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Mar 17 17:40:59.971443 ignition[787]: Ignition finished successfully Mar 17 17:40:59.979445 systemd[1]: Reached target local-fs.target - Local File Systems. Mar 17 17:40:59.980985 systemd[1]: Reached target sysinit.target - System Initialization. Mar 17 17:40:59.982286 systemd[1]: Reached target basic.target - Basic System. Mar 17 17:40:59.990968 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... Mar 17 17:41:00.000242 systemd-fsck[800]: ROOT: clean, 14/553520 files, 52654/553472 blocks Mar 17 17:41:00.003354 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Mar 17 17:41:00.012931 systemd[1]: Mounting sysroot.mount - /sysroot... Mar 17 17:41:00.055868 kernel: EXT4-fs (vda9): mounted filesystem 3914ef65-c5cd-468c-8ee7-964383d8e9e2 r/w with ordered data mode. Quota mode: none. Mar 17 17:41:00.055876 systemd[1]: Mounted sysroot.mount - /sysroot. Mar 17 17:41:00.056881 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Mar 17 17:41:00.069919 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Mar 17 17:41:00.071334 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Mar 17 17:41:00.072463 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met. Mar 17 17:41:00.072500 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Mar 17 17:41:00.077663 kernel: BTRFS: device label OEM devid 1 transid 15 /dev/vda6 scanned by mount (809) Mar 17 17:41:00.072521 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Mar 17 17:41:00.078755 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Mar 17 17:41:00.082034 kernel: BTRFS info (device vda6): first mount of filesystem 8369c249-c0a6-415d-8511-1f18dbf3bf45 Mar 17 17:41:00.082051 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm Mar 17 17:41:00.082061 kernel: BTRFS info (device vda6): using free space tree Mar 17 17:41:00.082622 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... Mar 17 17:41:00.084860 kernel: BTRFS info (device vda6): auto enabling async discard Mar 17 17:41:00.085792 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Mar 17 17:41:00.126023 initrd-setup-root[833]: cut: /sysroot/etc/passwd: No such file or directory Mar 17 17:41:00.129999 initrd-setup-root[840]: cut: /sysroot/etc/group: No such file or directory Mar 17 17:41:00.133755 initrd-setup-root[847]: cut: /sysroot/etc/shadow: No such file or directory Mar 17 17:41:00.137546 initrd-setup-root[854]: cut: /sysroot/etc/gshadow: No such file or directory Mar 17 17:41:00.209739 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. 
Mar 17 17:41:00.218933 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Mar 17 17:41:00.220275 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Mar 17 17:41:00.224864 kernel: BTRFS info (device vda6): last unmount of filesystem 8369c249-c0a6-415d-8511-1f18dbf3bf45 Mar 17 17:41:00.239057 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. Mar 17 17:41:00.241408 ignition[922]: INFO : Ignition 2.20.0 Mar 17 17:41:00.241408 ignition[922]: INFO : Stage: mount Mar 17 17:41:00.243458 ignition[922]: INFO : no configs at "/usr/lib/ignition/base.d" Mar 17 17:41:00.243458 ignition[922]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Mar 17 17:41:00.243458 ignition[922]: INFO : mount: mount passed Mar 17 17:41:00.243458 ignition[922]: INFO : Ignition finished successfully Mar 17 17:41:00.243810 systemd[1]: Finished ignition-mount.service - Ignition (mount). Mar 17 17:41:00.252982 systemd[1]: Starting ignition-files.service - Ignition (files)... Mar 17 17:41:00.872072 systemd[1]: sysroot-oem.mount: Deactivated successfully. Mar 17 17:41:00.883031 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Mar 17 17:41:00.889453 kernel: BTRFS: device label OEM devid 1 transid 16 /dev/vda6 scanned by mount (935) Mar 17 17:41:00.889486 kernel: BTRFS info (device vda6): first mount of filesystem 8369c249-c0a6-415d-8511-1f18dbf3bf45 Mar 17 17:41:00.889496 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm Mar 17 17:41:00.890857 kernel: BTRFS info (device vda6): using free space tree Mar 17 17:41:00.892866 kernel: BTRFS info (device vda6): auto enabling async discard Mar 17 17:41:00.893695 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Mar 17 17:41:00.909454 ignition[952]: INFO : Ignition 2.20.0 Mar 17 17:41:00.909454 ignition[952]: INFO : Stage: files Mar 17 17:41:00.910792 ignition[952]: INFO : no configs at "/usr/lib/ignition/base.d" Mar 17 17:41:00.910792 ignition[952]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Mar 17 17:41:00.910792 ignition[952]: DEBUG : files: compiled without relabeling support, skipping Mar 17 17:41:00.913730 ignition[952]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Mar 17 17:41:00.913730 ignition[952]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Mar 17 17:41:00.913730 ignition[952]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Mar 17 17:41:00.913730 ignition[952]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Mar 17 17:41:00.917861 ignition[952]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Mar 17 17:41:00.917861 ignition[952]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.0-linux-arm64.tar.gz" Mar 17 17:41:00.917861 ignition[952]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.0-linux-arm64.tar.gz: attempt #1 Mar 17 17:41:00.913862 unknown[952]: wrote ssh authorized keys file for user: core Mar 17 17:41:00.959693 ignition[952]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK Mar 17 17:41:01.081990 ignition[952]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.0-linux-arm64.tar.gz" Mar 17 17:41:01.083665 ignition[952]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file 
"/sysroot/home/core/install.sh" Mar 17 17:41:01.083665 ignition[952]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh" Mar 17 17:41:01.083665 ignition[952]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml" Mar 17 17:41:01.083665 ignition[952]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml" Mar 17 17:41:01.083665 ignition[952]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Mar 17 17:41:01.083665 ignition[952]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Mar 17 17:41:01.083665 ignition[952]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Mar 17 17:41:01.083665 ignition[952]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Mar 17 17:41:01.083665 ignition[952]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf" Mar 17 17:41:01.083665 ignition[952]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf" Mar 17 17:41:01.083665 ignition[952]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.0-arm64.raw" Mar 17 17:41:01.083665 ignition[952]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.0-arm64.raw" Mar 17 17:41:01.083665 ignition[952]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.0-arm64.raw" Mar 17 17:41:01.083665 ignition[952]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.32.0-arm64.raw: attempt #1 Mar 17 17:41:01.303245 ignition[952]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK Mar 17 17:41:01.455959 systemd-networkd[770]: eth0: Gained IPv6LL Mar 17 17:41:01.591285 ignition[952]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.0-arm64.raw" Mar 17 17:41:01.591285 ignition[952]: INFO : files: op(b): [started] processing unit "prepare-helm.service" Mar 17 17:41:01.593804 ignition[952]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Mar 17 17:41:01.593804 ignition[952]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Mar 17 17:41:01.593804 ignition[952]: INFO : files: op(b): [finished] processing unit "prepare-helm.service" Mar 17 17:41:01.593804 ignition[952]: INFO : files: op(d): [started] processing unit "coreos-metadata.service" Mar 17 17:41:01.593804 ignition[952]: INFO : files: op(d): op(e): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Mar 17 17:41:01.593804 ignition[952]: INFO : files: op(d): op(e): [finished] writing unit 
"coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Mar 17 17:41:01.593804 ignition[952]: INFO : files: op(d): [finished] processing unit "coreos-metadata.service" Mar 17 17:41:01.593804 ignition[952]: INFO : files: op(f): [started] setting preset to disabled for "coreos-metadata.service" Mar 17 17:41:01.609224 ignition[952]: INFO : files: op(f): op(10): [started] removing enablement symlink(s) for "coreos-metadata.service" Mar 17 17:41:01.612245 ignition[952]: INFO : files: op(f): op(10): [finished] removing enablement symlink(s) for "coreos-metadata.service" Mar 17 17:41:01.614460 ignition[952]: INFO : files: op(f): [finished] setting preset to disabled for "coreos-metadata.service" Mar 17 17:41:01.614460 ignition[952]: INFO : files: op(11): [started] setting preset to enabled for "prepare-helm.service" Mar 17 17:41:01.614460 ignition[952]: INFO : files: op(11): [finished] setting preset to enabled for "prepare-helm.service" Mar 17 17:41:01.614460 ignition[952]: INFO : files: createResultFile: createFiles: op(12): [started] writing file "/sysroot/etc/.ignition-result.json" Mar 17 17:41:01.614460 ignition[952]: INFO : files: createResultFile: createFiles: op(12): [finished] writing file "/sysroot/etc/.ignition-result.json" Mar 17 17:41:01.614460 ignition[952]: INFO : files: files passed Mar 17 17:41:01.614460 ignition[952]: INFO : Ignition finished successfully Mar 17 17:41:01.616555 systemd[1]: Finished ignition-files.service - Ignition (files). Mar 17 17:41:01.624991 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Mar 17 17:41:01.626453 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Mar 17 17:41:01.628202 systemd[1]: ignition-quench.service: Deactivated successfully. Mar 17 17:41:01.628301 systemd[1]: Finished ignition-quench.service - Ignition (record completion). Mar 17 17:41:01.633883 initrd-setup-root-after-ignition[981]: grep: /sysroot/oem/oem-release: No such file or directory Mar 17 17:41:01.636727 initrd-setup-root-after-ignition[983]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Mar 17 17:41:01.636727 initrd-setup-root-after-ignition[983]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Mar 17 17:41:01.639327 initrd-setup-root-after-ignition[987]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Mar 17 17:41:01.639707 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Mar 17 17:41:01.641399 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Mar 17 17:41:01.644011 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Mar 17 17:41:01.665468 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Mar 17 17:41:01.665605 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Mar 17 17:41:01.667564 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Mar 17 17:41:01.669046 systemd[1]: Reached target initrd.target - Initrd Default Target. Mar 17 17:41:01.670569 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Mar 17 17:41:01.671317 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Mar 17 17:41:01.685369 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. 
Mar 17 17:41:01.695010 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Mar 17 17:41:01.702277 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Mar 17 17:41:01.703500 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Mar 17 17:41:01.705246 systemd[1]: Stopped target timers.target - Timer Units. Mar 17 17:41:01.706772 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Mar 17 17:41:01.706903 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Mar 17 17:41:01.708945 systemd[1]: Stopped target initrd.target - Initrd Default Target. Mar 17 17:41:01.710739 systemd[1]: Stopped target basic.target - Basic System. Mar 17 17:41:01.712258 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Mar 17 17:41:01.713939 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Mar 17 17:41:01.715657 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Mar 17 17:41:01.717440 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Mar 17 17:41:01.719066 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Mar 17 17:41:01.720800 systemd[1]: Stopped target sysinit.target - System Initialization. Mar 17 17:41:01.722527 systemd[1]: Stopped target local-fs.target - Local File Systems. Mar 17 17:41:01.724012 systemd[1]: Stopped target swap.target - Swaps. Mar 17 17:41:01.725335 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Mar 17 17:41:01.725475 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Mar 17 17:41:01.727509 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Mar 17 17:41:01.729245 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Mar 17 17:41:01.730972 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Mar 17 17:41:01.732509 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Mar 17 17:41:01.734718 systemd[1]: dracut-initqueue.service: Deactivated successfully. Mar 17 17:41:01.734841 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Mar 17 17:41:01.737100 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Mar 17 17:41:01.737223 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Mar 17 17:41:01.739032 systemd[1]: Stopped target paths.target - Path Units. Mar 17 17:41:01.740501 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Mar 17 17:41:01.741871 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Mar 17 17:41:01.744088 systemd[1]: Stopped target slices.target - Slice Units. Mar 17 17:41:01.745089 systemd[1]: Stopped target sockets.target - Socket Units. Mar 17 17:41:01.746510 systemd[1]: iscsid.socket: Deactivated successfully. Mar 17 17:41:01.746597 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Mar 17 17:41:01.748127 systemd[1]: iscsiuio.socket: Deactivated successfully. Mar 17 17:41:01.748205 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Mar 17 17:41:01.749548 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Mar 17 17:41:01.749661 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. 
Mar 17 17:41:01.751180 systemd[1]: ignition-files.service: Deactivated successfully. Mar 17 17:41:01.751286 systemd[1]: Stopped ignition-files.service - Ignition (files). Mar 17 17:41:01.766032 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Mar 17 17:41:01.766970 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Mar 17 17:41:01.767101 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Mar 17 17:41:01.769982 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Mar 17 17:41:01.770828 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Mar 17 17:41:01.770961 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Mar 17 17:41:01.772626 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Mar 17 17:41:01.772804 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Mar 17 17:41:01.776358 ignition[1007]: INFO : Ignition 2.20.0 Mar 17 17:41:01.776358 ignition[1007]: INFO : Stage: umount Mar 17 17:41:01.777580 ignition[1007]: INFO : no configs at "/usr/lib/ignition/base.d" Mar 17 17:41:01.777580 ignition[1007]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Mar 17 17:41:01.777580 ignition[1007]: INFO : umount: umount passed Mar 17 17:41:01.777580 ignition[1007]: INFO : Ignition finished successfully Mar 17 17:41:01.780393 systemd[1]: ignition-mount.service: Deactivated successfully. Mar 17 17:41:01.780539 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Mar 17 17:41:01.782794 systemd[1]: sysroot-boot.mount: Deactivated successfully. Mar 17 17:41:01.785835 systemd[1]: initrd-cleanup.service: Deactivated successfully. Mar 17 17:41:01.785975 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Mar 17 17:41:01.788763 systemd[1]: Stopped target network.target - Network. Mar 17 17:41:01.789554 systemd[1]: ignition-disks.service: Deactivated successfully. Mar 17 17:41:01.789615 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Mar 17 17:41:01.790901 systemd[1]: ignition-kargs.service: Deactivated successfully. Mar 17 17:41:01.790947 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Mar 17 17:41:01.792435 systemd[1]: ignition-setup.service: Deactivated successfully. Mar 17 17:41:01.792476 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Mar 17 17:41:01.793748 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Mar 17 17:41:01.793785 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Mar 17 17:41:01.795424 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Mar 17 17:41:01.796585 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Mar 17 17:41:01.804142 systemd[1]: systemd-resolved.service: Deactivated successfully. Mar 17 17:41:01.804272 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Mar 17 17:41:01.807952 systemd[1]: run-credentials-systemd\x2dresolved.service.mount: Deactivated successfully. Mar 17 17:41:01.808194 systemd[1]: systemd-networkd.service: Deactivated successfully. Mar 17 17:41:01.808296 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Mar 17 17:41:01.811541 systemd[1]: run-credentials-systemd\x2dnetworkd.service.mount: Deactivated successfully. Mar 17 17:41:01.812167 systemd[1]: systemd-networkd.socket: Deactivated successfully. 
Mar 17 17:41:01.812219 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Mar 17 17:41:01.821940 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Mar 17 17:41:01.822631 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Mar 17 17:41:01.822692 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Mar 17 17:41:01.824478 systemd[1]: systemd-sysctl.service: Deactivated successfully. Mar 17 17:41:01.824521 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Mar 17 17:41:01.828305 systemd[1]: systemd-modules-load.service: Deactivated successfully. Mar 17 17:41:01.828358 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Mar 17 17:41:01.829247 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Mar 17 17:41:01.829287 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Mar 17 17:41:01.832139 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Mar 17 17:41:01.840165 systemd[1]: systemd-udevd.service: Deactivated successfully. Mar 17 17:41:01.840309 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Mar 17 17:41:01.842723 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Mar 17 17:41:01.842777 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Mar 17 17:41:01.843911 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Mar 17 17:41:01.843941 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Mar 17 17:41:01.845460 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Mar 17 17:41:01.845509 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Mar 17 17:41:01.847502 systemd[1]: dracut-cmdline.service: Deactivated successfully. Mar 17 17:41:01.847544 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Mar 17 17:41:01.849572 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Mar 17 17:41:01.849620 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Mar 17 17:41:01.869019 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Mar 17 17:41:01.869801 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Mar 17 17:41:01.869880 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Mar 17 17:41:01.872181 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Mar 17 17:41:01.872222 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Mar 17 17:41:01.874630 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully. Mar 17 17:41:01.874677 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev.service.mount: Deactivated successfully. Mar 17 17:41:01.874712 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully. Mar 17 17:41:01.874751 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup.service.mount: Deactivated successfully. Mar 17 17:41:01.875127 systemd[1]: sysroot-boot.service: Deactivated successfully. Mar 17 17:41:01.875233 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Mar 17 17:41:01.876213 systemd[1]: network-cleanup.service: Deactivated successfully. Mar 17 17:41:01.876306 systemd[1]: Stopped network-cleanup.service - Network Cleanup. 
Mar 17 17:41:01.877723 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Mar 17 17:41:01.877802 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Mar 17 17:41:01.880694 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Mar 17 17:41:01.881776 systemd[1]: initrd-setup-root.service: Deactivated successfully. Mar 17 17:41:01.881844 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Mar 17 17:41:01.883890 systemd[1]: Starting initrd-switch-root.service - Switch Root... Mar 17 17:41:01.893023 systemd[1]: Switching root. Mar 17 17:41:01.921689 systemd-journald[237]: Journal stopped Mar 17 17:41:02.667100 systemd-journald[237]: Received SIGTERM from PID 1 (systemd). Mar 17 17:41:02.667153 kernel: SELinux: policy capability network_peer_controls=1 Mar 17 17:41:02.667165 kernel: SELinux: policy capability open_perms=1 Mar 17 17:41:02.667175 kernel: SELinux: policy capability extended_socket_class=1 Mar 17 17:41:02.667184 kernel: SELinux: policy capability always_check_network=0 Mar 17 17:41:02.667193 kernel: SELinux: policy capability cgroup_seclabel=1 Mar 17 17:41:02.667203 kernel: SELinux: policy capability nnp_nosuid_transition=1 Mar 17 17:41:02.667212 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Mar 17 17:41:02.667225 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Mar 17 17:41:02.667239 kernel: audit: type=1403 audit(1742233262.088:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 Mar 17 17:41:02.667253 systemd[1]: Successfully loaded SELinux policy in 38.314ms. Mar 17 17:41:02.667269 systemd[1]: Relabeled /dev/, /dev/shm/, /run/ in 10.030ms. Mar 17 17:41:02.667282 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE) Mar 17 17:41:02.667293 systemd[1]: Detected virtualization kvm. Mar 17 17:41:02.667304 systemd[1]: Detected architecture arm64. Mar 17 17:41:02.667314 systemd[1]: Detected first boot. Mar 17 17:41:02.667326 systemd[1]: Initializing machine ID from VM UUID. Mar 17 17:41:02.667337 zram_generator::config[1053]: No configuration found. Mar 17 17:41:02.667351 kernel: NET: Registered PF_VSOCK protocol family Mar 17 17:41:02.667370 systemd[1]: Populated /etc with preset unit settings. Mar 17 17:41:02.667383 systemd[1]: run-credentials-systemd\x2djournald.service.mount: Deactivated successfully. Mar 17 17:41:02.667396 systemd[1]: initrd-switch-root.service: Deactivated successfully. Mar 17 17:41:02.667406 systemd[1]: Stopped initrd-switch-root.service - Switch Root. Mar 17 17:41:02.667417 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Mar 17 17:41:02.667428 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Mar 17 17:41:02.667441 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Mar 17 17:41:02.667452 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Mar 17 17:41:02.667463 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Mar 17 17:41:02.667473 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. 
Mar 17 17:41:02.667484 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Mar 17 17:41:02.667494 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Mar 17 17:41:02.667505 systemd[1]: Created slice user.slice - User and Session Slice. Mar 17 17:41:02.667516 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Mar 17 17:41:02.667529 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Mar 17 17:41:02.667539 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Mar 17 17:41:02.667551 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Mar 17 17:41:02.667561 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. Mar 17 17:41:02.667572 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Mar 17 17:41:02.667583 systemd[1]: Expecting device dev-ttyAMA0.device - /dev/ttyAMA0... Mar 17 17:41:02.667593 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Mar 17 17:41:02.667605 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. Mar 17 17:41:02.667617 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. Mar 17 17:41:02.667628 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. Mar 17 17:41:02.667639 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Mar 17 17:41:02.667649 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Mar 17 17:41:02.667661 systemd[1]: Reached target remote-fs.target - Remote File Systems. Mar 17 17:41:02.667672 systemd[1]: Reached target slices.target - Slice Units. Mar 17 17:41:02.667682 systemd[1]: Reached target swap.target - Swaps. Mar 17 17:41:02.667693 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Mar 17 17:41:02.667704 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Mar 17 17:41:02.667716 systemd[1]: Listening on systemd-creds.socket - Credential Encryption/Decryption. Mar 17 17:41:02.667727 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Mar 17 17:41:02.667738 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Mar 17 17:41:02.667748 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Mar 17 17:41:02.667759 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Mar 17 17:41:02.667770 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Mar 17 17:41:02.667781 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Mar 17 17:41:02.667792 systemd[1]: Mounting media.mount - External Media Directory... Mar 17 17:41:02.667802 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Mar 17 17:41:02.667814 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Mar 17 17:41:02.667825 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... Mar 17 17:41:02.667836 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Mar 17 17:41:02.667856 systemd[1]: Reached target machines.target - Containers. 
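The handoff from the initrd to the real root can be timed from the journald messages a little further up: the initrd journald logs "Journal stopped" at 17:41:01.921689 and only records the SIGTERM from the new PID 1 at 17:41:02.667100, and the log attributes 38.314 ms of that window to the SELinux policy load and 10.030 ms to relabelling /dev, /dev/shm and /run. A small Python sketch of the same arithmetic (the year is taken from the ISO timestamps containerd prints later in this log, since the journal prefix omits it):

    from datetime import datetime

    # Timestamps copied from the journald handoff messages above.
    fmt = "%Y %b %d %H:%M:%S.%f"
    journal_stopped = datetime.strptime("2025 Mar 17 17:41:01.921689", fmt)
    sigterm_logged  = datetime.strptime("2025 Mar 17 17:41:02.667100", fmt)

    gap_ms = (sigterm_logged - journal_stopped).total_seconds() * 1000
    print(f"initrd -> real root handoff window: {gap_ms:.1f} ms")   # ~745.4 ms
    print("  of which SELinux policy load:        38.314 ms")
    print("  of which /dev,/dev/shm,/run relabel: 10.030 ms")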
Mar 17 17:41:02.667869 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Mar 17 17:41:02.667881 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Mar 17 17:41:02.667893 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Mar 17 17:41:02.667903 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Mar 17 17:41:02.667916 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Mar 17 17:41:02.667927 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Mar 17 17:41:02.667937 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Mar 17 17:41:02.667948 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Mar 17 17:41:02.667958 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Mar 17 17:41:02.667969 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Mar 17 17:41:02.667979 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Mar 17 17:41:02.667990 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. Mar 17 17:41:02.668000 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Mar 17 17:41:02.668012 systemd[1]: Stopped systemd-fsck-usr.service. Mar 17 17:41:02.668023 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Mar 17 17:41:02.668034 kernel: ACPI: bus type drm_connector registered Mar 17 17:41:02.668044 kernel: loop: module loaded Mar 17 17:41:02.668053 kernel: fuse: init (API version 7.39) Mar 17 17:41:02.668063 systemd[1]: Starting systemd-journald.service - Journal Service... Mar 17 17:41:02.668073 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Mar 17 17:41:02.668084 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Mar 17 17:41:02.668094 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Mar 17 17:41:02.668107 systemd[1]: Starting systemd-udev-load-credentials.service - Load udev Rules from Credentials... Mar 17 17:41:02.668118 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Mar 17 17:41:02.668129 systemd[1]: verity-setup.service: Deactivated successfully. Mar 17 17:41:02.668139 systemd[1]: Stopped verity-setup.service. Mar 17 17:41:02.668169 systemd-journald[1135]: Collecting audit messages is disabled. Mar 17 17:41:02.668191 systemd-journald[1135]: Journal started Mar 17 17:41:02.668212 systemd-journald[1135]: Runtime Journal (/run/log/journal/1b93171f428346ad949c254c8f5aeb5f) is 5.9M, max 47.3M, 41.4M free. Mar 17 17:41:02.478898 systemd[1]: Queued start job for default target multi-user.target. Mar 17 17:41:02.490682 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6. Mar 17 17:41:02.491089 systemd[1]: systemd-journald.service: Deactivated successfully. Mar 17 17:41:02.669874 systemd[1]: Started systemd-journald.service - Journal Service. Mar 17 17:41:02.670482 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. 
Mar 17 17:41:02.671381 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Mar 17 17:41:02.672287 systemd[1]: Mounted media.mount - External Media Directory. Mar 17 17:41:02.673190 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Mar 17 17:41:02.674174 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Mar 17 17:41:02.675057 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Mar 17 17:41:02.676048 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Mar 17 17:41:02.677195 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Mar 17 17:41:02.678402 systemd[1]: modprobe@configfs.service: Deactivated successfully. Mar 17 17:41:02.678567 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Mar 17 17:41:02.679691 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Mar 17 17:41:02.679885 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Mar 17 17:41:02.680928 systemd[1]: modprobe@drm.service: Deactivated successfully. Mar 17 17:41:02.681098 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Mar 17 17:41:02.682084 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Mar 17 17:41:02.682230 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Mar 17 17:41:02.683394 systemd[1]: modprobe@fuse.service: Deactivated successfully. Mar 17 17:41:02.683552 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Mar 17 17:41:02.684622 systemd[1]: modprobe@loop.service: Deactivated successfully. Mar 17 17:41:02.684786 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Mar 17 17:41:02.686022 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Mar 17 17:41:02.687085 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Mar 17 17:41:02.688454 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Mar 17 17:41:02.689779 systemd[1]: Finished systemd-udev-load-credentials.service - Load udev Rules from Credentials. Mar 17 17:41:02.701747 systemd[1]: Reached target network-pre.target - Preparation for Network. Mar 17 17:41:02.712940 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Mar 17 17:41:02.714723 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Mar 17 17:41:02.715627 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Mar 17 17:41:02.715656 systemd[1]: Reached target local-fs.target - Local File Systems. Mar 17 17:41:02.717482 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management. Mar 17 17:41:02.719494 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Mar 17 17:41:02.721331 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Mar 17 17:41:02.722229 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Mar 17 17:41:02.723538 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Mar 17 17:41:02.725253 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... 
Mar 17 17:41:02.726259 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Mar 17 17:41:02.730022 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Mar 17 17:41:02.730998 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Mar 17 17:41:02.731839 systemd-journald[1135]: Time spent on flushing to /var/log/journal/1b93171f428346ad949c254c8f5aeb5f is 24.028ms for 866 entries. Mar 17 17:41:02.731839 systemd-journald[1135]: System Journal (/var/log/journal/1b93171f428346ad949c254c8f5aeb5f) is 8M, max 195.6M, 187.6M free. Mar 17 17:41:02.768989 systemd-journald[1135]: Received client request to flush runtime journal. Mar 17 17:41:02.733130 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Mar 17 17:41:02.737162 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Mar 17 17:41:02.739706 systemd[1]: Starting systemd-sysusers.service - Create System Users... Mar 17 17:41:02.744879 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Mar 17 17:41:02.747160 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Mar 17 17:41:02.748448 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Mar 17 17:41:02.749578 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Mar 17 17:41:02.750880 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Mar 17 17:41:02.754072 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Mar 17 17:41:02.765047 systemd[1]: Starting systemd-machine-id-commit.service - Save Transient machine-id to Disk... Mar 17 17:41:02.767035 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization... Mar 17 17:41:02.771350 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Mar 17 17:41:02.774056 kernel: loop0: detected capacity change from 0 to 113512 Mar 17 17:41:02.783324 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Mar 17 17:41:02.785043 systemd[1]: Finished systemd-sysusers.service - Create System Users. Mar 17 17:41:02.792083 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Mar 17 17:41:02.792870 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Mar 17 17:41:02.795591 udevadm[1181]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation-early.service, lvm2-activation.service not to pull it in. Mar 17 17:41:02.801537 systemd[1]: Finished systemd-machine-id-commit.service - Save Transient machine-id to Disk. Mar 17 17:41:02.818108 systemd-tmpfiles[1188]: ACLs are not supported, ignoring. Mar 17 17:41:02.818127 systemd-tmpfiles[1188]: ACLs are not supported, ignoring. Mar 17 17:41:02.822987 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. 
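The journald flush statistics above (24.028 ms to move 866 entries from the runtime journal into /var/log/journal) work out to roughly 28 µs per entry; a one-line check of that figure:

    # Figures copied from the systemd-journald flush message above.
    flush_ms, entries = 24.028, 866
    print(f"{flush_ms / entries * 1000:.1f} us per journal entry")  # ~27.7 us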
Mar 17 17:41:02.848872 kernel: loop1: detected capacity change from 0 to 201592 Mar 17 17:41:02.890058 kernel: loop2: detected capacity change from 0 to 123192 Mar 17 17:41:02.937876 kernel: loop3: detected capacity change from 0 to 113512 Mar 17 17:41:02.944927 kernel: loop4: detected capacity change from 0 to 201592 Mar 17 17:41:02.951870 kernel: loop5: detected capacity change from 0 to 123192 Mar 17 17:41:02.955514 (sd-merge)[1195]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes'. Mar 17 17:41:02.955906 (sd-merge)[1195]: Merged extensions into '/usr'. Mar 17 17:41:02.959158 systemd[1]: Reload requested from client PID 1170 ('systemd-sysext') (unit systemd-sysext.service)... Mar 17 17:41:02.959173 systemd[1]: Reloading... Mar 17 17:41:02.995877 zram_generator::config[1223]: No configuration found. Mar 17 17:41:02.997601 ldconfig[1165]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Mar 17 17:41:03.093831 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Mar 17 17:41:03.142618 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Mar 17 17:41:03.143271 systemd[1]: Reloading finished in 183 ms. Mar 17 17:41:03.168648 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Mar 17 17:41:03.171869 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Mar 17 17:41:03.190014 systemd[1]: Starting ensure-sysext.service... Mar 17 17:41:03.191583 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Mar 17 17:41:03.197034 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Mar 17 17:41:03.201061 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Mar 17 17:41:03.203969 systemd[1]: Reload requested from client PID 1257 ('systemctl') (unit ensure-sysext.service)... Mar 17 17:41:03.203984 systemd[1]: Reloading... Mar 17 17:41:03.206590 systemd-tmpfiles[1258]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Mar 17 17:41:03.206792 systemd-tmpfiles[1258]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Mar 17 17:41:03.207533 systemd-tmpfiles[1258]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Mar 17 17:41:03.207743 systemd-tmpfiles[1258]: ACLs are not supported, ignoring. Mar 17 17:41:03.207793 systemd-tmpfiles[1258]: ACLs are not supported, ignoring. Mar 17 17:41:03.210490 systemd-tmpfiles[1258]: Detected autofs mount point /boot during canonicalization of boot. Mar 17 17:41:03.210503 systemd-tmpfiles[1258]: Skipping /boot Mar 17 17:41:03.218617 systemd-tmpfiles[1258]: Detected autofs mount point /boot during canonicalization of boot. Mar 17 17:41:03.218633 systemd-tmpfiles[1258]: Skipping /boot Mar 17 17:41:03.225397 systemd-udevd[1261]: Using default interface naming scheme 'v255'. Mar 17 17:41:03.262875 zram_generator::config[1296]: No configuration found. 
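Each of the three loop capacities above appears twice (loop0-2 and loop3-5), consistent with the same three sysext images behind 'containerd-flatcar', 'docker-flatcar' and 'kubernetes' being set up twice during the merge. The kernel reports loop capacities in 512-byte sectors, so the images are in the 55-100 MiB range; a quick conversion, with the pairing of size to extension name left open since the log does not state it:

    # Capacities (in 512-byte sectors) copied from the loop0-loop5 messages above.
    SECTOR = 512
    for sectors in (113512, 201592, 123192):
        print(f"{sectors} sectors = {sectors * SECTOR / 2**20:.1f} MiB")
    # ~55.4 MiB, ~98.4 MiB and ~60.2 MiB respectively.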
Mar 17 17:41:03.293903 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 39 scanned by (udev-worker) (1303) Mar 17 17:41:03.357243 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Mar 17 17:41:03.425604 systemd[1]: Condition check resulted in dev-ttyAMA0.device - /dev/ttyAMA0 being skipped. Mar 17 17:41:03.425763 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Mar 17 17:41:03.427294 systemd[1]: Reloading finished in 223 ms. Mar 17 17:41:03.436333 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Mar 17 17:41:03.452929 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Mar 17 17:41:03.471122 systemd[1]: Finished ensure-sysext.service. Mar 17 17:41:03.472235 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization. Mar 17 17:41:03.499022 systemd[1]: Starting audit-rules.service - Load Audit Rules... Mar 17 17:41:03.501136 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Mar 17 17:41:03.502326 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Mar 17 17:41:03.503342 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes... Mar 17 17:41:03.507901 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Mar 17 17:41:03.510889 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Mar 17 17:41:03.514774 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Mar 17 17:41:03.518024 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Mar 17 17:41:03.519170 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Mar 17 17:41:03.525131 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Mar 17 17:41:03.526322 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Mar 17 17:41:03.528531 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Mar 17 17:41:03.529161 lvm[1355]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Mar 17 17:41:03.532110 systemd[1]: Starting systemd-networkd.service - Network Configuration... Mar 17 17:41:03.537841 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Mar 17 17:41:03.542041 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization... Mar 17 17:41:03.545053 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Mar 17 17:41:03.551316 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Mar 17 17:41:03.554858 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Mar 17 17:41:03.555003 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Mar 17 17:41:03.558268 systemd[1]: modprobe@drm.service: Deactivated successfully. 
Mar 17 17:41:03.558431 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Mar 17 17:41:03.559487 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Mar 17 17:41:03.559627 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Mar 17 17:41:03.560952 systemd[1]: modprobe@loop.service: Deactivated successfully. Mar 17 17:41:03.561099 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Mar 17 17:41:03.562222 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Mar 17 17:41:03.566405 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes. Mar 17 17:41:03.567784 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Mar 17 17:41:03.569255 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Mar 17 17:41:03.572178 augenrules[1388]: No rules Mar 17 17:41:03.573178 systemd[1]: audit-rules.service: Deactivated successfully. Mar 17 17:41:03.573367 systemd[1]: Finished audit-rules.service - Load Audit Rules. Mar 17 17:41:03.580975 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Mar 17 17:41:03.582482 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Mar 17 17:41:03.589013 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes... Mar 17 17:41:03.589797 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Mar 17 17:41:03.589881 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Mar 17 17:41:03.590949 systemd[1]: Starting systemd-update-done.service - Update is Completed... Mar 17 17:41:03.593016 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Mar 17 17:41:03.593734 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Mar 17 17:41:03.594178 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Mar 17 17:41:03.594391 lvm[1404]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Mar 17 17:41:03.602899 systemd[1]: Finished systemd-update-done.service - Update is Completed. Mar 17 17:41:03.626741 systemd[1]: Started systemd-userdbd.service - User Database Manager. Mar 17 17:41:03.640077 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes. Mar 17 17:41:03.687404 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization. Mar 17 17:41:03.688493 systemd[1]: Reached target time-set.target - System Time Set. Mar 17 17:41:03.693361 systemd-networkd[1367]: lo: Link UP Mar 17 17:41:03.693369 systemd-networkd[1367]: lo: Gained carrier Mar 17 17:41:03.696129 systemd-resolved[1371]: Positive Trust Anchors: Mar 17 17:41:03.696146 systemd-resolved[1371]: . 
IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Mar 17 17:41:03.696178 systemd-resolved[1371]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Mar 17 17:41:03.697362 systemd-networkd[1367]: Enumeration completed Mar 17 17:41:03.697458 systemd[1]: Started systemd-networkd.service - Network Configuration. Mar 17 17:41:03.697802 systemd-networkd[1367]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Mar 17 17:41:03.697809 systemd-networkd[1367]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Mar 17 17:41:03.698230 systemd-networkd[1367]: eth0: Link UP Mar 17 17:41:03.698237 systemd-networkd[1367]: eth0: Gained carrier Mar 17 17:41:03.698250 systemd-networkd[1367]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Mar 17 17:41:03.703832 systemd-resolved[1371]: Defaulting to hostname 'linux'. Mar 17 17:41:03.709998 systemd[1]: Starting systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd... Mar 17 17:41:03.711773 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Mar 17 17:41:03.712749 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Mar 17 17:41:03.713932 systemd[1]: Reached target network.target - Network. Mar 17 17:41:03.714591 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Mar 17 17:41:03.715528 systemd[1]: Reached target sysinit.target - System Initialization. Mar 17 17:41:03.716339 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Mar 17 17:41:03.717220 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Mar 17 17:41:03.718242 systemd[1]: Started logrotate.timer - Daily rotation of log files. Mar 17 17:41:03.719106 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Mar 17 17:41:03.720031 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Mar 17 17:41:03.720914 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Mar 17 17:41:03.720936 systemd[1]: Reached target paths.target - Path Units. Mar 17 17:41:03.721565 systemd[1]: Reached target timers.target - Timer Units. Mar 17 17:41:03.722884 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Mar 17 17:41:03.723898 systemd-networkd[1367]: eth0: DHCPv4 address 10.0.0.85/16, gateway 10.0.0.1 acquired from 10.0.0.1 Mar 17 17:41:03.724834 systemd[1]: Starting docker.socket - Docker Socket for the API... Mar 17 17:41:03.727261 systemd[1]: Listening on sshd-unix-local.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_UNIX Local). 
Mar 17 17:41:03.728328 systemd[1]: Listening on sshd-vsock.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_VSOCK). Mar 17 17:41:03.728463 systemd-timesyncd[1372]: Network configuration changed, trying to establish connection. Mar 17 17:41:03.729236 systemd[1]: Reached target ssh-access.target - SSH Access Available. Mar 17 17:41:03.729778 systemd-timesyncd[1372]: Contacted time server 10.0.0.1:123 (10.0.0.1). Mar 17 17:41:03.729820 systemd-timesyncd[1372]: Initial clock synchronization to Mon 2025-03-17 17:41:04.019838 UTC. Mar 17 17:41:03.731996 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Mar 17 17:41:03.733094 systemd[1]: Listening on systemd-hostnamed.socket - Hostname Service Socket. Mar 17 17:41:03.735889 systemd[1]: Finished systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd. Mar 17 17:41:03.736931 systemd[1]: Listening on docker.socket - Docker Socket for the API. Mar 17 17:41:03.738278 systemd[1]: Reached target sockets.target - Socket Units. Mar 17 17:41:03.739038 systemd[1]: Reached target basic.target - Basic System. Mar 17 17:41:03.739734 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Mar 17 17:41:03.739766 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Mar 17 17:41:03.740770 systemd[1]: Starting containerd.service - containerd container runtime... Mar 17 17:41:03.742453 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Mar 17 17:41:03.744009 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Mar 17 17:41:03.747052 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Mar 17 17:41:03.747824 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Mar 17 17:41:03.749011 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Mar 17 17:41:03.753743 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Mar 17 17:41:03.754612 jq[1427]: false Mar 17 17:41:03.755403 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Mar 17 17:41:03.757440 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Mar 17 17:41:03.763033 systemd[1]: Starting systemd-logind.service - User Login Management... Mar 17 17:41:03.765123 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Mar 17 17:41:03.765510 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. 
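systemd-timesyncd's first synchronization above also shows how far the guest clock was off: the entry is stamped 17:41:03.729820 by the journal but reports a synchronized time of 17:41:04.019838 UTC, a forward step of roughly 0.29 s. A small check of that arithmetic (it assumes the journal prefix reflects the pre-step clock in the same UTC offset as the reported time):

    from datetime import datetime

    # Both stamps copied from the systemd-timesyncd line above.
    fmt = "%Y-%m-%d %H:%M:%S.%f"
    before_sync = datetime.strptime("2025-03-17 17:41:03.729820", fmt)
    after_sync  = datetime.strptime("2025-03-17 17:41:04.019838", fmt)
    step = (after_sync - before_sync).total_seconds()
    print(f"clock stepped forward by {step:.3f} s")  # ~0.290 s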
Mar 17 17:41:03.767566 extend-filesystems[1428]: Found loop3 Mar 17 17:41:03.767566 extend-filesystems[1428]: Found loop4 Mar 17 17:41:03.767566 extend-filesystems[1428]: Found loop5 Mar 17 17:41:03.767566 extend-filesystems[1428]: Found vda Mar 17 17:41:03.767566 extend-filesystems[1428]: Found vda1 Mar 17 17:41:03.778330 extend-filesystems[1428]: Found vda2 Mar 17 17:41:03.778330 extend-filesystems[1428]: Found vda3 Mar 17 17:41:03.778330 extend-filesystems[1428]: Found usr Mar 17 17:41:03.778330 extend-filesystems[1428]: Found vda4 Mar 17 17:41:03.778330 extend-filesystems[1428]: Found vda6 Mar 17 17:41:03.778330 extend-filesystems[1428]: Found vda7 Mar 17 17:41:03.778330 extend-filesystems[1428]: Found vda9 Mar 17 17:41:03.778330 extend-filesystems[1428]: Checking size of /dev/vda9 Mar 17 17:41:03.776890 dbus-daemon[1426]: [system] SELinux support is enabled Mar 17 17:41:03.771143 systemd[1]: Starting update-engine.service - Update Engine... Mar 17 17:41:03.787365 extend-filesystems[1428]: Resized partition /dev/vda9 Mar 17 17:41:03.772974 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Mar 17 17:41:03.777098 systemd[1]: Started dbus.service - D-Bus System Message Bus. Mar 17 17:41:03.779707 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Mar 17 17:41:03.780953 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Mar 17 17:41:03.781224 systemd[1]: motdgen.service: Deactivated successfully. Mar 17 17:41:03.781370 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Mar 17 17:41:03.787318 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Mar 17 17:41:03.788899 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Mar 17 17:41:03.805415 (ntainerd)[1452]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Mar 17 17:41:03.808053 extend-filesystems[1451]: resize2fs 1.47.1 (20-May-2024) Mar 17 17:41:03.812977 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 1864699 blocks Mar 17 17:41:03.807045 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Mar 17 17:41:03.813049 jq[1446]: true Mar 17 17:41:03.807073 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Mar 17 17:41:03.809067 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Mar 17 17:41:03.809084 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Mar 17 17:41:03.826786 update_engine[1437]: I20250317 17:41:03.825661 1437 main.cc:92] Flatcar Update Engine starting Mar 17 17:41:03.829079 tar[1450]: linux-arm64/LICENSE Mar 17 17:41:03.855262 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 39 scanned by (udev-worker) (1295) Mar 17 17:41:03.855295 kernel: EXT4-fs (vda9): resized filesystem to 1864699 Mar 17 17:41:03.855307 tar[1450]: linux-arm64/helm Mar 17 17:41:03.833176 systemd[1]: Started update-engine.service - Update Engine. 
Mar 17 17:41:03.855403 update_engine[1437]: I20250317 17:41:03.839879 1437 update_check_scheduler.cc:74] Next update check in 6m3s Mar 17 17:41:03.848111 systemd[1]: Started locksmithd.service - Cluster reboot manager. Mar 17 17:41:03.855484 jq[1460]: true Mar 17 17:41:03.862343 extend-filesystems[1451]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required Mar 17 17:41:03.862343 extend-filesystems[1451]: old_desc_blocks = 1, new_desc_blocks = 1 Mar 17 17:41:03.862343 extend-filesystems[1451]: The filesystem on /dev/vda9 is now 1864699 (4k) blocks long. Mar 17 17:41:03.875022 extend-filesystems[1428]: Resized filesystem in /dev/vda9 Mar 17 17:41:03.870916 systemd[1]: extend-filesystems.service: Deactivated successfully. Mar 17 17:41:03.871100 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Mar 17 17:41:03.871656 systemd-logind[1434]: Watching system buttons on /dev/input/event0 (Power Button) Mar 17 17:41:03.872757 systemd-logind[1434]: New seat seat0. Mar 17 17:41:03.878039 systemd[1]: Started systemd-logind.service - User Login Management. Mar 17 17:41:03.917303 locksmithd[1463]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Mar 17 17:41:03.923166 bash[1492]: Updated "/home/core/.ssh/authorized_keys" Mar 17 17:41:03.924695 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Mar 17 17:41:03.926844 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met. Mar 17 17:41:04.011437 containerd[1452]: time="2025-03-17T17:41:04.011347157Z" level=info msg="starting containerd" revision=9b2ad7760328148397346d10c7b2004271249db4 version=v1.7.23 Mar 17 17:41:04.036163 containerd[1452]: time="2025-03-17T17:41:04.035951200Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Mar 17 17:41:04.037350 containerd[1452]: time="2025-03-17T17:41:04.037315340Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.83-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Mar 17 17:41:04.037350 containerd[1452]: time="2025-03-17T17:41:04.037347422Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Mar 17 17:41:04.037435 containerd[1452]: time="2025-03-17T17:41:04.037363422Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Mar 17 17:41:04.037560 containerd[1452]: time="2025-03-17T17:41:04.037514054Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1 Mar 17 17:41:04.037560 containerd[1452]: time="2025-03-17T17:41:04.037543235Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1 Mar 17 17:41:04.037623 containerd[1452]: time="2025-03-17T17:41:04.037603919Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1 Mar 17 17:41:04.037656 containerd[1452]: time="2025-03-17T17:41:04.037621618Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." 
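The extend-filesystems output above records the root filesystem growing from 553472 to 1864699 blocks of 4 KiB, i.e. from roughly 2.1 GiB (presumably the as-built image size) to about 7.1 GiB, the full size of /dev/vda9. In Python terms:

    # Block counts copied from the EXT4 / resize2fs messages above; 4 KiB blocks
    # per the "(4k) blocks" note in the resize2fs output.
    BLOCK = 4096
    for blocks in (553472, 1864699):
        print(f"{blocks:>8} blocks = {blocks * BLOCK / 2**30:.2f} GiB")
    # 553472 -> 2.11 GiB, 1864699 -> 7.11 GiB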
type=io.containerd.snapshotter.v1 Mar 17 17:41:04.037874 containerd[1452]: time="2025-03-17T17:41:04.037849845Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Mar 17 17:41:04.037917 containerd[1452]: time="2025-03-17T17:41:04.037905638Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Mar 17 17:41:04.037948 containerd[1452]: time="2025-03-17T17:41:04.037921306Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1 Mar 17 17:41:04.037948 containerd[1452]: time="2025-03-17T17:41:04.037931213Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Mar 17 17:41:04.038037 containerd[1452]: time="2025-03-17T17:41:04.038019129Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Mar 17 17:41:04.038237 containerd[1452]: time="2025-03-17T17:41:04.038218217Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Mar 17 17:41:04.038367 containerd[1452]: time="2025-03-17T17:41:04.038348993Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Mar 17 17:41:04.038394 containerd[1452]: time="2025-03-17T17:41:04.038367232Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Mar 17 17:41:04.038469 containerd[1452]: time="2025-03-17T17:41:04.038455480Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 Mar 17 17:41:04.038556 containerd[1452]: time="2025-03-17T17:41:04.038498713Z" level=info msg="metadata content store policy set" policy=shared Mar 17 17:41:04.042211 containerd[1452]: time="2025-03-17T17:41:04.042179361Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Mar 17 17:41:04.042316 containerd[1452]: time="2025-03-17T17:41:04.042236811Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Mar 17 17:41:04.042316 containerd[1452]: time="2025-03-17T17:41:04.042253889Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1 Mar 17 17:41:04.042316 containerd[1452]: time="2025-03-17T17:41:04.042270925Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1 Mar 17 17:41:04.042316 containerd[1452]: time="2025-03-17T17:41:04.042299112Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Mar 17 17:41:04.043572 containerd[1452]: time="2025-03-17T17:41:04.042897617Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Mar 17 17:41:04.043572 containerd[1452]: time="2025-03-17T17:41:04.043216206Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." 
type=io.containerd.runtime.v2 Mar 17 17:41:04.043572 containerd[1452]: time="2025-03-17T17:41:04.043338154Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2 Mar 17 17:41:04.043572 containerd[1452]: time="2025-03-17T17:41:04.043356558Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1 Mar 17 17:41:04.043572 containerd[1452]: time="2025-03-17T17:41:04.043381843Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1 Mar 17 17:41:04.043572 containerd[1452]: time="2025-03-17T17:41:04.043398340Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Mar 17 17:41:04.043572 containerd[1452]: time="2025-03-17T17:41:04.043417117Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Mar 17 17:41:04.043572 containerd[1452]: time="2025-03-17T17:41:04.043433863Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Mar 17 17:41:04.043572 containerd[1452]: time="2025-03-17T17:41:04.043452931Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Mar 17 17:41:04.043572 containerd[1452]: time="2025-03-17T17:41:04.043471956Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Mar 17 17:41:04.043572 containerd[1452]: time="2025-03-17T17:41:04.043490236Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Mar 17 17:41:04.043572 containerd[1452]: time="2025-03-17T17:41:04.043506816Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Mar 17 17:41:04.043572 containerd[1452]: time="2025-03-17T17:41:04.043523065Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Mar 17 17:41:04.043572 containerd[1452]: time="2025-03-17T17:41:04.043555024Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Mar 17 17:41:04.043865 containerd[1452]: time="2025-03-17T17:41:04.043574422Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Mar 17 17:41:04.043865 containerd[1452]: time="2025-03-17T17:41:04.043592080Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Mar 17 17:41:04.043865 containerd[1452]: time="2025-03-17T17:41:04.043609282Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Mar 17 17:41:04.043865 containerd[1452]: time="2025-03-17T17:41:04.043626070Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Mar 17 17:41:04.043865 containerd[1452]: time="2025-03-17T17:41:04.043643769Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Mar 17 17:41:04.043865 containerd[1452]: time="2025-03-17T17:41:04.043657282Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Mar 17 17:41:04.043865 containerd[1452]: time="2025-03-17T17:41:04.043676184Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." 
type=io.containerd.grpc.v1 Mar 17 17:41:04.043865 containerd[1452]: time="2025-03-17T17:41:04.043692847Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1 Mar 17 17:41:04.043865 containerd[1452]: time="2025-03-17T17:41:04.043712867Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1 Mar 17 17:41:04.043865 containerd[1452]: time="2025-03-17T17:41:04.043729987Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Mar 17 17:41:04.043865 containerd[1452]: time="2025-03-17T17:41:04.043745240Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1 Mar 17 17:41:04.043865 containerd[1452]: time="2025-03-17T17:41:04.043761779Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Mar 17 17:41:04.043865 containerd[1452]: time="2025-03-17T17:41:04.043780100Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1 Mar 17 17:41:04.043865 containerd[1452]: time="2025-03-17T17:41:04.043812225Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1 Mar 17 17:41:04.043865 containerd[1452]: time="2025-03-17T17:41:04.043831706Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Mar 17 17:41:04.044180 containerd[1452]: time="2025-03-17T17:41:04.043848950Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Mar 17 17:41:04.044180 containerd[1452]: time="2025-03-17T17:41:04.044080949Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Mar 17 17:41:04.044180 containerd[1452]: time="2025-03-17T17:41:04.044105114Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1 Mar 17 17:41:04.044180 containerd[1452]: time="2025-03-17T17:41:04.044120990Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Mar 17 17:41:04.044180 containerd[1452]: time="2025-03-17T17:41:04.044137487Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1 Mar 17 17:41:04.044180 containerd[1452]: time="2025-03-17T17:41:04.044149425Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Mar 17 17:41:04.044180 containerd[1452]: time="2025-03-17T17:41:04.044169860Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1 Mar 17 17:41:04.044308 containerd[1452]: time="2025-03-17T17:41:04.044184907Z" level=info msg="NRI interface is disabled by configuration." Mar 17 17:41:04.044308 containerd[1452]: time="2025-03-17T17:41:04.044200285Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." 
type=io.containerd.grpc.v1 Mar 17 17:41:04.044883 containerd[1452]: time="2025-03-17T17:41:04.044493424Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Mar 17 17:41:04.044883 containerd[1452]: time="2025-03-17T17:41:04.044555724Z" level=info msg="Connect containerd service" Mar 17 17:41:04.044883 containerd[1452]: time="2025-03-17T17:41:04.044597548Z" level=info msg="using legacy CRI server" Mar 17 17:41:04.044883 containerd[1452]: time="2025-03-17T17:41:04.044609900Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Mar 17 17:41:04.044883 containerd[1452]: time="2025-03-17T17:41:04.044864946Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Mar 17 17:41:04.045765 containerd[1452]: time="2025-03-17T17:41:04.045734828Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Mar 17 17:41:04.046601 
containerd[1452]: time="2025-03-17T17:41:04.046490514Z" level=info msg="Start subscribing containerd event" Mar 17 17:41:04.046601 containerd[1452]: time="2025-03-17T17:41:04.046554970Z" level=info msg="Start recovering state" Mar 17 17:41:04.046971 containerd[1452]: time="2025-03-17T17:41:04.046685954Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Mar 17 17:41:04.046971 containerd[1452]: time="2025-03-17T17:41:04.046743694Z" level=info msg=serving... address=/run/containerd/containerd.sock Mar 17 17:41:04.047265 containerd[1452]: time="2025-03-17T17:41:04.047210967Z" level=info msg="Start event monitor" Mar 17 17:41:04.047393 containerd[1452]: time="2025-03-17T17:41:04.047375733Z" level=info msg="Start snapshots syncer" Mar 17 17:41:04.047830 containerd[1452]: time="2025-03-17T17:41:04.047527940Z" level=info msg="Start cni network conf syncer for default" Mar 17 17:41:04.047830 containerd[1452]: time="2025-03-17T17:41:04.047544603Z" level=info msg="Start streaming server" Mar 17 17:41:04.047830 containerd[1452]: time="2025-03-17T17:41:04.047689846Z" level=info msg="containerd successfully booted in 0.038322s" Mar 17 17:41:04.047758 systemd[1]: Started containerd.service - containerd container runtime. Mar 17 17:41:04.056656 sshd_keygen[1445]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Mar 17 17:41:04.075918 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Mar 17 17:41:04.091217 systemd[1]: Starting issuegen.service - Generate /run/issue... Mar 17 17:41:04.096554 systemd[1]: issuegen.service: Deactivated successfully. Mar 17 17:41:04.096792 systemd[1]: Finished issuegen.service - Generate /run/issue. Mar 17 17:41:04.099216 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Mar 17 17:41:04.109936 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Mar 17 17:41:04.112387 systemd[1]: Started getty@tty1.service - Getty on tty1. Mar 17 17:41:04.114388 systemd[1]: Started serial-getty@ttyAMA0.service - Serial Getty on ttyAMA0. Mar 17 17:41:04.115398 systemd[1]: Reached target getty.target - Login Prompts. Mar 17 17:41:04.233585 tar[1450]: linux-arm64/README.md Mar 17 17:41:04.247340 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Mar 17 17:41:05.297643 systemd-networkd[1367]: eth0: Gained IPv6LL Mar 17 17:41:05.303945 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Mar 17 17:41:05.306024 systemd[1]: Reached target network-online.target - Network is Online. Mar 17 17:41:05.322114 systemd[1]: Starting coreos-metadata.service - QEMU metadata agent... Mar 17 17:41:05.324293 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Mar 17 17:41:05.326142 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Mar 17 17:41:05.339657 systemd[1]: coreos-metadata.service: Deactivated successfully. Mar 17 17:41:05.339862 systemd[1]: Finished coreos-metadata.service - QEMU metadata agent. Mar 17 17:41:05.341082 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Mar 17 17:41:05.350924 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Mar 17 17:41:05.849281 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Mar 17 17:41:05.850524 systemd[1]: Reached target multi-user.target - Multi-User System. 
Mar 17 17:41:05.853729 (kubelet)[1540]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Mar 17 17:41:05.856934 systemd[1]: Startup finished in 525ms (kernel) + 4.368s (initrd) + 3.826s (userspace) = 8.720s. Mar 17 17:41:06.272053 kubelet[1540]: E0317 17:41:06.271932 1540 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Mar 17 17:41:06.274593 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Mar 17 17:41:06.274756 systemd[1]: kubelet.service: Failed with result 'exit-code'. Mar 17 17:41:06.275992 systemd[1]: kubelet.service: Consumed 790ms CPU time, 248.2M memory peak. Mar 17 17:41:10.482339 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Mar 17 17:41:10.483499 systemd[1]: Started sshd@0-10.0.0.85:22-10.0.0.1:51108.service - OpenSSH per-connection server daemon (10.0.0.1:51108). Mar 17 17:41:10.556728 sshd[1553]: Accepted publickey for core from 10.0.0.1 port 51108 ssh2: RSA SHA256:5Ue/V+RoCRMkcnXRZmyQndEQOSMEwJs2XNBwCapeMHg Mar 17 17:41:10.558466 sshd-session[1553]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 17 17:41:10.567132 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Mar 17 17:41:10.580105 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Mar 17 17:41:10.585288 systemd-logind[1434]: New session 1 of user core. Mar 17 17:41:10.589353 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Mar 17 17:41:10.591815 systemd[1]: Starting user@500.service - User Manager for UID 500... Mar 17 17:41:10.598026 (systemd)[1557]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Mar 17 17:41:10.600251 systemd-logind[1434]: New session c1 of user core. Mar 17 17:41:10.706354 systemd[1557]: Queued start job for default target default.target. Mar 17 17:41:10.716800 systemd[1557]: Created slice app.slice - User Application Slice. Mar 17 17:41:10.716828 systemd[1557]: Reached target paths.target - Paths. Mar 17 17:41:10.716884 systemd[1557]: Reached target timers.target - Timers. Mar 17 17:41:10.718121 systemd[1557]: Starting dbus.socket - D-Bus User Message Bus Socket... Mar 17 17:41:10.726998 systemd[1557]: Listening on dbus.socket - D-Bus User Message Bus Socket. Mar 17 17:41:10.727061 systemd[1557]: Reached target sockets.target - Sockets. Mar 17 17:41:10.727100 systemd[1557]: Reached target basic.target - Basic System. Mar 17 17:41:10.727127 systemd[1557]: Reached target default.target - Main User Target. Mar 17 17:41:10.727152 systemd[1557]: Startup finished in 121ms. Mar 17 17:41:10.727315 systemd[1]: Started user@500.service - User Manager for UID 500. Mar 17 17:41:10.728650 systemd[1]: Started session-1.scope - Session 1 of User core. Mar 17 17:41:10.789275 systemd[1]: Started sshd@1-10.0.0.85:22-10.0.0.1:51116.service - OpenSSH per-connection server daemon (10.0.0.1:51116). 
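The kubelet's first start above fails because /var/lib/kubelet/config.yaml does not exist yet; on a kubeadm-managed node that file is only written during `kubeadm init`/`kubeadm join`. Purely as a sketch of what the missing file looks like (not taken from this host), a minimal KubeletConfiguration at that path would be:

    # illustration only: kubeadm normally generates this file
    cat <<'EOF' >/var/lib/kubelet/config.yaml
    apiVersion: kubelet.config.k8s.io/v1beta1
    kind: KubeletConfiguration
    cgroupDriver: systemd              # matches SystemdCgroup:true in the runc options above
    staticPodPath: /etc/kubernetes/manifests
    EOF
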
Mar 17 17:41:10.827940 sshd[1568]: Accepted publickey for core from 10.0.0.1 port 51116 ssh2: RSA SHA256:5Ue/V+RoCRMkcnXRZmyQndEQOSMEwJs2XNBwCapeMHg Mar 17 17:41:10.829120 sshd-session[1568]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 17 17:41:10.833685 systemd-logind[1434]: New session 2 of user core. Mar 17 17:41:10.848099 systemd[1]: Started session-2.scope - Session 2 of User core. Mar 17 17:41:10.900079 sshd[1570]: Connection closed by 10.0.0.1 port 51116 Mar 17 17:41:10.900545 sshd-session[1568]: pam_unix(sshd:session): session closed for user core Mar 17 17:41:10.918523 systemd[1]: sshd@1-10.0.0.85:22-10.0.0.1:51116.service: Deactivated successfully. Mar 17 17:41:10.920448 systemd[1]: session-2.scope: Deactivated successfully. Mar 17 17:41:10.921115 systemd-logind[1434]: Session 2 logged out. Waiting for processes to exit. Mar 17 17:41:10.932237 systemd[1]: Started sshd@2-10.0.0.85:22-10.0.0.1:51124.service - OpenSSH per-connection server daemon (10.0.0.1:51124). Mar 17 17:41:10.933151 systemd-logind[1434]: Removed session 2. Mar 17 17:41:10.966418 sshd[1575]: Accepted publickey for core from 10.0.0.1 port 51124 ssh2: RSA SHA256:5Ue/V+RoCRMkcnXRZmyQndEQOSMEwJs2XNBwCapeMHg Mar 17 17:41:10.967592 sshd-session[1575]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 17 17:41:10.971717 systemd-logind[1434]: New session 3 of user core. Mar 17 17:41:10.983034 systemd[1]: Started session-3.scope - Session 3 of User core. Mar 17 17:41:11.031638 sshd[1578]: Connection closed by 10.0.0.1 port 51124 Mar 17 17:41:11.031982 sshd-session[1575]: pam_unix(sshd:session): session closed for user core Mar 17 17:41:11.052132 systemd[1]: sshd@2-10.0.0.85:22-10.0.0.1:51124.service: Deactivated successfully. Mar 17 17:41:11.053670 systemd[1]: session-3.scope: Deactivated successfully. Mar 17 17:41:11.054873 systemd-logind[1434]: Session 3 logged out. Waiting for processes to exit. Mar 17 17:41:11.056088 systemd[1]: Started sshd@3-10.0.0.85:22-10.0.0.1:51128.service - OpenSSH per-connection server daemon (10.0.0.1:51128). Mar 17 17:41:11.056942 systemd-logind[1434]: Removed session 3. Mar 17 17:41:11.094630 sshd[1583]: Accepted publickey for core from 10.0.0.1 port 51128 ssh2: RSA SHA256:5Ue/V+RoCRMkcnXRZmyQndEQOSMEwJs2XNBwCapeMHg Mar 17 17:41:11.095804 sshd-session[1583]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 17 17:41:11.101216 systemd-logind[1434]: New session 4 of user core. Mar 17 17:41:11.110035 systemd[1]: Started session-4.scope - Session 4 of User core. Mar 17 17:41:11.161179 sshd[1586]: Connection closed by 10.0.0.1 port 51128 Mar 17 17:41:11.161583 sshd-session[1583]: pam_unix(sshd:session): session closed for user core Mar 17 17:41:11.171269 systemd[1]: sshd@3-10.0.0.85:22-10.0.0.1:51128.service: Deactivated successfully. Mar 17 17:41:11.174555 systemd[1]: session-4.scope: Deactivated successfully. Mar 17 17:41:11.176140 systemd-logind[1434]: Session 4 logged out. Waiting for processes to exit. Mar 17 17:41:11.177354 systemd[1]: Started sshd@4-10.0.0.85:22-10.0.0.1:51130.service - OpenSSH per-connection server daemon (10.0.0.1:51130). Mar 17 17:41:11.178087 systemd-logind[1434]: Removed session 4. 
Mar 17 17:41:11.216285 sshd[1591]: Accepted publickey for core from 10.0.0.1 port 51130 ssh2: RSA SHA256:5Ue/V+RoCRMkcnXRZmyQndEQOSMEwJs2XNBwCapeMHg Mar 17 17:41:11.217564 sshd-session[1591]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 17 17:41:11.221974 systemd-logind[1434]: New session 5 of user core. Mar 17 17:41:11.232144 systemd[1]: Started session-5.scope - Session 5 of User core. Mar 17 17:41:11.294359 sudo[1595]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Mar 17 17:41:11.294634 sudo[1595]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Mar 17 17:41:11.667289 (dockerd)[1615]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Mar 17 17:41:11.667430 systemd[1]: Starting docker.service - Docker Application Container Engine... Mar 17 17:41:11.925065 dockerd[1615]: time="2025-03-17T17:41:11.924937136Z" level=info msg="Starting up" Mar 17 17:41:12.099376 dockerd[1615]: time="2025-03-17T17:41:12.099323686Z" level=info msg="Loading containers: start." Mar 17 17:41:12.241897 kernel: Initializing XFRM netlink socket Mar 17 17:41:12.319516 systemd-networkd[1367]: docker0: Link UP Mar 17 17:41:12.350160 dockerd[1615]: time="2025-03-17T17:41:12.350120771Z" level=info msg="Loading containers: done." Mar 17 17:41:12.363706 dockerd[1615]: time="2025-03-17T17:41:12.363657172Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Mar 17 17:41:12.363855 dockerd[1615]: time="2025-03-17T17:41:12.363747889Z" level=info msg="Docker daemon" commit=41ca978a0a5400cc24b274137efa9f25517fcc0b containerd-snapshotter=false storage-driver=overlay2 version=27.3.1 Mar 17 17:41:12.363959 dockerd[1615]: time="2025-03-17T17:41:12.363930334Z" level=info msg="Daemon has completed initialization" Mar 17 17:41:12.391446 dockerd[1615]: time="2025-03-17T17:41:12.391324978Z" level=info msg="API listen on /run/docker.sock" Mar 17 17:41:12.391548 systemd[1]: Started docker.service - Docker Application Container Engine. Mar 17 17:41:12.942128 containerd[1452]: time="2025-03-17T17:41:12.942064977Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.3\"" Mar 17 17:41:13.619052 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3838743148.mount: Deactivated successfully. 
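The PullImage entries that follow are containerd's CRI plugin fetching the control-plane images into its k8s.io namespace. If the ctr client that ships with containerd is available on the host, the same pull can be reproduced or checked by hand (image name and tag taken from the log itself):

    ctr --namespace k8s.io images pull registry.k8s.io/kube-apiserver:v1.32.3
    ctr --namespace k8s.io images ls | grep registry.k8s.io/kube-apiserver
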
Mar 17 17:41:14.601074 containerd[1452]: time="2025-03-17T17:41:14.601018116Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.32.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 17 17:41:14.601569 containerd[1452]: time="2025-03-17T17:41:14.601534839Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.32.3: active requests=0, bytes read=26231952" Mar 17 17:41:14.602192 containerd[1452]: time="2025-03-17T17:41:14.602145329Z" level=info msg="ImageCreate event name:\"sha256:25dd33975ea35cef2fa9b105778dbe3369de267e9ddf81427b7b82e98ff374e5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 17 17:41:14.605377 containerd[1452]: time="2025-03-17T17:41:14.605328087Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:279e45cf07e4f56925c3c5237179eb63616788426a96e94df5fedf728b18926e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 17 17:41:14.607441 containerd[1452]: time="2025-03-17T17:41:14.606894167Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.32.3\" with image id \"sha256:25dd33975ea35cef2fa9b105778dbe3369de267e9ddf81427b7b82e98ff374e5\", repo tag \"registry.k8s.io/kube-apiserver:v1.32.3\", repo digest \"registry.k8s.io/kube-apiserver@sha256:279e45cf07e4f56925c3c5237179eb63616788426a96e94df5fedf728b18926e\", size \"26228750\" in 1.664784422s" Mar 17 17:41:14.607441 containerd[1452]: time="2025-03-17T17:41:14.606930308Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.3\" returns image reference \"sha256:25dd33975ea35cef2fa9b105778dbe3369de267e9ddf81427b7b82e98ff374e5\"" Mar 17 17:41:14.608011 containerd[1452]: time="2025-03-17T17:41:14.607989114Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.3\"" Mar 17 17:41:15.837765 containerd[1452]: time="2025-03-17T17:41:15.837714813Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.32.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 17 17:41:15.838199 containerd[1452]: time="2025-03-17T17:41:15.838155138Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.32.3: active requests=0, bytes read=22530034" Mar 17 17:41:15.839125 containerd[1452]: time="2025-03-17T17:41:15.839082254Z" level=info msg="ImageCreate event name:\"sha256:9e29b4db8c5cdf9970961ed3a47137ea71ad067643b8e5cccb58085f22a9b315\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 17 17:41:15.841912 containerd[1452]: time="2025-03-17T17:41:15.841837262Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:54456a96a1bbdc35dcc2e70fcc1355bf655af67694e40b650ac12e83521f6411\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 17 17:41:15.842999 containerd[1452]: time="2025-03-17T17:41:15.842965119Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.32.3\" with image id \"sha256:9e29b4db8c5cdf9970961ed3a47137ea71ad067643b8e5cccb58085f22a9b315\", repo tag \"registry.k8s.io/kube-controller-manager:v1.32.3\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:54456a96a1bbdc35dcc2e70fcc1355bf655af67694e40b650ac12e83521f6411\", size \"23970828\" in 1.23484223s" Mar 17 17:41:15.842999 containerd[1452]: time="2025-03-17T17:41:15.842998717Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.3\" returns image reference \"sha256:9e29b4db8c5cdf9970961ed3a47137ea71ad067643b8e5cccb58085f22a9b315\"" Mar 17 17:41:15.843462 
containerd[1452]: time="2025-03-17T17:41:15.843423030Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.3\"" Mar 17 17:41:16.398316 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Mar 17 17:41:16.409015 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Mar 17 17:41:16.522308 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Mar 17 17:41:16.523595 (kubelet)[1874]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Mar 17 17:41:16.559153 kubelet[1874]: E0317 17:41:16.559105 1874 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Mar 17 17:41:16.561893 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Mar 17 17:41:16.562017 systemd[1]: kubelet.service: Failed with result 'exit-code'. Mar 17 17:41:16.562366 systemd[1]: kubelet.service: Consumed 125ms CPU time, 103.3M memory peak. Mar 17 17:41:17.052445 containerd[1452]: time="2025-03-17T17:41:17.052397877Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.32.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 17 17:41:17.053275 containerd[1452]: time="2025-03-17T17:41:17.052873497Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.32.3: active requests=0, bytes read=17482563" Mar 17 17:41:17.054364 containerd[1452]: time="2025-03-17T17:41:17.054332926Z" level=info msg="ImageCreate event name:\"sha256:6b8dfebcc65dc9d4765a91d2923c304e13beca7111c57dfc99f1c3267a6e9f30\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 17 17:41:17.056952 containerd[1452]: time="2025-03-17T17:41:17.056919191Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:aafae2e3a8d65bc6dc3a0c6095c24bc72b1ff608e1417f0f5e860ce4a61c27df\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 17 17:41:17.058093 containerd[1452]: time="2025-03-17T17:41:17.058059231Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.32.3\" with image id \"sha256:6b8dfebcc65dc9d4765a91d2923c304e13beca7111c57dfc99f1c3267a6e9f30\", repo tag \"registry.k8s.io/kube-scheduler:v1.32.3\", repo digest \"registry.k8s.io/kube-scheduler@sha256:aafae2e3a8d65bc6dc3a0c6095c24bc72b1ff608e1417f0f5e860ce4a61c27df\", size \"18923375\" in 1.214606168s" Mar 17 17:41:17.058146 containerd[1452]: time="2025-03-17T17:41:17.058098118Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.3\" returns image reference \"sha256:6b8dfebcc65dc9d4765a91d2923c304e13beca7111c57dfc99f1c3267a6e9f30\"" Mar 17 17:41:17.058795 containerd[1452]: time="2025-03-17T17:41:17.058517703Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.3\"" Mar 17 17:41:18.082452 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3386179718.mount: Deactivated successfully. 
Mar 17 17:41:18.312157 containerd[1452]: time="2025-03-17T17:41:18.312108550Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.32.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 17 17:41:18.313098 containerd[1452]: time="2025-03-17T17:41:18.312876582Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.32.3: active requests=0, bytes read=27370097" Mar 17 17:41:18.313726 containerd[1452]: time="2025-03-17T17:41:18.313687612Z" level=info msg="ImageCreate event name:\"sha256:2a637602f3e88e76046aa1a75bccdb37b25b2fcba99a380412e2c27ccd55c547\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 17 17:41:18.316892 containerd[1452]: time="2025-03-17T17:41:18.316831979Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.32.3\" with image id \"sha256:2a637602f3e88e76046aa1a75bccdb37b25b2fcba99a380412e2c27ccd55c547\", repo tag \"registry.k8s.io/kube-proxy:v1.32.3\", repo digest \"registry.k8s.io/kube-proxy@sha256:5015269547a0b7dd2c062758e9a64467b58978ff2502cad4c3f5cdf4aa554ad3\", size \"27369114\" in 1.258289002s" Mar 17 17:41:18.317018 containerd[1452]: time="2025-03-17T17:41:18.316998224Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.3\" returns image reference \"sha256:2a637602f3e88e76046aa1a75bccdb37b25b2fcba99a380412e2c27ccd55c547\"" Mar 17 17:41:18.318216 containerd[1452]: time="2025-03-17T17:41:18.317965845Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:5015269547a0b7dd2c062758e9a64467b58978ff2502cad4c3f5cdf4aa554ad3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 17 17:41:18.318440 containerd[1452]: time="2025-03-17T17:41:18.318398612Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\"" Mar 17 17:41:18.863537 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3284328607.mount: Deactivated successfully. 
Mar 17 17:41:19.646349 containerd[1452]: time="2025-03-17T17:41:19.646283302Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 17 17:41:19.646792 containerd[1452]: time="2025-03-17T17:41:19.646744508Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.3: active requests=0, bytes read=16951624" Mar 17 17:41:19.648008 containerd[1452]: time="2025-03-17T17:41:19.647567034Z" level=info msg="ImageCreate event name:\"sha256:2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 17 17:41:19.653811 containerd[1452]: time="2025-03-17T17:41:19.653743189Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 17 17:41:19.655025 containerd[1452]: time="2025-03-17T17:41:19.654984714Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.3\" with image id \"sha256:2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.3\", repo digest \"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\", size \"16948420\" in 1.336544236s" Mar 17 17:41:19.655025 containerd[1452]: time="2025-03-17T17:41:19.655016549Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\" returns image reference \"sha256:2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4\"" Mar 17 17:41:19.655617 containerd[1452]: time="2025-03-17T17:41:19.655437962Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" Mar 17 17:41:20.131324 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount269767027.mount: Deactivated successfully. 
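If crictl happens to be installed, the images pulled in this stretch of the log can also be listed over the CRI socket named in the containerd config dump above:

    crictl --runtime-endpoint unix:///run/containerd/containerd.sock images
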
Mar 17 17:41:20.137980 containerd[1452]: time="2025-03-17T17:41:20.137934734Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 17 17:41:20.138607 containerd[1452]: time="2025-03-17T17:41:20.138365169Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=268705" Mar 17 17:41:20.139285 containerd[1452]: time="2025-03-17T17:41:20.139245482Z" level=info msg="ImageCreate event name:\"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 17 17:41:20.142514 containerd[1452]: time="2025-03-17T17:41:20.141584776Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 17 17:41:20.142869 containerd[1452]: time="2025-03-17T17:41:20.142475815Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"267933\" in 487.006867ms" Mar 17 17:41:20.142916 containerd[1452]: time="2025-03-17T17:41:20.142899421Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\"" Mar 17 17:41:20.143685 containerd[1452]: time="2025-03-17T17:41:20.143650142Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\"" Mar 17 17:41:20.671156 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2744447825.mount: Deactivated successfully. Mar 17 17:41:22.634170 containerd[1452]: time="2025-03-17T17:41:22.634101756Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.16-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 17 17:41:22.636462 containerd[1452]: time="2025-03-17T17:41:22.636401189Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.16-0: active requests=0, bytes read=67812431" Mar 17 17:41:22.637501 containerd[1452]: time="2025-03-17T17:41:22.637442351Z" level=info msg="ImageCreate event name:\"sha256:7fc9d4aa817aa6a3e549f3cd49d1f7b496407be979fc36dd5f356d59ce8c3a82\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 17 17:41:22.640773 containerd[1452]: time="2025-03-17T17:41:22.640737398Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 17 17:41:22.642268 containerd[1452]: time="2025-03-17T17:41:22.642214584Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.16-0\" with image id \"sha256:7fc9d4aa817aa6a3e549f3cd49d1f7b496407be979fc36dd5f356d59ce8c3a82\", repo tag \"registry.k8s.io/etcd:3.5.16-0\", repo digest \"registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5\", size \"67941650\" in 2.498526411s" Mar 17 17:41:22.642268 containerd[1452]: time="2025-03-17T17:41:22.642257886Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\" returns image reference \"sha256:7fc9d4aa817aa6a3e549f3cd49d1f7b496407be979fc36dd5f356d59ce8c3a82\"" Mar 17 17:41:26.030487 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. 
Mar 17 17:41:26.030781 systemd[1]: kubelet.service: Consumed 125ms CPU time, 103.3M memory peak. Mar 17 17:41:26.041104 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Mar 17 17:41:26.066018 systemd[1]: Reload requested from client PID 2035 ('systemctl') (unit session-5.scope)... Mar 17 17:41:26.066035 systemd[1]: Reloading... Mar 17 17:41:26.136991 zram_generator::config[2079]: No configuration found. Mar 17 17:41:26.232658 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Mar 17 17:41:26.304327 systemd[1]: Reloading finished in 237 ms. Mar 17 17:41:26.342614 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Mar 17 17:41:26.345396 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Mar 17 17:41:26.346309 systemd[1]: kubelet.service: Deactivated successfully. Mar 17 17:41:26.347888 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Mar 17 17:41:26.347928 systemd[1]: kubelet.service: Consumed 85ms CPU time, 90.2M memory peak. Mar 17 17:41:26.349411 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Mar 17 17:41:26.446085 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Mar 17 17:41:26.450434 (kubelet)[2126]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Mar 17 17:41:26.493734 kubelet[2126]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Mar 17 17:41:26.493734 kubelet[2126]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Mar 17 17:41:26.493734 kubelet[2126]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
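Two recurring kubelet warnings show up in this restart: the deprecated flags (--container-runtime-endpoint, --pod-infra-container-image, --volume-plugin-dir) belong in the file passed via --config, and $KUBELET_EXTRA_ARGS / $KUBELET_KUBEADM_ARGS are referenced by kubelet.service but never defined. One way to define them, sketched here with a placeholder --node-ip value taken from this host's address, is a systemd drop-in:

    mkdir -p /etc/systemd/system/kubelet.service.d
    cat <<'EOF' >/etc/systemd/system/kubelet.service.d/20-extra-args.conf
    [Service]
    Environment="KUBELET_EXTRA_ARGS=--node-ip=10.0.0.85"
    EOF
    systemctl daemon-reload
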
Mar 17 17:41:26.494127 kubelet[2126]: I0317 17:41:26.493794 2126 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Mar 17 17:41:27.275895 kubelet[2126]: I0317 17:41:27.275128 2126 server.go:520] "Kubelet version" kubeletVersion="v1.32.0" Mar 17 17:41:27.275895 kubelet[2126]: I0317 17:41:27.275161 2126 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Mar 17 17:41:27.275895 kubelet[2126]: I0317 17:41:27.275407 2126 server.go:954] "Client rotation is on, will bootstrap in background" Mar 17 17:41:27.309027 kubelet[2126]: E0317 17:41:27.308991 2126 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://10.0.0.85:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.85:6443: connect: connection refused" logger="UnhandledError" Mar 17 17:41:27.313838 kubelet[2126]: I0317 17:41:27.312542 2126 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Mar 17 17:41:27.325370 kubelet[2126]: E0317 17:41:27.325333 2126 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Mar 17 17:41:27.325963 kubelet[2126]: I0317 17:41:27.325582 2126 server.go:1421] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Mar 17 17:41:27.328894 kubelet[2126]: I0317 17:41:27.328843 2126 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Mar 17 17:41:27.329598 kubelet[2126]: I0317 17:41:27.329536 2126 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Mar 17 17:41:27.329768 kubelet[2126]: I0317 17:41:27.329584 2126 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Mar 17 17:41:27.329841 kubelet[2126]: I0317 17:41:27.329829 2126 topology_manager.go:138] "Creating topology manager with none policy" Mar 17 17:41:27.329841 kubelet[2126]: I0317 17:41:27.329838 2126 container_manager_linux.go:304] "Creating device plugin manager" Mar 17 17:41:27.330070 kubelet[2126]: I0317 17:41:27.330034 2126 state_mem.go:36] "Initialized new in-memory state store" Mar 17 17:41:27.334931 kubelet[2126]: I0317 17:41:27.334867 2126 kubelet.go:446] "Attempting to sync node with API server" Mar 17 17:41:27.334931 kubelet[2126]: I0317 17:41:27.334899 2126 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests" Mar 17 17:41:27.334931 kubelet[2126]: I0317 17:41:27.334926 2126 kubelet.go:352] "Adding apiserver pod source" Mar 17 17:41:27.334931 kubelet[2126]: I0317 17:41:27.334938 2126 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Mar 17 17:41:27.338657 kubelet[2126]: I0317 17:41:27.338281 2126 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="v1.7.23" apiVersion="v1" Mar 17 17:41:27.340670 kubelet[2126]: W0317 17:41:27.339385 2126 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.85:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.85:6443: connect: connection refused Mar 17 17:41:27.340670 kubelet[2126]: E0317 17:41:27.339450 2126 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get 
\"https://10.0.0.85:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.85:6443: connect: connection refused" logger="UnhandledError" Mar 17 17:41:27.340670 kubelet[2126]: I0317 17:41:27.339807 2126 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Mar 17 17:41:27.340670 kubelet[2126]: W0317 17:41:27.339934 2126 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Mar 17 17:41:27.340670 kubelet[2126]: W0317 17:41:27.340552 2126 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.85:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.0.0.85:6443: connect: connection refused Mar 17 17:41:27.340670 kubelet[2126]: E0317 17:41:27.340595 2126 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.0.0.85:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.85:6443: connect: connection refused" logger="UnhandledError" Mar 17 17:41:27.344644 kubelet[2126]: I0317 17:41:27.344616 2126 watchdog_linux.go:99] "Systemd watchdog is not enabled" Mar 17 17:41:27.344736 kubelet[2126]: I0317 17:41:27.344660 2126 server.go:1287] "Started kubelet" Mar 17 17:41:27.345475 kubelet[2126]: I0317 17:41:27.344802 2126 server.go:169] "Starting to listen" address="0.0.0.0" port=10250 Mar 17 17:41:27.346468 kubelet[2126]: I0317 17:41:27.345793 2126 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Mar 17 17:41:27.346468 kubelet[2126]: I0317 17:41:27.345984 2126 server.go:490] "Adding debug handlers to kubelet server" Mar 17 17:41:27.346468 kubelet[2126]: I0317 17:41:27.346188 2126 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Mar 17 17:41:27.346585 kubelet[2126]: I0317 17:41:27.346566 2126 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Mar 17 17:41:27.346716 kubelet[2126]: I0317 17:41:27.346690 2126 volume_manager.go:297] "Starting Kubelet Volume Manager" Mar 17 17:41:27.346818 kubelet[2126]: I0317 17:41:27.346802 2126 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Mar 17 17:41:27.346969 kubelet[2126]: E0317 17:41:27.346946 2126 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"localhost\" not found" Mar 17 17:41:27.349320 kubelet[2126]: I0317 17:41:27.347577 2126 reconciler.go:26] "Reconciler: start to sync state" Mar 17 17:41:27.349320 kubelet[2126]: I0317 17:41:27.347602 2126 desired_state_of_world_populator.go:149] "Desired state populator starts to run" Mar 17 17:41:27.349320 kubelet[2126]: E0317 17:41:27.347960 2126 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.85:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.85:6443: connect: connection refused" interval="200ms" Mar 17 17:41:27.349320 kubelet[2126]: W0317 17:41:27.348067 2126 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.85:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial 
tcp 10.0.0.85:6443: connect: connection refused Mar 17 17:41:27.349320 kubelet[2126]: E0317 17:41:27.348112 2126 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.0.0.85:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.85:6443: connect: connection refused" logger="UnhandledError" Mar 17 17:41:27.349561 kubelet[2126]: I0317 17:41:27.349481 2126 factory.go:221] Registration of the systemd container factory successfully Mar 17 17:41:27.349597 kubelet[2126]: I0317 17:41:27.349578 2126 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Mar 17 17:41:27.350160 kubelet[2126]: E0317 17:41:27.349880 2126 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.85:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.85:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.182da7f1b009b77c default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-03-17 17:41:27.344633724 +0000 UTC m=+0.890411707,LastTimestamp:2025-03-17 17:41:27.344633724 +0000 UTC m=+0.890411707,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Mar 17 17:41:27.351685 kubelet[2126]: I0317 17:41:27.351642 2126 factory.go:221] Registration of the containerd container factory successfully Mar 17 17:41:27.360014 kubelet[2126]: E0317 17:41:27.359983 2126 kubelet.go:1561] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Mar 17 17:41:27.367073 kubelet[2126]: I0317 17:41:27.367027 2126 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Mar 17 17:41:27.368365 kubelet[2126]: I0317 17:41:27.368338 2126 cpu_manager.go:221] "Starting CPU manager" policy="none" Mar 17 17:41:27.368458 kubelet[2126]: I0317 17:41:27.368372 2126 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Mar 17 17:41:27.368458 kubelet[2126]: I0317 17:41:27.368393 2126 state_mem.go:36] "Initialized new in-memory state store" Mar 17 17:41:27.369911 kubelet[2126]: I0317 17:41:27.368987 2126 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Mar 17 17:41:27.369911 kubelet[2126]: I0317 17:41:27.369020 2126 status_manager.go:227] "Starting to sync pod status with apiserver" Mar 17 17:41:27.369911 kubelet[2126]: I0317 17:41:27.369044 2126 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." 
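Every "connect: connection refused" in this stretch has the same cause: nothing is listening on 10.0.0.85:6443 yet, because the kube-apiserver static pod has not come up. Two quick checks from the host confirm it:

    ss -tlnp | grep 6443 || echo "nothing listening on port 6443 yet"
    curl -ks https://10.0.0.85:6443/healthz; echo
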
Mar 17 17:41:27.369911 kubelet[2126]: I0317 17:41:27.369052 2126 kubelet.go:2388] "Starting kubelet main sync loop" Mar 17 17:41:27.369911 kubelet[2126]: E0317 17:41:27.369096 2126 kubelet.go:2412] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Mar 17 17:41:27.371401 kubelet[2126]: W0317 17:41:27.371336 2126 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.85:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.85:6443: connect: connection refused Mar 17 17:41:27.371512 kubelet[2126]: E0317 17:41:27.371491 2126 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.0.0.85:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.85:6443: connect: connection refused" logger="UnhandledError" Mar 17 17:41:27.448013 kubelet[2126]: E0317 17:41:27.447972 2126 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"localhost\" not found" Mar 17 17:41:27.469185 kubelet[2126]: E0317 17:41:27.469130 2126 kubelet.go:2412] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Mar 17 17:41:27.548334 kubelet[2126]: E0317 17:41:27.548176 2126 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"localhost\" not found" Mar 17 17:41:27.548991 kubelet[2126]: E0317 17:41:27.548960 2126 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.85:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.85:6443: connect: connection refused" interval="400ms" Mar 17 17:41:27.648310 kubelet[2126]: E0317 17:41:27.648266 2126 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"localhost\" not found" Mar 17 17:41:27.666282 kubelet[2126]: I0317 17:41:27.666244 2126 policy_none.go:49] "None policy: Start" Mar 17 17:41:27.666282 kubelet[2126]: I0317 17:41:27.666279 2126 memory_manager.go:186] "Starting memorymanager" policy="None" Mar 17 17:41:27.666375 kubelet[2126]: I0317 17:41:27.666312 2126 state_mem.go:35] "Initializing new in-memory state store" Mar 17 17:41:27.669388 kubelet[2126]: E0317 17:41:27.669335 2126 kubelet.go:2412] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Mar 17 17:41:27.697365 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Mar 17 17:41:27.714313 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Mar 17 17:41:27.716751 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. 
Mar 17 17:41:27.727785 kubelet[2126]: I0317 17:41:27.727550 2126 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Mar 17 17:41:27.727785 kubelet[2126]: I0317 17:41:27.727741 2126 eviction_manager.go:189] "Eviction manager: starting control loop" Mar 17 17:41:27.727785 kubelet[2126]: I0317 17:41:27.727752 2126 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Mar 17 17:41:27.728409 kubelet[2126]: I0317 17:41:27.728031 2126 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Mar 17 17:41:27.728986 kubelet[2126]: E0317 17:41:27.728942 2126 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Mar 17 17:41:27.729344 kubelet[2126]: E0317 17:41:27.729329 2126 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Mar 17 17:41:27.829744 kubelet[2126]: I0317 17:41:27.829647 2126 kubelet_node_status.go:76] "Attempting to register node" node="localhost" Mar 17 17:41:27.830704 kubelet[2126]: E0317 17:41:27.830673 2126 kubelet_node_status.go:108] "Unable to register node with API server" err="Post \"https://10.0.0.85:6443/api/v1/nodes\": dial tcp 10.0.0.85:6443: connect: connection refused" node="localhost" Mar 17 17:41:27.949818 kubelet[2126]: E0317 17:41:27.949782 2126 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.85:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.85:6443: connect: connection refused" interval="800ms" Mar 17 17:41:28.032034 kubelet[2126]: I0317 17:41:28.031970 2126 kubelet_node_status.go:76] "Attempting to register node" node="localhost" Mar 17 17:41:28.032325 kubelet[2126]: E0317 17:41:28.032292 2126 kubelet_node_status.go:108] "Unable to register node with API server" err="Post \"https://10.0.0.85:6443/api/v1/nodes\": dial tcp 10.0.0.85:6443: connect: connection refused" node="localhost" Mar 17 17:41:28.079819 systemd[1]: Created slice kubepods-burstable-podcbbb394ff48414687df77e1bc213eeb5.slice - libcontainer container kubepods-burstable-podcbbb394ff48414687df77e1bc213eeb5.slice. Mar 17 17:41:28.095651 kubelet[2126]: E0317 17:41:28.095624 2126 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Mar 17 17:41:28.098324 systemd[1]: Created slice kubepods-burstable-pod3700e556aa2777679a324159272023f1.slice - libcontainer container kubepods-burstable-pod3700e556aa2777679a324159272023f1.slice. Mar 17 17:41:28.100375 kubelet[2126]: E0317 17:41:28.100354 2126 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Mar 17 17:41:28.101651 systemd[1]: Created slice kubepods-burstable-poddb6db8ea73bd81e6c49c5f23d3671f91.slice - libcontainer container kubepods-burstable-poddb6db8ea73bd81e6c49c5f23d3671f91.slice. 
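The kube-apiserver-localhost, kube-controller-manager-localhost and kube-scheduler-localhost pods being prepared here are static pods read from the static pod path logged earlier (/etc/kubernetes/manifests). Assuming the usual kubeadm layout (file names are not taken from this log), the manifests can be inspected directly:

    ls -l /etc/kubernetes/manifests/
    # typically kube-apiserver.yaml, kube-controller-manager.yaml, kube-scheduler.yaml
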
Mar 17 17:41:28.103039 kubelet[2126]: E0317 17:41:28.102896 2126 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Mar 17 17:41:28.152236 kubelet[2126]: I0317 17:41:28.152206 2126 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/db6db8ea73bd81e6c49c5f23d3671f91-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"db6db8ea73bd81e6c49c5f23d3671f91\") " pod="kube-system/kube-apiserver-localhost" Mar 17 17:41:28.152363 kubelet[2126]: I0317 17:41:28.152244 2126 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/cbbb394ff48414687df77e1bc213eeb5-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"cbbb394ff48414687df77e1bc213eeb5\") " pod="kube-system/kube-controller-manager-localhost" Mar 17 17:41:28.152363 kubelet[2126]: I0317 17:41:28.152262 2126 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/cbbb394ff48414687df77e1bc213eeb5-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"cbbb394ff48414687df77e1bc213eeb5\") " pod="kube-system/kube-controller-manager-localhost" Mar 17 17:41:28.152363 kubelet[2126]: I0317 17:41:28.152277 2126 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/3700e556aa2777679a324159272023f1-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"3700e556aa2777679a324159272023f1\") " pod="kube-system/kube-scheduler-localhost" Mar 17 17:41:28.152363 kubelet[2126]: I0317 17:41:28.152291 2126 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/db6db8ea73bd81e6c49c5f23d3671f91-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"db6db8ea73bd81e6c49c5f23d3671f91\") " pod="kube-system/kube-apiserver-localhost" Mar 17 17:41:28.152363 kubelet[2126]: I0317 17:41:28.152305 2126 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/db6db8ea73bd81e6c49c5f23d3671f91-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"db6db8ea73bd81e6c49c5f23d3671f91\") " pod="kube-system/kube-apiserver-localhost" Mar 17 17:41:28.152470 kubelet[2126]: I0317 17:41:28.152319 2126 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/cbbb394ff48414687df77e1bc213eeb5-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"cbbb394ff48414687df77e1bc213eeb5\") " pod="kube-system/kube-controller-manager-localhost" Mar 17 17:41:28.152470 kubelet[2126]: I0317 17:41:28.152333 2126 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/cbbb394ff48414687df77e1bc213eeb5-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"cbbb394ff48414687df77e1bc213eeb5\") " pod="kube-system/kube-controller-manager-localhost" Mar 17 17:41:28.152470 kubelet[2126]: I0317 17:41:28.152347 2126 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/cbbb394ff48414687df77e1bc213eeb5-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"cbbb394ff48414687df77e1bc213eeb5\") " pod="kube-system/kube-controller-manager-localhost" Mar 17 17:41:28.384312 kubelet[2126]: W0317 17:41:28.384167 2126 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.85:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.0.0.85:6443: connect: connection refused Mar 17 17:41:28.384312 kubelet[2126]: E0317 17:41:28.384234 2126 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.0.0.85:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.85:6443: connect: connection refused" logger="UnhandledError" Mar 17 17:41:28.397524 containerd[1452]: time="2025-03-17T17:41:28.397411423Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:cbbb394ff48414687df77e1bc213eeb5,Namespace:kube-system,Attempt:0,}" Mar 17 17:41:28.402091 containerd[1452]: time="2025-03-17T17:41:28.402006814Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:3700e556aa2777679a324159272023f1,Namespace:kube-system,Attempt:0,}" Mar 17 17:41:28.404630 containerd[1452]: time="2025-03-17T17:41:28.404561647Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:db6db8ea73bd81e6c49c5f23d3671f91,Namespace:kube-system,Attempt:0,}" Mar 17 17:41:28.433731 kubelet[2126]: I0317 17:41:28.433690 2126 kubelet_node_status.go:76] "Attempting to register node" node="localhost" Mar 17 17:41:28.434134 kubelet[2126]: E0317 17:41:28.434096 2126 kubelet_node_status.go:108] "Unable to register node with API server" err="Post \"https://10.0.0.85:6443/api/v1/nodes\": dial tcp 10.0.0.85:6443: connect: connection refused" node="localhost" Mar 17 17:41:28.463076 kubelet[2126]: W0317 17:41:28.463004 2126 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.85:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.85:6443: connect: connection refused Mar 17 17:41:28.463178 kubelet[2126]: E0317 17:41:28.463079 2126 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.0.0.85:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.85:6443: connect: connection refused" logger="UnhandledError" Mar 17 17:41:28.679785 kubelet[2126]: W0317 17:41:28.679722 2126 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.85:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.85:6443: connect: connection refused Mar 17 17:41:28.679785 kubelet[2126]: E0317 17:41:28.679785 2126 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.0.0.85:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.85:6443: connect: connection refused" logger="UnhandledError" Mar 17 17:41:28.728005 kubelet[2126]: W0317 17:41:28.727887 2126 
reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.85:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.85:6443: connect: connection refused Mar 17 17:41:28.728005 kubelet[2126]: E0317 17:41:28.727969 2126 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.0.0.85:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.85:6443: connect: connection refused" logger="UnhandledError" Mar 17 17:41:28.750670 kubelet[2126]: E0317 17:41:28.750614 2126 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.85:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.85:6443: connect: connection refused" interval="1.6s" Mar 17 17:41:28.916505 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount270766283.mount: Deactivated successfully. Mar 17 17:41:28.919654 containerd[1452]: time="2025-03-17T17:41:28.919600918Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Mar 17 17:41:28.922855 containerd[1452]: time="2025-03-17T17:41:28.922786198Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=269175" Mar 17 17:41:28.924480 containerd[1452]: time="2025-03-17T17:41:28.924439507Z" level=info msg="ImageCreate event name:\"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Mar 17 17:41:28.926507 containerd[1452]: time="2025-03-17T17:41:28.926469569Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Mar 17 17:41:28.927576 containerd[1452]: time="2025-03-17T17:41:28.927540903Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Mar 17 17:41:28.928885 containerd[1452]: time="2025-03-17T17:41:28.928825149Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Mar 17 17:41:28.929430 containerd[1452]: time="2025-03-17T17:41:28.929386454Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Mar 17 17:41:28.930188 containerd[1452]: time="2025-03-17T17:41:28.930115285Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Mar 17 17:41:28.933870 containerd[1452]: time="2025-03-17T17:41:28.931620456Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 534.129316ms" Mar 17 17:41:28.934969 containerd[1452]: time="2025-03-17T17:41:28.934899713Z" level=info 
msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 530.271088ms" Mar 17 17:41:28.943333 containerd[1452]: time="2025-03-17T17:41:28.943232555Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 541.146345ms" Mar 17 17:41:29.075772 containerd[1452]: time="2025-03-17T17:41:29.075293590Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 17 17:41:29.075772 containerd[1452]: time="2025-03-17T17:41:29.075596980Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 17 17:41:29.076115 containerd[1452]: time="2025-03-17T17:41:29.075970179Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 17 17:41:29.076564 containerd[1452]: time="2025-03-17T17:41:29.076256548Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 17 17:41:29.076564 containerd[1452]: time="2025-03-17T17:41:29.076378424Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 17 17:41:29.076564 containerd[1452]: time="2025-03-17T17:41:29.076487164Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 17 17:41:29.077020 containerd[1452]: time="2025-03-17T17:41:29.076810460Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 17 17:41:29.077433 containerd[1452]: time="2025-03-17T17:41:29.077333332Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 17 17:41:29.078062 containerd[1452]: time="2025-03-17T17:41:29.077987974Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 17 17:41:29.078062 containerd[1452]: time="2025-03-17T17:41:29.078038959Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 17 17:41:29.078166 containerd[1452]: time="2025-03-17T17:41:29.078050174Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 17 17:41:29.079553 containerd[1452]: time="2025-03-17T17:41:29.079452016Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 17 17:41:29.098081 systemd[1]: Started cri-containerd-775dd0246e6e5bead1d28a97819026838e5232776f111e04f46bb5a7edf6b515.scope - libcontainer container 775dd0246e6e5bead1d28a97819026838e5232776f111e04f46bb5a7edf6b515. 
Mar 17 17:41:29.102023 systemd[1]: Started cri-containerd-6aaee05572dd12297c362c7835879d8c574737ce7d905abfe76a9facd66a37ad.scope - libcontainer container 6aaee05572dd12297c362c7835879d8c574737ce7d905abfe76a9facd66a37ad. Mar 17 17:41:29.103104 systemd[1]: Started cri-containerd-778d61b6879a418a4691271c855bca2cd43d9debe0c82157825a1bfcc232c78a.scope - libcontainer container 778d61b6879a418a4691271c855bca2cd43d9debe0c82157825a1bfcc232c78a. Mar 17 17:41:29.138026 containerd[1452]: time="2025-03-17T17:41:29.137904447Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:3700e556aa2777679a324159272023f1,Namespace:kube-system,Attempt:0,} returns sandbox id \"6aaee05572dd12297c362c7835879d8c574737ce7d905abfe76a9facd66a37ad\"" Mar 17 17:41:29.141969 containerd[1452]: time="2025-03-17T17:41:29.141920811Z" level=info msg="CreateContainer within sandbox \"6aaee05572dd12297c362c7835879d8c574737ce7d905abfe76a9facd66a37ad\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Mar 17 17:41:29.145349 containerd[1452]: time="2025-03-17T17:41:29.145262467Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:cbbb394ff48414687df77e1bc213eeb5,Namespace:kube-system,Attempt:0,} returns sandbox id \"775dd0246e6e5bead1d28a97819026838e5232776f111e04f46bb5a7edf6b515\"" Mar 17 17:41:29.146187 containerd[1452]: time="2025-03-17T17:41:29.146155455Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:db6db8ea73bd81e6c49c5f23d3671f91,Namespace:kube-system,Attempt:0,} returns sandbox id \"778d61b6879a418a4691271c855bca2cd43d9debe0c82157825a1bfcc232c78a\"" Mar 17 17:41:29.148830 containerd[1452]: time="2025-03-17T17:41:29.148750832Z" level=info msg="CreateContainer within sandbox \"775dd0246e6e5bead1d28a97819026838e5232776f111e04f46bb5a7edf6b515\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Mar 17 17:41:29.148992 containerd[1452]: time="2025-03-17T17:41:29.148839506Z" level=info msg="CreateContainer within sandbox \"778d61b6879a418a4691271c855bca2cd43d9debe0c82157825a1bfcc232c78a\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Mar 17 17:41:29.161974 containerd[1452]: time="2025-03-17T17:41:29.161927333Z" level=info msg="CreateContainer within sandbox \"6aaee05572dd12297c362c7835879d8c574737ce7d905abfe76a9facd66a37ad\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"5c56f4525cfb586099a6299cad1d1e34781907d887360ae63ff357c2f351a699\"" Mar 17 17:41:29.162549 containerd[1452]: time="2025-03-17T17:41:29.162520896Z" level=info msg="StartContainer for \"5c56f4525cfb586099a6299cad1d1e34781907d887360ae63ff357c2f351a699\"" Mar 17 17:41:29.167763 containerd[1452]: time="2025-03-17T17:41:29.167724145Z" level=info msg="CreateContainer within sandbox \"775dd0246e6e5bead1d28a97819026838e5232776f111e04f46bb5a7edf6b515\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"3520f25e81d994583d7664dfb191cae2410770ef6940fdf3c6955f974014ddd9\"" Mar 17 17:41:29.168481 containerd[1452]: time="2025-03-17T17:41:29.168452682Z" level=info msg="StartContainer for \"3520f25e81d994583d7664dfb191cae2410770ef6940fdf3c6955f974014ddd9\"" Mar 17 17:41:29.168946 containerd[1452]: time="2025-03-17T17:41:29.168918921Z" level=info msg="CreateContainer within sandbox \"778d61b6879a418a4691271c855bca2cd43d9debe0c82157825a1bfcc232c78a\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id 
\"a6e6b5dcadd9c7ac1a75649b2ddc4c0de97b80a0eaa6347a11059943093d163c\"" Mar 17 17:41:29.169254 containerd[1452]: time="2025-03-17T17:41:29.169227038Z" level=info msg="StartContainer for \"a6e6b5dcadd9c7ac1a75649b2ddc4c0de97b80a0eaa6347a11059943093d163c\"" Mar 17 17:41:29.190198 systemd[1]: Started cri-containerd-5c56f4525cfb586099a6299cad1d1e34781907d887360ae63ff357c2f351a699.scope - libcontainer container 5c56f4525cfb586099a6299cad1d1e34781907d887360ae63ff357c2f351a699. Mar 17 17:41:29.194690 systemd[1]: Started cri-containerd-3520f25e81d994583d7664dfb191cae2410770ef6940fdf3c6955f974014ddd9.scope - libcontainer container 3520f25e81d994583d7664dfb191cae2410770ef6940fdf3c6955f974014ddd9. Mar 17 17:41:29.195539 systemd[1]: Started cri-containerd-a6e6b5dcadd9c7ac1a75649b2ddc4c0de97b80a0eaa6347a11059943093d163c.scope - libcontainer container a6e6b5dcadd9c7ac1a75649b2ddc4c0de97b80a0eaa6347a11059943093d163c. Mar 17 17:41:29.235538 kubelet[2126]: I0317 17:41:29.235408 2126 kubelet_node_status.go:76] "Attempting to register node" node="localhost" Mar 17 17:41:29.242747 kubelet[2126]: E0317 17:41:29.236493 2126 kubelet_node_status.go:108] "Unable to register node with API server" err="Post \"https://10.0.0.85:6443/api/v1/nodes\": dial tcp 10.0.0.85:6443: connect: connection refused" node="localhost" Mar 17 17:41:29.254542 containerd[1452]: time="2025-03-17T17:41:29.254405830Z" level=info msg="StartContainer for \"5c56f4525cfb586099a6299cad1d1e34781907d887360ae63ff357c2f351a699\" returns successfully" Mar 17 17:41:29.254542 containerd[1452]: time="2025-03-17T17:41:29.254444560Z" level=info msg="StartContainer for \"a6e6b5dcadd9c7ac1a75649b2ddc4c0de97b80a0eaa6347a11059943093d163c\" returns successfully" Mar 17 17:41:29.254542 containerd[1452]: time="2025-03-17T17:41:29.254504797Z" level=info msg="StartContainer for \"3520f25e81d994583d7664dfb191cae2410770ef6940fdf3c6955f974014ddd9\" returns successfully" Mar 17 17:41:29.396353 kubelet[2126]: E0317 17:41:29.395955 2126 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Mar 17 17:41:29.401275 kubelet[2126]: E0317 17:41:29.401053 2126 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Mar 17 17:41:29.403438 kubelet[2126]: E0317 17:41:29.403413 2126 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Mar 17 17:41:29.435547 kubelet[2126]: E0317 17:41:29.434405 2126 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://10.0.0.85:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.85:6443: connect: connection refused" logger="UnhandledError" Mar 17 17:41:30.405206 kubelet[2126]: E0317 17:41:30.405166 2126 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Mar 17 17:41:30.407001 kubelet[2126]: E0317 17:41:30.406983 2126 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Mar 17 17:41:30.839226 kubelet[2126]: I0317 17:41:30.839118 2126 kubelet_node_status.go:76] "Attempting to register 
node" node="localhost" Mar 17 17:41:31.409178 kubelet[2126]: E0317 17:41:31.409029 2126 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Mar 17 17:41:31.455351 kubelet[2126]: E0317 17:41:31.455310 2126 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"localhost\" not found" node="localhost" Mar 17 17:41:31.526144 kubelet[2126]: I0317 17:41:31.525897 2126 kubelet_node_status.go:79] "Successfully registered node" node="localhost" Mar 17 17:41:31.547731 kubelet[2126]: I0317 17:41:31.547695 2126 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Mar 17 17:41:31.604317 kubelet[2126]: E0317 17:41:31.604263 2126 kubelet.go:3202] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-localhost" Mar 17 17:41:31.604317 kubelet[2126]: I0317 17:41:31.604301 2126 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Mar 17 17:41:31.606819 kubelet[2126]: E0317 17:41:31.606420 2126 kubelet.go:3202] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-localhost" Mar 17 17:41:31.606819 kubelet[2126]: I0317 17:41:31.606446 2126 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" Mar 17 17:41:31.608461 kubelet[2126]: E0317 17:41:31.608419 2126 kubelet.go:3202] "Failed creating a mirror pod" err="pods \"kube-controller-manager-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-localhost" Mar 17 17:41:32.337873 kubelet[2126]: I0317 17:41:32.337803 2126 apiserver.go:52] "Watching apiserver" Mar 17 17:41:32.348657 kubelet[2126]: I0317 17:41:32.348620 2126 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world" Mar 17 17:41:33.436746 systemd[1]: Reload requested from client PID 2400 ('systemctl') (unit session-5.scope)... Mar 17 17:41:33.436762 systemd[1]: Reloading... Mar 17 17:41:33.504883 zram_generator::config[2444]: No configuration found. Mar 17 17:41:33.588248 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Mar 17 17:41:33.669958 systemd[1]: Reloading finished in 232 ms. Mar 17 17:41:33.688181 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Mar 17 17:41:33.704708 systemd[1]: kubelet.service: Deactivated successfully. Mar 17 17:41:33.704975 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Mar 17 17:41:33.705027 systemd[1]: kubelet.service: Consumed 1.278s CPU time, 125.5M memory peak. Mar 17 17:41:33.718135 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Mar 17 17:41:33.828669 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Mar 17 17:41:33.832526 (kubelet)[2485]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Mar 17 17:41:33.872684 kubelet[2485]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Mar 17 17:41:33.872684 kubelet[2485]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Mar 17 17:41:33.872684 kubelet[2485]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Mar 17 17:41:33.873044 kubelet[2485]: I0317 17:41:33.872735 2485 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Mar 17 17:41:33.881660 kubelet[2485]: I0317 17:41:33.881624 2485 server.go:520] "Kubelet version" kubeletVersion="v1.32.0" Mar 17 17:41:33.881660 kubelet[2485]: I0317 17:41:33.881651 2485 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Mar 17 17:41:33.882052 kubelet[2485]: I0317 17:41:33.882032 2485 server.go:954] "Client rotation is on, will bootstrap in background" Mar 17 17:41:33.883553 kubelet[2485]: I0317 17:41:33.883528 2485 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Mar 17 17:41:33.886307 kubelet[2485]: I0317 17:41:33.886168 2485 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Mar 17 17:41:33.889827 kubelet[2485]: E0317 17:41:33.889674 2485 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Mar 17 17:41:33.889827 kubelet[2485]: I0317 17:41:33.889696 2485 server.go:1421] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Mar 17 17:41:33.892947 kubelet[2485]: I0317 17:41:33.892924 2485 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Mar 17 17:41:33.893130 kubelet[2485]: I0317 17:41:33.893102 2485 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Mar 17 17:41:33.893382 kubelet[2485]: I0317 17:41:33.893130 2485 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Mar 17 17:41:33.893460 kubelet[2485]: I0317 17:41:33.893396 2485 topology_manager.go:138] "Creating topology manager with none policy" Mar 17 17:41:33.893460 kubelet[2485]: I0317 17:41:33.893406 2485 container_manager_linux.go:304] "Creating device plugin manager" Mar 17 17:41:33.893460 kubelet[2485]: I0317 17:41:33.893450 2485 state_mem.go:36] "Initialized new in-memory state store" Mar 17 17:41:33.893583 kubelet[2485]: I0317 17:41:33.893569 2485 kubelet.go:446] "Attempting to sync node with API server" Mar 17 17:41:33.893613 kubelet[2485]: I0317 17:41:33.893587 2485 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests" Mar 17 17:41:33.893613 kubelet[2485]: I0317 17:41:33.893605 2485 kubelet.go:352] "Adding apiserver pod source" Mar 17 17:41:33.893661 kubelet[2485]: I0317 17:41:33.893614 2485 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Mar 17 17:41:33.895536 kubelet[2485]: I0317 17:41:33.895491 2485 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="v1.7.23" apiVersion="v1" Mar 17 17:41:33.896113 kubelet[2485]: I0317 17:41:33.896098 2485 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Mar 17 17:41:33.899181 kubelet[2485]: I0317 17:41:33.899158 2485 watchdog_linux.go:99] "Systemd watchdog is not enabled" Mar 17 17:41:33.899344 kubelet[2485]: I0317 17:41:33.899331 2485 server.go:1287] "Started kubelet" Mar 17 17:41:33.899622 kubelet[2485]: I0317 17:41:33.899554 2485 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Mar 17 17:41:33.900334 kubelet[2485]: I0317 
17:41:33.900320 2485 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Mar 17 17:41:33.901302 kubelet[2485]: I0317 17:41:33.901144 2485 server.go:169] "Starting to listen" address="0.0.0.0" port=10250 Mar 17 17:41:33.902427 kubelet[2485]: I0317 17:41:33.902406 2485 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Mar 17 17:41:33.902680 kubelet[2485]: I0317 17:41:33.902659 2485 server.go:490] "Adding debug handlers to kubelet server" Mar 17 17:41:33.910179 kubelet[2485]: I0317 17:41:33.910142 2485 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Mar 17 17:41:33.911411 kubelet[2485]: I0317 17:41:33.910472 2485 volume_manager.go:297] "Starting Kubelet Volume Manager" Mar 17 17:41:33.911778 kubelet[2485]: I0317 17:41:33.910485 2485 desired_state_of_world_populator.go:149] "Desired state populator starts to run" Mar 17 17:41:33.911778 kubelet[2485]: E0317 17:41:33.910595 2485 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"localhost\" not found" Mar 17 17:41:33.912166 kubelet[2485]: I0317 17:41:33.911912 2485 reconciler.go:26] "Reconciler: start to sync state" Mar 17 17:41:33.915170 kubelet[2485]: I0317 17:41:33.915141 2485 factory.go:221] Registration of the systemd container factory successfully Mar 17 17:41:33.915317 kubelet[2485]: I0317 17:41:33.915244 2485 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Mar 17 17:41:33.916595 kubelet[2485]: E0317 17:41:33.916555 2485 kubelet.go:1561] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Mar 17 17:41:33.926327 kubelet[2485]: I0317 17:41:33.925917 2485 factory.go:221] Registration of the containerd container factory successfully Mar 17 17:41:33.930053 kubelet[2485]: I0317 17:41:33.930015 2485 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Mar 17 17:41:33.930872 kubelet[2485]: I0317 17:41:33.930827 2485 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Mar 17 17:41:33.930919 kubelet[2485]: I0317 17:41:33.930877 2485 status_manager.go:227] "Starting to sync pod status with apiserver" Mar 17 17:41:33.930919 kubelet[2485]: I0317 17:41:33.930900 2485 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." 
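Annotation: the NodeConfig dump a few lines above lists HardEvictionThresholds of memory.available < 100Mi, nodefs.available < 10%, nodefs.inodesFree < 5%, imagefs.available < 15% and imagefs.inodesFree < 5%, which match the kubelet's usual hard-eviction defaults, alongside the systemd cgroup driver on cgroup v2. In flag or config-file form these thresholds are normally written as "signal<quantity" pairs (--eviction-hard). The sketch below parses that form with the standard library only; it is a simplification for illustration, not the kubelet's parser.

    // eviction_sketch.go — simplified parser for --eviction-hard style expressions;
    // not the kubelet's implementation.
    package main

    import (
        "fmt"
        "strings"
    )

    type threshold struct {
        Signal string // e.g. "memory.available"
        Value  string // e.g. "100Mi" or "10%"
    }

    // parseEvictionHard turns "memory.available<100Mi,nodefs.available<10%"
    // into structured thresholds, mirroring the values logged in NodeConfig above.
    func parseEvictionHard(spec string) ([]threshold, error) {
        var out []threshold
        for _, part := range strings.Split(spec, ",") {
            signal, value, ok := strings.Cut(part, "<")
            if !ok {
                return nil, fmt.Errorf("expected signal<value, got %q", part)
            }
            out = append(out, threshold{Signal: signal, Value: value})
        }
        return out, nil
    }

    func main() {
        ts, err := parseEvictionHard("memory.available<100Mi,nodefs.available<10%,nodefs.inodesFree<5%,imagefs.available<15%,imagefs.inodesFree<5%")
        fmt.Println(ts, err)
    }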
Mar 17 17:41:33.930919 kubelet[2485]: I0317 17:41:33.930908 2485 kubelet.go:2388] "Starting kubelet main sync loop" Mar 17 17:41:33.930983 kubelet[2485]: E0317 17:41:33.930946 2485 kubelet.go:2412] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Mar 17 17:41:33.963556 kubelet[2485]: I0317 17:41:33.963460 2485 cpu_manager.go:221] "Starting CPU manager" policy="none" Mar 17 17:41:33.963556 kubelet[2485]: I0317 17:41:33.963483 2485 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Mar 17 17:41:33.963556 kubelet[2485]: I0317 17:41:33.963504 2485 state_mem.go:36] "Initialized new in-memory state store" Mar 17 17:41:33.963689 kubelet[2485]: I0317 17:41:33.963647 2485 state_mem.go:88] "Updated default CPUSet" cpuSet="" Mar 17 17:41:33.963689 kubelet[2485]: I0317 17:41:33.963659 2485 state_mem.go:96] "Updated CPUSet assignments" assignments={} Mar 17 17:41:33.963689 kubelet[2485]: I0317 17:41:33.963675 2485 policy_none.go:49] "None policy: Start" Mar 17 17:41:33.963689 kubelet[2485]: I0317 17:41:33.963683 2485 memory_manager.go:186] "Starting memorymanager" policy="None" Mar 17 17:41:33.963689 kubelet[2485]: I0317 17:41:33.963692 2485 state_mem.go:35] "Initializing new in-memory state store" Mar 17 17:41:33.963792 kubelet[2485]: I0317 17:41:33.963777 2485 state_mem.go:75] "Updated machine memory state" Mar 17 17:41:33.967942 kubelet[2485]: I0317 17:41:33.967667 2485 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Mar 17 17:41:33.967942 kubelet[2485]: I0317 17:41:33.967814 2485 eviction_manager.go:189] "Eviction manager: starting control loop" Mar 17 17:41:33.968081 kubelet[2485]: I0317 17:41:33.968038 2485 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Mar 17 17:41:33.968307 kubelet[2485]: I0317 17:41:33.968291 2485 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Mar 17 17:41:33.968707 kubelet[2485]: E0317 17:41:33.968682 2485 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." 
err="no imagefs label for configured runtime" Mar 17 17:41:34.031868 kubelet[2485]: I0317 17:41:34.031600 2485 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Mar 17 17:41:34.031868 kubelet[2485]: I0317 17:41:34.031676 2485 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" Mar 17 17:41:34.032322 kubelet[2485]: I0317 17:41:34.032130 2485 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Mar 17 17:41:34.069583 kubelet[2485]: I0317 17:41:34.069555 2485 kubelet_node_status.go:76] "Attempting to register node" node="localhost" Mar 17 17:41:34.076920 kubelet[2485]: I0317 17:41:34.076896 2485 kubelet_node_status.go:125] "Node was previously registered" node="localhost" Mar 17 17:41:34.077545 kubelet[2485]: I0317 17:41:34.077069 2485 kubelet_node_status.go:79] "Successfully registered node" node="localhost" Mar 17 17:41:34.213689 kubelet[2485]: I0317 17:41:34.213567 2485 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/cbbb394ff48414687df77e1bc213eeb5-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"cbbb394ff48414687df77e1bc213eeb5\") " pod="kube-system/kube-controller-manager-localhost" Mar 17 17:41:34.213689 kubelet[2485]: I0317 17:41:34.213626 2485 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/db6db8ea73bd81e6c49c5f23d3671f91-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"db6db8ea73bd81e6c49c5f23d3671f91\") " pod="kube-system/kube-apiserver-localhost" Mar 17 17:41:34.213689 kubelet[2485]: I0317 17:41:34.213660 2485 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/db6db8ea73bd81e6c49c5f23d3671f91-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"db6db8ea73bd81e6c49c5f23d3671f91\") " pod="kube-system/kube-apiserver-localhost" Mar 17 17:41:34.213689 kubelet[2485]: I0317 17:41:34.213680 2485 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/cbbb394ff48414687df77e1bc213eeb5-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"cbbb394ff48414687df77e1bc213eeb5\") " pod="kube-system/kube-controller-manager-localhost" Mar 17 17:41:34.213885 kubelet[2485]: I0317 17:41:34.213701 2485 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/cbbb394ff48414687df77e1bc213eeb5-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"cbbb394ff48414687df77e1bc213eeb5\") " pod="kube-system/kube-controller-manager-localhost" Mar 17 17:41:34.213885 kubelet[2485]: I0317 17:41:34.213718 2485 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/3700e556aa2777679a324159272023f1-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"3700e556aa2777679a324159272023f1\") " pod="kube-system/kube-scheduler-localhost" Mar 17 17:41:34.213885 kubelet[2485]: I0317 17:41:34.213735 2485 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: 
\"kubernetes.io/host-path/db6db8ea73bd81e6c49c5f23d3671f91-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"db6db8ea73bd81e6c49c5f23d3671f91\") " pod="kube-system/kube-apiserver-localhost" Mar 17 17:41:34.213885 kubelet[2485]: I0317 17:41:34.213766 2485 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/cbbb394ff48414687df77e1bc213eeb5-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"cbbb394ff48414687df77e1bc213eeb5\") " pod="kube-system/kube-controller-manager-localhost" Mar 17 17:41:34.213885 kubelet[2485]: I0317 17:41:34.213791 2485 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/cbbb394ff48414687df77e1bc213eeb5-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"cbbb394ff48414687df77e1bc213eeb5\") " pod="kube-system/kube-controller-manager-localhost" Mar 17 17:41:34.894876 kubelet[2485]: I0317 17:41:34.894819 2485 apiserver.go:52] "Watching apiserver" Mar 17 17:41:34.911934 kubelet[2485]: I0317 17:41:34.911901 2485 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world" Mar 17 17:41:34.947951 kubelet[2485]: I0317 17:41:34.947918 2485 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Mar 17 17:41:34.953473 kubelet[2485]: E0317 17:41:34.953429 2485 kubelet.go:3202] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost" Mar 17 17:41:34.970672 kubelet[2485]: I0317 17:41:34.970605 2485 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=0.970590072 podStartE2EDuration="970.590072ms" podCreationTimestamp="2025-03-17 17:41:34 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-03-17 17:41:34.97034359 +0000 UTC m=+1.135140504" watchObservedRunningTime="2025-03-17 17:41:34.970590072 +0000 UTC m=+1.135386986" Mar 17 17:41:34.990375 kubelet[2485]: I0317 17:41:34.990322 2485 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=0.990304721 podStartE2EDuration="990.304721ms" podCreationTimestamp="2025-03-17 17:41:34 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-03-17 17:41:34.982874858 +0000 UTC m=+1.147671812" watchObservedRunningTime="2025-03-17 17:41:34.990304721 +0000 UTC m=+1.155101675" Mar 17 17:41:34.998342 kubelet[2485]: I0317 17:41:34.997883 2485 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=0.997866831 podStartE2EDuration="997.866831ms" podCreationTimestamp="2025-03-17 17:41:34 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-03-17 17:41:34.990381772 +0000 UTC m=+1.155178726" watchObservedRunningTime="2025-03-17 17:41:34.997866831 +0000 UTC m=+1.162663785" Mar 17 17:41:35.268032 sudo[1595]: pam_unix(sudo:session): session closed for user root Mar 17 17:41:35.269306 sshd[1594]: Connection closed by 10.0.0.1 port 51130 Mar 17 17:41:35.269719 sshd-session[1591]: 
pam_unix(sshd:session): session closed for user core Mar 17 17:41:35.273134 systemd[1]: sshd@4-10.0.0.85:22-10.0.0.1:51130.service: Deactivated successfully. Mar 17 17:41:35.276412 systemd[1]: session-5.scope: Deactivated successfully. Mar 17 17:41:35.276699 systemd[1]: session-5.scope: Consumed 4.765s CPU time, 229.4M memory peak. Mar 17 17:41:35.277926 systemd-logind[1434]: Session 5 logged out. Waiting for processes to exit. Mar 17 17:41:35.278804 systemd-logind[1434]: Removed session 5. Mar 17 17:41:38.475815 kubelet[2485]: I0317 17:41:38.475787 2485 kuberuntime_manager.go:1702] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Mar 17 17:41:38.476729 containerd[1452]: time="2025-03-17T17:41:38.476551568Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Mar 17 17:41:38.477028 kubelet[2485]: I0317 17:41:38.476735 2485 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Mar 17 17:41:39.363011 systemd[1]: Created slice kubepods-besteffort-pod6dabda47_483e_433d_ac02_b4056912f563.slice - libcontainer container kubepods-besteffort-pod6dabda47_483e_433d_ac02_b4056912f563.slice. Mar 17 17:41:39.378017 systemd[1]: Created slice kubepods-burstable-pod54403657_6478_44aa_b18b_c9b756bdf53f.slice - libcontainer container kubepods-burstable-pod54403657_6478_44aa_b18b_c9b756bdf53f.slice. Mar 17 17:41:39.449324 kubelet[2485]: I0317 17:41:39.449292 2485 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni\" (UniqueName: \"kubernetes.io/host-path/54403657-6478-44aa-b18b-c9b756bdf53f-cni\") pod \"kube-flannel-ds-7jdqb\" (UID: \"54403657-6478-44aa-b18b-c9b756bdf53f\") " pod="kube-flannel/kube-flannel-ds-7jdqb" Mar 17 17:41:39.449433 kubelet[2485]: I0317 17:41:39.449330 2485 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-plugin\" (UniqueName: \"kubernetes.io/host-path/54403657-6478-44aa-b18b-c9b756bdf53f-cni-plugin\") pod \"kube-flannel-ds-7jdqb\" (UID: \"54403657-6478-44aa-b18b-c9b756bdf53f\") " pod="kube-flannel/kube-flannel-ds-7jdqb" Mar 17 17:41:39.449433 kubelet[2485]: I0317 17:41:39.449351 2485 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/6dabda47-483e-433d-ac02-b4056912f563-kube-proxy\") pod \"kube-proxy-4k8gc\" (UID: \"6dabda47-483e-433d-ac02-b4056912f563\") " pod="kube-system/kube-proxy-4k8gc" Mar 17 17:41:39.449433 kubelet[2485]: I0317 17:41:39.449374 2485 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/6dabda47-483e-433d-ac02-b4056912f563-xtables-lock\") pod \"kube-proxy-4k8gc\" (UID: \"6dabda47-483e-433d-ac02-b4056912f563\") " pod="kube-system/kube-proxy-4k8gc" Mar 17 17:41:39.449433 kubelet[2485]: I0317 17:41:39.449390 2485 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5ztxw\" (UniqueName: \"kubernetes.io/projected/6dabda47-483e-433d-ac02-b4056912f563-kube-api-access-5ztxw\") pod \"kube-proxy-4k8gc\" (UID: \"6dabda47-483e-433d-ac02-b4056912f563\") " pod="kube-system/kube-proxy-4k8gc" Mar 17 17:41:39.449433 kubelet[2485]: I0317 17:41:39.449408 2485 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: 
\"kubernetes.io/host-path/6dabda47-483e-433d-ac02-b4056912f563-lib-modules\") pod \"kube-proxy-4k8gc\" (UID: \"6dabda47-483e-433d-ac02-b4056912f563\") " pod="kube-system/kube-proxy-4k8gc" Mar 17 17:41:39.449548 kubelet[2485]: I0317 17:41:39.449440 2485 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run\" (UniqueName: \"kubernetes.io/host-path/54403657-6478-44aa-b18b-c9b756bdf53f-run\") pod \"kube-flannel-ds-7jdqb\" (UID: \"54403657-6478-44aa-b18b-c9b756bdf53f\") " pod="kube-flannel/kube-flannel-ds-7jdqb" Mar 17 17:41:39.449895 kubelet[2485]: I0317 17:41:39.449650 2485 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/54403657-6478-44aa-b18b-c9b756bdf53f-xtables-lock\") pod \"kube-flannel-ds-7jdqb\" (UID: \"54403657-6478-44aa-b18b-c9b756bdf53f\") " pod="kube-flannel/kube-flannel-ds-7jdqb" Mar 17 17:41:39.450207 kubelet[2485]: I0317 17:41:39.449929 2485 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q4jlv\" (UniqueName: \"kubernetes.io/projected/54403657-6478-44aa-b18b-c9b756bdf53f-kube-api-access-q4jlv\") pod \"kube-flannel-ds-7jdqb\" (UID: \"54403657-6478-44aa-b18b-c9b756bdf53f\") " pod="kube-flannel/kube-flannel-ds-7jdqb" Mar 17 17:41:39.450207 kubelet[2485]: I0317 17:41:39.450043 2485 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flannel-cfg\" (UniqueName: \"kubernetes.io/configmap/54403657-6478-44aa-b18b-c9b756bdf53f-flannel-cfg\") pod \"kube-flannel-ds-7jdqb\" (UID: \"54403657-6478-44aa-b18b-c9b756bdf53f\") " pod="kube-flannel/kube-flannel-ds-7jdqb" Mar 17 17:41:39.676565 containerd[1452]: time="2025-03-17T17:41:39.676519521Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-4k8gc,Uid:6dabda47-483e-433d-ac02-b4056912f563,Namespace:kube-system,Attempt:0,}" Mar 17 17:41:39.681764 containerd[1452]: time="2025-03-17T17:41:39.681729135Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-flannel-ds-7jdqb,Uid:54403657-6478-44aa-b18b-c9b756bdf53f,Namespace:kube-flannel,Attempt:0,}" Mar 17 17:41:39.695961 containerd[1452]: time="2025-03-17T17:41:39.695537716Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 17 17:41:39.696086 containerd[1452]: time="2025-03-17T17:41:39.695954362Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 17 17:41:39.696086 containerd[1452]: time="2025-03-17T17:41:39.695969890Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 17 17:41:39.696086 containerd[1452]: time="2025-03-17T17:41:39.696053531Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 17 17:41:39.704008 containerd[1452]: time="2025-03-17T17:41:39.703927261Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 17 17:41:39.704008 containerd[1452]: time="2025-03-17T17:41:39.703978486Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 17 17:41:39.704351 containerd[1452]: time="2025-03-17T17:41:39.703993293Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 17 17:41:39.704351 containerd[1452]: time="2025-03-17T17:41:39.704208920Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 17 17:41:39.716043 systemd[1]: Started cri-containerd-c57b1ece207ff790ab2fc132c340451be366ab27c8be05563048c0cd16f4f55c.scope - libcontainer container c57b1ece207ff790ab2fc132c340451be366ab27c8be05563048c0cd16f4f55c. Mar 17 17:41:39.718764 systemd[1]: Started cri-containerd-5057b9c4eafde395ce926adcf800e626075f2a8a4dc88da83a1d02cc116e3e54.scope - libcontainer container 5057b9c4eafde395ce926adcf800e626075f2a8a4dc88da83a1d02cc116e3e54. Mar 17 17:41:39.736149 containerd[1452]: time="2025-03-17T17:41:39.736108118Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-4k8gc,Uid:6dabda47-483e-433d-ac02-b4056912f563,Namespace:kube-system,Attempt:0,} returns sandbox id \"c57b1ece207ff790ab2fc132c340451be366ab27c8be05563048c0cd16f4f55c\"" Mar 17 17:41:39.741035 containerd[1452]: time="2025-03-17T17:41:39.740996093Z" level=info msg="CreateContainer within sandbox \"c57b1ece207ff790ab2fc132c340451be366ab27c8be05563048c0cd16f4f55c\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Mar 17 17:41:39.751804 containerd[1452]: time="2025-03-17T17:41:39.751768575Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-flannel-ds-7jdqb,Uid:54403657-6478-44aa-b18b-c9b756bdf53f,Namespace:kube-flannel,Attempt:0,} returns sandbox id \"5057b9c4eafde395ce926adcf800e626075f2a8a4dc88da83a1d02cc116e3e54\"" Mar 17 17:41:39.753894 containerd[1452]: time="2025-03-17T17:41:39.753263593Z" level=info msg="PullImage \"docker.io/flannel/flannel-cni-plugin:v1.1.2\"" Mar 17 17:41:39.754612 containerd[1452]: time="2025-03-17T17:41:39.754563836Z" level=info msg="CreateContainer within sandbox \"c57b1ece207ff790ab2fc132c340451be366ab27c8be05563048c0cd16f4f55c\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"1cc5badebc438f927376d8123a2dd9b7b9a69e7264903ce03974f73a98caa0bc\"" Mar 17 17:41:39.755650 containerd[1452]: time="2025-03-17T17:41:39.755620878Z" level=info msg="StartContainer for \"1cc5badebc438f927376d8123a2dd9b7b9a69e7264903ce03974f73a98caa0bc\"" Mar 17 17:41:39.780035 systemd[1]: Started cri-containerd-1cc5badebc438f927376d8123a2dd9b7b9a69e7264903ce03974f73a98caa0bc.scope - libcontainer container 1cc5badebc438f927376d8123a2dd9b7b9a69e7264903ce03974f73a98caa0bc. 
Mar 17 17:41:39.811319 containerd[1452]: time="2025-03-17T17:41:39.811264846Z" level=info msg="StartContainer for \"1cc5badebc438f927376d8123a2dd9b7b9a69e7264903ce03974f73a98caa0bc\" returns successfully" Mar 17 17:41:39.964811 kubelet[2485]: I0317 17:41:39.964660 2485 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-4k8gc" podStartSLOduration=0.964643496 podStartE2EDuration="964.643496ms" podCreationTimestamp="2025-03-17 17:41:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-03-17 17:41:39.964321657 +0000 UTC m=+6.129118611" watchObservedRunningTime="2025-03-17 17:41:39.964643496 +0000 UTC m=+6.129440450" Mar 17 17:41:40.978972 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1820458861.mount: Deactivated successfully. Mar 17 17:41:41.004343 containerd[1452]: time="2025-03-17T17:41:41.004289098Z" level=info msg="ImageCreate event name:\"docker.io/flannel/flannel-cni-plugin:v1.1.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 17 17:41:41.004711 containerd[1452]: time="2025-03-17T17:41:41.004599715Z" level=info msg="stop pulling image docker.io/flannel/flannel-cni-plugin:v1.1.2: active requests=0, bytes read=3673532" Mar 17 17:41:41.005426 containerd[1452]: time="2025-03-17T17:41:41.005396708Z" level=info msg="ImageCreate event name:\"sha256:b45062ceea496fc421523388cb91166abc7715a15c2e2cbab4e6f8c9d5dc0ab8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 17 17:41:41.007460 containerd[1452]: time="2025-03-17T17:41:41.007425364Z" level=info msg="ImageCreate event name:\"docker.io/flannel/flannel-cni-plugin@sha256:bf4b62b131666d040f35a327d906ee5a3418280b68a88d9b9c7e828057210443\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 17 17:41:41.009115 containerd[1452]: time="2025-03-17T17:41:41.009088180Z" level=info msg="Pulled image \"docker.io/flannel/flannel-cni-plugin:v1.1.2\" with image id \"sha256:b45062ceea496fc421523388cb91166abc7715a15c2e2cbab4e6f8c9d5dc0ab8\", repo tag \"docker.io/flannel/flannel-cni-plugin:v1.1.2\", repo digest \"docker.io/flannel/flannel-cni-plugin@sha256:bf4b62b131666d040f35a327d906ee5a3418280b68a88d9b9c7e828057210443\", size \"3662650\" in 1.25577132s" Mar 17 17:41:41.009195 containerd[1452]: time="2025-03-17T17:41:41.009118113Z" level=info msg="PullImage \"docker.io/flannel/flannel-cni-plugin:v1.1.2\" returns image reference \"sha256:b45062ceea496fc421523388cb91166abc7715a15c2e2cbab4e6f8c9d5dc0ab8\"" Mar 17 17:41:41.012059 containerd[1452]: time="2025-03-17T17:41:41.012030400Z" level=info msg="CreateContainer within sandbox \"5057b9c4eafde395ce926adcf800e626075f2a8a4dc88da83a1d02cc116e3e54\" for container &ContainerMetadata{Name:install-cni-plugin,Attempt:0,}" Mar 17 17:41:41.020323 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount638258780.mount: Deactivated successfully. 
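Annotation: the install-cni-plugin container created above is an init container of the kube-flannel DaemonSet. In the stock manifest its only job is to copy the flannel CNI plugin binary from the image into the host's CNI bin directory (conventionally /opt/cni/bin) and exit, which is why its scope is deactivated and the shim cleaned up a few lines further down. A minimal copy sketch under those assumptions follows; the source and destination paths are the conventional ones from the manifest, not values read from this log.

    // install_cni_plugin_sketch.go — what the init container conceptually does;
    // paths are the conventional ones from the kube-flannel manifest, not from this log.
    package main

    import (
        "io"
        "log"
        "os"
    )

    // copyExecutable copies src to dst and marks it executable,
    // roughly what "cp -f /flannel /opt/cni/bin/flannel" does in the init container.
    func copyExecutable(src, dst string) error {
        in, err := os.Open(src)
        if err != nil {
            return err
        }
        defer in.Close()
        out, err := os.OpenFile(dst, os.O_WRONLY|os.O_CREATE|os.O_TRUNC, 0o755)
        if err != nil {
            return err
        }
        defer out.Close()
        _, err = io.Copy(out, in)
        return err
    }

    func main() {
        if err := copyExecutable("/flannel", "/opt/cni/bin/flannel"); err != nil {
            log.Fatal(err)
        }
    }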
Mar 17 17:41:41.020448 containerd[1452]: time="2025-03-17T17:41:41.020406903Z" level=info msg="CreateContainer within sandbox \"5057b9c4eafde395ce926adcf800e626075f2a8a4dc88da83a1d02cc116e3e54\" for &ContainerMetadata{Name:install-cni-plugin,Attempt:0,} returns container id \"b60f40faaaa5d5ca433fd01dc9147043d7d2510120bf5b5d7d20798a24b52977\"" Mar 17 17:41:41.021086 containerd[1452]: time="2025-03-17T17:41:41.021059592Z" level=info msg="StartContainer for \"b60f40faaaa5d5ca433fd01dc9147043d7d2510120bf5b5d7d20798a24b52977\"" Mar 17 17:41:41.045995 systemd[1]: Started cri-containerd-b60f40faaaa5d5ca433fd01dc9147043d7d2510120bf5b5d7d20798a24b52977.scope - libcontainer container b60f40faaaa5d5ca433fd01dc9147043d7d2510120bf5b5d7d20798a24b52977. Mar 17 17:41:41.065715 containerd[1452]: time="2025-03-17T17:41:41.065665912Z" level=info msg="StartContainer for \"b60f40faaaa5d5ca433fd01dc9147043d7d2510120bf5b5d7d20798a24b52977\" returns successfully" Mar 17 17:41:41.077499 systemd[1]: cri-containerd-b60f40faaaa5d5ca433fd01dc9147043d7d2510120bf5b5d7d20798a24b52977.scope: Deactivated successfully. Mar 17 17:41:41.114387 containerd[1452]: time="2025-03-17T17:41:41.114312018Z" level=info msg="shim disconnected" id=b60f40faaaa5d5ca433fd01dc9147043d7d2510120bf5b5d7d20798a24b52977 namespace=k8s.io Mar 17 17:41:41.114387 containerd[1452]: time="2025-03-17T17:41:41.114366562Z" level=warning msg="cleaning up after shim disconnected" id=b60f40faaaa5d5ca433fd01dc9147043d7d2510120bf5b5d7d20798a24b52977 namespace=k8s.io Mar 17 17:41:41.114387 containerd[1452]: time="2025-03-17T17:41:41.114377567Z" level=info msg="cleaning up dead shim" namespace=k8s.io Mar 17 17:41:41.962495 containerd[1452]: time="2025-03-17T17:41:41.962450049Z" level=info msg="PullImage \"docker.io/flannel/flannel:v0.22.0\"" Mar 17 17:41:43.028301 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2011765490.mount: Deactivated successfully. 
Mar 17 17:41:44.193489 containerd[1452]: time="2025-03-17T17:41:44.193438008Z" level=info msg="ImageCreate event name:\"docker.io/flannel/flannel:v0.22.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 17 17:41:44.193947 containerd[1452]: time="2025-03-17T17:41:44.193896821Z" level=info msg="stop pulling image docker.io/flannel/flannel:v0.22.0: active requests=0, bytes read=26874261" Mar 17 17:41:44.194927 containerd[1452]: time="2025-03-17T17:41:44.194891354Z" level=info msg="ImageCreate event name:\"sha256:b3d1319ea6da12d4a1dd21a923f6a71f942a7b6e2c4763b8a3cca0725fb8aadf\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 17 17:41:44.197940 containerd[1452]: time="2025-03-17T17:41:44.197880958Z" level=info msg="ImageCreate event name:\"docker.io/flannel/flannel@sha256:5f83f1243057458e27249157394e3859cf31cc075354af150d497f2ebc8b54db\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 17 17:41:44.199148 containerd[1452]: time="2025-03-17T17:41:44.199116382Z" level=info msg="Pulled image \"docker.io/flannel/flannel:v0.22.0\" with image id \"sha256:b3d1319ea6da12d4a1dd21a923f6a71f942a7b6e2c4763b8a3cca0725fb8aadf\", repo tag \"docker.io/flannel/flannel:v0.22.0\", repo digest \"docker.io/flannel/flannel@sha256:5f83f1243057458e27249157394e3859cf31cc075354af150d497f2ebc8b54db\", size \"26863435\" in 2.236625075s" Mar 17 17:41:44.199148 containerd[1452]: time="2025-03-17T17:41:44.199148234Z" level=info msg="PullImage \"docker.io/flannel/flannel:v0.22.0\" returns image reference \"sha256:b3d1319ea6da12d4a1dd21a923f6a71f942a7b6e2c4763b8a3cca0725fb8aadf\"" Mar 17 17:41:44.202139 containerd[1452]: time="2025-03-17T17:41:44.202105785Z" level=info msg="CreateContainer within sandbox \"5057b9c4eafde395ce926adcf800e626075f2a8a4dc88da83a1d02cc116e3e54\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Mar 17 17:41:44.214246 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1510237265.mount: Deactivated successfully. Mar 17 17:41:44.214736 containerd[1452]: time="2025-03-17T17:41:44.214364791Z" level=info msg="CreateContainer within sandbox \"5057b9c4eafde395ce926adcf800e626075f2a8a4dc88da83a1d02cc116e3e54\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"fbdc52786cfea03559cc3aca62c041a2a535d14728961552082c43069c7e5c2b\"" Mar 17 17:41:44.215330 containerd[1452]: time="2025-03-17T17:41:44.215108431Z" level=info msg="StartContainer for \"fbdc52786cfea03559cc3aca62c041a2a535d14728961552082c43069c7e5c2b\"" Mar 17 17:41:44.250008 systemd[1]: Started cri-containerd-fbdc52786cfea03559cc3aca62c041a2a535d14728961552082c43069c7e5c2b.scope - libcontainer container fbdc52786cfea03559cc3aca62c041a2a535d14728961552082c43069c7e5c2b. Mar 17 17:41:44.271168 containerd[1452]: time="2025-03-17T17:41:44.271128840Z" level=info msg="StartContainer for \"fbdc52786cfea03559cc3aca62c041a2a535d14728961552082c43069c7e5c2b\" returns successfully" Mar 17 17:41:44.283841 systemd[1]: cri-containerd-fbdc52786cfea03559cc3aca62c041a2a535d14728961552082c43069c7e5c2b.scope: Deactivated successfully. Mar 17 17:41:44.300834 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-fbdc52786cfea03559cc3aca62c041a2a535d14728961552082c43069c7e5c2b-rootfs.mount: Deactivated successfully. 
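Annotation: the second init container, install-cni, conventionally drops the CNI network configuration (10-flannel.conflist) into /etc/cni/net.d so that the plugin binary installed in the previous step can be invoked when sandboxes are created. The snippet below embeds an illustrative conflist of the shape the kube-flannel manifest ships and parses it with the standard library; the concrete values are assumptions, not contents recovered from this log.

    // conflist_sketch.go — illustrative flannel CNI config handling; the JSON body
    // is the conventional kube-flannel conflist shape, not taken from this log.
    package main

    import (
        "encoding/json"
        "fmt"
    )

    const conflist = `{
      "name": "cbr0",
      "cniVersion": "0.3.1",
      "plugins": [
        {"type": "flannel", "delegate": {"hairpinMode": true, "isDefaultGateway": true}},
        {"type": "portmap", "capabilities": {"portMappings": true}}
      ]
    }`

    type netConf struct {
        Name    string `json:"name"`
        Plugins []struct {
            Type string `json:"type"`
        } `json:"plugins"`
    }

    func main() {
        var nc netConf
        if err := json.Unmarshal([]byte(conflist), &nc); err != nil {
            panic(err)
        }
        // With this file present in /etc/cni/net.d, sandbox creation calls the
        // "flannel" plugin, which in turn expects /run/flannel/subnet.env to exist.
        fmt.Println(nc.Name, len(nc.Plugins))
    }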
Mar 17 17:41:44.326304 kubelet[2485]: I0317 17:41:44.326141 2485 kubelet_node_status.go:502] "Fast updating node status as it just became ready" Mar 17 17:41:44.362180 systemd[1]: Created slice kubepods-burstable-pod6e0be397_1bb3_4c4a_8d88_7b8e030f37cc.slice - libcontainer container kubepods-burstable-pod6e0be397_1bb3_4c4a_8d88_7b8e030f37cc.slice. Mar 17 17:41:44.369760 systemd[1]: Created slice kubepods-burstable-pod699bf498_a6e8_47ce_8473_64d6d6134d04.slice - libcontainer container kubepods-burstable-pod699bf498_a6e8_47ce_8473_64d6d6134d04.slice. Mar 17 17:41:44.383034 kubelet[2485]: I0317 17:41:44.382983 2485 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/699bf498-a6e8-47ce-8473-64d6d6134d04-config-volume\") pod \"coredns-668d6bf9bc-q7fsj\" (UID: \"699bf498-a6e8-47ce-8473-64d6d6134d04\") " pod="kube-system/coredns-668d6bf9bc-q7fsj" Mar 17 17:41:44.383034 kubelet[2485]: I0317 17:41:44.383033 2485 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/6e0be397-1bb3-4c4a-8d88-7b8e030f37cc-config-volume\") pod \"coredns-668d6bf9bc-4vnkr\" (UID: \"6e0be397-1bb3-4c4a-8d88-7b8e030f37cc\") " pod="kube-system/coredns-668d6bf9bc-4vnkr" Mar 17 17:41:44.383178 kubelet[2485]: I0317 17:41:44.383053 2485 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-knl24\" (UniqueName: \"kubernetes.io/projected/699bf498-a6e8-47ce-8473-64d6d6134d04-kube-api-access-knl24\") pod \"coredns-668d6bf9bc-q7fsj\" (UID: \"699bf498-a6e8-47ce-8473-64d6d6134d04\") " pod="kube-system/coredns-668d6bf9bc-q7fsj" Mar 17 17:41:44.383178 kubelet[2485]: I0317 17:41:44.383073 2485 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tbdwj\" (UniqueName: \"kubernetes.io/projected/6e0be397-1bb3-4c4a-8d88-7b8e030f37cc-kube-api-access-tbdwj\") pod \"coredns-668d6bf9bc-4vnkr\" (UID: \"6e0be397-1bb3-4c4a-8d88-7b8e030f37cc\") " pod="kube-system/coredns-668d6bf9bc-4vnkr" Mar 17 17:41:44.402408 containerd[1452]: time="2025-03-17T17:41:44.402341103Z" level=info msg="shim disconnected" id=fbdc52786cfea03559cc3aca62c041a2a535d14728961552082c43069c7e5c2b namespace=k8s.io Mar 17 17:41:44.402408 containerd[1452]: time="2025-03-17T17:41:44.402400646Z" level=warning msg="cleaning up after shim disconnected" id=fbdc52786cfea03559cc3aca62c041a2a535d14728961552082c43069c7e5c2b namespace=k8s.io Mar 17 17:41:44.402408 containerd[1452]: time="2025-03-17T17:41:44.402408969Z" level=info msg="cleaning up dead shim" namespace=k8s.io Mar 17 17:41:44.667320 containerd[1452]: time="2025-03-17T17:41:44.667275132Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-4vnkr,Uid:6e0be397-1bb3-4c4a-8d88-7b8e030f37cc,Namespace:kube-system,Attempt:0,}" Mar 17 17:41:44.672852 containerd[1452]: time="2025-03-17T17:41:44.672813613Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-q7fsj,Uid:699bf498-a6e8-47ce-8473-64d6d6134d04,Namespace:kube-system,Attempt:0,}" Mar 17 17:41:44.753144 containerd[1452]: time="2025-03-17T17:41:44.753096579Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-4vnkr,Uid:6e0be397-1bb3-4c4a-8d88-7b8e030f37cc,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox 
\"7d58209601ebb134dbd725c4c59313b5eb3e54c150e709ea406fed15729e3542\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory" Mar 17 17:41:44.753736 kubelet[2485]: E0317 17:41:44.753688 2485 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7d58209601ebb134dbd725c4c59313b5eb3e54c150e709ea406fed15729e3542\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory" Mar 17 17:41:44.753839 kubelet[2485]: E0317 17:41:44.753773 2485 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7d58209601ebb134dbd725c4c59313b5eb3e54c150e709ea406fed15729e3542\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory" pod="kube-system/coredns-668d6bf9bc-4vnkr" Mar 17 17:41:44.753839 kubelet[2485]: E0317 17:41:44.753792 2485 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7d58209601ebb134dbd725c4c59313b5eb3e54c150e709ea406fed15729e3542\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory" pod="kube-system/coredns-668d6bf9bc-4vnkr" Mar 17 17:41:44.754270 kubelet[2485]: E0317 17:41:44.753835 2485 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-668d6bf9bc-4vnkr_kube-system(6e0be397-1bb3-4c4a-8d88-7b8e030f37cc)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-668d6bf9bc-4vnkr_kube-system(6e0be397-1bb3-4c4a-8d88-7b8e030f37cc)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"7d58209601ebb134dbd725c4c59313b5eb3e54c150e709ea406fed15729e3542\\\": plugin type=\\\"flannel\\\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory\"" pod="kube-system/coredns-668d6bf9bc-4vnkr" podUID="6e0be397-1bb3-4c4a-8d88-7b8e030f37cc" Mar 17 17:41:44.765268 containerd[1452]: time="2025-03-17T17:41:44.765187042Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-q7fsj,Uid:699bf498-a6e8-47ce-8473-64d6d6134d04,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"714c6e08368a35e6afdb25b72cd040344b439f193f3860e6d5bbce1a652ad2e3\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory" Mar 17 17:41:44.766065 kubelet[2485]: E0317 17:41:44.765899 2485 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"714c6e08368a35e6afdb25b72cd040344b439f193f3860e6d5bbce1a652ad2e3\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory" Mar 17 17:41:44.766065 kubelet[2485]: E0317 17:41:44.765949 2485 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"714c6e08368a35e6afdb25b72cd040344b439f193f3860e6d5bbce1a652ad2e3\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory" pod="kube-system/coredns-668d6bf9bc-q7fsj" Mar 17 17:41:44.766065 
kubelet[2485]: E0317 17:41:44.765967 2485 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"714c6e08368a35e6afdb25b72cd040344b439f193f3860e6d5bbce1a652ad2e3\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory" pod="kube-system/coredns-668d6bf9bc-q7fsj" Mar 17 17:41:44.766065 kubelet[2485]: E0317 17:41:44.766000 2485 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-668d6bf9bc-q7fsj_kube-system(699bf498-a6e8-47ce-8473-64d6d6134d04)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-668d6bf9bc-q7fsj_kube-system(699bf498-a6e8-47ce-8473-64d6d6134d04)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"714c6e08368a35e6afdb25b72cd040344b439f193f3860e6d5bbce1a652ad2e3\\\": plugin type=\\\"flannel\\\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory\"" pod="kube-system/coredns-668d6bf9bc-q7fsj" podUID="699bf498-a6e8-47ce-8473-64d6d6134d04" Mar 17 17:41:44.970909 containerd[1452]: time="2025-03-17T17:41:44.970154978Z" level=info msg="CreateContainer within sandbox \"5057b9c4eafde395ce926adcf800e626075f2a8a4dc88da83a1d02cc116e3e54\" for container &ContainerMetadata{Name:kube-flannel,Attempt:0,}" Mar 17 17:41:44.989300 containerd[1452]: time="2025-03-17T17:41:44.989248073Z" level=info msg="CreateContainer within sandbox \"5057b9c4eafde395ce926adcf800e626075f2a8a4dc88da83a1d02cc116e3e54\" for &ContainerMetadata{Name:kube-flannel,Attempt:0,} returns container id \"adf85ae3680d317c7fabe7bdc4ad81c498e494107b27d75a07c390b9d8fe0383\"" Mar 17 17:41:44.990512 containerd[1452]: time="2025-03-17T17:41:44.989994233Z" level=info msg="StartContainer for \"adf85ae3680d317c7fabe7bdc4ad81c498e494107b27d75a07c390b9d8fe0383\"" Mar 17 17:41:45.015018 systemd[1]: Started cri-containerd-adf85ae3680d317c7fabe7bdc4ad81c498e494107b27d75a07c390b9d8fe0383.scope - libcontainer container adf85ae3680d317c7fabe7bdc4ad81c498e494107b27d75a07c390b9d8fe0383. Mar 17 17:41:45.042302 containerd[1452]: time="2025-03-17T17:41:45.042181000Z" level=info msg="StartContainer for \"adf85ae3680d317c7fabe7bdc4ad81c498e494107b27d75a07c390b9d8fe0383\" returns successfully" Mar 17 17:41:45.981704 kubelet[2485]: I0317 17:41:45.981441 2485 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-flannel/kube-flannel-ds-7jdqb" podStartSLOduration=2.533968672 podStartE2EDuration="6.981422048s" podCreationTimestamp="2025-03-17 17:41:39 +0000 UTC" firstStartedPulling="2025-03-17 17:41:39.752699915 +0000 UTC m=+5.917496869" lastFinishedPulling="2025-03-17 17:41:44.200153291 +0000 UTC m=+10.364950245" observedRunningTime="2025-03-17 17:41:45.981229939 +0000 UTC m=+12.146026853" watchObservedRunningTime="2025-03-17 17:41:45.981422048 +0000 UTC m=+12.146219002" Mar 17 17:41:46.140509 systemd-networkd[1367]: flannel.1: Link UP Mar 17 17:41:46.140516 systemd-networkd[1367]: flannel.1: Gained carrier Mar 17 17:41:47.343994 systemd-networkd[1367]: flannel.1: Gained IPv6LL Mar 17 17:41:49.141893 update_engine[1437]: I20250317 17:41:49.141438 1437 update_attempter.cc:509] Updating boot flags... 
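[Editorial illustration] The repeated CreatePodSandbox failures above all reduce to one missing file: the flannel CNI plugin's loadFlannelSubnetEnv reads /run/flannel/subnet.env, which the kube-flannel container only writes once it is running, so the first CoreDNS sandboxes fail and kubelet retries them (they succeed later in this log). A minimal Go sketch of that read; the example values in the comment are inferred from the delegate config printed further down (/17 network, /24 node subnet, MTU 1450), not copied from the actual file.

    package main

    import (
        "bufio"
        "fmt"
        "os"
        "strings"
    )

    func main() {
        // Example contents once kube-flannel has started (inferred, not from the log):
        //   FLANNEL_NETWORK=192.168.0.0/17
        //   FLANNEL_SUBNET=192.168.0.1/24
        //   FLANNEL_MTU=1450
        //   FLANNEL_IPMASQ=true
        f, err := os.Open("/run/flannel/subnet.env")
        if err != nil {
            // This is the state the sandbox errors above report.
            fmt.Println("not ready yet:", err)
            return
        }
        defer f.Close()

        env := map[string]string{}
        sc := bufio.NewScanner(f)
        for sc.Scan() {
            if k, v, ok := strings.Cut(sc.Text(), "="); ok {
                env[k] = v
            }
        }
        fmt.Println(env["FLANNEL_SUBNET"], env["FLANNEL_MTU"])
    }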
Mar 17 17:41:49.164886 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 39 scanned by (udev-worker) (3148) Mar 17 17:41:49.205167 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 39 scanned by (udev-worker) (3149) Mar 17 17:41:49.244904 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 39 scanned by (udev-worker) (3149) Mar 17 17:41:55.932683 containerd[1452]: time="2025-03-17T17:41:55.932636966Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-q7fsj,Uid:699bf498-a6e8-47ce-8473-64d6d6134d04,Namespace:kube-system,Attempt:0,}" Mar 17 17:41:55.964234 systemd-networkd[1367]: cni0: Link UP Mar 17 17:41:55.964240 systemd-networkd[1367]: cni0: Gained carrier Mar 17 17:41:55.967515 systemd-networkd[1367]: cni0: Lost carrier Mar 17 17:41:55.971116 kernel: cni0: port 1(veth4166f0cd) entered blocking state Mar 17 17:41:55.971179 kernel: cni0: port 1(veth4166f0cd) entered disabled state Mar 17 17:41:55.971197 kernel: veth4166f0cd: entered allmulticast mode Mar 17 17:41:55.971213 kernel: veth4166f0cd: entered promiscuous mode Mar 17 17:41:55.971957 kernel: cni0: port 1(veth4166f0cd) entered blocking state Mar 17 17:41:55.971997 kernel: cni0: port 1(veth4166f0cd) entered forwarding state Mar 17 17:41:55.973762 kernel: cni0: port 1(veth4166f0cd) entered disabled state Mar 17 17:41:55.973814 systemd-networkd[1367]: veth4166f0cd: Link UP Mar 17 17:41:55.987072 kernel: cni0: port 1(veth4166f0cd) entered blocking state Mar 17 17:41:55.987129 kernel: cni0: port 1(veth4166f0cd) entered forwarding state Mar 17 17:41:55.987086 systemd-networkd[1367]: veth4166f0cd: Gained carrier Mar 17 17:41:55.987338 systemd-networkd[1367]: cni0: Gained carrier Mar 17 17:41:55.989425 containerd[1452]: map[string]interface {}{"cniVersion":"0.3.1", "hairpinMode":true, "ipMasq":false, "ipam":map[string]interface {}{"ranges":[][]map[string]interface {}{[]map[string]interface {}{map[string]interface {}{"subnet":"192.168.0.0/24"}}}, "routes":[]types.Route{types.Route{Dst:net.IPNet{IP:net.IP{0xc0, 0xa8, 0x0, 0x0}, Mask:net.IPMask{0xff, 0xff, 0x80, 0x0}}, GW:net.IP(nil)}}, "type":"host-local"}, "isDefaultGateway":true, "isGateway":true, "mtu":(*uint)(0x4000012938), "name":"cbr0", "type":"bridge"} Mar 17 17:41:55.989425 containerd[1452]: delegateAdd: netconf sent to delegate plugin: Mar 17 17:41:56.006983 containerd[1452]: {"cniVersion":"0.3.1","hairpinMode":true,"ipMasq":false,"ipam":{"ranges":[[{"subnet":"192.168.0.0/24"}]],"routes":[{"dst":"192.168.0.0/17"}],"type":"host-local"},"isDefaultGateway":true,"isGateway":true,"mtu":1450,"name":"cbr0","type":"bridge"}time="2025-03-17T17:41:56.006514247Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 17 17:41:56.006983 containerd[1452]: time="2025-03-17T17:41:56.006885965Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 17 17:41:56.006983 containerd[1452]: time="2025-03-17T17:41:56.006897487Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 17 17:41:56.007214 containerd[1452]: time="2025-03-17T17:41:56.007158302Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 17 17:41:56.031054 systemd[1]: Started cri-containerd-a5ca0d5e4311dc638112e4f5fda34796aaff02c1445993370c20db6aa30a2082.scope - libcontainer container a5ca0d5e4311dc638112e4f5fda34796aaff02c1445993370c20db6aa30a2082. Mar 17 17:41:56.041110 systemd-resolved[1371]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Mar 17 17:41:56.058901 containerd[1452]: time="2025-03-17T17:41:56.058817903Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-q7fsj,Uid:699bf498-a6e8-47ce-8473-64d6d6134d04,Namespace:kube-system,Attempt:0,} returns sandbox id \"a5ca0d5e4311dc638112e4f5fda34796aaff02c1445993370c20db6aa30a2082\"" Mar 17 17:41:56.062065 containerd[1452]: time="2025-03-17T17:41:56.062035533Z" level=info msg="CreateContainer within sandbox \"a5ca0d5e4311dc638112e4f5fda34796aaff02c1445993370c20db6aa30a2082\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Mar 17 17:41:56.073614 containerd[1452]: time="2025-03-17T17:41:56.073577858Z" level=info msg="CreateContainer within sandbox \"a5ca0d5e4311dc638112e4f5fda34796aaff02c1445993370c20db6aa30a2082\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"6738889223718786a64ef949355f1a4f7a2579c8182f918577032b24377d0367\"" Mar 17 17:41:56.074287 containerd[1452]: time="2025-03-17T17:41:56.074254759Z" level=info msg="StartContainer for \"6738889223718786a64ef949355f1a4f7a2579c8182f918577032b24377d0367\"" Mar 17 17:41:56.104061 systemd[1]: Started cri-containerd-6738889223718786a64ef949355f1a4f7a2579c8182f918577032b24377d0367.scope - libcontainer container 6738889223718786a64ef949355f1a4f7a2579c8182f918577032b24377d0367. Mar 17 17:41:56.130409 containerd[1452]: time="2025-03-17T17:41:56.130328600Z" level=info msg="StartContainer for \"6738889223718786a64ef949355f1a4f7a2579c8182f918577032b24377d0367\" returns successfully" Mar 17 17:41:56.956026 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount304156869.mount: Deactivated successfully. 
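[Editorial illustration] Two numbers in the delegateAdd dumps above are easy to misread: the route mask printed as raw bytes, and the MTU. The short sketch below shows that 0xff,0xff,0x80,0x00 is a /17 (one route covering the whole flannel network, while host-local IPAM hands this node the 192.168.0.0/24), and that an MTU of 1450 is what you would expect for VXLAN over a 1500-byte link, assuming the usual 50 bytes of encapsulation overhead.

    package main

    import (
        "fmt"
        "net"
    )

    func main() {
        // The netconf dump prints the route mask as raw bytes; /17 of 32 bits.
        ones, bits := net.IPMask{0xff, 0xff, 0x80, 0x00}.Size()
        fmt.Printf("route mask = /%d of %d bits\n", ones, bits)

        // mtu 1450 in the delegated bridge config is consistent with VXLAN
        // over a 1500-byte interface: 1500 - 50 bytes assumed overhead.
        fmt.Println(1500 - 50)
    }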
Mar 17 17:41:57.005637 kubelet[2485]: I0317 17:41:57.005566 2485 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-q7fsj" podStartSLOduration=18.005552322 podStartE2EDuration="18.005552322s" podCreationTimestamp="2025-03-17 17:41:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-03-17 17:41:57.005397371 +0000 UTC m=+23.170194325" watchObservedRunningTime="2025-03-17 17:41:57.005552322 +0000 UTC m=+23.170349276" Mar 17 17:41:57.715907 systemd-networkd[1367]: cni0: Gained IPv6LL Mar 17 17:41:57.839967 systemd-networkd[1367]: veth4166f0cd: Gained IPv6LL Mar 17 17:41:58.932251 containerd[1452]: time="2025-03-17T17:41:58.932202139Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-4vnkr,Uid:6e0be397-1bb3-4c4a-8d88-7b8e030f37cc,Namespace:kube-system,Attempt:0,}" Mar 17 17:41:58.954933 systemd-networkd[1367]: veth7b87149a: Link UP Mar 17 17:41:58.958086 kernel: cni0: port 2(veth7b87149a) entered blocking state Mar 17 17:41:58.958164 kernel: cni0: port 2(veth7b87149a) entered disabled state Mar 17 17:41:58.959230 kernel: veth7b87149a: entered allmulticast mode Mar 17 17:41:58.959278 kernel: veth7b87149a: entered promiscuous mode Mar 17 17:41:58.970743 systemd-networkd[1367]: veth7b87149a: Gained carrier Mar 17 17:41:58.970858 kernel: cni0: port 2(veth7b87149a) entered blocking state Mar 17 17:41:58.970883 kernel: cni0: port 2(veth7b87149a) entered forwarding state Mar 17 17:41:58.974558 containerd[1452]: map[string]interface {}{"cniVersion":"0.3.1", "hairpinMode":true, "ipMasq":false, "ipam":map[string]interface {}{"ranges":[][]map[string]interface {}{[]map[string]interface {}{map[string]interface {}{"subnet":"192.168.0.0/24"}}}, "routes":[]types.Route{types.Route{Dst:net.IPNet{IP:net.IP{0xc0, 0xa8, 0x0, 0x0}, Mask:net.IPMask{0xff, 0xff, 0x80, 0x0}}, GW:net.IP(nil)}}, "type":"host-local"}, "isDefaultGateway":true, "isGateway":true, "mtu":(*uint)(0x4000012938), "name":"cbr0", "type":"bridge"} Mar 17 17:41:58.974558 containerd[1452]: delegateAdd: netconf sent to delegate plugin: Mar 17 17:41:58.993955 containerd[1452]: {"cniVersion":"0.3.1","hairpinMode":true,"ipMasq":false,"ipam":{"ranges":[[{"subnet":"192.168.0.0/24"}]],"routes":[{"dst":"192.168.0.0/17"}],"type":"host-local"},"isDefaultGateway":true,"isGateway":true,"mtu":1450,"name":"cbr0","type":"bridge"}time="2025-03-17T17:41:58.993780059Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 17 17:41:58.993955 containerd[1452]: time="2025-03-17T17:41:58.993837830Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 17 17:41:58.993955 containerd[1452]: time="2025-03-17T17:41:58.993893080Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 17 17:41:58.994878 containerd[1452]: time="2025-03-17T17:41:58.993995100Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 17 17:41:59.027132 systemd[1]: Started cri-containerd-132f90f9e7162d63c9a2230e20838f6dc0d155ddbf1814c77b1930198b75e049.scope - libcontainer container 132f90f9e7162d63c9a2230e20838f6dc0d155ddbf1814c77b1930198b75e049. 
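[Editorial illustration] The pod_startup_latency_tracker entries here and above are internally consistent: the E2E duration is observedRunningTime minus podCreationTimestamp, and the SLO duration excludes the image-pull window. That is why kube-flannel-ds-7jdqb reports 2.533968672s against an E2E of 6.981422048s, while coredns-668d6bf9bc-q7fsj, which pulled nothing, reports identical SLO and E2E values of 18.005552322s. A small sketch re-deriving the flannel numbers from the timestamps in the log:

    package main

    import (
        "fmt"
        "time"
    )

    func main() {
        const layout = "2006-01-02 15:04:05.999999999 -0700 MST"
        parse := func(s string) time.Time {
            t, err := time.Parse(layout, s)
            if err != nil {
                panic(err)
            }
            return t
        }

        created := parse("2025-03-17 17:41:39 +0000 UTC")
        running := parse("2025-03-17 17:41:45.981422048 +0000 UTC")
        pullStart := parse("2025-03-17 17:41:39.752699915 +0000 UTC")
        pullEnd := parse("2025-03-17 17:41:44.200153291 +0000 UTC")

        e2e := running.Sub(created)         // 6.981422048s
        slo := e2e - pullEnd.Sub(pullStart) // 2.533968672s
        fmt.Println(e2e, slo)
    }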
Mar 17 17:41:59.042295 systemd-resolved[1371]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Mar 17 17:41:59.069520 containerd[1452]: time="2025-03-17T17:41:59.069475335Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-4vnkr,Uid:6e0be397-1bb3-4c4a-8d88-7b8e030f37cc,Namespace:kube-system,Attempt:0,} returns sandbox id \"132f90f9e7162d63c9a2230e20838f6dc0d155ddbf1814c77b1930198b75e049\"" Mar 17 17:41:59.071782 containerd[1452]: time="2025-03-17T17:41:59.071752712Z" level=info msg="CreateContainer within sandbox \"132f90f9e7162d63c9a2230e20838f6dc0d155ddbf1814c77b1930198b75e049\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Mar 17 17:41:59.086129 containerd[1452]: time="2025-03-17T17:41:59.086084176Z" level=info msg="CreateContainer within sandbox \"132f90f9e7162d63c9a2230e20838f6dc0d155ddbf1814c77b1930198b75e049\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"f16f5ab01d1ad921d187b427efde830f4831714947f06a87f2cf6625699efd83\"" Mar 17 17:41:59.086977 containerd[1452]: time="2025-03-17T17:41:59.086926130Z" level=info msg="StartContainer for \"f16f5ab01d1ad921d187b427efde830f4831714947f06a87f2cf6625699efd83\"" Mar 17 17:41:59.114064 systemd[1]: Started cri-containerd-f16f5ab01d1ad921d187b427efde830f4831714947f06a87f2cf6625699efd83.scope - libcontainer container f16f5ab01d1ad921d187b427efde830f4831714947f06a87f2cf6625699efd83. Mar 17 17:41:59.137329 containerd[1452]: time="2025-03-17T17:41:59.137282511Z" level=info msg="StartContainer for \"f16f5ab01d1ad921d187b427efde830f4831714947f06a87f2cf6625699efd83\" returns successfully" Mar 17 17:41:59.837150 systemd[1]: Started sshd@5-10.0.0.85:22-10.0.0.1:59700.service - OpenSSH per-connection server daemon (10.0.0.1:59700). Mar 17 17:41:59.877167 sshd[3446]: Accepted publickey for core from 10.0.0.1 port 59700 ssh2: RSA SHA256:5Ue/V+RoCRMkcnXRZmyQndEQOSMEwJs2XNBwCapeMHg Mar 17 17:41:59.878582 sshd-session[3446]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 17 17:41:59.882299 systemd-logind[1434]: New session 6 of user core. Mar 17 17:41:59.890212 systemd[1]: Started session-6.scope - Session 6 of User core. Mar 17 17:42:00.036835 kubelet[2485]: I0317 17:42:00.036501 2485 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-4vnkr" podStartSLOduration=21.035204477 podStartE2EDuration="21.035204477s" podCreationTimestamp="2025-03-17 17:41:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-03-17 17:42:00.034975876 +0000 UTC m=+26.199772831" watchObservedRunningTime="2025-03-17 17:42:00.035204477 +0000 UTC m=+26.200001431" Mar 17 17:42:00.040143 sshd[3448]: Connection closed by 10.0.0.1 port 59700 Mar 17 17:42:00.039126 sshd-session[3446]: pam_unix(sshd:session): session closed for user core Mar 17 17:42:00.043050 systemd[1]: sshd@5-10.0.0.85:22-10.0.0.1:59700.service: Deactivated successfully. Mar 17 17:42:00.045338 systemd[1]: session-6.scope: Deactivated successfully. Mar 17 17:42:00.048768 systemd-logind[1434]: Session 6 logged out. Waiting for processes to exit. Mar 17 17:42:00.049947 systemd-logind[1434]: Removed session 6. Mar 17 17:42:00.592000 systemd-networkd[1367]: veth7b87149a: Gained IPv6LL Mar 17 17:42:05.052332 systemd[1]: Started sshd@6-10.0.0.85:22-10.0.0.1:42788.service - OpenSSH per-connection server daemon (10.0.0.1:42788). 
Mar 17 17:42:05.097551 sshd[3501]: Accepted publickey for core from 10.0.0.1 port 42788 ssh2: RSA SHA256:5Ue/V+RoCRMkcnXRZmyQndEQOSMEwJs2XNBwCapeMHg Mar 17 17:42:05.098937 sshd-session[3501]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 17 17:42:05.103028 systemd-logind[1434]: New session 7 of user core. Mar 17 17:42:05.118050 systemd[1]: Started session-7.scope - Session 7 of User core. Mar 17 17:42:05.232379 sshd[3503]: Connection closed by 10.0.0.1 port 42788 Mar 17 17:42:05.233130 sshd-session[3501]: pam_unix(sshd:session): session closed for user core Mar 17 17:42:05.236380 systemd[1]: sshd@6-10.0.0.85:22-10.0.0.1:42788.service: Deactivated successfully. Mar 17 17:42:05.238190 systemd[1]: session-7.scope: Deactivated successfully. Mar 17 17:42:05.239490 systemd-logind[1434]: Session 7 logged out. Waiting for processes to exit. Mar 17 17:42:05.240402 systemd-logind[1434]: Removed session 7. Mar 17 17:42:10.249155 systemd[1]: Started sshd@7-10.0.0.85:22-10.0.0.1:42794.service - OpenSSH per-connection server daemon (10.0.0.1:42794). Mar 17 17:42:10.287542 sshd[3545]: Accepted publickey for core from 10.0.0.1 port 42794 ssh2: RSA SHA256:5Ue/V+RoCRMkcnXRZmyQndEQOSMEwJs2XNBwCapeMHg Mar 17 17:42:10.288768 sshd-session[3545]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 17 17:42:10.292981 systemd-logind[1434]: New session 8 of user core. Mar 17 17:42:10.302041 systemd[1]: Started session-8.scope - Session 8 of User core. Mar 17 17:42:10.419041 sshd[3547]: Connection closed by 10.0.0.1 port 42794 Mar 17 17:42:10.419743 sshd-session[3545]: pam_unix(sshd:session): session closed for user core Mar 17 17:42:10.429061 systemd[1]: sshd@7-10.0.0.85:22-10.0.0.1:42794.service: Deactivated successfully. Mar 17 17:42:10.430529 systemd[1]: session-8.scope: Deactivated successfully. Mar 17 17:42:10.431180 systemd-logind[1434]: Session 8 logged out. Waiting for processes to exit. Mar 17 17:42:10.438138 systemd[1]: Started sshd@8-10.0.0.85:22-10.0.0.1:42796.service - OpenSSH per-connection server daemon (10.0.0.1:42796). Mar 17 17:42:10.439097 systemd-logind[1434]: Removed session 8. Mar 17 17:42:10.473286 sshd[3560]: Accepted publickey for core from 10.0.0.1 port 42796 ssh2: RSA SHA256:5Ue/V+RoCRMkcnXRZmyQndEQOSMEwJs2XNBwCapeMHg Mar 17 17:42:10.474433 sshd-session[3560]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 17 17:42:10.480353 systemd-logind[1434]: New session 9 of user core. Mar 17 17:42:10.491076 systemd[1]: Started session-9.scope - Session 9 of User core. Mar 17 17:42:10.639380 sshd[3563]: Connection closed by 10.0.0.1 port 42796 Mar 17 17:42:10.641431 sshd-session[3560]: pam_unix(sshd:session): session closed for user core Mar 17 17:42:10.649388 systemd[1]: Started sshd@9-10.0.0.85:22-10.0.0.1:42800.service - OpenSSH per-connection server daemon (10.0.0.1:42800). Mar 17 17:42:10.652486 systemd[1]: sshd@8-10.0.0.85:22-10.0.0.1:42796.service: Deactivated successfully. Mar 17 17:42:10.654321 systemd[1]: session-9.scope: Deactivated successfully. Mar 17 17:42:10.660719 systemd-logind[1434]: Session 9 logged out. Waiting for processes to exit. Mar 17 17:42:10.663229 systemd-logind[1434]: Removed session 9. 
Mar 17 17:42:10.698464 sshd[3573]: Accepted publickey for core from 10.0.0.1 port 42800 ssh2: RSA SHA256:5Ue/V+RoCRMkcnXRZmyQndEQOSMEwJs2XNBwCapeMHg Mar 17 17:42:10.699623 sshd-session[3573]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 17 17:42:10.705016 systemd-logind[1434]: New session 10 of user core. Mar 17 17:42:10.717040 systemd[1]: Started session-10.scope - Session 10 of User core. Mar 17 17:42:10.833759 sshd[3578]: Connection closed by 10.0.0.1 port 42800 Mar 17 17:42:10.833616 sshd-session[3573]: pam_unix(sshd:session): session closed for user core Mar 17 17:42:10.837146 systemd[1]: sshd@9-10.0.0.85:22-10.0.0.1:42800.service: Deactivated successfully. Mar 17 17:42:10.839588 systemd[1]: session-10.scope: Deactivated successfully. Mar 17 17:42:10.841576 systemd-logind[1434]: Session 10 logged out. Waiting for processes to exit. Mar 17 17:42:10.842522 systemd-logind[1434]: Removed session 10. Mar 17 17:42:15.845667 systemd[1]: Started sshd@10-10.0.0.85:22-10.0.0.1:57714.service - OpenSSH per-connection server daemon (10.0.0.1:57714). Mar 17 17:42:15.885622 sshd[3612]: Accepted publickey for core from 10.0.0.1 port 57714 ssh2: RSA SHA256:5Ue/V+RoCRMkcnXRZmyQndEQOSMEwJs2XNBwCapeMHg Mar 17 17:42:15.886781 sshd-session[3612]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 17 17:42:15.890919 systemd-logind[1434]: New session 11 of user core. Mar 17 17:42:15.897002 systemd[1]: Started session-11.scope - Session 11 of User core. Mar 17 17:42:16.008679 sshd[3614]: Connection closed by 10.0.0.1 port 57714 Mar 17 17:42:16.009063 sshd-session[3612]: pam_unix(sshd:session): session closed for user core Mar 17 17:42:16.019031 systemd[1]: sshd@10-10.0.0.85:22-10.0.0.1:57714.service: Deactivated successfully. Mar 17 17:42:16.025600 systemd[1]: session-11.scope: Deactivated successfully. Mar 17 17:42:16.026310 systemd-logind[1434]: Session 11 logged out. Waiting for processes to exit. Mar 17 17:42:16.032169 systemd[1]: Started sshd@11-10.0.0.85:22-10.0.0.1:57716.service - OpenSSH per-connection server daemon (10.0.0.1:57716). Mar 17 17:42:16.033176 systemd-logind[1434]: Removed session 11. Mar 17 17:42:16.067681 sshd[3626]: Accepted publickey for core from 10.0.0.1 port 57716 ssh2: RSA SHA256:5Ue/V+RoCRMkcnXRZmyQndEQOSMEwJs2XNBwCapeMHg Mar 17 17:42:16.068762 sshd-session[3626]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 17 17:42:16.072901 systemd-logind[1434]: New session 12 of user core. Mar 17 17:42:16.081979 systemd[1]: Started session-12.scope - Session 12 of User core. Mar 17 17:42:16.319073 sshd[3629]: Connection closed by 10.0.0.1 port 57716 Mar 17 17:42:16.319516 sshd-session[3626]: pam_unix(sshd:session): session closed for user core Mar 17 17:42:16.332001 systemd[1]: sshd@11-10.0.0.85:22-10.0.0.1:57716.service: Deactivated successfully. Mar 17 17:42:16.333566 systemd[1]: session-12.scope: Deactivated successfully. Mar 17 17:42:16.335007 systemd-logind[1434]: Session 12 logged out. Waiting for processes to exit. Mar 17 17:42:16.341261 systemd[1]: Started sshd@12-10.0.0.85:22-10.0.0.1:57720.service - OpenSSH per-connection server daemon (10.0.0.1:57720). Mar 17 17:42:16.342334 systemd-logind[1434]: Removed session 12. 
Mar 17 17:42:16.375537 sshd[3661]: Accepted publickey for core from 10.0.0.1 port 57720 ssh2: RSA SHA256:5Ue/V+RoCRMkcnXRZmyQndEQOSMEwJs2XNBwCapeMHg Mar 17 17:42:16.376682 sshd-session[3661]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 17 17:42:16.381660 systemd-logind[1434]: New session 13 of user core. Mar 17 17:42:16.392028 systemd[1]: Started session-13.scope - Session 13 of User core. Mar 17 17:42:16.994696 sshd[3664]: Connection closed by 10.0.0.1 port 57720 Mar 17 17:42:16.995001 sshd-session[3661]: pam_unix(sshd:session): session closed for user core Mar 17 17:42:17.006731 systemd[1]: sshd@12-10.0.0.85:22-10.0.0.1:57720.service: Deactivated successfully. Mar 17 17:42:17.008234 systemd[1]: session-13.scope: Deactivated successfully. Mar 17 17:42:17.011211 systemd-logind[1434]: Session 13 logged out. Waiting for processes to exit. Mar 17 17:42:17.018230 systemd[1]: Started sshd@13-10.0.0.85:22-10.0.0.1:57730.service - OpenSSH per-connection server daemon (10.0.0.1:57730). Mar 17 17:42:17.020320 systemd-logind[1434]: Removed session 13. Mar 17 17:42:17.055535 sshd[3683]: Accepted publickey for core from 10.0.0.1 port 57730 ssh2: RSA SHA256:5Ue/V+RoCRMkcnXRZmyQndEQOSMEwJs2XNBwCapeMHg Mar 17 17:42:17.056606 sshd-session[3683]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 17 17:42:17.060598 systemd-logind[1434]: New session 14 of user core. Mar 17 17:42:17.066998 systemd[1]: Started session-14.scope - Session 14 of User core. Mar 17 17:42:17.293277 sshd[3686]: Connection closed by 10.0.0.1 port 57730 Mar 17 17:42:17.293833 sshd-session[3683]: pam_unix(sshd:session): session closed for user core Mar 17 17:42:17.306183 systemd[1]: sshd@13-10.0.0.85:22-10.0.0.1:57730.service: Deactivated successfully. Mar 17 17:42:17.309132 systemd[1]: session-14.scope: Deactivated successfully. Mar 17 17:42:17.309893 systemd-logind[1434]: Session 14 logged out. Waiting for processes to exit. Mar 17 17:42:17.321101 systemd[1]: Started sshd@14-10.0.0.85:22-10.0.0.1:57742.service - OpenSSH per-connection server daemon (10.0.0.1:57742). Mar 17 17:42:17.322111 systemd-logind[1434]: Removed session 14. Mar 17 17:42:17.361238 sshd[3697]: Accepted publickey for core from 10.0.0.1 port 57742 ssh2: RSA SHA256:5Ue/V+RoCRMkcnXRZmyQndEQOSMEwJs2XNBwCapeMHg Mar 17 17:42:17.362677 sshd-session[3697]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 17 17:42:17.366805 systemd-logind[1434]: New session 15 of user core. Mar 17 17:42:17.378003 systemd[1]: Started session-15.scope - Session 15 of User core. Mar 17 17:42:17.486893 sshd[3700]: Connection closed by 10.0.0.1 port 57742 Mar 17 17:42:17.487417 sshd-session[3697]: pam_unix(sshd:session): session closed for user core Mar 17 17:42:17.490621 systemd[1]: sshd@14-10.0.0.85:22-10.0.0.1:57742.service: Deactivated successfully. Mar 17 17:42:17.492512 systemd[1]: session-15.scope: Deactivated successfully. Mar 17 17:42:17.493219 systemd-logind[1434]: Session 15 logged out. Waiting for processes to exit. Mar 17 17:42:17.494112 systemd-logind[1434]: Removed session 15. Mar 17 17:42:22.499628 systemd[1]: Started sshd@15-10.0.0.85:22-10.0.0.1:39460.service - OpenSSH per-connection server daemon (10.0.0.1:39460). 
Mar 17 17:42:22.539468 sshd[3736]: Accepted publickey for core from 10.0.0.1 port 39460 ssh2: RSA SHA256:5Ue/V+RoCRMkcnXRZmyQndEQOSMEwJs2XNBwCapeMHg Mar 17 17:42:22.540871 sshd-session[3736]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 17 17:42:22.547119 systemd-logind[1434]: New session 16 of user core. Mar 17 17:42:22.553041 systemd[1]: Started session-16.scope - Session 16 of User core. Mar 17 17:42:22.664020 sshd[3738]: Connection closed by 10.0.0.1 port 39460 Mar 17 17:42:22.664558 sshd-session[3736]: pam_unix(sshd:session): session closed for user core Mar 17 17:42:22.668015 systemd[1]: sshd@15-10.0.0.85:22-10.0.0.1:39460.service: Deactivated successfully. Mar 17 17:42:22.669689 systemd[1]: session-16.scope: Deactivated successfully. Mar 17 17:42:22.670919 systemd-logind[1434]: Session 16 logged out. Waiting for processes to exit. Mar 17 17:42:22.672250 systemd-logind[1434]: Removed session 16. Mar 17 17:42:27.695224 systemd[1]: Started sshd@16-10.0.0.85:22-10.0.0.1:39474.service - OpenSSH per-connection server daemon (10.0.0.1:39474). Mar 17 17:42:27.731204 sshd[3773]: Accepted publickey for core from 10.0.0.1 port 39474 ssh2: RSA SHA256:5Ue/V+RoCRMkcnXRZmyQndEQOSMEwJs2XNBwCapeMHg Mar 17 17:42:27.732475 sshd-session[3773]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 17 17:42:27.737930 systemd-logind[1434]: New session 17 of user core. Mar 17 17:42:27.749064 systemd[1]: Started session-17.scope - Session 17 of User core. Mar 17 17:42:27.873044 sshd[3775]: Connection closed by 10.0.0.1 port 39474 Mar 17 17:42:27.873684 sshd-session[3773]: pam_unix(sshd:session): session closed for user core Mar 17 17:42:27.877690 systemd[1]: sshd@16-10.0.0.85:22-10.0.0.1:39474.service: Deactivated successfully. Mar 17 17:42:27.879440 systemd[1]: session-17.scope: Deactivated successfully. Mar 17 17:42:27.880318 systemd-logind[1434]: Session 17 logged out. Waiting for processes to exit. Mar 17 17:42:27.882301 systemd-logind[1434]: Removed session 17. Mar 17 17:42:32.885631 systemd[1]: Started sshd@17-10.0.0.85:22-10.0.0.1:56814.service - OpenSSH per-connection server daemon (10.0.0.1:56814). Mar 17 17:42:32.924404 sshd[3812]: Accepted publickey for core from 10.0.0.1 port 56814 ssh2: RSA SHA256:5Ue/V+RoCRMkcnXRZmyQndEQOSMEwJs2XNBwCapeMHg Mar 17 17:42:32.925603 sshd-session[3812]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 17 17:42:32.930027 systemd-logind[1434]: New session 18 of user core. Mar 17 17:42:32.939013 systemd[1]: Started session-18.scope - Session 18 of User core. Mar 17 17:42:33.044948 sshd[3814]: Connection closed by 10.0.0.1 port 56814 Mar 17 17:42:33.045609 sshd-session[3812]: pam_unix(sshd:session): session closed for user core Mar 17 17:42:33.048113 systemd[1]: sshd@17-10.0.0.85:22-10.0.0.1:56814.service: Deactivated successfully. Mar 17 17:42:33.049673 systemd[1]: session-18.scope: Deactivated successfully. Mar 17 17:42:33.052035 systemd-logind[1434]: Session 18 logged out. Waiting for processes to exit. Mar 17 17:42:33.052894 systemd-logind[1434]: Removed session 18.