Jan 29 10:54:38.941059 kernel: Booting Linux on physical CPU 0x0000000000 [0x413fd0c1]
Jan 29 10:54:38.941081 kernel: Linux version 6.6.74-flatcar (build@pony-truck.infra.kinvolk.io) (aarch64-cros-linux-gnu-gcc (Gentoo Hardened 14.2.1_p20241116 p3) 14.2.1 20241116, GNU ld (Gentoo 2.42 p6) 2.42.0) #1 SMP PREEMPT Wed Jan 29 09:30:22 -00 2025
Jan 29 10:54:38.941091 kernel: KASLR enabled
Jan 29 10:54:38.941097 kernel: efi: EFI v2.7 by EDK II
Jan 29 10:54:38.941103 kernel: efi: SMBIOS 3.0=0xdced0000 MEMATTR=0xdbbae018 ACPI 2.0=0xd9b43018 RNG=0xd9b43a18 MEMRESERVE=0xd9b40218
Jan 29 10:54:38.941109 kernel: random: crng init done
Jan 29 10:54:38.941116 kernel: secureboot: Secure boot disabled
Jan 29 10:54:38.941122 kernel: ACPI: Early table checksum verification disabled
Jan 29 10:54:38.941128 kernel: ACPI: RSDP 0x00000000D9B43018 000024 (v02 BOCHS )
Jan 29 10:54:38.941136 kernel: ACPI: XSDT 0x00000000D9B43F18 000064 (v01 BOCHS BXPC 00000001 01000013)
Jan 29 10:54:38.941142 kernel: ACPI: FACP 0x00000000D9B43B18 000114 (v06 BOCHS BXPC 00000001 BXPC 00000001)
Jan 29 10:54:38.941148 kernel: ACPI: DSDT 0x00000000D9B41018 0014A2 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Jan 29 10:54:38.941153 kernel: ACPI: APIC 0x00000000D9B43C98 0001A8 (v04 BOCHS BXPC 00000001 BXPC 00000001)
Jan 29 10:54:38.941159 kernel: ACPI: PPTT 0x00000000D9B43098 00009C (v02 BOCHS BXPC 00000001 BXPC 00000001)
Jan 29 10:54:38.941166 kernel: ACPI: GTDT 0x00000000D9B43818 000060 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Jan 29 10:54:38.941174 kernel: ACPI: MCFG 0x00000000D9B43A98 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jan 29 10:54:38.941180 kernel: ACPI: SPCR 0x00000000D9B43918 000050 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Jan 29 10:54:38.941186 kernel: ACPI: DBG2 0x00000000D9B43998 000057 (v00 BOCHS BXPC 00000001 BXPC 00000001)
Jan 29 10:54:38.941192 kernel: ACPI: IORT 0x00000000D9B43198 000080 (v03 BOCHS BXPC 00000001 BXPC 00000001)
Jan 29 10:54:38.941198 kernel: ACPI: SPCR: console: pl011,mmio,0x9000000,9600
Jan 29 10:54:38.941204 kernel: NUMA: Failed to initialise from firmware
Jan 29 10:54:38.941211 kernel: NUMA: Faking a node at [mem 0x0000000040000000-0x00000000dcffffff]
Jan 29 10:54:38.941217 kernel: NUMA: NODE_DATA [mem 0xdc958800-0xdc95dfff]
Jan 29 10:54:38.941223 kernel: Zone ranges:
Jan 29 10:54:38.941229 kernel: DMA [mem 0x0000000040000000-0x00000000dcffffff]
Jan 29 10:54:38.941236 kernel: DMA32 empty
Jan 29 10:54:38.941242 kernel: Normal empty
Jan 29 10:54:38.941248 kernel: Movable zone start for each node
Jan 29 10:54:38.941254 kernel: Early memory node ranges
Jan 29 10:54:38.941260 kernel: node 0: [mem 0x0000000040000000-0x00000000d967ffff]
Jan 29 10:54:38.941266 kernel: node 0: [mem 0x00000000d9680000-0x00000000d968ffff]
Jan 29 10:54:38.941280 kernel: node 0: [mem 0x00000000d9690000-0x00000000d976ffff]
Jan 29 10:54:38.941287 kernel: node 0: [mem 0x00000000d9770000-0x00000000d9b3ffff]
Jan 29 10:54:38.941296 kernel: node 0: [mem 0x00000000d9b40000-0x00000000dce1ffff]
Jan 29 10:54:38.941303 kernel: node 0: [mem 0x00000000dce20000-0x00000000dceaffff]
Jan 29 10:54:38.941311 kernel: node 0: [mem 0x00000000dceb0000-0x00000000dcebffff]
Jan 29 10:54:38.941324 kernel: node 0: [mem 0x00000000dcec0000-0x00000000dcfdffff]
Jan 29 10:54:38.941334 kernel: node 0: [mem 0x00000000dcfe0000-0x00000000dcffffff]
Jan 29 10:54:38.941340 kernel: Initmem setup node 0 [mem 0x0000000040000000-0x00000000dcffffff]
Jan 29 10:54:38.941347 kernel: On node 0, zone DMA: 12288 pages in unavailable ranges
Jan 29 10:54:38.941355 kernel: psci: probing for conduit method from ACPI.
Jan 29 10:54:38.941362 kernel: psci: PSCIv1.1 detected in firmware.
Jan 29 10:54:38.941369 kernel: psci: Using standard PSCI v0.2 function IDs
Jan 29 10:54:38.941377 kernel: psci: Trusted OS migration not required
Jan 29 10:54:38.941384 kernel: psci: SMC Calling Convention v1.1
Jan 29 10:54:38.941391 kernel: smccc: KVM: hypervisor services detected (0x00000000 0x00000000 0x00000000 0x00000003)
Jan 29 10:54:38.941398 kernel: percpu: Embedded 31 pages/cpu s86696 r8192 d32088 u126976
Jan 29 10:54:38.941404 kernel: pcpu-alloc: s86696 r8192 d32088 u126976 alloc=31*4096
Jan 29 10:54:38.941411 kernel: pcpu-alloc: [0] 0 [0] 1 [0] 2 [0] 3
Jan 29 10:54:38.941418 kernel: Detected PIPT I-cache on CPU0
Jan 29 10:54:38.941424 kernel: CPU features: detected: GIC system register CPU interface
Jan 29 10:54:38.941431 kernel: CPU features: detected: Hardware dirty bit management
Jan 29 10:54:38.941437 kernel: CPU features: detected: Spectre-v4
Jan 29 10:54:38.941445 kernel: CPU features: detected: Spectre-BHB
Jan 29 10:54:38.941451 kernel: CPU features: kernel page table isolation forced ON by KASLR
Jan 29 10:54:38.941458 kernel: CPU features: detected: Kernel page table isolation (KPTI)
Jan 29 10:54:38.941464 kernel: CPU features: detected: ARM erratum 1418040
Jan 29 10:54:38.941471 kernel: CPU features: detected: SSBS not fully self-synchronizing
Jan 29 10:54:38.941477 kernel: alternatives: applying boot alternatives
Jan 29 10:54:38.941484 kernel: Kernel command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected acpi=force verity.usrhash=e6957044c3256d96283265c263579aa4275d1d707b02496fcb081f5fc6356346
Jan 29 10:54:38.941492 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Jan 29 10:54:38.941499 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Jan 29 10:54:38.941506 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Jan 29 10:54:38.941513 kernel: Fallback order for Node 0: 0
Jan 29 10:54:38.941520 kernel: Built 1 zonelists, mobility grouping on. Total pages: 633024
Jan 29 10:54:38.941527 kernel: Policy zone: DMA
Jan 29 10:54:38.941533 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Jan 29 10:54:38.941540 kernel: software IO TLB: area num 4.
Jan 29 10:54:38.941547 kernel: software IO TLB: mapped [mem 0x00000000d2e00000-0x00000000d6e00000] (64MB)
Jan 29 10:54:38.941553 kernel: Memory: 2385940K/2572288K available (10304K kernel code, 2186K rwdata, 8092K rodata, 39936K init, 897K bss, 186348K reserved, 0K cma-reserved)
Jan 29 10:54:38.941560 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1
Jan 29 10:54:38.941567 kernel: rcu: Preemptible hierarchical RCU implementation.
Jan 29 10:54:38.941574 kernel: rcu: RCU event tracing is enabled.
Jan 29 10:54:38.941581 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4.
Jan 29 10:54:38.941588 kernel: Trampoline variant of Tasks RCU enabled.
Jan 29 10:54:38.941594 kernel: Tracing variant of Tasks RCU enabled.
Jan 29 10:54:38.941602 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Jan 29 10:54:38.941608 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4
Jan 29 10:54:38.941615 kernel: NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0
Jan 29 10:54:38.941621 kernel: GICv3: 256 SPIs implemented
Jan 29 10:54:38.941628 kernel: GICv3: 0 Extended SPIs implemented
Jan 29 10:54:38.941634 kernel: Root IRQ handler: gic_handle_irq
Jan 29 10:54:38.941640 kernel: GICv3: GICv3 features: 16 PPIs, DirectLPI
Jan 29 10:54:38.941647 kernel: GICv3: CPU0: found redistributor 0 region 0:0x00000000080a0000
Jan 29 10:54:38.941653 kernel: ITS [mem 0x08080000-0x0809ffff]
Jan 29 10:54:38.941660 kernel: ITS@0x0000000008080000: allocated 8192 Devices @400c0000 (indirect, esz 8, psz 64K, shr 1)
Jan 29 10:54:38.941666 kernel: ITS@0x0000000008080000: allocated 8192 Interrupt Collections @400d0000 (flat, esz 8, psz 64K, shr 1)
Jan 29 10:54:38.941674 kernel: GICv3: using LPI property table @0x00000000400f0000
Jan 29 10:54:38.941681 kernel: GICv3: CPU0: using allocated LPI pending table @0x0000000040100000
Jan 29 10:54:38.941687 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Jan 29 10:54:38.941694 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Jan 29 10:54:38.941700 kernel: arch_timer: cp15 timer(s) running at 25.00MHz (virt).
Jan 29 10:54:38.941707 kernel: clocksource: arch_sys_counter: mask: 0xffffffffffffff max_cycles: 0x5c40939b5, max_idle_ns: 440795202646 ns
Jan 29 10:54:38.941714 kernel: sched_clock: 56 bits at 25MHz, resolution 40ns, wraps every 4398046511100ns
Jan 29 10:54:38.941730 kernel: arm-pv: using stolen time PV
Jan 29 10:54:38.941737 kernel: Console: colour dummy device 80x25
Jan 29 10:54:38.941744 kernel: ACPI: Core revision 20230628
Jan 29 10:54:38.941751 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 50.00 BogoMIPS (lpj=25000)
Jan 29 10:54:38.941759 kernel: pid_max: default: 32768 minimum: 301
Jan 29 10:54:38.941766 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity
Jan 29 10:54:38.941773 kernel: landlock: Up and running.
Jan 29 10:54:38.941779 kernel: SELinux: Initializing.
Jan 29 10:54:38.941786 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Jan 29 10:54:38.941792 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Jan 29 10:54:38.941799 kernel: RCU Tasks: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Jan 29 10:54:38.941806 kernel: RCU Tasks Trace: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Jan 29 10:54:38.941813 kernel: rcu: Hierarchical SRCU implementation.
Jan 29 10:54:38.941821 kernel: rcu: Max phase no-delay instances is 400.
Jan 29 10:54:38.941828 kernel: Platform MSI: ITS@0x8080000 domain created
Jan 29 10:54:38.941834 kernel: PCI/MSI: ITS@0x8080000 domain created
Jan 29 10:54:38.941841 kernel: Remapping and enabling EFI services.
Jan 29 10:54:38.941848 kernel: smp: Bringing up secondary CPUs ...
Jan 29 10:54:38.941855 kernel: Detected PIPT I-cache on CPU1
Jan 29 10:54:38.941861 kernel: GICv3: CPU1: found redistributor 1 region 0:0x00000000080c0000
Jan 29 10:54:38.941868 kernel: GICv3: CPU1: using allocated LPI pending table @0x0000000040110000
Jan 29 10:54:38.941875 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Jan 29 10:54:38.941883 kernel: CPU1: Booted secondary processor 0x0000000001 [0x413fd0c1]
Jan 29 10:54:38.941890 kernel: Detected PIPT I-cache on CPU2
Jan 29 10:54:38.941902 kernel: GICv3: CPU2: found redistributor 2 region 0:0x00000000080e0000
Jan 29 10:54:38.941911 kernel: GICv3: CPU2: using allocated LPI pending table @0x0000000040120000
Jan 29 10:54:38.941918 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Jan 29 10:54:38.941925 kernel: CPU2: Booted secondary processor 0x0000000002 [0x413fd0c1]
Jan 29 10:54:38.941932 kernel: Detected PIPT I-cache on CPU3
Jan 29 10:54:38.941939 kernel: GICv3: CPU3: found redistributor 3 region 0:0x0000000008100000
Jan 29 10:54:38.941946 kernel: GICv3: CPU3: using allocated LPI pending table @0x0000000040130000
Jan 29 10:54:38.941954 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Jan 29 10:54:38.941961 kernel: CPU3: Booted secondary processor 0x0000000003 [0x413fd0c1]
Jan 29 10:54:38.941968 kernel: smp: Brought up 1 node, 4 CPUs
Jan 29 10:54:38.941975 kernel: SMP: Total of 4 processors activated.
Jan 29 10:54:38.941982 kernel: CPU features: detected: 32-bit EL0 Support
Jan 29 10:54:38.941989 kernel: CPU features: detected: Data cache clean to the PoU not required for I/D coherence
Jan 29 10:54:38.941996 kernel: CPU features: detected: Common not Private translations
Jan 29 10:54:38.942003 kernel: CPU features: detected: CRC32 instructions
Jan 29 10:54:38.942011 kernel: CPU features: detected: Enhanced Virtualization Traps
Jan 29 10:54:38.942018 kernel: CPU features: detected: RCpc load-acquire (LDAPR)
Jan 29 10:54:38.942025 kernel: CPU features: detected: LSE atomic instructions
Jan 29 10:54:38.942032 kernel: CPU features: detected: Privileged Access Never
Jan 29 10:54:38.942039 kernel: CPU features: detected: RAS Extension Support
Jan 29 10:54:38.942046 kernel: CPU features: detected: Speculative Store Bypassing Safe (SSBS)
Jan 29 10:54:38.942053 kernel: CPU: All CPU(s) started at EL1
Jan 29 10:54:38.942060 kernel: alternatives: applying system-wide alternatives
Jan 29 10:54:38.942067 kernel: devtmpfs: initialized
Jan 29 10:54:38.942076 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Jan 29 10:54:38.942083 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear)
Jan 29 10:54:38.942091 kernel: pinctrl core: initialized pinctrl subsystem
Jan 29 10:54:38.942098 kernel: SMBIOS 3.0.0 present.
Jan 29 10:54:38.942106 kernel: DMI: QEMU KVM Virtual Machine, BIOS unknown 02/02/2022
Jan 29 10:54:38.942113 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Jan 29 10:54:38.942120 kernel: DMA: preallocated 512 KiB GFP_KERNEL pool for atomic allocations
Jan 29 10:54:38.942128 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations
Jan 29 10:54:38.942135 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations
Jan 29 10:54:38.942144 kernel: audit: initializing netlink subsys (disabled)
Jan 29 10:54:38.942151 kernel: audit: type=2000 audit(0.018:1): state=initialized audit_enabled=0 res=1
Jan 29 10:54:38.942158 kernel: thermal_sys: Registered thermal governor 'step_wise'
Jan 29 10:54:38.942165 kernel: cpuidle: using governor menu
Jan 29 10:54:38.942172 kernel: hw-breakpoint: found 6 breakpoint and 4 watchpoint registers.
Jan 29 10:54:38.942179 kernel: ASID allocator initialised with 32768 entries
Jan 29 10:54:38.942186 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Jan 29 10:54:38.942193 kernel: Serial: AMBA PL011 UART driver
Jan 29 10:54:38.942200 kernel: Modules: 2G module region forced by RANDOMIZE_MODULE_REGION_FULL
Jan 29 10:54:38.942209 kernel: Modules: 0 pages in range for non-PLT usage
Jan 29 10:54:38.942216 kernel: Modules: 508880 pages in range for PLT usage
Jan 29 10:54:38.942223 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Jan 29 10:54:38.942230 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 1.00 GiB page
Jan 29 10:54:38.942237 kernel: HugeTLB: registered 32.0 MiB page size, pre-allocated 0 pages
Jan 29 10:54:38.942244 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 32.0 MiB page
Jan 29 10:54:38.942252 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Jan 29 10:54:38.942259 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 2.00 MiB page
Jan 29 10:54:38.942266 kernel: HugeTLB: registered 64.0 KiB page size, pre-allocated 0 pages
Jan 29 10:54:38.942274 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 64.0 KiB page
Jan 29 10:54:38.942281 kernel: ACPI: Added _OSI(Module Device)
Jan 29 10:54:38.942288 kernel: ACPI: Added _OSI(Processor Device)
Jan 29 10:54:38.942296 kernel: ACPI: Added _OSI(3.0 _SCP Extensions)
Jan 29 10:54:38.942303 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Jan 29 10:54:38.942310 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Jan 29 10:54:38.942317 kernel: ACPI: Interpreter enabled
Jan 29 10:54:38.942329 kernel: ACPI: Using GIC for interrupt routing
Jan 29 10:54:38.942336 kernel: ACPI: MCFG table detected, 1 entries
Jan 29 10:54:38.942344 kernel: ARMH0011:00: ttyAMA0 at MMIO 0x9000000 (irq = 12, base_baud = 0) is a SBSA
Jan 29 10:54:38.942354 kernel: printk: console [ttyAMA0] enabled
Jan 29 10:54:38.942361 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Jan 29 10:54:38.942502 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Jan 29 10:54:38.942576 kernel: acpi PNP0A08:00: _OSC: platform does not support [LTR]
Jan 29 10:54:38.942640 kernel: acpi PNP0A08:00: _OSC: OS now controls [PCIeHotplug PME AER PCIeCapability]
Jan 29 10:54:38.942707 kernel: acpi PNP0A08:00: ECAM area [mem 0x4010000000-0x401fffffff] reserved by PNP0C02:00
Jan 29 10:54:38.942783 kernel: acpi PNP0A08:00: ECAM at [mem 0x4010000000-0x401fffffff] for [bus 00-ff]
Jan 29 10:54:38.942796 kernel: ACPI: Remapped I/O 0x000000003eff0000 to [io 0x0000-0xffff window]
Jan 29 10:54:38.942803 kernel: PCI host bridge to bus 0000:00
Jan 29 10:54:38.942875 kernel: pci_bus 0000:00: root bus resource [mem 0x10000000-0x3efeffff window]
Jan 29 10:54:38.942944 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0xffff window]
Jan 29 10:54:38.943004 kernel: pci_bus 0000:00: root bus resource [mem 0x8000000000-0xffffffffff window]
Jan 29 10:54:38.943064 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Jan 29 10:54:38.943142 kernel: pci 0000:00:00.0: [1b36:0008] type 00 class 0x060000
Jan 29 10:54:38.943226 kernel: pci 0000:00:01.0: [1af4:1005] type 00 class 0x00ff00
Jan 29 10:54:38.943294 kernel: pci 0000:00:01.0: reg 0x10: [io 0x0000-0x001f]
Jan 29 10:54:38.943373 kernel: pci 0000:00:01.0: reg 0x14: [mem 0x10000000-0x10000fff]
Jan 29 10:54:38.943440 kernel: pci 0000:00:01.0: reg 0x20: [mem 0x8000000000-0x8000003fff 64bit pref]
Jan 29 10:54:38.943507 kernel: pci 0000:00:01.0: BAR 4: assigned [mem 0x8000000000-0x8000003fff 64bit pref]
Jan 29 10:54:38.943575 kernel: pci 0000:00:01.0: BAR 1: assigned [mem 0x10000000-0x10000fff]
Jan 29 10:54:38.943644 kernel: pci 0000:00:01.0: BAR 0: assigned [io 0x1000-0x101f]
Jan 29 10:54:38.943703 kernel: pci_bus 0000:00: resource 4 [mem 0x10000000-0x3efeffff window]
Jan 29 10:54:38.943775 kernel: pci_bus 0000:00: resource 5 [io 0x0000-0xffff window]
Jan 29 10:54:38.943834 kernel: pci_bus 0000:00: resource 6 [mem 0x8000000000-0xffffffffff window]
Jan 29 10:54:38.943844 kernel: ACPI: PCI: Interrupt link GSI0 configured for IRQ 35
Jan 29 10:54:38.943851 kernel: ACPI: PCI: Interrupt link GSI1 configured for IRQ 36
Jan 29 10:54:38.943859 kernel: ACPI: PCI: Interrupt link GSI2 configured for IRQ 37
Jan 29 10:54:38.943866 kernel: ACPI: PCI: Interrupt link GSI3 configured for IRQ 38
Jan 29 10:54:38.943876 kernel: iommu: Default domain type: Translated
Jan 29 10:54:38.943883 kernel: iommu: DMA domain TLB invalidation policy: strict mode
Jan 29 10:54:38.943890 kernel: efivars: Registered efivars operations
Jan 29 10:54:38.943897 kernel: vgaarb: loaded
Jan 29 10:54:38.943905 kernel: clocksource: Switched to clocksource arch_sys_counter
Jan 29 10:54:38.943912 kernel: VFS: Disk quotas dquot_6.6.0
Jan 29 10:54:38.943920 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Jan 29 10:54:38.943926 kernel: pnp: PnP ACPI init
Jan 29 10:54:38.943997 kernel: system 00:00: [mem 0x4010000000-0x401fffffff window] could not be reserved
Jan 29 10:54:38.944022 kernel: pnp: PnP ACPI: found 1 devices
Jan 29 10:54:38.944029 kernel: NET: Registered PF_INET protocol family
Jan 29 10:54:38.944037 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Jan 29 10:54:38.944045 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Jan 29 10:54:38.944052 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Jan 29 10:54:38.944059 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Jan 29 10:54:38.944066 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
Jan 29 10:54:38.944074 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Jan 29 10:54:38.944082 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Jan 29 10:54:38.944090 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Jan 29 10:54:38.944097 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Jan 29 10:54:38.944105 kernel: PCI: CLS 0 bytes, default 64
Jan 29 10:54:38.944112 kernel: kvm [1]: HYP mode not available
Jan 29 10:54:38.944119 kernel: Initialise system trusted keyrings
Jan 29 10:54:38.944127 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Jan 29 10:54:38.944135 kernel: Key type asymmetric registered
Jan 29 10:54:38.944142 kernel: Asymmetric key parser 'x509' registered
Jan 29 10:54:38.944150 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250)
Jan 29 10:54:38.944158 kernel: io scheduler mq-deadline registered
Jan 29 10:54:38.944165 kernel: io scheduler kyber registered
Jan 29 10:54:38.944172 kernel: io scheduler bfq registered
Jan 29 10:54:38.944179 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0C:00/input/input0
Jan 29 10:54:38.944187 kernel: ACPI: button: Power Button [PWRB]
Jan 29 10:54:38.944195 kernel: ACPI: \_SB_.PCI0.GSI1: Enabled at IRQ 36
Jan 29 10:54:38.944263 kernel: virtio-pci 0000:00:01.0: enabling device (0005 -> 0007)
Jan 29 10:54:38.944272 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Jan 29 10:54:38.944281 kernel: thunder_xcv, ver 1.0
Jan 29 10:54:38.944288 kernel: thunder_bgx, ver 1.0
Jan 29 10:54:38.944295 kernel: nicpf, ver 1.0
Jan 29 10:54:38.944302 kernel: nicvf, ver 1.0
Jan 29 10:54:38.944384 kernel: rtc-efi rtc-efi.0: registered as rtc0
Jan 29 10:54:38.944448 kernel: rtc-efi rtc-efi.0: setting system clock to 2025-01-29T10:54:38 UTC (1738148078)
Jan 29 10:54:38.944458 kernel: hid: raw HID events driver (C) Jiri Kosina
Jan 29 10:54:38.944467 kernel: hw perfevents: enabled with armv8_pmuv3_0 PMU driver, 7 counters available
Jan 29 10:54:38.944474 kernel: watchdog: Delayed init of the lockup detector failed: -19
Jan 29 10:54:38.944484 kernel: watchdog: Hard watchdog permanently disabled
Jan 29 10:54:38.944491 kernel: NET: Registered PF_INET6 protocol family
Jan 29 10:54:38.944498 kernel: Segment Routing with IPv6
Jan 29 10:54:38.944506 kernel: In-situ OAM (IOAM) with IPv6
Jan 29 10:54:38.944513 kernel: NET: Registered PF_PACKET protocol family
Jan 29 10:54:38.944520 kernel: Key type dns_resolver registered
Jan 29 10:54:38.944527 kernel: registered taskstats version 1
Jan 29 10:54:38.944534 kernel: Loading compiled-in X.509 certificates
Jan 29 10:54:38.944541 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.74-flatcar: c31663d2c680b3b306c17f44b5295280d3a2e28a'
Jan 29 10:54:38.944550 kernel: Key type .fscrypt registered
Jan 29 10:54:38.944557 kernel: Key type fscrypt-provisioning registered
Jan 29 10:54:38.944564 kernel: ima: No TPM chip found, activating TPM-bypass!
Jan 29 10:54:38.944571 kernel: ima: Allocated hash algorithm: sha1
Jan 29 10:54:38.944578 kernel: ima: No architecture policies found
Jan 29 10:54:38.944585 kernel: alg: No test for fips(ansi_cprng) (fips_ansi_cprng)
Jan 29 10:54:38.944592 kernel: clk: Disabling unused clocks
Jan 29 10:54:38.944599 kernel: Freeing unused kernel memory: 39936K
Jan 29 10:54:38.944607 kernel: Run /init as init process
Jan 29 10:54:38.944614 kernel: with arguments:
Jan 29 10:54:38.944621 kernel: /init
Jan 29 10:54:38.944628 kernel: with environment:
Jan 29 10:54:38.944635 kernel: HOME=/
Jan 29 10:54:38.944642 kernel: TERM=linux
Jan 29 10:54:38.944649 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
Jan 29 10:54:38.944658 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Jan 29 10:54:38.944669 systemd[1]: Detected virtualization kvm.
Jan 29 10:54:38.944677 systemd[1]: Detected architecture arm64.
Jan 29 10:54:38.944685 systemd[1]: Running in initrd.
Jan 29 10:54:38.944692 systemd[1]: No hostname configured, using default hostname.
Jan 29 10:54:38.944700 systemd[1]: Hostname set to .
Jan 29 10:54:38.944708 systemd[1]: Initializing machine ID from VM UUID.
Jan 29 10:54:38.944728 systemd[1]: Queued start job for default target initrd.target.
Jan 29 10:54:38.944737 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Jan 29 10:54:38.944747 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Jan 29 10:54:38.944755 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Jan 29 10:54:38.944763 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Jan 29 10:54:38.944771 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Jan 29 10:54:38.944779 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Jan 29 10:54:38.944788 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Jan 29 10:54:38.944796 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Jan 29 10:54:38.944805 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Jan 29 10:54:38.944813 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Jan 29 10:54:38.944821 systemd[1]: Reached target paths.target - Path Units.
Jan 29 10:54:38.944828 systemd[1]: Reached target slices.target - Slice Units.
Jan 29 10:54:38.944836 systemd[1]: Reached target swap.target - Swaps.
Jan 29 10:54:38.944844 systemd[1]: Reached target timers.target - Timer Units.
Jan 29 10:54:38.944851 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Jan 29 10:54:38.944859 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Jan 29 10:54:38.944866 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Jan 29 10:54:38.944876 systemd[1]: Listening on systemd-journald.socket - Journal Socket.
Jan 29 10:54:38.944883 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Jan 29 10:54:38.944891 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Jan 29 10:54:38.944903 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Jan 29 10:54:38.944911 systemd[1]: Reached target sockets.target - Socket Units.
Jan 29 10:54:38.944919 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Jan 29 10:54:38.944927 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Jan 29 10:54:38.944935 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Jan 29 10:54:38.944944 systemd[1]: Starting systemd-fsck-usr.service...
Jan 29 10:54:38.944952 systemd[1]: Starting systemd-journald.service - Journal Service...
Jan 29 10:54:38.944961 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Jan 29 10:54:38.944969 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jan 29 10:54:38.944977 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Jan 29 10:54:38.944985 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Jan 29 10:54:38.944993 systemd[1]: Finished systemd-fsck-usr.service.
Jan 29 10:54:38.945019 systemd-journald[239]: Collecting audit messages is disabled.
Jan 29 10:54:38.945037 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Jan 29 10:54:38.945047 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Jan 29 10:54:38.945054 kernel: Bridge firewalling registered
Jan 29 10:54:38.945062 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Jan 29 10:54:38.945070 systemd-journald[239]: Journal started
Jan 29 10:54:38.945092 systemd-journald[239]: Runtime Journal (/run/log/journal/1749a29b916b424a80bf28af9d818a7a) is 5.9M, max 47.3M, 41.4M free.
Jan 29 10:54:38.927222 systemd-modules-load[240]: Inserted module 'overlay'
Jan 29 10:54:38.947486 systemd[1]: Started systemd-journald.service - Journal Service.
Jan 29 10:54:38.942857 systemd-modules-load[240]: Inserted module 'br_netfilter'
Jan 29 10:54:38.949585 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Jan 29 10:54:38.950985 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Jan 29 10:54:38.962884 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Jan 29 10:54:38.967883 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Jan 29 10:54:38.969462 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Jan 29 10:54:38.972518 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Jan 29 10:54:38.980188 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Jan 29 10:54:38.982947 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jan 29 10:54:38.984424 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Jan 29 10:54:38.986512 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Jan 29 10:54:39.003895 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Jan 29 10:54:39.006275 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Jan 29 10:54:39.014630 dracut-cmdline[276]: dracut-dracut-053
Jan 29 10:54:39.017120 dracut-cmdline[276]: Using kernel command line parameters: rd.driver.pre=btrfs BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected acpi=force verity.usrhash=e6957044c3256d96283265c263579aa4275d1d707b02496fcb081f5fc6356346
Jan 29 10:54:39.032556 systemd-resolved[278]: Positive Trust Anchors:
Jan 29 10:54:39.032573 systemd-resolved[278]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Jan 29 10:54:39.032605 systemd-resolved[278]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Jan 29 10:54:39.037353 systemd-resolved[278]: Defaulting to hostname 'linux'.
Jan 29 10:54:39.038566 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Jan 29 10:54:39.042005 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Jan 29 10:54:39.085750 kernel: SCSI subsystem initialized
Jan 29 10:54:39.090734 kernel: Loading iSCSI transport class v2.0-870.
Jan 29 10:54:39.097743 kernel: iscsi: registered transport (tcp)
Jan 29 10:54:39.110771 kernel: iscsi: registered transport (qla4xxx)
Jan 29 10:54:39.110813 kernel: QLogic iSCSI HBA Driver
Jan 29 10:54:39.151874 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Jan 29 10:54:39.159897 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Jan 29 10:54:39.176751 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Jan 29 10:54:39.176809 kernel: device-mapper: uevent: version 1.0.3
Jan 29 10:54:39.177899 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com
Jan 29 10:54:39.223749 kernel: raid6: neonx8 gen() 15810 MB/s
Jan 29 10:54:39.239737 kernel: raid6: neonx4 gen() 15782 MB/s
Jan 29 10:54:39.256741 kernel: raid6: neonx2 gen() 13199 MB/s
Jan 29 10:54:39.273743 kernel: raid6: neonx1 gen() 10407 MB/s
Jan 29 10:54:39.290754 kernel: raid6: int64x8 gen() 6788 MB/s
Jan 29 10:54:39.307754 kernel: raid6: int64x4 gen() 7343 MB/s
Jan 29 10:54:39.324748 kernel: raid6: int64x2 gen() 6104 MB/s
Jan 29 10:54:39.341851 kernel: raid6: int64x1 gen() 5052 MB/s
Jan 29 10:54:39.341867 kernel: raid6: using algorithm neonx8 gen() 15810 MB/s
Jan 29 10:54:39.359904 kernel: raid6: .... xor() 11935 MB/s, rmw enabled
Jan 29 10:54:39.359921 kernel: raid6: using neon recovery algorithm
Jan 29 10:54:39.364739 kernel: xor: measuring software checksum speed
Jan 29 10:54:39.365940 kernel: 8regs : 18890 MB/sec
Jan 29 10:54:39.365952 kernel: 32regs : 20860 MB/sec
Jan 29 10:54:39.367220 kernel: arm64_neon : 27946 MB/sec
Jan 29 10:54:39.367233 kernel: xor: using function: arm64_neon (27946 MB/sec)
Jan 29 10:54:39.417741 kernel: Btrfs loaded, zoned=no, fsverity=no
Jan 29 10:54:39.427809 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Jan 29 10:54:39.435904 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Jan 29 10:54:39.446664 systemd-udevd[461]: Using default interface naming scheme 'v255'.
Jan 29 10:54:39.449747 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Jan 29 10:54:39.456898 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Jan 29 10:54:39.468824 dracut-pre-trigger[470]: rd.md=0: removing MD RAID activation
Jan 29 10:54:39.493759 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Jan 29 10:54:39.509882 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Jan 29 10:54:39.547768 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Jan 29 10:54:39.553852 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Jan 29 10:54:39.566292 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Jan 29 10:54:39.568124 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Jan 29 10:54:39.569959 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Jan 29 10:54:39.572243 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Jan 29 10:54:39.581890 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Jan 29 10:54:39.593075 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Jan 29 10:54:39.598748 kernel: virtio_blk virtio1: 1/0/0 default/read/poll queues
Jan 29 10:54:39.619683 kernel: virtio_blk virtio1: [vda] 19775488 512-byte logical blocks (10.1 GB/9.43 GiB)
Jan 29 10:54:39.620959 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Jan 29 10:54:39.620974 kernel: GPT:9289727 != 19775487
Jan 29 10:54:39.620990 kernel: GPT:Alternate GPT header not at the end of the disk.
Jan 29 10:54:39.620999 kernel: GPT:9289727 != 19775487
Jan 29 10:54:39.621009 kernel: GPT: Use GNU Parted to correct GPT errors.
Jan 29 10:54:39.621019 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Jan 29 10:54:39.604930 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Jan 29 10:54:39.605050 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jan 29 10:54:39.607056 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Jan 29 10:54:39.608275 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Jan 29 10:54:39.608458 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Jan 29 10:54:39.614127 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Jan 29 10:54:39.626318 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jan 29 10:54:39.638753 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/vda6 scanned by (udev-worker) (514)
Jan 29 10:54:39.640787 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Jan 29 10:54:39.644927 kernel: BTRFS: device fsid 1e2e5fa7-c757-4d5d-af66-73afe98fbaae devid 1 transid 39 /dev/vda3 scanned by (udev-worker) (507)
Jan 29 10:54:39.647567 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT.
Jan 29 10:54:39.654759 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM.
Jan 29 10:54:39.659309 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Jan 29 10:54:39.663256 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132.
Jan 29 10:54:39.664560 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A.
Jan 29 10:54:39.683882 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Jan 29 10:54:39.688959 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Jan 29 10:54:39.693860 disk-uuid[551]: Primary Header is updated.
Jan 29 10:54:39.693860 disk-uuid[551]: Secondary Entries is updated.
Jan 29 10:54:39.693860 disk-uuid[551]: Secondary Header is updated.
Jan 29 10:54:39.697109 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Jan 29 10:54:39.713274 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jan 29 10:54:40.707738 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Jan 29 10:54:40.708395 disk-uuid[552]: The operation has completed successfully.
Jan 29 10:54:40.737153 systemd[1]: disk-uuid.service: Deactivated successfully.
Jan 29 10:54:40.737252 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Jan 29 10:54:40.757870 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
Jan 29 10:54:40.761082 sh[573]: Success
Jan 29 10:54:40.778909 kernel: device-mapper: verity: sha256 using implementation "sha256-ce"
Jan 29 10:54:40.825304 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
Jan 29 10:54:40.827353 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
Jan 29 10:54:40.828437 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
Jan 29 10:54:40.842079 kernel: BTRFS info (device dm-0): first mount of filesystem 1e2e5fa7-c757-4d5d-af66-73afe98fbaae
Jan 29 10:54:40.842130 kernel: BTRFS info (device dm-0): using crc32c (crc32c-generic) checksum algorithm
Jan 29 10:54:40.842142 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead
Jan 29 10:54:40.843005 kernel: BTRFS info (device dm-0): disabling log replay at mount time
Jan 29 10:54:40.843766 kernel: BTRFS info (device dm-0): using free space tree
Jan 29 10:54:40.847082 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
Jan 29 10:54:40.848607 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Jan 29 10:54:40.849476 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Jan 29 10:54:40.852246 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Jan 29 10:54:40.865669 kernel: BTRFS info (device vda6): first mount of filesystem 5265f28b-8d78-4be2-8b05-2145d9ab7cfa
Jan 29 10:54:40.865742 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm
Jan 29 10:54:40.865755 kernel: BTRFS info (device vda6): using free space tree
Jan 29 10:54:40.868735 kernel: BTRFS info (device vda6): auto enabling async discard
Jan 29 10:54:40.876335 systemd[1]: mnt-oem.mount: Deactivated successfully.
Jan 29 10:54:40.878192 kernel: BTRFS info (device vda6): last unmount of filesystem 5265f28b-8d78-4be2-8b05-2145d9ab7cfa
Jan 29 10:54:40.886336 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Jan 29 10:54:40.896870 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Jan 29 10:54:40.950695 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Jan 29 10:54:40.963902 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Jan 29 10:54:40.983568 systemd-networkd[760]: lo: Link UP
Jan 29 10:54:40.983578 systemd-networkd[760]: lo: Gained carrier
Jan 29 10:54:40.984490 systemd-networkd[760]: Enumeration completed
Jan 29 10:54:40.984778 systemd[1]: Started systemd-networkd.service - Network Configuration.
Jan 29 10:54:40.985087 systemd-networkd[760]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jan 29 10:54:40.985090 systemd-networkd[760]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Jan 29 10:54:40.985857 systemd-networkd[760]: eth0: Link UP
Jan 29 10:54:40.985860 systemd-networkd[760]: eth0: Gained carrier
Jan 29 10:54:40.985866 systemd-networkd[760]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jan 29 10:54:40.986270 systemd[1]: Reached target network.target - Network.
Jan 29 10:54:41.003780 systemd-networkd[760]: eth0: DHCPv4 address 10.0.0.63/16, gateway 10.0.0.1 acquired from 10.0.0.1
Jan 29 10:54:41.005341 ignition[677]: Ignition 2.20.0
Jan 29 10:54:41.005357 ignition[677]: Stage: fetch-offline
Jan 29 10:54:41.005390 ignition[677]: no configs at "/usr/lib/ignition/base.d"
Jan 29 10:54:41.005399 ignition[677]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Jan 29 10:54:41.005546 ignition[677]: parsed url from cmdline: ""
Jan 29 10:54:41.005550 ignition[677]: no config URL provided
Jan 29 10:54:41.005555 ignition[677]: reading system config file "/usr/lib/ignition/user.ign"
Jan 29 10:54:41.005562 ignition[677]: no config at "/usr/lib/ignition/user.ign"
Jan 29 10:54:41.005588 ignition[677]: op(1): [started] loading QEMU firmware config module
Jan 29 10:54:41.005593 ignition[677]: op(1): executing: "modprobe" "qemu_fw_cfg"
Jan 29 10:54:41.018949 ignition[677]: op(1): [finished] loading QEMU firmware config module
Jan 29 10:54:41.039582 ignition[677]: parsing config with SHA512: 113fdf3431d0482d315dda64a8057d485e063c38de7ec897c71246e4ec6e5b189b5be804c142abe48b4c7fbe1e690e1e8a0a6b24ccb0ed32c81fdb292593df62
Jan 29 10:54:41.043915 unknown[677]: fetched base config from "system"
Jan 29 10:54:41.043925 unknown[677]: fetched user config from "qemu"
Jan 29 10:54:41.044278 ignition[677]: fetch-offline: fetch-offline passed
Jan 29 10:54:41.046372 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Jan 29 10:54:41.044363 ignition[677]: Ignition finished successfully
Jan 29 10:54:41.047770 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json).
Jan 29 10:54:41.057858 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Jan 29 10:54:41.067667 ignition[773]: Ignition 2.20.0
Jan 29 10:54:41.067677 ignition[773]: Stage: kargs
Jan 29 10:54:41.067866 ignition[773]: no configs at "/usr/lib/ignition/base.d"
Jan 29 10:54:41.067876 ignition[773]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Jan 29 10:54:41.071235 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Jan 29 10:54:41.068824 ignition[773]: kargs: kargs passed
Jan 29 10:54:41.068870 ignition[773]: Ignition finished successfully
Jan 29 10:54:41.080883 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Jan 29 10:54:41.091164 ignition[781]: Ignition 2.20.0
Jan 29 10:54:41.091175 ignition[781]: Stage: disks
Jan 29 10:54:41.091339 ignition[781]: no configs at "/usr/lib/ignition/base.d"
Jan 29 10:54:41.094110 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Jan 29 10:54:41.091359 ignition[781]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Jan 29 10:54:41.095304 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Jan 29 10:54:41.092187 ignition[781]: disks: disks passed
Jan 29 10:54:41.097047 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Jan 29 10:54:41.092231 ignition[781]: Ignition finished successfully
Jan 29 10:54:41.099087 systemd[1]: Reached target local-fs.target - Local File Systems.
Jan 29 10:54:41.101091 systemd[1]: Reached target sysinit.target - System Initialization.
Jan 29 10:54:41.102689 systemd[1]: Reached target basic.target - Basic System.
Jan 29 10:54:41.113859 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Jan 29 10:54:41.124903 systemd-fsck[792]: ROOT: clean, 14/553520 files, 52654/553472 blocks
Jan 29 10:54:41.130328 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Jan 29 10:54:41.138875 systemd[1]: Mounting sysroot.mount - /sysroot...
Jan 29 10:54:41.183739 kernel: EXT4-fs (vda9): mounted filesystem 88903c49-366d-43ff-90b1-141790b6e85c r/w with ordered data mode. Quota mode: none.
Jan 29 10:54:41.183914 systemd[1]: Mounted sysroot.mount - /sysroot.
Jan 29 10:54:41.185155 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Jan 29 10:54:41.200805 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Jan 29 10:54:41.202506 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Jan 29 10:54:41.203749 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met.
Jan 29 10:54:41.203786 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Jan 29 10:54:41.203808 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Jan 29 10:54:41.210178 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Jan 29 10:54:41.212488 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Jan 29 10:54:41.215848 kernel: BTRFS: device label OEM devid 1 transid 15 /dev/vda6 scanned by mount (800)
Jan 29 10:54:41.218248 kernel: BTRFS info (device vda6): first mount of filesystem 5265f28b-8d78-4be2-8b05-2145d9ab7cfa
Jan 29 10:54:41.218286 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm
Jan 29 10:54:41.219130 kernel: BTRFS info (device vda6): using free space tree
Jan 29 10:54:41.223747 kernel: BTRFS info (device vda6): auto enabling async discard
Jan 29 10:54:41.224549 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Jan 29 10:54:41.255753 initrd-setup-root[824]: cut: /sysroot/etc/passwd: No such file or directory
Jan 29 10:54:41.260093 initrd-setup-root[831]: cut: /sysroot/etc/group: No such file or directory
Jan 29 10:54:41.263462 initrd-setup-root[838]: cut: /sysroot/etc/shadow: No such file or directory
Jan 29 10:54:41.266588 initrd-setup-root[845]: cut: /sysroot/etc/gshadow: No such file or directory
Jan 29 10:54:41.345668 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Jan 29 10:54:41.357808 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Jan 29 10:54:41.360526 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Jan 29 10:54:41.365735 kernel: BTRFS info (device vda6): last unmount of filesystem 5265f28b-8d78-4be2-8b05-2145d9ab7cfa
Jan 29 10:54:41.381258 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Jan 29 10:54:41.383219 ignition[914]: INFO : Ignition 2.20.0
Jan 29 10:54:41.383219 ignition[914]: INFO : Stage: mount
Jan 29 10:54:41.383219 ignition[914]: INFO : no configs at "/usr/lib/ignition/base.d"
Jan 29 10:54:41.383219 ignition[914]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Jan 29 10:54:41.386924 ignition[914]: INFO : mount: mount passed
Jan 29 10:54:41.386924 ignition[914]: INFO : Ignition finished successfully
Jan 29 10:54:41.386772 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Jan 29 10:54:41.397818 systemd[1]: Starting ignition-files.service - Ignition (files)...
Jan 29 10:54:41.839275 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Jan 29 10:54:41.847897 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Jan 29 10:54:41.854734 kernel: BTRFS: device label OEM devid 1 transid 16 /dev/vda6 scanned by mount (927)
Jan 29 10:54:41.856930 kernel: BTRFS info (device vda6): first mount of filesystem 5265f28b-8d78-4be2-8b05-2145d9ab7cfa
Jan 29 10:54:41.856946 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm
Jan 29 10:54:41.856956 kernel: BTRFS info (device vda6): using free space tree
Jan 29 10:54:41.859731 kernel: BTRFS info (device vda6): auto enabling async discard
Jan 29 10:54:41.861078 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Jan 29 10:54:41.881146 ignition[944]: INFO : Ignition 2.20.0
Jan 29 10:54:41.881146 ignition[944]: INFO : Stage: files
Jan 29 10:54:41.882828 ignition[944]: INFO : no configs at "/usr/lib/ignition/base.d"
Jan 29 10:54:41.882828 ignition[944]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Jan 29 10:54:41.882828 ignition[944]: DEBUG : files: compiled without relabeling support, skipping
Jan 29 10:54:41.886470 ignition[944]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Jan 29 10:54:41.886470 ignition[944]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Jan 29 10:54:41.886470 ignition[944]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Jan 29 10:54:41.886470 ignition[944]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Jan 29 10:54:41.886470 ignition[944]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Jan 29 10:54:41.885626 unknown[944]: wrote ssh authorized keys file for user: core
Jan 29 10:54:41.894090 ignition[944]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-arm64.tar.gz"
Jan 29 10:54:41.894090 ignition[944]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.13.2-linux-arm64.tar.gz: attempt #1
Jan 29 10:54:42.292990 systemd-networkd[760]: eth0: Gained IPv6LL
Jan 29 10:54:42.767768 ignition[944]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
Jan 29 10:54:45.624199 ignition[944]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-arm64.tar.gz"
Jan 29 10:54:45.626585 ignition[944]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh"
Jan 29 10:54:45.626585 ignition[944]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh"
Jan 29 10:54:45.626585 ignition[944]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml"
Jan 29 10:54:45.626585 ignition[944]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml"
Jan 29 10:54:45.626585 ignition[944]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Jan 29 10:54:45.626585 ignition[944]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Jan 29 10:54:45.626585 ignition[944]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Jan 29 10:54:45.626585 ignition[944]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Jan 29 10:54:45.626585 ignition[944]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf"
Jan 29 10:54:45.626585 ignition[944]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Jan 29 10:54:45.626585 ignition[944]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.30.1-arm64.raw"
Jan 29 10:54:45.626585 ignition[944]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.30.1-arm64.raw"
Jan 29 10:54:45.626585 ignition[944]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.30.1-arm64.raw"
Jan 29 10:54:45.626585 ignition[944]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.30.1-arm64.raw: attempt #1
Jan 29 10:54:46.012330 ignition[944]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK
Jan 29 10:54:46.543329 ignition[944]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.30.1-arm64.raw"
Jan 29 10:54:46.543329 ignition[944]: INFO : files: op(b): [started] processing unit "prepare-helm.service"
Jan 29 10:54:46.546999 ignition[944]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Jan 29 10:54:46.546999 ignition[944]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Jan 29 10:54:46.546999 ignition[944]: INFO : files: op(b): [finished] processing unit "prepare-helm.service"
Jan 29 10:54:46.546999 ignition[944]: INFO : files: op(d): [started] processing unit "coreos-metadata.service"
Jan 29 10:54:46.546999 ignition[944]: INFO : files: op(d): op(e): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Jan 29 10:54:46.546999 ignition[944]: INFO : files: op(d): op(e): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Jan 29 10:54:46.546999 ignition[944]: INFO : files: op(d): [finished] processing unit "coreos-metadata.service"
Jan 29 10:54:46.546999 ignition[944]: INFO : files: op(f): [started] setting preset to disabled for "coreos-metadata.service"
Jan 29 10:54:46.569971 ignition[944]: INFO : files: op(f): op(10): [started] removing enablement symlink(s) for "coreos-metadata.service"
Jan 29 10:54:46.573255 ignition[944]: INFO : files: op(f): op(10): [finished] removing enablement symlink(s) for "coreos-metadata.service"
Jan 29 10:54:46.574735 ignition[944]: INFO : files: op(f): [finished] setting preset to disabled for "coreos-metadata.service"
Jan 29 10:54:46.574735 ignition[944]: INFO : files: op(11): [started] setting preset to enabled for "prepare-helm.service"
Jan 29 10:54:46.574735 ignition[944]: INFO : files: op(11): [finished] setting preset to enabled for "prepare-helm.service"
Jan 29 10:54:46.574735 ignition[944]: INFO : files: createResultFile: createFiles: op(12): [started] writing file "/sysroot/etc/.ignition-result.json"
Jan 29 10:54:46.574735 ignition[944]: INFO : files: createResultFile: createFiles: op(12): [finished] writing file "/sysroot/etc/.ignition-result.json"
Jan 29 10:54:46.574735 ignition[944]: INFO : files: files passed
Jan 29 10:54:46.574735 ignition[944]: INFO : Ignition finished successfully
Jan 29 10:54:46.576601 systemd[1]: Finished ignition-files.service - Ignition (files).
Jan 29 10:54:46.586890 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Jan 29 10:54:46.589937 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Jan 29 10:54:46.592822 systemd[1]: ignition-quench.service: Deactivated successfully.
Jan 29 10:54:46.592912 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Jan 29 10:54:46.600933 initrd-setup-root-after-ignition[973]: grep: /sysroot/oem/oem-release: No such file or directory
Jan 29 10:54:46.604309 initrd-setup-root-after-ignition[975]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Jan 29 10:54:46.604309 initrd-setup-root-after-ignition[975]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Jan 29 10:54:46.607650 initrd-setup-root-after-ignition[979]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Jan 29 10:54:46.608565 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Jan 29 10:54:46.610562 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Jan 29 10:54:46.623871 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Jan 29 10:54:46.642837 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Jan 29 10:54:46.643914 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Jan 29 10:54:46.645283 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Jan 29 10:54:46.647192 systemd[1]: Reached target initrd.target - Initrd Default Target.
Jan 29 10:54:46.649035 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Jan 29 10:54:46.649822 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Jan 29 10:54:46.665709 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Jan 29 10:54:46.674896 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Jan 29 10:54:46.682371 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Jan 29 10:54:46.683633 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Jan 29 10:54:46.685751 systemd[1]: Stopped target timers.target - Timer Units.
Jan 29 10:54:46.687573 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Jan 29 10:54:46.687685 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Jan 29 10:54:46.690228 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Jan 29 10:54:46.692338 systemd[1]: Stopped target basic.target - Basic System.
Jan 29 10:54:46.693991 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Jan 29 10:54:46.695747 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Jan 29 10:54:46.697770 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Jan 29 10:54:46.699772 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Jan 29 10:54:46.701605 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
Jan 29 10:54:46.703544 systemd[1]: Stopped target sysinit.target - System Initialization.
Jan 29 10:54:46.705487 systemd[1]: Stopped target local-fs.target - Local File Systems.
Jan 29 10:54:46.707196 systemd[1]: Stopped target swap.target - Swaps.
Jan 29 10:54:46.708677 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Jan 29 10:54:46.708813 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
Jan 29 10:54:46.711155 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
Jan 29 10:54:46.713045 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Jan 29 10:54:46.714925 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
Jan 29 10:54:46.716834 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Jan 29 10:54:46.718053 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Jan 29 10:54:46.718162 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
Jan 29 10:54:46.720956 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Jan 29 10:54:46.721070 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
Jan 29 10:54:46.723099 systemd[1]: Stopped target paths.target - Path Units.
Jan 29 10:54:46.724700 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Jan 29 10:54:46.727600 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Jan 29 10:54:46.728915 systemd[1]: Stopped target slices.target - Slice Units.
Jan 29 10:54:46.731102 systemd[1]: Stopped target sockets.target - Socket Units.
Jan 29 10:54:46.732759 systemd[1]: iscsid.socket: Deactivated successfully.
Jan 29 10:54:46.732858 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
Jan 29 10:54:46.734536 systemd[1]: iscsiuio.socket: Deactivated successfully.
Jan 29 10:54:46.734620 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Jan 29 10:54:46.736197 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Jan 29 10:54:46.736304 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
Jan 29 10:54:46.738160 systemd[1]: ignition-files.service: Deactivated successfully.
Jan 29 10:54:46.738257 systemd[1]: Stopped ignition-files.service - Ignition (files).
Jan 29 10:54:46.751904 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
Jan 29 10:54:46.753527 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
Jan 29 10:54:46.754469 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Jan 29 10:54:46.754596 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
Jan 29 10:54:46.756573 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Jan 29 10:54:46.756683 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
Jan 29 10:54:46.762751 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Jan 29 10:54:46.762961 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
Jan 29 10:54:46.766446 ignition[1000]: INFO : Ignition 2.20.0
Jan 29 10:54:46.766446 ignition[1000]: INFO : Stage: umount
Jan 29 10:54:46.769182 ignition[1000]: INFO : no configs at "/usr/lib/ignition/base.d"
Jan 29 10:54:46.769182 ignition[1000]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Jan 29 10:54:46.769182 ignition[1000]: INFO : umount: umount passed
Jan 29 10:54:46.769182 ignition[1000]: INFO : Ignition finished successfully
Jan 29 10:54:46.769849 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Jan 29 10:54:46.770377 systemd[1]: ignition-mount.service: Deactivated successfully.
Jan 29 10:54:46.770489 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
Jan 29 10:54:46.773138 systemd[1]: sysroot-boot.service: Deactivated successfully.
Jan 29 10:54:46.773261 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
Jan 29 10:54:46.775487 systemd[1]: Stopped target network.target - Network.
Jan 29 10:54:46.776640 systemd[1]: ignition-disks.service: Deactivated successfully.
Jan 29 10:54:46.776735 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
Jan 29 10:54:46.778451 systemd[1]: ignition-kargs.service: Deactivated successfully.
Jan 29 10:54:46.778493 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
Jan 29 10:54:46.780205 systemd[1]: ignition-setup.service: Deactivated successfully.
Jan 29 10:54:46.780246 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
Jan 29 10:54:46.781886 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
Jan 29 10:54:46.781929 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
Jan 29 10:54:46.783655 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Jan 29 10:54:46.783696 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
Jan 29 10:54:46.785574 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
Jan 29 10:54:46.787784 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
Jan 29 10:54:46.797790 systemd-networkd[760]: eth0: DHCPv6 lease lost
Jan 29 10:54:46.799202 systemd[1]: systemd-resolved.service: Deactivated successfully.
Jan 29 10:54:46.799327 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
Jan 29 10:54:46.801902 systemd[1]: systemd-networkd.service: Deactivated successfully.
Jan 29 10:54:46.802035 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
Jan 29 10:54:46.804593 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Jan 29 10:54:46.804661 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
Jan 29 10:54:46.814913 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
Jan 29 10:54:46.815811 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Jan 29 10:54:46.815876 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Jan 29 10:54:46.817936 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Jan 29 10:54:46.817980 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Jan 29 10:54:46.819758 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Jan 29 10:54:46.819804 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
Jan 29 10:54:46.821961 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Jan 29 10:54:46.822005 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
Jan 29 10:54:46.824106 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
Jan 29 10:54:46.831912 systemd[1]: network-cleanup.service: Deactivated successfully.
Jan 29 10:54:46.832840 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
Jan 29 10:54:46.834066 systemd[1]: systemd-udevd.service: Deactivated successfully.
Jan 29 10:54:46.834185 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
Jan 29 10:54:46.836439 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Jan 29 10:54:46.836506 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
Jan 29 10:54:46.837671 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Jan 29 10:54:46.837707 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
Jan 29 10:54:46.839784 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Jan 29 10:54:46.839834 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
Jan 29 10:54:46.842503 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Jan 29 10:54:46.842551 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
Jan 29 10:54:46.845165 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Jan 29 10:54:46.845219 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jan 29 10:54:46.859892 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
Jan 29 10:54:46.860956 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Jan 29 10:54:46.861036 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Jan 29 10:54:46.863266 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully.
Jan 29 10:54:46.863314 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Jan 29 10:54:46.865273 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Jan 29 10:54:46.865323 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
Jan 29 10:54:46.867475 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Jan 29 10:54:46.867527 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Jan 29 10:54:46.869883 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Jan 29 10:54:46.869964 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
Jan 29 10:54:46.872278 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
Jan 29 10:54:46.874478 systemd[1]: Starting initrd-switch-root.service - Switch Root...
Jan 29 10:54:46.883552 systemd[1]: Switching root.
Jan 29 10:54:46.919362 systemd-journald[239]: Journal stopped
Jan 29 10:54:47.650281 systemd-journald[239]: Received SIGTERM from PID 1 (systemd).
Jan 29 10:54:47.650335 kernel: SELinux: policy capability network_peer_controls=1
Jan 29 10:54:47.650348 kernel: SELinux: policy capability open_perms=1
Jan 29 10:54:47.650357 kernel: SELinux: policy capability extended_socket_class=1
Jan 29 10:54:47.650368 kernel: SELinux: policy capability always_check_network=0
Jan 29 10:54:47.650381 kernel: SELinux: policy capability cgroup_seclabel=1
Jan 29 10:54:47.650390 kernel: SELinux: policy capability nnp_nosuid_transition=1
Jan 29 10:54:47.650403 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
Jan 29 10:54:47.650412 kernel: SELinux: policy capability ioctl_skip_cloexec=0
Jan 29 10:54:47.650422 kernel: audit: type=1403 audit(1738148087.062:2): auid=4294967295 ses=4294967295 lsm=selinux res=1
Jan 29 10:54:47.650442 systemd[1]: Successfully loaded SELinux policy in 34.231ms.
Jan 29 10:54:47.650458 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 8.797ms.
Jan 29 10:54:47.650469 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Jan 29 10:54:47.650480 systemd[1]: Detected virtualization kvm.
Jan 29 10:54:47.650490 systemd[1]: Detected architecture arm64.
Jan 29 10:54:47.650502 systemd[1]: Detected first boot.
Jan 29 10:54:47.650512 systemd[1]: Initializing machine ID from VM UUID.
Jan 29 10:54:47.650522 zram_generator::config[1046]: No configuration found.
Jan 29 10:54:47.650535 systemd[1]: Populated /etc with preset unit settings.
Jan 29 10:54:47.650546 systemd[1]: initrd-switch-root.service: Deactivated successfully.
Jan 29 10:54:47.650555 systemd[1]: Stopped initrd-switch-root.service - Switch Root.
Jan 29 10:54:47.650565 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
Jan 29 10:54:47.650576 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
Jan 29 10:54:47.650587 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
Jan 29 10:54:47.650598 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
Jan 29 10:54:47.650610 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
Jan 29 10:54:47.650620 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
Jan 29 10:54:47.650631 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
Jan 29 10:54:47.650641 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
Jan 29 10:54:47.650652 systemd[1]: Created slice user.slice - User and Session Slice.
Jan 29 10:54:47.650673 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Jan 29 10:54:47.650684 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Jan 29 10:54:47.650697 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
Jan 29 10:54:47.650707 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
Jan 29 10:54:47.650737 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
Jan 29 10:54:47.650748 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Jan 29 10:54:47.650759 systemd[1]: Expecting device dev-ttyAMA0.device - /dev/ttyAMA0...
Jan 29 10:54:47.650770 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Jan 29 10:54:47.650781 systemd[1]: Stopped target initrd-switch-root.target - Switch Root.
Jan 29 10:54:47.650791 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems.
Jan 29 10:54:47.650802 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System.
Jan 29 10:54:47.650815 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
Jan 29 10:54:47.650826 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Jan 29 10:54:47.650836 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Jan 29 10:54:47.650846 systemd[1]: Reached target slices.target - Slice Units.
Jan 29 10:54:47.650856 systemd[1]: Reached target swap.target - Swaps.
Jan 29 10:54:47.650867 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
Jan 29 10:54:47.650877 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
Jan 29 10:54:47.650888 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Jan 29 10:54:47.650900 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Jan 29 10:54:47.650910 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Jan 29 10:54:47.650920 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
Jan 29 10:54:47.650930 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
Jan 29 10:54:47.650940 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
Jan 29 10:54:47.650951 systemd[1]: Mounting media.mount - External Media Directory...
Jan 29 10:54:47.650960 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
Jan 29 10:54:47.650971 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
Jan 29 10:54:47.650981 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
Jan 29 10:54:47.650993 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Jan 29 10:54:47.651008 systemd[1]: Reached target machines.target - Containers.
Jan 29 10:54:47.651019 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
Jan 29 10:54:47.651029 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Jan 29 10:54:47.651039 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Jan 29 10:54:47.651051 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
Jan 29 10:54:47.651061 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Jan 29 10:54:47.651071 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Jan 29 10:54:47.651083 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Jan 29 10:54:47.651094 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
Jan 29 10:54:47.651104 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Jan 29 10:54:47.651114 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Jan 29 10:54:47.651124 systemd[1]: systemd-fsck-root.service: Deactivated successfully.
Jan 29 10:54:47.651134 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device.
Jan 29 10:54:47.651144 systemd[1]: systemd-fsck-usr.service: Deactivated successfully.
Jan 29 10:54:47.651154 systemd[1]: Stopped systemd-fsck-usr.service.
Jan 29 10:54:47.651164 kernel: fuse: init (API version 7.39)
Jan 29 10:54:47.651175 systemd[1]: Starting systemd-journald.service - Journal Service...
Jan 29 10:54:47.651185 kernel: ACPI: bus type drm_connector registered
Jan 29 10:54:47.651195 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Jan 29 10:54:47.651205 kernel: loop: module loaded
Jan 29 10:54:47.651214 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Jan 29 10:54:47.651224 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
Jan 29 10:54:47.651235 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Jan 29 10:54:47.651245 systemd[1]: verity-setup.service: Deactivated successfully.
Jan 29 10:54:47.651254 systemd[1]: Stopped verity-setup.service.
Jan 29 10:54:47.651266 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
Jan 29 10:54:47.651292 systemd-journald[1117]: Collecting audit messages is disabled.
Jan 29 10:54:47.651314 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
Jan 29 10:54:47.651324 systemd-journald[1117]: Journal started
Jan 29 10:54:47.651350 systemd-journald[1117]: Runtime Journal (/run/log/journal/1749a29b916b424a80bf28af9d818a7a) is 5.9M, max 47.3M, 41.4M free.
Jan 29 10:54:47.446524 systemd[1]: Queued start job for default target multi-user.target.
Jan 29 10:54:47.461135 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6.
Jan 29 10:54:47.461486 systemd[1]: systemd-journald.service: Deactivated successfully.
Jan 29 10:54:47.654235 systemd[1]: Started systemd-journald.service - Journal Service.
Jan 29 10:54:47.654881 systemd[1]: Mounted media.mount - External Media Directory.
Jan 29 10:54:47.655950 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
Jan 29 10:54:47.657131 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System.
Jan 29 10:54:47.658366 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp.
Jan 29 10:54:47.659606 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files.
Jan 29 10:54:47.661056 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Jan 29 10:54:47.662605 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Jan 29 10:54:47.662785 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
Jan 29 10:54:47.664157 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Jan 29 10:54:47.664299 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Jan 29 10:54:47.665742 systemd[1]: modprobe@drm.service: Deactivated successfully.
Jan 29 10:54:47.665892 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Jan 29 10:54:47.667218 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Jan 29 10:54:47.667368 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Jan 29 10:54:47.668841 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Jan 29 10:54:47.668979 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse.
Jan 29 10:54:47.670528 systemd[1]: modprobe@loop.service: Deactivated successfully.
Jan 29 10:54:47.670662 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Jan 29 10:54:47.672040 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Jan 29 10:54:47.673483 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Jan 29 10:54:47.676038 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems.
Jan 29 10:54:47.688710 systemd[1]: Reached target network-pre.target - Preparation for Network.
Jan 29 10:54:47.704841 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System...
Jan 29 10:54:47.707005 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System...
Jan 29 10:54:47.708159 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Jan 29 10:54:47.708208 systemd[1]: Reached target local-fs.target - Local File Systems.
Jan 29 10:54:47.710187 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink).
Jan 29 10:54:47.712416 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown...
Jan 29 10:54:47.714563 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache...
Jan 29 10:54:47.715681 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Jan 29 10:54:47.717938 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database...
Jan 29 10:54:47.719972 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage...
Jan 29 10:54:47.721186 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Jan 29 10:54:47.722178 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed...
Jan 29 10:54:47.723282 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Jan 29 10:54:47.726922 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Jan 29 10:54:47.728409 systemd-journald[1117]: Time spent on flushing to /var/log/journal/1749a29b916b424a80bf28af9d818a7a is 20.112ms for 857 entries.
Jan 29 10:54:47.728409 systemd-journald[1117]: System Journal (/var/log/journal/1749a29b916b424a80bf28af9d818a7a) is 8.0M, max 195.6M, 187.6M free.
Jan 29 10:54:47.752357 systemd-journald[1117]: Received client request to flush runtime journal.
Jan 29 10:54:47.731894 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/...
Jan 29 10:54:47.738998 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Jan 29 10:54:47.741638 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Jan 29 10:54:47.743281 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System.
Jan 29 10:54:47.744824 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System.
Jan 29 10:54:47.746197 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown.
Jan 29 10:54:47.756853 kernel: loop0: detected capacity change from 0 to 116784
Jan 29 10:54:47.759001 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization...
Jan 29 10:54:47.763470 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage.
Jan 29 10:54:47.765086 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed.
Jan 29 10:54:47.770734 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
Jan 29 10:54:47.771641 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Jan 29 10:54:47.774114 systemd[1]: Reached target first-boot-complete.target - First Boot Complete.
Jan 29 10:54:47.782372 systemd-tmpfiles[1158]: ACLs are not supported, ignoring.
Jan 29 10:54:47.782393 systemd-tmpfiles[1158]: ACLs are not supported, ignoring.
Jan 29 10:54:47.783981 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk...
Jan 29 10:54:47.789082 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Jan 29 10:54:47.792629 udevadm[1166]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation-early.service, lvm2-activation.service not to pull it in.
Jan 29 10:54:47.795073 systemd[1]: Starting systemd-sysusers.service - Create System Users...
Jan 29 10:54:47.797743 kernel: loop1: detected capacity change from 0 to 194096
Jan 29 10:54:47.805799 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
Jan 29 10:54:47.806471 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk.
Jan 29 10:54:47.847966 kernel: loop2: detected capacity change from 0 to 113552
Jan 29 10:54:47.851131 systemd[1]: Finished systemd-sysusers.service - Create System Users.
Jan 29 10:54:47.859879 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Jan 29 10:54:47.873411 systemd-tmpfiles[1180]: ACLs are not supported, ignoring.
Jan 29 10:54:47.873444 systemd-tmpfiles[1180]: ACLs are not supported, ignoring.
Jan 29 10:54:47.879832 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Jan 29 10:54:47.883743 kernel: loop3: detected capacity change from 0 to 116784
Jan 29 10:54:47.889736 kernel: loop4: detected capacity change from 0 to 194096
Jan 29 10:54:47.895740 kernel: loop5: detected capacity change from 0 to 113552
Jan 29 10:54:47.899454 (sd-merge)[1185]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes'.
Jan 29 10:54:47.899830 (sd-merge)[1185]: Merged extensions into '/usr'.
Jan 29 10:54:47.903517 systemd[1]: Reloading requested from client PID 1157 ('systemd-sysext') (unit systemd-sysext.service)...
Jan 29 10:54:47.903616 systemd[1]: Reloading...
Jan 29 10:54:47.965938 zram_generator::config[1212]: No configuration found.
Jan 29 10:54:48.016607 ldconfig[1152]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
Jan 29 10:54:48.053824 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Jan 29 10:54:48.088330 systemd[1]: Reloading finished in 184 ms.
Jan 29 10:54:48.121901 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache.
Jan 29 10:54:48.124747 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/.
Jan 29 10:54:48.138920 systemd[1]: Starting ensure-sysext.service...
Jan 29 10:54:48.140972 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Jan 29 10:54:48.163703 systemd-tmpfiles[1246]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
Jan 29 10:54:48.164255 systemd-tmpfiles[1246]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring.
Jan 29 10:54:48.165023 systemd-tmpfiles[1246]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
Jan 29 10:54:48.165324 systemd-tmpfiles[1246]: ACLs are not supported, ignoring.
Jan 29 10:54:48.165441 systemd-tmpfiles[1246]: ACLs are not supported, ignoring.
Jan 29 10:54:48.167860 systemd-tmpfiles[1246]: Detected autofs mount point /boot during canonicalization of boot.
Jan 29 10:54:48.167958 systemd-tmpfiles[1246]: Skipping /boot
Jan 29 10:54:48.168185 systemd[1]: Reloading requested from client PID 1245 ('systemctl') (unit ensure-sysext.service)...
Jan 29 10:54:48.168200 systemd[1]: Reloading...
Jan 29 10:54:48.176306 systemd-tmpfiles[1246]: Detected autofs mount point /boot during canonicalization of boot.
Jan 29 10:54:48.176415 systemd-tmpfiles[1246]: Skipping /boot
Jan 29 10:54:48.212743 zram_generator::config[1276]: No configuration found.
Jan 29 10:54:48.280139 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Jan 29 10:54:48.314856 systemd[1]: Reloading finished in 146 ms.
Jan 29 10:54:48.331002 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database.
Jan 29 10:54:48.344143 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Jan 29 10:54:48.351694 systemd[1]: Starting audit-rules.service - Load Audit Rules...
Jan 29 10:54:48.353988 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs...
Jan 29 10:54:48.356393 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog...
Jan 29 10:54:48.361848 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Jan 29 10:54:48.375949 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Jan 29 10:54:48.378592 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP...
Jan 29 10:54:48.382439 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog.
Jan 29 10:54:48.391528 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Jan 29 10:54:48.400221 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Jan 29 10:54:48.402247 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Jan 29 10:54:48.403904 systemd-udevd[1317]: Using default interface naming scheme 'v255'.
Jan 29 10:54:48.404460 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Jan 29 10:54:48.405871 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Jan 29 10:54:48.407588 systemd[1]: Starting systemd-update-done.service - Update is Completed...
Jan 29 10:54:48.411662 systemd[1]: Starting systemd-userdbd.service - User Database Manager...
Jan 29 10:54:48.424278 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs.
Jan 29 10:54:48.426847 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP.
Jan 29 10:54:48.428743 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Jan 29 10:54:48.428960 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Jan 29 10:54:48.430750 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Jan 29 10:54:48.430947 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Jan 29 10:54:48.433029 systemd[1]: modprobe@loop.service: Deactivated successfully.
Jan 29 10:54:48.433142 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Jan 29 10:54:48.434779 systemd[1]: Finished systemd-update-done.service - Update is Completed.
Jan 29 10:54:48.436297 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Jan 29 10:54:48.446269 augenrules[1349]: No rules
Jan 29 10:54:48.448126 systemd[1]: audit-rules.service: Deactivated successfully.
Jan 29 10:54:48.448486 systemd[1]: Finished audit-rules.service - Load Audit Rules.
Jan 29 10:54:48.452808 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Jan 29 10:54:48.464891 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Jan 29 10:54:48.467669 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Jan 29 10:54:48.471920 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Jan 29 10:54:48.474056 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Jan 29 10:54:48.475765 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Jan 29 10:54:48.478889 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Jan 29 10:54:48.480960 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Jan 29 10:54:48.481237 systemd[1]: Started systemd-userdbd.service - User Database Manager.
Jan 29 10:54:48.485303 systemd[1]: Finished ensure-sysext.service.
Jan 29 10:54:48.486370 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Jan 29 10:54:48.486496 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Jan 29 10:54:48.487919 systemd[1]: modprobe@drm.service: Deactivated successfully.
Jan 29 10:54:48.488043 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Jan 29 10:54:48.491206 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Jan 29 10:54:48.491334 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Jan 29 10:54:48.501464 systemd[1]: Condition check resulted in dev-ttyAMA0.device - /dev/ttyAMA0 being skipped.
Jan 29 10:54:48.501713 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Jan 29 10:54:48.503573 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization...
Jan 29 10:54:48.506113 systemd[1]: modprobe@loop.service: Deactivated successfully.
Jan 29 10:54:48.506279 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Jan 29 10:54:48.508694 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Jan 29 10:54:48.532744 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 39 scanned by (udev-worker) (1348)
Jan 29 10:54:48.564108 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Jan 29 10:54:48.565632 systemd-resolved[1312]: Positive Trust Anchors:
Jan 29 10:54:48.565650 systemd-resolved[1312]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Jan 29 10:54:48.565682 systemd-resolved[1312]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Jan 29 10:54:48.576854 systemd-resolved[1312]: Defaulting to hostname 'linux'.
Jan 29 10:54:48.579869 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM...
Jan 29 10:54:48.581190 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization.
Jan 29 10:54:48.582683 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Jan 29 10:54:48.583841 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Jan 29 10:54:48.585036 systemd[1]: Reached target time-set.target - System Time Set.
Jan 29 10:54:48.589919 systemd-networkd[1380]: lo: Link UP
Jan 29 10:54:48.589925 systemd-networkd[1380]: lo: Gained carrier
Jan 29 10:54:48.591513 systemd-networkd[1380]: Enumeration completed
Jan 29 10:54:48.591707 systemd[1]: Started systemd-networkd.service - Network Configuration.
Jan 29 10:54:48.593193 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM.
Jan 29 10:54:48.594502 systemd-networkd[1380]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jan 29 10:54:48.594564 systemd-networkd[1380]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Jan 29 10:54:48.595226 systemd[1]: Reached target network.target - Network.
Jan 29 10:54:48.597664 systemd-networkd[1380]: eth0: Link UP
Jan 29 10:54:48.597675 systemd-networkd[1380]: eth0: Gained carrier
Jan 29 10:54:48.597689 systemd-networkd[1380]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jan 29 10:54:48.607594 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured...
Jan 29 10:54:48.610378 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jan 29 10:54:48.616805 systemd-networkd[1380]: eth0: DHCPv4 address 10.0.0.63/16, gateway 10.0.0.1 acquired from 10.0.0.1
Jan 29 10:54:48.617472 systemd-timesyncd[1390]: Network configuration changed, trying to establish connection.
Jan 29 10:54:48.620101 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization.
Jan 29 10:54:48.620196 systemd-timesyncd[1390]: Contacted time server 10.0.0.1:123 (10.0.0.1).
Jan 29 10:54:48.620242 systemd-timesyncd[1390]: Initial clock synchronization to Wed 2025-01-29 10:54:48.414194 UTC.
Jan 29 10:54:48.623071 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes...
Jan 29 10:54:48.640526 lvm[1404]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Jan 29 10:54:48.651760 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Jan 29 10:54:48.670580 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes.
Jan 29 10:54:48.672376 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Jan 29 10:54:48.673601 systemd[1]: Reached target sysinit.target - System Initialization.
Jan 29 10:54:48.674828 systemd[1]: Started motdgen.path - Watch for update engine configuration changes.
Jan 29 10:54:48.676081 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data.
Jan 29 10:54:48.677502 systemd[1]: Started logrotate.timer - Daily rotation of log files.
Jan 29 10:54:48.678703 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information..
Jan 29 10:54:48.680076 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories.
Jan 29 10:54:48.681318 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate).
Jan 29 10:54:48.681354 systemd[1]: Reached target paths.target - Path Units.
Jan 29 10:54:48.682293 systemd[1]: Reached target timers.target - Timer Units.
Jan 29 10:54:48.684327 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket.
Jan 29 10:54:48.686871 systemd[1]: Starting docker.socket - Docker Socket for the API...
Jan 29 10:54:48.696586 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket.
Jan 29 10:54:48.698879 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes...
Jan 29 10:54:48.700526 systemd[1]: Listening on docker.socket - Docker Socket for the API.
Jan 29 10:54:48.701760 systemd[1]: Reached target sockets.target - Socket Units.
Jan 29 10:54:48.702771 systemd[1]: Reached target basic.target - Basic System.
Jan 29 10:54:48.703790 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met.
Jan 29 10:54:48.703826 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met.
Jan 29 10:54:48.704694 systemd[1]: Starting containerd.service - containerd container runtime...
Jan 29 10:54:48.706705 systemd[1]: Starting dbus.service - D-Bus System Message Bus...
Jan 29 10:54:48.709835 lvm[1412]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Jan 29 10:54:48.710847 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit...
Jan 29 10:54:48.716656 systemd[1]: Starting extend-filesystems.service - Extend Filesystems...
Jan 29 10:54:48.716927 jq[1415]: false
Jan 29 10:54:48.718066 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment).
Jan 29 10:54:48.719052 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd...
Jan 29 10:54:48.722863 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin...
Jan 29 10:54:48.727373 dbus-daemon[1414]: [system] SELinux support is enabled
Jan 29 10:54:48.727952 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline...
Jan 29 10:54:48.730131 extend-filesystems[1416]: Found loop3
Jan 29 10:54:48.730113 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys...
Jan 29 10:54:48.734897 extend-filesystems[1416]: Found loop4
Jan 29 10:54:48.734897 extend-filesystems[1416]: Found loop5
Jan 29 10:54:48.734897 extend-filesystems[1416]: Found vda
Jan 29 10:54:48.734897 extend-filesystems[1416]: Found vda1
Jan 29 10:54:48.734897 extend-filesystems[1416]: Found vda2
Jan 29 10:54:48.734897 extend-filesystems[1416]: Found vda3
Jan 29 10:54:48.734897 extend-filesystems[1416]: Found usr
Jan 29 10:54:48.734897 extend-filesystems[1416]: Found vda4
Jan 29 10:54:48.734897 extend-filesystems[1416]: Found vda6
Jan 29 10:54:48.734897 extend-filesystems[1416]: Found vda7
Jan 29 10:54:48.734897 extend-filesystems[1416]: Found vda9
Jan 29 10:54:48.734897 extend-filesystems[1416]: Checking size of /dev/vda9
Jan 29 10:54:48.734260 systemd[1]: Starting systemd-logind.service - User Login Management...
Jan 29 10:54:48.752521 extend-filesystems[1416]: Resized partition /dev/vda9
Jan 29 10:54:48.741010 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0).
Jan 29 10:54:48.741424 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details.
Jan 29 10:54:48.742396 systemd[1]: Starting update-engine.service - Update Engine...
Jan 29 10:54:48.745983 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition...
Jan 29 10:54:48.749871 systemd[1]: Started dbus.service - D-Bus System Message Bus.
Jan 29 10:54:48.756859 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 39 scanned by (udev-worker) (1341)
Jan 29 10:54:48.758362 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes.
Jan 29 10:54:48.762022 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'.
Jan 29 10:54:48.762169 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped.
Jan 29 10:54:48.762400 systemd[1]: motdgen.service: Deactivated successfully.
Jan 29 10:54:48.762541 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd.
Jan 29 10:54:48.765149 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully.
Jan 29 10:54:48.771362 jq[1432]: true
Jan 29 10:54:48.771453 extend-filesystems[1436]: resize2fs 1.47.1 (20-May-2024)
Jan 29 10:54:48.765294 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline.
Jan 29 10:54:48.776589 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml).
Jan 29 10:54:48.776632 systemd[1]: Reached target system-config.target - Load system-provided cloud configs.
Jan 29 10:54:48.780218 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 1864699 blocks
Jan 29 10:54:48.780286 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url).
Jan 29 10:54:48.780304 systemd[1]: Reached target user-config.target - Load user-provided cloud configs.
Jan 29 10:54:48.787065 jq[1441]: true
Jan 29 10:54:48.794192 (ntainerd)[1449]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR
Jan 29 10:54:48.797047 update_engine[1430]: I20250129 10:54:48.796892 1430 main.cc:92] Flatcar Update Engine starting
Jan 29 10:54:48.800526 systemd[1]: Started update-engine.service - Update Engine.
Jan 29 10:54:48.800625 update_engine[1430]: I20250129 10:54:48.800546 1430 update_check_scheduler.cc:74] Next update check in 9m55s
Jan 29 10:54:48.804193 tar[1439]: linux-arm64/helm
Jan 29 10:54:48.804955 systemd[1]: Started locksmithd.service - Cluster reboot manager.
Jan 29 10:54:48.813730 kernel: EXT4-fs (vda9): resized filesystem to 1864699
Jan 29 10:54:48.826032 extend-filesystems[1436]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required
Jan 29 10:54:48.826032 extend-filesystems[1436]: old_desc_blocks = 1, new_desc_blocks = 1
Jan 29 10:54:48.826032 extend-filesystems[1436]: The filesystem on /dev/vda9 is now 1864699 (4k) blocks long.
Jan 29 10:54:48.836838 extend-filesystems[1416]: Resized filesystem in /dev/vda9
Jan 29 10:54:48.826778 systemd-logind[1428]: Watching system buttons on /dev/input/event0 (Power Button)
Jan 29 10:54:48.826950 systemd-logind[1428]: New seat seat0.
Jan 29 10:54:48.828431 systemd[1]: extend-filesystems.service: Deactivated successfully.
Jan 29 10:54:48.832799 systemd[1]: Finished extend-filesystems.service - Extend Filesystems.
Jan 29 10:54:48.834968 systemd[1]: Started systemd-logind.service - User Login Management.
Jan 29 10:54:48.852725 bash[1468]: Updated "/home/core/.ssh/authorized_keys"
Jan 29 10:54:48.854151 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition.
Jan 29 10:54:48.856399 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met.
Jan 29 10:54:48.864891 locksmithd[1454]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot"
Jan 29 10:54:48.981549 containerd[1449]: time="2025-01-29T10:54:48.981468560Z" level=info msg="starting containerd" revision=9b2ad7760328148397346d10c7b2004271249db4 version=v1.7.23
Jan 29 10:54:49.006308 containerd[1449]: time="2025-01-29T10:54:49.006271166Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
Jan 29 10:54:49.007556 containerd[1449]: time="2025-01-29T10:54:49.007523402Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.74-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1
Jan 29 10:54:49.007582 containerd[1449]: time="2025-01-29T10:54:49.007555981Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
Jan 29 10:54:49.007582 containerd[1449]: time="2025-01-29T10:54:49.007570712Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
Jan 29 10:54:49.007763 containerd[1449]: time="2025-01-29T10:54:49.007741480Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
Jan 29 10:54:49.007794 containerd[1449]: time="2025-01-29T10:54:49.007767513Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
Jan 29 10:54:49.007840 containerd[1449]: time="2025-01-29T10:54:49.007823474Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
Jan 29 10:54:49.007861 containerd[1449]: time="2025-01-29T10:54:49.007838400Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
Jan 29 10:54:49.007995 containerd[1449]: time="2025-01-29T10:54:49.007977797Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Jan 29 10:54:49.008024 containerd[1449]: time="2025-01-29T10:54:49.007996620Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
Jan 29 10:54:49.008024 containerd[1449]: time="2025-01-29T10:54:49.008012247Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1
Jan 29 10:54:49.008024 containerd[1449]: time="2025-01-29T10:54:49.008020781Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
Jan 29 10:54:49.008099 containerd[1449]: time="2025-01-29T10:54:49.008084342Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
Jan 29 10:54:49.008284 containerd[1449]: time="2025-01-29T10:54:49.008265749Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
Jan 29 10:54:49.008378 containerd[1449]: time="2025-01-29T10:54:49.008362552Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Jan 29 10:54:49.008378 containerd[1449]: time="2025-01-29T10:54:49.008377166Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
Jan 29 10:54:49.008468 containerd[1449]: time="2025-01-29T10:54:49.008452534Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
Jan 29 10:54:49.008510 containerd[1449]: time="2025-01-29T10:54:49.008497273Z" level=info msg="metadata content store policy set" policy=shared
Jan 29 10:54:49.011766 containerd[1449]: time="2025-01-29T10:54:49.011739299Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
Jan 29 10:54:49.011804 containerd[1449]: time="2025-01-29T10:54:49.011781855Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
Jan 29 10:54:49.011804 containerd[1449]: time="2025-01-29T10:54:49.011796118Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
Jan 29 10:54:49.011849 containerd[1449]: time="2025-01-29T10:54:49.011810069Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
Jan 29 10:54:49.011849 containerd[1449]: time="2025-01-29T10:54:49.011823553Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
Jan 29 10:54:49.011959 containerd[1449]: time="2025-01-29T10:54:49.011939568Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
Jan 29 10:54:49.013359 containerd[1449]: time="2025-01-29T10:54:49.013324108Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
Jan 29 10:54:49.013513 containerd[1449]: time="2025-01-29T10:54:49.013492772Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
Jan 29 10:54:49.013536 containerd[1449]: time="2025-01-29T10:54:49.013516739Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
Jan 29 10:54:49.013560 containerd[1449]: time="2025-01-29T10:54:49.013540784Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
Jan 29 10:54:49.013581 containerd[1449]: time="2025-01-29T10:54:49.013555982Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
Jan 29 10:54:49.013658 containerd[1449]: time="2025-01-29T10:54:49.013642263Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
Jan 29 10:54:49.013687 containerd[1449]: time="2025-01-29T10:54:49.013661164Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
Jan 29 10:54:49.013687 containerd[1449]: time="2025-01-29T10:54:49.013677648Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
Jan 29 10:54:49.013735 containerd[1449]: time="2025-01-29T10:54:49.013694951Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
Jan 29 10:54:49.013753 containerd[1449]: time="2025-01-29T10:54:49.013733843Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
Jan 29 10:54:49.013800 containerd[1449]: time="2025-01-29T10:54:49.013751419Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
Jan 29 10:54:49.013800 containerd[1449]: time="2025-01-29T10:54:49.013765448Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
Jan 29 10:54:49.013800 containerd[1449]: time="2025-01-29T10:54:49.013788285Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
Jan 29 10:54:49.013846 containerd[1449]: time="2025-01-29T10:54:49.013806289Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
Jan 29 10:54:49.013846 containerd[1449]: time="2025-01-29T10:54:49.013821839Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
Jan 29 10:54:49.013846 containerd[1449]: time="2025-01-29T10:54:49.013837310Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
Jan 29 10:54:49.013897 containerd[1449]: time="2025-01-29T10:54:49.013852937Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
Jan 29 10:54:49.013897 containerd[1449]: time="2025-01-29T10:54:49.013868993Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
Jan 29 10:54:49.013897 containerd[1449]: time="2025-01-29T10:54:49.013880840Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
Jan 29 10:54:49.013943 containerd[1449]: time="2025-01-29T10:54:49.013895687Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
Jan 29 10:54:49.013943 containerd[1449]: time="2025-01-29T10:54:49.013911665Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
Jan 29 10:54:49.013943 containerd[1449]: time="2025-01-29T10:54:49.013929670Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
Jan 29 10:54:49.013991 containerd[1449]: time="2025-01-29T10:54:49.013945063Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
Jan 29 10:54:49.013991 containerd[1449]: time="2025-01-29T10:54:49.013963457Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
Jan 29 10:54:49.013991 containerd[1449]: time="2025-01-29T10:54:49.013978266Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
Jan 29 10:54:49.014037 containerd[1449]: time="2025-01-29T10:54:49.013995374Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
Jan 29 10:54:49.014037 containerd[1449]: time="2025-01-29T10:54:49.014019574Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
Jan 29 10:54:49.014070 containerd[1449]: time="2025-01-29T10:54:49.014036137Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
Jan 29 10:54:49.014070 containerd[1449]: time="2025-01-29T10:54:49.014050634Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
Jan 29 10:54:49.014727 containerd[1449]: time="2025-01-29T10:54:49.014227131Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
Jan 29 10:54:49.014727 containerd[1449]: time="2025-01-29T10:54:49.014252617Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1
Jan 29 10:54:49.014727 containerd[1449]: time="2025-01-29T10:54:49.014266647Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
Jan 29 10:54:49.014727 containerd[1449]: time="2025-01-29T10:54:49.014281611Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1
Jan 29 10:54:49.014727 containerd[1449]: time="2025-01-29T10:54:49.014291510Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
Jan 29 10:54:49.014727 containerd[1449]: time="2025-01-29T10:54:49.014306280Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
Jan 29 10:54:49.014727 containerd[1449]: time="2025-01-29T10:54:49.014319569Z" level=info msg="NRI interface is disabled by configuration."
Jan 29 10:54:49.014727 containerd[1449]: time="2025-01-29T10:54:49.014332195Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1
Jan 29 10:54:49.014868 containerd[1449]: time="2025-01-29T10:54:49.014668821Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}"
Jan 29 10:54:49.014868 containerd[1449]: time="2025-01-29T10:54:49.014738228Z" level=info msg="Connect containerd service"
Jan 29 10:54:49.014868 containerd[1449]: time="2025-01-29T10:54:49.014773925Z" level=info msg="using legacy CRI server"
Jan 29 10:54:49.014868 containerd[1449]: time="2025-01-29T10:54:49.014785382Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this"
Jan 29 10:54:49.015035 containerd[1449]: time="2025-01-29T10:54:49.015014723Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\""
Jan 29 10:54:49.015655 containerd[1449]: time="2025-01-29T10:54:49.015629325Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Jan 29 10:54:49.016184 containerd[1449]: time="2025-01-29T10:54:49.016160804Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc
Jan 29 10:54:49.016224 containerd[1449]: time="2025-01-29T10:54:49.016210179Z" level=info msg=serving... address=/run/containerd/containerd.sock
Jan 29 10:54:49.016257 containerd[1449]: time="2025-01-29T10:54:49.016213570Z" level=info msg="Start subscribing containerd event"
Jan 29 10:54:49.016275 containerd[1449]: time="2025-01-29T10:54:49.016265673Z" level=info msg="Start recovering state"
Jan 29 10:54:49.016947 containerd[1449]: time="2025-01-29T10:54:49.016324323Z" level=info msg="Start event monitor"
Jan 29 10:54:49.016947 containerd[1449]: time="2025-01-29T10:54:49.016338548Z" level=info msg="Start snapshots syncer"
Jan 29 10:54:49.016947 containerd[1449]: time="2025-01-29T10:54:49.016354097Z" level=info msg="Start cni network conf syncer for default"
Jan 29 10:54:49.016947 containerd[1449]: time="2025-01-29T10:54:49.016362514Z" level=info msg="Start streaming server"
Jan 29 10:54:49.016947 containerd[1449]: time="2025-01-29T10:54:49.016477711Z" level=info msg="containerd successfully booted in 0.035786s"
Jan 29 10:54:49.016567 systemd[1]: Started containerd.service - containerd container runtime.
Jan 29 10:54:49.108416 sshd_keygen[1437]: ssh-keygen: generating new host keys: RSA ECDSA ED25519
Jan 29 10:54:49.125821 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys.
Jan 29 10:54:49.134937 systemd[1]: Starting issuegen.service - Generate /run/issue...
Jan 29 10:54:49.139435 systemd[1]: issuegen.service: Deactivated successfully.
Jan 29 10:54:49.139613 systemd[1]: Finished issuegen.service - Generate /run/issue.
Jan 29 10:54:49.144497 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions...
Jan 29 10:54:49.150764 tar[1439]: linux-arm64/LICENSE
Jan 29 10:54:49.150838 tar[1439]: linux-arm64/README.md
Jan 29 10:54:49.157811 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions.
Jan 29 10:54:49.159307 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin.
Jan 29 10:54:49.162332 systemd[1]: Started getty@tty1.service - Getty on tty1.
Jan 29 10:54:49.165349 systemd[1]: Started serial-getty@ttyAMA0.service - Serial Getty on ttyAMA0.
Jan 29 10:54:49.166599 systemd[1]: Reached target getty.target - Login Prompts.
Jan 29 10:54:49.844935 systemd-networkd[1380]: eth0: Gained IPv6LL
Jan 29 10:54:49.847115 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured.
Jan 29 10:54:49.848899 systemd[1]: Reached target network-online.target - Network is Online.
Jan 29 10:54:49.871076 systemd[1]: Starting coreos-metadata.service - QEMU metadata agent...
Jan 29 10:54:49.873319 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jan 29 10:54:49.875369 systemd[1]: Starting nvidia.service - NVIDIA Configure Service...
Jan 29 10:54:49.889004 systemd[1]: coreos-metadata.service: Deactivated successfully.
Jan 29 10:54:49.889226 systemd[1]: Finished coreos-metadata.service - QEMU metadata agent.
Jan 29 10:54:49.891706 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met.
Jan 29 10:54:49.898036 systemd[1]: Finished nvidia.service - NVIDIA Configure Service.
Jan 29 10:54:50.333093 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Jan 29 10:54:50.334602 systemd[1]: Reached target multi-user.target - Multi-User System.
Jan 29 10:54:50.338036 (kubelet)[1526]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Jan 29 10:54:50.340844 systemd[1]: Startup finished in 576ms (kernel) + 8.345s (initrd) + 3.316s (userspace) = 12.239s.
Jan 29 10:54:50.344033 agetty[1502]: failed to open credentials directory
Jan 29 10:54:50.344594 agetty[1503]: failed to open credentials directory
Jan 29 10:54:50.814642 kubelet[1526]: E0129 10:54:50.814501 1526 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Jan 29 10:54:50.816786 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Jan 29 10:54:50.816922 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Jan 29 10:54:51.289541 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd.
Jan 29 10:54:51.290572 systemd[1]: Started sshd@0-10.0.0.63:22-10.0.0.1:40046.service - OpenSSH per-connection server daemon (10.0.0.1:40046).
Jan 29 10:54:51.354452 sshd[1540]: Accepted publickey for core from 10.0.0.1 port 40046 ssh2: RSA SHA256:sI8AI+xrXTR4mEEiShqASd95E8XwBKuT8obfVDWOoDw
Jan 29 10:54:51.356596 sshd-session[1540]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 29 10:54:51.364747 systemd-logind[1428]: New session 1 of user core.
Jan 29 10:54:51.365745 systemd[1]: Created slice user-500.slice - User Slice of UID 500.
Jan 29 10:54:51.371944 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500...
Jan 29 10:54:51.382283 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500.
Jan 29 10:54:51.386393 systemd[1]: Starting user@500.service - User Manager for UID 500...
Jan 29 10:54:51.394242 (systemd)[1544]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0)
Jan 29 10:54:51.462392 systemd[1544]: Queued start job for default target default.target.
Jan 29 10:54:51.469577 systemd[1544]: Created slice app.slice - User Application Slice.
Jan 29 10:54:51.469618 systemd[1544]: Reached target paths.target - Paths.
Jan 29 10:54:51.469630 systemd[1544]: Reached target timers.target - Timers.
Jan 29 10:54:51.470846 systemd[1544]: Starting dbus.socket - D-Bus User Message Bus Socket...
Jan 29 10:54:51.481670 systemd[1544]: Listening on dbus.socket - D-Bus User Message Bus Socket.
Jan 29 10:54:51.481768 systemd[1544]: Reached target sockets.target - Sockets.
Jan 29 10:54:51.481780 systemd[1544]: Reached target basic.target - Basic System.
Jan 29 10:54:51.481814 systemd[1544]: Reached target default.target - Main User Target.
Jan 29 10:54:51.481839 systemd[1544]: Startup finished in 82ms.
Jan 29 10:54:51.482066 systemd[1]: Started user@500.service - User Manager for UID 500.
Jan 29 10:54:51.483232 systemd[1]: Started session-1.scope - Session 1 of User core.
Jan 29 10:54:51.550181 systemd[1]: Started sshd@1-10.0.0.63:22-10.0.0.1:40054.service - OpenSSH per-connection server daemon (10.0.0.1:40054).
Jan 29 10:54:51.593587 sshd[1555]: Accepted publickey for core from 10.0.0.1 port 40054 ssh2: RSA SHA256:sI8AI+xrXTR4mEEiShqASd95E8XwBKuT8obfVDWOoDw
Jan 29 10:54:51.594877 sshd-session[1555]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 29 10:54:51.598228 systemd-logind[1428]: New session 2 of user core.
Jan 29 10:54:51.605858 systemd[1]: Started session-2.scope - Session 2 of User core.
Jan 29 10:54:51.655964 sshd[1557]: Connection closed by 10.0.0.1 port 40054
Jan 29 10:54:51.656407 sshd-session[1555]: pam_unix(sshd:session): session closed for user core
Jan 29 10:54:51.673934 systemd[1]: sshd@1-10.0.0.63:22-10.0.0.1:40054.service: Deactivated successfully.
Jan 29 10:54:51.675257 systemd[1]: session-2.scope: Deactivated successfully.
Jan 29 10:54:51.676373 systemd-logind[1428]: Session 2 logged out. Waiting for processes to exit.
Jan 29 10:54:51.677349 systemd[1]: Started sshd@2-10.0.0.63:22-10.0.0.1:40070.service - OpenSSH per-connection server daemon (10.0.0.1:40070).
Jan 29 10:54:51.678071 systemd-logind[1428]: Removed session 2.
Jan 29 10:54:51.719233 sshd[1562]: Accepted publickey for core from 10.0.0.1 port 40070 ssh2: RSA SHA256:sI8AI+xrXTR4mEEiShqASd95E8XwBKuT8obfVDWOoDw
Jan 29 10:54:51.720262 sshd-session[1562]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 29 10:54:51.723833 systemd-logind[1428]: New session 3 of user core.
Jan 29 10:54:51.729912 systemd[1]: Started session-3.scope - Session 3 of User core.
Jan 29 10:54:51.776610 sshd[1564]: Connection closed by 10.0.0.1 port 40070
Jan 29 10:54:51.776985 sshd-session[1562]: pam_unix(sshd:session): session closed for user core
Jan 29 10:54:51.796315 systemd[1]: sshd@2-10.0.0.63:22-10.0.0.1:40070.service: Deactivated successfully.
Jan 29 10:54:51.798239 systemd[1]: session-3.scope: Deactivated successfully.
Jan 29 10:54:51.800988 systemd-logind[1428]: Session 3 logged out. Waiting for processes to exit.
Jan 29 10:54:51.802070 systemd[1]: Started sshd@3-10.0.0.63:22-10.0.0.1:40080.service - OpenSSH per-connection server daemon (10.0.0.1:40080).
Jan 29 10:54:51.802826 systemd-logind[1428]: Removed session 3.
Jan 29 10:54:51.845009 sshd[1569]: Accepted publickey for core from 10.0.0.1 port 40080 ssh2: RSA SHA256:sI8AI+xrXTR4mEEiShqASd95E8XwBKuT8obfVDWOoDw
Jan 29 10:54:51.846140 sshd-session[1569]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 29 10:54:51.849688 systemd-logind[1428]: New session 4 of user core.
Jan 29 10:54:51.859854 systemd[1]: Started session-4.scope - Session 4 of User core.
Jan 29 10:54:51.909732 sshd[1571]: Connection closed by 10.0.0.1 port 40080
Jan 29 10:54:51.910117 sshd-session[1569]: pam_unix(sshd:session): session closed for user core
Jan 29 10:54:51.922925 systemd[1]: sshd@3-10.0.0.63:22-10.0.0.1:40080.service: Deactivated successfully.
Jan 29 10:54:51.925998 systemd[1]: session-4.scope: Deactivated successfully. Jan 29 10:54:51.928004 systemd-logind[1428]: Session 4 logged out. Waiting for processes to exit. Jan 29 10:54:51.928400 systemd[1]: Started sshd@4-10.0.0.63:22-10.0.0.1:40092.service - OpenSSH per-connection server daemon (10.0.0.1:40092). Jan 29 10:54:51.929509 systemd-logind[1428]: Removed session 4. Jan 29 10:54:51.970271 sshd[1576]: Accepted publickey for core from 10.0.0.1 port 40092 ssh2: RSA SHA256:sI8AI+xrXTR4mEEiShqASd95E8XwBKuT8obfVDWOoDw Jan 29 10:54:51.971363 sshd-session[1576]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 10:54:51.974712 systemd-logind[1428]: New session 5 of user core. Jan 29 10:54:51.980852 systemd[1]: Started session-5.scope - Session 5 of User core. Jan 29 10:54:52.043952 sudo[1579]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Jan 29 10:54:52.044237 sudo[1579]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 29 10:54:52.362930 systemd[1]: Starting docker.service - Docker Application Container Engine... Jan 29 10:54:52.363057 (dockerd)[1600]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Jan 29 10:54:52.594967 dockerd[1600]: time="2025-01-29T10:54:52.594773956Z" level=info msg="Starting up" Jan 29 10:54:52.860973 dockerd[1600]: time="2025-01-29T10:54:52.860710622Z" level=info msg="Loading containers: start." Jan 29 10:54:52.995742 kernel: Initializing XFRM netlink socket Jan 29 10:54:53.065544 systemd-networkd[1380]: docker0: Link UP Jan 29 10:54:53.102155 dockerd[1600]: time="2025-01-29T10:54:53.102056762Z" level=info msg="Loading containers: done." Jan 29 10:54:53.113527 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck3775437753-merged.mount: Deactivated successfully. 
Jan 29 10:54:53.115359 dockerd[1600]: time="2025-01-29T10:54:53.115306545Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Jan 29 10:54:53.115445 dockerd[1600]: time="2025-01-29T10:54:53.115411694Z" level=info msg="Docker daemon" commit=41ca978a0a5400cc24b274137efa9f25517fcc0b containerd-snapshotter=false storage-driver=overlay2 version=27.3.1 Jan 29 10:54:53.115655 dockerd[1600]: time="2025-01-29T10:54:53.115636175Z" level=info msg="Daemon has completed initialization" Jan 29 10:54:53.142769 dockerd[1600]: time="2025-01-29T10:54:53.142710929Z" level=info msg="API listen on /run/docker.sock" Jan 29 10:54:53.142946 systemd[1]: Started docker.service - Docker Application Container Engine. Jan 29 10:54:53.964998 containerd[1449]: time="2025-01-29T10:54:53.964902871Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.30.9\"" Jan 29 10:54:54.707347 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount496110504.mount: Deactivated successfully. 
Jan 29 10:54:56.464794 containerd[1449]: time="2025-01-29T10:54:56.464710546Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.30.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 29 10:54:56.478039 containerd[1449]: time="2025-01-29T10:54:56.477945523Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.30.9: active requests=0, bytes read=29864937"
Jan 29 10:54:56.491679 containerd[1449]: time="2025-01-29T10:54:56.491622901Z" level=info msg="ImageCreate event name:\"sha256:5a490fe478de4f27039cf07d124901df2a58010e72f7afe3f65c70c05ada6715\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 29 10:54:56.555452 containerd[1449]: time="2025-01-29T10:54:56.555399630Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:540de8f810ac963b8ed93f7393a8746d68e7e8a2c79ea58ff409ac5b9ca6a9fc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 29 10:54:56.556536 containerd[1449]: time="2025-01-29T10:54:56.556505379Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.30.9\" with image id \"sha256:5a490fe478de4f27039cf07d124901df2a58010e72f7afe3f65c70c05ada6715\", repo tag \"registry.k8s.io/kube-apiserver:v1.30.9\", repo digest \"registry.k8s.io/kube-apiserver@sha256:540de8f810ac963b8ed93f7393a8746d68e7e8a2c79ea58ff409ac5b9ca6a9fc\", size \"29861735\" in 2.591557896s"
Jan 29 10:54:56.556588 containerd[1449]: time="2025-01-29T10:54:56.556545450Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.30.9\" returns image reference \"sha256:5a490fe478de4f27039cf07d124901df2a58010e72f7afe3f65c70c05ada6715\""
Jan 29 10:54:56.574846 containerd[1449]: time="2025-01-29T10:54:56.574813552Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.30.9\""
Jan 29 10:54:58.814115 containerd[1449]: time="2025-01-29T10:54:58.813974384Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.30.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 29 10:54:58.814897 containerd[1449]: time="2025-01-29T10:54:58.814673571Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.30.9: active requests=0, bytes read=26901563"
Jan 29 10:54:58.815610 containerd[1449]: time="2025-01-29T10:54:58.815429634Z" level=info msg="ImageCreate event name:\"sha256:cd43f1277f3b33fd1db15e7f98b093eb07e4d4530ff326356591daeb16369ca2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 29 10:54:58.818166 containerd[1449]: time="2025-01-29T10:54:58.818116598Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:6350693c04956b13db2519e01ca12a0bbe58466e9f12ef8617f1429da6081f43\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 29 10:54:58.819188 containerd[1449]: time="2025-01-29T10:54:58.819156408Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.30.9\" with image id \"sha256:cd43f1277f3b33fd1db15e7f98b093eb07e4d4530ff326356591daeb16369ca2\", repo tag \"registry.k8s.io/kube-controller-manager:v1.30.9\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:6350693c04956b13db2519e01ca12a0bbe58466e9f12ef8617f1429da6081f43\", size \"28305351\" in 2.24430835s"
Jan 29 10:54:58.819188 containerd[1449]: time="2025-01-29T10:54:58.819187724Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.30.9\" returns image reference \"sha256:cd43f1277f3b33fd1db15e7f98b093eb07e4d4530ff326356591daeb16369ca2\""
Jan 29 10:54:58.837999 containerd[1449]: time="2025-01-29T10:54:58.837959042Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.30.9\""
Jan 29 10:55:00.116673 containerd[1449]: time="2025-01-29T10:55:00.116621513Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.30.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 29 10:55:00.117291 containerd[1449]: time="2025-01-29T10:55:00.117232988Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.30.9: active requests=0, bytes read=16164340"
Jan 29 10:55:00.117894 containerd[1449]: time="2025-01-29T10:55:00.117863510Z" level=info msg="ImageCreate event name:\"sha256:4ebb50f72fd1ba66a57f91b338174ab72034493ff261ebb9bbfd717d882178ce\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 29 10:55:00.120612 containerd[1449]: time="2025-01-29T10:55:00.120575545Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:153efd6dc89e61a38ef273cf4c4cebd2bfee68082c2ee3d4fab5da94e4ae13d3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 29 10:55:00.121714 containerd[1449]: time="2025-01-29T10:55:00.121684376Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.30.9\" with image id \"sha256:4ebb50f72fd1ba66a57f91b338174ab72034493ff261ebb9bbfd717d882178ce\", repo tag \"registry.k8s.io/kube-scheduler:v1.30.9\", repo digest \"registry.k8s.io/kube-scheduler@sha256:153efd6dc89e61a38ef273cf4c4cebd2bfee68082c2ee3d4fab5da94e4ae13d3\", size \"17568146\" in 1.283688208s"
Jan 29 10:55:00.121774 containerd[1449]: time="2025-01-29T10:55:00.121733325Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.30.9\" returns image reference \"sha256:4ebb50f72fd1ba66a57f91b338174ab72034493ff261ebb9bbfd717d882178ce\""
Jan 29 10:55:00.139224 containerd[1449]: time="2025-01-29T10:55:00.139194132Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.30.9\""
Jan 29 10:55:01.067192 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1.
Jan 29 10:55:01.078916 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jan 29 10:55:01.167694 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Jan 29 10:55:01.171911 (kubelet)[1896]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Jan 29 10:55:01.224588 kubelet[1896]: E0129 10:55:01.224489 1896 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Jan 29 10:55:01.227831 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Jan 29 10:55:01.227969 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Jan 29 10:55:01.513576 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount471791271.mount: Deactivated successfully.
Jan 29 10:55:01.709132 containerd[1449]: time="2025-01-29T10:55:01.708966237Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.30.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 29 10:55:01.709939 containerd[1449]: time="2025-01-29T10:55:01.709739567Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.30.9: active requests=0, bytes read=25662714"
Jan 29 10:55:01.710609 containerd[1449]: time="2025-01-29T10:55:01.710579271Z" level=info msg="ImageCreate event name:\"sha256:d97113839930faa5ab88f70aff4bfb62f7381074a290dd5aadbec9b16b2567a2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 29 10:55:01.712641 containerd[1449]: time="2025-01-29T10:55:01.712580203Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:d78dc40d97ff862fd8ddb47f80a5ba3feec17bc73e58a60e963885e33faa0083\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 29 10:55:01.713243 containerd[1449]: time="2025-01-29T10:55:01.713164876Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.30.9\" with image id \"sha256:d97113839930faa5ab88f70aff4bfb62f7381074a290dd5aadbec9b16b2567a2\", repo tag \"registry.k8s.io/kube-proxy:v1.30.9\", repo digest \"registry.k8s.io/kube-proxy@sha256:d78dc40d97ff862fd8ddb47f80a5ba3feec17bc73e58a60e963885e33faa0083\", size \"25661731\" in 1.573939325s"
Jan 29 10:55:01.713243 containerd[1449]: time="2025-01-29T10:55:01.713197983Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.30.9\" returns image reference \"sha256:d97113839930faa5ab88f70aff4bfb62f7381074a290dd5aadbec9b16b2567a2\""
Jan 29 10:55:01.730944 containerd[1449]: time="2025-01-29T10:55:01.730917400Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\""
Jan 29 10:55:02.453938 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4081239802.mount: Deactivated successfully.
Jan 29 10:55:03.612344 containerd[1449]: time="2025-01-29T10:55:03.612276027Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 29 10:55:03.619886 containerd[1449]: time="2025-01-29T10:55:03.619537329Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.1: active requests=0, bytes read=16485383"
Jan 29 10:55:03.621945 containerd[1449]: time="2025-01-29T10:55:03.621906886Z" level=info msg="ImageCreate event name:\"sha256:2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 29 10:55:03.630122 containerd[1449]: time="2025-01-29T10:55:03.630048760Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 29 10:55:03.631852 containerd[1449]: time="2025-01-29T10:55:03.631801774Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.1\" with image id \"sha256:2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.1\", repo digest \"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\", size \"16482581\" in 1.900852968s"
Jan 29 10:55:03.631852 containerd[1449]: time="2025-01-29T10:55:03.631833009Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\" returns image reference \"sha256:2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93\""
Jan 29 10:55:03.657891 containerd[1449]: time="2025-01-29T10:55:03.657791441Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\""
Jan 29 10:55:04.222147 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1420767474.mount: Deactivated successfully.
Jan 29 10:55:04.228543 containerd[1449]: time="2025-01-29T10:55:04.228495065Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 29 10:55:04.229368 containerd[1449]: time="2025-01-29T10:55:04.229315605Z" level=info msg="stop pulling image registry.k8s.io/pause:3.9: active requests=0, bytes read=268823"
Jan 29 10:55:04.230643 containerd[1449]: time="2025-01-29T10:55:04.230596780Z" level=info msg="ImageCreate event name:\"sha256:829e9de338bd5fdd3f16f68f83a9fb288fbc8453e881e5d5cfd0f6f2ff72b43e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 29 10:55:04.232858 containerd[1449]: time="2025-01-29T10:55:04.232799064Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 29 10:55:04.235782 containerd[1449]: time="2025-01-29T10:55:04.235736108Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.9\" with image id \"sha256:829e9de338bd5fdd3f16f68f83a9fb288fbc8453e881e5d5cfd0f6f2ff72b43e\", repo tag \"registry.k8s.io/pause:3.9\", repo digest \"registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097\", size \"268051\" in 577.892382ms"
Jan 29 10:55:04.235782 containerd[1449]: time="2025-01-29T10:55:04.235782187Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\" returns image reference \"sha256:829e9de338bd5fdd3f16f68f83a9fb288fbc8453e881e5d5cfd0f6f2ff72b43e\""
Jan 29 10:55:04.257211 containerd[1449]: time="2025-01-29T10:55:04.257159003Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.12-0\""
Jan 29 10:55:04.849096 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3392104344.mount: Deactivated successfully.
Jan 29 10:55:07.807344 containerd[1449]: time="2025-01-29T10:55:07.807279512Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.12-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 29 10:55:07.807808 containerd[1449]: time="2025-01-29T10:55:07.807766417Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.12-0: active requests=0, bytes read=66191474"
Jan 29 10:55:07.808785 containerd[1449]: time="2025-01-29T10:55:07.808756468Z" level=info msg="ImageCreate event name:\"sha256:014faa467e29798aeef733fe6d1a3b5e382688217b053ad23410e6cccd5d22fd\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 29 10:55:07.811647 containerd[1449]: time="2025-01-29T10:55:07.811613245Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:44a8e24dcbba3470ee1fee21d5e88d128c936e9b55d4bc51fbef8086f8ed123b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 29 10:55:07.812860 containerd[1449]: time="2025-01-29T10:55:07.812831604Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.12-0\" with image id \"sha256:014faa467e29798aeef733fe6d1a3b5e382688217b053ad23410e6cccd5d22fd\", repo tag \"registry.k8s.io/etcd:3.5.12-0\", repo digest \"registry.k8s.io/etcd@sha256:44a8e24dcbba3470ee1fee21d5e88d128c936e9b55d4bc51fbef8086f8ed123b\", size \"66189079\" in 3.555630817s"
Jan 29 10:55:07.812895 containerd[1449]: time="2025-01-29T10:55:07.812861694Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.12-0\" returns image reference \"sha256:014faa467e29798aeef733fe6d1a3b5e382688217b053ad23410e6cccd5d22fd\""
Jan 29 10:55:11.479254 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2.
Jan 29 10:55:11.488883 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jan 29 10:55:11.580665 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Jan 29 10:55:11.584871 (kubelet)[2110]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Jan 29 10:55:11.648598 kubelet[2110]: E0129 10:55:11.648547 2110 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Jan 29 10:55:11.651164 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Jan 29 10:55:11.651305 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Jan 29 10:55:12.704177 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Jan 29 10:55:12.713930 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jan 29 10:55:12.736916 systemd[1]: Reloading requested from client PID 2126 ('systemctl') (unit session-5.scope)...
Jan 29 10:55:12.736932 systemd[1]: Reloading...
Jan 29 10:55:12.800746 zram_generator::config[2165]: No configuration found.
Jan 29 10:55:13.042973 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Jan 29 10:55:13.095500 systemd[1]: Reloading finished in 358 ms.
Jan 29 10:55:13.144347 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
Jan 29 10:55:13.147995 systemd[1]: kubelet.service: Deactivated successfully.
Jan 29 10:55:13.148319 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Jan 29 10:55:13.164222 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jan 29 10:55:13.254327 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Jan 29 10:55:13.258180 (kubelet)[2213]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS
Jan 29 10:55:13.297986 kubelet[2213]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Jan 29 10:55:13.297986 kubelet[2213]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI.
Jan 29 10:55:13.297986 kubelet[2213]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Jan 29 10:55:13.298778 kubelet[2213]: I0129 10:55:13.298735 2213 server.go:205] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Jan 29 10:55:14.593220 kubelet[2213]: I0129 10:55:14.593173 2213 server.go:484] "Kubelet version" kubeletVersion="v1.30.1"
Jan 29 10:55:14.593220 kubelet[2213]: I0129 10:55:14.593206 2213 server.go:486] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Jan 29 10:55:14.593560 kubelet[2213]: I0129 10:55:14.593399 2213 server.go:927] "Client rotation is on, will bootstrap in background"
Jan 29 10:55:14.618360 kubelet[2213]: E0129 10:55:14.618329 2213 certificate_manager.go:562] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post "https://10.0.0.63:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 10.0.0.63:6443: connect: connection refused
Jan 29 10:55:14.619035 kubelet[2213]: I0129 10:55:14.619005 2213 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Jan 29 10:55:14.625833 kubelet[2213]: I0129 10:55:14.625808 2213 server.go:742] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /"
Jan 29 10:55:14.626279 kubelet[2213]: I0129 10:55:14.626246 2213 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Jan 29 10:55:14.626445 kubelet[2213]: I0129 10:55:14.626272 2213 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null}
Jan 29 10:55:14.626522 kubelet[2213]: I0129 10:55:14.626508 2213 topology_manager.go:138] "Creating topology manager with none policy"
Jan 29 10:55:14.626522 kubelet[2213]: I0129 10:55:14.626517 2213 container_manager_linux.go:301] "Creating device plugin manager"
Jan 29 10:55:14.626808 kubelet[2213]: I0129 10:55:14.626794 2213 state_mem.go:36] "Initialized new in-memory state store"
Jan 29 10:55:14.627808 kubelet[2213]: I0129 10:55:14.627785 2213 kubelet.go:400] "Attempting to sync node with API server"
Jan 29 10:55:14.627839 kubelet[2213]: I0129 10:55:14.627807 2213 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests"
Jan 29 10:55:14.628995 kubelet[2213]: I0129 10:55:14.627944 2213 kubelet.go:312] "Adding apiserver pod source"
Jan 29 10:55:14.628995 kubelet[2213]: I0129 10:55:14.628087 2213 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Jan 29 10:55:14.629661 kubelet[2213]: W0129 10:55:14.629617 2213 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.63:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.0.0.63:6443: connect: connection refused
Jan 29 10:55:14.629798 kubelet[2213]: E0129 10:55:14.629784 2213 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.0.0.63:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.0.0.63:6443: connect: connection refused
Jan 29 10:55:14.629868 kubelet[2213]: I0129 10:55:14.629710 2213 kuberuntime_manager.go:261] "Container runtime initialized" containerRuntime="containerd" version="v1.7.23" apiVersion="v1"
Jan 29 10:55:14.630440 kubelet[2213]: I0129 10:55:14.630427 2213 kubelet.go:815] "Not starting ClusterTrustBundle informer because we are in static kubelet mode"
Jan 29 10:55:14.630732 kubelet[2213]: W0129 10:55:14.630709 2213 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating.
Jan 29 10:55:14.630964 kubelet[2213]: W0129 10:55:14.630912 2213 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.63:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.63:6443: connect: connection refused
Jan 29 10:55:14.630964 kubelet[2213]: E0129 10:55:14.630963 2213 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://10.0.0.63:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.63:6443: connect: connection refused
Jan 29 10:55:14.631626 kubelet[2213]: I0129 10:55:14.631607 2213 server.go:1264] "Started kubelet"
Jan 29 10:55:14.635045 kubelet[2213]: I0129 10:55:14.635008 2213 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Jan 29 10:55:14.639829 kubelet[2213]: I0129 10:55:14.639760 2213 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Jan 29 10:55:14.640171 kubelet[2213]: I0129 10:55:14.640145 2213 server.go:227] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Jan 29 10:55:14.640285 kubelet[2213]: I0129 10:55:14.640263 2213 volume_manager.go:291] "Starting Kubelet Volume Manager"
Jan 29 10:55:14.640285 kubelet[2213]: I0129 10:55:14.640270 2213 server.go:163] "Starting to listen" address="0.0.0.0" port=10250
Jan 29 10:55:14.640896 kubelet[2213]: I0129 10:55:14.640868 2213 desired_state_of_world_populator.go:149] "Desired state populator starts to run"
Jan 29 10:55:14.641018 kubelet[2213]: I0129 10:55:14.641003 2213 reconciler.go:26] "Reconciler: start to sync state"
Jan 29 10:55:14.641379 kubelet[2213]: W0129 10:55:14.641345 2213 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.63:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.63:6443: connect: connection refused
Jan 29 10:55:14.641413 kubelet[2213]: E0129 10:55:14.641390 2213 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://10.0.0.63:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.63:6443: connect: connection refused
Jan 29 10:55:14.641465 kubelet[2213]: E0129 10:55:14.641442 2213 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.63:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.63:6443: connect: connection refused" interval="200ms"
Jan 29 10:55:14.642114 kubelet[2213]: I0129 10:55:14.642088 2213 server.go:455] "Adding debug handlers to kubelet server"
Jan 29 10:55:14.642572 kubelet[2213]: E0129 10:55:14.642313 2213 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.63:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.63:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.181f24804ba1589c default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-01-29 10:55:14.631579804 +0000 UTC m=+1.370553674,LastTimestamp:2025-01-29 10:55:14.631579804 +0000 UTC m=+1.370553674,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}"
Jan 29 10:55:14.643375 kubelet[2213]: I0129 10:55:14.643346 2213 factory.go:221] Registration of the systemd container factory successfully
Jan 29 10:55:14.643463 kubelet[2213]: I0129 10:55:14.643441 2213 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
Jan 29 10:55:14.644504 kubelet[2213]: I0129 10:55:14.644457 2213 factory.go:221] Registration of the containerd container factory successfully
Jan 29 10:55:14.654551 kubelet[2213]: I0129 10:55:14.654514 2213 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4"
Jan 29 10:55:14.655561 kubelet[2213]: I0129 10:55:14.655544 2213 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6"
Jan 29 10:55:14.655834 kubelet[2213]: I0129 10:55:14.655822 2213 status_manager.go:217] "Starting to sync pod status with apiserver"
Jan 29 10:55:14.655900 kubelet[2213]: I0129 10:55:14.655891 2213 kubelet.go:2337] "Starting kubelet main sync loop"
Jan 29 10:55:14.655989 kubelet[2213]: E0129 10:55:14.655970 2213 kubelet.go:2361] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
Jan 29 10:55:14.659853 kubelet[2213]: W0129 10:55:14.659811 2213 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.63:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.63:6443: connect: connection refused
Jan 29 10:55:14.659958 kubelet[2213]: E0129 10:55:14.659946 2213 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://10.0.0.63:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.63:6443: connect: connection refused
Jan 29 10:55:14.660099 kubelet[2213]: I0129 10:55:14.660081 2213 cpu_manager.go:214] "Starting CPU manager" policy="none"
Jan 29 10:55:14.660099 kubelet[2213]: I0129 10:55:14.660095 2213 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s"
Jan 29 10:55:14.660156 kubelet[2213]: I0129 10:55:14.660111 2213 state_mem.go:36] "Initialized new in-memory state store"
Jan 29 10:55:14.728968 kubelet[2213]: I0129 10:55:14.728925 2213 policy_none.go:49] "None policy: Start"
Jan 29 10:55:14.729913 kubelet[2213]: I0129 10:55:14.729868 2213 memory_manager.go:170] "Starting memorymanager" policy="None"
Jan 29 10:55:14.729913 kubelet[2213]: I0129 10:55:14.729898 2213 state_mem.go:35] "Initializing new in-memory state store"
Jan 29 10:55:14.734732 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice.
Jan 29 10:55:14.742227 kubelet[2213]: I0129 10:55:14.742201 2213 kubelet_node_status.go:73] "Attempting to register node" node="localhost"
Jan 29 10:55:14.742532 kubelet[2213]: E0129 10:55:14.742508 2213 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.0.0.63:6443/api/v1/nodes\": dial tcp 10.0.0.63:6443: connect: connection refused" node="localhost"
Jan 29 10:55:14.752188 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice.
Jan 29 10:55:14.754612 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice.
Jan 29 10:55:14.756148 kubelet[2213]: E0129 10:55:14.756129 2213 kubelet.go:2361] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Jan 29 10:55:14.764838 kubelet[2213]: I0129 10:55:14.764436 2213 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jan 29 10:55:14.764914 kubelet[2213]: I0129 10:55:14.764866 2213 container_log_manager.go:186] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jan 29 10:55:14.765132 kubelet[2213]: I0129 10:55:14.764970 2213 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jan 29 10:55:14.766253 kubelet[2213]: E0129 10:55:14.766223 2213 eviction_manager.go:282] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Jan 29 10:55:14.842824 kubelet[2213]: E0129 10:55:14.842781 2213 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.63:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.63:6443: connect: connection refused" interval="400ms" Jan 29 10:55:14.943991 kubelet[2213]: I0129 10:55:14.943902 2213 kubelet_node_status.go:73] "Attempting to register node" node="localhost" Jan 29 10:55:14.944227 kubelet[2213]: E0129 10:55:14.944188 2213 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.0.0.63:6443/api/v1/nodes\": dial tcp 10.0.0.63:6443: connect: connection refused" node="localhost" Jan 29 10:55:14.956310 kubelet[2213]: I0129 10:55:14.956241 2213 topology_manager.go:215] "Topology Admit Handler" podUID="4fcc98cd0ce60c8d3e4cbe2ae59a1676" podNamespace="kube-system" podName="kube-apiserver-localhost" Jan 29 10:55:14.957185 kubelet[2213]: I0129 10:55:14.957151 2213 topology_manager.go:215] "Topology Admit Handler" podUID="9b8b5886141f9311660bb6b224a0f76c" podNamespace="kube-system" 
podName="kube-controller-manager-localhost" Jan 29 10:55:14.958140 kubelet[2213]: I0129 10:55:14.958012 2213 topology_manager.go:215] "Topology Admit Handler" podUID="4b186e12ac9f083392bb0d1970b49be4" podNamespace="kube-system" podName="kube-scheduler-localhost" Jan 29 10:55:14.963269 systemd[1]: Created slice kubepods-burstable-pod4fcc98cd0ce60c8d3e4cbe2ae59a1676.slice - libcontainer container kubepods-burstable-pod4fcc98cd0ce60c8d3e4cbe2ae59a1676.slice. Jan 29 10:55:14.976606 systemd[1]: Created slice kubepods-burstable-pod9b8b5886141f9311660bb6b224a0f76c.slice - libcontainer container kubepods-burstable-pod9b8b5886141f9311660bb6b224a0f76c.slice. Jan 29 10:55:14.996739 systemd[1]: Created slice kubepods-burstable-pod4b186e12ac9f083392bb0d1970b49be4.slice - libcontainer container kubepods-burstable-pod4b186e12ac9f083392bb0d1970b49be4.slice. Jan 29 10:55:15.142599 kubelet[2213]: I0129 10:55:15.142560 2213 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/4fcc98cd0ce60c8d3e4cbe2ae59a1676-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"4fcc98cd0ce60c8d3e4cbe2ae59a1676\") " pod="kube-system/kube-apiserver-localhost" Jan 29 10:55:15.142599 kubelet[2213]: I0129 10:55:15.142596 2213 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/9b8b5886141f9311660bb6b224a0f76c-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"9b8b5886141f9311660bb6b224a0f76c\") " pod="kube-system/kube-controller-manager-localhost" Jan 29 10:55:15.142599 kubelet[2213]: I0129 10:55:15.142616 2213 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/9b8b5886141f9311660bb6b224a0f76c-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: 
\"9b8b5886141f9311660bb6b224a0f76c\") " pod="kube-system/kube-controller-manager-localhost" Jan 29 10:55:15.142800 kubelet[2213]: I0129 10:55:15.142632 2213 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/9b8b5886141f9311660bb6b224a0f76c-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"9b8b5886141f9311660bb6b224a0f76c\") " pod="kube-system/kube-controller-manager-localhost" Jan 29 10:55:15.142800 kubelet[2213]: I0129 10:55:15.142649 2213 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/9b8b5886141f9311660bb6b224a0f76c-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"9b8b5886141f9311660bb6b224a0f76c\") " pod="kube-system/kube-controller-manager-localhost" Jan 29 10:55:15.142800 kubelet[2213]: I0129 10:55:15.142665 2213 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/4fcc98cd0ce60c8d3e4cbe2ae59a1676-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"4fcc98cd0ce60c8d3e4cbe2ae59a1676\") " pod="kube-system/kube-apiserver-localhost" Jan 29 10:55:15.142800 kubelet[2213]: I0129 10:55:15.142679 2213 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/9b8b5886141f9311660bb6b224a0f76c-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"9b8b5886141f9311660bb6b224a0f76c\") " pod="kube-system/kube-controller-manager-localhost" Jan 29 10:55:15.142800 kubelet[2213]: I0129 10:55:15.142697 2213 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/4b186e12ac9f083392bb0d1970b49be4-kubeconfig\") pod 
\"kube-scheduler-localhost\" (UID: \"4b186e12ac9f083392bb0d1970b49be4\") " pod="kube-system/kube-scheduler-localhost" Jan 29 10:55:15.142900 kubelet[2213]: I0129 10:55:15.142711 2213 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/4fcc98cd0ce60c8d3e4cbe2ae59a1676-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"4fcc98cd0ce60c8d3e4cbe2ae59a1676\") " pod="kube-system/kube-apiserver-localhost" Jan 29 10:55:15.243742 kubelet[2213]: E0129 10:55:15.243593 2213 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.63:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.63:6443: connect: connection refused" interval="800ms" Jan 29 10:55:15.275003 kubelet[2213]: E0129 10:55:15.274923 2213 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 10:55:15.275620 containerd[1449]: time="2025-01-29T10:55:15.275582804Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:4fcc98cd0ce60c8d3e4cbe2ae59a1676,Namespace:kube-system,Attempt:0,}" Jan 29 10:55:15.295909 kubelet[2213]: E0129 10:55:15.295799 2213 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 10:55:15.296204 containerd[1449]: time="2025-01-29T10:55:15.296159700Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:9b8b5886141f9311660bb6b224a0f76c,Namespace:kube-system,Attempt:0,}" Jan 29 10:55:15.298965 kubelet[2213]: E0129 10:55:15.298909 2213 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 
8.8.8.8" Jan 29 10:55:15.299232 containerd[1449]: time="2025-01-29T10:55:15.299206102Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:4b186e12ac9f083392bb0d1970b49be4,Namespace:kube-system,Attempt:0,}" Jan 29 10:55:15.345575 kubelet[2213]: I0129 10:55:15.345548 2213 kubelet_node_status.go:73] "Attempting to register node" node="localhost" Jan 29 10:55:15.345846 kubelet[2213]: E0129 10:55:15.345817 2213 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.0.0.63:6443/api/v1/nodes\": dial tcp 10.0.0.63:6443: connect: connection refused" node="localhost" Jan 29 10:55:15.525753 kubelet[2213]: W0129 10:55:15.525613 2213 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.63:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.63:6443: connect: connection refused Jan 29 10:55:15.525753 kubelet[2213]: E0129 10:55:15.525654 2213 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://10.0.0.63:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.63:6443: connect: connection refused Jan 29 10:55:15.705604 kubelet[2213]: W0129 10:55:15.705544 2213 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.63:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.63:6443: connect: connection refused Jan 29 10:55:15.705604 kubelet[2213]: E0129 10:55:15.705582 2213 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://10.0.0.63:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.63:6443: connect: connection refused Jan 29 10:55:15.842815 kubelet[2213]: W0129 10:55:15.842666 2213 reflector.go:547] 
k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.63:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.0.0.63:6443: connect: connection refused Jan 29 10:55:15.842815 kubelet[2213]: E0129 10:55:15.842753 2213 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.0.0.63:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.0.0.63:6443: connect: connection refused Jan 29 10:55:15.901318 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4078286665.mount: Deactivated successfully. Jan 29 10:55:15.907214 containerd[1449]: time="2025-01-29T10:55:15.907165735Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 29 10:55:15.909429 containerd[1449]: time="2025-01-29T10:55:15.909386957Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=269175" Jan 29 10:55:15.911210 containerd[1449]: time="2025-01-29T10:55:15.911180123Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 29 10:55:15.912695 containerd[1449]: time="2025-01-29T10:55:15.912646629Z" level=info msg="ImageCreate event name:\"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 29 10:55:15.913302 containerd[1449]: time="2025-01-29T10:55:15.913268931Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Jan 29 10:55:15.914021 containerd[1449]: time="2025-01-29T10:55:15.913998148Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" 
labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 29 10:55:15.914640 containerd[1449]: time="2025-01-29T10:55:15.914579203Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Jan 29 10:55:15.916111 containerd[1449]: time="2025-01-29T10:55:15.916083799Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 29 10:55:15.918328 containerd[1449]: time="2025-01-29T10:55:15.918303943Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 642.640003ms" Jan 29 10:55:15.919740 containerd[1449]: time="2025-01-29T10:55:15.919574606Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 620.189127ms" Jan 29 10:55:15.922449 containerd[1449]: time="2025-01-29T10:55:15.922319050Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 626.092164ms" Jan 29 10:55:16.044493 kubelet[2213]: E0129 10:55:16.044200 2213 controller.go:145] "Failed to ensure lease exists, will retry" 
err="Get \"https://10.0.0.63:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.63:6443: connect: connection refused" interval="1.6s" Jan 29 10:55:16.050857 containerd[1449]: time="2025-01-29T10:55:16.050574733Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 29 10:55:16.050857 containerd[1449]: time="2025-01-29T10:55:16.050637170Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 29 10:55:16.050857 containerd[1449]: time="2025-01-29T10:55:16.050648042Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 10:55:16.050857 containerd[1449]: time="2025-01-29T10:55:16.050762162Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 10:55:16.051329 containerd[1449]: time="2025-01-29T10:55:16.051237190Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 29 10:55:16.051713 containerd[1449]: time="2025-01-29T10:55:16.051319452Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 29 10:55:16.051713 containerd[1449]: time="2025-01-29T10:55:16.051676322Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 10:55:16.051884 containerd[1449]: time="2025-01-29T10:55:16.051822540Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 10:55:16.052712 containerd[1449]: time="2025-01-29T10:55:16.052640008Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 29 10:55:16.052712 containerd[1449]: time="2025-01-29T10:55:16.052685056Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 29 10:55:16.052796 containerd[1449]: time="2025-01-29T10:55:16.052707121Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 10:55:16.052827 containerd[1449]: time="2025-01-29T10:55:16.052790382Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 10:55:16.073881 systemd[1]: Started cri-containerd-51d418619a3e6386f900cba7c91e4f3c6d2d7fec0c549e3714d58336eba764f6.scope - libcontainer container 51d418619a3e6386f900cba7c91e4f3c6d2d7fec0c549e3714d58336eba764f6. Jan 29 10:55:16.077712 systemd[1]: Started cri-containerd-70ee4868384e369ed0975ab537ffee2c13935646198423f3b1acdbb9e8130e87.scope - libcontainer container 70ee4868384e369ed0975ab537ffee2c13935646198423f3b1acdbb9e8130e87. Jan 29 10:55:16.079191 systemd[1]: Started cri-containerd-99ae1f678d899316e6d58b14be606ef2cf36ac73e24a7de576eff95f0e402415.scope - libcontainer container 99ae1f678d899316e6d58b14be606ef2cf36ac73e24a7de576eff95f0e402415. 
Jan 29 10:55:16.113036 containerd[1449]: time="2025-01-29T10:55:16.112924046Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:4fcc98cd0ce60c8d3e4cbe2ae59a1676,Namespace:kube-system,Attempt:0,} returns sandbox id \"51d418619a3e6386f900cba7c91e4f3c6d2d7fec0c549e3714d58336eba764f6\"" Jan 29 10:55:16.116479 containerd[1449]: time="2025-01-29T10:55:16.116411445Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:9b8b5886141f9311660bb6b224a0f76c,Namespace:kube-system,Attempt:0,} returns sandbox id \"70ee4868384e369ed0975ab537ffee2c13935646198423f3b1acdbb9e8130e87\"" Jan 29 10:55:16.117305 kubelet[2213]: E0129 10:55:16.117267 2213 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 10:55:16.117791 kubelet[2213]: E0129 10:55:16.117528 2213 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 10:55:16.119789 containerd[1449]: time="2025-01-29T10:55:16.119532340Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:4b186e12ac9f083392bb0d1970b49be4,Namespace:kube-system,Attempt:0,} returns sandbox id \"99ae1f678d899316e6d58b14be606ef2cf36ac73e24a7de576eff95f0e402415\"" Jan 29 10:55:16.122967 kubelet[2213]: E0129 10:55:16.122476 2213 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 10:55:16.123025 containerd[1449]: time="2025-01-29T10:55:16.122863808Z" level=info msg="CreateContainer within sandbox \"70ee4868384e369ed0975ab537ffee2c13935646198423f3b1acdbb9e8130e87\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Jan 29 10:55:16.123455 containerd[1449]: 
time="2025-01-29T10:55:16.123397834Z" level=info msg="CreateContainer within sandbox \"51d418619a3e6386f900cba7c91e4f3c6d2d7fec0c549e3714d58336eba764f6\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Jan 29 10:55:16.125918 containerd[1449]: time="2025-01-29T10:55:16.125872741Z" level=info msg="CreateContainer within sandbox \"99ae1f678d899316e6d58b14be606ef2cf36ac73e24a7de576eff95f0e402415\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Jan 29 10:55:16.137072 containerd[1449]: time="2025-01-29T10:55:16.137036167Z" level=info msg="CreateContainer within sandbox \"70ee4868384e369ed0975ab537ffee2c13935646198423f3b1acdbb9e8130e87\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"c0ec25d837d011c4f4c7acb0faff09a5095bf3733bc9705832f5437f1e96087c\"" Jan 29 10:55:16.137690 containerd[1449]: time="2025-01-29T10:55:16.137665086Z" level=info msg="StartContainer for \"c0ec25d837d011c4f4c7acb0faff09a5095bf3733bc9705832f5437f1e96087c\"" Jan 29 10:55:16.144111 containerd[1449]: time="2025-01-29T10:55:16.144078277Z" level=info msg="CreateContainer within sandbox \"51d418619a3e6386f900cba7c91e4f3c6d2d7fec0c549e3714d58336eba764f6\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"d4f8b97d146d7e0db7184f78541149dcf293c684fcbfe7298ed94166e4bdc725\"" Jan 29 10:55:16.146171 containerd[1449]: time="2025-01-29T10:55:16.145258171Z" level=info msg="StartContainer for \"d4f8b97d146d7e0db7184f78541149dcf293c684fcbfe7298ed94166e4bdc725\"" Jan 29 10:55:16.146171 containerd[1449]: time="2025-01-29T10:55:16.145273120Z" level=info msg="CreateContainer within sandbox \"99ae1f678d899316e6d58b14be606ef2cf36ac73e24a7de576eff95f0e402415\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"aef2cc0c00726dc2309cde174c88ab44bc42ddf5cdf05a62df46857a02891396\"" Jan 29 10:55:16.146171 containerd[1449]: time="2025-01-29T10:55:16.146076358Z" level=info msg="StartContainer for 
\"aef2cc0c00726dc2309cde174c88ab44bc42ddf5cdf05a62df46857a02891396\"" Jan 29 10:55:16.147939 kubelet[2213]: I0129 10:55:16.147903 2213 kubelet_node_status.go:73] "Attempting to register node" node="localhost" Jan 29 10:55:16.148257 kubelet[2213]: E0129 10:55:16.148220 2213 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.0.0.63:6443/api/v1/nodes\": dial tcp 10.0.0.63:6443: connect: connection refused" node="localhost" Jan 29 10:55:16.163910 systemd[1]: Started cri-containerd-c0ec25d837d011c4f4c7acb0faff09a5095bf3733bc9705832f5437f1e96087c.scope - libcontainer container c0ec25d837d011c4f4c7acb0faff09a5095bf3733bc9705832f5437f1e96087c. Jan 29 10:55:16.174873 systemd[1]: Started cri-containerd-aef2cc0c00726dc2309cde174c88ab44bc42ddf5cdf05a62df46857a02891396.scope - libcontainer container aef2cc0c00726dc2309cde174c88ab44bc42ddf5cdf05a62df46857a02891396. Jan 29 10:55:16.176340 systemd[1]: Started cri-containerd-d4f8b97d146d7e0db7184f78541149dcf293c684fcbfe7298ed94166e4bdc725.scope - libcontainer container d4f8b97d146d7e0db7184f78541149dcf293c684fcbfe7298ed94166e4bdc725. 
Jan 29 10:55:16.203790 containerd[1449]: time="2025-01-29T10:55:16.203746786Z" level=info msg="StartContainer for \"c0ec25d837d011c4f4c7acb0faff09a5095bf3733bc9705832f5437f1e96087c\" returns successfully" Jan 29 10:55:16.215065 kubelet[2213]: W0129 10:55:16.214893 2213 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.63:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.63:6443: connect: connection refused Jan 29 10:55:16.215065 kubelet[2213]: E0129 10:55:16.214960 2213 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://10.0.0.63:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.63:6443: connect: connection refused Jan 29 10:55:16.224421 containerd[1449]: time="2025-01-29T10:55:16.223675475Z" level=info msg="StartContainer for \"aef2cc0c00726dc2309cde174c88ab44bc42ddf5cdf05a62df46857a02891396\" returns successfully" Jan 29 10:55:16.241786 containerd[1449]: time="2025-01-29T10:55:16.240296240Z" level=info msg="StartContainer for \"d4f8b97d146d7e0db7184f78541149dcf293c684fcbfe7298ed94166e4bdc725\" returns successfully" Jan 29 10:55:16.671361 kubelet[2213]: E0129 10:55:16.671333 2213 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 10:55:16.673827 kubelet[2213]: E0129 10:55:16.673803 2213 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 10:55:16.674316 kubelet[2213]: E0129 10:55:16.674295 2213 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 
10:55:17.675472 kubelet[2213]: E0129 10:55:17.675392 2213 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 10:55:17.731004 kubelet[2213]: E0129 10:55:17.730954 2213 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"localhost\" not found" node="localhost" Jan 29 10:55:17.749264 kubelet[2213]: I0129 10:55:17.749238 2213 kubelet_node_status.go:73] "Attempting to register node" node="localhost" Jan 29 10:55:17.860482 kubelet[2213]: I0129 10:55:17.860436 2213 kubelet_node_status.go:76] "Successfully registered node" node="localhost" Jan 29 10:55:17.867143 kubelet[2213]: E0129 10:55:17.867098 2213 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 29 10:55:17.967497 kubelet[2213]: E0129 10:55:17.967394 2213 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 29 10:55:18.631460 kubelet[2213]: I0129 10:55:18.631384 2213 apiserver.go:52] "Watching apiserver" Jan 29 10:55:18.641740 kubelet[2213]: I0129 10:55:18.641695 2213 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world" Jan 29 10:55:19.708934 systemd[1]: Reloading requested from client PID 2498 ('systemctl') (unit session-5.scope)... Jan 29 10:55:19.709222 systemd[1]: Reloading... Jan 29 10:55:19.777852 zram_generator::config[2540]: No configuration found. Jan 29 10:55:19.862790 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jan 29 10:55:19.923947 systemd[1]: Reloading finished in 214 ms. Jan 29 10:55:19.956365 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... 
Jan 29 10:55:19.973539 systemd[1]: kubelet.service: Deactivated successfully. Jan 29 10:55:19.973807 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jan 29 10:55:19.973860 systemd[1]: kubelet.service: Consumed 1.732s CPU time, 117.3M memory peak, 0B memory swap peak. Jan 29 10:55:19.983122 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 29 10:55:20.072331 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 29 10:55:20.077540 (kubelet)[2579]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jan 29 10:55:20.115988 kubelet[2579]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 29 10:55:20.115988 kubelet[2579]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Jan 29 10:55:20.115988 kubelet[2579]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Jan 29 10:55:20.116341 kubelet[2579]: I0129 10:55:20.115990 2579 server.go:205] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jan 29 10:55:20.121408 kubelet[2579]: I0129 10:55:20.121325 2579 server.go:484] "Kubelet version" kubeletVersion="v1.30.1" Jan 29 10:55:20.121408 kubelet[2579]: I0129 10:55:20.121351 2579 server.go:486] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jan 29 10:55:20.121685 kubelet[2579]: I0129 10:55:20.121665 2579 server.go:927] "Client rotation is on, will bootstrap in background" Jan 29 10:55:20.123102 kubelet[2579]: I0129 10:55:20.123022 2579 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Jan 29 10:55:20.125452 kubelet[2579]: I0129 10:55:20.125428 2579 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jan 29 10:55:20.133265 kubelet[2579]: I0129 10:55:20.133227 2579 server.go:742] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /"
Jan 29 10:55:20.133429 kubelet[2579]: I0129 10:55:20.133403 2579 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Jan 29 10:55:20.133669 kubelet[2579]: I0129 10:55:20.133429 2579 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null}
Jan 29 10:55:20.133748 kubelet[2579]: I0129 10:55:20.133681 2579 topology_manager.go:138] "Creating topology manager with none policy"
Jan 29 10:55:20.133748 kubelet[2579]: I0129 10:55:20.133690 2579 container_manager_linux.go:301] "Creating device plugin manager"
Jan 29 10:55:20.133795 kubelet[2579]: I0129 10:55:20.133763 2579 state_mem.go:36] "Initialized new in-memory state store"
Jan 29 10:55:20.134509 kubelet[2579]: I0129 10:55:20.134161 2579 kubelet.go:400] "Attempting to sync node with API server"
Jan 29 10:55:20.134509 kubelet[2579]: I0129 10:55:20.134185 2579 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests"
Jan 29 10:55:20.134509 kubelet[2579]: I0129 10:55:20.134212 2579 kubelet.go:312] "Adding apiserver pod source"
Jan 29 10:55:20.134509 kubelet[2579]: I0129 10:55:20.134229 2579 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Jan 29 10:55:20.135871 kubelet[2579]: I0129 10:55:20.135845 2579 kuberuntime_manager.go:261] "Container runtime initialized" containerRuntime="containerd" version="v1.7.23" apiVersion="v1"
Jan 29 10:55:20.136023 kubelet[2579]: I0129 10:55:20.136002 2579 kubelet.go:815] "Not starting ClusterTrustBundle informer because we are in static kubelet mode"
Jan 29 10:55:20.138752 kubelet[2579]: I0129 10:55:20.137935 2579 server.go:1264] "Started kubelet"
Jan 29 10:55:20.140520 kubelet[2579]: I0129 10:55:20.140496 2579 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Jan 29 10:55:20.141531 kubelet[2579]: I0129 10:55:20.140705 2579 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Jan 29 10:55:20.141531 kubelet[2579]: I0129 10:55:20.141019 2579 server.go:227] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Jan 29 10:55:20.145759 kubelet[2579]: I0129 10:55:20.139037 2579 server.go:163] "Starting to listen" address="0.0.0.0" port=10250
Jan 29 10:55:20.145759 kubelet[2579]: I0129 10:55:20.143916 2579 volume_manager.go:291] "Starting Kubelet Volume Manager"
Jan 29 10:55:20.145759 kubelet[2579]: I0129 10:55:20.144214 2579 desired_state_of_world_populator.go:149] "Desired state populator starts to run"
Jan 29 10:55:20.145759 kubelet[2579]: I0129 10:55:20.144332 2579 reconciler.go:26] "Reconciler: start to sync state"
Jan 29 10:55:20.145759 kubelet[2579]: I0129 10:55:20.144510 2579 server.go:455] "Adding debug handlers to kubelet server"
Jan 29 10:55:20.148530 kubelet[2579]: I0129 10:55:20.147771 2579 factory.go:221] Registration of the systemd container factory successfully
Jan 29 10:55:20.153876 kubelet[2579]: I0129 10:55:20.153849 2579 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
Jan 29 10:55:20.154059 kubelet[2579]: E0129 10:55:20.148437 2579 kubelet.go:1467] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem"
Jan 29 10:55:20.154059 kubelet[2579]: I0129 10:55:20.153332 2579 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4"
Jan 29 10:55:20.157616 kubelet[2579]: I0129 10:55:20.157540 2579 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6"
Jan 29 10:55:20.157616 kubelet[2579]: I0129 10:55:20.157573 2579 status_manager.go:217] "Starting to sync pod status with apiserver"
Jan 29 10:55:20.157616 kubelet[2579]: I0129 10:55:20.157587 2579 kubelet.go:2337] "Starting kubelet main sync loop"
Jan 29 10:55:20.157760 kubelet[2579]: E0129 10:55:20.157621 2579 kubelet.go:2361] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
Jan 29 10:55:20.158650 kubelet[2579]: I0129 10:55:20.158632 2579 factory.go:221] Registration of the containerd container factory successfully
Jan 29 10:55:20.197966 kubelet[2579]: I0129 10:55:20.197929 2579 cpu_manager.go:214] "Starting CPU manager" policy="none"
Jan 29 10:55:20.197966 kubelet[2579]: I0129 10:55:20.197958 2579 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s"
Jan 29 10:55:20.197966 kubelet[2579]: I0129 10:55:20.197977 2579 state_mem.go:36] "Initialized new in-memory state store"
Jan 29 10:55:20.198142 kubelet[2579]: I0129 10:55:20.198114 2579 state_mem.go:88] "Updated default CPUSet" cpuSet=""
Jan 29 10:55:20.198169 kubelet[2579]: I0129 10:55:20.198139 2579 state_mem.go:96] "Updated CPUSet assignments" assignments={}
Jan 29 10:55:20.198169 kubelet[2579]: I0129 10:55:20.198158 2579 policy_none.go:49] "None policy: Start"
Jan 29 10:55:20.198772 kubelet[2579]: I0129 10:55:20.198754 2579 memory_manager.go:170] "Starting memorymanager" policy="None"
Jan 29 10:55:20.198811 kubelet[2579]: I0129 10:55:20.198778 2579 state_mem.go:35] "Initializing new in-memory state store"
Jan 29 10:55:20.198932 kubelet[2579]: I0129 10:55:20.198914 2579 state_mem.go:75] "Updated machine memory state"
Jan 29 10:55:20.202917 kubelet[2579]: I0129 10:55:20.202897 2579 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found"
Jan 29 10:55:20.203103 kubelet[2579]: I0129 10:55:20.203068 2579 container_log_manager.go:186] "Initializing container log rotate workers" workers=1 monitorPeriod="10s"
Jan 29 10:55:20.203220 kubelet[2579]: I0129 10:55:20.203171 2579 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
Jan 29 10:55:20.247844 kubelet[2579]: I0129 10:55:20.247762 2579 kubelet_node_status.go:73] "Attempting to register node" node="localhost"
Jan 29 10:55:20.253606 kubelet[2579]: I0129 10:55:20.253569 2579 kubelet_node_status.go:112] "Node was previously registered" node="localhost"
Jan 29 10:55:20.253828 kubelet[2579]: I0129 10:55:20.253802 2579 kubelet_node_status.go:76] "Successfully registered node" node="localhost"
Jan 29 10:55:20.258685 kubelet[2579]: I0129 10:55:20.258646 2579 topology_manager.go:215] "Topology Admit Handler" podUID="4fcc98cd0ce60c8d3e4cbe2ae59a1676" podNamespace="kube-system" podName="kube-apiserver-localhost"
Jan 29 10:55:20.258903 kubelet[2579]: I0129 10:55:20.258887 2579 topology_manager.go:215] "Topology Admit Handler" podUID="9b8b5886141f9311660bb6b224a0f76c" podNamespace="kube-system" podName="kube-controller-manager-localhost"
Jan 29 10:55:20.259102 kubelet[2579]: I0129 10:55:20.259064 2579 topology_manager.go:215] "Topology Admit Handler" podUID="4b186e12ac9f083392bb0d1970b49be4" podNamespace="kube-system" podName="kube-scheduler-localhost"
Jan 29 10:55:20.344889 kubelet[2579]: I0129 10:55:20.344860 2579 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/4fcc98cd0ce60c8d3e4cbe2ae59a1676-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"4fcc98cd0ce60c8d3e4cbe2ae59a1676\") " pod="kube-system/kube-apiserver-localhost"
Jan 29 10:55:20.344889 kubelet[2579]: I0129 10:55:20.344893 2579 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/9b8b5886141f9311660bb6b224a0f76c-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"9b8b5886141f9311660bb6b224a0f76c\") " pod="kube-system/kube-controller-manager-localhost"
Jan 29 10:55:20.345028 kubelet[2579]: I0129 10:55:20.344914 2579 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/9b8b5886141f9311660bb6b224a0f76c-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"9b8b5886141f9311660bb6b224a0f76c\") " pod="kube-system/kube-controller-manager-localhost"
Jan 29 10:55:20.345028 kubelet[2579]: I0129 10:55:20.344943 2579 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/9b8b5886141f9311660bb6b224a0f76c-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"9b8b5886141f9311660bb6b224a0f76c\") " pod="kube-system/kube-controller-manager-localhost"
Jan 29 10:55:20.345028 kubelet[2579]: I0129 10:55:20.344966 2579 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/4b186e12ac9f083392bb0d1970b49be4-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"4b186e12ac9f083392bb0d1970b49be4\") " pod="kube-system/kube-scheduler-localhost"
Jan 29 10:55:20.345028 kubelet[2579]: I0129 10:55:20.344982 2579 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/4fcc98cd0ce60c8d3e4cbe2ae59a1676-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"4fcc98cd0ce60c8d3e4cbe2ae59a1676\") " pod="kube-system/kube-apiserver-localhost"
Jan 29 10:55:20.345028 kubelet[2579]: I0129 10:55:20.344998 2579 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/4fcc98cd0ce60c8d3e4cbe2ae59a1676-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"4fcc98cd0ce60c8d3e4cbe2ae59a1676\") " pod="kube-system/kube-apiserver-localhost"
Jan 29 10:55:20.345139 kubelet[2579]: I0129 10:55:20.345022 2579 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/9b8b5886141f9311660bb6b224a0f76c-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"9b8b5886141f9311660bb6b224a0f76c\") " pod="kube-system/kube-controller-manager-localhost"
Jan 29 10:55:20.345139 kubelet[2579]: I0129 10:55:20.345038 2579 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/9b8b5886141f9311660bb6b224a0f76c-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"9b8b5886141f9311660bb6b224a0f76c\") " pod="kube-system/kube-controller-manager-localhost"
Jan 29 10:55:20.569294 kubelet[2579]: E0129 10:55:20.568895 2579 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 29 10:55:20.569294 kubelet[2579]: E0129 10:55:20.568947 2579 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 29 10:55:20.569294 kubelet[2579]: E0129 10:55:20.569047 2579 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 29 10:55:21.134910 kubelet[2579]: I0129 10:55:21.134862 2579 apiserver.go:52] "Watching apiserver"
Jan 29 10:55:21.145159 kubelet[2579]: I0129 10:55:21.145111 2579 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world"
Jan 29 10:55:21.186622 kubelet[2579]: E0129 10:55:21.185459 2579 kubelet.go:1928] "Failed creating a mirror pod for" err="pods \"kube-controller-manager-localhost\" already exists" pod="kube-system/kube-controller-manager-localhost"
Jan 29 10:55:21.186622 kubelet[2579]: E0129 10:55:21.185936 2579 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 29 10:55:21.186622 kubelet[2579]: E0129 10:55:21.186179 2579 kubelet.go:1928] "Failed creating a mirror pod for" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost"
Jan 29 10:55:21.189744 kubelet[2579]: E0129 10:55:21.187405 2579 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 29 10:55:21.191531 kubelet[2579]: E0129 10:55:21.190917 2579 kubelet.go:1928] "Failed creating a mirror pod for" err="pods \"kube-scheduler-localhost\" already exists" pod="kube-system/kube-scheduler-localhost"
Jan 29 10:55:21.191531 kubelet[2579]: E0129 10:55:21.191228 2579 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 29 10:55:21.206061 kubelet[2579]: I0129 10:55:21.205967 2579 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=1.20594057 podStartE2EDuration="1.20594057s" podCreationTimestamp="2025-01-29 10:55:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-29 10:55:21.205696097 +0000 UTC m=+1.124920126" watchObservedRunningTime="2025-01-29 10:55:21.20594057 +0000 UTC m=+1.125164599"
Jan 29 10:55:21.223121 kubelet[2579]: I0129 10:55:21.223056 2579 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=1.223038685 podStartE2EDuration="1.223038685s" podCreationTimestamp="2025-01-29 10:55:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-29 10:55:21.22249726 +0000 UTC m=+1.141721289" watchObservedRunningTime="2025-01-29 10:55:21.223038685 +0000 UTC m=+1.142262674"
Jan 29 10:55:21.223263 kubelet[2579]: I0129 10:55:21.223141 2579 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=1.223136162 podStartE2EDuration="1.223136162s" podCreationTimestamp="2025-01-29 10:55:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-29 10:55:21.214226815 +0000 UTC m=+1.133450844" watchObservedRunningTime="2025-01-29 10:55:21.223136162 +0000 UTC m=+1.142360191"
Jan 29 10:55:21.740119 sudo[1579]: pam_unix(sudo:session): session closed for user root
Jan 29 10:55:21.741366 sshd[1578]: Connection closed by 10.0.0.1 port 40092
Jan 29 10:55:21.741886 sshd-session[1576]: pam_unix(sshd:session): session closed for user core
Jan 29 10:55:21.744904 systemd[1]: sshd@4-10.0.0.63:22-10.0.0.1:40092.service: Deactivated successfully.
Jan 29 10:55:21.747129 systemd[1]: session-5.scope: Deactivated successfully.
Jan 29 10:55:21.747295 systemd[1]: session-5.scope: Consumed 6.271s CPU time, 195.4M memory peak, 0B memory swap peak.
Jan 29 10:55:21.748991 systemd-logind[1428]: Session 5 logged out. Waiting for processes to exit.
Jan 29 10:55:21.750226 systemd-logind[1428]: Removed session 5.
Jan 29 10:55:22.181548 kubelet[2579]: E0129 10:55:22.180621 2579 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 29 10:55:22.181548 kubelet[2579]: E0129 10:55:22.180683 2579 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 29 10:55:22.181548 kubelet[2579]: E0129 10:55:22.180942 2579 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 29 10:55:23.182881 kubelet[2579]: E0129 10:55:23.182396 2579 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 29 10:55:23.182881 kubelet[2579]: E0129 10:55:23.182540 2579 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 29 10:55:30.633261 kubelet[2579]: E0129 10:55:30.632060 2579 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 29 10:55:32.778283 kubelet[2579]: E0129 10:55:32.778244 2579 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 29 10:55:32.849804 kubelet[2579]: E0129 10:55:32.849774 2579 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 29 10:55:33.763264 kubelet[2579]: I0129 10:55:33.763224 2579 kuberuntime_manager.go:1523] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24"
Jan 29 10:55:33.763651 containerd[1449]: time="2025-01-29T10:55:33.763588027Z" level=info msg="No cni config template is specified, wait for other system components to drop the config."
Jan 29 10:55:33.763960 kubelet[2579]: I0129 10:55:33.763834 2579 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24"
Jan 29 10:55:33.951786 kubelet[2579]: I0129 10:55:33.951174 2579 topology_manager.go:215] "Topology Admit Handler" podUID="25088b8a-5a73-44c2-844e-572d0c9f160e" podNamespace="kube-flannel" podName="kube-flannel-ds-4p4p4"
Jan 29 10:55:33.951786 kubelet[2579]: I0129 10:55:33.951369 2579 topology_manager.go:215] "Topology Admit Handler" podUID="509f1e41-f30b-4a22-a610-92e2146a05f9" podNamespace="kube-system" podName="kube-proxy-nbw8x"
Jan 29 10:55:33.966822 systemd[1]: Created slice kubepods-besteffort-pod509f1e41_f30b_4a22_a610_92e2146a05f9.slice - libcontainer container kubepods-besteffort-pod509f1e41_f30b_4a22_a610_92e2146a05f9.slice.
Jan 29 10:55:33.979167 systemd[1]: Created slice kubepods-burstable-pod25088b8a_5a73_44c2_844e_572d0c9f160e.slice - libcontainer container kubepods-burstable-pod25088b8a_5a73_44c2_844e_572d0c9f160e.slice.
Jan 29 10:55:34.107866 update_engine[1430]: I20250129 10:55:34.107785 1430 update_attempter.cc:509] Updating boot flags...
Jan 29 10:55:34.132207 kubelet[2579]: I0129 10:55:34.130650 2579 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-plugin\" (UniqueName: \"kubernetes.io/host-path/25088b8a-5a73-44c2-844e-572d0c9f160e-cni-plugin\") pod \"kube-flannel-ds-4p4p4\" (UID: \"25088b8a-5a73-44c2-844e-572d0c9f160e\") " pod="kube-flannel/kube-flannel-ds-4p4p4"
Jan 29 10:55:34.132207 kubelet[2579]: I0129 10:55:34.130698 2579 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni\" (UniqueName: \"kubernetes.io/host-path/25088b8a-5a73-44c2-844e-572d0c9f160e-cni\") pod \"kube-flannel-ds-4p4p4\" (UID: \"25088b8a-5a73-44c2-844e-572d0c9f160e\") " pod="kube-flannel/kube-flannel-ds-4p4p4"
Jan 29 10:55:34.132207 kubelet[2579]: I0129 10:55:34.130803 2579 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/509f1e41-f30b-4a22-a610-92e2146a05f9-lib-modules\") pod \"kube-proxy-nbw8x\" (UID: \"509f1e41-f30b-4a22-a610-92e2146a05f9\") " pod="kube-system/kube-proxy-nbw8x"
Jan 29 10:55:34.132207 kubelet[2579]: I0129 10:55:34.130825 2579 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c5jvv\" (UniqueName: \"kubernetes.io/projected/25088b8a-5a73-44c2-844e-572d0c9f160e-kube-api-access-c5jvv\") pod \"kube-flannel-ds-4p4p4\" (UID: \"25088b8a-5a73-44c2-844e-572d0c9f160e\") " pod="kube-flannel/kube-flannel-ds-4p4p4"
Jan 29 10:55:34.132207 kubelet[2579]: I0129 10:55:34.130941 2579 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/509f1e41-f30b-4a22-a610-92e2146a05f9-kube-proxy\") pod \"kube-proxy-nbw8x\" (UID: \"509f1e41-f30b-4a22-a610-92e2146a05f9\") " pod="kube-system/kube-proxy-nbw8x"
Jan 29 10:55:34.133782 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 39 scanned by (udev-worker) (2649)
Jan 29 10:55:34.133822 kubelet[2579]: I0129 10:55:34.130958 2579 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/509f1e41-f30b-4a22-a610-92e2146a05f9-xtables-lock\") pod \"kube-proxy-nbw8x\" (UID: \"509f1e41-f30b-4a22-a610-92e2146a05f9\") " pod="kube-system/kube-proxy-nbw8x"
Jan 29 10:55:34.133822 kubelet[2579]: I0129 10:55:34.130989 2579 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l22dl\" (UniqueName: \"kubernetes.io/projected/509f1e41-f30b-4a22-a610-92e2146a05f9-kube-api-access-l22dl\") pod \"kube-proxy-nbw8x\" (UID: \"509f1e41-f30b-4a22-a610-92e2146a05f9\") " pod="kube-system/kube-proxy-nbw8x"
Jan 29 10:55:34.133822 kubelet[2579]: I0129 10:55:34.131056 2579 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run\" (UniqueName: \"kubernetes.io/host-path/25088b8a-5a73-44c2-844e-572d0c9f160e-run\") pod \"kube-flannel-ds-4p4p4\" (UID: \"25088b8a-5a73-44c2-844e-572d0c9f160e\") " pod="kube-flannel/kube-flannel-ds-4p4p4"
Jan 29 10:55:34.133822 kubelet[2579]: I0129 10:55:34.131086 2579 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flannel-cfg\" (UniqueName: \"kubernetes.io/configmap/25088b8a-5a73-44c2-844e-572d0c9f160e-flannel-cfg\") pod \"kube-flannel-ds-4p4p4\" (UID: \"25088b8a-5a73-44c2-844e-572d0c9f160e\") " pod="kube-flannel/kube-flannel-ds-4p4p4"
Jan 29 10:55:34.133822 kubelet[2579]: I0129 10:55:34.131127 2579 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/25088b8a-5a73-44c2-844e-572d0c9f160e-xtables-lock\") pod \"kube-flannel-ds-4p4p4\" (UID: \"25088b8a-5a73-44c2-844e-572d0c9f160e\") " pod="kube-flannel/kube-flannel-ds-4p4p4"
Jan 29 10:55:34.169791 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 39 scanned by (udev-worker) (2647)
Jan 29 10:55:34.278945 kubelet[2579]: E0129 10:55:34.278902 2579 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 29 10:55:34.279928 containerd[1449]: time="2025-01-29T10:55:34.279639810Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-nbw8x,Uid:509f1e41-f30b-4a22-a610-92e2146a05f9,Namespace:kube-system,Attempt:0,}"
Jan 29 10:55:34.282342 kubelet[2579]: E0129 10:55:34.282107 2579 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 29 10:55:34.282944 containerd[1449]: time="2025-01-29T10:55:34.282630408Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-flannel-ds-4p4p4,Uid:25088b8a-5a73-44c2-844e-572d0c9f160e,Namespace:kube-flannel,Attempt:0,}"
Jan 29 10:55:34.305950 containerd[1449]: time="2025-01-29T10:55:34.303919949Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jan 29 10:55:34.305950 containerd[1449]: time="2025-01-29T10:55:34.305506767Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jan 29 10:55:34.305950 containerd[1449]: time="2025-01-29T10:55:34.305523967Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 29 10:55:34.305950 containerd[1449]: time="2025-01-29T10:55:34.305613806Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 29 10:55:34.309147 containerd[1449]: time="2025-01-29T10:55:34.309037238Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jan 29 10:55:34.309147 containerd[1449]: time="2025-01-29T10:55:34.309089917Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jan 29 10:55:34.309147 containerd[1449]: time="2025-01-29T10:55:34.309105237Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 29 10:55:34.309321 containerd[1449]: time="2025-01-29T10:55:34.309176556Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 29 10:55:34.327981 systemd[1]: Started cri-containerd-ac9fe73aac53c1c89cb00ac28d6adac183502606f09cfa83c41ae1a22a905618.scope - libcontainer container ac9fe73aac53c1c89cb00ac28d6adac183502606f09cfa83c41ae1a22a905618.
Jan 29 10:55:34.332843 systemd[1]: Started cri-containerd-75ebcf23b2ba100f716ef9e829b91b31e856404bca8055156d0316823f2dadf1.scope - libcontainer container 75ebcf23b2ba100f716ef9e829b91b31e856404bca8055156d0316823f2dadf1.
Jan 29 10:55:34.358362 containerd[1449]: time="2025-01-29T10:55:34.357504557Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-nbw8x,Uid:509f1e41-f30b-4a22-a610-92e2146a05f9,Namespace:kube-system,Attempt:0,} returns sandbox id \"75ebcf23b2ba100f716ef9e829b91b31e856404bca8055156d0316823f2dadf1\""
Jan 29 10:55:34.358591 kubelet[2579]: E0129 10:55:34.358558 2579 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 29 10:55:34.361234 containerd[1449]: time="2025-01-29T10:55:34.361193945Z" level=info msg="CreateContainer within sandbox \"75ebcf23b2ba100f716ef9e829b91b31e856404bca8055156d0316823f2dadf1\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}"
Jan 29 10:55:34.367416 containerd[1449]: time="2025-01-29T10:55:34.367366259Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-flannel-ds-4p4p4,Uid:25088b8a-5a73-44c2-844e-572d0c9f160e,Namespace:kube-flannel,Attempt:0,} returns sandbox id \"ac9fe73aac53c1c89cb00ac28d6adac183502606f09cfa83c41ae1a22a905618\""
Jan 29 10:55:34.368256 kubelet[2579]: E0129 10:55:34.367982 2579 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 29 10:55:34.378234 containerd[1449]: time="2025-01-29T10:55:34.378187627Z" level=info msg="PullImage \"docker.io/flannel/flannel-cni-plugin:v1.1.2\""
Jan 29 10:55:34.385940 containerd[1449]: time="2025-01-29T10:55:34.385896558Z" level=info msg="CreateContainer within sandbox \"75ebcf23b2ba100f716ef9e829b91b31e856404bca8055156d0316823f2dadf1\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"7c9954dc826eb6fe80afe033ff8c81e56ab1dc28d7da56ca5e2c2faa19200020\""
Jan 29 10:55:34.386526 containerd[1449]: time="2025-01-29T10:55:34.386386552Z" level=info msg="StartContainer for \"7c9954dc826eb6fe80afe033ff8c81e56ab1dc28d7da56ca5e2c2faa19200020\""
Jan 29 10:55:34.417912 systemd[1]: Started cri-containerd-7c9954dc826eb6fe80afe033ff8c81e56ab1dc28d7da56ca5e2c2faa19200020.scope - libcontainer container 7c9954dc826eb6fe80afe033ff8c81e56ab1dc28d7da56ca5e2c2faa19200020.
Jan 29 10:55:34.454219 containerd[1449]: time="2025-01-29T10:55:34.452528383Z" level=info msg="StartContainer for \"7c9954dc826eb6fe80afe033ff8c81e56ab1dc28d7da56ca5e2c2faa19200020\" returns successfully"
Jan 29 10:55:35.208482 kubelet[2579]: E0129 10:55:35.208174 2579 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 29 10:55:35.401403 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1758645445.mount: Deactivated successfully.
Jan 29 10:55:35.429438 containerd[1449]: time="2025-01-29T10:55:35.429385481Z" level=info msg="ImageCreate event name:\"docker.io/flannel/flannel-cni-plugin:v1.1.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 29 10:55:35.430403 containerd[1449]: time="2025-01-29T10:55:35.430349868Z" level=info msg="stop pulling image docker.io/flannel/flannel-cni-plugin:v1.1.2: active requests=0, bytes read=3673530"
Jan 29 10:55:35.431300 containerd[1449]: time="2025-01-29T10:55:35.431263776Z" level=info msg="ImageCreate event name:\"sha256:b45062ceea496fc421523388cb91166abc7715a15c2e2cbab4e6f8c9d5dc0ab8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 29 10:55:35.433385 containerd[1449]: time="2025-01-29T10:55:35.433355588Z" level=info msg="ImageCreate event name:\"docker.io/flannel/flannel-cni-plugin@sha256:bf4b62b131666d040f35a327d906ee5a3418280b68a88d9b9c7e828057210443\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 29 10:55:35.434703 containerd[1449]: time="2025-01-29T10:55:35.434164537Z" level=info msg="Pulled image \"docker.io/flannel/flannel-cni-plugin:v1.1.2\" with image id \"sha256:b45062ceea496fc421523388cb91166abc7715a15c2e2cbab4e6f8c9d5dc0ab8\", repo tag \"docker.io/flannel/flannel-cni-plugin:v1.1.2\", repo digest \"docker.io/flannel/flannel-cni-plugin@sha256:bf4b62b131666d040f35a327d906ee5a3418280b68a88d9b9c7e828057210443\", size \"3662650\" in 1.055932871s"
Jan 29 10:55:35.434703 containerd[1449]: time="2025-01-29T10:55:35.434192657Z" level=info msg="PullImage \"docker.io/flannel/flannel-cni-plugin:v1.1.2\" returns image reference \"sha256:b45062ceea496fc421523388cb91166abc7715a15c2e2cbab4e6f8c9d5dc0ab8\""
Jan 29 10:55:35.436165 containerd[1449]: time="2025-01-29T10:55:35.436134831Z" level=info msg="CreateContainer within sandbox \"ac9fe73aac53c1c89cb00ac28d6adac183502606f09cfa83c41ae1a22a905618\" for container &ContainerMetadata{Name:install-cni-plugin,Attempt:0,}"
Jan 29 10:55:35.450416 containerd[1449]: time="2025-01-29T10:55:35.450371760Z" level=info msg="CreateContainer within sandbox \"ac9fe73aac53c1c89cb00ac28d6adac183502606f09cfa83c41ae1a22a905618\" for &ContainerMetadata{Name:install-cni-plugin,Attempt:0,} returns container id \"8c441983f9370759e6a355ddffd1aafb436c74a20dedac393af1967889a2ce63\""
Jan 29 10:55:35.451653 containerd[1449]: time="2025-01-29T10:55:35.450906713Z" level=info msg="StartContainer for \"8c441983f9370759e6a355ddffd1aafb436c74a20dedac393af1967889a2ce63\""
Jan 29 10:55:35.469885 systemd[1]: Started cri-containerd-8c441983f9370759e6a355ddffd1aafb436c74a20dedac393af1967889a2ce63.scope - libcontainer container 8c441983f9370759e6a355ddffd1aafb436c74a20dedac393af1967889a2ce63.
Jan 29 10:55:35.491328 containerd[1449]: time="2025-01-29T10:55:35.491289934Z" level=info msg="StartContainer for \"8c441983f9370759e6a355ddffd1aafb436c74a20dedac393af1967889a2ce63\" returns successfully"
Jan 29 10:55:35.496373 systemd[1]: cri-containerd-8c441983f9370759e6a355ddffd1aafb436c74a20dedac393af1967889a2ce63.scope: Deactivated successfully.
Jan 29 10:55:35.533789 containerd[1449]: time="2025-01-29T10:55:35.533708567Z" level=info msg="shim disconnected" id=8c441983f9370759e6a355ddffd1aafb436c74a20dedac393af1967889a2ce63 namespace=k8s.io
Jan 29 10:55:35.534179 containerd[1449]: time="2025-01-29T10:55:35.534009563Z" level=warning msg="cleaning up after shim disconnected" id=8c441983f9370759e6a355ddffd1aafb436c74a20dedac393af1967889a2ce63 namespace=k8s.io
Jan 29 10:55:35.534179 containerd[1449]: time="2025-01-29T10:55:35.534029083Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jan 29 10:55:36.211318 kubelet[2579]: E0129 10:55:36.211068 2579 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 29 10:55:36.212780 containerd[1449]: time="2025-01-29T10:55:36.212735391Z" level=info msg="PullImage \"docker.io/flannel/flannel:v0.22.0\""
Jan 29 10:55:36.225315 kubelet[2579]: I0129 10:55:36.224947 2579 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-nbw8x" podStartSLOduration=3.224928716 podStartE2EDuration="3.224928716s" podCreationTimestamp="2025-01-29 10:55:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-29 10:55:35.217412632 +0000 UTC m=+15.136636661" watchObservedRunningTime="2025-01-29 10:55:36.224928716 +0000 UTC m=+16.144152745"
Jan 29 10:55:37.215632 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3767048309.mount: Deactivated successfully.
Jan 29 10:55:38.423866 containerd[1449]: time="2025-01-29T10:55:38.423812794Z" level=info msg="ImageCreate event name:\"docker.io/flannel/flannel:v0.22.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 29 10:55:38.424260 containerd[1449]: time="2025-01-29T10:55:38.424213069Z" level=info msg="stop pulling image docker.io/flannel/flannel:v0.22.0: active requests=0, bytes read=26874261"
Jan 29 10:55:38.425131 containerd[1449]: time="2025-01-29T10:55:38.425099819Z" level=info msg="ImageCreate event name:\"sha256:b3d1319ea6da12d4a1dd21a923f6a71f942a7b6e2c4763b8a3cca0725fb8aadf\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 29 10:55:38.427945 containerd[1449]: time="2025-01-29T10:55:38.427913106Z" level=info msg="ImageCreate event name:\"docker.io/flannel/flannel@sha256:5f83f1243057458e27249157394e3859cf31cc075354af150d497f2ebc8b54db\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 29 10:55:38.429110 containerd[1449]: time="2025-01-29T10:55:38.429065493Z" level=info msg="Pulled image \"docker.io/flannel/flannel:v0.22.0\" with image id \"sha256:b3d1319ea6da12d4a1dd21a923f6a71f942a7b6e2c4763b8a3cca0725fb8aadf\", repo tag \"docker.io/flannel/flannel:v0.22.0\", repo digest \"docker.io/flannel/flannel@sha256:5f83f1243057458e27249157394e3859cf31cc075354af150d497f2ebc8b54db\", size \"26863435\" in 2.216279622s"
Jan 29 10:55:38.429110 containerd[1449]: time="2025-01-29T10:55:38.429110213Z" level=info msg="PullImage \"docker.io/flannel/flannel:v0.22.0\" returns image reference \"sha256:b3d1319ea6da12d4a1dd21a923f6a71f942a7b6e2c4763b8a3cca0725fb8aadf\""
Jan 29 10:55:38.434475 containerd[1449]: time="2025-01-29T10:55:38.434432831Z" level=info msg="CreateContainer within sandbox \"ac9fe73aac53c1c89cb00ac28d6adac183502606f09cfa83c41ae1a22a905618\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}"
Jan 29 10:55:38.447026 containerd[1449]: time="2025-01-29T10:55:38.446879607Z" level=info msg="CreateContainer within sandbox \"ac9fe73aac53c1c89cb00ac28d6adac183502606f09cfa83c41ae1a22a905618\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"4aafe9a521c4c1935da130cd8b931fda22f562e940f5472c02d584b6b9674d78\""
Jan 29 10:55:38.447758 containerd[1449]: time="2025-01-29T10:55:38.447271883Z" level=info msg="StartContainer for \"4aafe9a521c4c1935da130cd8b931fda22f562e940f5472c02d584b6b9674d78\""
Jan 29 10:55:38.474901 systemd[1]: Started cri-containerd-4aafe9a521c4c1935da130cd8b931fda22f562e940f5472c02d584b6b9674d78.scope - libcontainer container 4aafe9a521c4c1935da130cd8b931fda22f562e940f5472c02d584b6b9674d78.
Jan 29 10:55:38.497316 systemd[1]: cri-containerd-4aafe9a521c4c1935da130cd8b931fda22f562e940f5472c02d584b6b9674d78.scope: Deactivated successfully.
Jan 29 10:55:38.497628 containerd[1449]: time="2025-01-29T10:55:38.497578861Z" level=info msg="StartContainer for \"4aafe9a521c4c1935da130cd8b931fda22f562e940f5472c02d584b6b9674d78\" returns successfully"
Jan 29 10:55:38.513711 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-4aafe9a521c4c1935da130cd8b931fda22f562e940f5472c02d584b6b9674d78-rootfs.mount: Deactivated successfully.
Jan 29 10:55:38.578776 kubelet[2579]: I0129 10:55:38.578538 2579 kubelet_node_status.go:497] "Fast updating node status as it just became ready" Jan 29 10:55:38.629296 containerd[1449]: time="2025-01-29T10:55:38.629230459Z" level=info msg="shim disconnected" id=4aafe9a521c4c1935da130cd8b931fda22f562e940f5472c02d584b6b9674d78 namespace=k8s.io Jan 29 10:55:38.629296 containerd[1449]: time="2025-01-29T10:55:38.629282498Z" level=warning msg="cleaning up after shim disconnected" id=4aafe9a521c4c1935da130cd8b931fda22f562e940f5472c02d584b6b9674d78 namespace=k8s.io Jan 29 10:55:38.629296 containerd[1449]: time="2025-01-29T10:55:38.629290578Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 29 10:55:38.631072 kubelet[2579]: I0129 10:55:38.630616 2579 topology_manager.go:215] "Topology Admit Handler" podUID="c2048694-0846-4ce3-a93f-48ae07a9700c" podNamespace="kube-system" podName="coredns-7db6d8ff4d-wxkfj" Jan 29 10:55:38.631072 kubelet[2579]: I0129 10:55:38.630813 2579 topology_manager.go:215] "Topology Admit Handler" podUID="d552a70f-1b82-43d6-9004-9ffc86a5ef9c" podNamespace="kube-system" podName="coredns-7db6d8ff4d-45fzm" Jan 29 10:55:38.646001 systemd[1]: Created slice kubepods-burstable-podc2048694_0846_4ce3_a93f_48ae07a9700c.slice - libcontainer container kubepods-burstable-podc2048694_0846_4ce3_a93f_48ae07a9700c.slice. Jan 29 10:55:38.650560 systemd[1]: Created slice kubepods-burstable-podd552a70f_1b82_43d6_9004_9ffc86a5ef9c.slice - libcontainer container kubepods-burstable-podd552a70f_1b82_43d6_9004_9ffc86a5ef9c.slice. 
Jan 29 10:55:38.763759 kubelet[2579]: I0129 10:55:38.763595 2579 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/d552a70f-1b82-43d6-9004-9ffc86a5ef9c-config-volume\") pod \"coredns-7db6d8ff4d-45fzm\" (UID: \"d552a70f-1b82-43d6-9004-9ffc86a5ef9c\") " pod="kube-system/coredns-7db6d8ff4d-45fzm" Jan 29 10:55:38.763759 kubelet[2579]: I0129 10:55:38.763637 2579 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ksg2l\" (UniqueName: \"kubernetes.io/projected/c2048694-0846-4ce3-a93f-48ae07a9700c-kube-api-access-ksg2l\") pod \"coredns-7db6d8ff4d-wxkfj\" (UID: \"c2048694-0846-4ce3-a93f-48ae07a9700c\") " pod="kube-system/coredns-7db6d8ff4d-wxkfj" Jan 29 10:55:38.763759 kubelet[2579]: I0129 10:55:38.763659 2579 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/c2048694-0846-4ce3-a93f-48ae07a9700c-config-volume\") pod \"coredns-7db6d8ff4d-wxkfj\" (UID: \"c2048694-0846-4ce3-a93f-48ae07a9700c\") " pod="kube-system/coredns-7db6d8ff4d-wxkfj" Jan 29 10:55:38.763759 kubelet[2579]: I0129 10:55:38.763687 2579 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xd7tz\" (UniqueName: \"kubernetes.io/projected/d552a70f-1b82-43d6-9004-9ffc86a5ef9c-kube-api-access-xd7tz\") pod \"coredns-7db6d8ff4d-45fzm\" (UID: \"d552a70f-1b82-43d6-9004-9ffc86a5ef9c\") " pod="kube-system/coredns-7db6d8ff4d-45fzm" Jan 29 10:55:38.949941 kubelet[2579]: E0129 10:55:38.949757 2579 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 10:55:38.951560 containerd[1449]: time="2025-01-29T10:55:38.951517973Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:coredns-7db6d8ff4d-wxkfj,Uid:c2048694-0846-4ce3-a93f-48ae07a9700c,Namespace:kube-system,Attempt:0,}" Jan 29 10:55:38.954170 kubelet[2579]: E0129 10:55:38.953987 2579 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 10:55:38.954443 containerd[1449]: time="2025-01-29T10:55:38.954410179Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-45fzm,Uid:d552a70f-1b82-43d6-9004-9ffc86a5ef9c,Namespace:kube-system,Attempt:0,}" Jan 29 10:55:39.036435 containerd[1449]: time="2025-01-29T10:55:39.036283411Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-45fzm,Uid:d552a70f-1b82-43d6-9004-9ffc86a5ef9c,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"e2436d3f8b2a5b54710873de4eda25b583aedc3b8acebf03555a4f489b3c4315\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory" Jan 29 10:55:39.036567 kubelet[2579]: E0129 10:55:39.036529 2579 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e2436d3f8b2a5b54710873de4eda25b583aedc3b8acebf03555a4f489b3c4315\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory" Jan 29 10:55:39.036608 kubelet[2579]: E0129 10:55:39.036598 2579 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e2436d3f8b2a5b54710873de4eda25b583aedc3b8acebf03555a4f489b3c4315\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory" pod="kube-system/coredns-7db6d8ff4d-45fzm" Jan 29 10:55:39.036634 kubelet[2579]: E0129 10:55:39.036618 2579 
kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e2436d3f8b2a5b54710873de4eda25b583aedc3b8acebf03555a4f489b3c4315\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory" pod="kube-system/coredns-7db6d8ff4d-45fzm" Jan 29 10:55:39.036687 kubelet[2579]: E0129 10:55:39.036655 2579 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7db6d8ff4d-45fzm_kube-system(d552a70f-1b82-43d6-9004-9ffc86a5ef9c)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7db6d8ff4d-45fzm_kube-system(d552a70f-1b82-43d6-9004-9ffc86a5ef9c)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"e2436d3f8b2a5b54710873de4eda25b583aedc3b8acebf03555a4f489b3c4315\\\": plugin type=\\\"flannel\\\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory\"" pod="kube-system/coredns-7db6d8ff4d-45fzm" podUID="d552a70f-1b82-43d6-9004-9ffc86a5ef9c" Jan 29 10:55:39.037444 containerd[1449]: time="2025-01-29T10:55:39.037379079Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-wxkfj,Uid:c2048694-0846-4ce3-a93f-48ae07a9700c,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"f6a96715d681d6891906fc51c49b0dd6fca64c46bc7cbb505f4c8084c4207cdb\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory" Jan 29 10:55:39.037617 kubelet[2579]: E0129 10:55:39.037590 2579 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f6a96715d681d6891906fc51c49b0dd6fca64c46bc7cbb505f4c8084c4207cdb\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or 
directory" Jan 29 10:55:39.037653 kubelet[2579]: E0129 10:55:39.037634 2579 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f6a96715d681d6891906fc51c49b0dd6fca64c46bc7cbb505f4c8084c4207cdb\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory" pod="kube-system/coredns-7db6d8ff4d-wxkfj" Jan 29 10:55:39.037700 kubelet[2579]: E0129 10:55:39.037658 2579 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f6a96715d681d6891906fc51c49b0dd6fca64c46bc7cbb505f4c8084c4207cdb\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory" pod="kube-system/coredns-7db6d8ff4d-wxkfj" Jan 29 10:55:39.037791 kubelet[2579]: E0129 10:55:39.037699 2579 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7db6d8ff4d-wxkfj_kube-system(c2048694-0846-4ce3-a93f-48ae07a9700c)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7db6d8ff4d-wxkfj_kube-system(c2048694-0846-4ce3-a93f-48ae07a9700c)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"f6a96715d681d6891906fc51c49b0dd6fca64c46bc7cbb505f4c8084c4207cdb\\\": plugin type=\\\"flannel\\\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory\"" pod="kube-system/coredns-7db6d8ff4d-wxkfj" podUID="c2048694-0846-4ce3-a93f-48ae07a9700c" Jan 29 10:55:39.225820 kubelet[2579]: E0129 10:55:39.218489 2579 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 10:55:39.227711 containerd[1449]: time="2025-01-29T10:55:39.220479498Z" level=info msg="CreateContainer within sandbox 
\"ac9fe73aac53c1c89cb00ac28d6adac183502606f09cfa83c41ae1a22a905618\" for container &ContainerMetadata{Name:kube-flannel,Attempt:0,}" Jan 29 10:55:39.235113 containerd[1449]: time="2025-01-29T10:55:39.235068577Z" level=info msg="CreateContainer within sandbox \"ac9fe73aac53c1c89cb00ac28d6adac183502606f09cfa83c41ae1a22a905618\" for &ContainerMetadata{Name:kube-flannel,Attempt:0,} returns container id \"e6da5a7e028636611a3ec39c7552b0a4fb03fdcb7b133ce1c3c7919031bc065b\"" Jan 29 10:55:39.235627 containerd[1449]: time="2025-01-29T10:55:39.235599771Z" level=info msg="StartContainer for \"e6da5a7e028636611a3ec39c7552b0a4fb03fdcb7b133ce1c3c7919031bc065b\"" Jan 29 10:55:39.262891 systemd[1]: Started cri-containerd-e6da5a7e028636611a3ec39c7552b0a4fb03fdcb7b133ce1c3c7919031bc065b.scope - libcontainer container e6da5a7e028636611a3ec39c7552b0a4fb03fdcb7b133ce1c3c7919031bc065b. Jan 29 10:55:39.285810 containerd[1449]: time="2025-01-29T10:55:39.285766618Z" level=info msg="StartContainer for \"e6da5a7e028636611a3ec39c7552b0a4fb03fdcb7b133ce1c3c7919031bc065b\" returns successfully" Jan 29 10:55:40.223395 kubelet[2579]: E0129 10:55:40.223363 2579 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 10:55:40.233423 kubelet[2579]: I0129 10:55:40.233372 2579 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-flannel/kube-flannel-ds-4p4p4" podStartSLOduration=3.170693168 podStartE2EDuration="7.233354875s" podCreationTimestamp="2025-01-29 10:55:33 +0000 UTC" firstStartedPulling="2025-01-29 10:55:34.369077875 +0000 UTC m=+14.288301904" lastFinishedPulling="2025-01-29 10:55:38.431739622 +0000 UTC m=+18.350963611" observedRunningTime="2025-01-29 10:55:40.232439524 +0000 UTC m=+20.151663673" watchObservedRunningTime="2025-01-29 10:55:40.233354875 +0000 UTC m=+20.152578904" Jan 29 10:55:40.388519 systemd-networkd[1380]: flannel.1: Link UP Jan 
29 10:55:40.388527 systemd-networkd[1380]: flannel.1: Gained carrier Jan 29 10:55:41.223681 kubelet[2579]: E0129 10:55:41.223630 2579 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 10:55:42.196833 systemd-networkd[1380]: flannel.1: Gained IPv6LL Jan 29 10:55:45.055280 systemd[1]: Started sshd@5-10.0.0.63:22-10.0.0.1:41654.service - OpenSSH per-connection server daemon (10.0.0.1:41654). Jan 29 10:55:45.099936 sshd[3220]: Accepted publickey for core from 10.0.0.1 port 41654 ssh2: RSA SHA256:sI8AI+xrXTR4mEEiShqASd95E8XwBKuT8obfVDWOoDw Jan 29 10:55:45.101213 sshd-session[3220]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 10:55:45.105566 systemd-logind[1428]: New session 6 of user core. Jan 29 10:55:45.116879 systemd[1]: Started session-6.scope - Session 6 of User core. Jan 29 10:55:45.229780 sshd[3222]: Connection closed by 10.0.0.1 port 41654 Jan 29 10:55:45.230104 sshd-session[3220]: pam_unix(sshd:session): session closed for user core Jan 29 10:55:45.233110 systemd[1]: sshd@5-10.0.0.63:22-10.0.0.1:41654.service: Deactivated successfully. Jan 29 10:55:45.235186 systemd[1]: session-6.scope: Deactivated successfully. Jan 29 10:55:45.236122 systemd-logind[1428]: Session 6 logged out. Waiting for processes to exit. Jan 29 10:55:45.236954 systemd-logind[1428]: Removed session 6. 
Jan 29 10:55:49.158421 kubelet[2579]: E0129 10:55:49.158373 2579 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 10:55:49.158871 containerd[1449]: time="2025-01-29T10:55:49.158762952Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-wxkfj,Uid:c2048694-0846-4ce3-a93f-48ae07a9700c,Namespace:kube-system,Attempt:0,}" Jan 29 10:55:49.195175 systemd-networkd[1380]: cni0: Link UP Jan 29 10:55:49.195185 systemd-networkd[1380]: cni0: Gained carrier Jan 29 10:55:49.200472 systemd-networkd[1380]: cni0: Lost carrier Jan 29 10:55:49.208670 kernel: cni0: port 1(veth0732306e) entered blocking state Jan 29 10:55:49.208771 kernel: cni0: port 1(veth0732306e) entered disabled state Jan 29 10:55:49.208791 kernel: veth0732306e: entered allmulticast mode Jan 29 10:55:49.208804 kernel: veth0732306e: entered promiscuous mode Jan 29 10:55:49.208816 kernel: cni0: port 1(veth0732306e) entered blocking state Jan 29 10:55:49.205228 systemd-networkd[1380]: veth0732306e: Link UP Jan 29 10:55:49.209484 kernel: cni0: port 1(veth0732306e) entered forwarding state Jan 29 10:55:49.214741 kernel: cni0: port 1(veth0732306e) entered disabled state Jan 29 10:55:49.224781 kernel: cni0: port 1(veth0732306e) entered blocking state Jan 29 10:55:49.224896 kernel: cni0: port 1(veth0732306e) entered forwarding state Jan 29 10:55:49.224900 systemd-networkd[1380]: veth0732306e: Gained carrier Jan 29 10:55:49.225224 systemd-networkd[1380]: cni0: Gained carrier Jan 29 10:55:49.228578 containerd[1449]: map[string]interface {}{"cniVersion":"0.3.1", "hairpinMode":true, "ipMasq":false, "ipam":map[string]interface {}{"ranges":[][]map[string]interface {}{[]map[string]interface {}{map[string]interface {}{"subnet":"192.168.0.0/24"}}}, "routes":[]types.Route{types.Route{Dst:net.IPNet{IP:net.IP{0xc0, 0xa8, 0x0, 0x0}, Mask:net.IPMask{0xff, 0xff, 0x80, 0x0}}, 
GW:net.IP(nil)}}, "type":"host-local"}, "isDefaultGateway":true, "isGateway":true, "mtu":(*uint)(0x40001148e8), "name":"cbr0", "type":"bridge"} Jan 29 10:55:49.228578 containerd[1449]: delegateAdd: netconf sent to delegate plugin: Jan 29 10:55:49.250299 containerd[1449]: {"cniVersion":"0.3.1","hairpinMode":true,"ipMasq":false,"ipam":{"ranges":[[{"subnet":"192.168.0.0/24"}]],"routes":[{"dst":"192.168.0.0/17"}],"type":"host-local"},"isDefaultGateway":true,"isGateway":true,"mtu":1450,"name":"cbr0","type":"bridge"}time="2025-01-29T10:55:49.250208606Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 29 10:55:49.250299 containerd[1449]: time="2025-01-29T10:55:49.250257765Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 29 10:55:49.250299 containerd[1449]: time="2025-01-29T10:55:49.250268725Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 10:55:49.250481 containerd[1449]: time="2025-01-29T10:55:49.250334885Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 10:55:49.272869 systemd[1]: Started cri-containerd-4933cab87581ae2944f3a6511d05e95e86c964c5266f0e8240ae2c711df79624.scope - libcontainer container 4933cab87581ae2944f3a6511d05e95e86c964c5266f0e8240ae2c711df79624. 
Jan 29 10:55:49.282997 systemd-resolved[1312]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jan 29 10:55:49.301331 containerd[1449]: time="2025-01-29T10:55:49.301288633Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-wxkfj,Uid:c2048694-0846-4ce3-a93f-48ae07a9700c,Namespace:kube-system,Attempt:0,} returns sandbox id \"4933cab87581ae2944f3a6511d05e95e86c964c5266f0e8240ae2c711df79624\"" Jan 29 10:55:49.302051 kubelet[2579]: E0129 10:55:49.302026 2579 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 10:55:49.305018 containerd[1449]: time="2025-01-29T10:55:49.304985607Z" level=info msg="CreateContainer within sandbox \"4933cab87581ae2944f3a6511d05e95e86c964c5266f0e8240ae2c711df79624\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jan 29 10:55:49.315364 containerd[1449]: time="2025-01-29T10:55:49.315317491Z" level=info msg="CreateContainer within sandbox \"4933cab87581ae2944f3a6511d05e95e86c964c5266f0e8240ae2c711df79624\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"ebc7aea0a0cdae014ac35317bbd18ee2ba246e7b3751756c623b7a2dde35a2af\"" Jan 29 10:55:49.316034 containerd[1449]: time="2025-01-29T10:55:49.315881167Z" level=info msg="StartContainer for \"ebc7aea0a0cdae014ac35317bbd18ee2ba246e7b3751756c623b7a2dde35a2af\"" Jan 29 10:55:49.340899 systemd[1]: Started cri-containerd-ebc7aea0a0cdae014ac35317bbd18ee2ba246e7b3751756c623b7a2dde35a2af.scope - libcontainer container ebc7aea0a0cdae014ac35317bbd18ee2ba246e7b3751756c623b7a2dde35a2af. 
Jan 29 10:55:49.367767 containerd[1449]: time="2025-01-29T10:55:49.366430399Z" level=info msg="StartContainer for \"ebc7aea0a0cdae014ac35317bbd18ee2ba246e7b3751756c623b7a2dde35a2af\" returns successfully" Jan 29 10:55:50.259045 kubelet[2579]: E0129 10:55:50.256312 2579 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 10:55:50.262209 systemd[1]: Started sshd@6-10.0.0.63:22-10.0.0.1:41660.service - OpenSSH per-connection server daemon (10.0.0.1:41660). Jan 29 10:55:50.283407 kubelet[2579]: I0129 10:55:50.272541 2579 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7db6d8ff4d-wxkfj" podStartSLOduration=16.272526108 podStartE2EDuration="16.272526108s" podCreationTimestamp="2025-01-29 10:55:34 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-29 10:55:50.269906966 +0000 UTC m=+30.189130995" watchObservedRunningTime="2025-01-29 10:55:50.272526108 +0000 UTC m=+30.191750137" Jan 29 10:55:50.332116 sshd[3380]: Accepted publickey for core from 10.0.0.1 port 41660 ssh2: RSA SHA256:sI8AI+xrXTR4mEEiShqASd95E8XwBKuT8obfVDWOoDw Jan 29 10:55:50.333628 sshd-session[3380]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 10:55:50.338535 systemd-logind[1428]: New session 7 of user core. Jan 29 10:55:50.347919 systemd[1]: Started session-7.scope - Session 7 of User core. Jan 29 10:55:50.388888 systemd-networkd[1380]: cni0: Gained IPv6LL Jan 29 10:55:50.466116 sshd[3388]: Connection closed by 10.0.0.1 port 41660 Jan 29 10:55:50.466464 sshd-session[3380]: pam_unix(sshd:session): session closed for user core Jan 29 10:55:50.470945 systemd[1]: sshd@6-10.0.0.63:22-10.0.0.1:41660.service: Deactivated successfully. Jan 29 10:55:50.472908 systemd[1]: session-7.scope: Deactivated successfully. 
Jan 29 10:55:50.473670 systemd-logind[1428]: Session 7 logged out. Waiting for processes to exit. Jan 29 10:55:50.474812 systemd-logind[1428]: Removed session 7. Jan 29 10:55:50.900868 systemd-networkd[1380]: veth0732306e: Gained IPv6LL Jan 29 10:55:51.159049 kubelet[2579]: E0129 10:55:51.158823 2579 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 10:55:51.160453 containerd[1449]: time="2025-01-29T10:55:51.160141269Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-45fzm,Uid:d552a70f-1b82-43d6-9004-9ffc86a5ef9c,Namespace:kube-system,Attempt:0,}" Jan 29 10:55:51.179084 systemd-networkd[1380]: veth79bc05d7: Link UP Jan 29 10:55:51.180951 kernel: cni0: port 2(veth79bc05d7) entered blocking state Jan 29 10:55:51.181005 kernel: cni0: port 2(veth79bc05d7) entered disabled state Jan 29 10:55:51.181037 kernel: veth79bc05d7: entered allmulticast mode Jan 29 10:55:51.182217 kernel: veth79bc05d7: entered promiscuous mode Jan 29 10:55:51.187276 kernel: cni0: port 2(veth79bc05d7) entered blocking state Jan 29 10:55:51.187336 kernel: cni0: port 2(veth79bc05d7) entered forwarding state Jan 29 10:55:51.187277 systemd-networkd[1380]: veth79bc05d7: Gained carrier Jan 29 10:55:51.189025 containerd[1449]: map[string]interface {}{"cniVersion":"0.3.1", "hairpinMode":true, "ipMasq":false, "ipam":map[string]interface {}{"ranges":[][]map[string]interface {}{[]map[string]interface {}{map[string]interface {}{"subnet":"192.168.0.0/24"}}}, "routes":[]types.Route{types.Route{Dst:net.IPNet{IP:net.IP{0xc0, 0xa8, 0x0, 0x0}, Mask:net.IPMask{0xff, 0xff, 0x80, 0x0}}, GW:net.IP(nil)}}, "type":"host-local"}, "isDefaultGateway":true, "isGateway":true, "mtu":(*uint)(0x40000a68e8), "name":"cbr0", "type":"bridge"} Jan 29 10:55:51.189025 containerd[1449]: delegateAdd: netconf sent to delegate plugin: Jan 29 10:55:51.203292 containerd[1449]: 
{"cniVersion":"0.3.1","hairpinMode":true,"ipMasq":false,"ipam":{"ranges":[[{"subnet":"192.168.0.0/24"}]],"routes":[{"dst":"192.168.0.0/17"}],"type":"host-local"},"isDefaultGateway":true,"isGateway":true,"mtu":1450,"name":"cbr0","type":"bridge"}time="2025-01-29T10:55:51.203206777Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 29 10:55:51.203657 containerd[1449]: time="2025-01-29T10:55:51.203306296Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 29 10:55:51.203657 containerd[1449]: time="2025-01-29T10:55:51.203645494Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 10:55:51.203778 containerd[1449]: time="2025-01-29T10:55:51.203750893Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 10:55:51.227953 systemd[1]: Started cri-containerd-79323a0b1b291f64c9727724567c7614709a16ca8cfdba9400308d740e2c14a8.scope - libcontainer container 79323a0b1b291f64c9727724567c7614709a16ca8cfdba9400308d740e2c14a8. 
Jan 29 10:55:51.237115 systemd-resolved[1312]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jan 29 10:55:51.255423 containerd[1449]: time="2025-01-29T10:55:51.255385823Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-45fzm,Uid:d552a70f-1b82-43d6-9004-9ffc86a5ef9c,Namespace:kube-system,Attempt:0,} returns sandbox id \"79323a0b1b291f64c9727724567c7614709a16ca8cfdba9400308d740e2c14a8\"" Jan 29 10:55:51.256289 kubelet[2579]: E0129 10:55:51.256265 2579 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 10:55:51.257953 containerd[1449]: time="2025-01-29T10:55:51.257928965Z" level=info msg="CreateContainer within sandbox \"79323a0b1b291f64c9727724567c7614709a16ca8cfdba9400308d740e2c14a8\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jan 29 10:55:51.273727 kubelet[2579]: E0129 10:55:51.273680 2579 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 10:55:51.278533 containerd[1449]: time="2025-01-29T10:55:51.278501466Z" level=info msg="CreateContainer within sandbox \"79323a0b1b291f64c9727724567c7614709a16ca8cfdba9400308d740e2c14a8\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"81115cf3ede831c4f02691921d5cf368a72c5d479120506f0ebb6389e52ba3e9\"" Jan 29 10:55:51.278938 containerd[1449]: time="2025-01-29T10:55:51.278911423Z" level=info msg="StartContainer for \"81115cf3ede831c4f02691921d5cf368a72c5d479120506f0ebb6389e52ba3e9\"" Jan 29 10:55:51.303870 systemd[1]: Started cri-containerd-81115cf3ede831c4f02691921d5cf368a72c5d479120506f0ebb6389e52ba3e9.scope - libcontainer container 81115cf3ede831c4f02691921d5cf368a72c5d479120506f0ebb6389e52ba3e9. 
Jan 29 10:55:51.333019 containerd[1449]: time="2025-01-29T10:55:51.332967776Z" level=info msg="StartContainer for \"81115cf3ede831c4f02691921d5cf368a72c5d479120506f0ebb6389e52ba3e9\" returns successfully" Jan 29 10:55:52.276483 kubelet[2579]: E0129 10:55:52.276385 2579 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 10:55:52.276483 kubelet[2579]: E0129 10:55:52.276394 2579 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 10:55:52.288502 kubelet[2579]: I0129 10:55:52.287325 2579 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7db6d8ff4d-45fzm" podStartSLOduration=18.287311206 podStartE2EDuration="18.287311206s" podCreationTimestamp="2025-01-29 10:55:34 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-29 10:55:52.287200167 +0000 UTC m=+32.206424196" watchObservedRunningTime="2025-01-29 10:55:52.287311206 +0000 UTC m=+32.206535235" Jan 29 10:55:52.308944 systemd-networkd[1380]: veth79bc05d7: Gained IPv6LL Jan 29 10:55:53.278361 kubelet[2579]: E0129 10:55:53.278325 2579 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 10:55:54.288913 kubelet[2579]: E0129 10:55:54.288881 2579 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 10:55:55.290570 kubelet[2579]: E0129 10:55:55.290535 2579 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line 
is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 29 10:55:55.481092 systemd[1]: Started sshd@7-10.0.0.63:22-10.0.0.1:42026.service - OpenSSH per-connection server daemon (10.0.0.1:42026).
Jan 29 10:55:55.528304 sshd[3540]: Accepted publickey for core from 10.0.0.1 port 42026 ssh2: RSA SHA256:sI8AI+xrXTR4mEEiShqASd95E8XwBKuT8obfVDWOoDw
Jan 29 10:55:55.529582 sshd-session[3540]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 29 10:55:55.533500 systemd-logind[1428]: New session 8 of user core.
Jan 29 10:55:55.542924 systemd[1]: Started session-8.scope - Session 8 of User core.
Jan 29 10:55:55.655484 sshd[3557]: Connection closed by 10.0.0.1 port 42026
Jan 29 10:55:55.656022 sshd-session[3540]: pam_unix(sshd:session): session closed for user core
Jan 29 10:55:55.664020 systemd[1]: sshd@7-10.0.0.63:22-10.0.0.1:42026.service: Deactivated successfully.
Jan 29 10:55:55.666160 systemd[1]: session-8.scope: Deactivated successfully.
Jan 29 10:55:55.667592 systemd-logind[1428]: Session 8 logged out. Waiting for processes to exit.
Jan 29 10:55:55.670886 systemd[1]: Started sshd@8-10.0.0.63:22-10.0.0.1:42040.service - OpenSSH per-connection server daemon (10.0.0.1:42040).
Jan 29 10:55:55.672085 systemd-logind[1428]: Removed session 8.
Jan 29 10:55:55.713847 sshd[3573]: Accepted publickey for core from 10.0.0.1 port 42040 ssh2: RSA SHA256:sI8AI+xrXTR4mEEiShqASd95E8XwBKuT8obfVDWOoDw
Jan 29 10:55:55.714947 sshd-session[3573]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 29 10:55:55.718450 systemd-logind[1428]: New session 9 of user core.
Jan 29 10:55:55.725992 systemd[1]: Started session-9.scope - Session 9 of User core.
Jan 29 10:55:55.871735 sshd[3575]: Connection closed by 10.0.0.1 port 42040
Jan 29 10:55:55.872093 sshd-session[3573]: pam_unix(sshd:session): session closed for user core
Jan 29 10:55:55.883541 systemd[1]: sshd@8-10.0.0.63:22-10.0.0.1:42040.service: Deactivated successfully.
Jan 29 10:55:55.886800 systemd[1]: session-9.scope: Deactivated successfully.
Jan 29 10:55:55.889925 systemd-logind[1428]: Session 9 logged out. Waiting for processes to exit.
Jan 29 10:55:55.902103 systemd[1]: Started sshd@9-10.0.0.63:22-10.0.0.1:42048.service - OpenSSH per-connection server daemon (10.0.0.1:42048).
Jan 29 10:55:55.904189 systemd-logind[1428]: Removed session 9.
Jan 29 10:55:55.943466 sshd[3586]: Accepted publickey for core from 10.0.0.1 port 42048 ssh2: RSA SHA256:sI8AI+xrXTR4mEEiShqASd95E8XwBKuT8obfVDWOoDw
Jan 29 10:55:55.944698 sshd-session[3586]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 29 10:55:55.951769 systemd-logind[1428]: New session 10 of user core.
Jan 29 10:55:55.960884 systemd[1]: Started session-10.scope - Session 10 of User core.
Jan 29 10:56:56.077119 sshd[3588]: Connection closed by 10.0.0.1 port 42048
Jan 29 10:55:56.077693 sshd-session[3586]: pam_unix(sshd:session): session closed for user core
Jan 29 10:55:56.081037 systemd[1]: sshd@9-10.0.0.63:22-10.0.0.1:42048.service: Deactivated successfully.
Jan 29 10:55:56.083285 systemd[1]: session-10.scope: Deactivated successfully.
Jan 29 10:55:56.084186 systemd-logind[1428]: Session 10 logged out. Waiting for processes to exit.
Jan 29 10:55:56.085173 systemd-logind[1428]: Removed session 10.
Jan 29 10:56:01.088777 systemd[1]: Started sshd@10-10.0.0.63:22-10.0.0.1:42058.service - OpenSSH per-connection server daemon (10.0.0.1:42058).
Jan 29 10:56:01.132367 sshd[3623]: Accepted publickey for core from 10.0.0.1 port 42058 ssh2: RSA SHA256:sI8AI+xrXTR4mEEiShqASd95E8XwBKuT8obfVDWOoDw
Jan 29 10:56:01.133536 sshd-session[3623]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 29 10:56:01.138396 systemd-logind[1428]: New session 11 of user core.
Jan 29 10:56:01.156905 systemd[1]: Started session-11.scope - Session 11 of User core.
Jan 29 10:56:01.265188 sshd[3625]: Connection closed by 10.0.0.1 port 42058
Jan 29 10:56:01.265590 sshd-session[3623]: pam_unix(sshd:session): session closed for user core
Jan 29 10:56:01.277037 systemd[1]: sshd@10-10.0.0.63:22-10.0.0.1:42058.service: Deactivated successfully.
Jan 29 10:56:01.279271 systemd[1]: session-11.scope: Deactivated successfully.
Jan 29 10:56:01.280776 systemd-logind[1428]: Session 11 logged out. Waiting for processes to exit.
Jan 29 10:56:01.287996 systemd[1]: Started sshd@11-10.0.0.63:22-10.0.0.1:42066.service - OpenSSH per-connection server daemon (10.0.0.1:42066).
Jan 29 10:56:01.289089 systemd-logind[1428]: Removed session 11.
Jan 29 10:56:01.330391 sshd[3638]: Accepted publickey for core from 10.0.0.1 port 42066 ssh2: RSA SHA256:sI8AI+xrXTR4mEEiShqASd95E8XwBKuT8obfVDWOoDw
Jan 29 10:56:01.331968 sshd-session[3638]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 29 10:56:01.335936 systemd-logind[1428]: New session 12 of user core.
Jan 29 10:56:01.345882 systemd[1]: Started session-12.scope - Session 12 of User core.
Jan 29 10:56:01.557379 sshd[3640]: Connection closed by 10.0.0.1 port 42066
Jan 29 10:56:01.557754 sshd-session[3638]: pam_unix(sshd:session): session closed for user core
Jan 29 10:56:01.564151 systemd[1]: sshd@11-10.0.0.63:22-10.0.0.1:42066.service: Deactivated successfully.
Jan 29 10:56:01.565629 systemd[1]: session-12.scope: Deactivated successfully.
Jan 29 10:56:01.569065 systemd-logind[1428]: Session 12 logged out. Waiting for processes to exit.
Jan 29 10:56:01.575135 systemd[1]: Started sshd@12-10.0.0.63:22-10.0.0.1:42080.service - OpenSSH per-connection server daemon (10.0.0.1:42080).
Jan 29 10:56:01.577157 systemd-logind[1428]: Removed session 12.
Jan 29 10:56:01.615756 sshd[3651]: Accepted publickey for core from 10.0.0.1 port 42080 ssh2: RSA SHA256:sI8AI+xrXTR4mEEiShqASd95E8XwBKuT8obfVDWOoDw
Jan 29 10:56:01.617137 sshd-session[3651]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 29 10:56:01.621340 systemd-logind[1428]: New session 13 of user core.
Jan 29 10:56:01.635892 systemd[1]: Started session-13.scope - Session 13 of User core.
Jan 29 10:56:02.823203 sshd[3653]: Connection closed by 10.0.0.1 port 42080
Jan 29 10:56:02.824821 sshd-session[3651]: pam_unix(sshd:session): session closed for user core
Jan 29 10:56:02.835173 systemd[1]: sshd@12-10.0.0.63:22-10.0.0.1:42080.service: Deactivated successfully.
Jan 29 10:56:02.837817 systemd[1]: session-13.scope: Deactivated successfully.
Jan 29 10:56:02.841039 systemd-logind[1428]: Session 13 logged out. Waiting for processes to exit.
Jan 29 10:56:02.855423 systemd[1]: Started sshd@13-10.0.0.63:22-10.0.0.1:38322.service - OpenSSH per-connection server daemon (10.0.0.1:38322).
Jan 29 10:56:02.858071 systemd-logind[1428]: Removed session 13.
Jan 29 10:56:02.895433 sshd[3673]: Accepted publickey for core from 10.0.0.1 port 38322 ssh2: RSA SHA256:sI8AI+xrXTR4mEEiShqASd95E8XwBKuT8obfVDWOoDw
Jan 29 10:56:02.896791 sshd-session[3673]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 29 10:56:02.900786 systemd-logind[1428]: New session 14 of user core.
Jan 29 10:56:02.906864 systemd[1]: Started session-14.scope - Session 14 of User core.
Jan 29 10:56:03.113879 sshd[3675]: Connection closed by 10.0.0.1 port 38322
Jan 29 10:56:03.113623 sshd-session[3673]: pam_unix(sshd:session): session closed for user core
Jan 29 10:56:03.123215 systemd[1]: sshd@13-10.0.0.63:22-10.0.0.1:38322.service: Deactivated successfully.
Jan 29 10:56:03.124597 systemd[1]: session-14.scope: Deactivated successfully.
Jan 29 10:56:03.126026 systemd-logind[1428]: Session 14 logged out. Waiting for processes to exit.
Jan 29 10:56:03.128187 systemd[1]: Started sshd@14-10.0.0.63:22-10.0.0.1:38334.service - OpenSSH per-connection server daemon (10.0.0.1:38334).
Jan 29 10:56:03.129160 systemd-logind[1428]: Removed session 14.
Jan 29 10:56:03.171504 sshd[3686]: Accepted publickey for core from 10.0.0.1 port 38334 ssh2: RSA SHA256:sI8AI+xrXTR4mEEiShqASd95E8XwBKuT8obfVDWOoDw
Jan 29 10:56:03.172849 sshd-session[3686]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 29 10:56:03.176530 systemd-logind[1428]: New session 15 of user core.
Jan 29 10:56:03.185903 systemd[1]: Started session-15.scope - Session 15 of User core.
Jan 29 10:56:03.291445 sshd[3688]: Connection closed by 10.0.0.1 port 38334
Jan 29 10:56:03.291807 sshd-session[3686]: pam_unix(sshd:session): session closed for user core
Jan 29 10:56:03.295011 systemd[1]: sshd@14-10.0.0.63:22-10.0.0.1:38334.service: Deactivated successfully.
Jan 29 10:56:03.296849 systemd[1]: session-15.scope: Deactivated successfully.
Jan 29 10:56:03.299210 systemd-logind[1428]: Session 15 logged out. Waiting for processes to exit.
Jan 29 10:56:03.300777 systemd-logind[1428]: Removed session 15.
Jan 29 10:56:08.302393 systemd[1]: Started sshd@15-10.0.0.63:22-10.0.0.1:38336.service - OpenSSH per-connection server daemon (10.0.0.1:38336).
Jan 29 10:56:08.348617 sshd[3727]: Accepted publickey for core from 10.0.0.1 port 38336 ssh2: RSA SHA256:sI8AI+xrXTR4mEEiShqASd95E8XwBKuT8obfVDWOoDw
Jan 29 10:56:08.349926 sshd-session[3727]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 29 10:56:08.355169 systemd-logind[1428]: New session 16 of user core.
Jan 29 10:56:08.364885 systemd[1]: Started session-16.scope - Session 16 of User core.
Jan 29 10:56:08.487329 sshd[3729]: Connection closed by 10.0.0.1 port 38336
Jan 29 10:56:08.487667 sshd-session[3727]: pam_unix(sshd:session): session closed for user core
Jan 29 10:56:08.491749 systemd[1]: sshd@15-10.0.0.63:22-10.0.0.1:38336.service: Deactivated successfully.
Jan 29 10:56:08.494655 systemd[1]: session-16.scope: Deactivated successfully.
Jan 29 10:56:08.495970 systemd-logind[1428]: Session 16 logged out. Waiting for processes to exit.
Jan 29 10:56:08.496795 systemd-logind[1428]: Removed session 16.
Jan 29 10:56:13.498093 systemd[1]: Started sshd@16-10.0.0.63:22-10.0.0.1:47028.service - OpenSSH per-connection server daemon (10.0.0.1:47028).
Jan 29 10:56:13.540506 sshd[3763]: Accepted publickey for core from 10.0.0.1 port 47028 ssh2: RSA SHA256:sI8AI+xrXTR4mEEiShqASd95E8XwBKuT8obfVDWOoDw
Jan 29 10:56:13.541579 sshd-session[3763]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 29 10:56:13.544878 systemd-logind[1428]: New session 17 of user core.
Jan 29 10:56:13.561856 systemd[1]: Started session-17.scope - Session 17 of User core.
Jan 29 10:56:13.667252 sshd[3765]: Connection closed by 10.0.0.1 port 47028
Jan 29 10:56:13.666443 sshd-session[3763]: pam_unix(sshd:session): session closed for user core
Jan 29 10:56:13.669272 systemd[1]: sshd@16-10.0.0.63:22-10.0.0.1:47028.service: Deactivated successfully.
Jan 29 10:56:13.671172 systemd[1]: session-17.scope: Deactivated successfully.
Jan 29 10:56:13.671989 systemd-logind[1428]: Session 17 logged out. Waiting for processes to exit.
Jan 29 10:56:13.672727 systemd-logind[1428]: Removed session 17.
Jan 29 10:56:18.710105 systemd[1]: Started sshd@17-10.0.0.63:22-10.0.0.1:47040.service - OpenSSH per-connection server daemon (10.0.0.1:47040).
Jan 29 10:56:18.758783 sshd[3798]: Accepted publickey for core from 10.0.0.1 port 47040 ssh2: RSA SHA256:sI8AI+xrXTR4mEEiShqASd95E8XwBKuT8obfVDWOoDw
Jan 29 10:56:18.762554 sshd-session[3798]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 29 10:56:18.767226 systemd-logind[1428]: New session 18 of user core.
Jan 29 10:56:18.778906 systemd[1]: Started session-18.scope - Session 18 of User core.
Jan 29 10:56:18.902414 sshd[3800]: Connection closed by 10.0.0.1 port 47040
Jan 29 10:56:18.902877 sshd-session[3798]: pam_unix(sshd:session): session closed for user core
Jan 29 10:56:18.906205 systemd[1]: sshd@17-10.0.0.63:22-10.0.0.1:47040.service: Deactivated successfully.
Jan 29 10:56:18.908037 systemd[1]: session-18.scope: Deactivated successfully.
Jan 29 10:56:18.910260 systemd-logind[1428]: Session 18 logged out. Waiting for processes to exit.
Jan 29 10:56:18.911180 systemd-logind[1428]: Removed session 18.