Jan 30 13:05:40.061404 kernel: Booting Linux on physical CPU 0x0000000000 [0x413fd0c1] Jan 30 13:05:40.061456 kernel: Linux version 6.6.74-flatcar (build@pony-truck.infra.kinvolk.io) (aarch64-cros-linux-gnu-gcc (Gentoo Hardened 14.2.1_p20241116 p3) 14.2.1 20241116, GNU ld (Gentoo 2.42 p6) 2.42.0) #1 SMP PREEMPT Wed Jan 29 09:30:22 -00 2025 Jan 30 13:05:40.061468 kernel: KASLR enabled Jan 30 13:05:40.061474 kernel: efi: EFI v2.7 by EDK II Jan 30 13:05:40.061480 kernel: efi: SMBIOS 3.0=0xdced0000 MEMATTR=0xdbbae018 ACPI 2.0=0xd9b43018 RNG=0xd9b43a18 MEMRESERVE=0xd9b40218 Jan 30 13:05:40.061486 kernel: random: crng init done Jan 30 13:05:40.061492 kernel: secureboot: Secure boot disabled Jan 30 13:05:40.061498 kernel: ACPI: Early table checksum verification disabled Jan 30 13:05:40.061504 kernel: ACPI: RSDP 0x00000000D9B43018 000024 (v02 BOCHS ) Jan 30 13:05:40.061511 kernel: ACPI: XSDT 0x00000000D9B43F18 000064 (v01 BOCHS BXPC 00000001 01000013) Jan 30 13:05:40.061517 kernel: ACPI: FACP 0x00000000D9B43B18 000114 (v06 BOCHS BXPC 00000001 BXPC 00000001) Jan 30 13:05:40.061523 kernel: ACPI: DSDT 0x00000000D9B41018 0014A2 (v02 BOCHS BXPC 00000001 BXPC 00000001) Jan 30 13:05:40.061529 kernel: ACPI: APIC 0x00000000D9B43C98 0001A8 (v04 BOCHS BXPC 00000001 BXPC 00000001) Jan 30 13:05:40.061535 kernel: ACPI: PPTT 0x00000000D9B43098 00009C (v02 BOCHS BXPC 00000001 BXPC 00000001) Jan 30 13:05:40.061542 kernel: ACPI: GTDT 0x00000000D9B43818 000060 (v02 BOCHS BXPC 00000001 BXPC 00000001) Jan 30 13:05:40.061550 kernel: ACPI: MCFG 0x00000000D9B43A98 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001) Jan 30 13:05:40.061556 kernel: ACPI: SPCR 0x00000000D9B43918 000050 (v02 BOCHS BXPC 00000001 BXPC 00000001) Jan 30 13:05:40.061562 kernel: ACPI: DBG2 0x00000000D9B43998 000057 (v00 BOCHS BXPC 00000001 BXPC 00000001) Jan 30 13:05:40.061568 kernel: ACPI: IORT 0x00000000D9B43198 000080 (v03 BOCHS BXPC 00000001 BXPC 00000001) Jan 30 13:05:40.061574 kernel: ACPI: SPCR: console: 
pl011,mmio,0x9000000,9600 Jan 30 13:05:40.061580 kernel: NUMA: Failed to initialise from firmware Jan 30 13:05:40.061587 kernel: NUMA: Faking a node at [mem 0x0000000040000000-0x00000000dcffffff] Jan 30 13:05:40.061593 kernel: NUMA: NODE_DATA [mem 0xdc958800-0xdc95dfff] Jan 30 13:05:40.061599 kernel: Zone ranges: Jan 30 13:05:40.061605 kernel: DMA [mem 0x0000000040000000-0x00000000dcffffff] Jan 30 13:05:40.061613 kernel: DMA32 empty Jan 30 13:05:40.061619 kernel: Normal empty Jan 30 13:05:40.061625 kernel: Movable zone start for each node Jan 30 13:05:40.061631 kernel: Early memory node ranges Jan 30 13:05:40.061637 kernel: node 0: [mem 0x0000000040000000-0x00000000d967ffff] Jan 30 13:05:40.061643 kernel: node 0: [mem 0x00000000d9680000-0x00000000d968ffff] Jan 30 13:05:40.061649 kernel: node 0: [mem 0x00000000d9690000-0x00000000d976ffff] Jan 30 13:05:40.061655 kernel: node 0: [mem 0x00000000d9770000-0x00000000d9b3ffff] Jan 30 13:05:40.061661 kernel: node 0: [mem 0x00000000d9b40000-0x00000000dce1ffff] Jan 30 13:05:40.061667 kernel: node 0: [mem 0x00000000dce20000-0x00000000dceaffff] Jan 30 13:05:40.061673 kernel: node 0: [mem 0x00000000dceb0000-0x00000000dcebffff] Jan 30 13:05:40.061679 kernel: node 0: [mem 0x00000000dcec0000-0x00000000dcfdffff] Jan 30 13:05:40.061687 kernel: node 0: [mem 0x00000000dcfe0000-0x00000000dcffffff] Jan 30 13:05:40.061693 kernel: Initmem setup node 0 [mem 0x0000000040000000-0x00000000dcffffff] Jan 30 13:05:40.061699 kernel: On node 0, zone DMA: 12288 pages in unavailable ranges Jan 30 13:05:40.061708 kernel: psci: probing for conduit method from ACPI. Jan 30 13:05:40.061714 kernel: psci: PSCIv1.1 detected in firmware. 
Jan 30 13:05:40.061721 kernel: psci: Using standard PSCI v0.2 function IDs Jan 30 13:05:40.061729 kernel: psci: Trusted OS migration not required Jan 30 13:05:40.061736 kernel: psci: SMC Calling Convention v1.1 Jan 30 13:05:40.061742 kernel: smccc: KVM: hypervisor services detected (0x00000000 0x00000000 0x00000000 0x00000003) Jan 30 13:05:40.061749 kernel: percpu: Embedded 31 pages/cpu s86696 r8192 d32088 u126976 Jan 30 13:05:40.061756 kernel: pcpu-alloc: s86696 r8192 d32088 u126976 alloc=31*4096 Jan 30 13:05:40.061762 kernel: pcpu-alloc: [0] 0 [0] 1 [0] 2 [0] 3 Jan 30 13:05:40.061768 kernel: Detected PIPT I-cache on CPU0 Jan 30 13:05:40.061783 kernel: CPU features: detected: GIC system register CPU interface Jan 30 13:05:40.061790 kernel: CPU features: detected: Hardware dirty bit management Jan 30 13:05:40.061796 kernel: CPU features: detected: Spectre-v4 Jan 30 13:05:40.061804 kernel: CPU features: detected: Spectre-BHB Jan 30 13:05:40.061811 kernel: CPU features: kernel page table isolation forced ON by KASLR Jan 30 13:05:40.061817 kernel: CPU features: detected: Kernel page table isolation (KPTI) Jan 30 13:05:40.061823 kernel: CPU features: detected: ARM erratum 1418040 Jan 30 13:05:40.061830 kernel: CPU features: detected: SSBS not fully self-synchronizing Jan 30 13:05:40.061836 kernel: alternatives: applying boot alternatives Jan 30 13:05:40.061843 kernel: Kernel command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected acpi=force verity.usrhash=e6957044c3256d96283265c263579aa4275d1d707b02496fcb081f5fc6356346 Jan 30 13:05:40.061850 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space. 
Jan 30 13:05:40.061856 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear) Jan 30 13:05:40.061863 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear) Jan 30 13:05:40.061869 kernel: Fallback order for Node 0: 0 Jan 30 13:05:40.061877 kernel: Built 1 zonelists, mobility grouping on. Total pages: 633024 Jan 30 13:05:40.061883 kernel: Policy zone: DMA Jan 30 13:05:40.061889 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off Jan 30 13:05:40.061896 kernel: software IO TLB: area num 4. Jan 30 13:05:40.061902 kernel: software IO TLB: mapped [mem 0x00000000d2e00000-0x00000000d6e00000] (64MB) Jan 30 13:05:40.061909 kernel: Memory: 2385940K/2572288K available (10304K kernel code, 2186K rwdata, 8092K rodata, 39936K init, 897K bss, 186348K reserved, 0K cma-reserved) Jan 30 13:05:40.061915 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1 Jan 30 13:05:40.061922 kernel: rcu: Preemptible hierarchical RCU implementation. Jan 30 13:05:40.061929 kernel: rcu: RCU event tracing is enabled. Jan 30 13:05:40.061935 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4. Jan 30 13:05:40.061942 kernel: Trampoline variant of Tasks RCU enabled. Jan 30 13:05:40.061948 kernel: Tracing variant of Tasks RCU enabled. Jan 30 13:05:40.061957 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. 
Jan 30 13:05:40.061963 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4 Jan 30 13:05:40.061970 kernel: NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0 Jan 30 13:05:40.061976 kernel: GICv3: 256 SPIs implemented Jan 30 13:05:40.061982 kernel: GICv3: 0 Extended SPIs implemented Jan 30 13:05:40.061989 kernel: Root IRQ handler: gic_handle_irq Jan 30 13:05:40.061995 kernel: GICv3: GICv3 features: 16 PPIs, DirectLPI Jan 30 13:05:40.062002 kernel: GICv3: CPU0: found redistributor 0 region 0:0x00000000080a0000 Jan 30 13:05:40.062008 kernel: ITS [mem 0x08080000-0x0809ffff] Jan 30 13:05:40.062015 kernel: ITS@0x0000000008080000: allocated 8192 Devices @400c0000 (indirect, esz 8, psz 64K, shr 1) Jan 30 13:05:40.062022 kernel: ITS@0x0000000008080000: allocated 8192 Interrupt Collections @400d0000 (flat, esz 8, psz 64K, shr 1) Jan 30 13:05:40.062030 kernel: GICv3: using LPI property table @0x00000000400f0000 Jan 30 13:05:40.062037 kernel: GICv3: CPU0: using allocated LPI pending table @0x0000000040100000 Jan 30 13:05:40.062044 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention. Jan 30 13:05:40.062051 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Jan 30 13:05:40.062058 kernel: arch_timer: cp15 timer(s) running at 25.00MHz (virt). Jan 30 13:05:40.062064 kernel: clocksource: arch_sys_counter: mask: 0xffffffffffffff max_cycles: 0x5c40939b5, max_idle_ns: 440795202646 ns Jan 30 13:05:40.062071 kernel: sched_clock: 56 bits at 25MHz, resolution 40ns, wraps every 4398046511100ns Jan 30 13:05:40.062078 kernel: arm-pv: using stolen time PV Jan 30 13:05:40.062085 kernel: Console: colour dummy device 80x25 Jan 30 13:05:40.062092 kernel: ACPI: Core revision 20230628 Jan 30 13:05:40.062099 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 
50.00 BogoMIPS (lpj=25000) Jan 30 13:05:40.062107 kernel: pid_max: default: 32768 minimum: 301 Jan 30 13:05:40.062114 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity Jan 30 13:05:40.062121 kernel: landlock: Up and running. Jan 30 13:05:40.062127 kernel: SELinux: Initializing. Jan 30 13:05:40.062134 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Jan 30 13:05:40.062141 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Jan 30 13:05:40.062148 kernel: RCU Tasks: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4. Jan 30 13:05:40.062155 kernel: RCU Tasks Trace: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4. Jan 30 13:05:40.062162 kernel: rcu: Hierarchical SRCU implementation. Jan 30 13:05:40.062170 kernel: rcu: Max phase no-delay instances is 400. Jan 30 13:05:40.062177 kernel: Platform MSI: ITS@0x8080000 domain created Jan 30 13:05:40.062183 kernel: PCI/MSI: ITS@0x8080000 domain created Jan 30 13:05:40.062190 kernel: Remapping and enabling EFI services. Jan 30 13:05:40.062197 kernel: smp: Bringing up secondary CPUs ... 
Jan 30 13:05:40.062203 kernel: Detected PIPT I-cache on CPU1 Jan 30 13:05:40.062211 kernel: GICv3: CPU1: found redistributor 1 region 0:0x00000000080c0000 Jan 30 13:05:40.062217 kernel: GICv3: CPU1: using allocated LPI pending table @0x0000000040110000 Jan 30 13:05:40.062224 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Jan 30 13:05:40.062232 kernel: CPU1: Booted secondary processor 0x0000000001 [0x413fd0c1] Jan 30 13:05:40.062239 kernel: Detected PIPT I-cache on CPU2 Jan 30 13:05:40.062252 kernel: GICv3: CPU2: found redistributor 2 region 0:0x00000000080e0000 Jan 30 13:05:40.062260 kernel: GICv3: CPU2: using allocated LPI pending table @0x0000000040120000 Jan 30 13:05:40.062268 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Jan 30 13:05:40.062274 kernel: CPU2: Booted secondary processor 0x0000000002 [0x413fd0c1] Jan 30 13:05:40.062281 kernel: Detected PIPT I-cache on CPU3 Jan 30 13:05:40.062289 kernel: GICv3: CPU3: found redistributor 3 region 0:0x0000000008100000 Jan 30 13:05:40.062296 kernel: GICv3: CPU3: using allocated LPI pending table @0x0000000040130000 Jan 30 13:05:40.062304 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Jan 30 13:05:40.062311 kernel: CPU3: Booted secondary processor 0x0000000003 [0x413fd0c1] Jan 30 13:05:40.062318 kernel: smp: Brought up 1 node, 4 CPUs Jan 30 13:05:40.062325 kernel: SMP: Total of 4 processors activated. 
Jan 30 13:05:40.062333 kernel: CPU features: detected: 32-bit EL0 Support Jan 30 13:05:40.062340 kernel: CPU features: detected: Data cache clean to the PoU not required for I/D coherence Jan 30 13:05:40.062347 kernel: CPU features: detected: Common not Private translations Jan 30 13:05:40.062354 kernel: CPU features: detected: CRC32 instructions Jan 30 13:05:40.062364 kernel: CPU features: detected: Enhanced Virtualization Traps Jan 30 13:05:40.062371 kernel: CPU features: detected: RCpc load-acquire (LDAPR) Jan 30 13:05:40.062407 kernel: CPU features: detected: LSE atomic instructions Jan 30 13:05:40.062418 kernel: CPU features: detected: Privileged Access Never Jan 30 13:05:40.062431 kernel: CPU features: detected: RAS Extension Support Jan 30 13:05:40.062439 kernel: CPU features: detected: Speculative Store Bypassing Safe (SSBS) Jan 30 13:05:40.062446 kernel: CPU: All CPU(s) started at EL1 Jan 30 13:05:40.062453 kernel: alternatives: applying system-wide alternatives Jan 30 13:05:40.062460 kernel: devtmpfs: initialized Jan 30 13:05:40.062469 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns Jan 30 13:05:40.062477 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear) Jan 30 13:05:40.062484 kernel: pinctrl core: initialized pinctrl subsystem Jan 30 13:05:40.062491 kernel: SMBIOS 3.0.0 present. 
Jan 30 13:05:40.062498 kernel: DMI: QEMU KVM Virtual Machine, BIOS unknown 02/02/2022 Jan 30 13:05:40.062505 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family Jan 30 13:05:40.062513 kernel: DMA: preallocated 512 KiB GFP_KERNEL pool for atomic allocations Jan 30 13:05:40.062520 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations Jan 30 13:05:40.062527 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations Jan 30 13:05:40.062536 kernel: audit: initializing netlink subsys (disabled) Jan 30 13:05:40.062543 kernel: audit: type=2000 audit(0.026:1): state=initialized audit_enabled=0 res=1 Jan 30 13:05:40.062550 kernel: thermal_sys: Registered thermal governor 'step_wise' Jan 30 13:05:40.062557 kernel: cpuidle: using governor menu Jan 30 13:05:40.062564 kernel: hw-breakpoint: found 6 breakpoint and 4 watchpoint registers. Jan 30 13:05:40.062572 kernel: ASID allocator initialised with 32768 entries Jan 30 13:05:40.062579 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 Jan 30 13:05:40.062586 kernel: Serial: AMBA PL011 UART driver Jan 30 13:05:40.062593 kernel: Modules: 2G module region forced by RANDOMIZE_MODULE_REGION_FULL Jan 30 13:05:40.062602 kernel: Modules: 0 pages in range for non-PLT usage Jan 30 13:05:40.062609 kernel: Modules: 508880 pages in range for PLT usage Jan 30 13:05:40.062616 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages Jan 30 13:05:40.062623 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 1.00 GiB page Jan 30 13:05:40.062630 kernel: HugeTLB: registered 32.0 MiB page size, pre-allocated 0 pages Jan 30 13:05:40.062637 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 32.0 MiB page Jan 30 13:05:40.062645 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages Jan 30 13:05:40.062652 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 2.00 MiB page Jan 30 13:05:40.062659 kernel: HugeTLB: registered 64.0 KiB page size, pre-allocated 0 
pages Jan 30 13:05:40.062668 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 64.0 KiB page Jan 30 13:05:40.062675 kernel: ACPI: Added _OSI(Module Device) Jan 30 13:05:40.062682 kernel: ACPI: Added _OSI(Processor Device) Jan 30 13:05:40.062690 kernel: ACPI: Added _OSI(3.0 _SCP Extensions) Jan 30 13:05:40.062697 kernel: ACPI: Added _OSI(Processor Aggregator Device) Jan 30 13:05:40.062705 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded Jan 30 13:05:40.062712 kernel: ACPI: Interpreter enabled Jan 30 13:05:40.062719 kernel: ACPI: Using GIC for interrupt routing Jan 30 13:05:40.062726 kernel: ACPI: MCFG table detected, 1 entries Jan 30 13:05:40.062733 kernel: ARMH0011:00: ttyAMA0 at MMIO 0x9000000 (irq = 12, base_baud = 0) is a SBSA Jan 30 13:05:40.062743 kernel: printk: console [ttyAMA0] enabled Jan 30 13:05:40.062750 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff]) Jan 30 13:05:40.062919 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3] Jan 30 13:05:40.063037 kernel: acpi PNP0A08:00: _OSC: platform does not support [LTR] Jan 30 13:05:40.063103 kernel: acpi PNP0A08:00: _OSC: OS now controls [PCIeHotplug PME AER PCIeCapability] Jan 30 13:05:40.063170 kernel: acpi PNP0A08:00: ECAM area [mem 0x4010000000-0x401fffffff] reserved by PNP0C02:00 Jan 30 13:05:40.063237 kernel: acpi PNP0A08:00: ECAM at [mem 0x4010000000-0x401fffffff] for [bus 00-ff] Jan 30 13:05:40.063250 kernel: ACPI: Remapped I/O 0x000000003eff0000 to [io 0x0000-0xffff window] Jan 30 13:05:40.063257 kernel: PCI host bridge to bus 0000:00 Jan 30 13:05:40.063329 kernel: pci_bus 0000:00: root bus resource [mem 0x10000000-0x3efeffff window] Jan 30 13:05:40.063413 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0xffff window] Jan 30 13:05:40.063509 kernel: pci_bus 0000:00: root bus resource [mem 0x8000000000-0xffffffffff window] Jan 30 13:05:40.063573 kernel: pci_bus 0000:00: root bus resource [bus 00-ff] Jan 30 13:05:40.063657 
kernel: pci 0000:00:00.0: [1b36:0008] type 00 class 0x060000 Jan 30 13:05:40.063742 kernel: pci 0000:00:01.0: [1af4:1005] type 00 class 0x00ff00 Jan 30 13:05:40.063824 kernel: pci 0000:00:01.0: reg 0x10: [io 0x0000-0x001f] Jan 30 13:05:40.063893 kernel: pci 0000:00:01.0: reg 0x14: [mem 0x10000000-0x10000fff] Jan 30 13:05:40.063958 kernel: pci 0000:00:01.0: reg 0x20: [mem 0x8000000000-0x8000003fff 64bit pref] Jan 30 13:05:40.064024 kernel: pci 0000:00:01.0: BAR 4: assigned [mem 0x8000000000-0x8000003fff 64bit pref] Jan 30 13:05:40.064094 kernel: pci 0000:00:01.0: BAR 1: assigned [mem 0x10000000-0x10000fff] Jan 30 13:05:40.064208 kernel: pci 0000:00:01.0: BAR 0: assigned [io 0x1000-0x101f] Jan 30 13:05:40.064273 kernel: pci_bus 0000:00: resource 4 [mem 0x10000000-0x3efeffff window] Jan 30 13:05:40.064331 kernel: pci_bus 0000:00: resource 5 [io 0x0000-0xffff window] Jan 30 13:05:40.064389 kernel: pci_bus 0000:00: resource 6 [mem 0x8000000000-0xffffffffff window] Jan 30 13:05:40.064398 kernel: ACPI: PCI: Interrupt link GSI0 configured for IRQ 35 Jan 30 13:05:40.064406 kernel: ACPI: PCI: Interrupt link GSI1 configured for IRQ 36 Jan 30 13:05:40.064413 kernel: ACPI: PCI: Interrupt link GSI2 configured for IRQ 37 Jan 30 13:05:40.064420 kernel: ACPI: PCI: Interrupt link GSI3 configured for IRQ 38 Jan 30 13:05:40.064438 kernel: iommu: Default domain type: Translated Jan 30 13:05:40.064445 kernel: iommu: DMA domain TLB invalidation policy: strict mode Jan 30 13:05:40.064452 kernel: efivars: Registered efivars operations Jan 30 13:05:40.064459 kernel: vgaarb: loaded Jan 30 13:05:40.064466 kernel: clocksource: Switched to clocksource arch_sys_counter Jan 30 13:05:40.064473 kernel: VFS: Disk quotas dquot_6.6.0 Jan 30 13:05:40.064480 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) Jan 30 13:05:40.064487 kernel: pnp: PnP ACPI init Jan 30 13:05:40.064564 kernel: system 00:00: [mem 0x4010000000-0x401fffffff window] could not be reserved Jan 30 13:05:40.064577 
kernel: pnp: PnP ACPI: found 1 devices Jan 30 13:05:40.064585 kernel: NET: Registered PF_INET protocol family Jan 30 13:05:40.064592 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear) Jan 30 13:05:40.064599 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear) Jan 30 13:05:40.064606 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) Jan 30 13:05:40.064613 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear) Jan 30 13:05:40.064621 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear) Jan 30 13:05:40.064628 kernel: TCP: Hash tables configured (established 32768 bind 32768) Jan 30 13:05:40.064638 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear) Jan 30 13:05:40.064645 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear) Jan 30 13:05:40.064652 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family Jan 30 13:05:40.064659 kernel: PCI: CLS 0 bytes, default 64 Jan 30 13:05:40.064666 kernel: kvm [1]: HYP mode not available Jan 30 13:05:40.064673 kernel: Initialise system trusted keyrings Jan 30 13:05:40.064680 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0 Jan 30 13:05:40.064687 kernel: Key type asymmetric registered Jan 30 13:05:40.064694 kernel: Asymmetric key parser 'x509' registered Jan 30 13:05:40.064703 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250) Jan 30 13:05:40.064710 kernel: io scheduler mq-deadline registered Jan 30 13:05:40.064716 kernel: io scheduler kyber registered Jan 30 13:05:40.064723 kernel: io scheduler bfq registered Jan 30 13:05:40.064730 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0C:00/input/input0 Jan 30 13:05:40.064737 kernel: ACPI: button: Power Button [PWRB] Jan 30 13:05:40.064745 kernel: ACPI: \_SB_.PCI0.GSI1: Enabled at IRQ 36 Jan 30 13:05:40.064826 kernel: virtio-pci 0000:00:01.0: 
enabling device (0005 -> 0007) Jan 30 13:05:40.064837 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Jan 30 13:05:40.064848 kernel: thunder_xcv, ver 1.0 Jan 30 13:05:40.064855 kernel: thunder_bgx, ver 1.0 Jan 30 13:05:40.064862 kernel: nicpf, ver 1.0 Jan 30 13:05:40.064869 kernel: nicvf, ver 1.0 Jan 30 13:05:40.064952 kernel: rtc-efi rtc-efi.0: registered as rtc0 Jan 30 13:05:40.065017 kernel: rtc-efi rtc-efi.0: setting system clock to 2025-01-30T13:05:39 UTC (1738242339) Jan 30 13:05:40.065027 kernel: hid: raw HID events driver (C) Jiri Kosina Jan 30 13:05:40.065034 kernel: hw perfevents: enabled with armv8_pmuv3_0 PMU driver, 7 counters available Jan 30 13:05:40.065044 kernel: watchdog: Delayed init of the lockup detector failed: -19 Jan 30 13:05:40.065051 kernel: watchdog: Hard watchdog permanently disabled Jan 30 13:05:40.065058 kernel: NET: Registered PF_INET6 protocol family Jan 30 13:05:40.065065 kernel: Segment Routing with IPv6 Jan 30 13:05:40.065072 kernel: In-situ OAM (IOAM) with IPv6 Jan 30 13:05:40.065079 kernel: NET: Registered PF_PACKET protocol family Jan 30 13:05:40.065086 kernel: Key type dns_resolver registered Jan 30 13:05:40.065093 kernel: registered taskstats version 1 Jan 30 13:05:40.065100 kernel: Loading compiled-in X.509 certificates Jan 30 13:05:40.065107 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.74-flatcar: c31663d2c680b3b306c17f44b5295280d3a2e28a' Jan 30 13:05:40.065116 kernel: Key type .fscrypt registered Jan 30 13:05:40.065123 kernel: Key type fscrypt-provisioning registered Jan 30 13:05:40.065130 kernel: ima: No TPM chip found, activating TPM-bypass! 
Jan 30 13:05:40.065137 kernel: ima: Allocated hash algorithm: sha1 Jan 30 13:05:40.065144 kernel: ima: No architecture policies found Jan 30 13:05:40.065151 kernel: alg: No test for fips(ansi_cprng) (fips_ansi_cprng) Jan 30 13:05:40.065158 kernel: clk: Disabling unused clocks Jan 30 13:05:40.065165 kernel: Freeing unused kernel memory: 39936K Jan 30 13:05:40.065174 kernel: Run /init as init process Jan 30 13:05:40.065181 kernel: with arguments: Jan 30 13:05:40.065188 kernel: /init Jan 30 13:05:40.065194 kernel: with environment: Jan 30 13:05:40.065201 kernel: HOME=/ Jan 30 13:05:40.065209 kernel: TERM=linux Jan 30 13:05:40.065215 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a Jan 30 13:05:40.065225 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Jan 30 13:05:40.065235 systemd[1]: Detected virtualization kvm. Jan 30 13:05:40.065243 systemd[1]: Detected architecture arm64. Jan 30 13:05:40.065251 systemd[1]: Running in initrd. Jan 30 13:05:40.065258 systemd[1]: No hostname configured, using default hostname. Jan 30 13:05:40.065265 systemd[1]: Hostname set to . Jan 30 13:05:40.065273 systemd[1]: Initializing machine ID from VM UUID. Jan 30 13:05:40.065281 systemd[1]: Queued start job for default target initrd.target. Jan 30 13:05:40.065288 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jan 30 13:05:40.065298 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jan 30 13:05:40.065306 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... 
Jan 30 13:05:40.065313 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Jan 30 13:05:40.065321 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... Jan 30 13:05:40.065329 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... Jan 30 13:05:40.065338 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132... Jan 30 13:05:40.065346 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr... Jan 30 13:05:40.065356 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jan 30 13:05:40.065364 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Jan 30 13:05:40.065372 systemd[1]: Reached target paths.target - Path Units. Jan 30 13:05:40.065379 systemd[1]: Reached target slices.target - Slice Units. Jan 30 13:05:40.065387 systemd[1]: Reached target swap.target - Swaps. Jan 30 13:05:40.065394 systemd[1]: Reached target timers.target - Timer Units. Jan 30 13:05:40.065402 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. Jan 30 13:05:40.065409 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. Jan 30 13:05:40.065419 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Jan 30 13:05:40.065437 systemd[1]: Listening on systemd-journald.socket - Journal Socket. Jan 30 13:05:40.065459 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Jan 30 13:05:40.065467 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Jan 30 13:05:40.065475 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Jan 30 13:05:40.065483 systemd[1]: Reached target sockets.target - Socket Units. 
Jan 30 13:05:40.065490 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... Jan 30 13:05:40.065498 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Jan 30 13:05:40.065506 systemd[1]: Finished network-cleanup.service - Network Cleanup. Jan 30 13:05:40.065516 systemd[1]: Starting systemd-fsck-usr.service... Jan 30 13:05:40.065524 systemd[1]: Starting systemd-journald.service - Journal Service... Jan 30 13:05:40.065541 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Jan 30 13:05:40.065549 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 30 13:05:40.065557 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. Jan 30 13:05:40.065565 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Jan 30 13:05:40.065572 systemd[1]: Finished systemd-fsck-usr.service. Jan 30 13:05:40.065582 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Jan 30 13:05:40.065612 systemd-journald[238]: Collecting audit messages is disabled. Jan 30 13:05:40.065634 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Jan 30 13:05:40.065644 systemd-journald[238]: Journal started Jan 30 13:05:40.065669 systemd-journald[238]: Runtime Journal (/run/log/journal/59f637a227334d7a91945d180e9e76d8) is 5.9M, max 47.3M, 41.4M free. Jan 30 13:05:40.055860 systemd-modules-load[240]: Inserted module 'overlay' Jan 30 13:05:40.068865 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jan 30 13:05:40.071102 systemd[1]: Started systemd-journald.service - Journal Service. Jan 30 13:05:40.071125 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. 
Jan 30 13:05:40.076716 systemd-modules-load[240]: Inserted module 'br_netfilter' Jan 30 13:05:40.077749 kernel: Bridge firewalling registered Jan 30 13:05:40.083632 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Jan 30 13:05:40.085599 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Jan 30 13:05:40.087973 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Jan 30 13:05:40.092792 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Jan 30 13:05:40.098675 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jan 30 13:05:40.100711 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jan 30 13:05:40.104209 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Jan 30 13:05:40.108871 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jan 30 13:05:40.111314 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... Jan 30 13:05:40.112672 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Jan 30 13:05:40.116340 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Jan 30 13:05:40.126603 dracut-cmdline[274]: dracut-dracut-053 Jan 30 13:05:40.129436 dracut-cmdline[274]: Using kernel command line parameters: rd.driver.pre=btrfs BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected acpi=force verity.usrhash=e6957044c3256d96283265c263579aa4275d1d707b02496fcb081f5fc6356346 Jan 30 13:05:40.146471 systemd-resolved[277]: Positive Trust Anchors: Jan 30 13:05:40.146489 systemd-resolved[277]: . 
IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Jan 30 13:05:40.146521 systemd-resolved[277]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Jan 30 13:05:40.152191 systemd-resolved[277]: Defaulting to hostname 'linux'. Jan 30 13:05:40.155255 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Jan 30 13:05:40.157728 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Jan 30 13:05:40.208466 kernel: SCSI subsystem initialized Jan 30 13:05:40.213447 kernel: Loading iSCSI transport class v2.0-870. Jan 30 13:05:40.221463 kernel: iscsi: registered transport (tcp) Jan 30 13:05:40.235449 kernel: iscsi: registered transport (qla4xxx) Jan 30 13:05:40.235483 kernel: QLogic iSCSI HBA Driver Jan 30 13:05:40.289512 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. Jan 30 13:05:40.300688 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... Jan 30 13:05:40.322440 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. 
Jan 30 13:05:40.322538 kernel: device-mapper: uevent: version 1.0.3 Jan 30 13:05:40.322551 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com Jan 30 13:05:40.376480 kernel: raid6: neonx8 gen() 15720 MB/s Jan 30 13:05:40.393464 kernel: raid6: neonx4 gen() 15609 MB/s Jan 30 13:05:40.410458 kernel: raid6: neonx2 gen() 13187 MB/s Jan 30 13:05:40.427450 kernel: raid6: neonx1 gen() 10523 MB/s Jan 30 13:05:40.444452 kernel: raid6: int64x8 gen() 6785 MB/s Jan 30 13:05:40.461454 kernel: raid6: int64x4 gen() 7302 MB/s Jan 30 13:05:40.478454 kernel: raid6: int64x2 gen() 6060 MB/s Jan 30 13:05:40.495678 kernel: raid6: int64x1 gen() 5041 MB/s Jan 30 13:05:40.495695 kernel: raid6: using algorithm neonx8 gen() 15720 MB/s Jan 30 13:05:40.513607 kernel: raid6: .... xor() 12023 MB/s, rmw enabled Jan 30 13:05:40.513619 kernel: raid6: using neon recovery algorithm Jan 30 13:05:40.519448 kernel: xor: measuring software checksum speed Jan 30 13:05:40.519466 kernel: 8regs : 20676 MB/sec Jan 30 13:05:40.519483 kernel: 32regs : 19464 MB/sec Jan 30 13:05:40.520811 kernel: arm64_neon : 27663 MB/sec Jan 30 13:05:40.520823 kernel: xor: using function: arm64_neon (27663 MB/sec) Jan 30 13:05:40.573459 kernel: Btrfs loaded, zoned=no, fsverity=no Jan 30 13:05:40.587509 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Jan 30 13:05:40.599651 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Jan 30 13:05:40.612276 systemd-udevd[461]: Using default interface naming scheme 'v255'. Jan 30 13:05:40.616258 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Jan 30 13:05:40.629690 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... Jan 30 13:05:40.647495 dracut-pre-trigger[469]: rd.md=0: removing MD RAID activation Jan 30 13:05:40.681250 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. 
Jan 30 13:05:40.692632 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Jan 30 13:05:40.738483 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Jan 30 13:05:40.746670 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Jan 30 13:05:40.765856 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Jan 30 13:05:40.768121 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Jan 30 13:05:40.770138 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Jan 30 13:05:40.772950 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Jan 30 13:05:40.783635 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Jan 30 13:05:40.795329 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Jan 30 13:05:40.797690 kernel: virtio_blk virtio1: 1/0/0 default/read/poll queues
Jan 30 13:05:40.815664 kernel: virtio_blk virtio1: [vda] 19775488 512-byte logical blocks (10.1 GB/9.43 GiB)
Jan 30 13:05:40.815790 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Jan 30 13:05:40.815803 kernel: GPT:9289727 != 19775487
Jan 30 13:05:40.815820 kernel: GPT:Alternate GPT header not at the end of the disk.
Jan 30 13:05:40.815830 kernel: GPT:9289727 != 19775487
Jan 30 13:05:40.815840 kernel: GPT: Use GNU Parted to correct GPT errors.
Jan 30 13:05:40.815849 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Jan 30 13:05:40.819074 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Jan 30 13:05:40.819198 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jan 30 13:05:40.822818 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Jan 30 13:05:40.824142 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Jan 30 13:05:40.824282 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Jan 30 13:05:40.826682 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Jan 30 13:05:40.837692 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jan 30 13:05:40.844452 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/vda6 scanned by (udev-worker) (528)
Jan 30 13:05:40.844507 kernel: BTRFS: device fsid 1e2e5fa7-c757-4d5d-af66-73afe98fbaae devid 1 transid 39 /dev/vda3 scanned by (udev-worker) (516)
Jan 30 13:05:40.849538 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT.
Jan 30 13:05:40.857010 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM.
Jan 30 13:05:40.859879 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Jan 30 13:05:40.868298 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Jan 30 13:05:40.872572 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132.
Jan 30 13:05:40.873965 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A.
Jan 30 13:05:40.888619 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Jan 30 13:05:40.891340 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Jan 30 13:05:40.897719 disk-uuid[555]: Primary Header is updated.
Jan 30 13:05:40.897719 disk-uuid[555]: Secondary Entries is updated.
Jan 30 13:05:40.897719 disk-uuid[555]: Secondary Header is updated.
Jan 30 13:05:40.902493 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Jan 30 13:05:40.914465 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jan 30 13:05:41.921406 disk-uuid[556]: The operation has completed successfully.
Jan 30 13:05:41.922656 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Jan 30 13:05:41.948300 systemd[1]: disk-uuid.service: Deactivated successfully.
Jan 30 13:05:41.948409 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Jan 30 13:05:41.969646 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
Jan 30 13:05:41.974940 sh[576]: Success
Jan 30 13:05:41.994483 kernel: device-mapper: verity: sha256 using implementation "sha256-ce"
Jan 30 13:05:42.025644 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
Jan 30 13:05:42.036890 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
Jan 30 13:05:42.039284 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
Jan 30 13:05:42.051182 kernel: BTRFS info (device dm-0): first mount of filesystem 1e2e5fa7-c757-4d5d-af66-73afe98fbaae
Jan 30 13:05:42.051230 kernel: BTRFS info (device dm-0): using crc32c (crc32c-generic) checksum algorithm
Jan 30 13:05:42.051241 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead
Jan 30 13:05:42.053108 kernel: BTRFS info (device dm-0): disabling log replay at mount time
Jan 30 13:05:42.053127 kernel: BTRFS info (device dm-0): using free space tree
Jan 30 13:05:42.057224 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
Jan 30 13:05:42.058880 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Jan 30 13:05:42.059648 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Jan 30 13:05:42.062622 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Jan 30 13:05:42.074893 kernel: BTRFS info (device vda6): first mount of filesystem 5265f28b-8d78-4be2-8b05-2145d9ab7cfa
Jan 30 13:05:42.074956 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm
Jan 30 13:05:42.074966 kernel: BTRFS info (device vda6): using free space tree
Jan 30 13:05:42.078456 kernel: BTRFS info (device vda6): auto enabling async discard
Jan 30 13:05:42.086674 systemd[1]: mnt-oem.mount: Deactivated successfully.
Jan 30 13:05:42.089458 kernel: BTRFS info (device vda6): last unmount of filesystem 5265f28b-8d78-4be2-8b05-2145d9ab7cfa
Jan 30 13:05:42.095789 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Jan 30 13:05:42.103619 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Jan 30 13:05:42.185554 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Jan 30 13:05:42.196652 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Jan 30 13:05:42.221420 systemd-networkd[764]: lo: Link UP
Jan 30 13:05:42.221442 systemd-networkd[764]: lo: Gained carrier
Jan 30 13:05:42.222334 systemd-networkd[764]: Enumeration completed
Jan 30 13:05:42.222483 systemd[1]: Started systemd-networkd.service - Network Configuration.
Jan 30 13:05:42.223250 systemd-networkd[764]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jan 30 13:05:42.223253 systemd-networkd[764]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Jan 30 13:05:42.224244 systemd-networkd[764]: eth0: Link UP
Jan 30 13:05:42.224247 systemd-networkd[764]: eth0: Gained carrier
Jan 30 13:05:42.224255 systemd-networkd[764]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jan 30 13:05:42.232270 ignition[671]: Ignition 2.20.0
Jan 30 13:05:42.224565 systemd[1]: Reached target network.target - Network.
Jan 30 13:05:42.232276 ignition[671]: Stage: fetch-offline
Jan 30 13:05:42.232317 ignition[671]: no configs at "/usr/lib/ignition/base.d"
Jan 30 13:05:42.232325 ignition[671]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Jan 30 13:05:42.232501 ignition[671]: parsed url from cmdline: ""
Jan 30 13:05:42.232505 ignition[671]: no config URL provided
Jan 30 13:05:42.232509 ignition[671]: reading system config file "/usr/lib/ignition/user.ign"
Jan 30 13:05:42.232516 ignition[671]: no config at "/usr/lib/ignition/user.ign"
Jan 30 13:05:42.232548 ignition[671]: op(1): [started] loading QEMU firmware config module
Jan 30 13:05:42.242504 systemd-networkd[764]: eth0: DHCPv4 address 10.0.0.110/16, gateway 10.0.0.1 acquired from 10.0.0.1
Jan 30 13:05:42.232553 ignition[671]: op(1): executing: "modprobe" "qemu_fw_cfg"
Jan 30 13:05:42.241297 ignition[671]: op(1): [finished] loading QEMU firmware config module
Jan 30 13:05:42.266093 ignition[671]: parsing config with SHA512: 02a4ec88c6f2060e261a19f74cb8b15d4658ee1a2f2008e7569a8106d0425bbc3df18055241270839a13690b337f532056a1b3b1e1bfc91c0f2dfa4b8c367a8c
Jan 30 13:05:42.270718 unknown[671]: fetched base config from "system"
Jan 30 13:05:42.270727 unknown[671]: fetched user config from "qemu"
Jan 30 13:05:42.271112 ignition[671]: fetch-offline: fetch-offline passed
Jan 30 13:05:42.273201 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Jan 30 13:05:42.271189 ignition[671]: Ignition finished successfully
Jan 30 13:05:42.274668 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json).
Jan 30 13:05:42.283642 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Jan 30 13:05:42.294691 ignition[775]: Ignition 2.20.0
Jan 30 13:05:42.294701 ignition[775]: Stage: kargs
Jan 30 13:05:42.294897 ignition[775]: no configs at "/usr/lib/ignition/base.d"
Jan 30 13:05:42.294907 ignition[775]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Jan 30 13:05:42.298821 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Jan 30 13:05:42.295798 ignition[775]: kargs: kargs passed
Jan 30 13:05:42.295850 ignition[775]: Ignition finished successfully
Jan 30 13:05:42.306630 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Jan 30 13:05:42.317534 ignition[784]: Ignition 2.20.0
Jan 30 13:05:42.317551 ignition[784]: Stage: disks
Jan 30 13:05:42.317765 ignition[784]: no configs at "/usr/lib/ignition/base.d"
Jan 30 13:05:42.320767 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Jan 30 13:05:42.317787 ignition[784]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Jan 30 13:05:42.322115 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Jan 30 13:05:42.318691 ignition[784]: disks: disks passed
Jan 30 13:05:42.323882 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Jan 30 13:05:42.318745 ignition[784]: Ignition finished successfully
Jan 30 13:05:42.326182 systemd[1]: Reached target local-fs.target - Local File Systems.
Jan 30 13:05:42.328297 systemd[1]: Reached target sysinit.target - System Initialization.
Jan 30 13:05:42.329986 systemd[1]: Reached target basic.target - Basic System.
Jan 30 13:05:42.343634 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Jan 30 13:05:42.370198 systemd-fsck[795]: ROOT: clean, 14/553520 files, 52654/553472 blocks
Jan 30 13:05:42.421713 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Jan 30 13:05:42.434556 systemd[1]: Mounting sysroot.mount - /sysroot...
Jan 30 13:05:42.501448 kernel: EXT4-fs (vda9): mounted filesystem 88903c49-366d-43ff-90b1-141790b6e85c r/w with ordered data mode. Quota mode: none.
Jan 30 13:05:42.501634 systemd[1]: Mounted sysroot.mount - /sysroot.
Jan 30 13:05:42.503126 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Jan 30 13:05:42.515571 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Jan 30 13:05:42.517697 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Jan 30 13:05:42.519241 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met.
Jan 30 13:05:42.519303 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Jan 30 13:05:42.519331 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Jan 30 13:05:42.528860 kernel: BTRFS: device label OEM devid 1 transid 15 /dev/vda6 scanned by mount (803)
Jan 30 13:05:42.528652 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Jan 30 13:05:42.533416 kernel: BTRFS info (device vda6): first mount of filesystem 5265f28b-8d78-4be2-8b05-2145d9ab7cfa
Jan 30 13:05:42.533452 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm
Jan 30 13:05:42.533464 kernel: BTRFS info (device vda6): using free space tree
Jan 30 13:05:42.531833 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Jan 30 13:05:42.538592 kernel: BTRFS info (device vda6): auto enabling async discard
Jan 30 13:05:42.539753 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Jan 30 13:05:42.584899 initrd-setup-root[829]: cut: /sysroot/etc/passwd: No such file or directory
Jan 30 13:05:42.589198 initrd-setup-root[836]: cut: /sysroot/etc/group: No such file or directory
Jan 30 13:05:42.592870 initrd-setup-root[843]: cut: /sysroot/etc/shadow: No such file or directory
Jan 30 13:05:42.596840 initrd-setup-root[850]: cut: /sysroot/etc/gshadow: No such file or directory
Jan 30 13:05:42.688602 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Jan 30 13:05:42.702580 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Jan 30 13:05:42.705388 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Jan 30 13:05:42.710454 kernel: BTRFS info (device vda6): last unmount of filesystem 5265f28b-8d78-4be2-8b05-2145d9ab7cfa
Jan 30 13:05:42.735022 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Jan 30 13:05:42.758025 ignition[922]: INFO : Ignition 2.20.0
Jan 30 13:05:42.758025 ignition[922]: INFO : Stage: mount
Jan 30 13:05:42.759852 ignition[922]: INFO : no configs at "/usr/lib/ignition/base.d"
Jan 30 13:05:42.759852 ignition[922]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Jan 30 13:05:42.759852 ignition[922]: INFO : mount: mount passed
Jan 30 13:05:42.759852 ignition[922]: INFO : Ignition finished successfully
Jan 30 13:05:42.761760 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Jan 30 13:05:42.773572 systemd[1]: Starting ignition-files.service - Ignition (files)...
Jan 30 13:05:43.049853 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Jan 30 13:05:43.061705 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Jan 30 13:05:43.081162 kernel: BTRFS: device label OEM devid 1 transid 16 /dev/vda6 scanned by mount (931)
Jan 30 13:05:43.081220 kernel: BTRFS info (device vda6): first mount of filesystem 5265f28b-8d78-4be2-8b05-2145d9ab7cfa
Jan 30 13:05:43.081231 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm
Jan 30 13:05:43.082120 kernel: BTRFS info (device vda6): using free space tree
Jan 30 13:05:43.085451 kernel: BTRFS info (device vda6): auto enabling async discard
Jan 30 13:05:43.086475 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Jan 30 13:05:43.114458 ignition[948]: INFO : Ignition 2.20.0
Jan 30 13:05:43.114458 ignition[948]: INFO : Stage: files
Jan 30 13:05:43.116015 ignition[948]: INFO : no configs at "/usr/lib/ignition/base.d"
Jan 30 13:05:43.116015 ignition[948]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Jan 30 13:05:43.116015 ignition[948]: DEBUG : files: compiled without relabeling support, skipping
Jan 30 13:05:43.119973 ignition[948]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Jan 30 13:05:43.119973 ignition[948]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Jan 30 13:05:43.119973 ignition[948]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Jan 30 13:05:43.119973 ignition[948]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Jan 30 13:05:43.119973 ignition[948]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Jan 30 13:05:43.119367 unknown[948]: wrote ssh authorized keys file for user: core
Jan 30 13:05:43.127897 ignition[948]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-arm64.tar.gz"
Jan 30 13:05:43.127897 ignition[948]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.13.2-linux-arm64.tar.gz: attempt #1
Jan 30 13:05:43.166691 ignition[948]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
Jan 30 13:05:43.350520 ignition[948]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-arm64.tar.gz"
Jan 30 13:05:43.350520 ignition[948]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh"
Jan 30 13:05:43.354151 ignition[948]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh"
Jan 30 13:05:43.354151 ignition[948]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml"
Jan 30 13:05:43.354151 ignition[948]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml"
Jan 30 13:05:43.354151 ignition[948]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Jan 30 13:05:43.354151 ignition[948]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Jan 30 13:05:43.354151 ignition[948]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Jan 30 13:05:43.354151 ignition[948]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Jan 30 13:05:43.354151 ignition[948]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf"
Jan 30 13:05:43.354151 ignition[948]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Jan 30 13:05:43.354151 ignition[948]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.0-arm64.raw"
Jan 30 13:05:43.354151 ignition[948]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.0-arm64.raw"
Jan 30 13:05:43.354151 ignition[948]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.0-arm64.raw"
Jan 30 13:05:43.354151 ignition[948]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.31.0-arm64.raw: attempt #1
Jan 30 13:05:43.677866 ignition[948]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK
Jan 30 13:05:43.719932 systemd-networkd[764]: eth0: Gained IPv6LL
Jan 30 13:05:43.927289 ignition[948]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.0-arm64.raw"
Jan 30 13:05:43.927289 ignition[948]: INFO : files: op(b): [started] processing unit "prepare-helm.service"
Jan 30 13:05:43.932254 ignition[948]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Jan 30 13:05:43.934192 ignition[948]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Jan 30 13:05:43.934192 ignition[948]: INFO : files: op(b): [finished] processing unit "prepare-helm.service"
Jan 30 13:05:43.934192 ignition[948]: INFO : files: op(d): [started] processing unit "coreos-metadata.service"
Jan 30 13:05:43.934192 ignition[948]: INFO : files: op(d): op(e): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Jan 30 13:05:43.934192 ignition[948]: INFO : files: op(d): op(e): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Jan 30 13:05:43.934192 ignition[948]: INFO : files: op(d): [finished] processing unit "coreos-metadata.service"
Jan 30 13:05:43.934192 ignition[948]: INFO : files: op(f): [started] setting preset to disabled for "coreos-metadata.service"
Jan 30 13:05:43.977556 ignition[948]: INFO : files: op(f): op(10): [started] removing enablement symlink(s) for "coreos-metadata.service"
Jan 30 13:05:43.982276 ignition[948]: INFO : files: op(f): op(10): [finished] removing enablement symlink(s) for "coreos-metadata.service"
Jan 30 13:05:43.984052 ignition[948]: INFO : files: op(f): [finished] setting preset to disabled for "coreos-metadata.service"
Jan 30 13:05:43.984052 ignition[948]: INFO : files: op(11): [started] setting preset to enabled for "prepare-helm.service"
Jan 30 13:05:43.984052 ignition[948]: INFO : files: op(11): [finished] setting preset to enabled for "prepare-helm.service"
Jan 30 13:05:43.984052 ignition[948]: INFO : files: createResultFile: createFiles: op(12): [started] writing file "/sysroot/etc/.ignition-result.json"
Jan 30 13:05:43.984052 ignition[948]: INFO : files: createResultFile: createFiles: op(12): [finished] writing file "/sysroot/etc/.ignition-result.json"
Jan 30 13:05:43.984052 ignition[948]: INFO : files: files passed
Jan 30 13:05:43.984052 ignition[948]: INFO : Ignition finished successfully
Jan 30 13:05:43.984552 systemd[1]: Finished ignition-files.service - Ignition (files).
Jan 30 13:05:44.008651 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Jan 30 13:05:44.010937 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Jan 30 13:05:44.014197 systemd[1]: ignition-quench.service: Deactivated successfully.
Jan 30 13:05:44.015371 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Jan 30 13:05:44.022649 initrd-setup-root-after-ignition[976]: grep: /sysroot/oem/oem-release: No such file or directory
Jan 30 13:05:44.027081 initrd-setup-root-after-ignition[978]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Jan 30 13:05:44.027081 initrd-setup-root-after-ignition[978]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Jan 30 13:05:44.030743 initrd-setup-root-after-ignition[982]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Jan 30 13:05:44.030351 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Jan 30 13:05:44.032623 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Jan 30 13:05:44.047696 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Jan 30 13:05:44.077492 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Jan 30 13:05:44.077672 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Jan 30 13:05:44.080193 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Jan 30 13:05:44.082380 systemd[1]: Reached target initrd.target - Initrd Default Target.
Jan 30 13:05:44.084492 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Jan 30 13:05:44.085530 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Jan 30 13:05:44.108049 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Jan 30 13:05:44.122072 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Jan 30 13:05:44.131420 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Jan 30 13:05:44.132791 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Jan 30 13:05:44.134952 systemd[1]: Stopped target timers.target - Timer Units.
Jan 30 13:05:44.137144 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Jan 30 13:05:44.137316 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Jan 30 13:05:44.139798 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Jan 30 13:05:44.141888 systemd[1]: Stopped target basic.target - Basic System.
Jan 30 13:05:44.146042 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Jan 30 13:05:44.147292 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Jan 30 13:05:44.149316 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Jan 30 13:05:44.151629 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Jan 30 13:05:44.153386 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
Jan 30 13:05:44.155490 systemd[1]: Stopped target sysinit.target - System Initialization.
Jan 30 13:05:44.157533 systemd[1]: Stopped target local-fs.target - Local File Systems.
Jan 30 13:05:44.159504 systemd[1]: Stopped target swap.target - Swaps.
Jan 30 13:05:44.161209 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Jan 30 13:05:44.161354 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
Jan 30 13:05:44.169186 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
Jan 30 13:05:44.171271 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Jan 30 13:05:44.173205 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
Jan 30 13:05:44.176514 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Jan 30 13:05:44.177738 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Jan 30 13:05:44.177890 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
Jan 30 13:05:44.180864 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Jan 30 13:05:44.181120 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
Jan 30 13:05:44.183190 systemd[1]: Stopped target paths.target - Path Units.
Jan 30 13:05:44.184785 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Jan 30 13:05:44.185889 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Jan 30 13:05:44.187421 systemd[1]: Stopped target slices.target - Slice Units.
Jan 30 13:05:44.189119 systemd[1]: Stopped target sockets.target - Socket Units.
Jan 30 13:05:44.191152 systemd[1]: iscsid.socket: Deactivated successfully.
Jan 30 13:05:44.191259 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
Jan 30 13:05:44.193462 systemd[1]: iscsiuio.socket: Deactivated successfully.
Jan 30 13:05:44.193562 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Jan 30 13:05:44.195313 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Jan 30 13:05:44.195450 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
Jan 30 13:05:44.197568 systemd[1]: ignition-files.service: Deactivated successfully.
Jan 30 13:05:44.197687 systemd[1]: Stopped ignition-files.service - Ignition (files).
Jan 30 13:05:44.213683 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
Jan 30 13:05:44.214729 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Jan 30 13:05:44.214910 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
Jan 30 13:05:44.221950 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
Jan 30 13:05:44.223800 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Jan 30 13:05:44.224001 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
Jan 30 13:05:44.226376 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Jan 30 13:05:44.229364 ignition[1002]: INFO : Ignition 2.20.0
Jan 30 13:05:44.229364 ignition[1002]: INFO : Stage: umount
Jan 30 13:05:44.229364 ignition[1002]: INFO : no configs at "/usr/lib/ignition/base.d"
Jan 30 13:05:44.229364 ignition[1002]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Jan 30 13:05:44.229364 ignition[1002]: INFO : umount: umount passed
Jan 30 13:05:44.229364 ignition[1002]: INFO : Ignition finished successfully
Jan 30 13:05:44.226612 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
Jan 30 13:05:44.231322 systemd[1]: ignition-mount.service: Deactivated successfully.
Jan 30 13:05:44.231423 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
Jan 30 13:05:44.238035 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Jan 30 13:05:44.239475 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
Jan 30 13:05:44.246480 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Jan 30 13:05:44.247541 systemd[1]: Stopped target network.target - Network.
Jan 30 13:05:44.248758 systemd[1]: ignition-disks.service: Deactivated successfully.
Jan 30 13:05:44.248854 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
Jan 30 13:05:44.250669 systemd[1]: ignition-kargs.service: Deactivated successfully.
Jan 30 13:05:44.250736 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
Jan 30 13:05:44.252909 systemd[1]: ignition-setup.service: Deactivated successfully.
Jan 30 13:05:44.252965 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
Jan 30 13:05:44.254923 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
Jan 30 13:05:44.254985 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
Jan 30 13:05:44.257322 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
Jan 30 13:05:44.259403 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
Jan 30 13:05:44.265878 systemd[1]: systemd-resolved.service: Deactivated successfully.
Jan 30 13:05:44.266003 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
Jan 30 13:05:44.267481 systemd-networkd[764]: eth0: DHCPv6 lease lost
Jan 30 13:05:44.269009 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Jan 30 13:05:44.269077 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
Jan 30 13:05:44.272054 systemd[1]: systemd-networkd.service: Deactivated successfully.
Jan 30 13:05:44.272190 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
Jan 30 13:05:44.274729 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Jan 30 13:05:44.274889 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
Jan 30 13:05:44.283619 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
Jan 30 13:05:44.284682 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Jan 30 13:05:44.284760 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Jan 30 13:05:44.288681 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Jan 30 13:05:44.288754 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Jan 30 13:05:44.290940 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Jan 30 13:05:44.291008 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
Jan 30 13:05:44.293514 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
Jan 30 13:05:44.297271 systemd[1]: sysroot-boot.service: Deactivated successfully.
Jan 30 13:05:44.297379 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
Jan 30 13:05:44.300819 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Jan 30 13:05:44.300891 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
Jan 30 13:05:44.313815 systemd[1]: systemd-udevd.service: Deactivated successfully.
Jan 30 13:05:44.314491 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
Jan 30 13:05:44.316701 systemd[1]: network-cleanup.service: Deactivated successfully.
Jan 30 13:05:44.316910 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
Jan 30 13:05:44.319865 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Jan 30 13:05:44.319997 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
Jan 30 13:05:44.321958 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Jan 30 13:05:44.321999 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
Jan 30 13:05:44.324248 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Jan 30 13:05:44.324312 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
Jan 30 13:05:44.327448 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Jan 30 13:05:44.327520 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
Jan 30 13:05:44.330706 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Jan 30 13:05:44.330785 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jan 30 13:05:44.344643 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
Jan 30 13:05:44.345847 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Jan 30 13:05:44.345929 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Jan 30 13:05:44.348377 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Jan 30 13:05:44.348451 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Jan 30 13:05:44.351053 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Jan 30 13:05:44.352463 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
Jan 30 13:05:44.354705 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
Jan 30 13:05:44.358983 systemd[1]: Starting initrd-switch-root.service - Switch Root...
Jan 30 13:05:44.370892 systemd[1]: Switching root.
Jan 30 13:05:44.401377 systemd-journald[238]: Journal stopped
Jan 30 13:05:45.316817 systemd-journald[238]: Received SIGTERM from PID 1 (systemd).
Jan 30 13:05:45.316884 kernel: SELinux: policy capability network_peer_controls=1
Jan 30 13:05:45.316901 kernel: SELinux: policy capability open_perms=1
Jan 30 13:05:45.316916 kernel: SELinux: policy capability extended_socket_class=1
Jan 30 13:05:45.316926 kernel: SELinux: policy capability always_check_network=0
Jan 30 13:05:45.316936 kernel: SELinux: policy capability cgroup_seclabel=1
Jan 30 13:05:45.316946 kernel: SELinux: policy capability nnp_nosuid_transition=1
Jan 30 13:05:45.316955 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
Jan 30 13:05:45.316964 kernel: SELinux: policy capability ioctl_skip_cloexec=0
Jan 30 13:05:45.316974 kernel: audit: type=1403 audit(1738242344.558:2): auid=4294967295 ses=4294967295 lsm=selinux res=1
Jan 30 13:05:45.316985 systemd[1]: Successfully loaded SELinux policy in 33.797ms.
Jan 30 13:05:45.317008 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 10.525ms.
Jan 30 13:05:45.317020 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Jan 30 13:05:45.317030 systemd[1]: Detected virtualization kvm.
Jan 30 13:05:45.317041 systemd[1]: Detected architecture arm64.
Jan 30 13:05:45.317054 systemd[1]: Detected first boot.
Jan 30 13:05:45.317064 systemd[1]: Initializing machine ID from VM UUID.
Jan 30 13:05:45.317075 zram_generator::config[1048]: No configuration found.
Jan 30 13:05:45.317087 systemd[1]: Populated /etc with preset unit settings.
Jan 30 13:05:45.317098 systemd[1]: initrd-switch-root.service: Deactivated successfully.
Jan 30 13:05:45.317110 systemd[1]: Stopped initrd-switch-root.service - Switch Root.
Jan 30 13:05:45.317121 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
Jan 30 13:05:45.317132 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
Jan 30 13:05:45.317143 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
Jan 30 13:05:45.317153 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
Jan 30 13:05:45.317163 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
Jan 30 13:05:45.317173 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
Jan 30 13:05:45.317184 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
Jan 30 13:05:45.317196 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
Jan 30 13:05:45.317207 systemd[1]: Created slice user.slice - User and Session Slice.
Jan 30 13:05:45.317217 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Jan 30 13:05:45.317228 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Jan 30 13:05:45.317238 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
Jan 30 13:05:45.317249 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
Jan 30 13:05:45.317259 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
Jan 30 13:05:45.317270 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Jan 30 13:05:45.317281 systemd[1]: Expecting device dev-ttyAMA0.device - /dev/ttyAMA0...
Jan 30 13:05:45.317293 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Jan 30 13:05:45.317304 systemd[1]: Stopped target initrd-switch-root.target - Switch Root.
Jan 30 13:05:45.317314 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems.
Jan 30 13:05:45.317324 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System.
Jan 30 13:05:45.317335 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
Jan 30 13:05:45.317346 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Jan 30 13:05:45.317356 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Jan 30 13:05:45.317366 systemd[1]: Reached target slices.target - Slice Units.
Jan 30 13:05:45.317383 systemd[1]: Reached target swap.target - Swaps.
Jan 30 13:05:45.317394 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
Jan 30 13:05:45.317404 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
Jan 30 13:05:45.317414 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Jan 30 13:05:45.317432 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Jan 30 13:05:45.317444 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Jan 30 13:05:45.317455 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
Jan 30 13:05:45.317466 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
Jan 30 13:05:45.317476 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
Jan 30 13:05:45.317489 systemd[1]: Mounting media.mount - External Media Directory...
Jan 30 13:05:45.317500 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
Jan 30 13:05:45.317510 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
Jan 30 13:05:45.317522 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
Jan 30 13:05:45.317534 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Jan 30 13:05:45.317648 systemd[1]: Reached target machines.target - Containers.
Jan 30 13:05:45.317675 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
Jan 30 13:05:45.317686 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Jan 30 13:05:45.317701 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Jan 30 13:05:45.317713 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
Jan 30 13:05:45.317723 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Jan 30 13:05:45.317733 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Jan 30 13:05:45.317744 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Jan 30 13:05:45.317755 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
Jan 30 13:05:45.317766 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Jan 30 13:05:45.317784 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Jan 30 13:05:45.317795 systemd[1]: systemd-fsck-root.service: Deactivated successfully.
Jan 30 13:05:45.317808 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device.
Jan 30 13:05:45.317824 systemd[1]: systemd-fsck-usr.service: Deactivated successfully.
Jan 30 13:05:45.317835 systemd[1]: Stopped systemd-fsck-usr.service.
Jan 30 13:05:45.317845 kernel: fuse: init (API version 7.39)
Jan 30 13:05:45.317854 kernel: loop: module loaded
Jan 30 13:05:45.317864 systemd[1]: Starting systemd-journald.service - Journal Service...
Jan 30 13:05:45.317874 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Jan 30 13:05:45.317887 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Jan 30 13:05:45.317897 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
Jan 30 13:05:45.317909 kernel: ACPI: bus type drm_connector registered
Jan 30 13:05:45.317919 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Jan 30 13:05:45.317930 systemd[1]: verity-setup.service: Deactivated successfully.
Jan 30 13:05:45.317940 systemd[1]: Stopped verity-setup.service.
Jan 30 13:05:45.317951 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
Jan 30 13:05:45.317961 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
Jan 30 13:05:45.318000 systemd-journald[1115]: Collecting audit messages is disabled.
Jan 30 13:05:45.318021 systemd[1]: Mounted media.mount - External Media Directory.
Jan 30 13:05:45.318035 systemd-journald[1115]: Journal started
Jan 30 13:05:45.318064 systemd-journald[1115]: Runtime Journal (/run/log/journal/59f637a227334d7a91945d180e9e76d8) is 5.9M, max 47.3M, 41.4M free.
Jan 30 13:05:45.318110 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
Jan 30 13:05:45.040019 systemd[1]: Queued start job for default target multi-user.target.
Jan 30 13:05:45.063671 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6.
Jan 30 13:05:45.064070 systemd[1]: systemd-journald.service: Deactivated successfully.
Jan 30 13:05:45.322386 systemd[1]: Started systemd-journald.service - Journal Service.
Jan 30 13:05:45.323201 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System.
Jan 30 13:05:45.324686 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp.
Jan 30 13:05:45.326090 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files.
Jan 30 13:05:45.327763 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Jan 30 13:05:45.329473 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Jan 30 13:05:45.329653 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
Jan 30 13:05:45.331225 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Jan 30 13:05:45.331388 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Jan 30 13:05:45.332951 systemd[1]: modprobe@drm.service: Deactivated successfully.
Jan 30 13:05:45.333102 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Jan 30 13:05:45.334586 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Jan 30 13:05:45.334744 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Jan 30 13:05:45.336510 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Jan 30 13:05:45.336652 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse.
Jan 30 13:05:45.338078 systemd[1]: modprobe@loop.service: Deactivated successfully.
Jan 30 13:05:45.338219 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Jan 30 13:05:45.339961 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Jan 30 13:05:45.341515 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Jan 30 13:05:45.343288 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems.
Jan 30 13:05:45.358288 systemd[1]: Reached target network-pre.target - Preparation for Network.
Jan 30 13:05:45.373582 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System...
Jan 30 13:05:45.376587 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System...
Jan 30 13:05:45.378073 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Jan 30 13:05:45.378150 systemd[1]: Reached target local-fs.target - Local File Systems.
Jan 30 13:05:45.381117 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink).
Jan 30 13:05:45.383848 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown...
Jan 30 13:05:45.386508 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache...
Jan 30 13:05:45.387876 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Jan 30 13:05:45.389884 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database...
Jan 30 13:05:45.392407 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage...
Jan 30 13:05:45.393854 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Jan 30 13:05:45.395138 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed...
Jan 30 13:05:45.396555 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Jan 30 13:05:45.397953 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Jan 30 13:05:45.403717 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/...
Jan 30 13:05:45.408701 systemd[1]: Starting systemd-sysusers.service - Create System Users...
Jan 30 13:05:45.412564 systemd-journald[1115]: Time spent on flushing to /var/log/journal/59f637a227334d7a91945d180e9e76d8 is 15.570ms for 857 entries.
Jan 30 13:05:45.412564 systemd-journald[1115]: System Journal (/var/log/journal/59f637a227334d7a91945d180e9e76d8) is 8.0M, max 195.6M, 187.6M free.
Jan 30 13:05:45.445309 systemd-journald[1115]: Received client request to flush runtime journal.
Jan 30 13:05:45.445349 kernel: loop0: detected capacity change from 0 to 189592
Jan 30 13:05:45.412668 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Jan 30 13:05:45.415692 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System.
Jan 30 13:05:45.417415 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System.
Jan 30 13:05:45.420510 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown.
Jan 30 13:05:45.438963 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization...
Jan 30 13:05:45.444022 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed.
Jan 30 13:05:45.446110 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Jan 30 13:05:45.447980 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage.
Jan 30 13:05:45.451931 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
Jan 30 13:05:45.451840 systemd[1]: Reached target first-boot-complete.target - First Boot Complete.
Jan 30 13:05:45.458295 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk...
Jan 30 13:05:45.463726 udevadm[1167]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation.service, lvm2-activation-early.service not to pull it in.
Jan 30 13:05:45.479821 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
Jan 30 13:05:45.480810 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk.
Jan 30 13:05:45.501205 systemd[1]: Finished systemd-sysusers.service - Create System Users.
Jan 30 13:05:45.508609 kernel: loop1: detected capacity change from 0 to 116784
Jan 30 13:05:45.510741 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Jan 30 13:05:45.543549 systemd-tmpfiles[1178]: ACLs are not supported, ignoring.
Jan 30 13:05:45.543567 systemd-tmpfiles[1178]: ACLs are not supported, ignoring.
Jan 30 13:05:45.549699 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Jan 30 13:05:45.560461 kernel: loop2: detected capacity change from 0 to 113552
Jan 30 13:05:45.603496 kernel: loop3: detected capacity change from 0 to 189592
Jan 30 13:05:45.616694 kernel: loop4: detected capacity change from 0 to 116784
Jan 30 13:05:45.628637 kernel: loop5: detected capacity change from 0 to 113552
Jan 30 13:05:45.632250 (sd-merge)[1184]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes'.
Jan 30 13:05:45.632713 (sd-merge)[1184]: Merged extensions into '/usr'.
Jan 30 13:05:45.636003 systemd[1]: Reloading requested from client PID 1159 ('systemd-sysext') (unit systemd-sysext.service)...
Jan 30 13:05:45.636018 systemd[1]: Reloading...
Jan 30 13:05:45.689279 zram_generator::config[1206]: No configuration found.
Jan 30 13:05:45.802370 ldconfig[1154]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
Jan 30 13:05:45.803748 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Jan 30 13:05:45.842282 systemd[1]: Reloading finished in 205 ms.
Jan 30 13:05:45.878476 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache.
Jan 30 13:05:45.880087 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/.
Jan 30 13:05:45.897659 systemd[1]: Starting ensure-sysext.service...
Jan 30 13:05:45.900043 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Jan 30 13:05:45.916199 systemd[1]: Reloading requested from client PID 1244 ('systemctl') (unit ensure-sysext.service)...
Jan 30 13:05:45.916219 systemd[1]: Reloading...
Jan 30 13:05:45.970466 zram_generator::config[1271]: No configuration found.
Jan 30 13:05:46.001839 systemd-tmpfiles[1245]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
Jan 30 13:05:46.002070 systemd-tmpfiles[1245]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring.
Jan 30 13:05:46.002753 systemd-tmpfiles[1245]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
Jan 30 13:05:46.002980 systemd-tmpfiles[1245]: ACLs are not supported, ignoring.
Jan 30 13:05:46.003034 systemd-tmpfiles[1245]: ACLs are not supported, ignoring.
Jan 30 13:05:46.006113 systemd-tmpfiles[1245]: Detected autofs mount point /boot during canonicalization of boot.
Jan 30 13:05:46.006127 systemd-tmpfiles[1245]: Skipping /boot
Jan 30 13:05:46.014898 systemd-tmpfiles[1245]: Detected autofs mount point /boot during canonicalization of boot.
Jan 30 13:05:46.014916 systemd-tmpfiles[1245]: Skipping /boot
Jan 30 13:05:46.084882 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Jan 30 13:05:46.121975 systemd[1]: Reloading finished in 205 ms.
Jan 30 13:05:46.145544 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Jan 30 13:05:46.158576 systemd[1]: Starting audit-rules.service - Load Audit Rules...
Jan 30 13:05:46.161580 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs...
Jan 30 13:05:46.166368 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog...
Jan 30 13:05:46.170036 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Jan 30 13:05:46.172790 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP...
Jan 30 13:05:46.180291 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database.
Jan 30 13:05:46.191595 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Jan 30 13:05:46.207684 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Jan 30 13:05:46.211823 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Jan 30 13:05:46.216486 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Jan 30 13:05:46.220756 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Jan 30 13:05:46.225896 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Jan 30 13:05:46.230300 systemd[1]: Starting systemd-userdbd.service - User Database Manager...
Jan 30 13:05:46.234478 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog.
Jan 30 13:05:46.236575 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Jan 30 13:05:46.236718 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Jan 30 13:05:46.238760 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Jan 30 13:05:46.238945 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Jan 30 13:05:46.242152 systemd[1]: modprobe@loop.service: Deactivated successfully.
Jan 30 13:05:46.242309 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Jan 30 13:05:46.248153 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP.
Jan 30 13:05:46.257686 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Jan 30 13:05:46.271951 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Jan 30 13:05:46.275849 systemd-udevd[1330]: Using default interface naming scheme 'v255'.
Jan 30 13:05:46.280970 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Jan 30 13:05:46.291796 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Jan 30 13:05:46.295043 augenrules[1344]: No rules
Jan 30 13:05:46.295653 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Jan 30 13:05:46.297458 systemd[1]: Starting systemd-update-done.service - Update is Completed...
Jan 30 13:05:46.299472 systemd[1]: Started systemd-userdbd.service - User Database Manager.
Jan 30 13:05:46.301453 systemd[1]: audit-rules.service: Deactivated successfully.
Jan 30 13:05:46.301637 systemd[1]: Finished audit-rules.service - Load Audit Rules.
Jan 30 13:05:46.303241 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs.
Jan 30 13:05:46.305381 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Jan 30 13:05:46.307463 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Jan 30 13:05:46.309712 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Jan 30 13:05:46.309910 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Jan 30 13:05:46.311894 systemd[1]: modprobe@loop.service: Deactivated successfully.
Jan 30 13:05:46.312057 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Jan 30 13:05:46.322633 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Jan 30 13:05:46.324776 systemd[1]: Finished systemd-update-done.service - Update is Completed.
Jan 30 13:05:46.332108 systemd[1]: Finished ensure-sysext.service.
Jan 30 13:05:46.343967 systemd[1]: Starting audit-rules.service - Load Audit Rules...
Jan 30 13:05:46.346261 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Jan 30 13:05:46.347603 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Jan 30 13:05:46.351067 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Jan 30 13:05:46.355742 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Jan 30 13:05:46.364503 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Jan 30 13:05:46.365764 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Jan 30 13:05:46.372866 systemd-resolved[1311]: Positive Trust Anchors:
Jan 30 13:05:46.372885 systemd-resolved[1311]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Jan 30 13:05:46.372919 systemd-resolved[1311]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Jan 30 13:05:46.373693 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Jan 30 13:05:46.380850 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization...
Jan 30 13:05:46.382051 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Jan 30 13:05:46.382691 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Jan 30 13:05:46.382899 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Jan 30 13:05:46.384600 systemd[1]: modprobe@drm.service: Deactivated successfully.
Jan 30 13:05:46.386469 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Jan 30 13:05:46.401139 systemd-resolved[1311]: Defaulting to hostname 'linux'.
Jan 30 13:05:46.406639 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Jan 30 13:05:46.406820 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Jan 30 13:05:46.410846 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Jan 30 13:05:46.415194 systemd[1]: Condition check resulted in dev-ttyAMA0.device - /dev/ttyAMA0 being skipped.
Jan 30 13:05:46.415496 systemd[1]: modprobe@loop.service: Deactivated successfully.
Jan 30 13:05:46.417493 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Jan 30 13:05:46.420696 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Jan 30 13:05:46.423322 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Jan 30 13:05:46.423404 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Jan 30 13:05:46.435972 augenrules[1376]: /sbin/augenrules: No change
Jan 30 13:05:46.462377 augenrules[1415]: No rules
Jan 30 13:05:46.463909 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization.
Jan 30 13:05:46.466937 systemd[1]: audit-rules.service: Deactivated successfully.
Jan 30 13:05:46.473289 systemd[1]: Finished audit-rules.service - Load Audit Rules.
Jan 30 13:05:46.474659 systemd[1]: Reached target time-set.target - System Time Set.
Jan 30 13:05:46.491491 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 39 scanned by (udev-worker) (1365)
Jan 30 13:05:46.517617 systemd-networkd[1389]: lo: Link UP
Jan 30 13:05:46.517626 systemd-networkd[1389]: lo: Gained carrier
Jan 30 13:05:46.520708 systemd-networkd[1389]: Enumeration completed
Jan 30 13:05:46.520858 systemd[1]: Started systemd-networkd.service - Network Configuration.
Jan 30 13:05:46.522788 systemd[1]: Reached target network.target - Network.
Jan 30 13:05:46.531250 systemd-networkd[1389]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jan 30 13:05:46.531260 systemd-networkd[1389]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Jan 30 13:05:46.532094 systemd-networkd[1389]: eth0: Link UP
Jan 30 13:05:46.532099 systemd-networkd[1389]: eth0: Gained carrier
Jan 30 13:05:46.532115 systemd-networkd[1389]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jan 30 13:05:46.536714 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured...
Jan 30 13:05:46.542596 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Jan 30 13:05:46.545948 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM...
Jan 30 13:05:46.551762 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jan 30 13:05:46.565521 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization.
Jan 30 13:05:46.565527 systemd-networkd[1389]: eth0: DHCPv4 address 10.0.0.110/16, gateway 10.0.0.1 acquired from 10.0.0.1
Jan 30 13:05:46.569898 systemd-timesyncd[1393]: Network configuration changed, trying to establish connection.
Jan 30 13:05:47.038707 systemd-timesyncd[1393]: Contacted time server 10.0.0.1:123 (10.0.0.1).
Jan 30 13:05:47.038792 systemd-timesyncd[1393]: Initial clock synchronization to Thu 2025-01-30 13:05:47.038570 UTC.
Jan 30 13:05:47.038850 systemd-resolved[1311]: Clock change detected. Flushing caches.
Jan 30 13:05:47.054965 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes...
Jan 30 13:05:47.059605 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM.
Jan 30 13:05:47.093366 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Jan 30 13:05:47.112331 lvm[1432]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Jan 30 13:05:47.152294 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes.
Jan 30 13:05:47.154900 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Jan 30 13:05:47.156298 systemd[1]: Reached target sysinit.target - System Initialization.
Jan 30 13:05:47.158616 systemd[1]: Started motdgen.path - Watch for update engine configuration changes.
Jan 30 13:05:47.160168 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data.
Jan 30 13:05:47.162233 systemd[1]: Started logrotate.timer - Daily rotation of log files.
Jan 30 13:05:47.163748 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information..
Jan 30 13:05:47.165232 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories.
Jan 30 13:05:47.166619 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Jan 30 13:05:47.166654 systemd[1]: Reached target paths.target - Path Units. Jan 30 13:05:47.167635 systemd[1]: Reached target timers.target - Timer Units. Jan 30 13:05:47.175618 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Jan 30 13:05:47.179296 systemd[1]: Starting docker.socket - Docker Socket for the API... Jan 30 13:05:47.189968 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Jan 30 13:05:47.198124 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes... Jan 30 13:05:47.202062 systemd[1]: Listening on docker.socket - Docker Socket for the API. Jan 30 13:05:47.203627 systemd[1]: Reached target sockets.target - Socket Units. Jan 30 13:05:47.204803 systemd[1]: Reached target basic.target - Basic System. Jan 30 13:05:47.205895 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Jan 30 13:05:47.205974 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Jan 30 13:05:47.207249 systemd[1]: Starting containerd.service - containerd container runtime... Jan 30 13:05:47.209942 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Jan 30 13:05:47.212929 lvm[1441]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Jan 30 13:05:47.215001 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Jan 30 13:05:47.221078 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Jan 30 13:05:47.224996 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). 
Jan 30 13:05:47.227049 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Jan 30 13:05:47.237224 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Jan 30 13:05:47.239763 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Jan 30 13:05:47.247116 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Jan 30 13:05:47.258469 jq[1444]: false Jan 30 13:05:47.277237 systemd[1]: Starting systemd-logind.service - User Login Management... Jan 30 13:05:47.295550 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Jan 30 13:05:47.296220 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Jan 30 13:05:47.298974 systemd[1]: Starting update-engine.service - Update Engine... Jan 30 13:05:47.301412 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Jan 30 13:05:47.303725 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes. Jan 30 13:05:47.307271 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Jan 30 13:05:47.307455 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Jan 30 13:05:47.310206 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Jan 30 13:05:47.311958 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Jan 30 13:05:47.315963 jq[1460]: true Jan 30 13:05:47.335444 systemd[1]: motdgen.service: Deactivated successfully. Jan 30 13:05:47.336984 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. 
Jan 30 13:05:47.338043 dbus-daemon[1443]: [system] SELinux support is enabled Jan 30 13:05:47.340402 extend-filesystems[1445]: Found loop3 Jan 30 13:05:47.340402 extend-filesystems[1445]: Found loop4 Jan 30 13:05:47.340402 extend-filesystems[1445]: Found loop5 Jan 30 13:05:47.340402 extend-filesystems[1445]: Found vda Jan 30 13:05:47.340402 extend-filesystems[1445]: Found vda1 Jan 30 13:05:47.340402 extend-filesystems[1445]: Found vda2 Jan 30 13:05:47.340402 extend-filesystems[1445]: Found vda3 Jan 30 13:05:47.340402 extend-filesystems[1445]: Found usr Jan 30 13:05:47.355564 extend-filesystems[1445]: Found vda4 Jan 30 13:05:47.355564 extend-filesystems[1445]: Found vda6 Jan 30 13:05:47.355564 extend-filesystems[1445]: Found vda7 Jan 30 13:05:47.355564 extend-filesystems[1445]: Found vda9 Jan 30 13:05:47.355564 extend-filesystems[1445]: Checking size of /dev/vda9 Jan 30 13:05:47.342009 systemd[1]: Started dbus.service - D-Bus System Message Bus. Jan 30 13:05:47.347700 (ntainerd)[1468]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Jan 30 13:05:47.365088 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Jan 30 13:05:47.365339 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Jan 30 13:05:47.367574 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Jan 30 13:05:47.367658 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. 
Jan 30 13:05:47.391577 jq[1465]: true Jan 30 13:05:47.441366 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 39 scanned by (udev-worker) (1381) Jan 30 13:05:47.441484 extend-filesystems[1445]: Resized partition /dev/vda9 Jan 30 13:05:47.457739 tar[1463]: linux-arm64/helm Jan 30 13:05:47.467411 extend-filesystems[1481]: resize2fs 1.47.1 (20-May-2024) Jan 30 13:05:47.474802 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 1864699 blocks Jan 30 13:05:47.501802 kernel: EXT4-fs (vda9): resized filesystem to 1864699 Jan 30 13:05:47.532945 systemd-logind[1453]: Watching system buttons on /dev/input/event0 (Power Button) Jan 30 13:05:47.538855 systemd-logind[1453]: New seat seat0. Jan 30 13:05:47.551449 systemd[1]: Started systemd-logind.service - User Login Management. Jan 30 13:05:47.557062 extend-filesystems[1481]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required Jan 30 13:05:47.557062 extend-filesystems[1481]: old_desc_blocks = 1, new_desc_blocks = 1 Jan 30 13:05:47.557062 extend-filesystems[1481]: The filesystem on /dev/vda9 is now 1864699 (4k) blocks long. Jan 30 13:05:47.565236 extend-filesystems[1445]: Resized filesystem in /dev/vda9 Jan 30 13:05:47.573246 systemd[1]: extend-filesystems.service: Deactivated successfully. Jan 30 13:05:47.575764 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Jan 30 13:05:47.622653 update_engine[1459]: I20250130 13:05:47.622448 1459 main.cc:92] Flatcar Update Engine starting Jan 30 13:05:47.645012 bash[1496]: Updated "/home/core/.ssh/authorized_keys" Jan 30 13:05:47.645883 update_engine[1459]: I20250130 13:05:47.645828 1459 update_check_scheduler.cc:74] Next update check in 10m5s Jan 30 13:05:47.646187 systemd[1]: Started update-engine.service - Update Engine. Jan 30 13:05:47.664271 systemd[1]: Started locksmithd.service - Cluster reboot manager. 
Jan 30 13:05:47.666793 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition.
Jan 30 13:05:47.672436 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met.
Jan 30 13:05:47.779037 locksmithd[1505]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot"
Jan 30 13:05:47.865338 containerd[1468]: time="2025-01-30T13:05:47.865194146Z" level=info msg="starting containerd" revision=9b2ad7760328148397346d10c7b2004271249db4 version=v1.7.23
Jan 30 13:05:47.892925 containerd[1468]: time="2025-01-30T13:05:47.892874986Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
Jan 30 13:05:47.894467 containerd[1468]: time="2025-01-30T13:05:47.894422466Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.74-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1
Jan 30 13:05:47.894501 containerd[1468]: time="2025-01-30T13:05:47.894468186Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
Jan 30 13:05:47.894501 containerd[1468]: time="2025-01-30T13:05:47.894490666Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
Jan 30 13:05:47.894679 containerd[1468]: time="2025-01-30T13:05:47.894659346Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
Jan 30 13:05:47.894706 containerd[1468]: time="2025-01-30T13:05:47.894682826Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
Jan 30 13:05:47.894763 containerd[1468]: time="2025-01-30T13:05:47.894743946Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
Jan 30 13:05:47.894798 containerd[1468]: time="2025-01-30T13:05:47.894761626Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
Jan 30 13:05:47.894991 containerd[1468]: time="2025-01-30T13:05:47.894968106Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Jan 30 13:05:47.895025 containerd[1468]: time="2025-01-30T13:05:47.894989146Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
Jan 30 13:05:47.895025 containerd[1468]: time="2025-01-30T13:05:47.895003506Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1
Jan 30 13:05:47.895025 containerd[1468]: time="2025-01-30T13:05:47.895013346Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
Jan 30 13:05:47.895265 containerd[1468]: time="2025-01-30T13:05:47.895242146Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
Jan 30 13:05:47.895511 containerd[1468]: time="2025-01-30T13:05:47.895490466Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
Jan 30 13:05:47.895626 containerd[1468]: time="2025-01-30T13:05:47.895607066Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Jan 30 13:05:47.895626 containerd[1468]: time="2025-01-30T13:05:47.895624906Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
Jan 30 13:05:47.895742 containerd[1468]: time="2025-01-30T13:05:47.895723986Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
Jan 30 13:05:47.895816 containerd[1468]: time="2025-01-30T13:05:47.895799586Z" level=info msg="metadata content store policy set" policy=shared
Jan 30 13:05:47.901399 containerd[1468]: time="2025-01-30T13:05:47.901356866Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
Jan 30 13:05:47.901451 containerd[1468]: time="2025-01-30T13:05:47.901431226Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
Jan 30 13:05:47.901478 containerd[1468]: time="2025-01-30T13:05:47.901458626Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
Jan 30 13:05:47.901514 containerd[1468]: time="2025-01-30T13:05:47.901478546Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
Jan 30 13:05:47.901514 containerd[1468]: time="2025-01-30T13:05:47.901496026Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
Jan 30 13:05:47.901749 containerd[1468]: time="2025-01-30T13:05:47.901726306Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
Jan 30 13:05:47.902050 containerd[1468]: time="2025-01-30T13:05:47.902028746Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
Jan 30 13:05:47.902191 containerd[1468]: time="2025-01-30T13:05:47.902173026Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
Jan 30 13:05:47.902216 containerd[1468]: time="2025-01-30T13:05:47.902197186Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
Jan 30 13:05:47.902245 containerd[1468]: time="2025-01-30T13:05:47.902214146Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
Jan 30 13:05:47.902245 containerd[1468]: time="2025-01-30T13:05:47.902228746Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
Jan 30 13:05:47.902279 containerd[1468]: time="2025-01-30T13:05:47.902243106Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
Jan 30 13:05:47.902279 containerd[1468]: time="2025-01-30T13:05:47.902257226Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
Jan 30 13:05:47.902279 containerd[1468]: time="2025-01-30T13:05:47.902270866Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
Jan 30 13:05:47.902326 containerd[1468]: time="2025-01-30T13:05:47.902285866Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
Jan 30 13:05:47.902326 containerd[1468]: time="2025-01-30T13:05:47.902299906Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
Jan 30 13:05:47.902326 containerd[1468]: time="2025-01-30T13:05:47.902314026Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
Jan 30 13:05:47.902378 containerd[1468]: time="2025-01-30T13:05:47.902326586Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
Jan 30 13:05:47.902378 containerd[1468]: time="2025-01-30T13:05:47.902348666Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
Jan 30 13:05:47.902378 containerd[1468]: time="2025-01-30T13:05:47.902364586Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
Jan 30 13:05:47.902429 containerd[1468]: time="2025-01-30T13:05:47.902378906Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
Jan 30 13:05:47.902429 containerd[1468]: time="2025-01-30T13:05:47.902391306Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
Jan 30 13:05:47.902429 containerd[1468]: time="2025-01-30T13:05:47.902407106Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
Jan 30 13:05:47.902429 containerd[1468]: time="2025-01-30T13:05:47.902422906Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
Jan 30 13:05:47.902498 containerd[1468]: time="2025-01-30T13:05:47.902442586Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
Jan 30 13:05:47.902498 containerd[1468]: time="2025-01-30T13:05:47.902457346Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
Jan 30 13:05:47.902498 containerd[1468]: time="2025-01-30T13:05:47.902471866Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
Jan 30 13:05:47.902498 containerd[1468]: time="2025-01-30T13:05:47.902485986Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
Jan 30 13:05:47.902580 containerd[1468]: time="2025-01-30T13:05:47.902499106Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
Jan 30 13:05:47.902580 containerd[1468]: time="2025-01-30T13:05:47.902512586Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
Jan 30 13:05:47.902580 containerd[1468]: time="2025-01-30T13:05:47.902527426Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
Jan 30 13:05:47.902580 containerd[1468]: time="2025-01-30T13:05:47.902543946Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
Jan 30 13:05:47.902580 containerd[1468]: time="2025-01-30T13:05:47.902578506Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
Jan 30 13:05:47.902669 containerd[1468]: time="2025-01-30T13:05:47.902594706Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
Jan 30 13:05:47.902669 containerd[1468]: time="2025-01-30T13:05:47.902606306Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
Jan 30 13:05:47.902986 containerd[1468]: time="2025-01-30T13:05:47.902969146Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
Jan 30 13:05:47.903021 containerd[1468]: time="2025-01-30T13:05:47.902995546Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1
Jan 30 13:05:47.903021 containerd[1468]: time="2025-01-30T13:05:47.903009266Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
Jan 30 13:05:47.903058 containerd[1468]: time="2025-01-30T13:05:47.903022706Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1
Jan 30 13:05:47.903058 containerd[1468]: time="2025-01-30T13:05:47.903034026Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
Jan 30 13:05:47.903058 containerd[1468]: time="2025-01-30T13:05:47.903047466Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
Jan 30 13:05:47.903121 containerd[1468]: time="2025-01-30T13:05:47.903057826Z" level=info msg="NRI interface is disabled by configuration."
Jan 30 13:05:47.903121 containerd[1468]: time="2025-01-30T13:05:47.903069426Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1
Jan 30 13:05:47.903638 containerd[1468]: time="2025-01-30T13:05:47.903582546Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}"
Jan 30 13:05:47.903762 containerd[1468]: time="2025-01-30T13:05:47.903649466Z" level=info msg="Connect containerd service"
Jan 30 13:05:47.903762 containerd[1468]: time="2025-01-30T13:05:47.903692306Z" level=info msg="using legacy CRI server"
Jan 30 13:05:47.903762 containerd[1468]: time="2025-01-30T13:05:47.903700226Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this"
Jan 30 13:05:47.904183 containerd[1468]: time="2025-01-30T13:05:47.904161826Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\""
Jan 30 13:05:47.905218 containerd[1468]: time="2025-01-30T13:05:47.905175146Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Jan 30 13:05:47.905517 containerd[1468]: time="2025-01-30T13:05:47.905472786Z" level=info msg="Start subscribing containerd event"
Jan 30 13:05:47.905557 containerd[1468]: time="2025-01-30T13:05:47.905547426Z" level=info msg="Start recovering state"
Jan 30 13:05:47.905655 containerd[1468]: time="2025-01-30T13:05:47.905636866Z" level=info msg="Start event monitor"
Jan 30 13:05:47.905683 containerd[1468]: time="2025-01-30T13:05:47.905654306Z" level=info msg="Start snapshots syncer"
Jan 30 13:05:47.905683 containerd[1468]: time="2025-01-30T13:05:47.905665346Z" level=info msg="Start cni network conf syncer for default"
Jan 30 13:05:47.905683 containerd[1468]: time="2025-01-30T13:05:47.905674626Z" level=info msg="Start streaming server"
Jan 30 13:05:47.906017 containerd[1468]: time="2025-01-30T13:05:47.905993706Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc
Jan 30 13:05:47.906075 containerd[1468]: time="2025-01-30T13:05:47.906060546Z" level=info msg=serving... address=/run/containerd/containerd.sock
Jan 30 13:05:47.906258 systemd[1]: Started containerd.service - containerd container runtime.
Jan 30 13:05:47.908433 containerd[1468]: time="2025-01-30T13:05:47.908393426Z" level=info msg="containerd successfully booted in 0.046124s"
Jan 30 13:05:47.929230 tar[1463]: linux-arm64/LICENSE
Jan 30 13:05:47.929790 tar[1463]: linux-arm64/README.md
Jan 30 13:05:47.946557 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin.
Jan 30 13:05:48.555117 sshd_keygen[1461]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Jan 30 13:05:48.580457 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Jan 30 13:05:48.596160 systemd[1]: Starting issuegen.service - Generate /run/issue... Jan 30 13:05:48.604218 systemd[1]: issuegen.service: Deactivated successfully. Jan 30 13:05:48.604477 systemd[1]: Finished issuegen.service - Generate /run/issue. Jan 30 13:05:48.608436 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Jan 30 13:05:48.624843 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Jan 30 13:05:48.643344 systemd[1]: Started getty@tty1.service - Getty on tty1. Jan 30 13:05:48.649218 systemd[1]: Started serial-getty@ttyAMA0.service - Serial Getty on ttyAMA0. Jan 30 13:05:48.651191 systemd[1]: Reached target getty.target - Login Prompts. Jan 30 13:05:48.793973 systemd-networkd[1389]: eth0: Gained IPv6LL Jan 30 13:05:48.796954 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Jan 30 13:05:48.799497 systemd[1]: Reached target network-online.target - Network is Online. Jan 30 13:05:48.818135 systemd[1]: Starting coreos-metadata.service - QEMU metadata agent... Jan 30 13:05:48.821879 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 30 13:05:48.824388 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Jan 30 13:05:48.846374 systemd[1]: coreos-metadata.service: Deactivated successfully. Jan 30 13:05:48.846717 systemd[1]: Finished coreos-metadata.service - QEMU metadata agent. Jan 30 13:05:48.849318 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Jan 30 13:05:48.860475 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Jan 30 13:05:49.446398 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Jan 30 13:05:49.448150 systemd[1]: Reached target multi-user.target - Multi-User System. Jan 30 13:05:49.449507 systemd[1]: Startup finished in 704ms (kernel) + 4.795s (initrd) + 4.466s (userspace) = 9.966s. Jan 30 13:05:49.449888 (kubelet)[1557]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 30 13:05:49.462079 agetty[1534]: failed to open credentials directory Jan 30 13:05:49.462587 agetty[1532]: failed to open credentials directory Jan 30 13:05:49.895614 kubelet[1557]: E0130 13:05:49.895551 1557 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 30 13:05:49.897947 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 30 13:05:49.898091 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 30 13:05:53.372474 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Jan 30 13:05:53.373673 systemd[1]: Started sshd@0-10.0.0.110:22-10.0.0.1:54676.service - OpenSSH per-connection server daemon (10.0.0.1:54676). Jan 30 13:05:53.439816 sshd[1570]: Accepted publickey for core from 10.0.0.1 port 54676 ssh2: RSA SHA256:DFbjE3cliO0t0vQoroiQEd9uw5v6TFYRV953GUOdMNo Jan 30 13:05:53.444470 sshd-session[1570]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 13:05:53.452017 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Jan 30 13:05:53.471066 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Jan 30 13:05:53.472830 systemd-logind[1453]: New session 1 of user core. Jan 30 13:05:53.480737 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. 
Jan 30 13:05:53.490059 systemd[1]: Starting user@500.service - User Manager for UID 500...
Jan 30 13:05:53.492607 (systemd)[1574]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0)
Jan 30 13:05:53.563635 systemd[1574]: Queued start job for default target default.target.
Jan 30 13:05:53.575921 systemd[1574]: Created slice app.slice - User Application Slice.
Jan 30 13:05:53.575970 systemd[1574]: Reached target paths.target - Paths.
Jan 30 13:05:53.575982 systemd[1574]: Reached target timers.target - Timers.
Jan 30 13:05:53.577270 systemd[1574]: Starting dbus.socket - D-Bus User Message Bus Socket...
Jan 30 13:05:53.587596 systemd[1574]: Listening on dbus.socket - D-Bus User Message Bus Socket.
Jan 30 13:05:53.587664 systemd[1574]: Reached target sockets.target - Sockets.
Jan 30 13:05:53.587676 systemd[1574]: Reached target basic.target - Basic System.
Jan 30 13:05:53.587711 systemd[1574]: Reached target default.target - Main User Target.
Jan 30 13:05:53.587738 systemd[1574]: Startup finished in 89ms.
Jan 30 13:05:53.587995 systemd[1]: Started user@500.service - User Manager for UID 500.
Jan 30 13:05:53.589360 systemd[1]: Started session-1.scope - Session 1 of User core.
Jan 30 13:05:53.648689 systemd[1]: Started sshd@1-10.0.0.110:22-10.0.0.1:54692.service - OpenSSH per-connection server daemon (10.0.0.1:54692).
Jan 30 13:05:53.691489 sshd[1585]: Accepted publickey for core from 10.0.0.1 port 54692 ssh2: RSA SHA256:DFbjE3cliO0t0vQoroiQEd9uw5v6TFYRV953GUOdMNo
Jan 30 13:05:53.692821 sshd-session[1585]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 30 13:05:53.696857 systemd-logind[1453]: New session 2 of user core.
Jan 30 13:05:53.704973 systemd[1]: Started session-2.scope - Session 2 of User core.
Jan 30 13:05:53.756858 sshd[1587]: Connection closed by 10.0.0.1 port 54692
Jan 30 13:05:53.757198 sshd-session[1585]: pam_unix(sshd:session): session closed for user core
Jan 30 13:05:53.771638 systemd[1]: sshd@1-10.0.0.110:22-10.0.0.1:54692.service: Deactivated successfully.
Jan 30 13:05:53.773385 systemd[1]: session-2.scope: Deactivated successfully.
Jan 30 13:05:53.774936 systemd-logind[1453]: Session 2 logged out. Waiting for processes to exit.
Jan 30 13:05:53.786080 systemd[1]: Started sshd@2-10.0.0.110:22-10.0.0.1:54706.service - OpenSSH per-connection server daemon (10.0.0.1:54706).
Jan 30 13:05:53.786954 systemd-logind[1453]: Removed session 2.
Jan 30 13:05:53.823796 sshd[1592]: Accepted publickey for core from 10.0.0.1 port 54706 ssh2: RSA SHA256:DFbjE3cliO0t0vQoroiQEd9uw5v6TFYRV953GUOdMNo
Jan 30 13:05:53.825037 sshd-session[1592]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 30 13:05:53.829413 systemd-logind[1453]: New session 3 of user core.
Jan 30 13:05:53.838955 systemd[1]: Started session-3.scope - Session 3 of User core.
Jan 30 13:05:53.886627 sshd[1594]: Connection closed by 10.0.0.1 port 54706
Jan 30 13:05:53.887126 sshd-session[1592]: pam_unix(sshd:session): session closed for user core
Jan 30 13:05:53.901226 systemd[1]: sshd@2-10.0.0.110:22-10.0.0.1:54706.service: Deactivated successfully.
Jan 30 13:05:53.902619 systemd[1]: session-3.scope: Deactivated successfully.
Jan 30 13:05:53.903647 systemd-logind[1453]: Session 3 logged out. Waiting for processes to exit.
Jan 30 13:05:53.904789 systemd[1]: Started sshd@3-10.0.0.110:22-10.0.0.1:54712.service - OpenSSH per-connection server daemon (10.0.0.1:54712).
Jan 30 13:05:53.905512 systemd-logind[1453]: Removed session 3.
Jan 30 13:05:53.945967 sshd[1599]: Accepted publickey for core from 10.0.0.1 port 54712 ssh2: RSA SHA256:DFbjE3cliO0t0vQoroiQEd9uw5v6TFYRV953GUOdMNo
Jan 30 13:05:53.947332 sshd-session[1599]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 30 13:05:53.951667 systemd-logind[1453]: New session 4 of user core.
Jan 30 13:05:53.963212 systemd[1]: Started session-4.scope - Session 4 of User core.
Jan 30 13:05:54.019847 sshd[1601]: Connection closed by 10.0.0.1 port 54712
Jan 30 13:05:54.021601 sshd-session[1599]: pam_unix(sshd:session): session closed for user core
Jan 30 13:05:54.036131 systemd[1]: sshd@3-10.0.0.110:22-10.0.0.1:54712.service: Deactivated successfully.
Jan 30 13:05:54.039802 systemd[1]: session-4.scope: Deactivated successfully.
Jan 30 13:05:54.041497 systemd-logind[1453]: Session 4 logged out. Waiting for processes to exit.
Jan 30 13:05:54.051550 systemd[1]: Started sshd@4-10.0.0.110:22-10.0.0.1:54718.service - OpenSSH per-connection server daemon (10.0.0.1:54718).
Jan 30 13:05:54.052635 systemd-logind[1453]: Removed session 4.
Jan 30 13:05:54.096083 sshd[1606]: Accepted publickey for core from 10.0.0.1 port 54718 ssh2: RSA SHA256:DFbjE3cliO0t0vQoroiQEd9uw5v6TFYRV953GUOdMNo
Jan 30 13:05:54.101456 sshd-session[1606]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 30 13:05:54.107617 systemd-logind[1453]: New session 5 of user core.
Jan 30 13:05:54.116973 systemd[1]: Started session-5.scope - Session 5 of User core.
Jan 30 13:05:54.186555 sudo[1609]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh
Jan 30 13:05:54.186872 sudo[1609]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Jan 30 13:05:54.632086 systemd[1]: Starting docker.service - Docker Application Container Engine...
Jan 30 13:05:54.632258 (dockerd)[1630]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU
Jan 30 13:05:54.972798 dockerd[1630]: time="2025-01-30T13:05:54.972659866Z" level=info msg="Starting up"
Jan 30 13:05:55.160553 dockerd[1630]: time="2025-01-30T13:05:55.160506626Z" level=info msg="Loading containers: start."
Jan 30 13:05:55.332796 kernel: Initializing XFRM netlink socket
Jan 30 13:05:55.404754 systemd-networkd[1389]: docker0: Link UP
Jan 30 13:05:55.438245 dockerd[1630]: time="2025-01-30T13:05:55.438146866Z" level=info msg="Loading containers: done."
Jan 30 13:05:55.453632 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck1933050878-merged.mount: Deactivated successfully.
Jan 30 13:05:55.457545 dockerd[1630]: time="2025-01-30T13:05:55.457491386Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2
Jan 30 13:05:55.457652 dockerd[1630]: time="2025-01-30T13:05:55.457596066Z" level=info msg="Docker daemon" commit=41ca978a0a5400cc24b274137efa9f25517fcc0b containerd-snapshotter=false storage-driver=overlay2 version=27.3.1
Jan 30 13:05:55.457807 dockerd[1630]: time="2025-01-30T13:05:55.457786826Z" level=info msg="Daemon has completed initialization"
Jan 30 13:05:55.511463 dockerd[1630]: time="2025-01-30T13:05:55.511402946Z" level=info msg="API listen on /run/docker.sock"
Jan 30 13:05:55.511637 systemd[1]: Started docker.service - Docker Application Container Engine.
Jan 30 13:05:56.153008 containerd[1468]: time="2025-01-30T13:05:56.152967666Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.31.5\""
Jan 30 13:05:56.794378 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount60448094.mount: Deactivated successfully.
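The dockerd entries above carry structured `time=` fields alongside the journal timestamps, so the daemon's startup latency can be recovered from the log itself. A minimal sketch, using the two timestamp strings copied verbatim from the "Starting up" and "Daemon has completed initialization" entries (the parsing helper is illustrative, not part of this system):

```python
from datetime import datetime

# Structured time= fields copied from the dockerd entries above.
STARTING_UP = "2025-01-30T13:05:54.972659866Z"    # msg="Starting up"
INITIALIZED = "2025-01-30T13:05:55.457786826Z"    # msg="Daemon has completed initialization"

def parse_go_rfc3339(ts: str) -> datetime:
    # Go logs nanosecond precision; Python's %f accepts at most six
    # fractional digits, so truncate to microseconds before parsing.
    return datetime.strptime(ts[:26], "%Y-%m-%dT%H:%M:%S.%f")

startup = parse_go_rfc3339(INITIALIZED) - parse_go_rfc3339(STARTING_UP)
print(f"dockerd startup took {startup.total_seconds():.3f}s")
```

The same approach works for the containerd pull durations later in the log, which report their own elapsed times (e.g. "in 1.43195316s").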
Jan 30 13:05:57.578579 containerd[1468]: time="2025-01-30T13:05:57.578514306Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.31.5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 30 13:05:57.579005 containerd[1468]: time="2025-01-30T13:05:57.578967426Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.31.5: active requests=0, bytes read=25618072"
Jan 30 13:05:57.579898 containerd[1468]: time="2025-01-30T13:05:57.579847266Z" level=info msg="ImageCreate event name:\"sha256:c33b6b5a9aa5348a4f3ab96e0977e49acb8ca86c4ec3973023e12c0083423692\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 30 13:05:57.582889 containerd[1468]: time="2025-01-30T13:05:57.582849106Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:fc4b366c0036b90d147f3b58244cf7d5f1f42b0db539f0fe83a8fc6e25a434ab\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 30 13:05:57.585911 containerd[1468]: time="2025-01-30T13:05:57.584964426Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.31.5\" with image id \"sha256:c33b6b5a9aa5348a4f3ab96e0977e49acb8ca86c4ec3973023e12c0083423692\", repo tag \"registry.k8s.io/kube-apiserver:v1.31.5\", repo digest \"registry.k8s.io/kube-apiserver@sha256:fc4b366c0036b90d147f3b58244cf7d5f1f42b0db539f0fe83a8fc6e25a434ab\", size \"25614870\" in 1.43195316s"
Jan 30 13:05:57.585911 containerd[1468]: time="2025-01-30T13:05:57.585013106Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.31.5\" returns image reference \"sha256:c33b6b5a9aa5348a4f3ab96e0977e49acb8ca86c4ec3973023e12c0083423692\""
Jan 30 13:05:57.586431 containerd[1468]: time="2025-01-30T13:05:57.586405586Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.31.5\""
Jan 30 13:05:58.653891 containerd[1468]: time="2025-01-30T13:05:58.653838386Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.31.5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 30 13:05:58.654811 containerd[1468]: time="2025-01-30T13:05:58.654639506Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.31.5: active requests=0, bytes read=22469469"
Jan 30 13:05:58.655500 containerd[1468]: time="2025-01-30T13:05:58.655448626Z" level=info msg="ImageCreate event name:\"sha256:678a3aee724f5d7904c30cda32c06f842784d67e7bd0cece4225fa7c1dcd0c73\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 30 13:05:58.658892 containerd[1468]: time="2025-01-30T13:05:58.658845906Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:848cf42bf6c3c5ccac232b76c901c309edb3ebeac4d856885af0fc718798207e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 30 13:05:58.660833 containerd[1468]: time="2025-01-30T13:05:58.660642386Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.31.5\" with image id \"sha256:678a3aee724f5d7904c30cda32c06f842784d67e7bd0cece4225fa7c1dcd0c73\", repo tag \"registry.k8s.io/kube-controller-manager:v1.31.5\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:848cf42bf6c3c5ccac232b76c901c309edb3ebeac4d856885af0fc718798207e\", size \"23873257\" in 1.0741888s"
Jan 30 13:05:58.660833 containerd[1468]: time="2025-01-30T13:05:58.660682066Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.31.5\" returns image reference \"sha256:678a3aee724f5d7904c30cda32c06f842784d67e7bd0cece4225fa7c1dcd0c73\""
Jan 30 13:05:58.661322 containerd[1468]: time="2025-01-30T13:05:58.661302546Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.31.5\""
Jan 30 13:05:59.740822 containerd[1468]: time="2025-01-30T13:05:59.740659026Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.31.5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 30 13:05:59.741179 containerd[1468]: time="2025-01-30T13:05:59.741096066Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.31.5: active requests=0, bytes read=17024219"
Jan 30 13:05:59.741891 containerd[1468]: time="2025-01-30T13:05:59.741840626Z" level=info msg="ImageCreate event name:\"sha256:066a1dc527aec5b7c19bcf4b81f92b15816afc78e9713266d355333b7eb81050\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 30 13:05:59.745029 containerd[1468]: time="2025-01-30T13:05:59.744964186Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:0e01fd956ba32a7fa08f6b6da24e8c49015905c8e2cf752978d495e44cd4a8a9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 30 13:05:59.746264 containerd[1468]: time="2025-01-30T13:05:59.746207946Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.31.5\" with image id \"sha256:066a1dc527aec5b7c19bcf4b81f92b15816afc78e9713266d355333b7eb81050\", repo tag \"registry.k8s.io/kube-scheduler:v1.31.5\", repo digest \"registry.k8s.io/kube-scheduler@sha256:0e01fd956ba32a7fa08f6b6da24e8c49015905c8e2cf752978d495e44cd4a8a9\", size \"18428025\" in 1.0848748s"
Jan 30 13:05:59.746264 containerd[1468]: time="2025-01-30T13:05:59.746244626Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.31.5\" returns image reference \"sha256:066a1dc527aec5b7c19bcf4b81f92b15816afc78e9713266d355333b7eb81050\""
Jan 30 13:05:59.746952 containerd[1468]: time="2025-01-30T13:05:59.746926426Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.5\""
Jan 30 13:05:59.986278 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1.
Jan 30 13:05:59.996040 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jan 30 13:06:00.102180 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Jan 30 13:06:00.106683 (kubelet)[1899]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Jan 30 13:06:00.238915 kubelet[1899]: E0130 13:06:00.238860 1899 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Jan 30 13:06:00.242426 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Jan 30 13:06:00.242561 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Jan 30 13:06:00.883181 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1666816435.mount: Deactivated successfully.
Jan 30 13:06:01.203025 containerd[1468]: time="2025-01-30T13:06:01.202884826Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.31.5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 30 13:06:01.204510 containerd[1468]: time="2025-01-30T13:06:01.204402146Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.31.5: active requests=0, bytes read=26772119"
Jan 30 13:06:01.205800 containerd[1468]: time="2025-01-30T13:06:01.205744426Z" level=info msg="ImageCreate event name:\"sha256:571bb7ded0ff97311ed313f069becb58480cd66da04175981cfee2f3affe3e95\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 30 13:06:01.209432 containerd[1468]: time="2025-01-30T13:06:01.207580906Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:c00685cc45c1fb539c5bbd8d24d2577f96e9399efac1670f688f654b30f8c64c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 30 13:06:01.209432 containerd[1468]: time="2025-01-30T13:06:01.208273946Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.31.5\" with image id \"sha256:571bb7ded0ff97311ed313f069becb58480cd66da04175981cfee2f3affe3e95\", repo tag \"registry.k8s.io/kube-proxy:v1.31.5\", repo digest \"registry.k8s.io/kube-proxy@sha256:c00685cc45c1fb539c5bbd8d24d2577f96e9399efac1670f688f654b30f8c64c\", size \"26771136\" in 1.46131376s"
Jan 30 13:06:01.209432 containerd[1468]: time="2025-01-30T13:06:01.208310026Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.5\" returns image reference \"sha256:571bb7ded0ff97311ed313f069becb58480cd66da04175981cfee2f3affe3e95\""
Jan 30 13:06:01.209432 containerd[1468]: time="2025-01-30T13:06:01.208832826Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\""
Jan 30 13:06:01.953956 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2705090685.mount: Deactivated successfully.
Jan 30 13:06:02.534957 containerd[1468]: time="2025-01-30T13:06:02.534297466Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 30 13:06:02.535676 containerd[1468]: time="2025-01-30T13:06:02.535606266Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.1: active requests=0, bytes read=16485383"
Jan 30 13:06:02.537015 containerd[1468]: time="2025-01-30T13:06:02.536975306Z" level=info msg="ImageCreate event name:\"sha256:2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 30 13:06:02.539974 containerd[1468]: time="2025-01-30T13:06:02.539435266Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 30 13:06:02.541535 containerd[1468]: time="2025-01-30T13:06:02.541496706Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.1\" with image id \"sha256:2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.1\", repo digest \"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\", size \"16482581\" in 1.33263444s"
Jan 30 13:06:02.541535 containerd[1468]: time="2025-01-30T13:06:02.541537546Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\" returns image reference \"sha256:2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93\""
Jan 30 13:06:02.542149 containerd[1468]: time="2025-01-30T13:06:02.541944266Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\""
Jan 30 13:06:03.008740 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2040297563.mount: Deactivated successfully.
Jan 30 13:06:03.014896 containerd[1468]: time="2025-01-30T13:06:03.014100786Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 30 13:06:03.014896 containerd[1468]: time="2025-01-30T13:06:03.014854586Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=268705"
Jan 30 13:06:03.015523 containerd[1468]: time="2025-01-30T13:06:03.015500426Z" level=info msg="ImageCreate event name:\"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 30 13:06:03.017627 containerd[1468]: time="2025-01-30T13:06:03.017599426Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 30 13:06:03.018336 containerd[1468]: time="2025-01-30T13:06:03.018307106Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"267933\" in 476.3216ms"
Jan 30 13:06:03.018401 containerd[1468]: time="2025-01-30T13:06:03.018339386Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\""
Jan 30 13:06:03.018941 containerd[1468]: time="2025-01-30T13:06:03.018916226Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.15-0\""
Jan 30 13:06:03.581721 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1638625610.mount: Deactivated successfully.
Jan 30 13:06:04.933524 containerd[1468]: time="2025-01-30T13:06:04.933461946Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.15-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 30 13:06:04.935012 containerd[1468]: time="2025-01-30T13:06:04.934957066Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.15-0: active requests=0, bytes read=66406427"
Jan 30 13:06:04.936434 containerd[1468]: time="2025-01-30T13:06:04.936392626Z" level=info msg="ImageCreate event name:\"sha256:27e3830e1402783674d8b594038967deea9d51f0d91b34c93c8f39d2f68af7da\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 30 13:06:04.939314 containerd[1468]: time="2025-01-30T13:06:04.939246546Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 30 13:06:04.940958 containerd[1468]: time="2025-01-30T13:06:04.940683386Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.15-0\" with image id \"sha256:27e3830e1402783674d8b594038967deea9d51f0d91b34c93c8f39d2f68af7da\", repo tag \"registry.k8s.io/etcd:3.5.15-0\", repo digest \"registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a\", size \"66535646\" in 1.9217352s"
Jan 30 13:06:04.940958 containerd[1468]: time="2025-01-30T13:06:04.940732666Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.15-0\" returns image reference \"sha256:27e3830e1402783674d8b594038967deea9d51f0d91b34c93c8f39d2f68af7da\""
Jan 30 13:06:10.486455 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2.
Jan 30 13:06:10.496053 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jan 30 13:06:10.723266 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Jan 30 13:06:10.728996 (kubelet)[2049]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Jan 30 13:06:10.769230 kubelet[2049]: E0130 13:06:10.769089 2049 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Jan 30 13:06:10.772589 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Jan 30 13:06:10.772890 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Jan 30 13:06:11.903866 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Jan 30 13:06:11.924083 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jan 30 13:06:11.948159 systemd[1]: Reloading requested from client PID 2066 ('systemctl') (unit session-5.scope)...
Jan 30 13:06:11.948179 systemd[1]: Reloading...
Jan 30 13:06:12.013875 zram_generator::config[2104]: No configuration found.
Jan 30 13:06:12.208086 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
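Both kubelet start attempts above fail identically: /var/lib/kubelet/config.yaml does not exist yet, because that file is normally written during node bootstrap (e.g. by kubeadm) rather than shipped with the OS, so the unit keeps crash-looping until bootstrap runs. For orientation only, an illustrative minimal KubeletConfiguration of the kind that would satisfy this path might look like the following sketch (not the actual file from this host; the field values are inferred from settings visible later in this log):

```yaml
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
# Matches the "CgroupDriver":"systemd" the container manager reports below.
cgroupDriver: systemd
# Matches the "Adding static pod path" entry below.
staticPodPath: /etc/kubernetes/manifests
```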
Jan 30 13:06:12.262588 systemd[1]: Reloading finished in 314 ms.
Jan 30 13:06:12.305977 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM
Jan 30 13:06:12.306050 systemd[1]: kubelet.service: Failed with result 'signal'.
Jan 30 13:06:12.306303 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Jan 30 13:06:12.310116 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jan 30 13:06:12.415865 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Jan 30 13:06:12.421629 (kubelet)[2151]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS
Jan 30 13:06:12.469920 kubelet[2151]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Jan 30 13:06:12.469920 kubelet[2151]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI.
Jan 30 13:06:12.469920 kubelet[2151]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Jan 30 13:06:12.470594 kubelet[2151]: I0130 13:06:12.470523 2151 server.go:206] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Jan 30 13:06:13.326699 kubelet[2151]: I0130 13:06:13.326637 2151 server.go:486] "Kubelet version" kubeletVersion="v1.31.0"
Jan 30 13:06:13.326699 kubelet[2151]: I0130 13:06:13.326670 2151 server.go:488] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Jan 30 13:06:13.326939 kubelet[2151]: I0130 13:06:13.326913 2151 server.go:929] "Client rotation is on, will bootstrap in background"
Jan 30 13:06:13.376605 kubelet[2151]: E0130 13:06:13.376552 2151 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://10.0.0.110:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.110:6443: connect: connection refused" logger="UnhandledError"
Jan 30 13:06:13.378518 kubelet[2151]: I0130 13:06:13.378427 2151 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Jan 30 13:06:13.393604 kubelet[2151]: E0130 13:06:13.393547 2151 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService"
Jan 30 13:06:13.393604 kubelet[2151]: I0130 13:06:13.393601 2151 server.go:1403] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config."
Jan 30 13:06:13.397078 kubelet[2151]: I0130 13:06:13.397051 2151 server.go:744] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /"
Jan 30 13:06:13.397317 kubelet[2151]: I0130 13:06:13.397306 2151 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority"
Jan 30 13:06:13.397434 kubelet[2151]: I0130 13:06:13.397410 2151 container_manager_linux.go:264] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Jan 30 13:06:13.397622 kubelet[2151]: I0130 13:06:13.397434 2151 container_manager_linux.go:269] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2}
Jan 30 13:06:13.397793 kubelet[2151]: I0130 13:06:13.397764 2151 topology_manager.go:138] "Creating topology manager with none policy"
Jan 30 13:06:13.397882 kubelet[2151]: I0130 13:06:13.397793 2151 container_manager_linux.go:300] "Creating device plugin manager"
Jan 30 13:06:13.398080 kubelet[2151]: I0130 13:06:13.398057 2151 state_mem.go:36] "Initialized new in-memory state store"
Jan 30 13:06:13.400892 kubelet[2151]: I0130 13:06:13.400572 2151 kubelet.go:408] "Attempting to sync node with API server"
Jan 30 13:06:13.400892 kubelet[2151]: I0130 13:06:13.400605 2151 kubelet.go:303] "Adding static pod path" path="/etc/kubernetes/manifests"
Jan 30 13:06:13.400892 kubelet[2151]: I0130 13:06:13.400870 2151 kubelet.go:314] "Adding apiserver pod source"
Jan 30 13:06:13.400892 kubelet[2151]: I0130 13:06:13.400882 2151 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Jan 30 13:06:13.404672 kubelet[2151]: W0130 13:06:13.404620 2151 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.110:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.110:6443: connect: connection refused
Jan 30 13:06:13.404838 kubelet[2151]: E0130 13:06:13.404816 2151 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.0.0.110:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.110:6443: connect: connection refused" logger="UnhandledError"
Jan 30 13:06:13.404984 kubelet[2151]: I0130 13:06:13.404965 2151 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="containerd" version="v1.7.23" apiVersion="v1"
Jan 30 13:06:13.405226 kubelet[2151]: W0130 13:06:13.405028 2151 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.110:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.0.0.110:6443: connect: connection refused
Jan 30 13:06:13.405226 kubelet[2151]: E0130 13:06:13.405124 2151 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.0.0.110:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.110:6443: connect: connection refused" logger="UnhandledError"
Jan 30 13:06:13.407805 kubelet[2151]: I0130 13:06:13.407748 2151 kubelet.go:837] "Not starting ClusterTrustBundle informer because we are in static kubelet mode"
Jan 30 13:06:13.408482 kubelet[2151]: W0130 13:06:13.408450 2151 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating.
Jan 30 13:06:13.409580 kubelet[2151]: I0130 13:06:13.409238 2151 server.go:1269] "Started kubelet"
Jan 30 13:06:13.409839 kubelet[2151]: I0130 13:06:13.409786 2151 server.go:163] "Starting to listen" address="0.0.0.0" port=10250
Jan 30 13:06:13.417342 kubelet[2151]: I0130 13:06:13.410693 2151 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Jan 30 13:06:13.417342 kubelet[2151]: I0130 13:06:13.411016 2151 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Jan 30 13:06:13.417342 kubelet[2151]: I0130 13:06:13.412013 2151 server.go:460] "Adding debug handlers to kubelet server"
Jan 30 13:06:13.417342 kubelet[2151]: I0130 13:06:13.414130 2151 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Jan 30 13:06:13.417342 kubelet[2151]: I0130 13:06:13.414502 2151 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key"
Jan 30 13:06:13.419251 kubelet[2151]: I0130 13:06:13.417842 2151 volume_manager.go:289] "Starting Kubelet Volume Manager"
Jan 30 13:06:13.419251 kubelet[2151]: I0130 13:06:13.417970 2151 desired_state_of_world_populator.go:146] "Desired state populator starts to run"
Jan 30 13:06:13.419251 kubelet[2151]: I0130 13:06:13.418037 2151 reconciler.go:26] "Reconciler: start to sync state"
Jan 30 13:06:13.419251 kubelet[2151]: W0130 13:06:13.418393 2151 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.110:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.110:6443: connect: connection refused
Jan 30 13:06:13.419251 kubelet[2151]: E0130 13:06:13.418439 2151 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.0.0.110:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.110:6443: connect: connection refused" logger="UnhandledError"
Jan 30 13:06:13.419251 kubelet[2151]: I0130 13:06:13.418483 2151 factory.go:221] Registration of the systemd container factory successfully
Jan 30 13:06:13.419251 kubelet[2151]: I0130 13:06:13.418799 2151 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
Jan 30 13:06:13.419251 kubelet[2151]: E0130 13:06:13.418980 2151 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found"
Jan 30 13:06:13.419251 kubelet[2151]: E0130 13:06:13.419046 2151 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.110:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.110:6443: connect: connection refused" interval="200ms"
Jan 30 13:06:13.419251 kubelet[2151]: E0130 13:06:13.419138 2151 kubelet.go:1478] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem"
Jan 30 13:06:13.420910 kubelet[2151]: I0130 13:06:13.420890 2151 factory.go:221] Registration of the containerd container factory successfully
Jan 30 13:06:13.426741 kubelet[2151]: E0130 13:06:13.419574 2151 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.110:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.110:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.181f7a3aa096745a default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-01-30 13:06:13.409207386 +0000 UTC m=+0.984043761,LastTimestamp:2025-01-30 13:06:13.409207386 +0000 UTC m=+0.984043761,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}"
Jan 30 13:06:13.430950 kubelet[2151]: I0130 13:06:13.430900 2151 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4"
Jan 30 13:06:13.431969 kubelet[2151]: I0130 13:06:13.431943 2151 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6"
Jan 30 13:06:13.431969 kubelet[2151]: I0130 13:06:13.431967 2151 status_manager.go:217] "Starting to sync pod status with apiserver"
Jan 30 13:06:13.432047 kubelet[2151]: I0130 13:06:13.431987 2151 kubelet.go:2321] "Starting kubelet main sync loop"
Jan 30 13:06:13.432047 kubelet[2151]: E0130 13:06:13.432030 2151 kubelet.go:2345] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
Jan 30 13:06:13.436965 kubelet[2151]: W0130 13:06:13.436937 2151 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.110:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.110:6443: connect: connection refused
Jan 30 13:06:13.437126 kubelet[2151]: E0130 13:06:13.437103 2151 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.0.0.110:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.110:6443: connect: connection refused" logger="UnhandledError"
Jan 30 13:06:13.437862 kubelet[2151]: I0130 13:06:13.437845 2151 cpu_manager.go:214] "Starting CPU manager" policy="none"
Jan 30 13:06:13.437942 kubelet[2151]: I0130 13:06:13.437932 2151 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s"
Jan 30 13:06:13.437992 kubelet[2151]: I0130 13:06:13.437984 2151 state_mem.go:36] "Initialized new in-memory state store"
Jan 30 13:06:13.498997 kubelet[2151]: I0130 13:06:13.498952 2151 policy_none.go:49] "None policy: Start"
Jan 30 13:06:13.500367 kubelet[2151]: I0130 13:06:13.500340 2151 memory_manager.go:170] "Starting memorymanager" policy="None"
Jan 30 13:06:13.500367 kubelet[2151]: I0130 13:06:13.500370 2151 state_mem.go:35] "Initializing new in-memory state store"
Jan 30 13:06:13.509797 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice.
Jan 30 13:06:13.519187 kubelet[2151]: E0130 13:06:13.519148 2151 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found"
Jan 30 13:06:13.520916 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice.
Jan 30 13:06:13.524226 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice.
Jan 30 13:06:13.532689 kubelet[2151]: E0130 13:06:13.532654 2151 kubelet.go:2345] "Skipping pod synchronization" err="container runtime status check may not have completed yet"
Jan 30 13:06:13.537725 kubelet[2151]: I0130 13:06:13.537699 2151 manager.go:510] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found"
Jan 30 13:06:13.537960 kubelet[2151]: I0130 13:06:13.537920 2151 eviction_manager.go:189] "Eviction manager: starting control loop"
Jan 30 13:06:13.537960 kubelet[2151]: I0130 13:06:13.537938 2151 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s"
Jan 30 13:06:13.538251 kubelet[2151]: I0130 13:06:13.538216 2151 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
Jan 30 13:06:13.540429 kubelet[2151]: E0130 13:06:13.540376 2151 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found"
Jan 30 13:06:13.620483 kubelet[2151]: E0130 13:06:13.620354 2151 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.110:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.110:6443: connect: connection refused" interval="400ms"
Jan 30 13:06:13.639650 kubelet[2151]: I0130 13:06:13.639600 2151 kubelet_node_status.go:72] "Attempting to register node" node="localhost"
Jan 30 13:06:13.640268 kubelet[2151]: E0130 13:06:13.640237 2151 kubelet_node_status.go:95] "Unable
to register node with API server" err="Post \"https://10.0.0.110:6443/api/v1/nodes\": dial tcp 10.0.0.110:6443: connect: connection refused" node="localhost" Jan 30 13:06:13.742592 systemd[1]: Created slice kubepods-burstable-pod4ae81c08ab06ba6f6b3d48ff94c164fb.slice - libcontainer container kubepods-burstable-pod4ae81c08ab06ba6f6b3d48ff94c164fb.slice. Jan 30 13:06:13.756190 systemd[1]: Created slice kubepods-burstable-podfa5289f3c0ba7f1736282e713231ffc5.slice - libcontainer container kubepods-burstable-podfa5289f3c0ba7f1736282e713231ffc5.slice. Jan 30 13:06:13.771358 systemd[1]: Created slice kubepods-burstable-podc988230cd0d49eebfaffbefbe8c74a10.slice - libcontainer container kubepods-burstable-podc988230cd0d49eebfaffbefbe8c74a10.slice. Jan 30 13:06:13.820040 kubelet[2151]: I0130 13:06:13.819983 2151 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/fa5289f3c0ba7f1736282e713231ffc5-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"fa5289f3c0ba7f1736282e713231ffc5\") " pod="kube-system/kube-controller-manager-localhost" Jan 30 13:06:13.820040 kubelet[2151]: I0130 13:06:13.820028 2151 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/fa5289f3c0ba7f1736282e713231ffc5-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"fa5289f3c0ba7f1736282e713231ffc5\") " pod="kube-system/kube-controller-manager-localhost" Jan 30 13:06:13.822611 kubelet[2151]: I0130 13:06:13.820049 2151 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/4ae81c08ab06ba6f6b3d48ff94c164fb-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"4ae81c08ab06ba6f6b3d48ff94c164fb\") " pod="kube-system/kube-apiserver-localhost" Jan 30 13:06:13.822611 kubelet[2151]: I0130 13:06:13.822515 
2151 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/4ae81c08ab06ba6f6b3d48ff94c164fb-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"4ae81c08ab06ba6f6b3d48ff94c164fb\") " pod="kube-system/kube-apiserver-localhost" Jan 30 13:06:13.822611 kubelet[2151]: I0130 13:06:13.822545 2151 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/fa5289f3c0ba7f1736282e713231ffc5-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"fa5289f3c0ba7f1736282e713231ffc5\") " pod="kube-system/kube-controller-manager-localhost" Jan 30 13:06:13.822611 kubelet[2151]: I0130 13:06:13.822565 2151 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/fa5289f3c0ba7f1736282e713231ffc5-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"fa5289f3c0ba7f1736282e713231ffc5\") " pod="kube-system/kube-controller-manager-localhost" Jan 30 13:06:13.822611 kubelet[2151]: I0130 13:06:13.822586 2151 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/fa5289f3c0ba7f1736282e713231ffc5-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"fa5289f3c0ba7f1736282e713231ffc5\") " pod="kube-system/kube-controller-manager-localhost" Jan 30 13:06:13.823177 kubelet[2151]: I0130 13:06:13.823129 2151 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/c988230cd0d49eebfaffbefbe8c74a10-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"c988230cd0d49eebfaffbefbe8c74a10\") " pod="kube-system/kube-scheduler-localhost" Jan 30 13:06:13.823714 
kubelet[2151]: I0130 13:06:13.823180 2151 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/4ae81c08ab06ba6f6b3d48ff94c164fb-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"4ae81c08ab06ba6f6b3d48ff94c164fb\") " pod="kube-system/kube-apiserver-localhost" Jan 30 13:06:13.842243 kubelet[2151]: I0130 13:06:13.842216 2151 kubelet_node_status.go:72] "Attempting to register node" node="localhost" Jan 30 13:06:13.842678 kubelet[2151]: E0130 13:06:13.842648 2151 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.0.0.110:6443/api/v1/nodes\": dial tcp 10.0.0.110:6443: connect: connection refused" node="localhost" Jan 30 13:06:14.021336 kubelet[2151]: E0130 13:06:14.021296 2151 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.110:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.110:6443: connect: connection refused" interval="800ms" Jan 30 13:06:14.054718 kubelet[2151]: E0130 13:06:14.054688 2151 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 30 13:06:14.055626 containerd[1468]: time="2025-01-30T13:06:14.055578586Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:4ae81c08ab06ba6f6b3d48ff94c164fb,Namespace:kube-system,Attempt:0,}" Jan 30 13:06:14.069651 kubelet[2151]: E0130 13:06:14.069611 2151 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 30 13:06:14.070207 containerd[1468]: time="2025-01-30T13:06:14.070160786Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:fa5289f3c0ba7f1736282e713231ffc5,Namespace:kube-system,Attempt:0,}" Jan 30 13:06:14.074039 kubelet[2151]: E0130 13:06:14.074013 2151 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 30 13:06:14.074615 containerd[1468]: time="2025-01-30T13:06:14.074581626Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:c988230cd0d49eebfaffbefbe8c74a10,Namespace:kube-system,Attempt:0,}" Jan 30 13:06:14.244676 kubelet[2151]: I0130 13:06:14.244421 2151 kubelet_node_status.go:72] "Attempting to register node" node="localhost" Jan 30 13:06:14.244808 kubelet[2151]: E0130 13:06:14.244733 2151 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.0.0.110:6443/api/v1/nodes\": dial tcp 10.0.0.110:6443: connect: connection refused" node="localhost" Jan 30 13:06:14.379259 kubelet[2151]: W0130 13:06:14.379051 2151 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.110:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.110:6443: connect: connection refused Jan 30 13:06:14.379259 kubelet[2151]: E0130 13:06:14.379136 2151 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.0.0.110:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.110:6443: connect: connection refused" logger="UnhandledError" Jan 30 13:06:14.506216 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount541587249.mount: Deactivated successfully. 
Jan 30 13:06:14.513305 containerd[1468]: time="2025-01-30T13:06:14.513240746Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 30 13:06:14.515892 containerd[1468]: time="2025-01-30T13:06:14.515762306Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=269175" Jan 30 13:06:14.516612 containerd[1468]: time="2025-01-30T13:06:14.516562106Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 30 13:06:14.518301 containerd[1468]: time="2025-01-30T13:06:14.518054306Z" level=info msg="ImageCreate event name:\"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 30 13:06:14.520610 containerd[1468]: time="2025-01-30T13:06:14.520550106Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Jan 30 13:06:14.522301 containerd[1468]: time="2025-01-30T13:06:14.521447146Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 30 13:06:14.522301 containerd[1468]: time="2025-01-30T13:06:14.522084226Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Jan 30 13:06:14.525047 containerd[1468]: time="2025-01-30T13:06:14.524990386Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 30 13:06:14.526094 
containerd[1468]: time="2025-01-30T13:06:14.526051546Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 470.3876ms" Jan 30 13:06:14.527726 containerd[1468]: time="2025-01-30T13:06:14.527578386Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 457.33504ms" Jan 30 13:06:14.531030 containerd[1468]: time="2025-01-30T13:06:14.530994426Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 456.33812ms" Jan 30 13:06:14.564566 kubelet[2151]: W0130 13:06:14.564498 2151 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.110:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.0.0.110:6443: connect: connection refused Jan 30 13:06:14.564566 kubelet[2151]: E0130 13:06:14.564571 2151 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.0.0.110:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.110:6443: connect: connection refused" logger="UnhandledError" Jan 30 13:06:14.711405 kubelet[2151]: W0130 13:06:14.711209 2151 
reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.110:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.110:6443: connect: connection refused Jan 30 13:06:14.711405 kubelet[2151]: E0130 13:06:14.711284 2151 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.0.0.110:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.110:6443: connect: connection refused" logger="UnhandledError" Jan 30 13:06:14.722245 kubelet[2151]: W0130 13:06:14.722135 2151 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.110:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.110:6443: connect: connection refused Jan 30 13:06:14.722245 kubelet[2151]: E0130 13:06:14.722217 2151 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.0.0.110:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.110:6443: connect: connection refused" logger="UnhandledError" Jan 30 13:06:14.741272 containerd[1468]: time="2025-01-30T13:06:14.740673906Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 30 13:06:14.741272 containerd[1468]: time="2025-01-30T13:06:14.740749546Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 30 13:06:14.744733 containerd[1468]: time="2025-01-30T13:06:14.740881786Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 30 13:06:14.744733 containerd[1468]: time="2025-01-30T13:06:14.740944026Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 30 13:06:14.744733 containerd[1468]: time="2025-01-30T13:06:14.740956786Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 13:06:14.744733 containerd[1468]: time="2025-01-30T13:06:14.741625786Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 13:06:14.744733 containerd[1468]: time="2025-01-30T13:06:14.744050226Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 13:06:14.745991 containerd[1468]: time="2025-01-30T13:06:14.745899826Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 30 13:06:14.746113 containerd[1468]: time="2025-01-30T13:06:14.746061746Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 30 13:06:14.746249 containerd[1468]: time="2025-01-30T13:06:14.746100946Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 13:06:14.746491 containerd[1468]: time="2025-01-30T13:06:14.746454106Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 13:06:14.748456 containerd[1468]: time="2025-01-30T13:06:14.748386626Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 13:06:14.771960 systemd[1]: Started cri-containerd-2008cf020c9349fc5434884ce238ab345cc73e113335e9bbfdaa669c97d452f3.scope - libcontainer container 2008cf020c9349fc5434884ce238ab345cc73e113335e9bbfdaa669c97d452f3. Jan 30 13:06:14.773665 systemd[1]: Started cri-containerd-674559aeb76a5d51544cb3ab0dad31d1e620986839c8d49c0b0b08bc176695be.scope - libcontainer container 674559aeb76a5d51544cb3ab0dad31d1e620986839c8d49c0b0b08bc176695be. Jan 30 13:06:14.775340 systemd[1]: Started cri-containerd-dbddfe7b9c7bc4285965e708cd5badca6f1149a23c5512e96dceaf64db40a12d.scope - libcontainer container dbddfe7b9c7bc4285965e708cd5badca6f1149a23c5512e96dceaf64db40a12d. Jan 30 13:06:14.810904 containerd[1468]: time="2025-01-30T13:06:14.810227026Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:4ae81c08ab06ba6f6b3d48ff94c164fb,Namespace:kube-system,Attempt:0,} returns sandbox id \"674559aeb76a5d51544cb3ab0dad31d1e620986839c8d49c0b0b08bc176695be\"" Jan 30 13:06:14.812748 kubelet[2151]: E0130 13:06:14.812725 2151 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 30 13:06:14.814889 containerd[1468]: time="2025-01-30T13:06:14.814853386Z" level=info msg="CreateContainer within sandbox \"674559aeb76a5d51544cb3ab0dad31d1e620986839c8d49c0b0b08bc176695be\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Jan 30 13:06:14.821148 containerd[1468]: time="2025-01-30T13:06:14.821106426Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:c988230cd0d49eebfaffbefbe8c74a10,Namespace:kube-system,Attempt:0,} returns sandbox id \"2008cf020c9349fc5434884ce238ab345cc73e113335e9bbfdaa669c97d452f3\"" Jan 30 13:06:14.821890 kubelet[2151]: E0130 13:06:14.821866 2151 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were 
exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 30 13:06:14.822338 kubelet[2151]: E0130 13:06:14.822312 2151 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.110:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.110:6443: connect: connection refused" interval="1.6s" Jan 30 13:06:14.824394 containerd[1468]: time="2025-01-30T13:06:14.824325026Z" level=info msg="CreateContainer within sandbox \"2008cf020c9349fc5434884ce238ab345cc73e113335e9bbfdaa669c97d452f3\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Jan 30 13:06:14.826875 containerd[1468]: time="2025-01-30T13:06:14.826841426Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:fa5289f3c0ba7f1736282e713231ffc5,Namespace:kube-system,Attempt:0,} returns sandbox id \"dbddfe7b9c7bc4285965e708cd5badca6f1149a23c5512e96dceaf64db40a12d\"" Jan 30 13:06:14.827410 containerd[1468]: time="2025-01-30T13:06:14.827335386Z" level=info msg="CreateContainer within sandbox \"674559aeb76a5d51544cb3ab0dad31d1e620986839c8d49c0b0b08bc176695be\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"15648014e3b62f208b5923136735ad99dd72d1b0506a6046344a8af95f165307\"" Jan 30 13:06:14.827757 kubelet[2151]: E0130 13:06:14.827660 2151 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 30 13:06:14.828360 containerd[1468]: time="2025-01-30T13:06:14.828253786Z" level=info msg="StartContainer for \"15648014e3b62f208b5923136735ad99dd72d1b0506a6046344a8af95f165307\"" Jan 30 13:06:14.831836 containerd[1468]: time="2025-01-30T13:06:14.829937386Z" level=info msg="CreateContainer within sandbox \"dbddfe7b9c7bc4285965e708cd5badca6f1149a23c5512e96dceaf64db40a12d\" for container 
&ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Jan 30 13:06:14.845460 containerd[1468]: time="2025-01-30T13:06:14.845402826Z" level=info msg="CreateContainer within sandbox \"2008cf020c9349fc5434884ce238ab345cc73e113335e9bbfdaa669c97d452f3\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"b14113785c0ddd73ce0318fb08465fa867010915d93885bd9c8cb8c691d8cc25\"" Jan 30 13:06:14.845999 containerd[1468]: time="2025-01-30T13:06:14.845968826Z" level=info msg="StartContainer for \"b14113785c0ddd73ce0318fb08465fa867010915d93885bd9c8cb8c691d8cc25\"" Jan 30 13:06:14.857200 systemd[1]: Started cri-containerd-15648014e3b62f208b5923136735ad99dd72d1b0506a6046344a8af95f165307.scope - libcontainer container 15648014e3b62f208b5923136735ad99dd72d1b0506a6046344a8af95f165307. Jan 30 13:06:14.861928 containerd[1468]: time="2025-01-30T13:06:14.861879986Z" level=info msg="CreateContainer within sandbox \"dbddfe7b9c7bc4285965e708cd5badca6f1149a23c5512e96dceaf64db40a12d\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"23db3b82452e230477c82459b1bb6b5d0bf87b2d232f344e2b66c5c5ea39f4d9\"" Jan 30 13:06:14.863733 containerd[1468]: time="2025-01-30T13:06:14.862524306Z" level=info msg="StartContainer for \"23db3b82452e230477c82459b1bb6b5d0bf87b2d232f344e2b66c5c5ea39f4d9\"" Jan 30 13:06:14.876412 systemd[1]: Started cri-containerd-b14113785c0ddd73ce0318fb08465fa867010915d93885bd9c8cb8c691d8cc25.scope - libcontainer container b14113785c0ddd73ce0318fb08465fa867010915d93885bd9c8cb8c691d8cc25. Jan 30 13:06:14.890991 systemd[1]: Started cri-containerd-23db3b82452e230477c82459b1bb6b5d0bf87b2d232f344e2b66c5c5ea39f4d9.scope - libcontainer container 23db3b82452e230477c82459b1bb6b5d0bf87b2d232f344e2b66c5c5ea39f4d9. 
Jan 30 13:06:14.928174 containerd[1468]: time="2025-01-30T13:06:14.928077626Z" level=info msg="StartContainer for \"15648014e3b62f208b5923136735ad99dd72d1b0506a6046344a8af95f165307\" returns successfully" Jan 30 13:06:14.953962 containerd[1468]: time="2025-01-30T13:06:14.953268786Z" level=info msg="StartContainer for \"b14113785c0ddd73ce0318fb08465fa867010915d93885bd9c8cb8c691d8cc25\" returns successfully" Jan 30 13:06:14.987748 containerd[1468]: time="2025-01-30T13:06:14.987703266Z" level=info msg="StartContainer for \"23db3b82452e230477c82459b1bb6b5d0bf87b2d232f344e2b66c5c5ea39f4d9\" returns successfully" Jan 30 13:06:15.058306 kubelet[2151]: I0130 13:06:15.058254 2151 kubelet_node_status.go:72] "Attempting to register node" node="localhost" Jan 30 13:06:15.058653 kubelet[2151]: E0130 13:06:15.058610 2151 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.0.0.110:6443/api/v1/nodes\": dial tcp 10.0.0.110:6443: connect: connection refused" node="localhost" Jan 30 13:06:15.445947 kubelet[2151]: E0130 13:06:15.445842 2151 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 30 13:06:15.449460 kubelet[2151]: E0130 13:06:15.449427 2151 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 30 13:06:15.449602 kubelet[2151]: E0130 13:06:15.449574 2151 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 30 13:06:16.449991 kubelet[2151]: E0130 13:06:16.449926 2151 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 30 13:06:16.647524 kubelet[2151]: 
E0130 13:06:16.647475 2151 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"localhost\" not found" node="localhost" Jan 30 13:06:16.660236 kubelet[2151]: I0130 13:06:16.659967 2151 kubelet_node_status.go:72] "Attempting to register node" node="localhost" Jan 30 13:06:16.811321 kubelet[2151]: I0130 13:06:16.811257 2151 kubelet_node_status.go:75] "Successfully registered node" node="localhost" Jan 30 13:06:16.811321 kubelet[2151]: E0130 13:06:16.811300 2151 kubelet_node_status.go:535] "Error updating node status, will retry" err="error getting node \"localhost\": node \"localhost\" not found" Jan 30 13:06:16.841705 kubelet[2151]: E0130 13:06:16.841596 2151 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 30 13:06:16.942596 kubelet[2151]: E0130 13:06:16.942551 2151 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 30 13:06:17.043351 kubelet[2151]: E0130 13:06:17.043303 2151 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 30 13:06:17.144001 kubelet[2151]: E0130 13:06:17.143887 2151 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 30 13:06:17.244477 kubelet[2151]: E0130 13:06:17.244435 2151 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 30 13:06:17.345005 kubelet[2151]: E0130 13:06:17.344955 2151 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 30 13:06:17.445369 kubelet[2151]: E0130 13:06:17.445268 2151 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 30 13:06:17.451606 kubelet[2151]: E0130 13:06:17.451573 2151 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were 
exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 30 13:06:17.546100 kubelet[2151]: E0130 13:06:17.546031 2151 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 30 13:06:17.646851 kubelet[2151]: E0130 13:06:17.646816 2151 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 30 13:06:18.406535 kubelet[2151]: I0130 13:06:18.406416 2151 apiserver.go:52] "Watching apiserver" Jan 30 13:06:18.419114 kubelet[2151]: I0130 13:06:18.419073 2151 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world" Jan 30 13:06:18.925485 systemd[1]: Reloading requested from client PID 2429 ('systemctl') (unit session-5.scope)... Jan 30 13:06:18.925504 systemd[1]: Reloading... Jan 30 13:06:19.003835 zram_generator::config[2471]: No configuration found. Jan 30 13:06:19.130386 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jan 30 13:06:19.199066 systemd[1]: Reloading finished in 273 ms. Jan 30 13:06:19.238667 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Jan 30 13:06:19.250987 systemd[1]: kubelet.service: Deactivated successfully. Jan 30 13:06:19.251457 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jan 30 13:06:19.251604 systemd[1]: kubelet.service: Consumed 1.390s CPU time, 118.4M memory peak, 0B memory swap peak. Jan 30 13:06:19.266180 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 30 13:06:19.377293 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Jan 30 13:06:19.383477 (kubelet)[2510]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jan 30 13:06:19.427915 kubelet[2510]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 30 13:06:19.428354 kubelet[2510]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Jan 30 13:06:19.428419 kubelet[2510]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 30 13:06:19.428553 kubelet[2510]: I0130 13:06:19.428522 2510 server.go:206] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jan 30 13:06:19.434685 kubelet[2510]: I0130 13:06:19.434639 2510 server.go:486] "Kubelet version" kubeletVersion="v1.31.0" Jan 30 13:06:19.434685 kubelet[2510]: I0130 13:06:19.434672 2510 server.go:488] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jan 30 13:06:19.436356 kubelet[2510]: I0130 13:06:19.435018 2510 server.go:929] "Client rotation is on, will bootstrap in background" Jan 30 13:06:19.437480 kubelet[2510]: I0130 13:06:19.437451 2510 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". 
Jan 30 13:06:19.439728 kubelet[2510]: I0130 13:06:19.439699 2510 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Jan 30 13:06:19.443282 kubelet[2510]: E0130 13:06:19.443245 2510 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService"
Jan 30 13:06:19.443282 kubelet[2510]: I0130 13:06:19.443284 2510 server.go:1403] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config."
Jan 30 13:06:19.445663 kubelet[2510]: I0130 13:06:19.445640 2510 server.go:744] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /"
Jan 30 13:06:19.445802 kubelet[2510]: I0130 13:06:19.445761 2510 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority"
Jan 30 13:06:19.445925 kubelet[2510]: I0130 13:06:19.445891 2510 container_manager_linux.go:264] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Jan 30 13:06:19.446138 kubelet[2510]: I0130 13:06:19.445918 2510 container_manager_linux.go:269] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2}
Jan 30 13:06:19.446219 kubelet[2510]: I0130 13:06:19.446139 2510 topology_manager.go:138] "Creating topology manager with none policy"
Jan 30 13:06:19.446219 kubelet[2510]: I0130 13:06:19.446150 2510 container_manager_linux.go:300] "Creating device plugin manager"
Jan 30 13:06:19.446265 kubelet[2510]: I0130 13:06:19.446229 2510 state_mem.go:36] "Initialized new in-memory state store"
Jan 30 13:06:19.446355 kubelet[2510]: I0130 13:06:19.446342 2510 kubelet.go:408] "Attempting to sync node with API server"
Jan 30 13:06:19.446380 kubelet[2510]: I0130 13:06:19.446362 2510 kubelet.go:303] "Adding static pod path" path="/etc/kubernetes/manifests"
Jan 30 13:06:19.446380 kubelet[2510]: I0130 13:06:19.446380 2510 kubelet.go:314] "Adding apiserver pod source"
Jan 30 13:06:19.446440 kubelet[2510]: I0130 13:06:19.446390 2510 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Jan 30 13:06:19.447564 kubelet[2510]: I0130 13:06:19.447330 2510 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="containerd" version="v1.7.23" apiVersion="v1"
Jan 30 13:06:19.447947 kubelet[2510]: I0130 13:06:19.447928 2510 kubelet.go:837] "Not starting ClusterTrustBundle informer because we are in static kubelet mode"
Jan 30 13:06:19.450216 kubelet[2510]: I0130 13:06:19.449364 2510 server.go:1269] "Started kubelet"
Jan 30 13:06:19.450977 kubelet[2510]: I0130 13:06:19.450944 2510 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Jan 30 13:06:19.453273 kubelet[2510]: I0130 13:06:19.453223 2510 server.go:163] "Starting to listen" address="0.0.0.0" port=10250
Jan 30 13:06:19.454079 kubelet[2510]: I0130 13:06:19.454043 2510 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key"
Jan 30 13:06:19.454079 kubelet[2510]: I0130 13:06:19.453795 2510 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Jan 30 13:06:19.455077 kubelet[2510]: I0130 13:06:19.455046 2510 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Jan 30 13:06:19.455677 kubelet[2510]: I0130 13:06:19.455656 2510 server.go:460] "Adding debug handlers to kubelet server"
Jan 30 13:06:19.456021 kubelet[2510]: I0130 13:06:19.455986 2510 volume_manager.go:289] "Starting Kubelet Volume Manager"
Jan 30 13:06:19.456574 kubelet[2510]: E0130 13:06:19.456546 2510 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found"
Jan 30 13:06:19.456708 kubelet[2510]: I0130 13:06:19.456692 2510 desired_state_of_world_populator.go:146] "Desired state populator starts to run"
Jan 30 13:06:19.456888 kubelet[2510]: I0130 13:06:19.456872 2510 reconciler.go:26] "Reconciler: start to sync state"
Jan 30 13:06:19.461395 kubelet[2510]: I0130 13:06:19.461374 2510 factory.go:221] Registration of the systemd container factory successfully
Jan 30 13:06:19.461852 kubelet[2510]: I0130 13:06:19.461635 2510 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
Jan 30 13:06:19.472463 kubelet[2510]: I0130 13:06:19.472431 2510 factory.go:221] Registration of the containerd container factory successfully
Jan 30 13:06:19.474761 kubelet[2510]: I0130 13:06:19.474697 2510 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4"
Jan 30 13:06:19.478613 kubelet[2510]: I0130 13:06:19.478449 2510 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6"
Jan 30 13:06:19.478613 kubelet[2510]: I0130 13:06:19.478488 2510 status_manager.go:217] "Starting to sync pod status with apiserver"
Jan 30 13:06:19.478613 kubelet[2510]: I0130 13:06:19.478516 2510 kubelet.go:2321] "Starting kubelet main sync loop"
Jan 30 13:06:19.478613 kubelet[2510]: E0130 13:06:19.478567 2510 kubelet.go:2345] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
Jan 30 13:06:19.512225 kubelet[2510]: I0130 13:06:19.512192 2510 cpu_manager.go:214] "Starting CPU manager" policy="none"
Jan 30 13:06:19.512225 kubelet[2510]: I0130 13:06:19.512211 2510 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s"
Jan 30 13:06:19.512225 kubelet[2510]: I0130 13:06:19.512235 2510 state_mem.go:36] "Initialized new in-memory state store"
Jan 30 13:06:19.512437 kubelet[2510]: I0130 13:06:19.512419 2510 state_mem.go:88] "Updated default CPUSet" cpuSet=""
Jan 30 13:06:19.512479 kubelet[2510]: I0130 13:06:19.512437 2510 state_mem.go:96] "Updated CPUSet assignments" assignments={}
Jan 30 13:06:19.512479 kubelet[2510]: I0130 13:06:19.512455 2510 policy_none.go:49] "None policy: Start"
Jan 30 13:06:19.513158 kubelet[2510]: I0130 13:06:19.513113 2510 memory_manager.go:170] "Starting memorymanager" policy="None"
Jan 30 13:06:19.513158 kubelet[2510]: I0130 13:06:19.513144 2510 state_mem.go:35] "Initializing new in-memory state store"
Jan 30 13:06:19.513350 kubelet[2510]: I0130 13:06:19.513323 2510 state_mem.go:75] "Updated machine memory state"
Jan 30 13:06:19.518527 kubelet[2510]: I0130 13:06:19.518498 2510 manager.go:510] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found"
Jan 30 13:06:19.518846 kubelet[2510]: I0130 13:06:19.518684 2510 eviction_manager.go:189] "Eviction manager: starting control loop"
Jan 30 13:06:19.518846 kubelet[2510]: I0130 13:06:19.518695 2510 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s"
Jan 30 13:06:19.519743 kubelet[2510]: I0130 13:06:19.519484 2510 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
Jan 30 13:06:19.625820 kubelet[2510]: I0130 13:06:19.625783 2510 kubelet_node_status.go:72] "Attempting to register node" node="localhost"
Jan 30 13:06:19.635814 kubelet[2510]: I0130 13:06:19.635704 2510 kubelet_node_status.go:111] "Node was previously registered" node="localhost"
Jan 30 13:06:19.635814 kubelet[2510]: I0130 13:06:19.635823 2510 kubelet_node_status.go:75] "Successfully registered node" node="localhost"
Jan 30 13:06:19.658077 kubelet[2510]: I0130 13:06:19.658028 2510 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/c988230cd0d49eebfaffbefbe8c74a10-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"c988230cd0d49eebfaffbefbe8c74a10\") " pod="kube-system/kube-scheduler-localhost"
Jan 30 13:06:19.658077 kubelet[2510]: I0130 13:06:19.658082 2510 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/4ae81c08ab06ba6f6b3d48ff94c164fb-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"4ae81c08ab06ba6f6b3d48ff94c164fb\") " pod="kube-system/kube-apiserver-localhost"
Jan 30 13:06:19.658451 kubelet[2510]: I0130 13:06:19.658110 2510 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/fa5289f3c0ba7f1736282e713231ffc5-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"fa5289f3c0ba7f1736282e713231ffc5\") " pod="kube-system/kube-controller-manager-localhost"
Jan 30 13:06:19.658451 kubelet[2510]: I0130 13:06:19.658126 2510 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/fa5289f3c0ba7f1736282e713231ffc5-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"fa5289f3c0ba7f1736282e713231ffc5\") " pod="kube-system/kube-controller-manager-localhost"
Jan 30 13:06:19.658451 kubelet[2510]: I0130 13:06:19.658171 2510 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/fa5289f3c0ba7f1736282e713231ffc5-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"fa5289f3c0ba7f1736282e713231ffc5\") " pod="kube-system/kube-controller-manager-localhost"
Jan 30 13:06:19.658451 kubelet[2510]: I0130 13:06:19.658201 2510 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/fa5289f3c0ba7f1736282e713231ffc5-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"fa5289f3c0ba7f1736282e713231ffc5\") " pod="kube-system/kube-controller-manager-localhost"
Jan 30 13:06:19.658451 kubelet[2510]: I0130 13:06:19.658248 2510 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/4ae81c08ab06ba6f6b3d48ff94c164fb-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"4ae81c08ab06ba6f6b3d48ff94c164fb\") " pod="kube-system/kube-apiserver-localhost"
Jan 30 13:06:19.658590 kubelet[2510]: I0130 13:06:19.658266 2510 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/4ae81c08ab06ba6f6b3d48ff94c164fb-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"4ae81c08ab06ba6f6b3d48ff94c164fb\") " pod="kube-system/kube-apiserver-localhost"
Jan 30 13:06:19.658590 kubelet[2510]: I0130 13:06:19.658282 2510 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/fa5289f3c0ba7f1736282e713231ffc5-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"fa5289f3c0ba7f1736282e713231ffc5\") " pod="kube-system/kube-controller-manager-localhost"
Jan 30 13:06:19.888363 kubelet[2510]: E0130 13:06:19.888327 2510 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 30 13:06:19.888487 kubelet[2510]: E0130 13:06:19.888412 2510 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 30 13:06:19.888487 kubelet[2510]: E0130 13:06:19.888449 2510 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 30 13:06:20.447920 kubelet[2510]: I0130 13:06:20.447874 2510 apiserver.go:52] "Watching apiserver"
Jan 30 13:06:20.457500 kubelet[2510]: I0130 13:06:20.457458 2510 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world"
Jan 30 13:06:20.497016 kubelet[2510]: E0130 13:06:20.496977 2510 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 30 13:06:20.497016 kubelet[2510]: E0130 13:06:20.496989 2510 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 30 13:06:20.497312 kubelet[2510]: E0130 13:06:20.497125 2510 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 30 13:06:20.555079 kubelet[2510]: I0130 13:06:20.554999 2510 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=1.554979813 podStartE2EDuration="1.554979813s" podCreationTimestamp="2025-01-30 13:06:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-30 13:06:20.537354938 +0000 UTC m=+1.149698753" watchObservedRunningTime="2025-01-30 13:06:20.554979813 +0000 UTC m=+1.167323628"
Jan 30 13:06:20.564678 kubelet[2510]: I0130 13:06:20.564544 2510 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=1.5645281070000001 podStartE2EDuration="1.564528107s" podCreationTimestamp="2025-01-30 13:06:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-30 13:06:20.555149928 +0000 UTC m=+1.167493743" watchObservedRunningTime="2025-01-30 13:06:20.564528107 +0000 UTC m=+1.176871922"
Jan 30 13:06:20.566077 kubelet[2510]: I0130 13:06:20.565740 2510 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=1.565725389 podStartE2EDuration="1.565725389s" podCreationTimestamp="2025-01-30 13:06:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-30 13:06:20.5641334 +0000 UTC m=+1.176477215" watchObservedRunningTime="2025-01-30 13:06:20.565725389 +0000 UTC m=+1.178069244"
Jan 30 13:06:20.851315 sudo[1609]: pam_unix(sudo:session): session closed for user root
Jan 30 13:06:20.852954 sshd[1608]: Connection closed by 10.0.0.1 port 54718
Jan 30 13:06:20.853298 sshd-session[1606]: pam_unix(sshd:session): session closed for user core
Jan 30 13:06:20.856952 systemd[1]: sshd@4-10.0.0.110:22-10.0.0.1:54718.service: Deactivated successfully.
Jan 30 13:06:20.859426 systemd[1]: session-5.scope: Deactivated successfully.
Jan 30 13:06:20.859741 systemd[1]: session-5.scope: Consumed 8.095s CPU time, 155.4M memory peak, 0B memory swap peak.
Jan 30 13:06:20.860613 systemd-logind[1453]: Session 5 logged out. Waiting for processes to exit.
Jan 30 13:06:20.861520 systemd-logind[1453]: Removed session 5.
Jan 30 13:06:21.499522 kubelet[2510]: E0130 13:06:21.499095 2510 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 30 13:06:21.499522 kubelet[2510]: E0130 13:06:21.499141 2510 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 30 13:06:23.752919 kubelet[2510]: I0130 13:06:23.752873 2510 kuberuntime_manager.go:1633] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24"
Jan 30 13:06:23.753332 containerd[1468]: time="2025-01-30T13:06:23.753169331Z" level=info msg="No cni config template is specified, wait for other system components to drop the config."
Jan 30 13:06:23.753542 kubelet[2510]: I0130 13:06:23.753342 2510 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24"
Jan 30 13:06:24.533333 kubelet[2510]: E0130 13:06:24.532887 2510 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 30 13:06:24.655778 systemd[1]: Created slice kubepods-besteffort-pod097b59d7_a511_4d25_bd5f_4dec0719db1b.slice - libcontainer container kubepods-besteffort-pod097b59d7_a511_4d25_bd5f_4dec0719db1b.slice.
Jan 30 13:06:24.680210 systemd[1]: Created slice kubepods-burstable-podb8c1cf7a_225c_4bdd_9d23_48c2881e64d5.slice - libcontainer container kubepods-burstable-podb8c1cf7a_225c_4bdd_9d23_48c2881e64d5.slice.
Jan 30 13:06:24.689757 kubelet[2510]: I0130 13:06:24.689722 2510 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni\" (UniqueName: \"kubernetes.io/host-path/b8c1cf7a-225c-4bdd-9d23-48c2881e64d5-cni\") pod \"kube-flannel-ds-grwzx\" (UID: \"b8c1cf7a-225c-4bdd-9d23-48c2881e64d5\") " pod="kube-flannel/kube-flannel-ds-grwzx"
Jan 30 13:06:24.689907 kubelet[2510]: I0130 13:06:24.689808 2510 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flannel-cfg\" (UniqueName: \"kubernetes.io/configmap/b8c1cf7a-225c-4bdd-9d23-48c2881e64d5-flannel-cfg\") pod \"kube-flannel-ds-grwzx\" (UID: \"b8c1cf7a-225c-4bdd-9d23-48c2881e64d5\") " pod="kube-flannel/kube-flannel-ds-grwzx"
Jan 30 13:06:24.689907 kubelet[2510]: I0130 13:06:24.689848 2510 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zlsv5\" (UniqueName: \"kubernetes.io/projected/b8c1cf7a-225c-4bdd-9d23-48c2881e64d5-kube-api-access-zlsv5\") pod \"kube-flannel-ds-grwzx\" (UID: \"b8c1cf7a-225c-4bdd-9d23-48c2881e64d5\") " pod="kube-flannel/kube-flannel-ds-grwzx"
Jan 30 13:06:24.689907 kubelet[2510]: I0130 13:06:24.689867 2510 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/097b59d7-a511-4d25-bd5f-4dec0719db1b-kube-proxy\") pod \"kube-proxy-mx55x\" (UID: \"097b59d7-a511-4d25-bd5f-4dec0719db1b\") " pod="kube-system/kube-proxy-mx55x"
Jan 30 13:06:24.690007 kubelet[2510]: I0130 13:06:24.689970 2510 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/097b59d7-a511-4d25-bd5f-4dec0719db1b-xtables-lock\") pod \"kube-proxy-mx55x\" (UID: \"097b59d7-a511-4d25-bd5f-4dec0719db1b\") " pod="kube-system/kube-proxy-mx55x"
Jan 30 13:06:24.690007 kubelet[2510]: I0130 13:06:24.689994 2510 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/097b59d7-a511-4d25-bd5f-4dec0719db1b-lib-modules\") pod \"kube-proxy-mx55x\" (UID: \"097b59d7-a511-4d25-bd5f-4dec0719db1b\") " pod="kube-system/kube-proxy-mx55x"
Jan 30 13:06:24.690051 kubelet[2510]: I0130 13:06:24.690011 2510 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kk2rx\" (UniqueName: \"kubernetes.io/projected/097b59d7-a511-4d25-bd5f-4dec0719db1b-kube-api-access-kk2rx\") pod \"kube-proxy-mx55x\" (UID: \"097b59d7-a511-4d25-bd5f-4dec0719db1b\") " pod="kube-system/kube-proxy-mx55x"
Jan 30 13:06:24.690051 kubelet[2510]: I0130 13:06:24.690027 2510 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run\" (UniqueName: \"kubernetes.io/host-path/b8c1cf7a-225c-4bdd-9d23-48c2881e64d5-run\") pod \"kube-flannel-ds-grwzx\" (UID: \"b8c1cf7a-225c-4bdd-9d23-48c2881e64d5\") " pod="kube-flannel/kube-flannel-ds-grwzx"
Jan 30 13:06:24.690158 kubelet[2510]: I0130 13:06:24.690085 2510 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-plugin\" (UniqueName: \"kubernetes.io/host-path/b8c1cf7a-225c-4bdd-9d23-48c2881e64d5-cni-plugin\") pod \"kube-flannel-ds-grwzx\" (UID: \"b8c1cf7a-225c-4bdd-9d23-48c2881e64d5\") " pod="kube-flannel/kube-flannel-ds-grwzx"
Jan 30 13:06:24.690158 kubelet[2510]: I0130 13:06:24.690144 2510 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/b8c1cf7a-225c-4bdd-9d23-48c2881e64d5-xtables-lock\") pod \"kube-flannel-ds-grwzx\" (UID: \"b8c1cf7a-225c-4bdd-9d23-48c2881e64d5\") " pod="kube-flannel/kube-flannel-ds-grwzx"
Jan 30 13:06:24.975283 kubelet[2510]: E0130 13:06:24.975161 2510 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 30 13:06:24.976227 containerd[1468]: time="2025-01-30T13:06:24.976169141Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-mx55x,Uid:097b59d7-a511-4d25-bd5f-4dec0719db1b,Namespace:kube-system,Attempt:0,}"
Jan 30 13:06:24.983240 kubelet[2510]: E0130 13:06:24.983215 2510 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 30 13:06:24.983603 containerd[1468]: time="2025-01-30T13:06:24.983568397Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-flannel-ds-grwzx,Uid:b8c1cf7a-225c-4bdd-9d23-48c2881e64d5,Namespace:kube-flannel,Attempt:0,}"
Jan 30 13:06:25.007619 containerd[1468]: time="2025-01-30T13:06:25.007516014Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jan 30 13:06:25.007841 containerd[1468]: time="2025-01-30T13:06:25.007607252Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jan 30 13:06:25.007841 containerd[1468]: time="2025-01-30T13:06:25.007637692Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 30 13:06:25.007841 containerd[1468]: time="2025-01-30T13:06:25.007734249Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 30 13:06:25.020012 containerd[1468]: time="2025-01-30T13:06:25.019658013Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jan 30 13:06:25.020012 containerd[1468]: time="2025-01-30T13:06:25.019717771Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jan 30 13:06:25.020012 containerd[1468]: time="2025-01-30T13:06:25.019733331Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 30 13:06:25.020971 containerd[1468]: time="2025-01-30T13:06:25.020896144Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 30 13:06:25.030070 systemd[1]: Started cri-containerd-d46addd93fcff132d62b741fe226fc9126e7e9a26058aa1455e0a514761212e7.scope - libcontainer container d46addd93fcff132d62b741fe226fc9126e7e9a26058aa1455e0a514761212e7.
Jan 30 13:06:25.037254 systemd[1]: Started cri-containerd-490496573267b1b659b8476f5cbb531d77fdf73c89a0e0913c10a95626813835.scope - libcontainer container 490496573267b1b659b8476f5cbb531d77fdf73c89a0e0913c10a95626813835.
Jan 30 13:06:25.066301 containerd[1468]: time="2025-01-30T13:06:25.066261850Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-mx55x,Uid:097b59d7-a511-4d25-bd5f-4dec0719db1b,Namespace:kube-system,Attempt:0,} returns sandbox id \"490496573267b1b659b8476f5cbb531d77fdf73c89a0e0913c10a95626813835\""
Jan 30 13:06:25.066301 containerd[1468]: time="2025-01-30T13:06:25.066268970Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-flannel-ds-grwzx,Uid:b8c1cf7a-225c-4bdd-9d23-48c2881e64d5,Namespace:kube-flannel,Attempt:0,} returns sandbox id \"d46addd93fcff132d62b741fe226fc9126e7e9a26058aa1455e0a514761212e7\""
Jan 30 13:06:25.067477 kubelet[2510]: E0130 13:06:25.067261 2510 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 30 13:06:25.067477 kubelet[2510]: E0130 13:06:25.067390 2510 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 30 13:06:25.069118 containerd[1468]: time="2025-01-30T13:06:25.069076345Z" level=info msg="PullImage \"docker.io/flannel/flannel-cni-plugin:v1.1.2\""
Jan 30 13:06:25.069837 containerd[1468]: time="2025-01-30T13:06:25.069749369Z" level=info msg="CreateContainer within sandbox \"490496573267b1b659b8476f5cbb531d77fdf73c89a0e0913c10a95626813835\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}"
Jan 30 13:06:25.089624 containerd[1468]: time="2025-01-30T13:06:25.089516350Z" level=info msg="CreateContainer within sandbox \"490496573267b1b659b8476f5cbb531d77fdf73c89a0e0913c10a95626813835\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"b6eb11379d0c6ec906388c5a6deea41eff529923aa04a456079b47b5eae7f49d\""
Jan 30 13:06:25.090303 containerd[1468]: time="2025-01-30T13:06:25.090276053Z" level=info msg="StartContainer for \"b6eb11379d0c6ec906388c5a6deea41eff529923aa04a456079b47b5eae7f49d\""
Jan 30 13:06:25.117966 systemd[1]: Started cri-containerd-b6eb11379d0c6ec906388c5a6deea41eff529923aa04a456079b47b5eae7f49d.scope - libcontainer container b6eb11379d0c6ec906388c5a6deea41eff529923aa04a456079b47b5eae7f49d.
Jan 30 13:06:25.145149 containerd[1468]: time="2025-01-30T13:06:25.145101699Z" level=info msg="StartContainer for \"b6eb11379d0c6ec906388c5a6deea41eff529923aa04a456079b47b5eae7f49d\" returns successfully"
Jan 30 13:06:25.506828 kubelet[2510]: E0130 13:06:25.506457 2510 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 30 13:06:25.508172 kubelet[2510]: E0130 13:06:25.508151 2510 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 30 13:06:25.527923 kubelet[2510]: I0130 13:06:25.527856 2510 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-mx55x" podStartSLOduration=1.5278386510000002 podStartE2EDuration="1.527838651s" podCreationTimestamp="2025-01-30 13:06:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-30 13:06:25.51924569 +0000 UTC m=+6.131589505" watchObservedRunningTime="2025-01-30 13:06:25.527838651 +0000 UTC m=+6.140182466"
Jan 30 13:06:26.132840 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3347063486.mount: Deactivated successfully.
Jan 30 13:06:26.167813 containerd[1468]: time="2025-01-30T13:06:26.167748432Z" level=info msg="ImageCreate event name:\"docker.io/flannel/flannel-cni-plugin:v1.1.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 30 13:06:26.168326 containerd[1468]: time="2025-01-30T13:06:26.168288780Z" level=info msg="stop pulling image docker.io/flannel/flannel-cni-plugin:v1.1.2: active requests=0, bytes read=3673532"
Jan 30 13:06:26.169799 containerd[1468]: time="2025-01-30T13:06:26.169459235Z" level=info msg="ImageCreate event name:\"sha256:b45062ceea496fc421523388cb91166abc7715a15c2e2cbab4e6f8c9d5dc0ab8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 30 13:06:26.172003 containerd[1468]: time="2025-01-30T13:06:26.171966420Z" level=info msg="ImageCreate event name:\"docker.io/flannel/flannel-cni-plugin@sha256:bf4b62b131666d040f35a327d906ee5a3418280b68a88d9b9c7e828057210443\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 30 13:06:26.172920 containerd[1468]: time="2025-01-30T13:06:26.172830041Z" level=info msg="Pulled image \"docker.io/flannel/flannel-cni-plugin:v1.1.2\" with image id \"sha256:b45062ceea496fc421523388cb91166abc7715a15c2e2cbab4e6f8c9d5dc0ab8\", repo tag \"docker.io/flannel/flannel-cni-plugin:v1.1.2\", repo digest \"docker.io/flannel/flannel-cni-plugin@sha256:bf4b62b131666d040f35a327d906ee5a3418280b68a88d9b9c7e828057210443\", size \"3662650\" in 1.10359454s"
Jan 30 13:06:26.172920 containerd[1468]: time="2025-01-30T13:06:26.172864720Z" level=info msg="PullImage \"docker.io/flannel/flannel-cni-plugin:v1.1.2\" returns image reference \"sha256:b45062ceea496fc421523388cb91166abc7715a15c2e2cbab4e6f8c9d5dc0ab8\""
Jan 30 13:06:26.175410 containerd[1468]: time="2025-01-30T13:06:26.175121031Z" level=info msg="CreateContainer within sandbox \"d46addd93fcff132d62b741fe226fc9126e7e9a26058aa1455e0a514761212e7\" for container &ContainerMetadata{Name:install-cni-plugin,Attempt:0,}"
Jan 30 13:06:26.187287 containerd[1468]: time="2025-01-30T13:06:26.187235208Z" level=info msg="CreateContainer within sandbox \"d46addd93fcff132d62b741fe226fc9126e7e9a26058aa1455e0a514761212e7\" for &ContainerMetadata{Name:install-cni-plugin,Attempt:0,} returns container id \"8fdbb985fa19b4ca82ef83a9123611062e9e7bbda6926e9a310d46a09ac5aab8\""
Jan 30 13:06:26.187990 containerd[1468]: time="2025-01-30T13:06:26.187919633Z" level=info msg="StartContainer for \"8fdbb985fa19b4ca82ef83a9123611062e9e7bbda6926e9a310d46a09ac5aab8\""
Jan 30 13:06:26.214969 systemd[1]: Started cri-containerd-8fdbb985fa19b4ca82ef83a9123611062e9e7bbda6926e9a310d46a09ac5aab8.scope - libcontainer container 8fdbb985fa19b4ca82ef83a9123611062e9e7bbda6926e9a310d46a09ac5aab8.
Jan 30 13:06:26.241559 containerd[1468]: time="2025-01-30T13:06:26.241489626Z" level=info msg="StartContainer for \"8fdbb985fa19b4ca82ef83a9123611062e9e7bbda6926e9a310d46a09ac5aab8\" returns successfully"
Jan 30 13:06:26.247847 systemd[1]: cri-containerd-8fdbb985fa19b4ca82ef83a9123611062e9e7bbda6926e9a310d46a09ac5aab8.scope: Deactivated successfully.
Jan 30 13:06:26.284517 containerd[1468]: time="2025-01-30T13:06:26.284448971Z" level=info msg="shim disconnected" id=8fdbb985fa19b4ca82ef83a9123611062e9e7bbda6926e9a310d46a09ac5aab8 namespace=k8s.io Jan 30 13:06:26.284517 containerd[1468]: time="2025-01-30T13:06:26.284501130Z" level=warning msg="cleaning up after shim disconnected" id=8fdbb985fa19b4ca82ef83a9123611062e9e7bbda6926e9a310d46a09ac5aab8 namespace=k8s.io Jan 30 13:06:26.284517 containerd[1468]: time="2025-01-30T13:06:26.284509530Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 30 13:06:26.511279 kubelet[2510]: E0130 13:06:26.511241 2510 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 30 13:06:26.517790 containerd[1468]: time="2025-01-30T13:06:26.514639999Z" level=info msg="PullImage \"docker.io/flannel/flannel:v0.22.0\"" Jan 30 13:06:27.578354 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1385444171.mount: Deactivated successfully. 
Jan 30 13:06:27.594782 kubelet[2510]: E0130 13:06:27.590491 2510 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 30 13:06:28.119743 containerd[1468]: time="2025-01-30T13:06:28.119692968Z" level=info msg="ImageCreate event name:\"docker.io/flannel/flannel:v0.22.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:06:28.120835 containerd[1468]: time="2025-01-30T13:06:28.120579551Z" level=info msg="stop pulling image docker.io/flannel/flannel:v0.22.0: active requests=0, bytes read=26874261" Jan 30 13:06:28.122068 containerd[1468]: time="2025-01-30T13:06:28.122025683Z" level=info msg="ImageCreate event name:\"sha256:b3d1319ea6da12d4a1dd21a923f6a71f942a7b6e2c4763b8a3cca0725fb8aadf\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:06:28.125679 containerd[1468]: time="2025-01-30T13:06:28.125624174Z" level=info msg="ImageCreate event name:\"docker.io/flannel/flannel@sha256:5f83f1243057458e27249157394e3859cf31cc075354af150d497f2ebc8b54db\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:06:28.126933 containerd[1468]: time="2025-01-30T13:06:28.126824071Z" level=info msg="Pulled image \"docker.io/flannel/flannel:v0.22.0\" with image id \"sha256:b3d1319ea6da12d4a1dd21a923f6a71f942a7b6e2c4763b8a3cca0725fb8aadf\", repo tag \"docker.io/flannel/flannel:v0.22.0\", repo digest \"docker.io/flannel/flannel@sha256:5f83f1243057458e27249157394e3859cf31cc075354af150d497f2ebc8b54db\", size \"26863435\" in 1.612141832s" Jan 30 13:06:28.126933 containerd[1468]: time="2025-01-30T13:06:28.126895590Z" level=info msg="PullImage \"docker.io/flannel/flannel:v0.22.0\" returns image reference \"sha256:b3d1319ea6da12d4a1dd21a923f6a71f942a7b6e2c4763b8a3cca0725fb8aadf\"" Jan 30 13:06:28.130098 containerd[1468]: time="2025-01-30T13:06:28.129973371Z" level=info msg="CreateContainer within sandbox 
\"d46addd93fcff132d62b741fe226fc9126e7e9a26058aa1455e0a514761212e7\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Jan 30 13:06:28.139799 containerd[1468]: time="2025-01-30T13:06:28.139616026Z" level=info msg="CreateContainer within sandbox \"d46addd93fcff132d62b741fe226fc9126e7e9a26058aa1455e0a514761212e7\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"b0c778359d5a67588f473b24f5d788ebcbe007d3066ad939686f657f030abe1f\"" Jan 30 13:06:28.140286 containerd[1468]: time="2025-01-30T13:06:28.140261414Z" level=info msg="StartContainer for \"b0c778359d5a67588f473b24f5d788ebcbe007d3066ad939686f657f030abe1f\"" Jan 30 13:06:28.169993 systemd[1]: Started cri-containerd-b0c778359d5a67588f473b24f5d788ebcbe007d3066ad939686f657f030abe1f.scope - libcontainer container b0c778359d5a67588f473b24f5d788ebcbe007d3066ad939686f657f030abe1f. Jan 30 13:06:28.197571 systemd[1]: cri-containerd-b0c778359d5a67588f473b24f5d788ebcbe007d3066ad939686f657f030abe1f.scope: Deactivated successfully. Jan 30 13:06:28.213068 containerd[1468]: time="2025-01-30T13:06:28.211984962Z" level=info msg="StartContainer for \"b0c778359d5a67588f473b24f5d788ebcbe007d3066ad939686f657f030abe1f\" returns successfully" Jan 30 13:06:28.229818 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-b0c778359d5a67588f473b24f5d788ebcbe007d3066ad939686f657f030abe1f-rootfs.mount: Deactivated successfully. 
Jan 30 13:06:28.258561 kubelet[2510]: I0130 13:06:28.258529 2510 kubelet_node_status.go:488] "Fast updating node status as it just became ready" Jan 30 13:06:28.317688 kubelet[2510]: I0130 13:06:28.317631 2510 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/1d120a47-54d3-412c-8002-8bf53a9bc92f-config-volume\") pod \"coredns-6f6b679f8f-gw48z\" (UID: \"1d120a47-54d3-412c-8002-8bf53a9bc92f\") " pod="kube-system/coredns-6f6b679f8f-gw48z" Jan 30 13:06:28.317688 kubelet[2510]: I0130 13:06:28.317685 2510 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wm7k9\" (UniqueName: \"kubernetes.io/projected/73797064-ee85-42c8-a318-e23cb83ea869-kube-api-access-wm7k9\") pod \"coredns-6f6b679f8f-9mcsk\" (UID: \"73797064-ee85-42c8-a318-e23cb83ea869\") " pod="kube-system/coredns-6f6b679f8f-9mcsk" Jan 30 13:06:28.317903 kubelet[2510]: I0130 13:06:28.317707 2510 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/73797064-ee85-42c8-a318-e23cb83ea869-config-volume\") pod \"coredns-6f6b679f8f-9mcsk\" (UID: \"73797064-ee85-42c8-a318-e23cb83ea869\") " pod="kube-system/coredns-6f6b679f8f-9mcsk" Jan 30 13:06:28.317903 kubelet[2510]: I0130 13:06:28.317735 2510 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qrkgt\" (UniqueName: \"kubernetes.io/projected/1d120a47-54d3-412c-8002-8bf53a9bc92f-kube-api-access-qrkgt\") pod \"coredns-6f6b679f8f-gw48z\" (UID: \"1d120a47-54d3-412c-8002-8bf53a9bc92f\") " pod="kube-system/coredns-6f6b679f8f-gw48z" Jan 30 13:06:28.323849 containerd[1468]: time="2025-01-30T13:06:28.323184514Z" level=info msg="shim disconnected" id=b0c778359d5a67588f473b24f5d788ebcbe007d3066ad939686f657f030abe1f namespace=k8s.io Jan 30 13:06:28.323849 
containerd[1468]: time="2025-01-30T13:06:28.323239953Z" level=warning msg="cleaning up after shim disconnected" id=b0c778359d5a67588f473b24f5d788ebcbe007d3066ad939686f657f030abe1f namespace=k8s.io Jan 30 13:06:28.323849 containerd[1468]: time="2025-01-30T13:06:28.323248072Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 30 13:06:28.327734 systemd[1]: Created slice kubepods-burstable-pod1d120a47_54d3_412c_8002_8bf53a9bc92f.slice - libcontainer container kubepods-burstable-pod1d120a47_54d3_412c_8002_8bf53a9bc92f.slice. Jan 30 13:06:28.332998 systemd[1]: Created slice kubepods-burstable-pod73797064_ee85_42c8_a318_e23cb83ea869.slice - libcontainer container kubepods-burstable-pod73797064_ee85_42c8_a318_e23cb83ea869.slice. Jan 30 13:06:28.516427 kubelet[2510]: E0130 13:06:28.516386 2510 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 30 13:06:28.517340 kubelet[2510]: E0130 13:06:28.516586 2510 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 30 13:06:28.519824 containerd[1468]: time="2025-01-30T13:06:28.519639754Z" level=info msg="CreateContainer within sandbox \"d46addd93fcff132d62b741fe226fc9126e7e9a26058aa1455e0a514761212e7\" for container &ContainerMetadata{Name:kube-flannel,Attempt:0,}" Jan 30 13:06:28.535319 containerd[1468]: time="2025-01-30T13:06:28.535258296Z" level=info msg="CreateContainer within sandbox \"d46addd93fcff132d62b741fe226fc9126e7e9a26058aa1455e0a514761212e7\" for &ContainerMetadata{Name:kube-flannel,Attempt:0,} returns container id \"d5e7a31a30b9afd3f41fd1047f2fb4ad9ed7c01b8e0d52e2d50c8dd627464b0e\"" Jan 30 13:06:28.536088 containerd[1468]: time="2025-01-30T13:06:28.536065400Z" level=info msg="StartContainer for \"d5e7a31a30b9afd3f41fd1047f2fb4ad9ed7c01b8e0d52e2d50c8dd627464b0e\"" 
Jan 30 13:06:28.562978 systemd[1]: Started cri-containerd-d5e7a31a30b9afd3f41fd1047f2fb4ad9ed7c01b8e0d52e2d50c8dd627464b0e.scope - libcontainer container d5e7a31a30b9afd3f41fd1047f2fb4ad9ed7c01b8e0d52e2d50c8dd627464b0e. Jan 30 13:06:28.586040 containerd[1468]: time="2025-01-30T13:06:28.584730989Z" level=info msg="StartContainer for \"d5e7a31a30b9afd3f41fd1047f2fb4ad9ed7c01b8e0d52e2d50c8dd627464b0e\" returns successfully" Jan 30 13:06:28.631673 kubelet[2510]: E0130 13:06:28.631331 2510 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 30 13:06:28.632277 containerd[1468]: time="2025-01-30T13:06:28.632108482Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-gw48z,Uid:1d120a47-54d3-412c-8002-8bf53a9bc92f,Namespace:kube-system,Attempt:0,}" Jan 30 13:06:28.635559 kubelet[2510]: E0130 13:06:28.635480 2510 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 30 13:06:28.636089 containerd[1468]: time="2025-01-30T13:06:28.636021927Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-9mcsk,Uid:73797064-ee85-42c8-a318-e23cb83ea869,Namespace:kube-system,Attempt:0,}" Jan 30 13:06:28.729528 containerd[1468]: time="2025-01-30T13:06:28.729443740Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-gw48z,Uid:1d120a47-54d3-412c-8002-8bf53a9bc92f,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"58041b26b428a85f4d7b43d6a879dbdddfbbb2d04eabb2bd7e84e35e802f5c3d\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory" Jan 30 13:06:28.729879 kubelet[2510]: E0130 13:06:28.729821 2510 log.go:32] "RunPodSandbox from runtime service failed" err="rpc 
error: code = Unknown desc = failed to setup network for sandbox \"58041b26b428a85f4d7b43d6a879dbdddfbbb2d04eabb2bd7e84e35e802f5c3d\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory" Jan 30 13:06:28.730022 containerd[1468]: time="2025-01-30T13:06:28.729980569Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-9mcsk,Uid:73797064-ee85-42c8-a318-e23cb83ea869,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"ffa1dc1ac52d4541506089e19c9cfabad98fb6e6003875d512c3c14c40e41502\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory" Jan 30 13:06:28.730133 kubelet[2510]: E0130 13:06:28.729917 2510 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"58041b26b428a85f4d7b43d6a879dbdddfbbb2d04eabb2bd7e84e35e802f5c3d\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory" pod="kube-system/coredns-6f6b679f8f-gw48z" Jan 30 13:06:28.730133 kubelet[2510]: E0130 13:06:28.730112 2510 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"58041b26b428a85f4d7b43d6a879dbdddfbbb2d04eabb2bd7e84e35e802f5c3d\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory" pod="kube-system/coredns-6f6b679f8f-gw48z" Jan 30 13:06:28.730205 kubelet[2510]: E0130 13:06:28.730153 2510 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ffa1dc1ac52d4541506089e19c9cfabad98fb6e6003875d512c3c14c40e41502\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such 
file or directory" Jan 30 13:06:28.730205 kubelet[2510]: E0130 13:06:28.730168 2510 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-6f6b679f8f-gw48z_kube-system(1d120a47-54d3-412c-8002-8bf53a9bc92f)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-6f6b679f8f-gw48z_kube-system(1d120a47-54d3-412c-8002-8bf53a9bc92f)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"58041b26b428a85f4d7b43d6a879dbdddfbbb2d04eabb2bd7e84e35e802f5c3d\\\": plugin type=\\\"flannel\\\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory\"" pod="kube-system/coredns-6f6b679f8f-gw48z" podUID="1d120a47-54d3-412c-8002-8bf53a9bc92f" Jan 30 13:06:28.730286 kubelet[2510]: E0130 13:06:28.730203 2510 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ffa1dc1ac52d4541506089e19c9cfabad98fb6e6003875d512c3c14c40e41502\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory" pod="kube-system/coredns-6f6b679f8f-9mcsk" Jan 30 13:06:28.730286 kubelet[2510]: E0130 13:06:28.730221 2510 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ffa1dc1ac52d4541506089e19c9cfabad98fb6e6003875d512c3c14c40e41502\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory" pod="kube-system/coredns-6f6b679f8f-9mcsk" Jan 30 13:06:28.730286 kubelet[2510]: E0130 13:06:28.730274 2510 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-6f6b679f8f-9mcsk_kube-system(73797064-ee85-42c8-a318-e23cb83ea869)\" with CreatePodSandboxError: \"Failed to create sandbox for pod 
\\\"coredns-6f6b679f8f-9mcsk_kube-system(73797064-ee85-42c8-a318-e23cb83ea869)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"ffa1dc1ac52d4541506089e19c9cfabad98fb6e6003875d512c3c14c40e41502\\\": plugin type=\\\"flannel\\\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory\"" pod="kube-system/coredns-6f6b679f8f-9mcsk" podUID="73797064-ee85-42c8-a318-e23cb83ea869" Jan 30 13:06:29.520682 kubelet[2510]: E0130 13:06:29.520637 2510 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 30 13:06:29.521126 kubelet[2510]: E0130 13:06:29.521037 2510 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 30 13:06:29.534631 kubelet[2510]: I0130 13:06:29.534562 2510 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-flannel/kube-flannel-ds-grwzx" podStartSLOduration=2.473993945 podStartE2EDuration="5.534544132s" podCreationTimestamp="2025-01-30 13:06:24 +0000 UTC" firstStartedPulling="2025-01-30 13:06:25.067879253 +0000 UTC m=+5.680223068" lastFinishedPulling="2025-01-30 13:06:28.12842944 +0000 UTC m=+8.740773255" observedRunningTime="2025-01-30 13:06:29.533479671 +0000 UTC m=+10.145823646" watchObservedRunningTime="2025-01-30 13:06:29.534544132 +0000 UTC m=+10.146887987" Jan 30 13:06:29.761515 systemd-networkd[1389]: flannel.1: Link UP Jan 30 13:06:29.761522 systemd-networkd[1389]: flannel.1: Gained carrier Jan 30 13:06:30.521381 kubelet[2510]: E0130 13:06:30.521343 2510 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 30 13:06:30.795842 kubelet[2510]: E0130 13:06:30.795724 2510 dns.go:153] "Nameserver limits 
exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 30 13:06:31.033906 systemd-networkd[1389]: flannel.1: Gained IPv6LL Jan 30 13:06:32.487918 update_engine[1459]: I20250130 13:06:32.487828 1459 update_attempter.cc:509] Updating boot flags... Jan 30 13:06:32.507796 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 39 scanned by (udev-worker) (3158) Jan 30 13:06:42.479823 kubelet[2510]: E0130 13:06:42.479667 2510 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 30 13:06:42.480595 containerd[1468]: time="2025-01-30T13:06:42.480079265Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-gw48z,Uid:1d120a47-54d3-412c-8002-8bf53a9bc92f,Namespace:kube-system,Attempt:0,}" Jan 30 13:06:42.516503 systemd-networkd[1389]: cni0: Link UP Jan 30 13:06:42.516509 systemd-networkd[1389]: cni0: Gained carrier Jan 30 13:06:42.520018 systemd-networkd[1389]: cni0: Lost carrier Jan 30 13:06:42.529381 systemd-networkd[1389]: veth73c2565b: Link UP Jan 30 13:06:42.532422 kernel: cni0: port 1(veth73c2565b) entered blocking state Jan 30 13:06:42.532529 kernel: cni0: port 1(veth73c2565b) entered disabled state Jan 30 13:06:42.532546 kernel: veth73c2565b: entered allmulticast mode Jan 30 13:06:42.533877 kernel: veth73c2565b: entered promiscuous mode Jan 30 13:06:42.533976 kernel: cni0: port 1(veth73c2565b) entered blocking state Jan 30 13:06:42.538846 kernel: cni0: port 1(veth73c2565b) entered forwarding state Jan 30 13:06:42.540903 kernel: cni0: port 1(veth73c2565b) entered disabled state Jan 30 13:06:42.550271 kernel: cni0: port 1(veth73c2565b) entered blocking state Jan 30 13:06:42.550338 kernel: cni0: port 1(veth73c2565b) entered forwarding state Jan 30 13:06:42.550377 systemd-networkd[1389]: veth73c2565b: Gained carrier Jan 30 
13:06:42.550695 systemd-networkd[1389]: cni0: Gained carrier Jan 30 13:06:42.552437 containerd[1468]: map[string]interface {}{"cniVersion":"0.3.1", "hairpinMode":true, "ipMasq":false, "ipam":map[string]interface {}{"ranges":[][]map[string]interface {}{[]map[string]interface {}{map[string]interface {}{"subnet":"192.168.0.0/24"}}}, "routes":[]types.Route{types.Route{Dst:net.IPNet{IP:net.IP{0xc0, 0xa8, 0x0, 0x0}, Mask:net.IPMask{0xff, 0xff, 0x80, 0x0}}, GW:net.IP(nil)}}, "type":"host-local"}, "isDefaultGateway":true, "isGateway":true, "mtu":(*uint)(0x4000186628), "name":"cbr0", "type":"bridge"} Jan 30 13:06:42.552437 containerd[1468]: delegateAdd: netconf sent to delegate plugin: Jan 30 13:06:42.574950 containerd[1468]: {"cniVersion":"0.3.1","hairpinMode":true,"ipMasq":false,"ipam":{"ranges":[[{"subnet":"192.168.0.0/24"}]],"routes":[{"dst":"192.168.0.0/17"}],"type":"host-local"},"isDefaultGateway":true,"isGateway":true,"mtu":1450,"name":"cbr0","type":"bridge"}time="2025-01-30T13:06:42.574826050Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 30 13:06:42.574950 containerd[1468]: time="2025-01-30T13:06:42.574876650Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 30 13:06:42.574950 containerd[1468]: time="2025-01-30T13:06:42.574887370Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 13:06:42.575202 containerd[1468]: time="2025-01-30T13:06:42.574958489Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 13:06:42.601005 systemd[1]: Started cri-containerd-a971f53eca84563a3a31a765d8a9882e0df2b83e80c19f1c3af82c47f746f2fa.scope - libcontainer container a971f53eca84563a3a31a765d8a9882e0df2b83e80c19f1c3af82c47f746f2fa. 
Jan 30 13:06:42.614483 systemd-resolved[1311]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jan 30 13:06:42.631292 containerd[1468]: time="2025-01-30T13:06:42.631255333Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-gw48z,Uid:1d120a47-54d3-412c-8002-8bf53a9bc92f,Namespace:kube-system,Attempt:0,} returns sandbox id \"a971f53eca84563a3a31a765d8a9882e0df2b83e80c19f1c3af82c47f746f2fa\"" Jan 30 13:06:42.632221 kubelet[2510]: E0130 13:06:42.632145 2510 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 30 13:06:42.634190 containerd[1468]: time="2025-01-30T13:06:42.634148150Z" level=info msg="CreateContainer within sandbox \"a971f53eca84563a3a31a765d8a9882e0df2b83e80c19f1c3af82c47f746f2fa\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jan 30 13:06:42.647051 containerd[1468]: time="2025-01-30T13:06:42.646998971Z" level=info msg="CreateContainer within sandbox \"a971f53eca84563a3a31a765d8a9882e0df2b83e80c19f1c3af82c47f746f2fa\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"2d76e98064cb47847a430dd9c5abbd52e7d640b811eea2120714c5b20ccc5494\"" Jan 30 13:06:42.647548 containerd[1468]: time="2025-01-30T13:06:42.647518407Z" level=info msg="StartContainer for \"2d76e98064cb47847a430dd9c5abbd52e7d640b811eea2120714c5b20ccc5494\"" Jan 30 13:06:42.671976 systemd[1]: Started cri-containerd-2d76e98064cb47847a430dd9c5abbd52e7d640b811eea2120714c5b20ccc5494.scope - libcontainer container 2d76e98064cb47847a430dd9c5abbd52e7d640b811eea2120714c5b20ccc5494. 
Jan 30 13:06:42.698460 containerd[1468]: time="2025-01-30T13:06:42.698400852Z" level=info msg="StartContainer for \"2d76e98064cb47847a430dd9c5abbd52e7d640b811eea2120714c5b20ccc5494\" returns successfully" Jan 30 13:06:43.479616 kubelet[2510]: E0130 13:06:43.479440 2510 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 30 13:06:43.479863 containerd[1468]: time="2025-01-30T13:06:43.479829027Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-9mcsk,Uid:73797064-ee85-42c8-a318-e23cb83ea869,Namespace:kube-system,Attempt:0,}" Jan 30 13:06:43.549266 kubelet[2510]: E0130 13:06:43.548925 2510 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 30 13:06:43.588894 systemd-networkd[1389]: veth0520398d: Link UP Jan 30 13:06:43.591612 kubelet[2510]: I0130 13:06:43.591485 2510 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-6f6b679f8f-gw48z" podStartSLOduration=19.591466255 podStartE2EDuration="19.591466255s" podCreationTimestamp="2025-01-30 13:06:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-30 13:06:43.579446063 +0000 UTC m=+24.191789878" watchObservedRunningTime="2025-01-30 13:06:43.591466255 +0000 UTC m=+24.203810070" Jan 30 13:06:43.593961 kernel: cni0: port 2(veth0520398d) entered blocking state Jan 30 13:06:43.594223 kernel: cni0: port 2(veth0520398d) entered disabled state Jan 30 13:06:43.594283 kernel: veth0520398d: entered allmulticast mode Jan 30 13:06:43.596955 kernel: veth0520398d: entered promiscuous mode Jan 30 13:06:43.605799 kernel: cni0: port 2(veth0520398d) entered blocking state Jan 30 13:06:43.605883 kernel: cni0: port 2(veth0520398d) entered 
forwarding state Jan 30 13:06:43.606233 systemd-networkd[1389]: veth0520398d: Gained carrier Jan 30 13:06:43.608895 containerd[1468]: map[string]interface {}{"cniVersion":"0.3.1", "hairpinMode":true, "ipMasq":false, "ipam":map[string]interface {}{"ranges":[][]map[string]interface {}{[]map[string]interface {}{map[string]interface {}{"subnet":"192.168.0.0/24"}}}, "routes":[]types.Route{types.Route{Dst:net.IPNet{IP:net.IP{0xc0, 0xa8, 0x0, 0x0}, Mask:net.IPMask{0xff, 0xff, 0x80, 0x0}}, GW:net.IP(nil)}}, "type":"host-local"}, "isDefaultGateway":true, "isGateway":true, "mtu":(*uint)(0x40001148e8), "name":"cbr0", "type":"bridge"} Jan 30 13:06:43.608895 containerd[1468]: delegateAdd: netconf sent to delegate plugin: Jan 30 13:06:43.625595 containerd[1468]: {"cniVersion":"0.3.1","hairpinMode":true,"ipMasq":false,"ipam":{"ranges":[[{"subnet":"192.168.0.0/24"}]],"routes":[{"dst":"192.168.0.0/17"}],"type":"host-local"},"isDefaultGateway":true,"isGateway":true,"mtu":1450,"name":"cbr0","type":"bridge"}time="2025-01-30T13:06:43.625371849Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 30 13:06:43.625595 containerd[1468]: time="2025-01-30T13:06:43.625433808Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 30 13:06:43.625595 containerd[1468]: time="2025-01-30T13:06:43.625459088Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 13:06:43.625595 containerd[1468]: time="2025-01-30T13:06:43.625539048Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 13:06:43.654948 systemd[1]: Started cri-containerd-2c2da32463c0224aa275c53b4fd04250ed452d5795cc03440aa6afc7b2fe8c5b.scope - libcontainer container 2c2da32463c0224aa275c53b4fd04250ed452d5795cc03440aa6afc7b2fe8c5b. Jan 30 13:06:43.665213 systemd-resolved[1311]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jan 30 13:06:43.682200 containerd[1468]: time="2025-01-30T13:06:43.682151316Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-9mcsk,Uid:73797064-ee85-42c8-a318-e23cb83ea869,Namespace:kube-system,Attempt:0,} returns sandbox id \"2c2da32463c0224aa275c53b4fd04250ed452d5795cc03440aa6afc7b2fe8c5b\"" Jan 30 13:06:43.682849 kubelet[2510]: E0130 13:06:43.682822 2510 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 30 13:06:43.684905 containerd[1468]: time="2025-01-30T13:06:43.684869456Z" level=info msg="CreateContainer within sandbox \"2c2da32463c0224aa275c53b4fd04250ed452d5795cc03440aa6afc7b2fe8c5b\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jan 30 13:06:43.695457 containerd[1468]: time="2025-01-30T13:06:43.695413820Z" level=info msg="CreateContainer within sandbox \"2c2da32463c0224aa275c53b4fd04250ed452d5795cc03440aa6afc7b2fe8c5b\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"b03457337a5309fb7e2753e6113c1b01ef8d3bf81a81456d72b0cd2d721db4e0\"" Jan 30 13:06:43.695914 containerd[1468]: time="2025-01-30T13:06:43.695865696Z" level=info msg="StartContainer for \"b03457337a5309fb7e2753e6113c1b01ef8d3bf81a81456d72b0cd2d721db4e0\"" Jan 30 13:06:43.722956 systemd[1]: Started cri-containerd-b03457337a5309fb7e2753e6113c1b01ef8d3bf81a81456d72b0cd2d721db4e0.scope - libcontainer container b03457337a5309fb7e2753e6113c1b01ef8d3bf81a81456d72b0cd2d721db4e0. 
Jan 30 13:06:43.753211 containerd[1468]: time="2025-01-30T13:06:43.753172480Z" level=info msg="StartContainer for \"b03457337a5309fb7e2753e6113c1b01ef8d3bf81a81456d72b0cd2d721db4e0\" returns successfully" Jan 30 13:06:43.769897 systemd-networkd[1389]: cni0: Gained IPv6LL Jan 30 13:06:44.409989 systemd-networkd[1389]: veth73c2565b: Gained IPv6LL Jan 30 13:06:44.553787 kubelet[2510]: E0130 13:06:44.553094 2510 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 30 13:06:44.555050 kubelet[2510]: E0130 13:06:44.552871 2510 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 30 13:06:44.631699 kubelet[2510]: I0130 13:06:44.631525 2510 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-6f6b679f8f-9mcsk" podStartSLOduration=20.631502783 podStartE2EDuration="20.631502783s" podCreationTimestamp="2025-01-30 13:06:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-30 13:06:44.582819595 +0000 UTC m=+25.195163410" watchObservedRunningTime="2025-01-30 13:06:44.631502783 +0000 UTC m=+25.243846598" Jan 30 13:06:44.883299 systemd[1]: Started sshd@5-10.0.0.110:22-10.0.0.1:39084.service - OpenSSH per-connection server daemon (10.0.0.1:39084). Jan 30 13:06:44.944484 sshd[3452]: Accepted publickey for core from 10.0.0.1 port 39084 ssh2: RSA SHA256:DFbjE3cliO0t0vQoroiQEd9uw5v6TFYRV953GUOdMNo Jan 30 13:06:44.945249 sshd-session[3452]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 13:06:44.954418 systemd-logind[1453]: New session 6 of user core. Jan 30 13:06:44.961003 systemd[1]: Started session-6.scope - Session 6 of User core. 
Jan 30 13:06:45.093781 sshd[3469]: Connection closed by 10.0.0.1 port 39084 Jan 30 13:06:45.095359 sshd-session[3452]: pam_unix(sshd:session): session closed for user core Jan 30 13:06:45.101002 systemd[1]: sshd@5-10.0.0.110:22-10.0.0.1:39084.service: Deactivated successfully. Jan 30 13:06:45.103944 systemd[1]: session-6.scope: Deactivated successfully. Jan 30 13:06:45.105473 systemd-logind[1453]: Session 6 logged out. Waiting for processes to exit. Jan 30 13:06:45.106589 systemd-logind[1453]: Removed session 6. Jan 30 13:06:45.241897 systemd-networkd[1389]: veth0520398d: Gained IPv6LL Jan 30 13:06:45.554862 kubelet[2510]: E0130 13:06:45.554742 2510 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 30 13:06:45.555904 kubelet[2510]: E0130 13:06:45.555443 2510 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 30 13:06:50.110171 systemd[1]: Started sshd@6-10.0.0.110:22-10.0.0.1:39090.service - OpenSSH per-connection server daemon (10.0.0.1:39090). Jan 30 13:06:50.149421 sshd[3509]: Accepted publickey for core from 10.0.0.1 port 39090 ssh2: RSA SHA256:DFbjE3cliO0t0vQoroiQEd9uw5v6TFYRV953GUOdMNo Jan 30 13:06:50.150689 sshd-session[3509]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 13:06:50.155792 systemd-logind[1453]: New session 7 of user core. Jan 30 13:06:50.172788 systemd[1]: Started session-7.scope - Session 7 of User core. Jan 30 13:06:50.310883 sshd[3511]: Connection closed by 10.0.0.1 port 39090 Jan 30 13:06:50.312052 sshd-session[3509]: pam_unix(sshd:session): session closed for user core Jan 30 13:06:50.315955 systemd[1]: sshd@6-10.0.0.110:22-10.0.0.1:39090.service: Deactivated successfully. Jan 30 13:06:50.318519 systemd[1]: session-7.scope: Deactivated successfully. 
Jan 30 13:06:50.320604 systemd-logind[1453]: Session 7 logged out. Waiting for processes to exit. Jan 30 13:06:50.321745 systemd-logind[1453]: Removed session 7. Jan 30 13:06:55.325418 systemd[1]: Started sshd@7-10.0.0.110:22-10.0.0.1:52180.service - OpenSSH per-connection server daemon (10.0.0.1:52180). Jan 30 13:06:55.371356 sshd[3547]: Accepted publickey for core from 10.0.0.1 port 52180 ssh2: RSA SHA256:DFbjE3cliO0t0vQoroiQEd9uw5v6TFYRV953GUOdMNo Jan 30 13:06:55.372638 sshd-session[3547]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 13:06:55.376828 systemd-logind[1453]: New session 8 of user core. Jan 30 13:06:55.388976 systemd[1]: Started session-8.scope - Session 8 of User core. Jan 30 13:06:55.511624 sshd[3551]: Connection closed by 10.0.0.1 port 52180 Jan 30 13:06:55.511996 sshd-session[3547]: pam_unix(sshd:session): session closed for user core Jan 30 13:06:55.521370 systemd[1]: sshd@7-10.0.0.110:22-10.0.0.1:52180.service: Deactivated successfully. Jan 30 13:06:55.523269 systemd[1]: session-8.scope: Deactivated successfully. Jan 30 13:06:55.524753 systemd-logind[1453]: Session 8 logged out. Waiting for processes to exit. Jan 30 13:06:55.534146 systemd[1]: Started sshd@8-10.0.0.110:22-10.0.0.1:52192.service - OpenSSH per-connection server daemon (10.0.0.1:52192). Jan 30 13:06:55.536062 systemd-logind[1453]: Removed session 8. Jan 30 13:06:55.582250 sshd[3564]: Accepted publickey for core from 10.0.0.1 port 52192 ssh2: RSA SHA256:DFbjE3cliO0t0vQoroiQEd9uw5v6TFYRV953GUOdMNo Jan 30 13:06:55.582620 sshd-session[3564]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 13:06:55.587101 systemd-logind[1453]: New session 9 of user core. Jan 30 13:06:55.596962 systemd[1]: Started session-9.scope - Session 9 of User core. 
Jan 30 13:06:55.790397 sshd[3566]: Connection closed by 10.0.0.1 port 52192
Jan 30 13:06:55.789102 sshd-session[3564]: pam_unix(sshd:session): session closed for user core
Jan 30 13:06:55.800838 systemd[1]: sshd@8-10.0.0.110:22-10.0.0.1:52192.service: Deactivated successfully.
Jan 30 13:06:55.806049 systemd[1]: session-9.scope: Deactivated successfully.
Jan 30 13:06:55.808343 systemd-logind[1453]: Session 9 logged out. Waiting for processes to exit.
Jan 30 13:06:55.814104 systemd[1]: Started sshd@9-10.0.0.110:22-10.0.0.1:52194.service - OpenSSH per-connection server daemon (10.0.0.1:52194).
Jan 30 13:06:55.817845 systemd-logind[1453]: Removed session 9.
Jan 30 13:06:55.868350 sshd[3577]: Accepted publickey for core from 10.0.0.1 port 52194 ssh2: RSA SHA256:DFbjE3cliO0t0vQoroiQEd9uw5v6TFYRV953GUOdMNo
Jan 30 13:06:55.869187 sshd-session[3577]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 30 13:06:55.873042 systemd-logind[1453]: New session 10 of user core.
Jan 30 13:06:55.884979 systemd[1]: Started session-10.scope - Session 10 of User core.
Jan 30 13:06:55.997835 sshd[3579]: Connection closed by 10.0.0.1 port 52194
Jan 30 13:06:55.998179 sshd-session[3577]: pam_unix(sshd:session): session closed for user core
Jan 30 13:06:56.001235 systemd[1]: sshd@9-10.0.0.110:22-10.0.0.1:52194.service: Deactivated successfully.
Jan 30 13:06:56.003060 systemd[1]: session-10.scope: Deactivated successfully.
Jan 30 13:06:56.005492 systemd-logind[1453]: Session 10 logged out. Waiting for processes to exit.
Jan 30 13:06:56.006759 systemd-logind[1453]: Removed session 10.
Jan 30 13:07:01.027200 systemd[1]: Started sshd@10-10.0.0.110:22-10.0.0.1:52206.service - OpenSSH per-connection server daemon (10.0.0.1:52206).
Jan 30 13:07:01.067510 sshd[3612]: Accepted publickey for core from 10.0.0.1 port 52206 ssh2: RSA SHA256:DFbjE3cliO0t0vQoroiQEd9uw5v6TFYRV953GUOdMNo
Jan 30 13:07:01.068932 sshd-session[3612]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 30 13:07:01.075704 systemd-logind[1453]: New session 11 of user core.
Jan 30 13:07:01.088040 systemd[1]: Started session-11.scope - Session 11 of User core.
Jan 30 13:07:01.217692 sshd[3614]: Connection closed by 10.0.0.1 port 52206
Jan 30 13:07:01.218255 sshd-session[3612]: pam_unix(sshd:session): session closed for user core
Jan 30 13:07:01.229392 systemd[1]: sshd@10-10.0.0.110:22-10.0.0.1:52206.service: Deactivated successfully.
Jan 30 13:07:01.232309 systemd[1]: session-11.scope: Deactivated successfully.
Jan 30 13:07:01.234076 systemd-logind[1453]: Session 11 logged out. Waiting for processes to exit.
Jan 30 13:07:01.235198 systemd[1]: Started sshd@11-10.0.0.110:22-10.0.0.1:52222.service - OpenSSH per-connection server daemon (10.0.0.1:52222).
Jan 30 13:07:01.236120 systemd-logind[1453]: Removed session 11.
Jan 30 13:07:01.279147 sshd[3626]: Accepted publickey for core from 10.0.0.1 port 52222 ssh2: RSA SHA256:DFbjE3cliO0t0vQoroiQEd9uw5v6TFYRV953GUOdMNo
Jan 30 13:07:01.280427 sshd-session[3626]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 30 13:07:01.285805 systemd-logind[1453]: New session 12 of user core.
Jan 30 13:07:01.291964 systemd[1]: Started session-12.scope - Session 12 of User core.
Jan 30 13:07:01.482923 sshd[3628]: Connection closed by 10.0.0.1 port 52222
Jan 30 13:07:01.483539 sshd-session[3626]: pam_unix(sshd:session): session closed for user core
Jan 30 13:07:01.490276 systemd[1]: sshd@11-10.0.0.110:22-10.0.0.1:52222.service: Deactivated successfully.
Jan 30 13:07:01.491723 systemd[1]: session-12.scope: Deactivated successfully.
Jan 30 13:07:01.493210 systemd-logind[1453]: Session 12 logged out. Waiting for processes to exit.
Jan 30 13:07:01.494482 systemd[1]: Started sshd@12-10.0.0.110:22-10.0.0.1:52238.service - OpenSSH per-connection server daemon (10.0.0.1:52238).
Jan 30 13:07:01.495131 systemd-logind[1453]: Removed session 12.
Jan 30 13:07:01.534260 sshd[3638]: Accepted publickey for core from 10.0.0.1 port 52238 ssh2: RSA SHA256:DFbjE3cliO0t0vQoroiQEd9uw5v6TFYRV953GUOdMNo
Jan 30 13:07:01.535607 sshd-session[3638]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 30 13:07:01.539320 systemd-logind[1453]: New session 13 of user core.
Jan 30 13:07:01.545941 systemd[1]: Started session-13.scope - Session 13 of User core.
Jan 30 13:07:02.849132 sshd[3640]: Connection closed by 10.0.0.1 port 52238
Jan 30 13:07:02.850041 sshd-session[3638]: pam_unix(sshd:session): session closed for user core
Jan 30 13:07:02.864429 systemd[1]: sshd@12-10.0.0.110:22-10.0.0.1:52238.service: Deactivated successfully.
Jan 30 13:07:02.867359 systemd[1]: session-13.scope: Deactivated successfully.
Jan 30 13:07:02.870649 systemd-logind[1453]: Session 13 logged out. Waiting for processes to exit.
Jan 30 13:07:02.881371 systemd[1]: Started sshd@13-10.0.0.110:22-10.0.0.1:55010.service - OpenSSH per-connection server daemon (10.0.0.1:55010).
Jan 30 13:07:02.884748 systemd-logind[1453]: Removed session 13.
Jan 30 13:07:02.935095 sshd[3657]: Accepted publickey for core from 10.0.0.1 port 55010 ssh2: RSA SHA256:DFbjE3cliO0t0vQoroiQEd9uw5v6TFYRV953GUOdMNo
Jan 30 13:07:02.936576 sshd-session[3657]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 30 13:07:02.940502 systemd-logind[1453]: New session 14 of user core.
Jan 30 13:07:02.949953 systemd[1]: Started session-14.scope - Session 14 of User core.
Jan 30 13:07:03.189709 sshd[3659]: Connection closed by 10.0.0.1 port 55010
Jan 30 13:07:03.189926 sshd-session[3657]: pam_unix(sshd:session): session closed for user core
Jan 30 13:07:03.203826 systemd[1]: sshd@13-10.0.0.110:22-10.0.0.1:55010.service: Deactivated successfully.
Jan 30 13:07:03.205986 systemd[1]: session-14.scope: Deactivated successfully.
Jan 30 13:07:03.208135 systemd-logind[1453]: Session 14 logged out. Waiting for processes to exit.
Jan 30 13:07:03.226126 systemd[1]: Started sshd@14-10.0.0.110:22-10.0.0.1:55012.service - OpenSSH per-connection server daemon (10.0.0.1:55012).
Jan 30 13:07:03.227201 systemd-logind[1453]: Removed session 14.
Jan 30 13:07:03.264755 sshd[3670]: Accepted publickey for core from 10.0.0.1 port 55012 ssh2: RSA SHA256:DFbjE3cliO0t0vQoroiQEd9uw5v6TFYRV953GUOdMNo
Jan 30 13:07:03.266208 sshd-session[3670]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 30 13:07:03.270831 systemd-logind[1453]: New session 15 of user core.
Jan 30 13:07:03.283947 systemd[1]: Started session-15.scope - Session 15 of User core.
Jan 30 13:07:03.407854 sshd[3672]: Connection closed by 10.0.0.1 port 55012
Jan 30 13:07:03.407583 sshd-session[3670]: pam_unix(sshd:session): session closed for user core
Jan 30 13:07:03.411539 systemd[1]: sshd@14-10.0.0.110:22-10.0.0.1:55012.service: Deactivated successfully.
Jan 30 13:07:03.413308 systemd[1]: session-15.scope: Deactivated successfully.
Jan 30 13:07:03.414177 systemd-logind[1453]: Session 15 logged out. Waiting for processes to exit.
Jan 30 13:07:03.415107 systemd-logind[1453]: Removed session 15.
Jan 30 13:07:08.418021 systemd[1]: Started sshd@15-10.0.0.110:22-10.0.0.1:55016.service - OpenSSH per-connection server daemon (10.0.0.1:55016).
Jan 30 13:07:08.461333 sshd[3709]: Accepted publickey for core from 10.0.0.1 port 55016 ssh2: RSA SHA256:DFbjE3cliO0t0vQoroiQEd9uw5v6TFYRV953GUOdMNo
Jan 30 13:07:08.462791 sshd-session[3709]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 30 13:07:08.472791 systemd-logind[1453]: New session 16 of user core.
Jan 30 13:07:08.486555 systemd[1]: Started session-16.scope - Session 16 of User core.
Jan 30 13:07:08.611829 sshd[3711]: Connection closed by 10.0.0.1 port 55016
Jan 30 13:07:08.612385 sshd-session[3709]: pam_unix(sshd:session): session closed for user core
Jan 30 13:07:08.615908 systemd[1]: sshd@15-10.0.0.110:22-10.0.0.1:55016.service: Deactivated successfully.
Jan 30 13:07:08.617749 systemd[1]: session-16.scope: Deactivated successfully.
Jan 30 13:07:08.618508 systemd-logind[1453]: Session 16 logged out. Waiting for processes to exit.
Jan 30 13:07:08.619592 systemd-logind[1453]: Removed session 16.
Jan 30 13:07:13.623474 systemd[1]: Started sshd@16-10.0.0.110:22-10.0.0.1:33888.service - OpenSSH per-connection server daemon (10.0.0.1:33888).
Jan 30 13:07:13.667053 sshd[3745]: Accepted publickey for core from 10.0.0.1 port 33888 ssh2: RSA SHA256:DFbjE3cliO0t0vQoroiQEd9uw5v6TFYRV953GUOdMNo
Jan 30 13:07:13.668559 sshd-session[3745]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 30 13:07:13.672955 systemd-logind[1453]: New session 17 of user core.
Jan 30 13:07:13.688039 systemd[1]: Started session-17.scope - Session 17 of User core.
Jan 30 13:07:13.821146 sshd[3748]: Connection closed by 10.0.0.1 port 33888
Jan 30 13:07:13.822027 sshd-session[3745]: pam_unix(sshd:session): session closed for user core
Jan 30 13:07:13.824969 systemd[1]: sshd@16-10.0.0.110:22-10.0.0.1:33888.service: Deactivated successfully.
Jan 30 13:07:13.827073 systemd[1]: session-17.scope: Deactivated successfully.
Jan 30 13:07:13.828691 systemd-logind[1453]: Session 17 logged out. Waiting for processes to exit.
Jan 30 13:07:13.829818 systemd-logind[1453]: Removed session 17.
Jan 30 13:07:18.850258 systemd[1]: Started sshd@17-10.0.0.110:22-10.0.0.1:33890.service - OpenSSH per-connection server daemon (10.0.0.1:33890).
Jan 30 13:07:18.899625 sshd[3781]: Accepted publickey for core from 10.0.0.1 port 33890 ssh2: RSA SHA256:DFbjE3cliO0t0vQoroiQEd9uw5v6TFYRV953GUOdMNo
Jan 30 13:07:18.901173 sshd-session[3781]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 30 13:07:18.905669 systemd-logind[1453]: New session 18 of user core.
Jan 30 13:07:18.923017 systemd[1]: Started session-18.scope - Session 18 of User core.
Jan 30 13:07:19.048784 sshd[3783]: Connection closed by 10.0.0.1 port 33890
Jan 30 13:07:19.049691 sshd-session[3781]: pam_unix(sshd:session): session closed for user core
Jan 30 13:07:19.053235 systemd[1]: sshd@17-10.0.0.110:22-10.0.0.1:33890.service: Deactivated successfully.
Jan 30 13:07:19.055440 systemd[1]: session-18.scope: Deactivated successfully.
Jan 30 13:07:19.056363 systemd-logind[1453]: Session 18 logged out. Waiting for processes to exit.
Jan 30 13:07:19.057224 systemd-logind[1453]: Removed session 18.