Jan 13 20:25:27.940402 kernel: Booting Linux on physical CPU 0x0000000000 [0x413fd0c1]
Jan 13 20:25:27.940425 kernel: Linux version 6.6.71-flatcar (build@pony-truck.infra.kinvolk.io) (aarch64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p1) 13.3.1 20240614, GNU ld (Gentoo 2.42 p6) 2.42.0) #1 SMP PREEMPT Mon Jan 13 18:57:23 -00 2025
Jan 13 20:25:27.940435 kernel: KASLR enabled
Jan 13 20:25:27.940441 kernel: efi: EFI v2.7 by EDK II
Jan 13 20:25:27.940447 kernel: efi: SMBIOS 3.0=0xdced0000 MEMATTR=0xdbbbf018 ACPI 2.0=0xd9b43018 RNG=0xd9b43a18 MEMRESERVE=0xd9b40d98
Jan 13 20:25:27.940452 kernel: random: crng init done
Jan 13 20:25:27.940459 kernel: secureboot: Secure boot disabled
Jan 13 20:25:27.940465 kernel: ACPI: Early table checksum verification disabled
Jan 13 20:25:27.940471 kernel: ACPI: RSDP 0x00000000D9B43018 000024 (v02 BOCHS )
Jan 13 20:25:27.940479 kernel: ACPI: XSDT 0x00000000D9B43F18 000064 (v01 BOCHS BXPC 00000001 01000013)
Jan 13 20:25:27.940486 kernel: ACPI: FACP 0x00000000D9B43B18 000114 (v06 BOCHS BXPC 00000001 BXPC 00000001)
Jan 13 20:25:27.940493 kernel: ACPI: DSDT 0x00000000D9B41018 0014A2 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Jan 13 20:25:27.940499 kernel: ACPI: APIC 0x00000000D9B43C98 0001A8 (v04 BOCHS BXPC 00000001 BXPC 00000001)
Jan 13 20:25:27.940505 kernel: ACPI: PPTT 0x00000000D9B43098 00009C (v02 BOCHS BXPC 00000001 BXPC 00000001)
Jan 13 20:25:27.940512 kernel: ACPI: GTDT 0x00000000D9B43818 000060 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Jan 13 20:25:27.940521 kernel: ACPI: MCFG 0x00000000D9B43A98 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jan 13 20:25:27.940527 kernel: ACPI: SPCR 0x00000000D9B43918 000050 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Jan 13 20:25:27.940534 kernel: ACPI: DBG2 0x00000000D9B43998 000057 (v00 BOCHS BXPC 00000001 BXPC 00000001)
Jan 13 20:25:27.940541 kernel: ACPI: IORT 0x00000000D9B43198 000080 (v03 BOCHS BXPC 00000001 BXPC 00000001)
Jan 13 20:25:27.940547 kernel: ACPI: SPCR: console: pl011,mmio,0x9000000,9600
Jan 13 20:25:27.940554 kernel: NUMA: Failed to initialise from firmware
Jan 13 20:25:27.940560 kernel: NUMA: Faking a node at [mem 0x0000000040000000-0x00000000dcffffff]
Jan 13 20:25:27.940567 kernel: NUMA: NODE_DATA [mem 0xdc957800-0xdc95cfff]
Jan 13 20:25:27.940573 kernel: Zone ranges:
Jan 13 20:25:27.940580 kernel: DMA [mem 0x0000000040000000-0x00000000dcffffff]
Jan 13 20:25:27.940679 kernel: DMA32 empty
Jan 13 20:25:27.940691 kernel: Normal empty
Jan 13 20:25:27.940698 kernel: Movable zone start for each node
Jan 13 20:25:27.940704 kernel: Early memory node ranges
Jan 13 20:25:27.940710 kernel: node 0: [mem 0x0000000040000000-0x00000000d976ffff]
Jan 13 20:25:27.940716 kernel: node 0: [mem 0x00000000d9770000-0x00000000d9b3ffff]
Jan 13 20:25:27.940722 kernel: node 0: [mem 0x00000000d9b40000-0x00000000dce1ffff]
Jan 13 20:25:27.940729 kernel: node 0: [mem 0x00000000dce20000-0x00000000dceaffff]
Jan 13 20:25:27.940735 kernel: node 0: [mem 0x00000000dceb0000-0x00000000dcebffff]
Jan 13 20:25:27.940741 kernel: node 0: [mem 0x00000000dcec0000-0x00000000dcfdffff]
Jan 13 20:25:27.940747 kernel: node 0: [mem 0x00000000dcfe0000-0x00000000dcffffff]
Jan 13 20:25:27.940754 kernel: Initmem setup node 0 [mem 0x0000000040000000-0x00000000dcffffff]
Jan 13 20:25:27.940764 kernel: On node 0, zone DMA: 12288 pages in unavailable ranges
Jan 13 20:25:27.940770 kernel: psci: probing for conduit method from ACPI.
Jan 13 20:25:27.940777 kernel: psci: PSCIv1.1 detected in firmware.
Jan 13 20:25:27.940786 kernel: psci: Using standard PSCI v0.2 function IDs
Jan 13 20:25:27.940792 kernel: psci: Trusted OS migration not required
Jan 13 20:25:27.940799 kernel: psci: SMC Calling Convention v1.1
Jan 13 20:25:27.940807 kernel: smccc: KVM: hypervisor services detected (0x00000000 0x00000000 0x00000000 0x00000003)
Jan 13 20:25:27.940859 kernel: percpu: Embedded 31 pages/cpu s86696 r8192 d32088 u126976
Jan 13 20:25:27.940866 kernel: pcpu-alloc: s86696 r8192 d32088 u126976 alloc=31*4096
Jan 13 20:25:27.940873 kernel: pcpu-alloc: [0] 0 [0] 1 [0] 2 [0] 3
Jan 13 20:25:27.940883 kernel: Detected PIPT I-cache on CPU0
Jan 13 20:25:27.940890 kernel: CPU features: detected: GIC system register CPU interface
Jan 13 20:25:27.940896 kernel: CPU features: detected: Hardware dirty bit management
Jan 13 20:25:27.940903 kernel: CPU features: detected: Spectre-v4
Jan 13 20:25:27.940910 kernel: CPU features: detected: Spectre-BHB
Jan 13 20:25:27.940916 kernel: CPU features: kernel page table isolation forced ON by KASLR
Jan 13 20:25:27.940925 kernel: CPU features: detected: Kernel page table isolation (KPTI)
Jan 13 20:25:27.940932 kernel: CPU features: detected: ARM erratum 1418040
Jan 13 20:25:27.940939 kernel: CPU features: detected: SSBS not fully self-synchronizing
Jan 13 20:25:27.940946 kernel: alternatives: applying boot alternatives
Jan 13 20:25:27.940954 kernel: Kernel command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected acpi=force verity.usrhash=6ba5f90349644346e4f5fa9305ab5a05339928ee9f4f137665e797727c1fc436
Jan 13 20:25:27.940961 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Jan 13 20:25:27.940967 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Jan 13 20:25:27.940974 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Jan 13 20:25:27.940981 kernel: Fallback order for Node 0: 0
Jan 13 20:25:27.940987 kernel: Built 1 zonelists, mobility grouping on. Total pages: 633024
Jan 13 20:25:27.940994 kernel: Policy zone: DMA
Jan 13 20:25:27.941002 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Jan 13 20:25:27.941009 kernel: software IO TLB: area num 4.
Jan 13 20:25:27.941016 kernel: software IO TLB: mapped [mem 0x00000000d2e00000-0x00000000d6e00000] (64MB)
Jan 13 20:25:27.941023 kernel: Memory: 2386320K/2572288K available (10240K kernel code, 2184K rwdata, 8096K rodata, 39680K init, 897K bss, 185968K reserved, 0K cma-reserved)
Jan 13 20:25:27.941030 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1
Jan 13 20:25:27.941036 kernel: rcu: Preemptible hierarchical RCU implementation.
Jan 13 20:25:27.941044 kernel: rcu: RCU event tracing is enabled.
Jan 13 20:25:27.941050 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4.
Jan 13 20:25:27.941057 kernel: Trampoline variant of Tasks RCU enabled.
Jan 13 20:25:27.941064 kernel: Tracing variant of Tasks RCU enabled.
Jan 13 20:25:27.941070 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Jan 13 20:25:27.941077 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4
Jan 13 20:25:27.941085 kernel: NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0
Jan 13 20:25:27.941092 kernel: GICv3: 256 SPIs implemented
Jan 13 20:25:27.941098 kernel: GICv3: 0 Extended SPIs implemented
Jan 13 20:25:27.941105 kernel: Root IRQ handler: gic_handle_irq
Jan 13 20:25:27.941112 kernel: GICv3: GICv3 features: 16 PPIs, DirectLPI
Jan 13 20:25:27.941119 kernel: GICv3: CPU0: found redistributor 0 region 0:0x00000000080a0000
Jan 13 20:25:27.941126 kernel: ITS [mem 0x08080000-0x0809ffff]
Jan 13 20:25:27.941133 kernel: ITS@0x0000000008080000: allocated 8192 Devices @400c0000 (indirect, esz 8, psz 64K, shr 1)
Jan 13 20:25:27.941140 kernel: ITS@0x0000000008080000: allocated 8192 Interrupt Collections @400d0000 (flat, esz 8, psz 64K, shr 1)
Jan 13 20:25:27.941147 kernel: GICv3: using LPI property table @0x00000000400f0000
Jan 13 20:25:27.941154 kernel: GICv3: CPU0: using allocated LPI pending table @0x0000000040100000
Jan 13 20:25:27.941162 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Jan 13 20:25:27.941169 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Jan 13 20:25:27.941189 kernel: arch_timer: cp15 timer(s) running at 25.00MHz (virt).
Jan 13 20:25:27.941196 kernel: clocksource: arch_sys_counter: mask: 0xffffffffffffff max_cycles: 0x5c40939b5, max_idle_ns: 440795202646 ns
Jan 13 20:25:27.941203 kernel: sched_clock: 56 bits at 25MHz, resolution 40ns, wraps every 4398046511100ns
Jan 13 20:25:27.941209 kernel: arm-pv: using stolen time PV
Jan 13 20:25:27.941217 kernel: Console: colour dummy device 80x25
Jan 13 20:25:27.941224 kernel: ACPI: Core revision 20230628
Jan 13 20:25:27.941232 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 50.00 BogoMIPS (lpj=25000)
Jan 13 20:25:27.941239 kernel: pid_max: default: 32768 minimum: 301
Jan 13 20:25:27.941247 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity
Jan 13 20:25:27.941254 kernel: landlock: Up and running.
Jan 13 20:25:27.941261 kernel: SELinux: Initializing.
Jan 13 20:25:27.941268 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Jan 13 20:25:27.941275 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Jan 13 20:25:27.941282 kernel: RCU Tasks: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Jan 13 20:25:27.941289 kernel: RCU Tasks Trace: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Jan 13 20:25:27.941296 kernel: rcu: Hierarchical SRCU implementation.
Jan 13 20:25:27.941303 kernel: rcu: Max phase no-delay instances is 400.
Jan 13 20:25:27.941311 kernel: Platform MSI: ITS@0x8080000 domain created
Jan 13 20:25:27.941318 kernel: PCI/MSI: ITS@0x8080000 domain created
Jan 13 20:25:27.941325 kernel: Remapping and enabling EFI services.
Jan 13 20:25:27.941332 kernel: smp: Bringing up secondary CPUs ...
Jan 13 20:25:27.941339 kernel: Detected PIPT I-cache on CPU1
Jan 13 20:25:27.941346 kernel: GICv3: CPU1: found redistributor 1 region 0:0x00000000080c0000
Jan 13 20:25:27.941353 kernel: GICv3: CPU1: using allocated LPI pending table @0x0000000040110000
Jan 13 20:25:27.941360 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Jan 13 20:25:27.941367 kernel: CPU1: Booted secondary processor 0x0000000001 [0x413fd0c1]
Jan 13 20:25:27.941374 kernel: Detected PIPT I-cache on CPU2
Jan 13 20:25:27.941382 kernel: GICv3: CPU2: found redistributor 2 region 0:0x00000000080e0000
Jan 13 20:25:27.941390 kernel: GICv3: CPU2: using allocated LPI pending table @0x0000000040120000
Jan 13 20:25:27.941402 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Jan 13 20:25:27.941412 kernel: CPU2: Booted secondary processor 0x0000000002 [0x413fd0c1]
Jan 13 20:25:27.941419 kernel: Detected PIPT I-cache on CPU3
Jan 13 20:25:27.941426 kernel: GICv3: CPU3: found redistributor 3 region 0:0x0000000008100000
Jan 13 20:25:27.941433 kernel: GICv3: CPU3: using allocated LPI pending table @0x0000000040130000
Jan 13 20:25:27.941441 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Jan 13 20:25:27.941448 kernel: CPU3: Booted secondary processor 0x0000000003 [0x413fd0c1]
Jan 13 20:25:27.941456 kernel: smp: Brought up 1 node, 4 CPUs
Jan 13 20:25:27.941464 kernel: SMP: Total of 4 processors activated.
Jan 13 20:25:27.941471 kernel: CPU features: detected: 32-bit EL0 Support
Jan 13 20:25:27.941478 kernel: CPU features: detected: Data cache clean to the PoU not required for I/D coherence
Jan 13 20:25:27.941485 kernel: CPU features: detected: Common not Private translations
Jan 13 20:25:27.941493 kernel: CPU features: detected: CRC32 instructions
Jan 13 20:25:27.941500 kernel: CPU features: detected: Enhanced Virtualization Traps
Jan 13 20:25:27.941507 kernel: CPU features: detected: RCpc load-acquire (LDAPR)
Jan 13 20:25:27.941516 kernel: CPU features: detected: LSE atomic instructions
Jan 13 20:25:27.941523 kernel: CPU features: detected: Privileged Access Never
Jan 13 20:25:27.941530 kernel: CPU features: detected: RAS Extension Support
Jan 13 20:25:27.941538 kernel: CPU features: detected: Speculative Store Bypassing Safe (SSBS)
Jan 13 20:25:27.941545 kernel: CPU: All CPU(s) started at EL1
Jan 13 20:25:27.941552 kernel: alternatives: applying system-wide alternatives
Jan 13 20:25:27.941559 kernel: devtmpfs: initialized
Jan 13 20:25:27.941567 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Jan 13 20:25:27.941585 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear)
Jan 13 20:25:27.941603 kernel: pinctrl core: initialized pinctrl subsystem
Jan 13 20:25:27.941616 kernel: SMBIOS 3.0.0 present.
Jan 13 20:25:27.941630 kernel: DMI: QEMU KVM Virtual Machine, BIOS unknown 02/02/2022
Jan 13 20:25:27.941684 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Jan 13 20:25:27.941693 kernel: DMA: preallocated 512 KiB GFP_KERNEL pool for atomic allocations
Jan 13 20:25:27.941700 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations
Jan 13 20:25:27.941707 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations
Jan 13 20:25:27.941715 kernel: audit: initializing netlink subsys (disabled)
Jan 13 20:25:27.941722 kernel: audit: type=2000 audit(0.024:1): state=initialized audit_enabled=0 res=1
Jan 13 20:25:27.941732 kernel: thermal_sys: Registered thermal governor 'step_wise'
Jan 13 20:25:27.941739 kernel: cpuidle: using governor menu
Jan 13 20:25:27.941747 kernel: hw-breakpoint: found 6 breakpoint and 4 watchpoint registers.
Jan 13 20:25:27.941754 kernel: ASID allocator initialised with 32768 entries
Jan 13 20:25:27.941761 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Jan 13 20:25:27.941768 kernel: Serial: AMBA PL011 UART driver
Jan 13 20:25:27.941776 kernel: Modules: 2G module region forced by RANDOMIZE_MODULE_REGION_FULL
Jan 13 20:25:27.941783 kernel: Modules: 0 pages in range for non-PLT usage
Jan 13 20:25:27.941790 kernel: Modules: 508960 pages in range for PLT usage
Jan 13 20:25:27.941799 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Jan 13 20:25:27.941806 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 1.00 GiB page
Jan 13 20:25:27.941820 kernel: HugeTLB: registered 32.0 MiB page size, pre-allocated 0 pages
Jan 13 20:25:27.941827 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 32.0 MiB page
Jan 13 20:25:27.941834 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Jan 13 20:25:27.941842 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 2.00 MiB page
Jan 13 20:25:27.941849 kernel: HugeTLB: registered 64.0 KiB page size, pre-allocated 0 pages
Jan 13 20:25:27.941856 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 64.0 KiB page
Jan 13 20:25:27.941864 kernel: ACPI: Added _OSI(Module Device)
Jan 13 20:25:27.941872 kernel: ACPI: Added _OSI(Processor Device)
Jan 13 20:25:27.941880 kernel: ACPI: Added _OSI(3.0 _SCP Extensions)
Jan 13 20:25:27.941887 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Jan 13 20:25:27.941895 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Jan 13 20:25:27.941902 kernel: ACPI: Interpreter enabled
Jan 13 20:25:27.941910 kernel: ACPI: Using GIC for interrupt routing
Jan 13 20:25:27.941917 kernel: ACPI: MCFG table detected, 1 entries
Jan 13 20:25:27.941925 kernel: ARMH0011:00: ttyAMA0 at MMIO 0x9000000 (irq = 12, base_baud = 0) is a SBSA
Jan 13 20:25:27.941932 kernel: printk: console [ttyAMA0] enabled
Jan 13 20:25:27.941943 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Jan 13 20:25:27.942086 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Jan 13 20:25:27.942164 kernel: acpi PNP0A08:00: _OSC: platform does not support [LTR]
Jan 13 20:25:27.942231 kernel: acpi PNP0A08:00: _OSC: OS now controls [PCIeHotplug PME AER PCIeCapability]
Jan 13 20:25:27.942295 kernel: acpi PNP0A08:00: ECAM area [mem 0x4010000000-0x401fffffff] reserved by PNP0C02:00
Jan 13 20:25:27.942359 kernel: acpi PNP0A08:00: ECAM at [mem 0x4010000000-0x401fffffff] for [bus 00-ff]
Jan 13 20:25:27.942369 kernel: ACPI: Remapped I/O 0x000000003eff0000 to [io 0x0000-0xffff window]
Jan 13 20:25:27.942379 kernel: PCI host bridge to bus 0000:00
Jan 13 20:25:27.942839 kernel: pci_bus 0000:00: root bus resource [mem 0x10000000-0x3efeffff window]
Jan 13 20:25:27.942919 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0xffff window]
Jan 13 20:25:27.942980 kernel: pci_bus 0000:00: root bus resource [mem 0x8000000000-0xffffffffff window]
Jan 13 20:25:27.943042 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Jan 13 20:25:27.943126 kernel: pci 0000:00:00.0: [1b36:0008] type 00 class 0x060000
Jan 13 20:25:27.943204 kernel: pci 0000:00:01.0: [1af4:1005] type 00 class 0x00ff00
Jan 13 20:25:27.943290 kernel: pci 0000:00:01.0: reg 0x10: [io 0x0000-0x001f]
Jan 13 20:25:27.943359 kernel: pci 0000:00:01.0: reg 0x14: [mem 0x10000000-0x10000fff]
Jan 13 20:25:27.943427 kernel: pci 0000:00:01.0: reg 0x20: [mem 0x8000000000-0x8000003fff 64bit pref]
Jan 13 20:25:27.943494 kernel: pci 0000:00:01.0: BAR 4: assigned [mem 0x8000000000-0x8000003fff 64bit pref]
Jan 13 20:25:27.943572 kernel: pci 0000:00:01.0: BAR 1: assigned [mem 0x10000000-0x10000fff]
Jan 13 20:25:27.943765 kernel: pci 0000:00:01.0: BAR 0: assigned [io 0x1000-0x101f]
Jan 13 20:25:27.943850 kernel: pci_bus 0000:00: resource 4 [mem 0x10000000-0x3efeffff window]
Jan 13 20:25:27.943927 kernel: pci_bus 0000:00: resource 5 [io 0x0000-0xffff window]
Jan 13 20:25:27.943987 kernel: pci_bus 0000:00: resource 6 [mem 0x8000000000-0xffffffffff window]
Jan 13 20:25:27.943997 kernel: ACPI: PCI: Interrupt link GSI0 configured for IRQ 35
Jan 13 20:25:27.944007 kernel: ACPI: PCI: Interrupt link GSI1 configured for IRQ 36
Jan 13 20:25:27.944018 kernel: ACPI: PCI: Interrupt link GSI2 configured for IRQ 37
Jan 13 20:25:27.944027 kernel: ACPI: PCI: Interrupt link GSI3 configured for IRQ 38
Jan 13 20:25:27.944035 kernel: iommu: Default domain type: Translated
Jan 13 20:25:27.944043 kernel: iommu: DMA domain TLB invalidation policy: strict mode
Jan 13 20:25:27.944052 kernel: efivars: Registered efivars operations
Jan 13 20:25:27.944060 kernel: vgaarb: loaded
Jan 13 20:25:27.944067 kernel: clocksource: Switched to clocksource arch_sys_counter
Jan 13 20:25:27.944075 kernel: VFS: Disk quotas dquot_6.6.0
Jan 13 20:25:27.944083 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Jan 13 20:25:27.944090 kernel: pnp: PnP ACPI init
Jan 13 20:25:27.944185 kernel: system 00:00: [mem 0x4010000000-0x401fffffff window] could not be reserved
Jan 13 20:25:27.944198 kernel: pnp: PnP ACPI: found 1 devices
Jan 13 20:25:27.944207 kernel: NET: Registered PF_INET protocol family
Jan 13 20:25:27.944216 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Jan 13 20:25:27.944223 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Jan 13 20:25:27.944231 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Jan 13 20:25:27.944238 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Jan 13 20:25:27.944246 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
Jan 13 20:25:27.944254 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Jan 13 20:25:27.944261 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Jan 13 20:25:27.944269 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Jan 13 20:25:27.944278 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Jan 13 20:25:27.944286 kernel: PCI: CLS 0 bytes, default 64
Jan 13 20:25:27.944293 kernel: kvm [1]: HYP mode not available
Jan 13 20:25:27.944301 kernel: Initialise system trusted keyrings
Jan 13 20:25:27.944309 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Jan 13 20:25:27.944317 kernel: Key type asymmetric registered
Jan 13 20:25:27.944324 kernel: Asymmetric key parser 'x509' registered
Jan 13 20:25:27.944331 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250)
Jan 13 20:25:27.944339 kernel: io scheduler mq-deadline registered
Jan 13 20:25:27.944347 kernel: io scheduler kyber registered
Jan 13 20:25:27.944355 kernel: io scheduler bfq registered
Jan 13 20:25:27.944362 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0C:00/input/input0
Jan 13 20:25:27.944370 kernel: ACPI: button: Power Button [PWRB]
Jan 13 20:25:27.944378 kernel: ACPI: \_SB_.PCI0.GSI1: Enabled at IRQ 36
Jan 13 20:25:27.944451 kernel: virtio-pci 0000:00:01.0: enabling device (0005 -> 0007)
Jan 13 20:25:27.944461 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Jan 13 20:25:27.944468 kernel: thunder_xcv, ver 1.0
Jan 13 20:25:27.944475 kernel: thunder_bgx, ver 1.0
Jan 13 20:25:27.944485 kernel: nicpf, ver 1.0
Jan 13 20:25:27.944492 kernel: nicvf, ver 1.0
Jan 13 20:25:27.944569 kernel: rtc-efi rtc-efi.0: registered as rtc0
Jan 13 20:25:27.944642 kernel: rtc-efi rtc-efi.0: setting system clock to 2025-01-13T20:25:27 UTC (1736799927)
Jan 13 20:25:27.944672 kernel: hid: raw HID events driver (C) Jiri Kosina
Jan 13 20:25:27.944680 kernel: hw perfevents: enabled with armv8_pmuv3_0 PMU driver, 7 counters available
Jan 13 20:25:27.944687 kernel: watchdog: Delayed init of the lockup detector failed: -19
Jan 13 20:25:27.944708 kernel: watchdog: Hard watchdog permanently disabled
Jan 13 20:25:27.944718 kernel: NET: Registered PF_INET6 protocol family
Jan 13 20:25:27.944725 kernel: Segment Routing with IPv6
Jan 13 20:25:27.944733 kernel: In-situ OAM (IOAM) with IPv6
Jan 13 20:25:27.944741 kernel: NET: Registered PF_PACKET protocol family
Jan 13 20:25:27.944748 kernel: Key type dns_resolver registered
Jan 13 20:25:27.944755 kernel: registered taskstats version 1
Jan 13 20:25:27.944763 kernel: Loading compiled-in X.509 certificates
Jan 13 20:25:27.944771 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.71-flatcar: a9edf9d44b1b82dedf7830d1843430df7c4d16cb'
Jan 13 20:25:27.944779 kernel: Key type .fscrypt registered
Jan 13 20:25:27.944787 kernel: Key type fscrypt-provisioning registered
Jan 13 20:25:27.944795 kernel: ima: No TPM chip found, activating TPM-bypass!
Jan 13 20:25:27.944802 kernel: ima: Allocated hash algorithm: sha1
Jan 13 20:25:27.944809 kernel: ima: No architecture policies found
Jan 13 20:25:27.944817 kernel: alg: No test for fips(ansi_cprng) (fips_ansi_cprng)
Jan 13 20:25:27.944824 kernel: clk: Disabling unused clocks
Jan 13 20:25:27.944832 kernel: Freeing unused kernel memory: 39680K
Jan 13 20:25:27.944839 kernel: Run /init as init process
Jan 13 20:25:27.944846 kernel: with arguments:
Jan 13 20:25:27.944855 kernel: /init
Jan 13 20:25:27.944862 kernel: with environment:
Jan 13 20:25:27.944869 kernel: HOME=/
Jan 13 20:25:27.944877 kernel: TERM=linux
Jan 13 20:25:27.944884 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
Jan 13 20:25:27.944893 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Jan 13 20:25:27.944903 systemd[1]: Detected virtualization kvm.
Jan 13 20:25:27.944911 systemd[1]: Detected architecture arm64.
Jan 13 20:25:27.944920 systemd[1]: Running in initrd.
Jan 13 20:25:27.944928 systemd[1]: No hostname configured, using default hostname.
Jan 13 20:25:27.944935 systemd[1]: Hostname set to <localhost>.
Jan 13 20:25:27.944955 systemd[1]: Initializing machine ID from VM UUID.
Jan 13 20:25:27.944962 systemd[1]: Queued start job for default target initrd.target.
Jan 13 20:25:27.944970 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Jan 13 20:25:27.944978 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Jan 13 20:25:27.944986 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Jan 13 20:25:27.944996 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Jan 13 20:25:27.945003 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Jan 13 20:25:27.945012 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Jan 13 20:25:27.945021 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Jan 13 20:25:27.945029 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Jan 13 20:25:27.945037 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Jan 13 20:25:27.945045 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Jan 13 20:25:27.945054 systemd[1]: Reached target paths.target - Path Units.
Jan 13 20:25:27.945062 systemd[1]: Reached target slices.target - Slice Units.
Jan 13 20:25:27.945204 systemd[1]: Reached target swap.target - Swaps.
Jan 13 20:25:27.945214 systemd[1]: Reached target timers.target - Timer Units.
Jan 13 20:25:27.945234 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Jan 13 20:25:27.945242 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Jan 13 20:25:27.945250 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Jan 13 20:25:27.945258 systemd[1]: Listening on systemd-journald.socket - Journal Socket.
Jan 13 20:25:27.945270 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Jan 13 20:25:27.945278 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Jan 13 20:25:27.945286 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Jan 13 20:25:27.945295 systemd[1]: Reached target sockets.target - Socket Units.
Jan 13 20:25:27.945302 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Jan 13 20:25:27.945310 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Jan 13 20:25:27.945318 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Jan 13 20:25:27.945326 systemd[1]: Starting systemd-fsck-usr.service...
Jan 13 20:25:27.945334 systemd[1]: Starting systemd-journald.service - Journal Service...
Jan 13 20:25:27.945344 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Jan 13 20:25:27.945352 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jan 13 20:25:27.945360 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Jan 13 20:25:27.945367 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Jan 13 20:25:27.945375 systemd[1]: Finished systemd-fsck-usr.service.
Jan 13 20:25:27.945384 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Jan 13 20:25:27.945394 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Jan 13 20:25:27.945402 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Jan 13 20:25:27.945435 systemd-journald[238]: Collecting audit messages is disabled.
Jan 13 20:25:27.945458 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Jan 13 20:25:27.945467 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Jan 13 20:25:27.945475 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Jan 13 20:25:27.945483 systemd-journald[238]: Journal started
Jan 13 20:25:27.945502 systemd-journald[238]: Runtime Journal (/run/log/journal/45c08ad1958948c5a36791ca1f35cd49) is 5.9M, max 47.3M, 41.4M free.
Jan 13 20:25:27.928614 systemd-modules-load[239]: Inserted module 'overlay'
Jan 13 20:25:27.948896 systemd[1]: Started systemd-journald.service - Journal Service.
Jan 13 20:25:27.949605 systemd-modules-load[239]: Inserted module 'br_netfilter'
Jan 13 20:25:27.950613 kernel: Bridge firewalling registered
Jan 13 20:25:27.950892 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Jan 13 20:25:27.953588 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Jan 13 20:25:27.955015 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Jan 13 20:25:27.956411 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Jan 13 20:25:27.966944 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Jan 13 20:25:27.968457 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Jan 13 20:25:27.978828 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Jan 13 20:25:27.979805 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jan 13 20:25:27.982176 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Jan 13 20:25:27.999475 dracut-cmdline[278]: dracut-dracut-053
Jan 13 20:25:28.005018 dracut-cmdline[278]: Using kernel command line parameters: rd.driver.pre=btrfs BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected acpi=force verity.usrhash=6ba5f90349644346e4f5fa9305ab5a05339928ee9f4f137665e797727c1fc436
Jan 13 20:25:28.014915 systemd-resolved[273]: Positive Trust Anchors:
Jan 13 20:25:28.014990 systemd-resolved[273]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Jan 13 20:25:28.015020 systemd-resolved[273]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Jan 13 20:25:28.020537 systemd-resolved[273]: Defaulting to hostname 'linux'.
Jan 13 20:25:28.025774 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Jan 13 20:25:28.026615 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Jan 13 20:25:28.082703 kernel: SCSI subsystem initialized
Jan 13 20:25:28.087678 kernel: Loading iSCSI transport class v2.0-870.
Jan 13 20:25:28.094698 kernel: iscsi: registered transport (tcp)
Jan 13 20:25:28.108699 kernel: iscsi: registered transport (qla4xxx)
Jan 13 20:25:28.108735 kernel: QLogic iSCSI HBA Driver
Jan 13 20:25:28.159751 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Jan 13 20:25:28.175952 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Jan 13 20:25:28.198024 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Jan 13 20:25:28.198097 kernel: device-mapper: uevent: version 1.0.3
Jan 13 20:25:28.199055 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com
Jan 13 20:25:28.252687 kernel: raid6: neonx8 gen() 12099 MB/s
Jan 13 20:25:28.269668 kernel: raid6: neonx4 gen() 15578 MB/s
Jan 13 20:25:28.286665 kernel: raid6: neonx2 gen() 13230 MB/s
Jan 13 20:25:28.303666 kernel: raid6: neonx1 gen() 10463 MB/s
Jan 13 20:25:28.320666 kernel: raid6: int64x8 gen() 6949 MB/s
Jan 13 20:25:28.337666 kernel: raid6: int64x4 gen() 7347 MB/s
Jan 13 20:25:28.354666 kernel: raid6: int64x2 gen() 6123 MB/s
Jan 13 20:25:28.371667 kernel: raid6: int64x1 gen() 5055 MB/s
Jan 13 20:25:28.371690 kernel: raid6: using algorithm neonx4 gen() 15578 MB/s
Jan 13 20:25:28.388673 kernel: raid6: .... xor() 12362 MB/s, rmw enabled
Jan 13 20:25:28.388689 kernel: raid6: using neon recovery algorithm
Jan 13 20:25:28.393670 kernel: xor: measuring software checksum speed
Jan 13 20:25:28.393684 kernel: 8regs : 19769 MB/sec
Jan 13 20:25:28.395118 kernel: 32regs : 17873 MB/sec
Jan 13 20:25:28.395136 kernel: arm64_neon : 26989 MB/sec
Jan 13 20:25:28.395169 kernel: xor: using function: arm64_neon (26989 MB/sec)
Jan 13 20:25:28.444681 kernel: Btrfs loaded, zoned=no, fsverity=no
Jan 13 20:25:28.454851 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Jan 13 20:25:28.464808 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Jan 13 20:25:28.475698 systemd-udevd[461]: Using default interface naming scheme 'v255'.
Jan 13 20:25:28.478922 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Jan 13 20:25:28.482036 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Jan 13 20:25:28.495867 dracut-pre-trigger[465]: rd.md=0: removing MD RAID activation
Jan 13 20:25:28.521342 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Jan 13 20:25:28.528890 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Jan 13 20:25:28.570703 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Jan 13 20:25:28.576383 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Jan 13 20:25:28.589225 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Jan 13 20:25:28.591027 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Jan 13 20:25:28.592404 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Jan 13 20:25:28.594369 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Jan 13 20:25:28.603894 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Jan 13 20:25:28.611671 kernel: virtio_blk virtio1: 1/0/0 default/read/poll queues
Jan 13 20:25:28.625904 kernel: virtio_blk virtio1: [vda] 19775488 512-byte logical blocks (10.1 GB/9.43 GiB)
Jan 13 20:25:28.626005 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Jan 13 20:25:28.626022 kernel: GPT:9289727 != 19775487
Jan 13 20:25:28.626032 kernel: GPT:Alternate GPT header not at the end of the disk.
Jan 13 20:25:28.626041 kernel: GPT:9289727 != 19775487
Jan 13 20:25:28.626052 kernel: GPT: Use GNU Parted to correct GPT errors.
Jan 13 20:25:28.626061 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Jan 13 20:25:28.613698 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Jan 13 20:25:28.625256 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Jan 13 20:25:28.625370 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jan 13 20:25:28.627844 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Jan 13 20:25:28.632938 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Jan 13 20:25:28.633078 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Jan 13 20:25:28.634520 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Jan 13 20:25:28.644894 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jan 13 20:25:28.653673 kernel: BTRFS: device fsid 8e09fced-e016-4c4f-bac5-4013d13dfd78 devid 1 transid 38 /dev/vda3 scanned by (udev-worker) (517)
Jan 13 20:25:28.655330 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Jan 13 20:25:28.661668 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/vda6 scanned by (udev-worker) (525)
Jan 13 20:25:28.666978 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT.
Jan 13 20:25:28.671168 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM.
Jan 13 20:25:28.674865 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A.
Jan 13 20:25:28.675888 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132.
Jan 13 20:25:28.680908 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Jan 13 20:25:28.691804 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Jan 13 20:25:28.693822 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Jan 13 20:25:28.699162 disk-uuid[551]: Primary Header is updated.
Jan 13 20:25:28.699162 disk-uuid[551]: Secondary Entries is updated.
Jan 13 20:25:28.699162 disk-uuid[551]: Secondary Header is updated.
Jan 13 20:25:28.702669 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Jan 13 20:25:28.717316 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jan 13 20:25:29.712677 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Jan 13 20:25:29.713136 disk-uuid[552]: The operation has completed successfully.
Jan 13 20:25:29.745396 systemd[1]: disk-uuid.service: Deactivated successfully.
Jan 13 20:25:29.745505 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Jan 13 20:25:29.756837 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
Jan 13 20:25:29.763566 sh[573]: Success
Jan 13 20:25:29.784681 kernel: device-mapper: verity: sha256 using implementation "sha256-ce"
Jan 13 20:25:29.815151 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
Jan 13 20:25:29.832134 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
Jan 13 20:25:29.833721 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
Jan 13 20:25:29.843289 kernel: BTRFS info (device dm-0): first mount of filesystem 8e09fced-e016-4c4f-bac5-4013d13dfd78
Jan 13 20:25:29.843338 kernel: BTRFS info (device dm-0): using crc32c (crc32c-generic) checksum algorithm
Jan 13 20:25:29.844201 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead
Jan 13 20:25:29.844219 kernel: BTRFS info (device dm-0): disabling log replay at mount time
Jan 13 20:25:29.844756 kernel: BTRFS info (device dm-0): using free space tree
Jan 13 20:25:29.848864 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
Jan 13 20:25:29.849996 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Jan 13 20:25:29.861817 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Jan 13 20:25:29.863163 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Jan 13 20:25:29.870918 kernel: BTRFS info (device vda6): first mount of filesystem cd0b9c1b-856d-4823-9d4d-1660845d57c6
Jan 13 20:25:29.870974 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm
Jan 13 20:25:29.870986 kernel: BTRFS info (device vda6): using free space tree
Jan 13 20:25:29.872698 kernel: BTRFS info (device vda6): auto enabling async discard
Jan 13 20:25:29.879935 systemd[1]: mnt-oem.mount: Deactivated successfully.
Jan 13 20:25:29.881633 kernel: BTRFS info (device vda6): last unmount of filesystem cd0b9c1b-856d-4823-9d4d-1660845d57c6
Jan 13 20:25:29.887538 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Jan 13 20:25:29.893864 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Jan 13 20:25:29.955698 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Jan 13 20:25:29.965809 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Jan 13 20:25:29.990964 systemd-networkd[763]: lo: Link UP
Jan 13 20:25:29.990972 systemd-networkd[763]: lo: Gained carrier
Jan 13 20:25:29.993609 systemd-networkd[763]: Enumeration completed
Jan 13 20:25:29.993920 systemd[1]: Started systemd-networkd.service - Network Configuration.
Jan 13 20:25:29.994120 systemd-networkd[763]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jan 13 20:25:29.994123 systemd-networkd[763]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Jan 13 20:25:29.995131 systemd[1]: Reached target network.target - Network.
Jan 13 20:25:29.995558 systemd-networkd[763]: eth0: Link UP
Jan 13 20:25:29.995561 systemd-networkd[763]: eth0: Gained carrier
Jan 13 20:25:29.995568 systemd-networkd[763]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jan 13 20:25:30.007043 ignition[667]: Ignition 2.20.0
Jan 13 20:25:30.007053 ignition[667]: Stage: fetch-offline
Jan 13 20:25:30.007088 ignition[667]: no configs at "/usr/lib/ignition/base.d"
Jan 13 20:25:30.007097 ignition[667]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Jan 13 20:25:30.007256 ignition[667]: parsed url from cmdline: ""
Jan 13 20:25:30.007259 ignition[667]: no config URL provided
Jan 13 20:25:30.007264 ignition[667]: reading system config file "/usr/lib/ignition/user.ign"
Jan 13 20:25:30.007270 ignition[667]: no config at "/usr/lib/ignition/user.ign"
Jan 13 20:25:30.007296 ignition[667]: op(1): [started] loading QEMU firmware config module
Jan 13 20:25:30.007301 ignition[667]: op(1): executing: "modprobe" "qemu_fw_cfg"
Jan 13 20:25:30.012313 ignition[667]: op(1): [finished] loading QEMU firmware config module
Jan 13 20:25:30.014753 systemd-networkd[763]: eth0: DHCPv4 address 10.0.0.128/16, gateway 10.0.0.1 acquired from 10.0.0.1
Jan 13 20:25:30.033043 ignition[667]: parsing config with SHA512: f044468de996d29fc149e4b3230246b819f2f9bacbf3988d5966ccc4618ad982a1d970bf7e523362dcd29d212ea38755ccd63bca9f16c00a76d04edac695b802
Jan 13 20:25:30.037347 unknown[667]: fetched base config from "system"
Jan 13 20:25:30.037357 unknown[667]: fetched user config from "qemu"
Jan 13 20:25:30.037972 ignition[667]: fetch-offline: fetch-offline passed
Jan 13 20:25:30.038050 ignition[667]: Ignition finished successfully
Jan 13 20:25:30.039291 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Jan 13 20:25:30.040718 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json).
Jan 13 20:25:30.051813 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Jan 13 20:25:30.061725 ignition[771]: Ignition 2.20.0
Jan 13 20:25:30.061736 ignition[771]: Stage: kargs
Jan 13 20:25:30.061892 ignition[771]: no configs at "/usr/lib/ignition/base.d"
Jan 13 20:25:30.061900 ignition[771]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Jan 13 20:25:30.062733 ignition[771]: kargs: kargs passed
Jan 13 20:25:30.062775 ignition[771]: Ignition finished successfully
Jan 13 20:25:30.064624 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Jan 13 20:25:30.074925 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Jan 13 20:25:30.083682 ignition[780]: Ignition 2.20.0
Jan 13 20:25:30.083691 ignition[780]: Stage: disks
Jan 13 20:25:30.083857 ignition[780]: no configs at "/usr/lib/ignition/base.d"
Jan 13 20:25:30.083868 ignition[780]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Jan 13 20:25:30.085927 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Jan 13 20:25:30.084725 ignition[780]: disks: disks passed
Jan 13 20:25:30.086950 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Jan 13 20:25:30.084767 ignition[780]: Ignition finished successfully
Jan 13 20:25:30.088789 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Jan 13 20:25:30.089592 systemd[1]: Reached target local-fs.target - Local File Systems.
Jan 13 20:25:30.090303 systemd[1]: Reached target sysinit.target - System Initialization.
Jan 13 20:25:30.091354 systemd[1]: Reached target basic.target - Basic System.
Jan 13 20:25:30.093506 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Jan 13 20:25:30.105806 systemd-fsck[791]: ROOT: clean, 14/553520 files, 52654/553472 blocks
Jan 13 20:25:30.109446 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Jan 13 20:25:30.118816 systemd[1]: Mounting sysroot.mount - /sysroot...
Jan 13 20:25:30.157466 systemd[1]: Mounted sysroot.mount - /sysroot.
Jan 13 20:25:30.158647 kernel: EXT4-fs (vda9): mounted filesystem 8fd847fb-a6be-44f6-9adf-0a0a79b9fa94 r/w with ordered data mode. Quota mode: none.
Jan 13 20:25:30.158567 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Jan 13 20:25:30.168740 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Jan 13 20:25:30.170257 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Jan 13 20:25:30.171478 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met.
Jan 13 20:25:30.171515 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Jan 13 20:25:30.178870 kernel: BTRFS: device label OEM devid 1 transid 15 /dev/vda6 scanned by mount (799)
Jan 13 20:25:30.178893 kernel: BTRFS info (device vda6): first mount of filesystem cd0b9c1b-856d-4823-9d4d-1660845d57c6
Jan 13 20:25:30.178903 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm
Jan 13 20:25:30.178919 kernel: BTRFS info (device vda6): using free space tree
Jan 13 20:25:30.178929 kernel: BTRFS info (device vda6): auto enabling async discard
Jan 13 20:25:30.171536 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Jan 13 20:25:30.177768 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Jan 13 20:25:30.180689 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Jan 13 20:25:30.182211 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Jan 13 20:25:30.224751 initrd-setup-root[823]: cut: /sysroot/etc/passwd: No such file or directory
Jan 13 20:25:30.228633 initrd-setup-root[830]: cut: /sysroot/etc/group: No such file or directory
Jan 13 20:25:30.232453 initrd-setup-root[837]: cut: /sysroot/etc/shadow: No such file or directory
Jan 13 20:25:30.235112 initrd-setup-root[844]: cut: /sysroot/etc/gshadow: No such file or directory
Jan 13 20:25:30.298193 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Jan 13 20:25:30.312775 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Jan 13 20:25:30.314156 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Jan 13 20:25:30.318683 kernel: BTRFS info (device vda6): last unmount of filesystem cd0b9c1b-856d-4823-9d4d-1660845d57c6
Jan 13 20:25:30.331441 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Jan 13 20:25:30.335272 ignition[912]: INFO : Ignition 2.20.0
Jan 13 20:25:30.335272 ignition[912]: INFO : Stage: mount
Jan 13 20:25:30.336442 ignition[912]: INFO : no configs at "/usr/lib/ignition/base.d"
Jan 13 20:25:30.336442 ignition[912]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Jan 13 20:25:30.336442 ignition[912]: INFO : mount: mount passed
Jan 13 20:25:30.336442 ignition[912]: INFO : Ignition finished successfully
Jan 13 20:25:30.337610 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Jan 13 20:25:30.343737 systemd[1]: Starting ignition-files.service - Ignition (files)...
Jan 13 20:25:30.842734 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Jan 13 20:25:30.854922 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Jan 13 20:25:30.860990 kernel: BTRFS: device label OEM devid 1 transid 16 /dev/vda6 scanned by mount (925)
Jan 13 20:25:30.861019 kernel: BTRFS info (device vda6): first mount of filesystem cd0b9c1b-856d-4823-9d4d-1660845d57c6
Jan 13 20:25:30.861030 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm
Jan 13 20:25:30.861703 kernel: BTRFS info (device vda6): using free space tree
Jan 13 20:25:30.864678 kernel: BTRFS info (device vda6): auto enabling async discard
Jan 13 20:25:30.865334 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Jan 13 20:25:30.880984 ignition[942]: INFO : Ignition 2.20.0
Jan 13 20:25:30.880984 ignition[942]: INFO : Stage: files
Jan 13 20:25:30.882174 ignition[942]: INFO : no configs at "/usr/lib/ignition/base.d"
Jan 13 20:25:30.882174 ignition[942]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Jan 13 20:25:30.882174 ignition[942]: DEBUG : files: compiled without relabeling support, skipping
Jan 13 20:25:30.884577 ignition[942]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Jan 13 20:25:30.884577 ignition[942]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Jan 13 20:25:30.884577 ignition[942]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Jan 13 20:25:30.884577 ignition[942]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Jan 13 20:25:30.888486 ignition[942]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Jan 13 20:25:30.888486 ignition[942]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-arm64.tar.gz"
Jan 13 20:25:30.888486 ignition[942]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.13.2-linux-arm64.tar.gz: attempt #1
Jan 13 20:25:30.884766 unknown[942]: wrote ssh authorized keys file for user: core
Jan 13 20:25:31.220499 ignition[942]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
Jan 13 20:25:31.444363 ignition[942]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-arm64.tar.gz"
Jan 13 20:25:31.446000 ignition[942]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh"
Jan 13 20:25:31.446000 ignition[942]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh"
Jan 13 20:25:31.446000 ignition[942]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml"
Jan 13 20:25:31.446000 ignition[942]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml"
Jan 13 20:25:31.446000 ignition[942]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Jan 13 20:25:31.446000 ignition[942]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Jan 13 20:25:31.446000 ignition[942]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Jan 13 20:25:31.446000 ignition[942]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Jan 13 20:25:31.446000 ignition[942]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf"
Jan 13 20:25:31.446000 ignition[942]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Jan 13 20:25:31.446000 ignition[942]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.0-arm64.raw"
Jan 13 20:25:31.446000 ignition[942]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.0-arm64.raw"
"/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.0-arm64.raw" Jan 13 20:25:31.446000 ignition[942]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.0-arm64.raw" Jan 13 20:25:31.446000 ignition[942]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.31.0-arm64.raw: attempt #1 Jan 13 20:25:31.602798 systemd-networkd[763]: eth0: Gained IPv6LL Jan 13 20:25:31.704143 ignition[942]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK Jan 13 20:25:31.960073 ignition[942]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.0-arm64.raw" Jan 13 20:25:31.960073 ignition[942]: INFO : files: op(b): [started] processing unit "prepare-helm.service" Jan 13 20:25:31.962794 ignition[942]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Jan 13 20:25:31.962794 ignition[942]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Jan 13 20:25:31.962794 ignition[942]: INFO : files: op(b): [finished] processing unit "prepare-helm.service" Jan 13 20:25:31.962794 ignition[942]: INFO : files: op(d): [started] processing unit "coreos-metadata.service" Jan 13 20:25:31.962794 ignition[942]: INFO : files: op(d): op(e): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Jan 13 20:25:31.962794 ignition[942]: INFO : files: op(d): op(e): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Jan 13 20:25:31.962794 ignition[942]: INFO : files: op(d): [finished] processing unit "coreos-metadata.service" Jan 13 20:25:31.962794 ignition[942]: INFO : files: op(f): [started] setting preset to disabled for "coreos-metadata.service" Jan 13 20:25:31.984051 ignition[942]: INFO : files: op(f): op(10): [started] removing enablement symlink(s) for "coreos-metadata.service" Jan 13 20:25:31.988536 ignition[942]: INFO : files: op(f): op(10): [finished] removing enablement symlink(s) for "coreos-metadata.service" Jan 13 20:25:31.990669 ignition[942]: INFO : files: op(f): [finished] setting preset to disabled for "coreos-metadata.service" Jan 13 20:25:31.990669 ignition[942]: INFO : files: op(11): [started] setting preset to enabled for "prepare-helm.service" Jan 13 20:25:31.990669 ignition[942]: INFO : files: op(11): [finished] setting preset to enabled for "prepare-helm.service" Jan 13 20:25:31.990669 ignition[942]: INFO : files: createResultFile: createFiles: op(12): [started] writing file "/sysroot/etc/.ignition-result.json" Jan 13 20:25:31.990669 ignition[942]: INFO : files: createResultFile: createFiles: op(12): [finished] writing file "/sysroot/etc/.ignition-result.json" Jan 13 20:25:31.990669 ignition[942]: INFO : files: files passed Jan 13 20:25:31.990669 ignition[942]: INFO : Ignition finished successfully Jan 13 20:25:31.991290 systemd[1]: Finished ignition-files.service - Ignition (files). Jan 13 20:25:32.002849 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Jan 13 20:25:32.005829 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... 
Jan 13 20:25:32.010730 systemd[1]: ignition-quench.service: Deactivated successfully.
Jan 13 20:25:32.010857 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Jan 13 20:25:32.013979 initrd-setup-root-after-ignition[971]: grep: /sysroot/oem/oem-release: No such file or directory
Jan 13 20:25:32.017271 initrd-setup-root-after-ignition[973]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Jan 13 20:25:32.017271 initrd-setup-root-after-ignition[973]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Jan 13 20:25:32.019615 initrd-setup-root-after-ignition[977]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Jan 13 20:25:32.019816 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Jan 13 20:25:32.022146 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Jan 13 20:25:32.033839 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Jan 13 20:25:32.056207 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Jan 13 20:25:32.056321 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Jan 13 20:25:32.058066 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Jan 13 20:25:32.059380 systemd[1]: Reached target initrd.target - Initrd Default Target.
Jan 13 20:25:32.060707 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Jan 13 20:25:32.061552 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Jan 13 20:25:32.082469 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Jan 13 20:25:32.096890 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Jan 13 20:25:32.106872 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Jan 13 20:25:32.107835 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Jan 13 20:25:32.109375 systemd[1]: Stopped target timers.target - Timer Units.
Jan 13 20:25:32.110716 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Jan 13 20:25:32.110847 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Jan 13 20:25:32.112772 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Jan 13 20:25:32.114244 systemd[1]: Stopped target basic.target - Basic System.
Jan 13 20:25:32.115436 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Jan 13 20:25:32.116711 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Jan 13 20:25:32.118231 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Jan 13 20:25:32.119623 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Jan 13 20:25:32.121080 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
Jan 13 20:25:32.122562 systemd[1]: Stopped target sysinit.target - System Initialization.
Jan 13 20:25:32.124040 systemd[1]: Stopped target local-fs.target - Local File Systems.
Jan 13 20:25:32.125362 systemd[1]: Stopped target swap.target - Swaps.
Jan 13 20:25:32.126522 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Jan 13 20:25:32.126674 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
Jan 13 20:25:32.128424 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
Jan 13 20:25:32.129941 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Jan 13 20:25:32.131498 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
Jan 13 20:25:32.131599 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Jan 13 20:25:32.133194 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Jan 13 20:25:32.133313 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
Jan 13 20:25:32.135481 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Jan 13 20:25:32.135600 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
Jan 13 20:25:32.137020 systemd[1]: Stopped target paths.target - Path Units.
Jan 13 20:25:32.138207 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Jan 13 20:25:32.138309 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Jan 13 20:25:32.139752 systemd[1]: Stopped target slices.target - Slice Units.
Jan 13 20:25:32.141217 systemd[1]: Stopped target sockets.target - Socket Units.
Jan 13 20:25:32.142450 systemd[1]: iscsid.socket: Deactivated successfully.
Jan 13 20:25:32.142549 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
Jan 13 20:25:32.143872 systemd[1]: iscsiuio.socket: Deactivated successfully.
Jan 13 20:25:32.143955 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Jan 13 20:25:32.145569 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Jan 13 20:25:32.145695 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
Jan 13 20:25:32.147007 systemd[1]: ignition-files.service: Deactivated successfully.
Jan 13 20:25:32.147110 systemd[1]: Stopped ignition-files.service - Ignition (files).
Jan 13 20:25:32.163842 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
Jan 13 20:25:32.164532 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Jan 13 20:25:32.164694 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
Jan 13 20:25:32.167775 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
Jan 13 20:25:32.168433 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Jan 13 20:25:32.168567 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
Jan 13 20:25:32.170149 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Jan 13 20:25:32.170313 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
Jan 13 20:25:32.175995 ignition[997]: INFO : Ignition 2.20.0
Jan 13 20:25:32.175995 ignition[997]: INFO : Stage: umount
Jan 13 20:25:32.178094 ignition[997]: INFO : no configs at "/usr/lib/ignition/base.d"
Jan 13 20:25:32.178094 ignition[997]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Jan 13 20:25:32.178094 ignition[997]: INFO : umount: umount passed
Jan 13 20:25:32.178094 ignition[997]: INFO : Ignition finished successfully
Jan 13 20:25:32.178190 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Jan 13 20:25:32.178290 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
Jan 13 20:25:32.182706 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Jan 13 20:25:32.183405 systemd[1]: ignition-mount.service: Deactivated successfully.
Jan 13 20:25:32.183507 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
Jan 13 20:25:32.185385 systemd[1]: Stopped target network.target - Network.
Jan 13 20:25:32.186356 systemd[1]: ignition-disks.service: Deactivated successfully.
Jan 13 20:25:32.186417 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
Jan 13 20:25:32.187216 systemd[1]: ignition-kargs.service: Deactivated successfully.
Jan 13 20:25:32.187253 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
Jan 13 20:25:32.188793 systemd[1]: ignition-setup.service: Deactivated successfully.
Jan 13 20:25:32.188841 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
Jan 13 20:25:32.190021 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
Jan 13 20:25:32.190061 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
Jan 13 20:25:32.191590 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
Jan 13 20:25:32.192851 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
Jan 13 20:25:32.200687 systemd-networkd[763]: eth0: DHCPv6 lease lost
Jan 13 20:25:32.201521 systemd[1]: systemd-resolved.service: Deactivated successfully.
Jan 13 20:25:32.201711 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
Jan 13 20:25:32.203759 systemd[1]: systemd-networkd.service: Deactivated successfully.
Jan 13 20:25:32.203934 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
Jan 13 20:25:32.206124 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Jan 13 20:25:32.206173 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
Jan 13 20:25:32.222815 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
Jan 13 20:25:32.223556 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Jan 13 20:25:32.223617 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Jan 13 20:25:32.225338 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Jan 13 20:25:32.225382 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Jan 13 20:25:32.226648 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Jan 13 20:25:32.226701 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
Jan 13 20:25:32.228545 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Jan 13 20:25:32.228592 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
Jan 13 20:25:32.230170 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
Jan 13 20:25:32.238083 systemd[1]: sysroot-boot.service: Deactivated successfully.
Jan 13 20:25:32.238201 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
Jan 13 20:25:32.239587 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Jan 13 20:25:32.239627 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
Jan 13 20:25:32.253902 systemd[1]: systemd-udevd.service: Deactivated successfully.
Jan 13 20:25:32.254079 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
Jan 13 20:25:32.255932 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Jan 13 20:25:32.255971 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
Jan 13 20:25:32.257227 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Jan 13 20:25:32.257260 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
Jan 13 20:25:32.258660 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Jan 13 20:25:32.258710 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
Jan 13 20:25:32.261137 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Jan 13 20:25:32.261183 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
Jan 13 20:25:32.263156 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Jan 13 20:25:32.263201 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jan 13 20:25:32.279853 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
Jan 13 20:25:32.280621 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Jan 13 20:25:32.280697 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Jan 13 20:25:32.282345 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Jan 13 20:25:32.282389 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Jan 13 20:25:32.284587 systemd[1]: network-cleanup.service: Deactivated successfully.
Jan 13 20:25:32.284719 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
Jan 13 20:25:32.287421 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Jan 13 20:25:32.287518 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
Jan 13 20:25:32.289330 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
Jan 13 20:25:32.291678 systemd[1]: Starting initrd-switch-root.service - Switch Root...
Jan 13 20:25:32.301234 systemd[1]: Switching root.
Jan 13 20:25:32.332618 systemd-journald[238]: Journal stopped
Jan 13 20:25:32.987717 systemd-journald[238]: Received SIGTERM from PID 1 (systemd).
Jan 13 20:25:32.987768 kernel: SELinux: policy capability network_peer_controls=1
Jan 13 20:25:32.987780 kernel: SELinux: policy capability open_perms=1
Jan 13 20:25:32.987789 kernel: SELinux: policy capability extended_socket_class=1
Jan 13 20:25:32.987806 kernel: SELinux: policy capability always_check_network=0
Jan 13 20:25:32.987820 kernel: SELinux: policy capability cgroup_seclabel=1
Jan 13 20:25:32.987830 kernel: SELinux: policy capability nnp_nosuid_transition=1
Jan 13 20:25:32.987839 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
Jan 13 20:25:32.987849 kernel: SELinux: policy capability ioctl_skip_cloexec=0
Jan 13 20:25:32.987858 kernel: audit: type=1403 audit(1736799932.465:2): auid=4294967295 ses=4294967295 lsm=selinux res=1
Jan 13 20:25:32.987869 systemd[1]: Successfully loaded SELinux policy in 31.412ms.
Jan 13 20:25:32.987885 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 9.782ms.
Jan 13 20:25:32.987897 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Jan 13 20:25:32.987907 systemd[1]: Detected virtualization kvm.
Jan 13 20:25:32.987919 systemd[1]: Detected architecture arm64.
Jan 13 20:25:32.987930 systemd[1]: Detected first boot.
Jan 13 20:25:32.987940 systemd[1]: Initializing machine ID from VM UUID.
Jan 13 20:25:32.987951 zram_generator::config[1043]: No configuration found.
Jan 13 20:25:32.987962 systemd[1]: Populated /etc with preset unit settings.
Jan 13 20:25:32.987972 systemd[1]: initrd-switch-root.service: Deactivated successfully.
Jan 13 20:25:32.987982 systemd[1]: Stopped initrd-switch-root.service - Switch Root.
Jan 13 20:25:32.987994 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
Jan 13 20:25:32.988007 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
Jan 13 20:25:32.988017 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
Jan 13 20:25:32.988027 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
Jan 13 20:25:32.988037 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
Jan 13 20:25:32.988047 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
Jan 13 20:25:32.988058 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
Jan 13 20:25:32.988068 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
Jan 13 20:25:32.988078 systemd[1]: Created slice user.slice - User and Session Slice.
Jan 13 20:25:32.988090 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Jan 13 20:25:32.988100 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Jan 13 20:25:32.988111 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
Jan 13 20:25:32.988121 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
Jan 13 20:25:32.988132 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
Jan 13 20:25:32.988142 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Jan 13 20:25:32.988153 systemd[1]: Expecting device dev-ttyAMA0.device - /dev/ttyAMA0...
Jan 13 20:25:32.988163 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Jan 13 20:25:32.988174 systemd[1]: Stopped target initrd-switch-root.target - Switch Root.
Jan 13 20:25:32.988186 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems.
Jan 13 20:25:32.988196 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System.
Jan 13 20:25:32.988207 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
Jan 13 20:25:32.988218 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Jan 13 20:25:32.988233 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Jan 13 20:25:32.988244 systemd[1]: Reached target slices.target - Slice Units.
Jan 13 20:25:32.988254 systemd[1]: Reached target swap.target - Swaps.
Jan 13 20:25:32.988265 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
Jan 13 20:25:32.988277 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
Jan 13 20:25:32.988287 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Jan 13 20:25:32.988299 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Jan 13 20:25:32.988309 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Jan 13 20:25:32.988320 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
Jan 13 20:25:32.988330 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
Jan 13 20:25:32.988341 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
Jan 13 20:25:32.988351 systemd[1]: Mounting media.mount - External Media Directory...
Jan 13 20:25:32.988361 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
Jan 13 20:25:32.988373 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
Jan 13 20:25:32.988384 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
Jan 13 20:25:32.988394 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Jan 13 20:25:32.988405 systemd[1]: Reached target machines.target - Containers.
Jan 13 20:25:32.988415 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
Jan 13 20:25:32.988426 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Jan 13 20:25:32.988437 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Jan 13 20:25:32.988449 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
Jan 13 20:25:32.988460 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Jan 13 20:25:32.988472 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Jan 13 20:25:32.988483 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Jan 13 20:25:32.988494 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
Jan 13 20:25:32.988505 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Jan 13 20:25:32.988516 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Jan 13 20:25:32.988527 systemd[1]: systemd-fsck-root.service: Deactivated successfully.
Jan 13 20:25:32.988538 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device.
Jan 13 20:25:32.988788 systemd[1]: systemd-fsck-usr.service: Deactivated successfully.
Jan 13 20:25:32.988807 systemd[1]: Stopped systemd-fsck-usr.service.
Jan 13 20:25:32.988817 kernel: fuse: init (API version 7.39)
Jan 13 20:25:32.988828 systemd[1]: Starting systemd-journald.service - Journal Service...
Jan 13 20:25:32.988838 kernel: loop: module loaded
Jan 13 20:25:32.988848 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Jan 13 20:25:32.988858 kernel: ACPI: bus type drm_connector registered
Jan 13 20:25:32.988868 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Jan 13 20:25:32.988879 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
Jan 13 20:25:32.988890 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Jan 13 20:25:32.988902 systemd[1]: verity-setup.service: Deactivated successfully.
Jan 13 20:25:32.988934 systemd-journald[1110]: Collecting audit messages is disabled.
Jan 13 20:25:32.988958 systemd[1]: Stopped verity-setup.service.
Jan 13 20:25:32.988970 systemd-journald[1110]: Journal started
Jan 13 20:25:32.988991 systemd-journald[1110]: Runtime Journal (/run/log/journal/45c08ad1958948c5a36791ca1f35cd49) is 5.9M, max 47.3M, 41.4M free.
Jan 13 20:25:32.823180 systemd[1]: Queued start job for default target multi-user.target.
Jan 13 20:25:32.842332 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6.
Jan 13 20:25:32.842719 systemd[1]: systemd-journald.service: Deactivated successfully.
Jan 13 20:25:32.991042 systemd[1]: Started systemd-journald.service - Journal Service.
Jan 13 20:25:32.991686 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
Jan 13 20:25:32.992532 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
Jan 13 20:25:32.993480 systemd[1]: Mounted media.mount - External Media Directory.
Jan 13 20:25:32.994416 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
Jan 13 20:25:32.995317 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System.
Jan 13 20:25:32.996253 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp.
Jan 13 20:25:32.997236 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Jan 13 20:25:32.998409 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Jan 13 20:25:32.998565 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
Jan 13 20:25:32.999710 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Jan 13 20:25:32.999848 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Jan 13 20:25:33.000892 systemd[1]: modprobe@drm.service: Deactivated successfully.
Jan 13 20:25:33.001038 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Jan 13 20:25:33.003969 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Jan 13 20:25:33.004151 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Jan 13 20:25:33.005572 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Jan 13 20:25:33.005759 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse.
Jan 13 20:25:33.006905 systemd[1]: modprobe@loop.service: Deactivated successfully.
Jan 13 20:25:33.007035 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Jan 13 20:25:33.008157 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files.
Jan 13 20:25:33.009348 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Jan 13 20:25:33.010449 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Jan 13 20:25:33.011719 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems.
Jan 13 20:25:33.023674 systemd[1]: Reached target network-pre.target - Preparation for Network.
Jan 13 20:25:33.037766 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System...
Jan 13 20:25:33.039601 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System...
Jan 13 20:25:33.040458 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Jan 13 20:25:33.040487 systemd[1]: Reached target local-fs.target - Local File Systems.
Jan 13 20:25:33.042085 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink).
Jan 13 20:25:33.043941 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown...
Jan 13 20:25:33.045697 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache...
Jan 13 20:25:33.046510 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Jan 13 20:25:33.047942 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database...
Jan 13 20:25:33.049601 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage...
Jan 13 20:25:33.050503 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Jan 13 20:25:33.051345 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed...
Jan 13 20:25:33.052155 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Jan 13 20:25:33.055648 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Jan 13 20:25:33.057102 systemd-journald[1110]: Time spent on flushing to /var/log/journal/45c08ad1958948c5a36791ca1f35cd49 is 21.558ms for 853 entries.
Jan 13 20:25:33.057102 systemd-journald[1110]: System Journal (/var/log/journal/45c08ad1958948c5a36791ca1f35cd49) is 8.0M, max 195.6M, 187.6M free.
Jan 13 20:25:33.088884 systemd-journald[1110]: Received client request to flush runtime journal.
Jan 13 20:25:33.088922 kernel: loop0: detected capacity change from 0 to 116808
Jan 13 20:25:33.088935 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
Jan 13 20:25:33.059295 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/...
Jan 13 20:25:33.064460 systemd[1]: Starting systemd-sysusers.service - Create System Users...
Jan 13 20:25:33.067701 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Jan 13 20:25:33.068904 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System.
Jan 13 20:25:33.072092 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System.
Jan 13 20:25:33.075271 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown.
Jan 13 20:25:33.076540 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed.
Jan 13 20:25:33.080762 systemd[1]: Reached target first-boot-complete.target - First Boot Complete.
Jan 13 20:25:33.092907 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk...
Jan 13 20:25:33.097142 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization...
Jan 13 20:25:33.098396 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage.
Jan 13 20:25:33.101726 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Jan 13 20:25:33.110364 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
Jan 13 20:25:33.111426 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk.
Jan 13 20:25:33.114145 systemd[1]: Finished systemd-sysusers.service - Create System Users.
Jan 13 20:25:33.123683 kernel: loop1: detected capacity change from 0 to 113536
Jan 13 20:25:33.126968 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Jan 13 20:25:33.130716 udevadm[1169]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation.service, lvm2-activation-early.service not to pull it in.
Jan 13 20:25:33.158462 systemd-tmpfiles[1174]: ACLs are not supported, ignoring.
Jan 13 20:25:33.158481 systemd-tmpfiles[1174]: ACLs are not supported, ignoring.
Jan 13 20:25:33.161849 kernel: loop2: detected capacity change from 0 to 189592
Jan 13 20:25:33.163840 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Jan 13 20:25:33.198672 kernel: loop3: detected capacity change from 0 to 116808
Jan 13 20:25:33.204043 kernel: loop4: detected capacity change from 0 to 113536
Jan 13 20:25:33.208669 kernel: loop5: detected capacity change from 0 to 189592
Jan 13 20:25:33.212434 (sd-merge)[1180]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes'.
Jan 13 20:25:33.212844 (sd-merge)[1180]: Merged extensions into '/usr'.
Jan 13 20:25:33.216288 systemd[1]: Reloading requested from client PID 1155 ('systemd-sysext') (unit systemd-sysext.service)...
Jan 13 20:25:33.217086 systemd[1]: Reloading...
Jan 13 20:25:33.271698 zram_generator::config[1205]: No configuration found.
Jan 13 20:25:33.324974 ldconfig[1150]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
Jan 13 20:25:33.364796 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Jan 13 20:25:33.399296 systemd[1]: Reloading finished in 181 ms.
Jan 13 20:25:33.432226 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache.
Jan 13 20:25:33.433325 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/.
Jan 13 20:25:33.446879 systemd[1]: Starting ensure-sysext.service...
Jan 13 20:25:33.448440 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Jan 13 20:25:33.456695 systemd[1]: Reloading requested from client PID 1242 ('systemctl') (unit ensure-sysext.service)...
Jan 13 20:25:33.456710 systemd[1]: Reloading...
Jan 13 20:25:33.464959 systemd-tmpfiles[1243]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
Jan 13 20:25:33.465207 systemd-tmpfiles[1243]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring.
Jan 13 20:25:33.466280 systemd-tmpfiles[1243]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
Jan 13 20:25:33.466576 systemd-tmpfiles[1243]: ACLs are not supported, ignoring.
Jan 13 20:25:33.466711 systemd-tmpfiles[1243]: ACLs are not supported, ignoring.
Jan 13 20:25:33.468855 systemd-tmpfiles[1243]: Detected autofs mount point /boot during canonicalization of boot.
Jan 13 20:25:33.468954 systemd-tmpfiles[1243]: Skipping /boot
Jan 13 20:25:33.476131 systemd-tmpfiles[1243]: Detected autofs mount point /boot during canonicalization of boot.
Jan 13 20:25:33.476211 systemd-tmpfiles[1243]: Skipping /boot
Jan 13 20:25:33.502768 zram_generator::config[1270]: No configuration found.
Jan 13 20:25:33.583003 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Jan 13 20:25:33.618144 systemd[1]: Reloading finished in 161 ms.
Jan 13 20:25:33.636483 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database.
Jan 13 20:25:33.648142 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Jan 13 20:25:33.654067 systemd[1]: Starting audit-rules.service - Load Audit Rules...
Jan 13 20:25:33.656167 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs...
Jan 13 20:25:33.658052 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog...
Jan 13 20:25:33.661862 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Jan 13 20:25:33.675788 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Jan 13 20:25:33.677587 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP...
Jan 13 20:25:33.686904 systemd[1]: Starting systemd-userdbd.service - User Database Manager...
Jan 13 20:25:33.688952 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog.
Jan 13 20:25:33.694608 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Jan 13 20:25:33.705521 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Jan 13 20:25:33.707179 systemd-udevd[1311]: Using default interface naming scheme 'v255'.
Jan 13 20:25:33.710211 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Jan 13 20:25:33.715046 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Jan 13 20:25:33.716517 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Jan 13 20:25:33.720224 systemd[1]: Starting systemd-update-done.service - Update is Completed...
Jan 13 20:25:33.722485 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Jan 13 20:25:33.722667 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Jan 13 20:25:33.724052 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Jan 13 20:25:33.726026 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Jan 13 20:25:33.727516 systemd[1]: Started systemd-userdbd.service - User Database Manager.
Jan 13 20:25:33.730433 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs.
Jan 13 20:25:33.732001 systemd[1]: modprobe@loop.service: Deactivated successfully.
Jan 13 20:25:33.732133 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Jan 13 20:25:33.738101 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP.
Jan 13 20:25:33.739669 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Jan 13 20:25:33.740399 augenrules[1344]: No rules
Jan 13 20:25:33.741610 systemd[1]: audit-rules.service: Deactivated successfully.
Jan 13 20:25:33.742264 systemd[1]: Finished audit-rules.service - Load Audit Rules.
Jan 13 20:25:33.745908 systemd[1]: Finished systemd-update-done.service - Update is Completed.
Jan 13 20:25:33.756333 systemd[1]: Finished ensure-sysext.service.
Jan 13 20:25:33.768887 systemd[1]: Starting audit-rules.service - Load Audit Rules...
Jan 13 20:25:33.769729 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Jan 13 20:25:33.770921 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Jan 13 20:25:33.773041 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Jan 13 20:25:33.776140 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Jan 13 20:25:33.780282 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Jan 13 20:25:33.781204 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Jan 13 20:25:33.786152 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Jan 13 20:25:33.789695 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization...
Jan 13 20:25:33.794498 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 38 scanned by (udev-worker) (1366)
Jan 13 20:25:33.792898 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Jan 13 20:25:33.793350 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Jan 13 20:25:33.793472 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Jan 13 20:25:33.795254 systemd[1]: modprobe@drm.service: Deactivated successfully.
Jan 13 20:25:33.795454 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Jan 13 20:25:33.796887 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Jan 13 20:25:33.797097 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Jan 13 20:25:33.798898 systemd[1]: modprobe@loop.service: Deactivated successfully.
Jan 13 20:25:33.799125 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Jan 13 20:25:33.810216 systemd[1]: Condition check resulted in dev-ttyAMA0.device - /dev/ttyAMA0 being skipped.
Jan 13 20:25:33.813765 augenrules[1372]: /sbin/augenrules: No change
Jan 13 20:25:33.813536 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Jan 13 20:25:33.813590 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Jan 13 20:25:33.819774 systemd-resolved[1309]: Positive Trust Anchors:
Jan 13 20:25:33.819844 systemd-resolved[1309]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Jan 13 20:25:33.819876 systemd-resolved[1309]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Jan 13 20:25:33.824069 augenrules[1404]: No rules
Jan 13 20:25:33.825099 systemd[1]: audit-rules.service: Deactivated successfully.
Jan 13 20:25:33.825977 systemd-resolved[1309]: Defaulting to hostname 'linux'.
Jan 13 20:25:33.826779 systemd[1]: Finished audit-rules.service - Load Audit Rules.
Jan 13 20:25:33.835451 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Jan 13 20:25:33.836937 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Jan 13 20:25:33.838130 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Jan 13 20:25:33.845873 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM...
Jan 13 20:25:33.864156 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM.
Jan 13 20:25:33.872155 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization.
Jan 13 20:25:33.873248 systemd[1]: Reached target time-set.target - System Time Set.
Jan 13 20:25:33.877387 systemd-networkd[1383]: lo: Link UP
Jan 13 20:25:33.877398 systemd-networkd[1383]: lo: Gained carrier
Jan 13 20:25:33.878723 systemd-networkd[1383]: Enumeration completed
Jan 13 20:25:33.878799 systemd[1]: Started systemd-networkd.service - Network Configuration.
Jan 13 20:25:33.879611 systemd[1]: Reached target network.target - Network.
Jan 13 20:25:33.884149 systemd-networkd[1383]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jan 13 20:25:33.884162 systemd-networkd[1383]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Jan 13 20:25:33.885191 systemd-networkd[1383]: eth0: Link UP
Jan 13 20:25:33.885199 systemd-networkd[1383]: eth0: Gained carrier
Jan 13 20:25:33.885212 systemd-networkd[1383]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jan 13 20:25:33.888813 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured...
Jan 13 20:25:33.900720 systemd-networkd[1383]: eth0: DHCPv4 address 10.0.0.128/16, gateway 10.0.0.1 acquired from 10.0.0.1
Jan 13 20:25:33.902025 systemd-timesyncd[1387]: Network configuration changed, trying to establish connection.
Jan 13 20:25:33.902535 systemd-timesyncd[1387]: Contacted time server 10.0.0.1:123 (10.0.0.1).
Jan 13 20:25:33.902580 systemd-timesyncd[1387]: Initial clock synchronization to Mon 2025-01-13 20:25:33.928349 UTC.
Jan 13 20:25:33.914905 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jan 13 20:25:33.922024 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization.
Jan 13 20:25:33.924993 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes...
Jan 13 20:25:33.942425 lvm[1424]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Jan 13 20:25:33.959785 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Jan 13 20:25:33.971097 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes.
Jan 13 20:25:33.972225 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Jan 13 20:25:33.973044 systemd[1]: Reached target sysinit.target - System Initialization.
Jan 13 20:25:33.973862 systemd[1]: Started motdgen.path - Watch for update engine configuration changes.
Jan 13 20:25:33.974727 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data.
Jan 13 20:25:33.975763 systemd[1]: Started logrotate.timer - Daily rotation of log files.
Jan 13 20:25:33.976599 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information..
Jan 13 20:25:33.977519 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories.
Jan 13 20:25:33.978410 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate).
Jan 13 20:25:33.978443 systemd[1]: Reached target paths.target - Path Units.
Jan 13 20:25:33.979082 systemd[1]: Reached target timers.target - Timer Units.
Jan 13 20:25:33.980462 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket.
Jan 13 20:25:33.982548 systemd[1]: Starting docker.socket - Docker Socket for the API...
Jan 13 20:25:33.990448 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket.
Jan 13 20:25:33.992343 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes...
Jan 13 20:25:33.993572 systemd[1]: Listening on docker.socket - Docker Socket for the API.
Jan 13 20:25:33.994441 systemd[1]: Reached target sockets.target - Socket Units.
Jan 13 20:25:33.995160 systemd[1]: Reached target basic.target - Basic System.
Jan 13 20:25:33.995849 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met.
Jan 13 20:25:33.995880 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met.
Jan 13 20:25:33.996671 systemd[1]: Starting containerd.service - containerd container runtime...
Jan 13 20:25:33.998337 systemd[1]: Starting dbus.service - D-Bus System Message Bus...
Jan 13 20:25:34.000775 lvm[1432]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Jan 13 20:25:34.001314 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit...
Jan 13 20:25:34.006858 systemd[1]: Starting extend-filesystems.service - Extend Filesystems...
Jan 13 20:25:34.008802 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment).
Jan 13 20:25:34.009016 jq[1435]: false
Jan 13 20:25:34.009805 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd...
Jan 13 20:25:34.011367 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin...
Jan 13 20:25:34.013155 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline...
Jan 13 20:25:34.016844 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys...
Jan 13 20:25:34.020981 extend-filesystems[1436]: Found loop3
Jan 13 20:25:34.023754 extend-filesystems[1436]: Found loop4
Jan 13 20:25:34.023754 extend-filesystems[1436]: Found loop5
Jan 13 20:25:34.023754 extend-filesystems[1436]: Found vda
Jan 13 20:25:34.023754 extend-filesystems[1436]: Found vda1
Jan 13 20:25:34.023754 extend-filesystems[1436]: Found vda2
Jan 13 20:25:34.023754 extend-filesystems[1436]: Found vda3
Jan 13 20:25:34.023754 extend-filesystems[1436]: Found usr
Jan 13 20:25:34.023754 extend-filesystems[1436]: Found vda4
Jan 13 20:25:34.023754 extend-filesystems[1436]: Found vda6
Jan 13 20:25:34.023754 extend-filesystems[1436]: Found vda7
Jan 13 20:25:34.023754 extend-filesystems[1436]: Found vda9
Jan 13 20:25:34.023754 extend-filesystems[1436]: Checking size of /dev/vda9
Jan 13 20:25:34.022576 systemd[1]: Starting systemd-logind.service - User Login Management...
Jan 13 20:25:34.024374 dbus-daemon[1434]: [system] SELinux support is enabled
Jan 13 20:25:34.026937 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0).
Jan 13 20:25:34.027294 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details.
Jan 13 20:25:34.028277 systemd[1]: Starting update-engine.service - Update Engine...
Jan 13 20:25:34.030811 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition...
Jan 13 20:25:34.032380 systemd[1]: Started dbus.service - D-Bus System Message Bus.
Jan 13 20:25:34.038712 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes.
Jan 13 20:25:34.048132 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'.
Jan 13 20:25:34.048300 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped.
Jan 13 20:25:34.048563 systemd[1]: motdgen.service: Deactivated successfully.
Jan 13 20:25:34.048749 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd.
Jan 13 20:25:34.057012 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully.
Jan 13 20:25:34.058238 extend-filesystems[1436]: Resized partition /dev/vda9
Jan 13 20:25:34.057157 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline.
Jan 13 20:25:34.064765 extend-filesystems[1459]: resize2fs 1.47.1 (20-May-2024)
Jan 13 20:25:34.076466 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 38 scanned by (udev-worker) (1366)
Jan 13 20:25:34.076493 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 1864699 blocks
Jan 13 20:25:34.073584 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml).
Jan 13 20:25:34.073617 systemd[1]: Reached target system-config.target - Load system-provided cloud configs.
Jan 13 20:25:34.077805 jq[1449]: true
Jan 13 20:25:34.079880 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url).
Jan 13 20:25:34.079906 systemd[1]: Reached target user-config.target - Load user-provided cloud configs.
Jan 13 20:25:34.087027 update_engine[1446]: I20250113 20:25:34.086199 1446 main.cc:92] Flatcar Update Engine starting
Jan 13 20:25:34.088482 (ntainerd)[1460]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR
Jan 13 20:25:34.091868 jq[1469]: true
Jan 13 20:25:34.093990 systemd-logind[1443]: Watching system buttons on /dev/input/event0 (Power Button)
Jan 13 20:25:34.094455 systemd-logind[1443]: New seat seat0.
Jan 13 20:25:34.096181 update_engine[1446]: I20250113 20:25:34.096127 1446 update_check_scheduler.cc:74] Next update check in 3m55s
Jan 13 20:25:34.101724 kernel: EXT4-fs (vda9): resized filesystem to 1864699
Jan 13 20:25:34.106951 systemd[1]: Started systemd-logind.service - User Login Management.
Jan 13 20:25:34.110023 systemd[1]: Started update-engine.service - Update Engine.
Jan 13 20:25:34.113133 extend-filesystems[1459]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required
Jan 13 20:25:34.113133 extend-filesystems[1459]: old_desc_blocks = 1, new_desc_blocks = 1
Jan 13 20:25:34.113133 extend-filesystems[1459]: The filesystem on /dev/vda9 is now 1864699 (4k) blocks long.
Jan 13 20:25:34.122631 extend-filesystems[1436]: Resized filesystem in /dev/vda9
Jan 13 20:25:34.123411 tar[1458]: linux-arm64/helm
Jan 13 20:25:34.114058 systemd[1]: extend-filesystems.service: Deactivated successfully.
Jan 13 20:25:34.114261 systemd[1]: Finished extend-filesystems.service - Extend Filesystems.
Jan 13 20:25:34.129960 systemd[1]: Started locksmithd.service - Cluster reboot manager.
Jan 13 20:25:34.165249 bash[1490]: Updated "/home/core/.ssh/authorized_keys"
Jan 13 20:25:34.171586 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition.
Jan 13 20:25:34.173337 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met.
Jan 13 20:25:34.185241 locksmithd[1487]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot"
Jan 13 20:25:34.273189 containerd[1460]: time="2025-01-13T20:25:34.273109061Z" level=info msg="starting containerd" revision=9b2ad7760328148397346d10c7b2004271249db4 version=v1.7.23
Jan 13 20:25:34.304119 containerd[1460]: time="2025-01-13T20:25:34.304080931Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
Jan 13 20:25:34.305762 containerd[1460]: time="2025-01-13T20:25:34.305668654Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.71-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1
Jan 13 20:25:34.305762 containerd[1460]: time="2025-01-13T20:25:34.305704643Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
Jan 13 20:25:34.305762 containerd[1460]: time="2025-01-13T20:25:34.305725660Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
Jan 13 20:25:34.305938 containerd[1460]: time="2025-01-13T20:25:34.305917415Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
Jan 13 20:25:34.305967 containerd[1460]: time="2025-01-13T20:25:34.305942515Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
Jan 13 20:25:34.306014 containerd[1460]: time="2025-01-13T20:25:34.305999201Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
Jan 13 20:25:34.306033 containerd[1460]: time="2025-01-13T20:25:34.306015334Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
Jan 13 20:25:34.306195 containerd[1460]: time="2025-01-13T20:25:34.306168578Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Jan 13 20:25:34.306195 containerd[1460]: time="2025-01-13T20:25:34.306188193Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
Jan 13 20:25:34.306237 containerd[1460]: time="2025-01-13T20:25:34.306201564Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1
Jan 13 20:25:34.306237 containerd[1460]: time="2025-01-13T20:25:34.306210652Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
Jan 13 20:25:34.306297 containerd[1460]: time="2025-01-13T20:25:34.306283310Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
Jan 13 20:25:34.306487 containerd[1460]: time="2025-01-13T20:25:34.306469581Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
Jan 13 20:25:34.306579 containerd[1460]: time="2025-01-13T20:25:34.306563977Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Jan 13 20:25:34.306599 containerd[1460]: time="2025-01-13T20:25:34.306581191Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
Jan 13 20:25:34.306683 containerd[1460]: time="2025-01-13T20:25:34.306668542Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
Jan 13 20:25:34.306730 containerd[1460]: time="2025-01-13T20:25:34.306717541Z" level=info msg="metadata content store policy set" policy=shared
Jan 13 20:25:34.309690 containerd[1460]: time="2025-01-13T20:25:34.309663922Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
Jan 13 20:25:34.309729 containerd[1460]: time="2025-01-13T20:25:34.309708238Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
Jan 13 20:25:34.309729 containerd[1460]: time="2025-01-13T20:25:34.309725172Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
Jan 13 20:25:34.309763 containerd[1460]: time="2025-01-13T20:25:34.309741105Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
Jan 13 20:25:34.309763 containerd[1460]: time="2025-01-13T20:25:34.309755236Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
Jan 13 20:25:34.309920 containerd[1460]: time="2025-01-13T20:25:34.309903036Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
Jan 13 20:25:34.310125 containerd[1460]: time="2025-01-13T20:25:34.310109362Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
Jan 13 20:25:34.310222 containerd[1460]: time="2025-01-13T20:25:34.310208322Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
Jan 13 20:25:34.310245 containerd[1460]: time="2025-01-13T20:25:34.310227938Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
Jan 13 20:25:34.310245 containerd[1460]: time="2025-01-13T20:25:34.310241789Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
Jan 13 20:25:34.310278 containerd[1460]: time="2025-01-13T20:25:34.310254119Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
Jan 13 20:25:34.310278 containerd[1460]: time="2025-01-13T20:25:34.310271774Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
Jan 13 20:25:34.310310 containerd[1460]: time="2025-01-13T20:25:34.310283503Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
Jan 13 20:25:34.310310 containerd[1460]: time="2025-01-13T20:25:34.310295993Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
Jan 13 20:25:34.310351 containerd[1460]: time="2025-01-13T20:25:34.310309044Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
Jan 13 20:25:34.310351 containerd[1460]: time="2025-01-13T20:25:34.310320933Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
Jan 13 20:25:34.310351 containerd[1460]: time="2025-01-13T20:25:34.310333103Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
Jan 13 20:25:34.310351 containerd[1460]: time="2025-01-13T20:25:34.310343351Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
Jan 13 20:25:34.310416 containerd[1460]: time="2025-01-13T20:25:34.310360886Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
Jan 13 20:25:34.310416 containerd[1460]: time="2025-01-13T20:25:34.310374617Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
Jan 13 20:25:34.310416 containerd[1460]: time="2025-01-13T20:25:34.310387067Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
Jan 13 20:25:34.310416 containerd[1460]: time="2025-01-13T20:25:34.310411406Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
Jan 13 20:25:34.310486 containerd[1460]: time="2025-01-13T20:25:34.310423776Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
Jan 13 20:25:34.310486 containerd[1460]: time="2025-01-13T20:25:34.310435666Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
Jan 13 20:25:34.310486 containerd[1460]: time="2025-01-13T20:25:34.310446915Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
Jan 13 20:25:34.310486 containerd[1460]: time="2025-01-13T20:25:34.310458484Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
Jan 13 20:25:34.310486 containerd[1460]: time="2025-01-13T20:25:34.310470935Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
Jan 13 20:25:34.310486 containerd[1460]: time="2025-01-13T20:25:34.310484385Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
Jan 13 20:25:34.310580 containerd[1460]: time="2025-01-13T20:25:34.310496115Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
Jan 13 20:25:34.310580 containerd[1460]: time="2025-01-13T20:25:34.310507564Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
Jan 13 20:25:34.310580 containerd[1460]: time="2025-01-13T20:25:34.310518053Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
Jan 13 20:25:34.310580 containerd[1460]: time="2025-01-13T20:25:34.310531744Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
Jan 13 20:25:34.310580 containerd[1460]: time="2025-01-13T20:25:34.310551239Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
Jan 13 20:25:34.310580 containerd[1460]: time="2025-01-13T20:25:34.310564890Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
Jan 13 20:25:34.310580 containerd[1460]: time="2025-01-13T20:25:34.310575019Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
Jan 13 20:25:34.310818 containerd[1460]: time="2025-01-13T20:25:34.310802843Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
Jan 13 20:25:34.310844 containerd[1460]: time="2025-01-13T20:25:34.310824740Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1
Jan 13 20:25:34.310844 containerd[1460]: time="2025-01-13T20:25:34.310836590Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
Jan 13 20:25:34.310915 containerd[1460]: time="2025-01-13T20:25:34.310848159Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1
Jan 13 20:25:34.310939 containerd[1460]: time="2025-01-13T20:25:34.310915294Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
Jan 13 20:25:34.310939 containerd[1460]: time="2025-01-13T20:25:34.310929025Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
Jan 13 20:25:34.310973 containerd[1460]: time="2025-01-13T20:25:34.310938272Z" level=info msg="NRI interface is disabled by configuration."
Jan 13 20:25:34.310973 containerd[1460]: time="2025-01-13T20:25:34.310948080Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1
Jan 13 20:25:34.311304 containerd[1460]: time="2025-01-13T20:25:34.311264376Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}"
Jan 13 20:25:34.311398 containerd[1460]: time="2025-01-13T20:25:34.311314897Z" level=info msg="Connect containerd service"
Jan 13 20:25:34.311398 containerd[1460]: time="2025-01-13T20:25:34.311344801Z" level=info msg="using legacy CRI server"
Jan 13 20:25:34.311398 containerd[1460]: time="2025-01-13T20:25:34.311351206Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this"
Jan 13 20:25:34.311590 containerd[1460]: time="2025-01-13T20:25:34.311576468Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\""
Jan 13 20:25:34.312335 containerd[1460]: time="2025-01-13T20:25:34.312297931Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Jan 13 20:25:34.312523
containerd[1460]: time="2025-01-13T20:25:34.312489886Z" level=info msg="Start subscribing containerd event" Jan 13 20:25:34.312546 containerd[1460]: time="2025-01-13T20:25:34.312539606Z" level=info msg="Start recovering state" Jan 13 20:25:34.312792 containerd[1460]: time="2025-01-13T20:25:34.312766750Z" level=info msg="Start event monitor" Jan 13 20:25:34.312792 containerd[1460]: time="2025-01-13T20:25:34.312792010Z" level=info msg="Start snapshots syncer" Jan 13 20:25:34.312849 containerd[1460]: time="2025-01-13T20:25:34.312802579Z" level=info msg="Start cni network conf syncer for default" Jan 13 20:25:34.312849 containerd[1460]: time="2025-01-13T20:25:34.312809464Z" level=info msg="Start streaming server" Jan 13 20:25:34.313484 containerd[1460]: time="2025-01-13T20:25:34.313456828Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Jan 13 20:25:34.313515 containerd[1460]: time="2025-01-13T20:25:34.313505187Z" level=info msg=serving... address=/run/containerd/containerd.sock Jan 13 20:25:34.313638 systemd[1]: Started containerd.service - containerd container runtime. Jan 13 20:25:34.316445 containerd[1460]: time="2025-01-13T20:25:34.315835670Z" level=info msg="containerd successfully booted in 0.045349s" Jan 13 20:25:34.471444 tar[1458]: linux-arm64/LICENSE Jan 13 20:25:34.471444 tar[1458]: linux-arm64/README.md Jan 13 20:25:34.483968 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Jan 13 20:25:35.048096 sshd_keygen[1456]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Jan 13 20:25:35.068453 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Jan 13 20:25:35.076972 systemd[1]: Starting issuegen.service - Generate /run/issue... Jan 13 20:25:35.083025 systemd[1]: issuegen.service: Deactivated successfully. Jan 13 20:25:35.083228 systemd[1]: Finished issuegen.service - Generate /run/issue. Jan 13 20:25:35.085965 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Jan 13 20:25:35.097111 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Jan 13 20:25:35.113939 systemd[1]: Started getty@tty1.service - Getty on tty1. Jan 13 20:25:35.115932 systemd[1]: Started serial-getty@ttyAMA0.service - Serial Getty on ttyAMA0. Jan 13 20:25:35.116880 systemd[1]: Reached target getty.target - Login Prompts. Jan 13 20:25:35.826921 systemd-networkd[1383]: eth0: Gained IPv6LL Jan 13 20:25:35.829366 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Jan 13 20:25:35.830854 systemd[1]: Reached target network-online.target - Network is Online. Jan 13 20:25:35.844283 systemd[1]: Starting coreos-metadata.service - QEMU metadata agent... Jan 13 20:25:35.846865 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 13 20:25:35.848912 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Jan 13 20:25:35.863388 systemd[1]: coreos-metadata.service: Deactivated successfully. Jan 13 20:25:35.863565 systemd[1]: Finished coreos-metadata.service - QEMU metadata agent. Jan 13 20:25:35.866662 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Jan 13 20:25:35.869132 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Jan 13 20:25:36.350310 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 13 20:25:36.351583 systemd[1]: Reached target multi-user.target - Multi-User System. 
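At this point containerd reports a successful boot and is serving its gRPC API on /run/containerd/containerd.sock (plus the ttrpc variant used by shims); the earlier "cni plugin not initialized" error is expected on first boot, since nothing has installed a conflist under /etc/cni/net.d yet. As a sanity check, a minimal Go client can connect to the logged socket and ask for the daemon version; this is a sketch assuming the containerd Go module is on hand and the caller can read the socket:

    package main

    import (
        "context"
        "fmt"
        "log"

        "github.com/containerd/containerd"
        "github.com/containerd/containerd/namespaces"
    )

    func main() {
        // Connect to the socket the daemon reported it is serving on.
        client, err := containerd.New("/run/containerd/containerd.sock")
        if err != nil {
            log.Fatalf("connect: %v", err)
        }
        defer client.Close()

        // CRI-managed resources live in the "k8s.io" namespace.
        ctx := namespaces.WithNamespace(context.Background(), "k8s.io")

        version, err := client.Version(ctx)
        if err != nil {
            log.Fatalf("version: %v", err)
        }
        fmt.Printf("containerd %s (revision %s)\n", version.Version, version.Revision)
    }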
Jan 13 20:25:36.354137 (kubelet)[1547]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 13 20:25:36.356726 systemd[1]: Startup finished in 607ms (kernel) + 4.732s (initrd) + 3.925s (userspace) = 9.265s. Jan 13 20:25:36.813291 kubelet[1547]: E0113 20:25:36.813178 1547 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 13 20:25:36.815681 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 13 20:25:36.815829 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 13 20:25:40.149309 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Jan 13 20:25:40.150405 systemd[1]: Started sshd@0-10.0.0.128:22-10.0.0.1:58778.service - OpenSSH per-connection server daemon (10.0.0.1:58778). Jan 13 20:25:40.204903 sshd[1560]: Accepted publickey for core from 10.0.0.1 port 58778 ssh2: RSA SHA256:iH1z/OIMgfi4N9JZYqLIdSBLDStp/YciUtgOKDXSKOo Jan 13 20:25:40.206462 sshd-session[1560]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 20:25:40.215280 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Jan 13 20:25:40.225936 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Jan 13 20:25:40.227770 systemd-logind[1443]: New session 1 of user core. Jan 13 20:25:40.234569 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Jan 13 20:25:40.236776 systemd[1]: Starting user@500.service - User Manager for UID 500... Jan 13 20:25:40.244755 (systemd)[1564]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Jan 13 20:25:40.324242 systemd[1564]: Queued start job for default target default.target. Jan 13 20:25:40.335561 systemd[1564]: Created slice app.slice - User Application Slice. Jan 13 20:25:40.335605 systemd[1564]: Reached target paths.target - Paths. Jan 13 20:25:40.335617 systemd[1564]: Reached target timers.target - Timers. Jan 13 20:25:40.336792 systemd[1564]: Starting dbus.socket - D-Bus User Message Bus Socket... Jan 13 20:25:40.346317 systemd[1564]: Listening on dbus.socket - D-Bus User Message Bus Socket. Jan 13 20:25:40.346390 systemd[1564]: Reached target sockets.target - Sockets. Jan 13 20:25:40.346403 systemd[1564]: Reached target basic.target - Basic System. Jan 13 20:25:40.346440 systemd[1564]: Reached target default.target - Main User Target. Jan 13 20:25:40.346467 systemd[1564]: Startup finished in 96ms. Jan 13 20:25:40.346783 systemd[1]: Started user@500.service - User Manager for UID 500. Jan 13 20:25:40.348095 systemd[1]: Started session-1.scope - Session 1 of User core. Jan 13 20:25:40.407678 systemd[1]: Started sshd@1-10.0.0.128:22-10.0.0.1:58794.service - OpenSSH per-connection server daemon (10.0.0.1:58794). Jan 13 20:25:40.446969 sshd[1575]: Accepted publickey for core from 10.0.0.1 port 58794 ssh2: RSA SHA256:iH1z/OIMgfi4N9JZYqLIdSBLDStp/YciUtgOKDXSKOo Jan 13 20:25:40.448173 sshd-session[1575]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 20:25:40.452127 systemd-logind[1443]: New session 2 of user core. Jan 13 20:25:40.458810 systemd[1]: Started session-2.scope - Session 2 of User core. 
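The kubelet exit above is the normal first-boot behavior on a kubeadm-provisioned node: /var/lib/kubelet/config.yaml is generated later by kubeadm init or join, so until then the unit fails and systemd re-runs it (the restart counter appears further down). Purely as an illustration of what eventually lands there (the file path is real, the field values below are assumed, not what ran on this host), a minimal KubeletConfiguration matching the systemd cgroup driver used elsewhere in this boot could be written like so:

    package main

    import (
        "log"
        "os"
    )

    // Minimal KubeletConfiguration sketch; kubeadm normally generates this
    // file, so the contents here are illustrative only.
    const kubeletConfig = `apiVersion: kubelet.config.k8s.io/v1beta1
    kind: KubeletConfiguration
    cgroupDriver: systemd
    `

    func main() {
        // Requires root: /var/lib/kubelet is owned by the kubelet.
        if err := os.WriteFile("/var/lib/kubelet/config.yaml", []byte(kubeletConfig), 0o644); err != nil {
            log.Fatalf("write config: %v", err)
        }
    }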
Jan 13 20:25:40.510540 sshd[1577]: Connection closed by 10.0.0.1 port 58794 Jan 13 20:25:40.510963 sshd-session[1575]: pam_unix(sshd:session): session closed for user core Jan 13 20:25:40.523021 systemd[1]: sshd@1-10.0.0.128:22-10.0.0.1:58794.service: Deactivated successfully. Jan 13 20:25:40.525079 systemd[1]: session-2.scope: Deactivated successfully. Jan 13 20:25:40.526165 systemd-logind[1443]: Session 2 logged out. Waiting for processes to exit. Jan 13 20:25:40.537056 systemd[1]: Started sshd@2-10.0.0.128:22-10.0.0.1:58802.service - OpenSSH per-connection server daemon (10.0.0.1:58802). Jan 13 20:25:40.538112 systemd-logind[1443]: Removed session 2. Jan 13 20:25:40.572029 sshd[1582]: Accepted publickey for core from 10.0.0.1 port 58802 ssh2: RSA SHA256:iH1z/OIMgfi4N9JZYqLIdSBLDStp/YciUtgOKDXSKOo Jan 13 20:25:40.573187 sshd-session[1582]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 20:25:40.577131 systemd-logind[1443]: New session 3 of user core. Jan 13 20:25:40.585811 systemd[1]: Started session-3.scope - Session 3 of User core. Jan 13 20:25:40.633060 sshd[1584]: Connection closed by 10.0.0.1 port 58802 Jan 13 20:25:40.633374 sshd-session[1582]: pam_unix(sshd:session): session closed for user core Jan 13 20:25:40.646947 systemd[1]: sshd@2-10.0.0.128:22-10.0.0.1:58802.service: Deactivated successfully. Jan 13 20:25:40.648404 systemd[1]: session-3.scope: Deactivated successfully. Jan 13 20:25:40.650795 systemd-logind[1443]: Session 3 logged out. Waiting for processes to exit. Jan 13 20:25:40.664928 systemd[1]: Started sshd@3-10.0.0.128:22-10.0.0.1:58806.service - OpenSSH per-connection server daemon (10.0.0.1:58806). Jan 13 20:25:40.666007 systemd-logind[1443]: Removed session 3. Jan 13 20:25:40.703945 sshd[1589]: Accepted publickey for core from 10.0.0.1 port 58806 ssh2: RSA SHA256:iH1z/OIMgfi4N9JZYqLIdSBLDStp/YciUtgOKDXSKOo Jan 13 20:25:40.705185 sshd-session[1589]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 20:25:40.709229 systemd-logind[1443]: New session 4 of user core. Jan 13 20:25:40.716784 systemd[1]: Started session-4.scope - Session 4 of User core. Jan 13 20:25:40.768060 sshd[1591]: Connection closed by 10.0.0.1 port 58806 Jan 13 20:25:40.768426 sshd-session[1589]: pam_unix(sshd:session): session closed for user core Jan 13 20:25:40.774864 systemd[1]: sshd@3-10.0.0.128:22-10.0.0.1:58806.service: Deactivated successfully. Jan 13 20:25:40.776170 systemd[1]: session-4.scope: Deactivated successfully. Jan 13 20:25:40.777379 systemd-logind[1443]: Session 4 logged out. Waiting for processes to exit. Jan 13 20:25:40.778440 systemd[1]: Started sshd@4-10.0.0.128:22-10.0.0.1:58818.service - OpenSSH per-connection server daemon (10.0.0.1:58818). Jan 13 20:25:40.780177 systemd-logind[1443]: Removed session 4. Jan 13 20:25:40.817147 sshd[1596]: Accepted publickey for core from 10.0.0.1 port 58818 ssh2: RSA SHA256:iH1z/OIMgfi4N9JZYqLIdSBLDStp/YciUtgOKDXSKOo Jan 13 20:25:40.818417 sshd-session[1596]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 20:25:40.822336 systemd-logind[1443]: New session 5 of user core. Jan 13 20:25:40.836833 systemd[1]: Started session-5.scope - Session 5 of User core. 
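Each short-lived connection above gets its own per-connection unit (sshd@<n>-10.0.0.128:22-10.0.0.1:<port>.service) plus a session-N.scope, the pattern a remote provisioner produces when it opens one connection per command. A rough Go equivalent of one such connection, assuming key auth for the core user (host and user are from the log; the key path is illustrative):

    package main

    import (
        "fmt"
        "log"
        "os"

        "golang.org/x/crypto/ssh"
    )

    func main() {
        key, err := os.ReadFile("/home/core/.ssh/id_rsa") // illustrative key path
        if err != nil {
            log.Fatal(err)
        }
        signer, err := ssh.ParsePrivateKey(key)
        if err != nil {
            log.Fatal(err)
        }
        cfg := &ssh.ClientConfig{
            User:            "core",
            Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
            HostKeyCallback: ssh.InsecureIgnoreHostKey(), // acceptable in a sketch, not in production
        }
        client, err := ssh.Dial("tcp", "10.0.0.128:22", cfg)
        if err != nil {
            log.Fatal(err)
        }
        defer client.Close()

        // One command per session, mirroring the open/close churn in the log.
        session, err := client.NewSession()
        if err != nil {
            log.Fatal(err)
        }
        defer session.Close()
        out, err := session.Output("uname -a")
        if err != nil {
            log.Fatal(err)
        }
        fmt.Print(string(out))
    }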
Jan 13 20:25:40.901419 sudo[1599]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Jan 13 20:25:40.901741 sudo[1599]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 13 20:25:41.218941 systemd[1]: Starting docker.service - Docker Application Container Engine... Jan 13 20:25:41.219063 (dockerd)[1620]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Jan 13 20:25:41.483732 dockerd[1620]: time="2025-01-13T20:25:41.483282065Z" level=info msg="Starting up" Jan 13 20:25:41.635975 dockerd[1620]: time="2025-01-13T20:25:41.635930019Z" level=info msg="Loading containers: start." Jan 13 20:25:41.771681 kernel: Initializing XFRM netlink socket Jan 13 20:25:41.832103 systemd-networkd[1383]: docker0: Link UP Jan 13 20:25:41.862989 dockerd[1620]: time="2025-01-13T20:25:41.862940221Z" level=info msg="Loading containers: done." Jan 13 20:25:41.875415 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck3536653759-merged.mount: Deactivated successfully. Jan 13 20:25:41.877407 dockerd[1620]: time="2025-01-13T20:25:41.877362915Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Jan 13 20:25:41.877478 dockerd[1620]: time="2025-01-13T20:25:41.877453614Z" level=info msg="Docker daemon" commit=8b539b8df24032dabeaaa099cf1d0535ef0286a3 containerd-snapshotter=false storage-driver=overlay2 version=27.2.1 Jan 13 20:25:41.877563 dockerd[1620]: time="2025-01-13T20:25:41.877546954Z" level=info msg="Daemon has completed initialization" Jan 13 20:25:41.903137 dockerd[1620]: time="2025-01-13T20:25:41.903080680Z" level=info msg="API listen on /run/docker.sock" Jan 13 20:25:41.903245 systemd[1]: Started docker.service - Docker Application Container Engine. Jan 13 20:25:42.455908 containerd[1460]: time="2025-01-13T20:25:42.455856156Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.31.4\"" Jan 13 20:25:43.133190 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2672646201.mount: Deactivated successfully. 
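dockerd has finished initialization and is answering on /run/docker.sock; the overlay2 redirect_dir message is an informational warning on this kernel, not a failure. A minimal liveness check with the official Go client (a sketch; FromEnv falls back to the default socket when DOCKER_HOST is unset):

    package main

    import (
        "context"
        "fmt"
        "log"

        "github.com/docker/docker/client"
    )

    func main() {
        cli, err := client.NewClientWithOpts(client.FromEnv, client.WithAPIVersionNegotiation())
        if err != nil {
            log.Fatal(err)
        }
        defer cli.Close()

        ping, err := cli.Ping(context.Background())
        if err != nil {
            log.Fatal(err) // daemon not reachable on /run/docker.sock
        }
        fmt.Println("docker API version:", ping.APIVersion)
    }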
Jan 13 20:25:43.949045 containerd[1460]: time="2025-01-13T20:25:43.948995187Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 20:25:43.949463 containerd[1460]: time="2025-01-13T20:25:43.949423407Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.31.4: active requests=0, bytes read=25615587" Jan 13 20:25:43.950231 containerd[1460]: time="2025-01-13T20:25:43.950169900Z" level=info msg="ImageCreate event name:\"sha256:3e1123d6ebadbafa6eb77a9047f23f20befbbe2f177eb473a81b27a5de8c2ec5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 20:25:43.953174 containerd[1460]: time="2025-01-13T20:25:43.953144627Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:ace6a943b058439bd6daeb74f152e7c36e6fc0b5e481cdff9364cd6ca0473e5e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 20:25:43.954323 containerd[1460]: time="2025-01-13T20:25:43.954284720Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.31.4\" with image id \"sha256:3e1123d6ebadbafa6eb77a9047f23f20befbbe2f177eb473a81b27a5de8c2ec5\", repo tag \"registry.k8s.io/kube-apiserver:v1.31.4\", repo digest \"registry.k8s.io/kube-apiserver@sha256:ace6a943b058439bd6daeb74f152e7c36e6fc0b5e481cdff9364cd6ca0473e5e\", size \"25612385\" in 1.498389059s" Jan 13 20:25:43.954374 containerd[1460]: time="2025-01-13T20:25:43.954322663Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.31.4\" returns image reference \"sha256:3e1123d6ebadbafa6eb77a9047f23f20befbbe2f177eb473a81b27a5de8c2ec5\"" Jan 13 20:25:43.955160 containerd[1460]: time="2025-01-13T20:25:43.955130153Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.31.4\"" Jan 13 20:25:45.403061 containerd[1460]: time="2025-01-13T20:25:45.403009336Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 20:25:45.403701 containerd[1460]: time="2025-01-13T20:25:45.403638694Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.31.4: active requests=0, bytes read=22470098" Jan 13 20:25:45.404267 containerd[1460]: time="2025-01-13T20:25:45.404234354Z" level=info msg="ImageCreate event name:\"sha256:d5369864a42bf2c01d3ad462832526b7d3e40620c0e75fecefbffc203562ad55\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 20:25:45.407745 containerd[1460]: time="2025-01-13T20:25:45.407709015Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:4bd1d4a449e7a1a4f375bd7c71abf48a95f8949b38f725ded255077329f21f7b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 20:25:45.408502 containerd[1460]: time="2025-01-13T20:25:45.408410134Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.31.4\" with image id \"sha256:d5369864a42bf2c01d3ad462832526b7d3e40620c0e75fecefbffc203562ad55\", repo tag \"registry.k8s.io/kube-controller-manager:v1.31.4\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:4bd1d4a449e7a1a4f375bd7c71abf48a95f8949b38f725ded255077329f21f7b\", size \"23872417\" in 1.453247082s" Jan 13 20:25:45.408502 containerd[1460]: time="2025-01-13T20:25:45.408439071Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.31.4\" returns image reference \"sha256:d5369864a42bf2c01d3ad462832526b7d3e40620c0e75fecefbffc203562ad55\"" Jan 13 20:25:45.409283 
containerd[1460]: time="2025-01-13T20:25:45.409156040Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.31.4\"" Jan 13 20:25:46.516298 containerd[1460]: time="2025-01-13T20:25:46.516241065Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 20:25:46.516756 containerd[1460]: time="2025-01-13T20:25:46.516701759Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.31.4: active requests=0, bytes read=17024204" Jan 13 20:25:46.517491 containerd[1460]: time="2025-01-13T20:25:46.517457577Z" level=info msg="ImageCreate event name:\"sha256:d99fc9a32f6b42ab5537eec09d599efae0f61c109406dae1ba255cec288fcb95\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 20:25:46.520268 containerd[1460]: time="2025-01-13T20:25:46.520236351Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:1a3081cb7d21763d22eb2c0781cc462d89f501ed523ad558dea1226f128fbfdd\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 20:25:46.521403 containerd[1460]: time="2025-01-13T20:25:46.521371498Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.31.4\" with image id \"sha256:d99fc9a32f6b42ab5537eec09d599efae0f61c109406dae1ba255cec288fcb95\", repo tag \"registry.k8s.io/kube-scheduler:v1.31.4\", repo digest \"registry.k8s.io/kube-scheduler@sha256:1a3081cb7d21763d22eb2c0781cc462d89f501ed523ad558dea1226f128fbfdd\", size \"18426541\" in 1.112186122s" Jan 13 20:25:46.521445 containerd[1460]: time="2025-01-13T20:25:46.521403196Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.31.4\" returns image reference \"sha256:d99fc9a32f6b42ab5537eec09d599efae0f61c109406dae1ba255cec288fcb95\"" Jan 13 20:25:46.522374 containerd[1460]: time="2025-01-13T20:25:46.522340233Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.4\"" Jan 13 20:25:47.066099 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Jan 13 20:25:47.080953 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 13 20:25:47.183684 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 13 20:25:47.187727 (kubelet)[1885]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 13 20:25:47.232081 kubelet[1885]: E0113 20:25:47.232026 1885 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 13 20:25:47.234625 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 13 20:25:47.234791 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 13 20:25:47.480618 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1918649869.mount: Deactivated successfully. 
Jan 13 20:25:47.818470 containerd[1460]: time="2025-01-13T20:25:47.818340759Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 20:25:47.819105 containerd[1460]: time="2025-01-13T20:25:47.819056903Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.31.4: active requests=0, bytes read=26771428" Jan 13 20:25:47.819727 containerd[1460]: time="2025-01-13T20:25:47.819696084Z" level=info msg="ImageCreate event name:\"sha256:34e142197cb996099cc1e98902c112642b3fb3dc559140c0a95279aa8d254d3a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 20:25:47.821796 containerd[1460]: time="2025-01-13T20:25:47.821766352Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:1739b3febca392035bf6edfe31efdfa55226be7b57389b2001ae357f7dcb99cf\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 20:25:47.822897 containerd[1460]: time="2025-01-13T20:25:47.822801866Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.31.4\" with image id \"sha256:34e142197cb996099cc1e98902c112642b3fb3dc559140c0a95279aa8d254d3a\", repo tag \"registry.k8s.io/kube-proxy:v1.31.4\", repo digest \"registry.k8s.io/kube-proxy@sha256:1739b3febca392035bf6edfe31efdfa55226be7b57389b2001ae357f7dcb99cf\", size \"26770445\" in 1.30041961s" Jan 13 20:25:47.822926 containerd[1460]: time="2025-01-13T20:25:47.822897437Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.4\" returns image reference \"sha256:34e142197cb996099cc1e98902c112642b3fb3dc559140c0a95279aa8d254d3a\"" Jan 13 20:25:47.823419 containerd[1460]: time="2025-01-13T20:25:47.823387579Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\"" Jan 13 20:25:48.556109 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2434709274.mount: Deactivated successfully. 
Jan 13 20:25:49.090208 containerd[1460]: time="2025-01-13T20:25:49.090156866Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 20:25:49.090810 containerd[1460]: time="2025-01-13T20:25:49.090773535Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.1: active requests=0, bytes read=16485383" Jan 13 20:25:49.091537 containerd[1460]: time="2025-01-13T20:25:49.091504662Z" level=info msg="ImageCreate event name:\"sha256:2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 20:25:49.094590 containerd[1460]: time="2025-01-13T20:25:49.094558116Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 20:25:49.095874 containerd[1460]: time="2025-01-13T20:25:49.095837118Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.1\" with image id \"sha256:2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.1\", repo digest \"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\", size \"16482581\" in 1.27241604s" Jan 13 20:25:49.095874 containerd[1460]: time="2025-01-13T20:25:49.095868734Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\" returns image reference \"sha256:2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93\"" Jan 13 20:25:49.096295 containerd[1460]: time="2025-01-13T20:25:49.096272216Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" Jan 13 20:25:49.502606 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1174038143.mount: Deactivated successfully. 
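Each pull above is recorded three ways: the repo tag, the content-addressed repo digest (what was actually verified on the wire), and the local image id. The digest can be recovered from containerd's image store after the fact; a sketch in Go, again in the k8s.io namespace, using the coredns reference from the log:

    package main

    import (
        "context"
        "fmt"
        "log"

        "github.com/containerd/containerd"
        "github.com/containerd/containerd/namespaces"
    )

    func main() {
        client, err := containerd.New("/run/containerd/containerd.sock")
        if err != nil {
            log.Fatal(err)
        }
        defer client.Close()
        ctx := namespaces.WithNamespace(context.Background(), "k8s.io")

        img, err := client.GetImage(ctx, "registry.k8s.io/coredns/coredns:v1.11.1")
        if err != nil {
            log.Fatal(err)
        }
        // Target() is the manifest descriptor; its digest is the "repo digest"
        // the log prints next to the tag.
        fmt.Println(img.Name(), "->", img.Target().Digest)
    }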
Jan 13 20:25:49.506909 containerd[1460]: time="2025-01-13T20:25:49.506869301Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 20:25:49.507320 containerd[1460]: time="2025-01-13T20:25:49.507273744Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=268705" Jan 13 20:25:49.508123 containerd[1460]: time="2025-01-13T20:25:49.508091754Z" level=info msg="ImageCreate event name:\"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 20:25:49.510828 containerd[1460]: time="2025-01-13T20:25:49.510775942Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 20:25:49.511551 containerd[1460]: time="2025-01-13T20:25:49.511523157Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"267933\" in 415.224287ms" Jan 13 20:25:49.511618 containerd[1460]: time="2025-01-13T20:25:49.511554053Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\"" Jan 13 20:25:49.512256 containerd[1460]: time="2025-01-13T20:25:49.512086520Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.15-0\"" Jan 13 20:25:50.028801 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1909856795.mount: Deactivated successfully. Jan 13 20:25:52.127954 containerd[1460]: time="2025-01-13T20:25:52.127756098Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.15-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 20:25:52.128864 containerd[1460]: time="2025-01-13T20:25:52.128571870Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.15-0: active requests=0, bytes read=66406427" Jan 13 20:25:52.129625 containerd[1460]: time="2025-01-13T20:25:52.129592816Z" level=info msg="ImageCreate event name:\"sha256:27e3830e1402783674d8b594038967deea9d51f0d91b34c93c8f39d2f68af7da\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 20:25:52.132907 containerd[1460]: time="2025-01-13T20:25:52.132851264Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 20:25:52.134142 containerd[1460]: time="2025-01-13T20:25:52.134113360Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.15-0\" with image id \"sha256:27e3830e1402783674d8b594038967deea9d51f0d91b34c93c8f39d2f68af7da\", repo tag \"registry.k8s.io/etcd:3.5.15-0\", repo digest \"registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a\", size \"66535646\" in 2.621997785s" Jan 13 20:25:52.134191 containerd[1460]: time="2025-01-13T20:25:52.134147375Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.15-0\" returns image reference \"sha256:27e3830e1402783674d8b594038967deea9d51f0d91b34c93c8f39d2f68af7da\"" Jan 13 20:25:56.542850 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. 
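With etcd done, the standard control-plane image set for a v1.31 kubeadm node is complete: kube-apiserver, kube-controller-manager, kube-scheduler, kube-proxy, coredns, pause and etcd, all fetched through containerd's CRI plugin rather than docker. The equivalent programmatic pull looks like this (a sketch, same module assumptions as the earlier containerd examples):

    package main

    import (
        "context"
        "fmt"
        "log"

        "github.com/containerd/containerd"
        "github.com/containerd/containerd/namespaces"
    )

    func main() {
        client, err := containerd.New("/run/containerd/containerd.sock")
        if err != nil {
            log.Fatal(err)
        }
        defer client.Close()
        ctx := namespaces.WithNamespace(context.Background(), "k8s.io")

        // WithPullUnpack unpacks layers into the configured snapshotter
        // (overlayfs, per the CRI config dumped earlier in this log).
        img, err := client.Pull(ctx, "registry.k8s.io/etcd:3.5.15-0", containerd.WithPullUnpack)
        if err != nil {
            log.Fatal(err)
        }
        fmt.Println("pulled", img.Name())
    }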
Jan 13 20:25:56.552899 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 13 20:25:56.577310 systemd[1]: Reloading requested from client PID 2033 ('systemctl') (unit session-5.scope)... Jan 13 20:25:56.577326 systemd[1]: Reloading... Jan 13 20:25:56.652772 zram_generator::config[2076]: No configuration found. Jan 13 20:25:56.768408 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jan 13 20:25:56.819985 systemd[1]: Reloading finished in 242 ms. Jan 13 20:25:56.859161 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Jan 13 20:25:56.862306 systemd[1]: kubelet.service: Deactivated successfully. Jan 13 20:25:56.862484 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jan 13 20:25:56.863826 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 13 20:25:56.970477 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 13 20:25:56.973917 (kubelet)[2119]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jan 13 20:25:57.009045 kubelet[2119]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 13 20:25:57.009045 kubelet[2119]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Jan 13 20:25:57.009045 kubelet[2119]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Jan 13 20:25:57.009351 kubelet[2119]: I0113 20:25:57.009052 2119 server.go:206] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jan 13 20:25:57.735947 kubelet[2119]: I0113 20:25:57.735896 2119 server.go:486] "Kubelet version" kubeletVersion="v1.31.0" Jan 13 20:25:57.735947 kubelet[2119]: I0113 20:25:57.735930 2119 server.go:488] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jan 13 20:25:57.736190 kubelet[2119]: I0113 20:25:57.736158 2119 server.go:929] "Client rotation is on, will bootstrap in background" Jan 13 20:25:57.763029 kubelet[2119]: E0113 20:25:57.762978 2119 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://10.0.0.128:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.128:6443: connect: connection refused" logger="UnhandledError" Jan 13 20:25:57.763900 kubelet[2119]: I0113 20:25:57.763802 2119 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jan 13 20:25:57.771959 kubelet[2119]: E0113 20:25:57.771920 2119 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Jan 13 20:25:57.771959 kubelet[2119]: I0113 20:25:57.771951 2119 server.go:1403] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Jan 13 20:25:57.775446 kubelet[2119]: I0113 20:25:57.775419 2119 server.go:744] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Jan 13 20:25:57.777792 kubelet[2119]: I0113 20:25:57.777771 2119 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority" Jan 13 20:25:57.777922 kubelet[2119]: I0113 20:25:57.777891 2119 container_manager_linux.go:264] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jan 13 20:25:57.778080 kubelet[2119]: I0113 20:25:57.777918 2119 container_manager_linux.go:269] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Jan 13 20:25:57.778232 kubelet[2119]: I0113 20:25:57.778216 2119 topology_manager.go:138] "Creating topology manager with none policy" Jan 13 20:25:57.778232 kubelet[2119]: I0113 20:25:57.778228 2119 container_manager_linux.go:300] "Creating device plugin manager" Jan 13 20:25:57.778425 kubelet[2119]: I0113 20:25:57.778405 2119 state_mem.go:36] "Initialized new in-memory state store" Jan 13 20:25:57.781315 kubelet[2119]: I0113 20:25:57.781273 2119 kubelet.go:408] "Attempting to sync node with API server" Jan 13 20:25:57.781315 kubelet[2119]: I0113 20:25:57.781317 2119 kubelet.go:303] "Adding static pod path" path="/etc/kubernetes/manifests" Jan 13 20:25:57.781446 kubelet[2119]: I0113 20:25:57.781348 2119 kubelet.go:314] "Adding apiserver pod source" Jan 13 20:25:57.781446 kubelet[2119]: I0113 20:25:57.781366 2119 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jan 13 20:25:57.789956 kubelet[2119]: W0113 20:25:57.789893 2119 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.128:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.128:6443: connect: connection refused Jan 13 20:25:57.789956 kubelet[2119]: E0113 20:25:57.789957 2119 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get 
\"https://10.0.0.128:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.128:6443: connect: connection refused" logger="UnhandledError" Jan 13 20:25:57.791515 kubelet[2119]: W0113 20:25:57.791467 2119 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.128:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.0.0.128:6443: connect: connection refused Jan 13 20:25:57.791559 kubelet[2119]: E0113 20:25:57.791523 2119 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.0.0.128:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.128:6443: connect: connection refused" logger="UnhandledError" Jan 13 20:25:57.791845 kubelet[2119]: I0113 20:25:57.791810 2119 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="containerd" version="v1.7.23" apiVersion="v1" Jan 13 20:25:57.794552 kubelet[2119]: I0113 20:25:57.794536 2119 kubelet.go:837] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Jan 13 20:25:57.795899 kubelet[2119]: W0113 20:25:57.795875 2119 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Jan 13 20:25:57.796666 kubelet[2119]: I0113 20:25:57.796635 2119 server.go:1269] "Started kubelet" Jan 13 20:25:57.798228 kubelet[2119]: I0113 20:25:57.796874 2119 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Jan 13 20:25:57.798228 kubelet[2119]: I0113 20:25:57.798091 2119 server.go:460] "Adding debug handlers to kubelet server" Jan 13 20:25:57.801675 kubelet[2119]: I0113 20:25:57.800516 2119 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jan 13 20:25:57.801675 kubelet[2119]: I0113 20:25:57.800780 2119 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jan 13 20:25:57.801675 kubelet[2119]: I0113 20:25:57.801117 2119 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jan 13 20:25:57.801675 kubelet[2119]: I0113 20:25:57.801490 2119 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Jan 13 20:25:57.803819 kubelet[2119]: I0113 20:25:57.803800 2119 volume_manager.go:289] "Starting Kubelet Volume Manager" Jan 13 20:25:57.804052 kubelet[2119]: I0113 20:25:57.804036 2119 desired_state_of_world_populator.go:146] "Desired state populator starts to run" Jan 13 20:25:57.804164 kubelet[2119]: I0113 20:25:57.804153 2119 reconciler.go:26] "Reconciler: start to sync state" Jan 13 20:25:57.804561 kubelet[2119]: W0113 20:25:57.804522 2119 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.128:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.128:6443: connect: connection refused Jan 13 20:25:57.804687 kubelet[2119]: E0113 20:25:57.804671 2119 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.0.0.128:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.128:6443: connect: connection refused" 
logger="UnhandledError" Jan 13 20:25:57.805760 kubelet[2119]: E0113 20:25:57.805739 2119 kubelet.go:1478] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jan 13 20:25:57.805928 kubelet[2119]: E0113 20:25:57.805891 2119 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 13 20:25:57.807885 kubelet[2119]: I0113 20:25:57.807851 2119 factory.go:221] Registration of the containerd container factory successfully Jan 13 20:25:57.807885 kubelet[2119]: I0113 20:25:57.807874 2119 factory.go:221] Registration of the systemd container factory successfully Jan 13 20:25:57.808042 kubelet[2119]: E0113 20:25:57.804961 2119 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.128:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.128:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.181a5a5c126172be default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-01-13 20:25:57.796614846 +0000 UTC m=+0.819504638,LastTimestamp:2025-01-13 20:25:57.796614846 +0000 UTC m=+0.819504638,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Jan 13 20:25:57.808042 kubelet[2119]: I0113 20:25:57.808001 2119 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jan 13 20:25:57.808141 kubelet[2119]: E0113 20:25:57.808061 2119 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.128:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.128:6443: connect: connection refused" interval="200ms" Jan 13 20:25:57.821894 kubelet[2119]: I0113 20:25:57.821865 2119 cpu_manager.go:214] "Starting CPU manager" policy="none" Jan 13 20:25:57.821894 kubelet[2119]: I0113 20:25:57.821885 2119 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Jan 13 20:25:57.821894 kubelet[2119]: I0113 20:25:57.821902 2119 state_mem.go:36] "Initialized new in-memory state store" Jan 13 20:25:57.823079 kubelet[2119]: I0113 20:25:57.823050 2119 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Jan 13 20:25:57.824467 kubelet[2119]: I0113 20:25:57.824427 2119 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Jan 13 20:25:57.824467 kubelet[2119]: I0113 20:25:57.824453 2119 status_manager.go:217] "Starting to sync pod status with apiserver" Jan 13 20:25:57.824467 kubelet[2119]: I0113 20:25:57.824471 2119 kubelet.go:2321] "Starting kubelet main sync loop" Jan 13 20:25:57.824593 kubelet[2119]: E0113 20:25:57.824513 2119 kubelet.go:2345] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jan 13 20:25:57.825425 kubelet[2119]: W0113 20:25:57.825220 2119 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.128:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.128:6443: connect: connection refused Jan 13 20:25:57.825425 kubelet[2119]: E0113 20:25:57.825257 2119 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.0.0.128:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.128:6443: connect: connection refused" logger="UnhandledError" Jan 13 20:25:57.906178 kubelet[2119]: E0113 20:25:57.906145 2119 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 13 20:25:57.922753 kubelet[2119]: I0113 20:25:57.922737 2119 policy_none.go:49] "None policy: Start" Jan 13 20:25:57.923801 kubelet[2119]: I0113 20:25:57.923782 2119 memory_manager.go:170] "Starting memorymanager" policy="None" Jan 13 20:25:57.924125 kubelet[2119]: I0113 20:25:57.923947 2119 state_mem.go:35] "Initializing new in-memory state store" Jan 13 20:25:57.925014 kubelet[2119]: E0113 20:25:57.924992 2119 kubelet.go:2345] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Jan 13 20:25:57.930841 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Jan 13 20:25:57.944627 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Jan 13 20:25:57.948808 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. 
Jan 13 20:25:57.960477 kubelet[2119]: I0113 20:25:57.960410 2119 manager.go:510] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jan 13 20:25:57.960908 kubelet[2119]: I0113 20:25:57.960600 2119 eviction_manager.go:189] "Eviction manager: starting control loop" Jan 13 20:25:57.960908 kubelet[2119]: I0113 20:25:57.960617 2119 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jan 13 20:25:57.960908 kubelet[2119]: I0113 20:25:57.960843 2119 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jan 13 20:25:57.962229 kubelet[2119]: E0113 20:25:57.962194 2119 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Jan 13 20:25:58.009184 kubelet[2119]: E0113 20:25:58.009074 2119 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.128:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.128:6443: connect: connection refused" interval="400ms" Jan 13 20:25:58.062343 kubelet[2119]: I0113 20:25:58.062317 2119 kubelet_node_status.go:72] "Attempting to register node" node="localhost" Jan 13 20:25:58.062699 kubelet[2119]: E0113 20:25:58.062677 2119 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.0.0.128:6443/api/v1/nodes\": dial tcp 10.0.0.128:6443: connect: connection refused" node="localhost" Jan 13 20:25:58.154937 systemd[1]: Created slice kubepods-burstable-pod192d950ffdf1f6a4a9fda3f82e22a1fb.slice - libcontainer container kubepods-burstable-pod192d950ffdf1f6a4a9fda3f82e22a1fb.slice. Jan 13 20:25:58.173138 systemd[1]: Created slice kubepods-burstable-poda52b86ce975f496e6002ba953fa9b888.slice - libcontainer container kubepods-burstable-poda52b86ce975f496e6002ba953fa9b888.slice. Jan 13 20:25:58.187468 systemd[1]: Created slice kubepods-burstable-pod50a9ae38ddb3bec3278d8dc73a6a7009.slice - libcontainer container kubepods-burstable-pod50a9ae38ddb3bec3278d8dc73a6a7009.slice. 
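The eviction manager just started with the hard thresholds visible in the container-manager dump further up: memory.available < 100Mi, nodefs.available < 10%, nodefs.inodesFree < 5%, imagefs.available < 15%, imagefs.inodesFree < 5% (the kubelet defaults). Expressed as the signal-to-threshold map the kubelet evaluates, purely for reference against that dump:

    package main

    import "fmt"

    func main() {
        // Hard-eviction defaults as dumped in this kubelet's NodeConfig.
        evictionHard := map[string]string{
            "memory.available":   "100Mi",
            "nodefs.available":   "10%",
            "nodefs.inodesFree":  "5%",
            "imagefs.available":  "15%",
            "imagefs.inodesFree": "5%",
        }
        for signal, threshold := range evictionHard {
            fmt.Printf("evict when %s < %s\n", signal, threshold)
        }
    }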
Jan 13 20:25:58.206699 kubelet[2119]: I0113 20:25:58.206672 2119 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/192d950ffdf1f6a4a9fda3f82e22a1fb-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"192d950ffdf1f6a4a9fda3f82e22a1fb\") " pod="kube-system/kube-apiserver-localhost" Jan 13 20:25:58.206699 kubelet[2119]: I0113 20:25:58.206705 2119 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/50a9ae38ddb3bec3278d8dc73a6a7009-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"50a9ae38ddb3bec3278d8dc73a6a7009\") " pod="kube-system/kube-controller-manager-localhost" Jan 13 20:25:58.206980 kubelet[2119]: I0113 20:25:58.206726 2119 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/a52b86ce975f496e6002ba953fa9b888-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"a52b86ce975f496e6002ba953fa9b888\") " pod="kube-system/kube-scheduler-localhost" Jan 13 20:25:58.206980 kubelet[2119]: I0113 20:25:58.206742 2119 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/192d950ffdf1f6a4a9fda3f82e22a1fb-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"192d950ffdf1f6a4a9fda3f82e22a1fb\") " pod="kube-system/kube-apiserver-localhost" Jan 13 20:25:58.206980 kubelet[2119]: I0113 20:25:58.206762 2119 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/192d950ffdf1f6a4a9fda3f82e22a1fb-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"192d950ffdf1f6a4a9fda3f82e22a1fb\") " pod="kube-system/kube-apiserver-localhost" Jan 13 20:25:58.206980 kubelet[2119]: I0113 20:25:58.206778 2119 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/50a9ae38ddb3bec3278d8dc73a6a7009-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"50a9ae38ddb3bec3278d8dc73a6a7009\") " pod="kube-system/kube-controller-manager-localhost" Jan 13 20:25:58.206980 kubelet[2119]: I0113 20:25:58.206800 2119 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/50a9ae38ddb3bec3278d8dc73a6a7009-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"50a9ae38ddb3bec3278d8dc73a6a7009\") " pod="kube-system/kube-controller-manager-localhost" Jan 13 20:25:58.207087 kubelet[2119]: I0113 20:25:58.206847 2119 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/50a9ae38ddb3bec3278d8dc73a6a7009-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"50a9ae38ddb3bec3278d8dc73a6a7009\") " pod="kube-system/kube-controller-manager-localhost" Jan 13 20:25:58.207087 kubelet[2119]: I0113 20:25:58.206882 2119 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/50a9ae38ddb3bec3278d8dc73a6a7009-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"50a9ae38ddb3bec3278d8dc73a6a7009\") " 
pod="kube-system/kube-controller-manager-localhost" Jan 13 20:25:58.264017 kubelet[2119]: I0113 20:25:58.263857 2119 kubelet_node_status.go:72] "Attempting to register node" node="localhost" Jan 13 20:25:58.264246 kubelet[2119]: E0113 20:25:58.264199 2119 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.0.0.128:6443/api/v1/nodes\": dial tcp 10.0.0.128:6443: connect: connection refused" node="localhost" Jan 13 20:25:58.410236 kubelet[2119]: E0113 20:25:58.410196 2119 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.128:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.128:6443: connect: connection refused" interval="800ms" Jan 13 20:25:58.476690 kubelet[2119]: E0113 20:25:58.476632 2119 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 20:25:58.479797 containerd[1460]: time="2025-01-13T20:25:58.479713054Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:192d950ffdf1f6a4a9fda3f82e22a1fb,Namespace:kube-system,Attempt:0,}" Jan 13 20:25:58.484975 kubelet[2119]: E0113 20:25:58.484876 2119 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 20:25:58.485488 containerd[1460]: time="2025-01-13T20:25:58.485278514Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:a52b86ce975f496e6002ba953fa9b888,Namespace:kube-system,Attempt:0,}" Jan 13 20:25:58.489882 kubelet[2119]: E0113 20:25:58.489861 2119 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 20:25:58.490336 containerd[1460]: time="2025-01-13T20:25:58.490308852Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:50a9ae38ddb3bec3278d8dc73a6a7009,Namespace:kube-system,Attempt:0,}" Jan 13 20:25:58.665347 kubelet[2119]: I0113 20:25:58.665316 2119 kubelet_node_status.go:72] "Attempting to register node" node="localhost" Jan 13 20:25:58.665629 kubelet[2119]: E0113 20:25:58.665601 2119 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.0.0.128:6443/api/v1/nodes\": dial tcp 10.0.0.128:6443: connect: connection refused" node="localhost" Jan 13 20:25:58.670084 kubelet[2119]: W0113 20:25:58.669999 2119 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.128:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.0.0.128:6443: connect: connection refused Jan 13 20:25:58.670084 kubelet[2119]: E0113 20:25:58.670057 2119 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.0.0.128:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.128:6443: connect: connection refused" logger="UnhandledError" Jan 13 20:25:58.677609 kubelet[2119]: W0113 20:25:58.677556 2119 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get 
"https://10.0.0.128:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.128:6443: connect: connection refused Jan 13 20:25:58.677609 kubelet[2119]: E0113 20:25:58.677586 2119 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.0.0.128:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.128:6443: connect: connection refused" logger="UnhandledError" Jan 13 20:25:58.719557 kubelet[2119]: W0113 20:25:58.719480 2119 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.128:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.128:6443: connect: connection refused Jan 13 20:25:58.719557 kubelet[2119]: E0113 20:25:58.719535 2119 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.0.0.128:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.128:6443: connect: connection refused" logger="UnhandledError" Jan 13 20:25:58.752128 kubelet[2119]: W0113 20:25:58.752050 2119 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.128:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.128:6443: connect: connection refused Jan 13 20:25:58.752128 kubelet[2119]: E0113 20:25:58.752099 2119 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.0.0.128:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.128:6443: connect: connection refused" logger="UnhandledError" Jan 13 20:25:58.973263 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount686789094.mount: Deactivated successfully. 
Jan 13 20:25:58.978416 containerd[1460]: time="2025-01-13T20:25:58.978370068Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 13 20:25:58.980137 containerd[1460]: time="2025-01-13T20:25:58.980079673Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Jan 13 20:25:58.980753 containerd[1460]: time="2025-01-13T20:25:58.980722115Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 13 20:25:58.981723 containerd[1460]: time="2025-01-13T20:25:58.981688480Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 13 20:25:58.982491 containerd[1460]: time="2025-01-13T20:25:58.982459251Z" level=info msg="ImageCreate event name:\"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 13 20:25:58.984300 containerd[1460]: time="2025-01-13T20:25:58.984244925Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=269175" Jan 13 20:25:58.985323 containerd[1460]: time="2025-01-13T20:25:58.985286318Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Jan 13 20:25:58.987006 containerd[1460]: time="2025-01-13T20:25:58.986981077Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 13 20:25:58.988080 containerd[1460]: time="2025-01-13T20:25:58.988044879Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 497.678564ms" Jan 13 20:25:58.989506 containerd[1460]: time="2025-01-13T20:25:58.989375941Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 504.038524ms" Jan 13 20:25:58.990886 containerd[1460]: time="2025-01-13T20:25:58.990678993Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 510.886308ms" Jan 13 20:25:59.113109 containerd[1460]: time="2025-01-13T20:25:59.112976059Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 13 20:25:59.113109 containerd[1460]: time="2025-01-13T20:25:59.113030919Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 13 20:25:59.113109 containerd[1460]: time="2025-01-13T20:25:59.113046044Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 13 20:25:59.113109 containerd[1460]: time="2025-01-13T20:25:59.112938765Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 13 20:25:59.113109 containerd[1460]: time="2025-01-13T20:25:59.113032599Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 13 20:25:59.113109 containerd[1460]: time="2025-01-13T20:25:59.113048365Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 13 20:25:59.113336 containerd[1460]: time="2025-01-13T20:25:59.113132556Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 13 20:25:59.114277 containerd[1460]: time="2025-01-13T20:25:59.114199586Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 13 20:25:59.115358 containerd[1460]: time="2025-01-13T20:25:59.115239566Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 13 20:25:59.115845 containerd[1460]: time="2025-01-13T20:25:59.115679127Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 13 20:25:59.115845 containerd[1460]: time="2025-01-13T20:25:59.115758356Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 13 20:25:59.115935 containerd[1460]: time="2025-01-13T20:25:59.115900808Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 13 20:25:59.138871 systemd[1]: Started cri-containerd-2befee54a3a738270f7b0ecda3ffbb0a80c93c40931c14d6dbb30e65b4122cde.scope - libcontainer container 2befee54a3a738270f7b0ecda3ffbb0a80c93c40931c14d6dbb30e65b4122cde. Jan 13 20:25:59.140068 systemd[1]: Started cri-containerd-ab56b4383f0c29dfae60723cac1f1bce3d77b8a14640e0b8eb18498eeb2cec36.scope - libcontainer container ab56b4383f0c29dfae60723cac1f1bce3d77b8a14640e0b8eb18498eeb2cec36. Jan 13 20:25:59.141073 systemd[1]: Started cri-containerd-e7724a6a274fff18a2513b63f3a8bfbb0ee4b0a274608eeb812eb210d839aef4.scope - libcontainer container e7724a6a274fff18a2513b63f3a8bfbb0ee4b0a274608eeb812eb210d839aef4. 
Jan 13 20:25:59.174727 containerd[1460]: time="2025-01-13T20:25:59.174682697Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:a52b86ce975f496e6002ba953fa9b888,Namespace:kube-system,Attempt:0,} returns sandbox id \"e7724a6a274fff18a2513b63f3a8bfbb0ee4b0a274608eeb812eb210d839aef4\"" Jan 13 20:25:59.175879 kubelet[2119]: E0113 20:25:59.175850 2119 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 20:25:59.177092 containerd[1460]: time="2025-01-13T20:25:59.177041039Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:50a9ae38ddb3bec3278d8dc73a6a7009,Namespace:kube-system,Attempt:0,} returns sandbox id \"2befee54a3a738270f7b0ecda3ffbb0a80c93c40931c14d6dbb30e65b4122cde\"" Jan 13 20:25:59.179623 containerd[1460]: time="2025-01-13T20:25:59.178083340Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:192d950ffdf1f6a4a9fda3f82e22a1fb,Namespace:kube-system,Attempt:0,} returns sandbox id \"ab56b4383f0c29dfae60723cac1f1bce3d77b8a14640e0b8eb18498eeb2cec36\"" Jan 13 20:25:59.179733 kubelet[2119]: E0113 20:25:59.179588 2119 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 20:25:59.180414 containerd[1460]: time="2025-01-13T20:25:59.180027251Z" level=info msg="CreateContainer within sandbox \"e7724a6a274fff18a2513b63f3a8bfbb0ee4b0a274608eeb812eb210d839aef4\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Jan 13 20:25:59.180500 kubelet[2119]: E0113 20:25:59.180080 2119 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 20:25:59.181553 containerd[1460]: time="2025-01-13T20:25:59.181513434Z" level=info msg="CreateContainer within sandbox \"2befee54a3a738270f7b0ecda3ffbb0a80c93c40931c14d6dbb30e65b4122cde\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Jan 13 20:25:59.183060 containerd[1460]: time="2025-01-13T20:25:59.183013583Z" level=info msg="CreateContainer within sandbox \"ab56b4383f0c29dfae60723cac1f1bce3d77b8a14640e0b8eb18498eeb2cec36\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Jan 13 20:25:59.196112 containerd[1460]: time="2025-01-13T20:25:59.196036784Z" level=info msg="CreateContainer within sandbox \"e7724a6a274fff18a2513b63f3a8bfbb0ee4b0a274608eeb812eb210d839aef4\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"17b5b93d5d3d3e65bbd87c902f52100e20462e55a7ba88b06bb449317b300c28\"" Jan 13 20:25:59.196592 containerd[1460]: time="2025-01-13T20:25:59.196566057Z" level=info msg="StartContainer for \"17b5b93d5d3d3e65bbd87c902f52100e20462e55a7ba88b06bb449317b300c28\"" Jan 13 20:25:59.199748 containerd[1460]: time="2025-01-13T20:25:59.199624095Z" level=info msg="CreateContainer within sandbox \"2befee54a3a738270f7b0ecda3ffbb0a80c93c40931c14d6dbb30e65b4122cde\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"2c6bdd7430ff0f21b0bd253512dbbb6d591373ccb1707d350a2f692b3094eca9\"" Jan 13 20:25:59.200176 containerd[1460]: time="2025-01-13T20:25:59.200131921Z" level=info msg="StartContainer for \"2c6bdd7430ff0f21b0bd253512dbbb6d591373ccb1707d350a2f692b3094eca9\"" Jan 13 
20:25:59.203197 containerd[1460]: time="2025-01-13T20:25:59.203158147Z" level=info msg="CreateContainer within sandbox \"ab56b4383f0c29dfae60723cac1f1bce3d77b8a14640e0b8eb18498eeb2cec36\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"2c0550fb692b6c30b89586a7b8496dd14c70499e7430e2a50155af36572b6610\"" Jan 13 20:25:59.203646 containerd[1460]: time="2025-01-13T20:25:59.203626598Z" level=info msg="StartContainer for \"2c0550fb692b6c30b89586a7b8496dd14c70499e7430e2a50155af36572b6610\"" Jan 13 20:25:59.211378 kubelet[2119]: E0113 20:25:59.211341 2119 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.128:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.128:6443: connect: connection refused" interval="1.6s" Jan 13 20:25:59.220281 systemd[1]: Started cri-containerd-17b5b93d5d3d3e65bbd87c902f52100e20462e55a7ba88b06bb449317b300c28.scope - libcontainer container 17b5b93d5d3d3e65bbd87c902f52100e20462e55a7ba88b06bb449317b300c28. Jan 13 20:25:59.223972 systemd[1]: Started cri-containerd-2c6bdd7430ff0f21b0bd253512dbbb6d591373ccb1707d350a2f692b3094eca9.scope - libcontainer container 2c6bdd7430ff0f21b0bd253512dbbb6d591373ccb1707d350a2f692b3094eca9. Jan 13 20:25:59.228285 systemd[1]: Started cri-containerd-2c0550fb692b6c30b89586a7b8496dd14c70499e7430e2a50155af36572b6610.scope - libcontainer container 2c0550fb692b6c30b89586a7b8496dd14c70499e7430e2a50155af36572b6610. Jan 13 20:25:59.254034 containerd[1460]: time="2025-01-13T20:25:59.253988489Z" level=info msg="StartContainer for \"17b5b93d5d3d3e65bbd87c902f52100e20462e55a7ba88b06bb449317b300c28\" returns successfully" Jan 13 20:25:59.274817 containerd[1460]: time="2025-01-13T20:25:59.271541146Z" level=info msg="StartContainer for \"2c6bdd7430ff0f21b0bd253512dbbb6d591373ccb1707d350a2f692b3094eca9\" returns successfully" Jan 13 20:25:59.274817 containerd[1460]: time="2025-01-13T20:25:59.271631379Z" level=info msg="StartContainer for \"2c0550fb692b6c30b89586a7b8496dd14c70499e7430e2a50155af36572b6610\" returns successfully" Jan 13 20:25:59.467139 kubelet[2119]: I0113 20:25:59.467106 2119 kubelet_node_status.go:72] "Attempting to register node" node="localhost" Jan 13 20:25:59.839773 kubelet[2119]: E0113 20:25:59.839721 2119 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 20:25:59.840888 kubelet[2119]: E0113 20:25:59.840855 2119 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 20:25:59.841675 kubelet[2119]: E0113 20:25:59.841640 2119 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 20:26:00.844134 kubelet[2119]: E0113 20:26:00.844029 2119 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 20:26:00.858151 kubelet[2119]: E0113 20:26:00.858111 2119 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"localhost\" not found" node="localhost" Jan 13 20:26:00.947924 kubelet[2119]: I0113 20:26:00.947880 2119 kubelet_node_status.go:75] "Successfully registered node" node="localhost" Jan 13 
20:26:00.947924 kubelet[2119]: E0113 20:26:00.947921 2119 kubelet_node_status.go:535] "Error updating node status, will retry" err="error getting node \"localhost\": node \"localhost\" not found" Jan 13 20:26:00.956158 kubelet[2119]: E0113 20:26:00.956126 2119 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 13 20:26:01.056243 kubelet[2119]: E0113 20:26:01.056195 2119 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 13 20:26:01.791908 kubelet[2119]: I0113 20:26:01.791882 2119 apiserver.go:52] "Watching apiserver" Jan 13 20:26:01.805211 kubelet[2119]: I0113 20:26:01.805189 2119 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world" Jan 13 20:26:02.930309 systemd[1]: Reloading requested from client PID 2397 ('systemctl') (unit session-5.scope)... Jan 13 20:26:02.930325 systemd[1]: Reloading... Jan 13 20:26:02.990755 zram_generator::config[2439]: No configuration found. Jan 13 20:26:03.114767 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jan 13 20:26:03.177972 systemd[1]: Reloading finished in 247 ms. Jan 13 20:26:03.213265 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Jan 13 20:26:03.230640 systemd[1]: kubelet.service: Deactivated successfully. Jan 13 20:26:03.230983 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jan 13 20:26:03.231038 systemd[1]: kubelet.service: Consumed 1.209s CPU time, 119.3M memory peak, 0B memory swap peak. Jan 13 20:26:03.238975 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 13 20:26:03.327735 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 13 20:26:03.334152 (kubelet)[2478]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jan 13 20:26:03.368178 kubelet[2478]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 13 20:26:03.368178 kubelet[2478]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Jan 13 20:26:03.368178 kubelet[2478]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Jan 13 20:26:03.368534 kubelet[2478]: I0113 20:26:03.368229 2478 server.go:206] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jan 13 20:26:03.373284 kubelet[2478]: I0113 20:26:03.372983 2478 server.go:486] "Kubelet version" kubeletVersion="v1.31.0" Jan 13 20:26:03.373284 kubelet[2478]: I0113 20:26:03.373008 2478 server.go:488] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jan 13 20:26:03.373284 kubelet[2478]: I0113 20:26:03.373250 2478 server.go:929] "Client rotation is on, will bootstrap in background" Jan 13 20:26:03.375026 kubelet[2478]: I0113 20:26:03.374993 2478 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Jan 13 20:26:03.379101 kubelet[2478]: I0113 20:26:03.379069 2478 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jan 13 20:26:03.381969 kubelet[2478]: E0113 20:26:03.381943 2478 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Jan 13 20:26:03.381969 kubelet[2478]: I0113 20:26:03.381968 2478 server.go:1403] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Jan 13 20:26:03.384067 kubelet[2478]: I0113 20:26:03.384040 2478 server.go:744] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Jan 13 20:26:03.384158 kubelet[2478]: I0113 20:26:03.384136 2478 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority" Jan 13 20:26:03.384270 kubelet[2478]: I0113 20:26:03.384237 2478 container_manager_linux.go:264] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jan 13 20:26:03.384414 kubelet[2478]: I0113 20:26:03.384258 2478 container_manager_linux.go:269] "Creating Container Manager object based on Node Config" 
nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Jan 13 20:26:03.384489 kubelet[2478]: I0113 20:26:03.384417 2478 topology_manager.go:138] "Creating topology manager with none policy" Jan 13 20:26:03.384489 kubelet[2478]: I0113 20:26:03.384426 2478 container_manager_linux.go:300] "Creating device plugin manager" Jan 13 20:26:03.384489 kubelet[2478]: I0113 20:26:03.384455 2478 state_mem.go:36] "Initialized new in-memory state store" Jan 13 20:26:03.384555 kubelet[2478]: I0113 20:26:03.384542 2478 kubelet.go:408] "Attempting to sync node with API server" Jan 13 20:26:03.384580 kubelet[2478]: I0113 20:26:03.384557 2478 kubelet.go:303] "Adding static pod path" path="/etc/kubernetes/manifests" Jan 13 20:26:03.384580 kubelet[2478]: I0113 20:26:03.384576 2478 kubelet.go:314] "Adding apiserver pod source" Jan 13 20:26:03.384625 kubelet[2478]: I0113 20:26:03.384586 2478 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jan 13 20:26:03.385159 kubelet[2478]: I0113 20:26:03.385026 2478 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="containerd" version="v1.7.23" apiVersion="v1" Jan 13 20:26:03.385607 kubelet[2478]: I0113 20:26:03.385460 2478 kubelet.go:837] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Jan 13 20:26:03.385848 kubelet[2478]: I0113 20:26:03.385819 2478 server.go:1269] "Started kubelet" Jan 13 20:26:03.386300 kubelet[2478]: I0113 20:26:03.386242 2478 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jan 13 20:26:03.386573 kubelet[2478]: I0113 20:26:03.386555 2478 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jan 13 20:26:03.386718 kubelet[2478]: I0113 20:26:03.386691 2478 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Jan 13 20:26:03.387055 kubelet[2478]: I0113 20:26:03.387022 2478 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jan 13 20:26:03.388790 kubelet[2478]: I0113 20:26:03.388760 2478 
dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Jan 13 20:26:03.393567 kubelet[2478]: I0113 20:26:03.390092 2478 volume_manager.go:289] "Starting Kubelet Volume Manager" Jan 13 20:26:03.393567 kubelet[2478]: E0113 20:26:03.390202 2478 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 13 20:26:03.393567 kubelet[2478]: I0113 20:26:03.391888 2478 desired_state_of_world_populator.go:146] "Desired state populator starts to run" Jan 13 20:26:03.393567 kubelet[2478]: I0113 20:26:03.392051 2478 reconciler.go:26] "Reconciler: start to sync state" Jan 13 20:26:03.398681 kubelet[2478]: I0113 20:26:03.395334 2478 server.go:460] "Adding debug handlers to kubelet server" Jan 13 20:26:03.399804 kubelet[2478]: I0113 20:26:03.399418 2478 factory.go:221] Registration of the systemd container factory successfully Jan 13 20:26:03.399804 kubelet[2478]: I0113 20:26:03.399514 2478 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jan 13 20:26:03.405969 kubelet[2478]: I0113 20:26:03.405944 2478 factory.go:221] Registration of the containerd container factory successfully Jan 13 20:26:03.417332 kubelet[2478]: I0113 20:26:03.417185 2478 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Jan 13 20:26:03.418286 kubelet[2478]: I0113 20:26:03.418074 2478 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Jan 13 20:26:03.418286 kubelet[2478]: I0113 20:26:03.418093 2478 status_manager.go:217] "Starting to sync pod status with apiserver" Jan 13 20:26:03.418286 kubelet[2478]: I0113 20:26:03.418115 2478 kubelet.go:2321] "Starting kubelet main sync loop" Jan 13 20:26:03.418286 kubelet[2478]: E0113 20:26:03.418158 2478 kubelet.go:2345] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jan 13 20:26:03.441770 kubelet[2478]: I0113 20:26:03.441736 2478 cpu_manager.go:214] "Starting CPU manager" policy="none" Jan 13 20:26:03.441770 kubelet[2478]: I0113 20:26:03.441760 2478 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Jan 13 20:26:03.441770 kubelet[2478]: I0113 20:26:03.441781 2478 state_mem.go:36] "Initialized new in-memory state store" Jan 13 20:26:03.441938 kubelet[2478]: I0113 20:26:03.441920 2478 state_mem.go:88] "Updated default CPUSet" cpuSet="" Jan 13 20:26:03.441968 kubelet[2478]: I0113 20:26:03.441936 2478 state_mem.go:96] "Updated CPUSet assignments" assignments={} Jan 13 20:26:03.441968 kubelet[2478]: I0113 20:26:03.441954 2478 policy_none.go:49] "None policy: Start" Jan 13 20:26:03.442513 kubelet[2478]: I0113 20:26:03.442487 2478 memory_manager.go:170] "Starting memorymanager" policy="None" Jan 13 20:26:03.442513 kubelet[2478]: I0113 20:26:03.442513 2478 state_mem.go:35] "Initializing new in-memory state store" Jan 13 20:26:03.442671 kubelet[2478]: I0113 20:26:03.442649 2478 state_mem.go:75] "Updated machine memory state" Jan 13 20:26:03.448318 kubelet[2478]: I0113 20:26:03.448188 2478 manager.go:510] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jan 13 20:26:03.448667 kubelet[2478]: I0113 20:26:03.448363 2478 eviction_manager.go:189] "Eviction manager: starting control loop" Jan 13 20:26:03.448667 
kubelet[2478]: I0113 20:26:03.448375 2478 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jan 13 20:26:03.448667 kubelet[2478]: I0113 20:26:03.448595 2478 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jan 13 20:26:03.553135 kubelet[2478]: I0113 20:26:03.553099 2478 kubelet_node_status.go:72] "Attempting to register node" node="localhost" Jan 13 20:26:03.559226 kubelet[2478]: I0113 20:26:03.559194 2478 kubelet_node_status.go:111] "Node was previously registered" node="localhost" Jan 13 20:26:03.559335 kubelet[2478]: I0113 20:26:03.559277 2478 kubelet_node_status.go:75] "Successfully registered node" node="localhost" Jan 13 20:26:03.693791 kubelet[2478]: I0113 20:26:03.693742 2478 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/192d950ffdf1f6a4a9fda3f82e22a1fb-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"192d950ffdf1f6a4a9fda3f82e22a1fb\") " pod="kube-system/kube-apiserver-localhost" Jan 13 20:26:03.693791 kubelet[2478]: I0113 20:26:03.693789 2478 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/50a9ae38ddb3bec3278d8dc73a6a7009-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"50a9ae38ddb3bec3278d8dc73a6a7009\") " pod="kube-system/kube-controller-manager-localhost" Jan 13 20:26:03.694040 kubelet[2478]: I0113 20:26:03.693809 2478 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/50a9ae38ddb3bec3278d8dc73a6a7009-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"50a9ae38ddb3bec3278d8dc73a6a7009\") " pod="kube-system/kube-controller-manager-localhost" Jan 13 20:26:03.694040 kubelet[2478]: I0113 20:26:03.693852 2478 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/a52b86ce975f496e6002ba953fa9b888-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"a52b86ce975f496e6002ba953fa9b888\") " pod="kube-system/kube-scheduler-localhost" Jan 13 20:26:03.694040 kubelet[2478]: I0113 20:26:03.693870 2478 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/192d950ffdf1f6a4a9fda3f82e22a1fb-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"192d950ffdf1f6a4a9fda3f82e22a1fb\") " pod="kube-system/kube-apiserver-localhost" Jan 13 20:26:03.694040 kubelet[2478]: I0113 20:26:03.693888 2478 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/192d950ffdf1f6a4a9fda3f82e22a1fb-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"192d950ffdf1f6a4a9fda3f82e22a1fb\") " pod="kube-system/kube-apiserver-localhost" Jan 13 20:26:03.694040 kubelet[2478]: I0113 20:26:03.693905 2478 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/50a9ae38ddb3bec3278d8dc73a6a7009-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"50a9ae38ddb3bec3278d8dc73a6a7009\") " pod="kube-system/kube-controller-manager-localhost" Jan 13 20:26:03.694154 kubelet[2478]: I0113 20:26:03.693924 2478 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/50a9ae38ddb3bec3278d8dc73a6a7009-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"50a9ae38ddb3bec3278d8dc73a6a7009\") " pod="kube-system/kube-controller-manager-localhost" Jan 13 20:26:03.694154 kubelet[2478]: I0113 20:26:03.693939 2478 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/50a9ae38ddb3bec3278d8dc73a6a7009-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"50a9ae38ddb3bec3278d8dc73a6a7009\") " pod="kube-system/kube-controller-manager-localhost" Jan 13 20:26:03.833023 kubelet[2478]: E0113 20:26:03.832862 2478 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 20:26:03.835026 kubelet[2478]: E0113 20:26:03.834949 2478 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 20:26:03.835026 kubelet[2478]: E0113 20:26:03.834990 2478 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 20:26:04.385836 kubelet[2478]: I0113 20:26:04.385797 2478 apiserver.go:52] "Watching apiserver" Jan 13 20:26:04.392585 kubelet[2478]: I0113 20:26:04.392547 2478 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world" Jan 13 20:26:04.430757 kubelet[2478]: E0113 20:26:04.430713 2478 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 20:26:04.431347 kubelet[2478]: E0113 20:26:04.431321 2478 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 20:26:04.437419 kubelet[2478]: E0113 20:26:04.437119 2478 kubelet.go:1915] "Failed creating a mirror pod for" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost" Jan 13 20:26:04.437419 kubelet[2478]: E0113 20:26:04.437263 2478 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 20:26:04.447530 kubelet[2478]: I0113 20:26:04.447414 2478 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=1.447398888 podStartE2EDuration="1.447398888s" podCreationTimestamp="2025-01-13 20:26:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-13 20:26:04.447385043 +0000 UTC m=+1.110461495" watchObservedRunningTime="2025-01-13 20:26:04.447398888 +0000 UTC m=+1.110475380" Jan 13 20:26:04.465129 kubelet[2478]: I0113 20:26:04.465018 2478 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=1.464996217 podStartE2EDuration="1.464996217s" podCreationTimestamp="2025-01-13 20:26:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" 
lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-13 20:26:04.454230539 +0000 UTC m=+1.117307031" watchObservedRunningTime="2025-01-13 20:26:04.464996217 +0000 UTC m=+1.128072709" Jan 13 20:26:04.473226 kubelet[2478]: I0113 20:26:04.472830 2478 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=1.472814095 podStartE2EDuration="1.472814095s" podCreationTimestamp="2025-01-13 20:26:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-13 20:26:04.465095488 +0000 UTC m=+1.128171980" watchObservedRunningTime="2025-01-13 20:26:04.472814095 +0000 UTC m=+1.135890588" Jan 13 20:26:04.821965 sudo[1599]: pam_unix(sudo:session): session closed for user root Jan 13 20:26:04.823378 sshd[1598]: Connection closed by 10.0.0.1 port 58818 Jan 13 20:26:04.823921 sshd-session[1596]: pam_unix(sshd:session): session closed for user core Jan 13 20:26:04.827135 systemd[1]: sshd@4-10.0.0.128:22-10.0.0.1:58818.service: Deactivated successfully. Jan 13 20:26:04.828867 systemd[1]: session-5.scope: Deactivated successfully. Jan 13 20:26:04.829042 systemd[1]: session-5.scope: Consumed 5.667s CPU time, 156.2M memory peak, 0B memory swap peak. Jan 13 20:26:04.830237 systemd-logind[1443]: Session 5 logged out. Waiting for processes to exit. Jan 13 20:26:04.832044 systemd-logind[1443]: Removed session 5. Jan 13 20:26:05.432592 kubelet[2478]: E0113 20:26:05.431877 2478 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 20:26:05.432592 kubelet[2478]: E0113 20:26:05.431945 2478 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 20:26:09.397117 kubelet[2478]: E0113 20:26:09.397075 2478 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 20:26:09.440615 kubelet[2478]: E0113 20:26:09.440447 2478 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 20:26:09.796354 kubelet[2478]: I0113 20:26:09.796326 2478 kuberuntime_manager.go:1633] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Jan 13 20:26:09.796687 containerd[1460]: time="2025-01-13T20:26:09.796641692Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." 
Jan 13 20:26:09.797115 kubelet[2478]: I0113 20:26:09.797094 2478 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Jan 13 20:26:10.841364 kubelet[2478]: I0113 20:26:10.841323 2478 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/733153eb-9897-4c69-aa89-07d24eadef60-xtables-lock\") pod \"kube-proxy-d4s5q\" (UID: \"733153eb-9897-4c69-aa89-07d24eadef60\") " pod="kube-system/kube-proxy-d4s5q" Jan 13 20:26:10.841364 kubelet[2478]: I0113 20:26:10.841364 2478 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/733153eb-9897-4c69-aa89-07d24eadef60-lib-modules\") pod \"kube-proxy-d4s5q\" (UID: \"733153eb-9897-4c69-aa89-07d24eadef60\") " pod="kube-system/kube-proxy-d4s5q" Jan 13 20:26:10.841716 kubelet[2478]: I0113 20:26:10.841383 2478 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-plugin\" (UniqueName: \"kubernetes.io/host-path/0543332d-05a8-434a-8b53-ce134eae6806-cni-plugin\") pod \"kube-flannel-ds-s7b4v\" (UID: \"0543332d-05a8-434a-8b53-ce134eae6806\") " pod="kube-flannel/kube-flannel-ds-s7b4v" Jan 13 20:26:10.841716 kubelet[2478]: I0113 20:26:10.841408 2478 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni\" (UniqueName: \"kubernetes.io/host-path/0543332d-05a8-434a-8b53-ce134eae6806-cni\") pod \"kube-flannel-ds-s7b4v\" (UID: \"0543332d-05a8-434a-8b53-ce134eae6806\") " pod="kube-flannel/kube-flannel-ds-s7b4v" Jan 13 20:26:10.841716 kubelet[2478]: I0113 20:26:10.841430 2478 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/0543332d-05a8-434a-8b53-ce134eae6806-xtables-lock\") pod \"kube-flannel-ds-s7b4v\" (UID: \"0543332d-05a8-434a-8b53-ce134eae6806\") " pod="kube-flannel/kube-flannel-ds-s7b4v" Jan 13 20:26:10.841716 kubelet[2478]: I0113 20:26:10.841447 2478 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run\" (UniqueName: \"kubernetes.io/host-path/0543332d-05a8-434a-8b53-ce134eae6806-run\") pod \"kube-flannel-ds-s7b4v\" (UID: \"0543332d-05a8-434a-8b53-ce134eae6806\") " pod="kube-flannel/kube-flannel-ds-s7b4v" Jan 13 20:26:10.841716 kubelet[2478]: I0113 20:26:10.841463 2478 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xsfv8\" (UniqueName: \"kubernetes.io/projected/0543332d-05a8-434a-8b53-ce134eae6806-kube-api-access-xsfv8\") pod \"kube-flannel-ds-s7b4v\" (UID: \"0543332d-05a8-434a-8b53-ce134eae6806\") " pod="kube-flannel/kube-flannel-ds-s7b4v" Jan 13 20:26:10.841835 kubelet[2478]: I0113 20:26:10.841483 2478 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/733153eb-9897-4c69-aa89-07d24eadef60-kube-proxy\") pod \"kube-proxy-d4s5q\" (UID: \"733153eb-9897-4c69-aa89-07d24eadef60\") " pod="kube-system/kube-proxy-d4s5q" Jan 13 20:26:10.841835 kubelet[2478]: I0113 20:26:10.841498 2478 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pvf5k\" (UniqueName: \"kubernetes.io/projected/733153eb-9897-4c69-aa89-07d24eadef60-kube-api-access-pvf5k\") pod \"kube-proxy-d4s5q\" (UID: 
\"733153eb-9897-4c69-aa89-07d24eadef60\") " pod="kube-system/kube-proxy-d4s5q" Jan 13 20:26:10.841835 kubelet[2478]: I0113 20:26:10.841514 2478 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flannel-cfg\" (UniqueName: \"kubernetes.io/configmap/0543332d-05a8-434a-8b53-ce134eae6806-flannel-cfg\") pod \"kube-flannel-ds-s7b4v\" (UID: \"0543332d-05a8-434a-8b53-ce134eae6806\") " pod="kube-flannel/kube-flannel-ds-s7b4v" Jan 13 20:26:10.842321 systemd[1]: Created slice kubepods-besteffort-pod733153eb_9897_4c69_aa89_07d24eadef60.slice - libcontainer container kubepods-besteffort-pod733153eb_9897_4c69_aa89_07d24eadef60.slice. Jan 13 20:26:10.853539 systemd[1]: Created slice kubepods-burstable-pod0543332d_05a8_434a_8b53_ce134eae6806.slice - libcontainer container kubepods-burstable-pod0543332d_05a8_434a_8b53_ce134eae6806.slice. Jan 13 20:26:11.150544 kubelet[2478]: E0113 20:26:11.150423 2478 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 20:26:11.151335 containerd[1460]: time="2025-01-13T20:26:11.151263394Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-d4s5q,Uid:733153eb-9897-4c69-aa89-07d24eadef60,Namespace:kube-system,Attempt:0,}" Jan 13 20:26:11.155765 kubelet[2478]: E0113 20:26:11.155739 2478 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 20:26:11.156167 containerd[1460]: time="2025-01-13T20:26:11.156113966Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-flannel-ds-s7b4v,Uid:0543332d-05a8-434a-8b53-ce134eae6806,Namespace:kube-flannel,Attempt:0,}" Jan 13 20:26:11.168838 containerd[1460]: time="2025-01-13T20:26:11.168614568Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 13 20:26:11.168838 containerd[1460]: time="2025-01-13T20:26:11.168684465Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 13 20:26:11.168838 containerd[1460]: time="2025-01-13T20:26:11.168697429Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 13 20:26:11.169279 containerd[1460]: time="2025-01-13T20:26:11.168775528Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 13 20:26:11.181505 containerd[1460]: time="2025-01-13T20:26:11.181261967Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 13 20:26:11.181505 containerd[1460]: time="2025-01-13T20:26:11.181324503Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 13 20:26:11.181505 containerd[1460]: time="2025-01-13T20:26:11.181349869Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 13 20:26:11.181674 containerd[1460]: time="2025-01-13T20:26:11.181453535Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 13 20:26:11.191825 systemd[1]: Started cri-containerd-0d2264fed8465d15d07343ba4e180e91b0324f4b137645410ebbe14b61c95300.scope - libcontainer container 0d2264fed8465d15d07343ba4e180e91b0324f4b137645410ebbe14b61c95300. Jan 13 20:26:11.194446 systemd[1]: Started cri-containerd-5cb90b7a23deb435a6ec76de5ed69e87ba3822d77e282fd27a50c82d9fe5b789.scope - libcontainer container 5cb90b7a23deb435a6ec76de5ed69e87ba3822d77e282fd27a50c82d9fe5b789. Jan 13 20:26:11.211435 containerd[1460]: time="2025-01-13T20:26:11.211394294Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-d4s5q,Uid:733153eb-9897-4c69-aa89-07d24eadef60,Namespace:kube-system,Attempt:0,} returns sandbox id \"0d2264fed8465d15d07343ba4e180e91b0324f4b137645410ebbe14b61c95300\"" Jan 13 20:26:11.212060 kubelet[2478]: E0113 20:26:11.212035 2478 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 20:26:11.214562 containerd[1460]: time="2025-01-13T20:26:11.214387281Z" level=info msg="CreateContainer within sandbox \"0d2264fed8465d15d07343ba4e180e91b0324f4b137645410ebbe14b61c95300\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Jan 13 20:26:11.225926 containerd[1460]: time="2025-01-13T20:26:11.225876591Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-flannel-ds-s7b4v,Uid:0543332d-05a8-434a-8b53-ce134eae6806,Namespace:kube-flannel,Attempt:0,} returns sandbox id \"5cb90b7a23deb435a6ec76de5ed69e87ba3822d77e282fd27a50c82d9fe5b789\"" Jan 13 20:26:11.226568 kubelet[2478]: E0113 20:26:11.226544 2478 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 20:26:11.229174 containerd[1460]: time="2025-01-13T20:26:11.229113320Z" level=info msg="PullImage \"docker.io/flannel/flannel-cni-plugin:v1.1.2\"" Jan 13 20:26:11.235172 containerd[1460]: time="2025-01-13T20:26:11.235140825Z" level=info msg="CreateContainer within sandbox \"0d2264fed8465d15d07343ba4e180e91b0324f4b137645410ebbe14b61c95300\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"a199d609f01e03b26d8ddc581a509272075f9381222c6bcd73a1bf15625192be\"" Jan 13 20:26:11.235984 containerd[1460]: time="2025-01-13T20:26:11.235932623Z" level=info msg="StartContainer for \"a199d609f01e03b26d8ddc581a509272075f9381222c6bcd73a1bf15625192be\"" Jan 13 20:26:11.259804 systemd[1]: Started cri-containerd-a199d609f01e03b26d8ddc581a509272075f9381222c6bcd73a1bf15625192be.scope - libcontainer container a199d609f01e03b26d8ddc581a509272075f9381222c6bcd73a1bf15625192be. 
Jan 13 20:26:11.284408 containerd[1460]: time="2025-01-13T20:26:11.284368401Z" level=info msg="StartContainer for \"a199d609f01e03b26d8ddc581a509272075f9381222c6bcd73a1bf15625192be\" returns successfully" Jan 13 20:26:11.445553 kubelet[2478]: E0113 20:26:11.445447 2478 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 20:26:11.460500 kubelet[2478]: I0113 20:26:11.458068 2478 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-d4s5q" podStartSLOduration=1.458046704 podStartE2EDuration="1.458046704s" podCreationTimestamp="2025-01-13 20:26:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-13 20:26:11.45731272 +0000 UTC m=+8.120389212" watchObservedRunningTime="2025-01-13 20:26:11.458046704 +0000 UTC m=+8.121123196" Jan 13 20:26:12.210580 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1777180313.mount: Deactivated successfully. Jan 13 20:26:12.236485 containerd[1460]: time="2025-01-13T20:26:12.236445375Z" level=info msg="ImageCreate event name:\"docker.io/flannel/flannel-cni-plugin:v1.1.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 20:26:12.237587 containerd[1460]: time="2025-01-13T20:26:12.237528917Z" level=info msg="stop pulling image docker.io/flannel/flannel-cni-plugin:v1.1.2: active requests=0, bytes read=3673532" Jan 13 20:26:12.238494 containerd[1460]: time="2025-01-13T20:26:12.238447740Z" level=info msg="ImageCreate event name:\"sha256:b45062ceea496fc421523388cb91166abc7715a15c2e2cbab4e6f8c9d5dc0ab8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 20:26:12.240698 containerd[1460]: time="2025-01-13T20:26:12.240666437Z" level=info msg="ImageCreate event name:\"docker.io/flannel/flannel-cni-plugin@sha256:bf4b62b131666d040f35a327d906ee5a3418280b68a88d9b9c7e828057210443\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 20:26:12.241977 containerd[1460]: time="2025-01-13T20:26:12.241769624Z" level=info msg="Pulled image \"docker.io/flannel/flannel-cni-plugin:v1.1.2\" with image id \"sha256:b45062ceea496fc421523388cb91166abc7715a15c2e2cbab4e6f8c9d5dc0ab8\", repo tag \"docker.io/flannel/flannel-cni-plugin:v1.1.2\", repo digest \"docker.io/flannel/flannel-cni-plugin@sha256:bf4b62b131666d040f35a327d906ee5a3418280b68a88d9b9c7e828057210443\", size \"3662650\" in 1.012620255s" Jan 13 20:26:12.241977 containerd[1460]: time="2025-01-13T20:26:12.241843521Z" level=info msg="PullImage \"docker.io/flannel/flannel-cni-plugin:v1.1.2\" returns image reference \"sha256:b45062ceea496fc421523388cb91166abc7715a15c2e2cbab4e6f8c9d5dc0ab8\"" Jan 13 20:26:12.244332 containerd[1460]: time="2025-01-13T20:26:12.244308878Z" level=info msg="CreateContainer within sandbox \"5cb90b7a23deb435a6ec76de5ed69e87ba3822d77e282fd27a50c82d9fe5b789\" for container &ContainerMetadata{Name:install-cni-plugin,Attempt:0,}" Jan 13 20:26:12.253604 containerd[1460]: time="2025-01-13T20:26:12.253574920Z" level=info msg="CreateContainer within sandbox \"5cb90b7a23deb435a6ec76de5ed69e87ba3822d77e282fd27a50c82d9fe5b789\" for &ContainerMetadata{Name:install-cni-plugin,Attempt:0,} returns container id \"97aa1083f293898461c4f6cf19b521c24a80a9af8f42bc68dd66a94993ae8c48\"" Jan 13 20:26:12.254599 containerd[1460]: time="2025-01-13T20:26:12.253914242Z" level=info msg="StartContainer for 
\"97aa1083f293898461c4f6cf19b521c24a80a9af8f42bc68dd66a94993ae8c48\"" Jan 13 20:26:12.277820 systemd[1]: Started cri-containerd-97aa1083f293898461c4f6cf19b521c24a80a9af8f42bc68dd66a94993ae8c48.scope - libcontainer container 97aa1083f293898461c4f6cf19b521c24a80a9af8f42bc68dd66a94993ae8c48. Jan 13 20:26:12.299856 containerd[1460]: time="2025-01-13T20:26:12.299818070Z" level=info msg="StartContainer for \"97aa1083f293898461c4f6cf19b521c24a80a9af8f42bc68dd66a94993ae8c48\" returns successfully" Jan 13 20:26:12.304623 systemd[1]: cri-containerd-97aa1083f293898461c4f6cf19b521c24a80a9af8f42bc68dd66a94993ae8c48.scope: Deactivated successfully. Jan 13 20:26:12.342692 containerd[1460]: time="2025-01-13T20:26:12.342619067Z" level=info msg="shim disconnected" id=97aa1083f293898461c4f6cf19b521c24a80a9af8f42bc68dd66a94993ae8c48 namespace=k8s.io Jan 13 20:26:12.342692 containerd[1460]: time="2025-01-13T20:26:12.342687364Z" level=warning msg="cleaning up after shim disconnected" id=97aa1083f293898461c4f6cf19b521c24a80a9af8f42bc68dd66a94993ae8c48 namespace=k8s.io Jan 13 20:26:12.342692 containerd[1460]: time="2025-01-13T20:26:12.342696686Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 13 20:26:12.448391 kubelet[2478]: E0113 20:26:12.448311 2478 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 20:26:12.449507 containerd[1460]: time="2025-01-13T20:26:12.449219622Z" level=info msg="PullImage \"docker.io/flannel/flannel:v0.22.0\"" Jan 13 20:26:12.643790 kubelet[2478]: E0113 20:26:12.643707 2478 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 20:26:12.964168 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-97aa1083f293898461c4f6cf19b521c24a80a9af8f42bc68dd66a94993ae8c48-rootfs.mount: Deactivated successfully. Jan 13 20:26:13.451180 kubelet[2478]: E0113 20:26:13.451089 2478 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 20:26:13.512913 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount532471462.mount: Deactivated successfully. 
Jan 13 20:26:13.951509 containerd[1460]: time="2025-01-13T20:26:13.950879255Z" level=info msg="ImageCreate event name:\"docker.io/flannel/flannel:v0.22.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 20:26:13.951840 containerd[1460]: time="2025-01-13T20:26:13.951496960Z" level=info msg="stop pulling image docker.io/flannel/flannel:v0.22.0: active requests=0, bytes read=26874261" Jan 13 20:26:13.952732 containerd[1460]: time="2025-01-13T20:26:13.952702642Z" level=info msg="ImageCreate event name:\"sha256:b3d1319ea6da12d4a1dd21a923f6a71f942a7b6e2c4763b8a3cca0725fb8aadf\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 20:26:13.955741 containerd[1460]: time="2025-01-13T20:26:13.955708387Z" level=info msg="ImageCreate event name:\"docker.io/flannel/flannel@sha256:5f83f1243057458e27249157394e3859cf31cc075354af150d497f2ebc8b54db\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 20:26:13.957152 containerd[1460]: time="2025-01-13T20:26:13.957114676Z" level=info msg="Pulled image \"docker.io/flannel/flannel:v0.22.0\" with image id \"sha256:b3d1319ea6da12d4a1dd21a923f6a71f942a7b6e2c4763b8a3cca0725fb8aadf\", repo tag \"docker.io/flannel/flannel:v0.22.0\", repo digest \"docker.io/flannel/flannel@sha256:5f83f1243057458e27249157394e3859cf31cc075354af150d497f2ebc8b54db\", size \"26863435\" in 1.507853644s" Jan 13 20:26:13.957706 containerd[1460]: time="2025-01-13T20:26:13.957681009Z" level=info msg="PullImage \"docker.io/flannel/flannel:v0.22.0\" returns image reference \"sha256:b3d1319ea6da12d4a1dd21a923f6a71f942a7b6e2c4763b8a3cca0725fb8aadf\"" Jan 13 20:26:13.960154 containerd[1460]: time="2025-01-13T20:26:13.960026119Z" level=info msg="CreateContainer within sandbox \"5cb90b7a23deb435a6ec76de5ed69e87ba3822d77e282fd27a50c82d9fe5b789\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Jan 13 20:26:13.971626 containerd[1460]: time="2025-01-13T20:26:13.971585149Z" level=info msg="CreateContainer within sandbox \"5cb90b7a23deb435a6ec76de5ed69e87ba3822d77e282fd27a50c82d9fe5b789\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"b813fa43b2af44d1fda065dbc8d883c447b58581aabf487006c5b6cbc21c47b2\"" Jan 13 20:26:13.972305 containerd[1460]: time="2025-01-13T20:26:13.971987003Z" level=info msg="StartContainer for \"b813fa43b2af44d1fda065dbc8d883c447b58581aabf487006c5b6cbc21c47b2\"" Jan 13 20:26:13.994818 systemd[1]: Started cri-containerd-b813fa43b2af44d1fda065dbc8d883c447b58581aabf487006c5b6cbc21c47b2.scope - libcontainer container b813fa43b2af44d1fda065dbc8d883c447b58581aabf487006c5b6cbc21c47b2. Jan 13 20:26:14.016515 containerd[1460]: time="2025-01-13T20:26:14.015373986Z" level=info msg="StartContainer for \"b813fa43b2af44d1fda065dbc8d883c447b58581aabf487006c5b6cbc21c47b2\" returns successfully" Jan 13 20:26:14.019343 systemd[1]: cri-containerd-b813fa43b2af44d1fda065dbc8d883c447b58581aabf487006c5b6cbc21c47b2.scope: Deactivated successfully. 
Jan 13 20:26:14.088163 kubelet[2478]: I0113 20:26:14.088129 2478 kubelet_node_status.go:488] "Fast updating node status as it just became ready" Jan 13 20:26:14.132231 containerd[1460]: time="2025-01-13T20:26:14.132050883Z" level=info msg="shim disconnected" id=b813fa43b2af44d1fda065dbc8d883c447b58581aabf487006c5b6cbc21c47b2 namespace=k8s.io Jan 13 20:26:14.132231 containerd[1460]: time="2025-01-13T20:26:14.132103895Z" level=warning msg="cleaning up after shim disconnected" id=b813fa43b2af44d1fda065dbc8d883c447b58581aabf487006c5b6cbc21c47b2 namespace=k8s.io Jan 13 20:26:14.132231 containerd[1460]: time="2025-01-13T20:26:14.132112257Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 13 20:26:14.144163 systemd[1]: Created slice kubepods-burstable-pod5c38aa9b_4164_40c3_a88c_765f62adf824.slice - libcontainer container kubepods-burstable-pod5c38aa9b_4164_40c3_a88c_765f62adf824.slice. Jan 13 20:26:14.151258 systemd[1]: Created slice kubepods-burstable-pod05d5274c_bac5_4e65_b098_e4b0aa3ca4c6.slice - libcontainer container kubepods-burstable-pod05d5274c_bac5_4e65_b098_e4b0aa3ca4c6.slice. Jan 13 20:26:14.161034 kubelet[2478]: I0113 20:26:14.161003 2478 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/05d5274c-bac5-4e65-b098-e4b0aa3ca4c6-config-volume\") pod \"coredns-6f6b679f8f-7q64p\" (UID: \"05d5274c-bac5-4e65-b098-e4b0aa3ca4c6\") " pod="kube-system/coredns-6f6b679f8f-7q64p" Jan 13 20:26:14.161308 kubelet[2478]: I0113 20:26:14.161221 2478 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/5c38aa9b-4164-40c3-a88c-765f62adf824-config-volume\") pod \"coredns-6f6b679f8f-frdxq\" (UID: \"5c38aa9b-4164-40c3-a88c-765f62adf824\") " pod="kube-system/coredns-6f6b679f8f-frdxq" Jan 13 20:26:14.161308 kubelet[2478]: I0113 20:26:14.161246 2478 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hxktt\" (UniqueName: \"kubernetes.io/projected/05d5274c-bac5-4e65-b098-e4b0aa3ca4c6-kube-api-access-hxktt\") pod \"coredns-6f6b679f8f-7q64p\" (UID: \"05d5274c-bac5-4e65-b098-e4b0aa3ca4c6\") " pod="kube-system/coredns-6f6b679f8f-7q64p" Jan 13 20:26:14.161308 kubelet[2478]: I0113 20:26:14.161265 2478 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kc5jw\" (UniqueName: \"kubernetes.io/projected/5c38aa9b-4164-40c3-a88c-765f62adf824-kube-api-access-kc5jw\") pod \"coredns-6f6b679f8f-frdxq\" (UID: \"5c38aa9b-4164-40c3-a88c-765f62adf824\") " pod="kube-system/coredns-6f6b679f8f-frdxq" Jan 13 20:26:14.448367 kubelet[2478]: E0113 20:26:14.448263 2478 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 20:26:14.448825 containerd[1460]: time="2025-01-13T20:26:14.448782171Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-frdxq,Uid:5c38aa9b-4164-40c3-a88c-765f62adf824,Namespace:kube-system,Attempt:0,}" Jan 13 20:26:14.453505 kubelet[2478]: E0113 20:26:14.453476 2478 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 20:26:14.453889 kubelet[2478]: E0113 20:26:14.453835 2478 dns.go:153] "Nameserver limits exceeded" 
err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 20:26:14.454415 containerd[1460]: time="2025-01-13T20:26:14.454387364Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-7q64p,Uid:05d5274c-bac5-4e65-b098-e4b0aa3ca4c6,Namespace:kube-system,Attempt:0,}" Jan 13 20:26:14.455647 containerd[1460]: time="2025-01-13T20:26:14.455609882Z" level=info msg="CreateContainer within sandbox \"5cb90b7a23deb435a6ec76de5ed69e87ba3822d77e282fd27a50c82d9fe5b789\" for container &ContainerMetadata{Name:kube-flannel,Attempt:0,}" Jan 13 20:26:14.494437 containerd[1460]: time="2025-01-13T20:26:14.494393529Z" level=info msg="CreateContainer within sandbox \"5cb90b7a23deb435a6ec76de5ed69e87ba3822d77e282fd27a50c82d9fe5b789\" for &ContainerMetadata{Name:kube-flannel,Attempt:0,} returns container id \"9e3edf273905a7f48384572732c91e1750517ac9adc65ddffc788d6e64f35d95\"" Jan 13 20:26:14.495517 containerd[1460]: time="2025-01-13T20:26:14.495086327Z" level=info msg="StartContainer for \"9e3edf273905a7f48384572732c91e1750517ac9adc65ddffc788d6e64f35d95\"" Jan 13 20:26:14.516364 containerd[1460]: time="2025-01-13T20:26:14.516322069Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-7q64p,Uid:05d5274c-bac5-4e65-b098-e4b0aa3ca4c6,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"1b05e9b735919788989f2cba5ac44f01192bfb5de3a06b4cb047b5353fe9a93e\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory" Jan 13 20:26:14.516919 kubelet[2478]: E0113 20:26:14.516881 2478 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1b05e9b735919788989f2cba5ac44f01192bfb5de3a06b4cb047b5353fe9a93e\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory" Jan 13 20:26:14.517023 kubelet[2478]: E0113 20:26:14.516995 2478 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1b05e9b735919788989f2cba5ac44f01192bfb5de3a06b4cb047b5353fe9a93e\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory" pod="kube-system/coredns-6f6b679f8f-7q64p" Jan 13 20:26:14.517056 kubelet[2478]: E0113 20:26:14.517017 2478 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1b05e9b735919788989f2cba5ac44f01192bfb5de3a06b4cb047b5353fe9a93e\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory" pod="kube-system/coredns-6f6b679f8f-7q64p" Jan 13 20:26:14.517123 kubelet[2478]: E0113 20:26:14.517057 2478 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-6f6b679f8f-7q64p_kube-system(05d5274c-bac5-4e65-b098-e4b0aa3ca4c6)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-6f6b679f8f-7q64p_kube-system(05d5274c-bac5-4e65-b098-e4b0aa3ca4c6)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"1b05e9b735919788989f2cba5ac44f01192bfb5de3a06b4cb047b5353fe9a93e\\\": plugin type=\\\"flannel\\\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or 
directory\"" pod="kube-system/coredns-6f6b679f8f-7q64p" podUID="05d5274c-bac5-4e65-b098-e4b0aa3ca4c6" Jan 13 20:26:14.518535 containerd[1460]: time="2025-01-13T20:26:14.518449072Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-frdxq,Uid:5c38aa9b-4164-40c3-a88c-765f62adf824,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"b9f344f7a906d979d96c9f3173953b015db0df48fc6873daadcc3807fa1272e6\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory" Jan 13 20:26:14.519055 kubelet[2478]: E0113 20:26:14.518645 2478 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b9f344f7a906d979d96c9f3173953b015db0df48fc6873daadcc3807fa1272e6\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory" Jan 13 20:26:14.519055 kubelet[2478]: E0113 20:26:14.518699 2478 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b9f344f7a906d979d96c9f3173953b015db0df48fc6873daadcc3807fa1272e6\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory" pod="kube-system/coredns-6f6b679f8f-frdxq" Jan 13 20:26:14.519055 kubelet[2478]: E0113 20:26:14.518718 2478 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b9f344f7a906d979d96c9f3173953b015db0df48fc6873daadcc3807fa1272e6\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory" pod="kube-system/coredns-6f6b679f8f-frdxq" Jan 13 20:26:14.519055 kubelet[2478]: E0113 20:26:14.518749 2478 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-6f6b679f8f-frdxq_kube-system(5c38aa9b-4164-40c3-a88c-765f62adf824)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-6f6b679f8f-frdxq_kube-system(5c38aa9b-4164-40c3-a88c-765f62adf824)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"b9f344f7a906d979d96c9f3173953b015db0df48fc6873daadcc3807fa1272e6\\\": plugin type=\\\"flannel\\\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory\"" pod="kube-system/coredns-6f6b679f8f-frdxq" podUID="5c38aa9b-4164-40c3-a88c-765f62adf824" Jan 13 20:26:14.524828 systemd[1]: Started cri-containerd-9e3edf273905a7f48384572732c91e1750517ac9adc65ddffc788d6e64f35d95.scope - libcontainer container 9e3edf273905a7f48384572732c91e1750517ac9adc65ddffc788d6e64f35d95. Jan 13 20:26:14.549049 containerd[1460]: time="2025-01-13T20:26:14.548891546Z" level=info msg="StartContainer for \"9e3edf273905a7f48384572732c91e1750517ac9adc65ddffc788d6e64f35d95\" returns successfully" Jan 13 20:26:14.970752 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-b813fa43b2af44d1fda065dbc8d883c447b58581aabf487006c5b6cbc21c47b2-rootfs.mount: Deactivated successfully. 
Jan 13 20:26:15.187830 kubelet[2478]: E0113 20:26:15.187791 2478 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 20:26:15.457450 kubelet[2478]: E0113 20:26:15.456830 2478 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 20:26:15.470066 kubelet[2478]: I0113 20:26:15.469958 2478 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-flannel/kube-flannel-ds-s7b4v" podStartSLOduration=2.740263801 podStartE2EDuration="5.469941037s" podCreationTimestamp="2025-01-13 20:26:10 +0000 UTC" firstStartedPulling="2025-01-13 20:26:11.228704938 +0000 UTC m=+7.891781430" lastFinishedPulling="2025-01-13 20:26:13.958382214 +0000 UTC m=+10.621458666" observedRunningTime="2025-01-13 20:26:15.469110895 +0000 UTC m=+12.132187347" watchObservedRunningTime="2025-01-13 20:26:15.469941037 +0000 UTC m=+12.133017489" Jan 13 20:26:15.627004 systemd-networkd[1383]: flannel.1: Link UP Jan 13 20:26:15.627011 systemd-networkd[1383]: flannel.1: Gained carrier Jan 13 20:26:16.458284 kubelet[2478]: E0113 20:26:16.458172 2478 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 20:26:17.298870 systemd-networkd[1383]: flannel.1: Gained IPv6LL Jan 13 20:26:19.093348 update_engine[1446]: I20250113 20:26:19.093254 1446 update_attempter.cc:509] Updating boot flags... Jan 13 20:26:19.119708 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 38 scanned by (udev-worker) (3130) Jan 13 20:26:19.146724 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 38 scanned by (udev-worker) (3133) Jan 13 20:26:27.418931 kubelet[2478]: E0113 20:26:27.418888 2478 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 20:26:27.419340 containerd[1460]: time="2025-01-13T20:26:27.419304386Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-frdxq,Uid:5c38aa9b-4164-40c3-a88c-765f62adf824,Namespace:kube-system,Attempt:0,}" Jan 13 20:26:27.463483 systemd-networkd[1383]: cni0: Link UP Jan 13 20:26:27.463492 systemd-networkd[1383]: cni0: Gained carrier Jan 13 20:26:27.466468 systemd-networkd[1383]: cni0: Lost carrier Jan 13 20:26:27.467893 systemd-networkd[1383]: veth7a220076: Link UP Jan 13 20:26:27.471118 kernel: cni0: port 1(veth7a220076) entered blocking state Jan 13 20:26:27.471177 kernel: cni0: port 1(veth7a220076) entered disabled state Jan 13 20:26:27.471205 kernel: veth7a220076: entered allmulticast mode Jan 13 20:26:27.471220 kernel: veth7a220076: entered promiscuous mode Jan 13 20:26:27.471238 kernel: cni0: port 1(veth7a220076) entered blocking state Jan 13 20:26:27.471251 kernel: cni0: port 1(veth7a220076) entered forwarding state Jan 13 20:26:27.471913 kernel: cni0: port 1(veth7a220076) entered disabled state Jan 13 20:26:27.484269 kernel: cni0: port 1(veth7a220076) entered blocking state Jan 13 20:26:27.484352 kernel: cni0: port 1(veth7a220076) entered forwarding state Jan 13 20:26:27.484559 systemd-networkd[1383]: veth7a220076: Gained carrier Jan 13 20:26:27.486543 systemd-networkd[1383]: cni0: Gained carrier Jan 13 20:26:27.488491 containerd[1460]: 
map[string]interface {}{"cniVersion":"0.3.1", "hairpinMode":true, "ipMasq":false, "ipam":map[string]interface {}{"ranges":[][]map[string]interface {}{[]map[string]interface {}{map[string]interface {}{"subnet":"192.168.0.0/24"}}}, "routes":[]types.Route{types.Route{Dst:net.IPNet{IP:net.IP{0xc0, 0xa8, 0x0, 0x0}, Mask:net.IPMask{0xff, 0xff, 0x80, 0x0}}, GW:net.IP(nil)}}, "type":"host-local"}, "isDefaultGateway":true, "isGateway":true, "mtu":(*uint)(0x4000016938), "name":"cbr0", "type":"bridge"} Jan 13 20:26:27.488491 containerd[1460]: delegateAdd: netconf sent to delegate plugin: Jan 13 20:26:27.505774 containerd[1460]: {"cniVersion":"0.3.1","hairpinMode":true,"ipMasq":false,"ipam":{"ranges":[[{"subnet":"192.168.0.0/24"}]],"routes":[{"dst":"192.168.0.0/17"}],"type":"host-local"},"isDefaultGateway":true,"isGateway":true,"mtu":1450,"name":"cbr0","type":"bridge"}time="2025-01-13T20:26:27.505642284Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 13 20:26:27.505774 containerd[1460]: time="2025-01-13T20:26:27.505726376Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 13 20:26:27.505774 containerd[1460]: time="2025-01-13T20:26:27.505739458Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 13 20:26:27.505976 containerd[1460]: time="2025-01-13T20:26:27.505818710Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 13 20:26:27.525862 systemd[1]: Started cri-containerd-0c1a02d98e3091e5d58e89dfe2e7e5124b32f8f75b71a1dd5cce0f805077dd71.scope - libcontainer container 0c1a02d98e3091e5d58e89dfe2e7e5124b32f8f75b71a1dd5cce0f805077dd71. 
Jan 13 20:26:27.536548 systemd-resolved[1309]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jan 13 20:26:27.561982 containerd[1460]: time="2025-01-13T20:26:27.561945907Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-frdxq,Uid:5c38aa9b-4164-40c3-a88c-765f62adf824,Namespace:kube-system,Attempt:0,} returns sandbox id \"0c1a02d98e3091e5d58e89dfe2e7e5124b32f8f75b71a1dd5cce0f805077dd71\"" Jan 13 20:26:27.562727 kubelet[2478]: E0113 20:26:27.562705 2478 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 20:26:27.565940 containerd[1460]: time="2025-01-13T20:26:27.565805407Z" level=info msg="CreateContainer within sandbox \"0c1a02d98e3091e5d58e89dfe2e7e5124b32f8f75b71a1dd5cce0f805077dd71\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jan 13 20:26:27.585023 containerd[1460]: time="2025-01-13T20:26:27.584970848Z" level=info msg="CreateContainer within sandbox \"0c1a02d98e3091e5d58e89dfe2e7e5124b32f8f75b71a1dd5cce0f805077dd71\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"a274dfca22d84968d257c2d7925991b1e5db602335f5b52c2fa956a497fd6ad0\"" Jan 13 20:26:27.585963 containerd[1460]: time="2025-01-13T20:26:27.585858421Z" level=info msg="StartContainer for \"a274dfca22d84968d257c2d7925991b1e5db602335f5b52c2fa956a497fd6ad0\"" Jan 13 20:26:27.612882 systemd[1]: Started cri-containerd-a274dfca22d84968d257c2d7925991b1e5db602335f5b52c2fa956a497fd6ad0.scope - libcontainer container a274dfca22d84968d257c2d7925991b1e5db602335f5b52c2fa956a497fd6ad0. Jan 13 20:26:27.637245 containerd[1460]: time="2025-01-13T20:26:27.637204059Z" level=info msg="StartContainer for \"a274dfca22d84968d257c2d7925991b1e5db602335f5b52c2fa956a497fd6ad0\" returns successfully" Jan 13 20:26:28.419202 kubelet[2478]: E0113 20:26:28.419158 2478 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 20:26:28.419714 containerd[1460]: time="2025-01-13T20:26:28.419554502Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-7q64p,Uid:05d5274c-bac5-4e65-b098-e4b0aa3ca4c6,Namespace:kube-system,Attempt:0,}" Jan 13 20:26:28.449498 systemd-networkd[1383]: veth843ef4df: Link UP Jan 13 20:26:28.462019 kernel: cni0: port 2(veth843ef4df) entered blocking state Jan 13 20:26:28.462148 kernel: cni0: port 2(veth843ef4df) entered disabled state Jan 13 20:26:28.462166 kernel: veth843ef4df: entered allmulticast mode Jan 13 20:26:28.462191 kernel: veth843ef4df: entered promiscuous mode Jan 13 20:26:28.464966 kernel: cni0: port 2(veth843ef4df) entered blocking state Jan 13 20:26:28.465019 kernel: cni0: port 2(veth843ef4df) entered forwarding state Jan 13 20:26:28.466703 kernel: cni0: port 2(veth843ef4df) entered disabled state Jan 13 20:26:28.473571 systemd-networkd[1383]: veth843ef4df: Gained carrier Jan 13 20:26:28.473881 kernel: cni0: port 2(veth843ef4df) entered blocking state Jan 13 20:26:28.473901 kernel: cni0: port 2(veth843ef4df) entered forwarding state Jan 13 20:26:28.478030 containerd[1460]: map[string]interface {}{"cniVersion":"0.3.1", "hairpinMode":true, "ipMasq":false, "ipam":map[string]interface {}{"ranges":[][]map[string]interface {}{[]map[string]interface {}{map[string]interface {}{"subnet":"192.168.0.0/24"}}}, 
"routes":[]types.Route{types.Route{Dst:net.IPNet{IP:net.IP{0xc0, 0xa8, 0x0, 0x0}, Mask:net.IPMask{0xff, 0xff, 0x80, 0x0}}, GW:net.IP(nil)}}, "type":"host-local"}, "isDefaultGateway":true, "isGateway":true, "mtu":(*uint)(0x4000018938), "name":"cbr0", "type":"bridge"} Jan 13 20:26:28.478030 containerd[1460]: delegateAdd: netconf sent to delegate plugin: Jan 13 20:26:28.486161 kubelet[2478]: E0113 20:26:28.486124 2478 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 20:26:28.502674 containerd[1460]: {"cniVersion":"0.3.1","hairpinMode":true,"ipMasq":false,"ipam":{"ranges":[[{"subnet":"192.168.0.0/24"}]],"routes":[{"dst":"192.168.0.0/17"}],"type":"host-local"},"isDefaultGateway":true,"isGateway":true,"mtu":1450,"name":"cbr0","type":"bridge"}time="2025-01-13T20:26:28.501748911Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 13 20:26:28.502674 containerd[1460]: time="2025-01-13T20:26:28.501841045Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 13 20:26:28.502674 containerd[1460]: time="2025-01-13T20:26:28.501856447Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 13 20:26:28.502674 containerd[1460]: time="2025-01-13T20:26:28.501992347Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 13 20:26:28.528352 kubelet[2478]: I0113 20:26:28.528283 2478 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-6f6b679f8f-frdxq" podStartSLOduration=18.528266653 podStartE2EDuration="18.528266653s" podCreationTimestamp="2025-01-13 20:26:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-13 20:26:28.501624253 +0000 UTC m=+25.164700745" watchObservedRunningTime="2025-01-13 20:26:28.528266653 +0000 UTC m=+25.191343145" Jan 13 20:26:28.531862 systemd[1]: Started cri-containerd-0e8a7f24ad9f0491a5d79932deb4dec89b725400d28b6126a5e34fb2fe97405b.scope - libcontainer container 0e8a7f24ad9f0491a5d79932deb4dec89b725400d28b6126a5e34fb2fe97405b. 
Jan 13 20:26:28.543195 systemd-resolved[1309]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jan 13 20:26:28.561052 containerd[1460]: time="2025-01-13T20:26:28.561016822Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-7q64p,Uid:05d5274c-bac5-4e65-b098-e4b0aa3ca4c6,Namespace:kube-system,Attempt:0,} returns sandbox id \"0e8a7f24ad9f0491a5d79932deb4dec89b725400d28b6126a5e34fb2fe97405b\"" Jan 13 20:26:28.561841 kubelet[2478]: E0113 20:26:28.561808 2478 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 20:26:28.563848 containerd[1460]: time="2025-01-13T20:26:28.563800867Z" level=info msg="CreateContainer within sandbox \"0e8a7f24ad9f0491a5d79932deb4dec89b725400d28b6126a5e34fb2fe97405b\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jan 13 20:26:28.766307 containerd[1460]: time="2025-01-13T20:26:28.766183977Z" level=info msg="CreateContainer within sandbox \"0e8a7f24ad9f0491a5d79932deb4dec89b725400d28b6126a5e34fb2fe97405b\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"5f9ed455da9c3b0e2bb437fe038e98d5ea0e419f711b611488f67652ecbf4060\"" Jan 13 20:26:28.766704 containerd[1460]: time="2025-01-13T20:26:28.766646485Z" level=info msg="StartContainer for \"5f9ed455da9c3b0e2bb437fe038e98d5ea0e419f711b611488f67652ecbf4060\"" Jan 13 20:26:28.794887 systemd[1]: Started cri-containerd-5f9ed455da9c3b0e2bb437fe038e98d5ea0e419f711b611488f67652ecbf4060.scope - libcontainer container 5f9ed455da9c3b0e2bb437fe038e98d5ea0e419f711b611488f67652ecbf4060. Jan 13 20:26:28.838564 containerd[1460]: time="2025-01-13T20:26:28.838511509Z" level=info msg="StartContainer for \"5f9ed455da9c3b0e2bb437fe038e98d5ea0e419f711b611488f67652ecbf4060\" returns successfully" Jan 13 20:26:29.074900 systemd-networkd[1383]: cni0: Gained IPv6LL Jan 13 20:26:29.191465 systemd[1]: Started sshd@5-10.0.0.128:22-10.0.0.1:40404.service - OpenSSH per-connection server daemon (10.0.0.1:40404). Jan 13 20:26:29.236005 sshd[3412]: Accepted publickey for core from 10.0.0.1 port 40404 ssh2: RSA SHA256:iH1z/OIMgfi4N9JZYqLIdSBLDStp/YciUtgOKDXSKOo Jan 13 20:26:29.237597 sshd-session[3412]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 20:26:29.242113 systemd-logind[1443]: New session 6 of user core. Jan 13 20:26:29.248893 systemd[1]: Started session-6.scope - Session 6 of User core. Jan 13 20:26:29.371098 sshd[3414]: Connection closed by 10.0.0.1 port 40404 Jan 13 20:26:29.372530 sshd-session[3412]: pam_unix(sshd:session): session closed for user core Jan 13 20:26:29.375985 systemd[1]: sshd@5-10.0.0.128:22-10.0.0.1:40404.service: Deactivated successfully. Jan 13 20:26:29.378288 systemd[1]: session-6.scope: Deactivated successfully. Jan 13 20:26:29.379349 systemd-logind[1443]: Session 6 logged out. Waiting for processes to exit. Jan 13 20:26:29.380334 systemd-logind[1443]: Removed session 6. 
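The recurring dns.go:153 "Nameserver limits exceeded" errors mean the node's resolv.conf lists more nameservers than kubelet will propagate into pod resolv.conf files (three), so kubelet keeps only the first three, which are the ones shown in the message. An illustrative node resolv.conf that would trigger exactly this warning; the fourth entry is a hypothetical placeholder, since the omitted server never appears in the log:

    # /etc/resolv.conf on the node (illustrative)
    nameserver 1.1.1.1
    nameserver 1.0.0.1
    nameserver 8.8.8.8
    nameserver 8.8.4.4   # hypothetical fourth entry; kubelet drops everything past the third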
Jan 13 20:26:29.458875 systemd-networkd[1383]: veth7a220076: Gained IPv6LL Jan 13 20:26:29.487975 kubelet[2478]: E0113 20:26:29.487934 2478 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 20:26:29.488836 kubelet[2478]: E0113 20:26:29.488775 2478 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 20:26:29.506911 kubelet[2478]: I0113 20:26:29.506696 2478 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-6f6b679f8f-7q64p" podStartSLOduration=19.506678284 podStartE2EDuration="19.506678284s" podCreationTimestamp="2025-01-13 20:26:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-13 20:26:29.506268026 +0000 UTC m=+26.169344518" watchObservedRunningTime="2025-01-13 20:26:29.506678284 +0000 UTC m=+26.169754776" Jan 13 20:26:30.162826 systemd-networkd[1383]: veth843ef4df: Gained IPv6LL Jan 13 20:26:30.491302 kubelet[2478]: E0113 20:26:30.490703 2478 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 20:26:30.491302 kubelet[2478]: E0113 20:26:30.490783 2478 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 20:26:31.491847 kubelet[2478]: E0113 20:26:31.491807 2478 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 20:26:34.382576 systemd[1]: Started sshd@6-10.0.0.128:22-10.0.0.1:59494.service - OpenSSH per-connection server daemon (10.0.0.1:59494). Jan 13 20:26:34.447513 sshd[3455]: Accepted publickey for core from 10.0.0.1 port 59494 ssh2: RSA SHA256:iH1z/OIMgfi4N9JZYqLIdSBLDStp/YciUtgOKDXSKOo Jan 13 20:26:34.449045 sshd-session[3455]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 20:26:34.454589 systemd-logind[1443]: New session 7 of user core. Jan 13 20:26:34.465882 systemd[1]: Started session-7.scope - Session 7 of User core. Jan 13 20:26:34.580474 sshd[3457]: Connection closed by 10.0.0.1 port 59494 Jan 13 20:26:34.581015 sshd-session[3455]: pam_unix(sshd:session): session closed for user core Jan 13 20:26:34.584509 systemd[1]: sshd@6-10.0.0.128:22-10.0.0.1:59494.service: Deactivated successfully. Jan 13 20:26:34.586523 systemd[1]: session-7.scope: Deactivated successfully. Jan 13 20:26:34.587257 systemd-logind[1443]: Session 7 logged out. Waiting for processes to exit. Jan 13 20:26:34.588124 systemd-logind[1443]: Removed session 7. Jan 13 20:26:39.600742 systemd[1]: Started sshd@7-10.0.0.128:22-10.0.0.1:59500.service - OpenSSH per-connection server daemon (10.0.0.1:59500). Jan 13 20:26:39.651533 sshd[3492]: Accepted publickey for core from 10.0.0.1 port 59500 ssh2: RSA SHA256:iH1z/OIMgfi4N9JZYqLIdSBLDStp/YciUtgOKDXSKOo Jan 13 20:26:39.652831 sshd-session[3492]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 20:26:39.659048 systemd-logind[1443]: New session 8 of user core. 
Jan 13 20:26:39.670880 systemd[1]: Started session-8.scope - Session 8 of User core. Jan 13 20:26:39.793529 sshd[3494]: Connection closed by 10.0.0.1 port 59500 Jan 13 20:26:39.794099 sshd-session[3492]: pam_unix(sshd:session): session closed for user core Jan 13 20:26:39.806261 systemd[1]: sshd@7-10.0.0.128:22-10.0.0.1:59500.service: Deactivated successfully. Jan 13 20:26:39.809617 systemd[1]: session-8.scope: Deactivated successfully. Jan 13 20:26:39.812216 systemd-logind[1443]: Session 8 logged out. Waiting for processes to exit. Jan 13 20:26:39.823182 systemd[1]: Started sshd@8-10.0.0.128:22-10.0.0.1:59506.service - OpenSSH per-connection server daemon (10.0.0.1:59506). Jan 13 20:26:39.824026 systemd-logind[1443]: Removed session 8. Jan 13 20:26:39.870685 sshd[3507]: Accepted publickey for core from 10.0.0.1 port 59506 ssh2: RSA SHA256:iH1z/OIMgfi4N9JZYqLIdSBLDStp/YciUtgOKDXSKOo Jan 13 20:26:39.871839 sshd-session[3507]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 20:26:39.876540 systemd-logind[1443]: New session 9 of user core. Jan 13 20:26:39.887840 systemd[1]: Started session-9.scope - Session 9 of User core. Jan 13 20:26:40.061990 sshd[3509]: Connection closed by 10.0.0.1 port 59506 Jan 13 20:26:40.062889 sshd-session[3507]: pam_unix(sshd:session): session closed for user core Jan 13 20:26:40.073782 systemd[1]: sshd@8-10.0.0.128:22-10.0.0.1:59506.service: Deactivated successfully. Jan 13 20:26:40.077932 systemd[1]: session-9.scope: Deactivated successfully. Jan 13 20:26:40.080354 systemd-logind[1443]: Session 9 logged out. Waiting for processes to exit. Jan 13 20:26:40.091974 systemd[1]: Started sshd@9-10.0.0.128:22-10.0.0.1:59510.service - OpenSSH per-connection server daemon (10.0.0.1:59510). Jan 13 20:26:40.092786 systemd-logind[1443]: Removed session 9. Jan 13 20:26:40.128688 sshd[3519]: Accepted publickey for core from 10.0.0.1 port 59510 ssh2: RSA SHA256:iH1z/OIMgfi4N9JZYqLIdSBLDStp/YciUtgOKDXSKOo Jan 13 20:26:40.129176 sshd-session[3519]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 20:26:40.133119 systemd-logind[1443]: New session 10 of user core. Jan 13 20:26:40.142858 systemd[1]: Started session-10.scope - Session 10 of User core. Jan 13 20:26:40.251286 sshd[3521]: Connection closed by 10.0.0.1 port 59510 Jan 13 20:26:40.250508 sshd-session[3519]: pam_unix(sshd:session): session closed for user core Jan 13 20:26:40.253506 systemd[1]: sshd@9-10.0.0.128:22-10.0.0.1:59510.service: Deactivated successfully. Jan 13 20:26:40.257099 systemd[1]: session-10.scope: Deactivated successfully. Jan 13 20:26:40.257847 systemd-logind[1443]: Session 10 logged out. Waiting for processes to exit. Jan 13 20:26:40.258736 systemd-logind[1443]: Removed session 10. Jan 13 20:26:45.265009 systemd[1]: Started sshd@10-10.0.0.128:22-10.0.0.1:57412.service - OpenSSH per-connection server daemon (10.0.0.1:57412). Jan 13 20:26:45.307374 sshd[3557]: Accepted publickey for core from 10.0.0.1 port 57412 ssh2: RSA SHA256:iH1z/OIMgfi4N9JZYqLIdSBLDStp/YciUtgOKDXSKOo Jan 13 20:26:45.308558 sshd-session[3557]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 20:26:45.312048 systemd-logind[1443]: New session 11 of user core. Jan 13 20:26:45.322811 systemd[1]: Started session-11.scope - Session 11 of User core. 
Jan 13 20:26:45.432463 sshd[3559]: Connection closed by 10.0.0.1 port 57412 Jan 13 20:26:45.433009 sshd-session[3557]: pam_unix(sshd:session): session closed for user core Jan 13 20:26:45.442168 systemd[1]: sshd@10-10.0.0.128:22-10.0.0.1:57412.service: Deactivated successfully. Jan 13 20:26:45.443496 systemd[1]: session-11.scope: Deactivated successfully. Jan 13 20:26:45.444705 systemd-logind[1443]: Session 11 logged out. Waiting for processes to exit. Jan 13 20:26:45.453888 systemd[1]: Started sshd@11-10.0.0.128:22-10.0.0.1:57420.service - OpenSSH per-connection server daemon (10.0.0.1:57420). Jan 13 20:26:45.454833 systemd-logind[1443]: Removed session 11. Jan 13 20:26:45.490104 sshd[3572]: Accepted publickey for core from 10.0.0.1 port 57420 ssh2: RSA SHA256:iH1z/OIMgfi4N9JZYqLIdSBLDStp/YciUtgOKDXSKOo Jan 13 20:26:45.491297 sshd-session[3572]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 20:26:45.494735 systemd-logind[1443]: New session 12 of user core. Jan 13 20:26:45.508888 systemd[1]: Started session-12.scope - Session 12 of User core. Jan 13 20:26:45.686228 sshd[3574]: Connection closed by 10.0.0.1 port 57420 Jan 13 20:26:45.686620 sshd-session[3572]: pam_unix(sshd:session): session closed for user core Jan 13 20:26:45.701178 systemd[1]: sshd@11-10.0.0.128:22-10.0.0.1:57420.service: Deactivated successfully. Jan 13 20:26:45.702689 systemd[1]: session-12.scope: Deactivated successfully. Jan 13 20:26:45.703884 systemd-logind[1443]: Session 12 logged out. Waiting for processes to exit. Jan 13 20:26:45.711920 systemd[1]: Started sshd@12-10.0.0.128:22-10.0.0.1:57428.service - OpenSSH per-connection server daemon (10.0.0.1:57428). Jan 13 20:26:45.712945 systemd-logind[1443]: Removed session 12. Jan 13 20:26:45.753053 sshd[3585]: Accepted publickey for core from 10.0.0.1 port 57428 ssh2: RSA SHA256:iH1z/OIMgfi4N9JZYqLIdSBLDStp/YciUtgOKDXSKOo Jan 13 20:26:45.754320 sshd-session[3585]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 20:26:45.758249 systemd-logind[1443]: New session 13 of user core. Jan 13 20:26:45.773806 systemd[1]: Started session-13.scope - Session 13 of User core. Jan 13 20:26:46.972259 sshd[3593]: Connection closed by 10.0.0.1 port 57428 Jan 13 20:26:46.972951 sshd-session[3585]: pam_unix(sshd:session): session closed for user core Jan 13 20:26:46.983278 systemd[1]: sshd@12-10.0.0.128:22-10.0.0.1:57428.service: Deactivated successfully. Jan 13 20:26:46.985530 systemd[1]: session-13.scope: Deactivated successfully. Jan 13 20:26:46.990169 systemd-logind[1443]: Session 13 logged out. Waiting for processes to exit. Jan 13 20:26:46.998251 systemd[1]: Started sshd@13-10.0.0.128:22-10.0.0.1:57444.service - OpenSSH per-connection server daemon (10.0.0.1:57444). Jan 13 20:26:46.999728 systemd-logind[1443]: Removed session 13. Jan 13 20:26:47.042960 sshd[3625]: Accepted publickey for core from 10.0.0.1 port 57444 ssh2: RSA SHA256:iH1z/OIMgfi4N9JZYqLIdSBLDStp/YciUtgOKDXSKOo Jan 13 20:26:47.044189 sshd-session[3625]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 20:26:47.047756 systemd-logind[1443]: New session 14 of user core. Jan 13 20:26:47.056791 systemd[1]: Started session-14.scope - Session 14 of User core. 
Jan 13 20:26:47.280366 sshd[3628]: Connection closed by 10.0.0.1 port 57444 Jan 13 20:26:47.280805 sshd-session[3625]: pam_unix(sshd:session): session closed for user core Jan 13 20:26:47.289961 systemd[1]: sshd@13-10.0.0.128:22-10.0.0.1:57444.service: Deactivated successfully. Jan 13 20:26:47.291414 systemd[1]: session-14.scope: Deactivated successfully. Jan 13 20:26:47.293223 systemd-logind[1443]: Session 14 logged out. Waiting for processes to exit. Jan 13 20:26:47.302962 systemd[1]: Started sshd@14-10.0.0.128:22-10.0.0.1:57454.service - OpenSSH per-connection server daemon (10.0.0.1:57454). Jan 13 20:26:47.304251 systemd-logind[1443]: Removed session 14. Jan 13 20:26:47.340232 sshd[3639]: Accepted publickey for core from 10.0.0.1 port 57454 ssh2: RSA SHA256:iH1z/OIMgfi4N9JZYqLIdSBLDStp/YciUtgOKDXSKOo Jan 13 20:26:47.341551 sshd-session[3639]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 20:26:47.345563 systemd-logind[1443]: New session 15 of user core. Jan 13 20:26:47.360819 systemd[1]: Started session-15.scope - Session 15 of User core. Jan 13 20:26:47.468669 sshd[3641]: Connection closed by 10.0.0.1 port 57454 Jan 13 20:26:47.469184 sshd-session[3639]: pam_unix(sshd:session): session closed for user core Jan 13 20:26:47.472407 systemd-logind[1443]: Session 15 logged out. Waiting for processes to exit. Jan 13 20:26:47.472711 systemd[1]: sshd@14-10.0.0.128:22-10.0.0.1:57454.service: Deactivated successfully. Jan 13 20:26:47.474421 systemd[1]: session-15.scope: Deactivated successfully. Jan 13 20:26:47.475329 systemd-logind[1443]: Removed session 15. Jan 13 20:26:52.480336 systemd[1]: Started sshd@15-10.0.0.128:22-10.0.0.1:37580.service - OpenSSH per-connection server daemon (10.0.0.1:37580). Jan 13 20:26:52.519325 sshd[3677]: Accepted publickey for core from 10.0.0.1 port 37580 ssh2: RSA SHA256:iH1z/OIMgfi4N9JZYqLIdSBLDStp/YciUtgOKDXSKOo Jan 13 20:26:52.520533 sshd-session[3677]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 20:26:52.524175 systemd-logind[1443]: New session 16 of user core. Jan 13 20:26:52.531822 systemd[1]: Started session-16.scope - Session 16 of User core. Jan 13 20:26:52.636334 sshd[3679]: Connection closed by 10.0.0.1 port 37580 Jan 13 20:26:52.636243 sshd-session[3677]: pam_unix(sshd:session): session closed for user core Jan 13 20:26:52.639546 systemd[1]: sshd@15-10.0.0.128:22-10.0.0.1:37580.service: Deactivated successfully. Jan 13 20:26:52.643176 systemd[1]: session-16.scope: Deactivated successfully. Jan 13 20:26:52.644691 systemd-logind[1443]: Session 16 logged out. Waiting for processes to exit. Jan 13 20:26:52.645557 systemd-logind[1443]: Removed session 16. Jan 13 20:26:57.647122 systemd[1]: Started sshd@16-10.0.0.128:22-10.0.0.1:37590.service - OpenSSH per-connection server daemon (10.0.0.1:37590). Jan 13 20:26:57.685978 sshd[3713]: Accepted publickey for core from 10.0.0.1 port 37590 ssh2: RSA SHA256:iH1z/OIMgfi4N9JZYqLIdSBLDStp/YciUtgOKDXSKOo Jan 13 20:26:57.687204 sshd-session[3713]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 20:26:57.691304 systemd-logind[1443]: New session 17 of user core. Jan 13 20:26:57.701813 systemd[1]: Started session-17.scope - Session 17 of User core. 
Jan 13 20:26:57.807446 sshd[3715]: Connection closed by 10.0.0.1 port 37590 Jan 13 20:26:57.807812 sshd-session[3713]: pam_unix(sshd:session): session closed for user core Jan 13 20:26:57.810824 systemd[1]: sshd@16-10.0.0.128:22-10.0.0.1:37590.service: Deactivated successfully. Jan 13 20:26:57.812476 systemd[1]: session-17.scope: Deactivated successfully. Jan 13 20:26:57.814795 systemd-logind[1443]: Session 17 logged out. Waiting for processes to exit. Jan 13 20:26:57.815820 systemd-logind[1443]: Removed session 17. Jan 13 20:27:02.818020 systemd[1]: Started sshd@17-10.0.0.128:22-10.0.0.1:47016.service - OpenSSH per-connection server daemon (10.0.0.1:47016). Jan 13 20:27:02.856606 sshd[3748]: Accepted publickey for core from 10.0.0.1 port 47016 ssh2: RSA SHA256:iH1z/OIMgfi4N9JZYqLIdSBLDStp/YciUtgOKDXSKOo Jan 13 20:27:02.857717 sshd-session[3748]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 20:27:02.861860 systemd-logind[1443]: New session 18 of user core. Jan 13 20:27:02.867820 systemd[1]: Started session-18.scope - Session 18 of User core. Jan 13 20:27:02.978251 sshd[3750]: Connection closed by 10.0.0.1 port 47016 Jan 13 20:27:02.978575 sshd-session[3748]: pam_unix(sshd:session): session closed for user core Jan 13 20:27:02.981646 systemd[1]: sshd@17-10.0.0.128:22-10.0.0.1:47016.service: Deactivated successfully. Jan 13 20:27:02.984432 systemd[1]: session-18.scope: Deactivated successfully. Jan 13 20:27:02.985296 systemd-logind[1443]: Session 18 logged out. Waiting for processes to exit. Jan 13 20:27:02.986448 systemd-logind[1443]: Removed session 18.