Jan 29 11:14:18.910411 kernel: Booting Linux on physical CPU 0x0000000000 [0x413fd0c1]
Jan 29 11:14:18.910433 kernel: Linux version 6.6.74-flatcar (build@pony-truck.infra.kinvolk.io) (aarch64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p1) 13.3.1 20240614, GNU ld (Gentoo 2.42 p6) 2.42.0) #1 SMP PREEMPT Wed Jan 29 09:37:00 -00 2025
Jan 29 11:14:18.910443 kernel: KASLR enabled
Jan 29 11:14:18.910449 kernel: efi: EFI v2.7 by EDK II
Jan 29 11:14:18.910454 kernel: efi: SMBIOS 3.0=0xdced0000 MEMATTR=0xdbbbf018 ACPI 2.0=0xd9b43018 RNG=0xd9b43a18 MEMRESERVE=0xd9b40d98
Jan 29 11:14:18.910460 kernel: random: crng init done
Jan 29 11:14:18.910467 kernel: secureboot: Secure boot disabled
Jan 29 11:14:18.910472 kernel: ACPI: Early table checksum verification disabled
Jan 29 11:14:18.910478 kernel: ACPI: RSDP 0x00000000D9B43018 000024 (v02 BOCHS )
Jan 29 11:14:18.910486 kernel: ACPI: XSDT 0x00000000D9B43F18 000064 (v01 BOCHS BXPC 00000001 01000013)
Jan 29 11:14:18.910492 kernel: ACPI: FACP 0x00000000D9B43B18 000114 (v06 BOCHS BXPC 00000001 BXPC 00000001)
Jan 29 11:14:18.910498 kernel: ACPI: DSDT 0x00000000D9B41018 0014A2 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Jan 29 11:14:18.910504 kernel: ACPI: APIC 0x00000000D9B43C98 0001A8 (v04 BOCHS BXPC 00000001 BXPC 00000001)
Jan 29 11:14:18.910510 kernel: ACPI: PPTT 0x00000000D9B43098 00009C (v02 BOCHS BXPC 00000001 BXPC 00000001)
Jan 29 11:14:18.910517 kernel: ACPI: GTDT 0x00000000D9B43818 000060 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Jan 29 11:14:18.910524 kernel: ACPI: MCFG 0x00000000D9B43A98 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jan 29 11:14:18.910530 kernel: ACPI: SPCR 0x00000000D9B43918 000050 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Jan 29 11:14:18.910543 kernel: ACPI: DBG2 0x00000000D9B43998 000057 (v00 BOCHS BXPC 00000001 BXPC 00000001)
Jan 29 11:14:18.910549 kernel: ACPI: IORT 0x00000000D9B43198 000080 (v03 BOCHS BXPC 00000001 BXPC 00000001)
Jan 29 11:14:18.910555 kernel: ACPI: SPCR: console: pl011,mmio,0x9000000,9600
Jan 29 11:14:18.910561 kernel: NUMA: Failed to initialise from firmware
Jan 29 11:14:18.910568 kernel: NUMA: Faking a node at [mem 0x0000000040000000-0x00000000dcffffff]
Jan 29 11:14:18.910574 kernel: NUMA: NODE_DATA [mem 0xdc956800-0xdc95bfff]
Jan 29 11:14:18.910580 kernel: Zone ranges:
Jan 29 11:14:18.910586 kernel: DMA [mem 0x0000000040000000-0x00000000dcffffff]
Jan 29 11:14:18.910593 kernel: DMA32 empty
Jan 29 11:14:18.910599 kernel: Normal empty
Jan 29 11:14:18.910605 kernel: Movable zone start for each node
Jan 29 11:14:18.910611 kernel: Early memory node ranges
Jan 29 11:14:18.910618 kernel: node 0: [mem 0x0000000040000000-0x00000000d976ffff]
Jan 29 11:14:18.910624 kernel: node 0: [mem 0x00000000d9770000-0x00000000d9b3ffff]
Jan 29 11:14:18.910630 kernel: node 0: [mem 0x00000000d9b40000-0x00000000dce1ffff]
Jan 29 11:14:18.910636 kernel: node 0: [mem 0x00000000dce20000-0x00000000dceaffff]
Jan 29 11:14:18.910642 kernel: node 0: [mem 0x00000000dceb0000-0x00000000dcebffff]
Jan 29 11:14:18.910648 kernel: node 0: [mem 0x00000000dcec0000-0x00000000dcfdffff]
Jan 29 11:14:18.910654 kernel: node 0: [mem 0x00000000dcfe0000-0x00000000dcffffff]
Jan 29 11:14:18.910660 kernel: Initmem setup node 0 [mem 0x0000000040000000-0x00000000dcffffff]
Jan 29 11:14:18.910667 kernel: On node 0, zone DMA: 12288 pages in unavailable ranges
Jan 29 11:14:18.910673 kernel: psci: probing for conduit method from ACPI.
Jan 29 11:14:18.910680 kernel: psci: PSCIv1.1 detected in firmware.
Jan 29 11:14:18.910689 kernel: psci: Using standard PSCI v0.2 function IDs
Jan 29 11:14:18.910695 kernel: psci: Trusted OS migration not required
Jan 29 11:14:18.910702 kernel: psci: SMC Calling Convention v1.1
Jan 29 11:14:18.910710 kernel: smccc: KVM: hypervisor services detected (0x00000000 0x00000000 0x00000000 0x00000003)
Jan 29 11:14:18.910716 kernel: percpu: Embedded 31 pages/cpu s86696 r8192 d32088 u126976
Jan 29 11:14:18.910731 kernel: pcpu-alloc: s86696 r8192 d32088 u126976 alloc=31*4096
Jan 29 11:14:18.910739 kernel: pcpu-alloc: [0] 0 [0] 1 [0] 2 [0] 3
Jan 29 11:14:18.910746 kernel: Detected PIPT I-cache on CPU0
Jan 29 11:14:18.910752 kernel: CPU features: detected: GIC system register CPU interface
Jan 29 11:14:18.910759 kernel: CPU features: detected: Hardware dirty bit management
Jan 29 11:14:18.910765 kernel: CPU features: detected: Spectre-v4
Jan 29 11:14:18.910772 kernel: CPU features: detected: Spectre-BHB
Jan 29 11:14:18.910779 kernel: CPU features: kernel page table isolation forced ON by KASLR
Jan 29 11:14:18.910787 kernel: CPU features: detected: Kernel page table isolation (KPTI)
Jan 29 11:14:18.910794 kernel: CPU features: detected: ARM erratum 1418040
Jan 29 11:14:18.910800 kernel: CPU features: detected: SSBS not fully self-synchronizing
Jan 29 11:14:18.910807 kernel: alternatives: applying boot alternatives
Jan 29 11:14:18.910814 kernel: Kernel command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected acpi=force verity.usrhash=c8edc06d36325e34bb125a9ad39c4f788eb9f01102631b71efea3f9afa94c89e
Jan 29 11:14:18.910821 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Jan 29 11:14:18.910827 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Jan 29 11:14:18.910834 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Jan 29 11:14:18.910840 kernel: Fallback order for Node 0: 0
Jan 29 11:14:18.910847 kernel: Built 1 zonelists, mobility grouping on. Total pages: 633024
Jan 29 11:14:18.910854 kernel: Policy zone: DMA
Jan 29 11:14:18.910861 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Jan 29 11:14:18.910868 kernel: software IO TLB: area num 4.
Jan 29 11:14:18.910875 kernel: software IO TLB: mapped [mem 0x00000000d2e00000-0x00000000d6e00000] (64MB)
Jan 29 11:14:18.910881 kernel: Memory: 2386316K/2572288K available (10240K kernel code, 2186K rwdata, 8096K rodata, 39680K init, 897K bss, 185972K reserved, 0K cma-reserved)
Jan 29 11:14:18.910888 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1
Jan 29 11:14:18.910895 kernel: rcu: Preemptible hierarchical RCU implementation.
Jan 29 11:14:18.910902 kernel: rcu: RCU event tracing is enabled.
Jan 29 11:14:18.910908 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4.
Jan 29 11:14:18.910915 kernel: Trampoline variant of Tasks RCU enabled.
Jan 29 11:14:18.910922 kernel: Tracing variant of Tasks RCU enabled.
Jan 29 11:14:18.910928 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Jan 29 11:14:18.910935 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4
Jan 29 11:14:18.910943 kernel: NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0
Jan 29 11:14:18.910949 kernel: GICv3: 256 SPIs implemented
Jan 29 11:14:18.910956 kernel: GICv3: 0 Extended SPIs implemented
Jan 29 11:14:18.910962 kernel: Root IRQ handler: gic_handle_irq
Jan 29 11:14:18.910968 kernel: GICv3: GICv3 features: 16 PPIs, DirectLPI
Jan 29 11:14:18.910975 kernel: GICv3: CPU0: found redistributor 0 region 0:0x00000000080a0000
Jan 29 11:14:18.910981 kernel: ITS [mem 0x08080000-0x0809ffff]
Jan 29 11:14:18.910988 kernel: ITS@0x0000000008080000: allocated 8192 Devices @400c0000 (indirect, esz 8, psz 64K, shr 1)
Jan 29 11:14:18.910995 kernel: ITS@0x0000000008080000: allocated 8192 Interrupt Collections @400d0000 (flat, esz 8, psz 64K, shr 1)
Jan 29 11:14:18.911001 kernel: GICv3: using LPI property table @0x00000000400f0000
Jan 29 11:14:18.911008 kernel: GICv3: CPU0: using allocated LPI pending table @0x0000000040100000
Jan 29 11:14:18.911015 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Jan 29 11:14:18.911022 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Jan 29 11:14:18.911029 kernel: arch_timer: cp15 timer(s) running at 25.00MHz (virt).
Jan 29 11:14:18.911035 kernel: clocksource: arch_sys_counter: mask: 0xffffffffffffff max_cycles: 0x5c40939b5, max_idle_ns: 440795202646 ns
Jan 29 11:14:18.911042 kernel: sched_clock: 56 bits at 25MHz, resolution 40ns, wraps every 4398046511100ns
Jan 29 11:14:18.911049 kernel: arm-pv: using stolen time PV
Jan 29 11:14:18.911056 kernel: Console: colour dummy device 80x25
Jan 29 11:14:18.911062 kernel: ACPI: Core revision 20230628
Jan 29 11:14:18.911069 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 50.00 BogoMIPS (lpj=25000)
Jan 29 11:14:18.911076 kernel: pid_max: default: 32768 minimum: 301
Jan 29 11:14:18.911084 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity
Jan 29 11:14:18.911091 kernel: landlock: Up and running.
Jan 29 11:14:18.911097 kernel: SELinux: Initializing.
Jan 29 11:14:18.911104 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Jan 29 11:14:18.911111 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Jan 29 11:14:18.911118 kernel: RCU Tasks: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Jan 29 11:14:18.911125 kernel: RCU Tasks Trace: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Jan 29 11:14:18.911132 kernel: rcu: Hierarchical SRCU implementation.
Jan 29 11:14:18.911139 kernel: rcu: Max phase no-delay instances is 400.
Jan 29 11:14:18.911146 kernel: Platform MSI: ITS@0x8080000 domain created
Jan 29 11:14:18.911154 kernel: PCI/MSI: ITS@0x8080000 domain created
Jan 29 11:14:18.911160 kernel: Remapping and enabling EFI services.
Jan 29 11:14:18.911167 kernel: smp: Bringing up secondary CPUs ...
Jan 29 11:14:18.911174 kernel: Detected PIPT I-cache on CPU1
Jan 29 11:14:18.911181 kernel: GICv3: CPU1: found redistributor 1 region 0:0x00000000080c0000
Jan 29 11:14:18.911188 kernel: GICv3: CPU1: using allocated LPI pending table @0x0000000040110000
Jan 29 11:14:18.911195 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Jan 29 11:14:18.911201 kernel: CPU1: Booted secondary processor 0x0000000001 [0x413fd0c1]
Jan 29 11:14:18.911208 kernel: Detected PIPT I-cache on CPU2
Jan 29 11:14:18.911216 kernel: GICv3: CPU2: found redistributor 2 region 0:0x00000000080e0000
Jan 29 11:14:18.911223 kernel: GICv3: CPU2: using allocated LPI pending table @0x0000000040120000
Jan 29 11:14:18.911234 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Jan 29 11:14:18.911243 kernel: CPU2: Booted secondary processor 0x0000000002 [0x413fd0c1]
Jan 29 11:14:18.911250 kernel: Detected PIPT I-cache on CPU3
Jan 29 11:14:18.911257 kernel: GICv3: CPU3: found redistributor 3 region 0:0x0000000008100000
Jan 29 11:14:18.911264 kernel: GICv3: CPU3: using allocated LPI pending table @0x0000000040130000
Jan 29 11:14:18.911271 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Jan 29 11:14:18.911278 kernel: CPU3: Booted secondary processor 0x0000000003 [0x413fd0c1]
Jan 29 11:14:18.911287 kernel: smp: Brought up 1 node, 4 CPUs
Jan 29 11:14:18.911294 kernel: SMP: Total of 4 processors activated.
Jan 29 11:14:18.911301 kernel: CPU features: detected: 32-bit EL0 Support
Jan 29 11:14:18.911308 kernel: CPU features: detected: Data cache clean to the PoU not required for I/D coherence
Jan 29 11:14:18.911315 kernel: CPU features: detected: Common not Private translations
Jan 29 11:14:18.911322 kernel: CPU features: detected: CRC32 instructions
Jan 29 11:14:18.911329 kernel: CPU features: detected: Enhanced Virtualization Traps
Jan 29 11:14:18.911336 kernel: CPU features: detected: RCpc load-acquire (LDAPR)
Jan 29 11:14:18.911344 kernel: CPU features: detected: LSE atomic instructions
Jan 29 11:14:18.911352 kernel: CPU features: detected: Privileged Access Never
Jan 29 11:14:18.911359 kernel: CPU features: detected: RAS Extension Support
Jan 29 11:14:18.911366 kernel: CPU features: detected: Speculative Store Bypassing Safe (SSBS)
Jan 29 11:14:18.911373 kernel: CPU: All CPU(s) started at EL1
Jan 29 11:14:18.911380 kernel: alternatives: applying system-wide alternatives
Jan 29 11:14:18.911387 kernel: devtmpfs: initialized
Jan 29 11:14:18.911394 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Jan 29 11:14:18.911402 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear)
Jan 29 11:14:18.911410 kernel: pinctrl core: initialized pinctrl subsystem
Jan 29 11:14:18.911417 kernel: SMBIOS 3.0.0 present.
Jan 29 11:14:18.911424 kernel: DMI: QEMU KVM Virtual Machine, BIOS unknown 02/02/2022
Jan 29 11:14:18.911432 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Jan 29 11:14:18.911439 kernel: DMA: preallocated 512 KiB GFP_KERNEL pool for atomic allocations
Jan 29 11:14:18.911446 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations
Jan 29 11:14:18.911453 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations
Jan 29 11:14:18.911461 kernel: audit: initializing netlink subsys (disabled)
Jan 29 11:14:18.911468 kernel: audit: type=2000 audit(0.019:1): state=initialized audit_enabled=0 res=1
Jan 29 11:14:18.911476 kernel: thermal_sys: Registered thermal governor 'step_wise'
Jan 29 11:14:18.911483 kernel: cpuidle: using governor menu
Jan 29 11:14:18.911491 kernel: hw-breakpoint: found 6 breakpoint and 4 watchpoint registers.
Jan 29 11:14:18.911498 kernel: ASID allocator initialised with 32768 entries
Jan 29 11:14:18.911505 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Jan 29 11:14:18.911512 kernel: Serial: AMBA PL011 UART driver
Jan 29 11:14:18.911519 kernel: Modules: 2G module region forced by RANDOMIZE_MODULE_REGION_FULL
Jan 29 11:14:18.911526 kernel: Modules: 0 pages in range for non-PLT usage
Jan 29 11:14:18.911533 kernel: Modules: 508960 pages in range for PLT usage
Jan 29 11:14:18.911547 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Jan 29 11:14:18.911554 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 1.00 GiB page
Jan 29 11:14:18.911561 kernel: HugeTLB: registered 32.0 MiB page size, pre-allocated 0 pages
Jan 29 11:14:18.911569 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 32.0 MiB page
Jan 29 11:14:18.911576 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Jan 29 11:14:18.911583 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 2.00 MiB page
Jan 29 11:14:18.911590 kernel: HugeTLB: registered 64.0 KiB page size, pre-allocated 0 pages
Jan 29 11:14:18.911597 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 64.0 KiB page
Jan 29 11:14:18.911604 kernel: ACPI: Added _OSI(Module Device)
Jan 29 11:14:18.911613 kernel: ACPI: Added _OSI(Processor Device)
Jan 29 11:14:18.911620 kernel: ACPI: Added _OSI(3.0 _SCP Extensions)
Jan 29 11:14:18.911627 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Jan 29 11:14:18.911634 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Jan 29 11:14:18.911641 kernel: ACPI: Interpreter enabled
Jan 29 11:14:18.911648 kernel: ACPI: Using GIC for interrupt routing
Jan 29 11:14:18.911656 kernel: ACPI: MCFG table detected, 1 entries
Jan 29 11:14:18.911663 kernel: ARMH0011:00: ttyAMA0 at MMIO 0x9000000 (irq = 12, base_baud = 0) is a SBSA
Jan 29 11:14:18.911670 kernel: printk: console [ttyAMA0] enabled
Jan 29 11:14:18.911679 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Jan 29 11:14:18.911873 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Jan 29 11:14:18.911951 kernel: acpi PNP0A08:00: _OSC: platform does not support [LTR]
Jan 29 11:14:18.912017 kernel: acpi PNP0A08:00: _OSC: OS now controls [PCIeHotplug PME AER PCIeCapability]
Jan 29 11:14:18.912081 kernel: acpi PNP0A08:00: ECAM area [mem 0x4010000000-0x401fffffff] reserved by PNP0C02:00
Jan 29 11:14:18.912143 kernel: acpi PNP0A08:00: ECAM at [mem 0x4010000000-0x401fffffff] for [bus 00-ff]
Jan 29 11:14:18.912153 kernel: ACPI: Remapped I/O 0x000000003eff0000 to [io 0x0000-0xffff window]
Jan 29 11:14:18.912163 kernel: PCI host bridge to bus 0000:00
Jan 29 11:14:18.912232 kernel: pci_bus 0000:00: root bus resource [mem 0x10000000-0x3efeffff window]
Jan 29 11:14:18.912290 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0xffff window]
Jan 29 11:14:18.912347 kernel: pci_bus 0000:00: root bus resource [mem 0x8000000000-0xffffffffff window]
Jan 29 11:14:18.912403 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Jan 29 11:14:18.912480 kernel: pci 0000:00:00.0: [1b36:0008] type 00 class 0x060000
Jan 29 11:14:18.912568 kernel: pci 0000:00:01.0: [1af4:1005] type 00 class 0x00ff00
Jan 29 11:14:18.912641 kernel: pci 0000:00:01.0: reg 0x10: [io 0x0000-0x001f]
Jan 29 11:14:18.912708 kernel: pci 0000:00:01.0: reg 0x14: [mem 0x10000000-0x10000fff]
Jan 29 11:14:18.912789 kernel: pci 0000:00:01.0: reg 0x20: [mem 0x8000000000-0x8000003fff 64bit pref]
Jan 29 11:14:18.912855 kernel: pci 0000:00:01.0: BAR 4: assigned [mem 0x8000000000-0x8000003fff 64bit pref]
Jan 29 11:14:18.912920 kernel: pci 0000:00:01.0: BAR 1: assigned [mem 0x10000000-0x10000fff]
Jan 29 11:14:18.912984 kernel: pci 0000:00:01.0: BAR 0: assigned [io 0x1000-0x101f]
Jan 29 11:14:18.913042 kernel: pci_bus 0000:00: resource 4 [mem 0x10000000-0x3efeffff window]
Jan 29 11:14:18.913117 kernel: pci_bus 0000:00: resource 5 [io 0x0000-0xffff window]
Jan 29 11:14:18.913175 kernel: pci_bus 0000:00: resource 6 [mem 0x8000000000-0xffffffffff window]
Jan 29 11:14:18.913184 kernel: ACPI: PCI: Interrupt link GSI0 configured for IRQ 35
Jan 29 11:14:18.913192 kernel: ACPI: PCI: Interrupt link GSI1 configured for IRQ 36
Jan 29 11:14:18.913199 kernel: ACPI: PCI: Interrupt link GSI2 configured for IRQ 37
Jan 29 11:14:18.913206 kernel: ACPI: PCI: Interrupt link GSI3 configured for IRQ 38
Jan 29 11:14:18.913213 kernel: iommu: Default domain type: Translated
Jan 29 11:14:18.913221 kernel: iommu: DMA domain TLB invalidation policy: strict mode
Jan 29 11:14:18.913230 kernel: efivars: Registered efivars operations
Jan 29 11:14:18.913237 kernel: vgaarb: loaded
Jan 29 11:14:18.913244 kernel: clocksource: Switched to clocksource arch_sys_counter
Jan 29 11:14:18.913251 kernel: VFS: Disk quotas dquot_6.6.0
Jan 29 11:14:18.913258 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Jan 29 11:14:18.913266 kernel: pnp: PnP ACPI init
Jan 29 11:14:18.913342 kernel: system 00:00: [mem 0x4010000000-0x401fffffff window] could not be reserved
Jan 29 11:14:18.913352 kernel: pnp: PnP ACPI: found 1 devices
Jan 29 11:14:18.913362 kernel: NET: Registered PF_INET protocol family
Jan 29 11:14:18.913369 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Jan 29 11:14:18.913377 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Jan 29 11:14:18.913384 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Jan 29 11:14:18.913391 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Jan 29 11:14:18.913398 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
Jan 29 11:14:18.913405 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Jan 29 11:14:18.913412 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Jan 29 11:14:18.913420 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Jan 29 11:14:18.913429 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Jan 29 11:14:18.913436 kernel: PCI: CLS 0 bytes, default 64
Jan 29 11:14:18.913443 kernel: kvm [1]: HYP mode not available
Jan 29 11:14:18.913450 kernel: Initialise system trusted keyrings
Jan 29 11:14:18.913457 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Jan 29 11:14:18.913464 kernel: Key type asymmetric registered
Jan 29 11:14:18.913471 kernel: Asymmetric key parser 'x509' registered
Jan 29 11:14:18.913478 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250)
Jan 29 11:14:18.913485 kernel: io scheduler mq-deadline registered
Jan 29 11:14:18.913494 kernel: io scheduler kyber registered
Jan 29 11:14:18.913501 kernel: io scheduler bfq registered
Jan 29 11:14:18.913508 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0C:00/input/input0
Jan 29 11:14:18.913515 kernel: ACPI: button: Power Button [PWRB]
Jan 29 11:14:18.913522 kernel: ACPI: \_SB_.PCI0.GSI1: Enabled at IRQ 36
Jan 29 11:14:18.913596 kernel: virtio-pci 0000:00:01.0: enabling device (0005 -> 0007)
Jan 29 11:14:18.913606 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Jan 29 11:14:18.913614 kernel: thunder_xcv, ver 1.0
Jan 29 11:14:18.913621 kernel: thunder_bgx, ver 1.0
Jan 29 11:14:18.913630 kernel: nicpf, ver 1.0
Jan 29 11:14:18.913637 kernel: nicvf, ver 1.0
Jan 29 11:14:18.913709 kernel: rtc-efi rtc-efi.0: registered as rtc0
Jan 29 11:14:18.913826 kernel: rtc-efi rtc-efi.0: setting system clock to 2025-01-29T11:14:18 UTC (1738149258)
Jan 29 11:14:18.913837 kernel: hid: raw HID events driver (C) Jiri Kosina
Jan 29 11:14:18.913844 kernel: hw perfevents: enabled with armv8_pmuv3_0 PMU driver, 7 counters available
Jan 29 11:14:18.913852 kernel: watchdog: Delayed init of the lockup detector failed: -19
Jan 29 11:14:18.913859 kernel: watchdog: Hard watchdog permanently disabled
Jan 29 11:14:18.913869 kernel: NET: Registered PF_INET6 protocol family
Jan 29 11:14:18.913876 kernel: Segment Routing with IPv6
Jan 29 11:14:18.913883 kernel: In-situ OAM (IOAM) with IPv6
Jan 29 11:14:18.913890 kernel: NET: Registered PF_PACKET protocol family
Jan 29 11:14:18.913897 kernel: Key type dns_resolver registered
Jan 29 11:14:18.913904 kernel: registered taskstats version 1
Jan 29 11:14:18.913911 kernel: Loading compiled-in X.509 certificates
Jan 29 11:14:18.913919 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.74-flatcar: f3333311a24aa8c58222f4e98a07eaa1f186ad1a'
Jan 29 11:14:18.913926 kernel: Key type .fscrypt registered
Jan 29 11:14:18.913934 kernel: Key type fscrypt-provisioning registered
Jan 29 11:14:18.913941 kernel: ima: No TPM chip found, activating TPM-bypass!
Jan 29 11:14:18.913949 kernel: ima: Allocated hash algorithm: sha1
Jan 29 11:14:18.913956 kernel: ima: No architecture policies found
Jan 29 11:14:18.913963 kernel: alg: No test for fips(ansi_cprng) (fips_ansi_cprng)
Jan 29 11:14:18.913970 kernel: clk: Disabling unused clocks
Jan 29 11:14:18.913976 kernel: Freeing unused kernel memory: 39680K
Jan 29 11:14:18.913983 kernel: Run /init as init process
Jan 29 11:14:18.913990 kernel: with arguments:
Jan 29 11:14:18.913998 kernel: /init
Jan 29 11:14:18.914005 kernel: with environment:
Jan 29 11:14:18.914012 kernel: HOME=/
Jan 29 11:14:18.914019 kernel: TERM=linux
Jan 29 11:14:18.914026 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
Jan 29 11:14:18.914034 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Jan 29 11:14:18.914043 systemd[1]: Detected virtualization kvm.
Jan 29 11:14:18.914051 systemd[1]: Detected architecture arm64.
Jan 29 11:14:18.914060 systemd[1]: Running in initrd.
Jan 29 11:14:18.914067 systemd[1]: No hostname configured, using default hostname.
Jan 29 11:14:18.914074 systemd[1]: Hostname set to .
Jan 29 11:14:18.914082 systemd[1]: Initializing machine ID from VM UUID.
Jan 29 11:14:18.914090 systemd[1]: Queued start job for default target initrd.target.
Jan 29 11:14:18.914097 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Jan 29 11:14:18.914105 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Jan 29 11:14:18.914113 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Jan 29 11:14:18.914122 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Jan 29 11:14:18.914130 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Jan 29 11:14:18.914138 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Jan 29 11:14:18.914147 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Jan 29 11:14:18.914155 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Jan 29 11:14:18.914162 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Jan 29 11:14:18.914171 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Jan 29 11:14:18.914179 systemd[1]: Reached target paths.target - Path Units.
Jan 29 11:14:18.914186 systemd[1]: Reached target slices.target - Slice Units.
Jan 29 11:14:18.914194 systemd[1]: Reached target swap.target - Swaps.
Jan 29 11:14:18.914201 systemd[1]: Reached target timers.target - Timer Units.
Jan 29 11:14:18.914209 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Jan 29 11:14:18.914217 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Jan 29 11:14:18.914224 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Jan 29 11:14:18.914232 systemd[1]: Listening on systemd-journald.socket - Journal Socket.
Jan 29 11:14:18.914241 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Jan 29 11:14:18.914249 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Jan 29 11:14:18.914256 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Jan 29 11:14:18.914264 systemd[1]: Reached target sockets.target - Socket Units.
Jan 29 11:14:18.914272 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Jan 29 11:14:18.914279 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Jan 29 11:14:18.914287 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Jan 29 11:14:18.914294 systemd[1]: Starting systemd-fsck-usr.service...
Jan 29 11:14:18.914302 systemd[1]: Starting systemd-journald.service - Journal Service...
Jan 29 11:14:18.914312 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Jan 29 11:14:18.914319 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jan 29 11:14:18.914327 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Jan 29 11:14:18.914334 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Jan 29 11:14:18.914342 systemd[1]: Finished systemd-fsck-usr.service.
Jan 29 11:14:18.914350 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Jan 29 11:14:18.914359 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Jan 29 11:14:18.914383 systemd-journald[239]: Collecting audit messages is disabled.
Jan 29 11:14:18.914404 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Jan 29 11:14:18.914412 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Jan 29 11:14:18.914420 systemd-journald[239]: Journal started
Jan 29 11:14:18.914439 systemd-journald[239]: Runtime Journal (/run/log/journal/7bd8db0b14c74af9ba8235c18b7f94bc) is 5.9M, max 47.3M, 41.4M free.
Jan 29 11:14:18.907267 systemd-modules-load[240]: Inserted module 'overlay'
Jan 29 11:14:18.917490 systemd[1]: Started systemd-journald.service - Journal Service.
Jan 29 11:14:18.920742 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Jan 29 11:14:18.922514 systemd-modules-load[240]: Inserted module 'br_netfilter'
Jan 29 11:14:18.923582 kernel: Bridge firewalling registered
Jan 29 11:14:18.926904 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Jan 29 11:14:18.928593 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Jan 29 11:14:18.931066 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Jan 29 11:14:18.935058 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Jan 29 11:14:18.936269 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Jan 29 11:14:18.942157 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Jan 29 11:14:18.945333 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jan 29 11:14:18.946676 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Jan 29 11:14:18.956920 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Jan 29 11:14:18.959173 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Jan 29 11:14:18.967216 dracut-cmdline[277]: dracut-dracut-053
Jan 29 11:14:18.969702 dracut-cmdline[277]: Using kernel command line parameters: rd.driver.pre=btrfs BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected acpi=force verity.usrhash=c8edc06d36325e34bb125a9ad39c4f788eb9f01102631b71efea3f9afa94c89e
Jan 29 11:14:18.984907 systemd-resolved[279]: Positive Trust Anchors:
Jan 29 11:14:18.984980 systemd-resolved[279]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Jan 29 11:14:18.985010 systemd-resolved[279]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Jan 29 11:14:18.989630 systemd-resolved[279]: Defaulting to hostname 'linux'.
Jan 29 11:14:18.992380 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Jan 29 11:14:18.994486 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Jan 29 11:14:19.035745 kernel: SCSI subsystem initialized
Jan 29 11:14:19.039745 kernel: Loading iSCSI transport class v2.0-870.
Jan 29 11:14:19.047752 kernel: iscsi: registered transport (tcp)
Jan 29 11:14:19.061857 kernel: iscsi: registered transport (qla4xxx)
Jan 29 11:14:19.061875 kernel: QLogic iSCSI HBA Driver
Jan 29 11:14:19.105778 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Jan 29 11:14:19.117888 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Jan 29 11:14:19.137258 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Jan 29 11:14:19.137306 kernel: device-mapper: uevent: version 1.0.3
Jan 29 11:14:19.138909 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com
Jan 29 11:14:19.185754 kernel: raid6: neonx8 gen() 15777 MB/s
Jan 29 11:14:19.202745 kernel: raid6: neonx4 gen() 15647 MB/s
Jan 29 11:14:19.219743 kernel: raid6: neonx2 gen() 13249 MB/s
Jan 29 11:14:19.236745 kernel: raid6: neonx1 gen() 10483 MB/s
Jan 29 11:14:19.253743 kernel: raid6: int64x8 gen() 6946 MB/s
Jan 29 11:14:19.270743 kernel: raid6: int64x4 gen() 7350 MB/s
Jan 29 11:14:19.287742 kernel: raid6: int64x2 gen() 6130 MB/s
Jan 29 11:14:19.304898 kernel: raid6: int64x1 gen() 5059 MB/s
Jan 29 11:14:19.304923 kernel: raid6: using algorithm neonx8 gen() 15777 MB/s
Jan 29 11:14:19.322887 kernel: raid6: .... xor() 11923 MB/s, rmw enabled
Jan 29 11:14:19.322929 kernel: raid6: using neon recovery algorithm
Jan 29 11:14:19.327744 kernel: xor: measuring software checksum speed
Jan 29 11:14:19.329035 kernel: 8regs : 17436 MB/sec
Jan 29 11:14:19.329048 kernel: 32regs : 19636 MB/sec
Jan 29 11:14:19.330336 kernel: arm64_neon : 26927 MB/sec
Jan 29 11:14:19.330347 kernel: xor: using function: arm64_neon (26927 MB/sec)
Jan 29 11:14:19.380886 kernel: Btrfs loaded, zoned=no, fsverity=no
Jan 29 11:14:19.394793 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Jan 29 11:14:19.404876 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Jan 29 11:14:19.416399 systemd-udevd[463]: Using default interface naming scheme 'v255'.
Jan 29 11:14:19.419689 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Jan 29 11:14:19.428047 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Jan 29 11:14:19.439522 dracut-pre-trigger[472]: rd.md=0: removing MD RAID activation
Jan 29 11:14:19.466789 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Jan 29 11:14:19.477870 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Jan 29 11:14:19.517073 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Jan 29 11:14:19.526412 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Jan 29 11:14:19.536077 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Jan 29 11:14:19.537718 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Jan 29 11:14:19.539625 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Jan 29 11:14:19.541999 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Jan 29 11:14:19.550907 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Jan 29 11:14:19.563742 kernel: virtio_blk virtio1: 1/0/0 default/read/poll queues
Jan 29 11:14:19.580010 kernel: virtio_blk virtio1: [vda] 19775488 512-byte logical blocks (10.1 GB/9.43 GiB)
Jan 29 11:14:19.580111 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Jan 29 11:14:19.580122 kernel: GPT:9289727 != 19775487
Jan 29 11:14:19.580131 kernel: GPT:Alternate GPT header not at the end of the disk.
Jan 29 11:14:19.580140 kernel: GPT:9289727 != 19775487
Jan 29 11:14:19.580152 kernel: GPT: Use GNU Parted to correct GPT errors.
Jan 29 11:14:19.580162 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Jan 29 11:14:19.565080 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Jan 29 11:14:19.572456 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Jan 29 11:14:19.572577 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jan 29 11:14:19.582651 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Jan 29 11:14:19.584651 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Jan 29 11:14:19.584832 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Jan 29 11:14:19.586959 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Jan 29 11:14:19.600049 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jan 29 11:14:19.604774 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/vda6 scanned by (udev-worker) (514)
Jan 29 11:14:19.604794 kernel: BTRFS: device fsid b5bc7ecc-f31a-46c7-9582-5efca7819025 devid 1 transid 39 /dev/vda3 scanned by (udev-worker) (518)
Jan 29 11:14:19.614688 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM.
Jan 29 11:14:19.620771 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Jan 29 11:14:19.625597 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT.
Jan 29 11:14:19.632780 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Jan 29 11:14:19.636608 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132.
Jan 29 11:14:19.637918 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A.
Jan 29 11:14:19.651863 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Jan 29 11:14:19.653621 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Jan 29 11:14:19.662578 disk-uuid[552]: Primary Header is updated.
Jan 29 11:14:19.662578 disk-uuid[552]: Secondary Entries is updated.
Jan 29 11:14:19.662578 disk-uuid[552]: Secondary Header is updated.
Jan 29 11:14:19.669754 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Jan 29 11:14:19.676312 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jan 29 11:14:20.677968 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Jan 29 11:14:20.678057 disk-uuid[554]: The operation has completed successfully.
Jan 29 11:14:20.698582 systemd[1]: disk-uuid.service: Deactivated successfully.
Jan 29 11:14:20.698675 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Jan 29 11:14:20.718913 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
Jan 29 11:14:20.722757 sh[573]: Success
Jan 29 11:14:20.735757 kernel: device-mapper: verity: sha256 using implementation "sha256-ce"
Jan 29 11:14:20.761190 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
Jan 29 11:14:20.776906 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
Jan 29 11:14:20.779189 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
Jan 29 11:14:20.788981 kernel: BTRFS info (device dm-0): first mount of filesystem b5bc7ecc-f31a-46c7-9582-5efca7819025
Jan 29 11:14:20.789018 kernel: BTRFS info (device dm-0): using crc32c (crc32c-generic) checksum algorithm
Jan 29 11:14:20.789029 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead
Jan 29 11:14:20.790815 kernel: BTRFS info (device dm-0): disabling log replay at mount time
Jan 29 11:14:20.790832 kernel: BTRFS info (device dm-0): using free space tree
Jan 29 11:14:20.795070 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
Jan 29 11:14:20.796378 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Jan 29 11:14:20.806856 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Jan 29 11:14:20.808335 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Jan 29 11:14:20.817193 kernel: BTRFS info (device vda6): first mount of filesystem 9c6de53f-d522-4994-b092-a63f342c3ab0
Jan 29 11:14:20.817230 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm
Jan 29 11:14:20.817917 kernel: BTRFS info (device vda6): using free space tree
Jan 29 11:14:20.820095 kernel: BTRFS info (device vda6): auto enabling async discard
Jan 29 11:14:20.827210 systemd[1]: mnt-oem.mount: Deactivated successfully.
Jan 29 11:14:20.829312 kernel: BTRFS info (device vda6): last unmount of filesystem 9c6de53f-d522-4994-b092-a63f342c3ab0
Jan 29 11:14:20.833514 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Jan 29 11:14:20.844083 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Jan 29 11:14:20.911303 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Jan 29 11:14:20.919895 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Jan 29 11:14:20.945195 ignition[664]: Ignition 2.20.0
Jan 29 11:14:20.945211 systemd-networkd[765]: lo: Link UP
Jan 29 11:14:20.945205 ignition[664]: Stage: fetch-offline
Jan 29 11:14:20.945215 systemd-networkd[765]: lo: Gained carrier
Jan 29 11:14:20.945235 ignition[664]: no configs at "/usr/lib/ignition/base.d"
Jan 29 11:14:20.945941 systemd-networkd[765]: Enumeration completed
Jan 29 11:14:20.945243 ignition[664]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Jan 29 11:14:20.946255 systemd[1]: Started systemd-networkd.service - Network Configuration.
Jan 29 11:14:20.945389 ignition[664]: parsed url from cmdline: ""
Jan 29 11:14:20.946425 systemd-networkd[765]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jan 29 11:14:20.945392 ignition[664]: no config URL provided
Jan 29 11:14:20.946428 systemd-networkd[765]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Jan 29 11:14:20.945396 ignition[664]: reading system config file "/usr/lib/ignition/user.ign"
Jan 29 11:14:20.947185 systemd-networkd[765]: eth0: Link UP
Jan 29 11:14:20.945403 ignition[664]: no config at "/usr/lib/ignition/user.ign"
Jan 29 11:14:20.947188 systemd-networkd[765]: eth0: Gained carrier
Jan 29 11:14:20.945426 ignition[664]: op(1): [started] loading QEMU firmware config module
Jan 29 11:14:20.947194 systemd-networkd[765]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jan 29 11:14:20.945431 ignition[664]: op(1): executing: "modprobe" "qemu_fw_cfg"
Jan 29 11:14:20.948432 systemd[1]: Reached target network.target - Network.
Jan 29 11:14:20.955840 ignition[664]: op(1): [finished] loading QEMU firmware config module
Jan 29 11:14:20.972784 systemd-networkd[765]: eth0: DHCPv4 address 10.0.0.125/16, gateway 10.0.0.1 acquired from 10.0.0.1
Jan 29 11:14:20.984830 ignition[664]: parsing config with SHA512: f0969d0135ee13a9627f6d42547b9f57b8a0817ca4a30b5fbebe3b15ea4692b731baf6755abd2cd51ae0a34100221b04d7da86947dffc7d1c093f0ad64ac0def
Jan 29 11:14:20.989296 unknown[664]: fetched base config from "system"
Jan 29 11:14:20.989306 unknown[664]: fetched user config from "qemu"
Jan 29 11:14:20.989776 ignition[664]: fetch-offline: fetch-offline passed
Jan 29 11:14:20.991464 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Jan 29 11:14:20.990036 ignition[664]: Ignition finished successfully
Jan 29 11:14:20.993100 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json).
Jan 29 11:14:21.001885 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Jan 29 11:14:21.012338 ignition[770]: Ignition 2.20.0
Jan 29 11:14:21.012349 ignition[770]: Stage: kargs
Jan 29 11:14:21.012498 ignition[770]: no configs at "/usr/lib/ignition/base.d"
Jan 29 11:14:21.012507 ignition[770]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Jan 29 11:14:21.013354 ignition[770]: kargs: kargs passed
Jan 29 11:14:21.013394 ignition[770]: Ignition finished successfully
Jan 29 11:14:21.016924 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Jan 29 11:14:21.027933 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Jan 29 11:14:21.037201 ignition[778]: Ignition 2.20.0
Jan 29 11:14:21.037212 ignition[778]: Stage: disks
Jan 29 11:14:21.037355 ignition[778]: no configs at "/usr/lib/ignition/base.d"
Jan 29 11:14:21.037365 ignition[778]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Jan 29 11:14:21.038190 ignition[778]: disks: disks passed
Jan 29 11:14:21.040649 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Jan 29 11:14:21.038232 ignition[778]: Ignition finished successfully
Jan 29 11:14:21.042255 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Jan 29 11:14:21.043738 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Jan 29 11:14:21.045858 systemd[1]: Reached target local-fs.target - Local File Systems.
Jan 29 11:14:21.047294 systemd[1]: Reached target sysinit.target - System Initialization.
Jan 29 11:14:21.049296 systemd[1]: Reached target basic.target - Basic System.
Jan 29 11:14:21.062894 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Jan 29 11:14:21.075686 systemd-fsck[789]: ROOT: clean, 14/553520 files, 52654/553472 blocks
Jan 29 11:14:21.079162 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Jan 29 11:14:21.087880 systemd[1]: Mounting sysroot.mount - /sysroot...
Jan 29 11:14:21.130595 systemd[1]: Mounted sysroot.mount - /sysroot.
Jan 29 11:14:21.132095 kernel: EXT4-fs (vda9): mounted filesystem bd47c032-97f4-4b3a-b174-3601de374086 r/w with ordered data mode. Quota mode: none.
Jan 29 11:14:21.131820 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Jan 29 11:14:21.144794 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Jan 29 11:14:21.146308 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Jan 29 11:14:21.147710 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met.
Jan 29 11:14:21.147769 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Jan 29 11:14:21.157746 kernel: BTRFS: device label OEM devid 1 transid 15 /dev/vda6 scanned by mount (797)
Jan 29 11:14:21.157772 kernel: BTRFS info (device vda6): first mount of filesystem 9c6de53f-d522-4994-b092-a63f342c3ab0
Jan 29 11:14:21.157783 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm
Jan 29 11:14:21.157793 kernel: BTRFS info (device vda6): using free space tree
Jan 29 11:14:21.157802 kernel: BTRFS info (device vda6): auto enabling async discard
Jan 29 11:14:21.147790 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Jan 29 11:14:21.152120 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Jan 29 11:14:21.153900 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Jan 29 11:14:21.160654 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Jan 29 11:14:21.207405 initrd-setup-root[821]: cut: /sysroot/etc/passwd: No such file or directory
Jan 29 11:14:21.211641 initrd-setup-root[828]: cut: /sysroot/etc/group: No such file or directory
Jan 29 11:14:21.214662 initrd-setup-root[835]: cut: /sysroot/etc/shadow: No such file or directory
Jan 29 11:14:21.217495 initrd-setup-root[842]: cut: /sysroot/etc/gshadow: No such file or directory
Jan 29 11:14:21.291052 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Jan 29 11:14:21.302828 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Jan 29 11:14:21.305746 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Jan 29 11:14:21.309743 kernel: BTRFS info (device vda6): last unmount of filesystem 9c6de53f-d522-4994-b092-a63f342c3ab0
Jan 29 11:14:21.329148 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Jan 29 11:14:21.331060 ignition[910]: INFO : Ignition 2.20.0
Jan 29 11:14:21.331060 ignition[910]: INFO : Stage: mount
Jan 29 11:14:21.331060 ignition[910]: INFO : no configs at "/usr/lib/ignition/base.d"
Jan 29 11:14:21.331060 ignition[910]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Jan 29 11:14:21.335594 ignition[910]: INFO : mount: mount passed
Jan 29 11:14:21.335594 ignition[910]: INFO : Ignition finished successfully
Jan 29 11:14:21.332979 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Jan 29 11:14:21.342880 systemd[1]: Starting ignition-files.service - Ignition (files)...
Jan 29 11:14:21.787785 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Jan 29 11:14:21.796913 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Jan 29 11:14:21.804737 kernel: BTRFS: device label OEM devid 1 transid 16 /dev/vda6 scanned by mount (924)
Jan 29 11:14:21.806925 kernel: BTRFS info (device vda6): first mount of filesystem 9c6de53f-d522-4994-b092-a63f342c3ab0
Jan 29 11:14:21.806939 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm
Jan 29 11:14:21.806949 kernel: BTRFS info (device vda6): using free space tree
Jan 29 11:14:21.809747 kernel: BTRFS info (device vda6): auto enabling async discard
Jan 29 11:14:21.810867 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Jan 29 11:14:21.826386 ignition[941]: INFO : Ignition 2.20.0
Jan 29 11:14:21.826386 ignition[941]: INFO : Stage: files
Jan 29 11:14:21.827989 ignition[941]: INFO : no configs at "/usr/lib/ignition/base.d"
Jan 29 11:14:21.827989 ignition[941]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Jan 29 11:14:21.827989 ignition[941]: DEBUG : files: compiled without relabeling support, skipping
Jan 29 11:14:21.831410 ignition[941]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Jan 29 11:14:21.831410 ignition[941]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Jan 29 11:14:21.831410 ignition[941]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Jan 29 11:14:21.831410 ignition[941]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Jan 29 11:14:21.831410 ignition[941]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Jan 29 11:14:21.830621 unknown[941]: wrote ssh authorized keys file for user: core
Jan 29 11:14:21.838788 ignition[941]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-arm64.tar.gz"
Jan 29 11:14:21.838788 ignition[941]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.13.2-linux-arm64.tar.gz: attempt #1
Jan 29 11:14:21.876597 ignition[941]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
Jan 29 11:14:21.978918 systemd-networkd[765]: eth0: Gained IPv6LL
Jan 29 11:14:22.173229 ignition[941]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-arm64.tar.gz"
Jan 29 11:14:22.173229 ignition[941]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh"
Jan 29 11:14:22.176971 ignition[941]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh"
Jan 29 11:14:22.176971 ignition[941]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml"
Jan 29 11:14:22.176971 ignition[941]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml"
Jan 29 11:14:22.176971 ignition[941]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Jan 29 11:14:22.176971 ignition[941]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Jan 29 11:14:22.176971 ignition[941]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Jan 29 11:14:22.176971 ignition[941]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Jan 29 11:14:22.176971 ignition[941]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf"
Jan 29 11:14:22.176971 ignition[941]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Jan 29 11:14:22.176971 ignition[941]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.0-arm64.raw"
Jan 29 11:14:22.176971 ignition[941]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.0-arm64.raw"
Jan 29 11:14:22.176971 ignition[941]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.0-arm64.raw"
Jan 29 11:14:22.176971 ignition[941]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.31.0-arm64.raw: attempt #1
Jan 29 11:14:22.485298 ignition[941]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK
Jan 29 11:14:22.694134 ignition[941]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.0-arm64.raw"
Jan 29 11:14:22.694134 ignition[941]: INFO : files: op(b): [started] processing unit "prepare-helm.service"
Jan 29 11:14:22.697703 ignition[941]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Jan 29 11:14:22.697703 ignition[941]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Jan 29 11:14:22.697703 ignition[941]: INFO : files: op(b): [finished] processing unit "prepare-helm.service"
Jan 29 11:14:22.697703 ignition[941]: INFO : files: op(d): [started] processing unit "coreos-metadata.service"
Jan 29 11:14:22.697703 ignition[941]: INFO : files: op(d): op(e): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Jan 29 11:14:22.697703 ignition[941]: INFO : files: op(d): op(e): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Jan 29 11:14:22.697703 ignition[941]: INFO : files: op(d): [finished] processing unit "coreos-metadata.service"
Jan 29 11:14:22.697703 ignition[941]: INFO : files: op(f): [started] setting preset to disabled for "coreos-metadata.service"
Jan 29 11:14:22.729005 ignition[941]: INFO : files: op(f): op(10): [started] removing enablement symlink(s) for "coreos-metadata.service"
Jan 29 11:14:22.733037 ignition[941]: INFO : files: op(f): op(10): [finished] removing enablement symlink(s) for "coreos-metadata.service"
Jan 29 11:14:22.735395 ignition[941]: INFO : files: op(f): [finished] setting preset to disabled for "coreos-metadata.service"
Jan 29 11:14:22.735395 ignition[941]: INFO : files: op(11): [started] setting preset to enabled for "prepare-helm.service"
Jan 29 11:14:22.735395 ignition[941]: INFO : files: op(11): [finished] setting preset to enabled for "prepare-helm.service"
Jan 29 11:14:22.735395 ignition[941]: INFO : files: createResultFile: createFiles: op(12): [started] writing file "/sysroot/etc/.ignition-result.json"
Jan 29 11:14:22.735395 ignition[941]: INFO : files: createResultFile: createFiles: op(12): [finished] writing file "/sysroot/etc/.ignition-result.json"
Jan 29 11:14:22.735395 ignition[941]: INFO : files: files passed
Jan 29 11:14:22.735395 ignition[941]: INFO : Ignition finished successfully
Jan 29 11:14:22.736303 systemd[1]: Finished ignition-files.service - Ignition (files).
Jan 29 11:14:22.745916 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Jan 29 11:14:22.747926 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Jan 29 11:14:22.749711 systemd[1]: ignition-quench.service: Deactivated successfully.
Jan 29 11:14:22.749812 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Jan 29 11:14:22.755449 initrd-setup-root-after-ignition[969]: grep: /sysroot/oem/oem-release: No such file or directory
Jan 29 11:14:22.758618 initrd-setup-root-after-ignition[971]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Jan 29 11:14:22.758618 initrd-setup-root-after-ignition[971]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Jan 29 11:14:22.762323 initrd-setup-root-after-ignition[975]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Jan 29 11:14:22.763030 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Jan 29 11:14:22.765056 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Jan 29 11:14:22.767744 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Jan 29 11:14:22.789358 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Jan 29 11:14:22.789491 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Jan 29 11:14:22.791818 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Jan 29 11:14:22.793555 systemd[1]: Reached target initrd.target - Initrd Default Target.
Jan 29 11:14:22.795299 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Jan 29 11:14:22.796123 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Jan 29 11:14:22.811430 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Jan 29 11:14:22.830959 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Jan 29 11:14:22.838707 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Jan 29 11:14:22.840984 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Jan 29 11:14:22.842225 systemd[1]: Stopped target timers.target - Timer Units.
Jan 29 11:14:22.843972 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Jan 29 11:14:22.844095 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Jan 29 11:14:22.846506 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Jan 29 11:14:22.847597 systemd[1]: Stopped target basic.target - Basic System.
Jan 29 11:14:22.849399 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Jan 29 11:14:22.851187 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Jan 29 11:14:22.852918 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Jan 29 11:14:22.854789 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Jan 29 11:14:22.856704 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
Jan 29 11:14:22.858690 systemd[1]: Stopped target sysinit.target - System Initialization.
Jan 29 11:14:22.860459 systemd[1]: Stopped target local-fs.target - Local File Systems.
Jan 29 11:14:22.862312 systemd[1]: Stopped target swap.target - Swaps.
Jan 29 11:14:22.863833 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Jan 29 11:14:22.863963 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
Jan 29 11:14:22.866256 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
Jan 29 11:14:22.868064 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Jan 29 11:14:22.869862 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
Jan 29 11:14:22.870795 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Jan 29 11:14:22.872876 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Jan 29 11:14:22.872998 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
Jan 29 11:14:22.875752 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Jan 29 11:14:22.875868 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
Jan 29 11:14:22.877736 systemd[1]: Stopped target paths.target - Path Units.
Jan 29 11:14:22.879304 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Jan 29 11:14:22.883772 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Jan 29 11:14:22.885026 systemd[1]: Stopped target slices.target - Slice Units.
Jan 29 11:14:22.887056 systemd[1]: Stopped target sockets.target - Socket Units.
Jan 29 11:14:22.888650 systemd[1]: iscsid.socket: Deactivated successfully.
Jan 29 11:14:22.888752 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
Jan 29 11:14:22.890304 systemd[1]: iscsiuio.socket: Deactivated successfully.
Jan 29 11:14:22.890382 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Jan 29 11:14:22.891879 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Jan 29 11:14:22.891988 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
Jan 29 11:14:22.893773 systemd[1]: ignition-files.service: Deactivated successfully.
Jan 29 11:14:22.893876 systemd[1]: Stopped ignition-files.service - Ignition (files).
Jan 29 11:14:22.906907 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
Jan 29 11:14:22.908491 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
Jan 29 11:14:22.909356 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Jan 29 11:14:22.909483 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
Jan 29 11:14:22.911388 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Jan 29 11:14:22.911492 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
Jan 29 11:14:22.917337 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Jan 29 11:14:22.917430 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
Jan 29 11:14:22.921289 ignition[995]: INFO : Ignition 2.20.0
Jan 29 11:14:22.921289 ignition[995]: INFO : Stage: umount
Jan 29 11:14:22.921289 ignition[995]: INFO : no configs at "/usr/lib/ignition/base.d"
Jan 29 11:14:22.921289 ignition[995]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Jan 29 11:14:22.921289 ignition[995]: INFO : umount: umount passed
Jan 29 11:14:22.921289 ignition[995]: INFO : Ignition finished successfully
Jan 29 11:14:22.920433 systemd[1]: ignition-mount.service: Deactivated successfully.
Jan 29 11:14:22.920514 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
Jan 29 11:14:22.923679 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Jan 29 11:14:22.924490 systemd[1]: Stopped target network.target - Network.
Jan 29 11:14:22.925598 systemd[1]: ignition-disks.service: Deactivated successfully.
Jan 29 11:14:22.925663 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
Jan 29 11:14:22.927646 systemd[1]: ignition-kargs.service: Deactivated successfully.
Jan 29 11:14:22.927690 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
Jan 29 11:14:22.929294 systemd[1]: ignition-setup.service: Deactivated successfully.
Jan 29 11:14:22.929335 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
Jan 29 11:14:22.931029 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
Jan 29 11:14:22.931072 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
Jan 29 11:14:22.933050 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
Jan 29 11:14:22.934561 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
Jan 29 11:14:22.936561 systemd[1]: sysroot-boot.service: Deactivated successfully.
Jan 29 11:14:22.936655 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
Jan 29 11:14:22.938185 systemd[1]: systemd-resolved.service: Deactivated successfully.
Jan 29 11:14:22.938280 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
Jan 29 11:14:22.939780 systemd-networkd[765]: eth0: DHCPv6 lease lost
Jan 29 11:14:22.941740 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Jan 29 11:14:22.941799 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
Jan 29 11:14:22.943455 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Jan 29 11:14:22.943498 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
Jan 29 11:14:22.945515 systemd[1]: systemd-networkd.service: Deactivated successfully.
Jan 29 11:14:22.945628 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
Jan 29 11:14:22.947644 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Jan 29 11:14:22.947698 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
Jan 29 11:14:22.956822 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
Jan 29 11:14:22.958193 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Jan 29 11:14:22.958257 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Jan 29 11:14:22.959983 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Jan 29 11:14:22.960027 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Jan 29 11:14:22.961688 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Jan 29 11:14:22.961750 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
Jan 29 11:14:22.963770 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
Jan 29 11:14:22.975256 systemd[1]: network-cleanup.service: Deactivated successfully.
Jan 29 11:14:22.975443 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
Jan 29 11:14:22.988459 systemd[1]: systemd-udevd.service: Deactivated successfully. Jan 29 11:14:22.988634 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Jan 29 11:14:22.990849 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Jan 29 11:14:22.990890 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Jan 29 11:14:22.992622 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Jan 29 11:14:22.992652 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Jan 29 11:14:22.994459 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Jan 29 11:14:22.994506 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Jan 29 11:14:22.996991 systemd[1]: dracut-cmdline.service: Deactivated successfully. Jan 29 11:14:22.997032 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Jan 29 11:14:22.998795 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Jan 29 11:14:22.998835 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jan 29 11:14:23.006948 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Jan 29 11:14:23.007960 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Jan 29 11:14:23.008014 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jan 29 11:14:23.010158 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully. Jan 29 11:14:23.010201 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Jan 29 11:14:23.012147 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Jan 29 11:14:23.012191 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Jan 29 11:14:23.014343 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. 
Jan 29 11:14:23.014384 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jan 29 11:14:23.016709 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Jan 29 11:14:23.016816 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Jan 29 11:14:23.019078 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Jan 29 11:14:23.021206 systemd[1]: Starting initrd-switch-root.service - Switch Root... Jan 29 11:14:23.030003 systemd[1]: Switching root. Jan 29 11:14:23.059826 systemd-journald[239]: Journal stopped Jan 29 11:14:23.786145 systemd-journald[239]: Received SIGTERM from PID 1 (systemd). Jan 29 11:14:23.786200 kernel: SELinux: policy capability network_peer_controls=1 Jan 29 11:14:23.786212 kernel: SELinux: policy capability open_perms=1 Jan 29 11:14:23.786225 kernel: SELinux: policy capability extended_socket_class=1 Jan 29 11:14:23.786234 kernel: SELinux: policy capability always_check_network=0 Jan 29 11:14:23.786244 kernel: SELinux: policy capability cgroup_seclabel=1 Jan 29 11:14:23.786253 kernel: SELinux: policy capability nnp_nosuid_transition=1 Jan 29 11:14:23.786262 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Jan 29 11:14:23.786271 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Jan 29 11:14:23.786283 systemd[1]: Successfully loaded SELinux policy in 32.454ms. Jan 29 11:14:23.786300 kernel: audit: type=1403 audit(1738149263.200:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 Jan 29 11:14:23.786310 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 9.367ms. 
Jan 29 11:14:23.786322 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Jan 29 11:14:23.786332 systemd[1]: Detected virtualization kvm. Jan 29 11:14:23.786342 systemd[1]: Detected architecture arm64. Jan 29 11:14:23.786353 systemd[1]: Detected first boot. Jan 29 11:14:23.786363 systemd[1]: Initializing machine ID from VM UUID. Jan 29 11:14:23.786375 zram_generator::config[1041]: No configuration found. Jan 29 11:14:23.786388 systemd[1]: Populated /etc with preset unit settings. Jan 29 11:14:23.786398 systemd[1]: initrd-switch-root.service: Deactivated successfully. Jan 29 11:14:23.786409 systemd[1]: Stopped initrd-switch-root.service - Switch Root. Jan 29 11:14:23.786423 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Jan 29 11:14:23.786434 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Jan 29 11:14:23.786445 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Jan 29 11:14:23.786456 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Jan 29 11:14:23.786466 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Jan 29 11:14:23.786478 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Jan 29 11:14:23.786488 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Jan 29 11:14:23.786498 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Jan 29 11:14:23.786508 systemd[1]: Created slice user.slice - User and Session Slice. 
Jan 29 11:14:23.786519 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jan 29 11:14:23.786529 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jan 29 11:14:23.786548 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Jan 29 11:14:23.786560 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Jan 29 11:14:23.786571 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. Jan 29 11:14:23.786587 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Jan 29 11:14:23.786598 systemd[1]: Expecting device dev-ttyAMA0.device - /dev/ttyAMA0... Jan 29 11:14:23.786608 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jan 29 11:14:23.786619 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. Jan 29 11:14:23.786629 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. Jan 29 11:14:23.786640 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. Jan 29 11:14:23.786650 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Jan 29 11:14:23.786662 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Jan 29 11:14:23.786672 systemd[1]: Reached target remote-fs.target - Remote File Systems. Jan 29 11:14:23.786682 systemd[1]: Reached target slices.target - Slice Units. Jan 29 11:14:23.786692 systemd[1]: Reached target swap.target - Swaps. Jan 29 11:14:23.786703 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Jan 29 11:14:23.786713 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Jan 29 11:14:23.786782 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. 
Jan 29 11:14:23.786798 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Jan 29 11:14:23.786808 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Jan 29 11:14:23.786819 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Jan 29 11:14:23.786832 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Jan 29 11:14:23.786843 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Jan 29 11:14:23.786853 systemd[1]: Mounting media.mount - External Media Directory... Jan 29 11:14:23.786863 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Jan 29 11:14:23.786873 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Jan 29 11:14:23.786884 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... Jan 29 11:14:23.786894 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Jan 29 11:14:23.786906 systemd[1]: Reached target machines.target - Containers. Jan 29 11:14:23.786918 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Jan 29 11:14:23.786929 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 29 11:14:23.786939 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Jan 29 11:14:23.786950 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Jan 29 11:14:23.786960 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jan 29 11:14:23.786970 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Jan 29 11:14:23.786980 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... 
Jan 29 11:14:23.786990 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Jan 29 11:14:23.787000 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jan 29 11:14:23.787012 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Jan 29 11:14:23.787022 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Jan 29 11:14:23.787033 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. Jan 29 11:14:23.787043 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Jan 29 11:14:23.787053 systemd[1]: Stopped systemd-fsck-usr.service. Jan 29 11:14:23.787063 systemd[1]: Starting systemd-journald.service - Journal Service... Jan 29 11:14:23.787073 kernel: fuse: init (API version 7.39) Jan 29 11:14:23.787083 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Jan 29 11:14:23.787092 kernel: ACPI: bus type drm_connector registered Jan 29 11:14:23.787103 kernel: loop: module loaded Jan 29 11:14:23.787113 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Jan 29 11:14:23.787127 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Jan 29 11:14:23.787137 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Jan 29 11:14:23.787147 systemd[1]: verity-setup.service: Deactivated successfully. Jan 29 11:14:23.787157 systemd[1]: Stopped verity-setup.service. Jan 29 11:14:23.787186 systemd-journald[1112]: Collecting audit messages is disabled. Jan 29 11:14:23.787213 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Jan 29 11:14:23.787224 systemd-journald[1112]: Journal started Jan 29 11:14:23.787249 systemd-journald[1112]: Runtime Journal (/run/log/journal/7bd8db0b14c74af9ba8235c18b7f94bc) is 5.9M, max 47.3M, 41.4M free. 
Jan 29 11:14:23.585147 systemd[1]: Queued start job for default target multi-user.target. Jan 29 11:14:23.604586 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6. Jan 29 11:14:23.604953 systemd[1]: systemd-journald.service: Deactivated successfully. Jan 29 11:14:23.790475 systemd[1]: Started systemd-journald.service - Journal Service. Jan 29 11:14:23.791066 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Jan 29 11:14:23.792278 systemd[1]: Mounted media.mount - External Media Directory. Jan 29 11:14:23.793380 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Jan 29 11:14:23.794601 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Jan 29 11:14:23.795826 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Jan 29 11:14:23.797008 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Jan 29 11:14:23.798381 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Jan 29 11:14:23.799871 systemd[1]: modprobe@configfs.service: Deactivated successfully. Jan 29 11:14:23.800001 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Jan 29 11:14:23.801372 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jan 29 11:14:23.801523 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jan 29 11:14:23.802933 systemd[1]: modprobe@drm.service: Deactivated successfully. Jan 29 11:14:23.803071 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Jan 29 11:14:23.804400 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jan 29 11:14:23.804528 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jan 29 11:14:23.805972 systemd[1]: modprobe@fuse.service: Deactivated successfully. Jan 29 11:14:23.806097 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. 
Jan 29 11:14:23.807551 systemd[1]: modprobe@loop.service: Deactivated successfully. Jan 29 11:14:23.807701 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jan 29 11:14:23.809033 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Jan 29 11:14:23.810507 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Jan 29 11:14:23.812007 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Jan 29 11:14:23.823881 systemd[1]: Reached target network-pre.target - Preparation for Network. Jan 29 11:14:23.838880 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Jan 29 11:14:23.840987 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Jan 29 11:14:23.842128 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Jan 29 11:14:23.842174 systemd[1]: Reached target local-fs.target - Local File Systems. Jan 29 11:14:23.844064 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink). Jan 29 11:14:23.846322 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Jan 29 11:14:23.848477 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Jan 29 11:14:23.849653 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 29 11:14:23.851104 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Jan 29 11:14:23.853036 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Jan 29 11:14:23.854260 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). 
Jan 29 11:14:23.857889 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Jan 29 11:14:23.859026 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Jan 29 11:14:23.861184 systemd-journald[1112]: Time spent on flushing to /var/log/journal/7bd8db0b14c74af9ba8235c18b7f94bc is 13.862ms for 854 entries. Jan 29 11:14:23.861184 systemd-journald[1112]: System Journal (/var/log/journal/7bd8db0b14c74af9ba8235c18b7f94bc) is 8.0M, max 195.6M, 187.6M free. Jan 29 11:14:23.888652 systemd-journald[1112]: Received client request to flush runtime journal. Jan 29 11:14:23.888691 kernel: loop0: detected capacity change from 0 to 116808 Jan 29 11:14:23.862891 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jan 29 11:14:23.865031 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Jan 29 11:14:23.870945 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Jan 29 11:14:23.873663 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Jan 29 11:14:23.875239 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Jan 29 11:14:23.876619 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Jan 29 11:14:23.878180 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Jan 29 11:14:23.879971 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Jan 29 11:14:23.885231 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Jan 29 11:14:23.896915 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk... 
Jan 29 11:14:23.901759 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Jan 29 11:14:23.902958 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization... Jan 29 11:14:23.910125 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Jan 29 11:14:23.912854 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Jan 29 11:14:23.917699 systemd-tmpfiles[1153]: ACLs are not supported, ignoring. Jan 29 11:14:23.917714 systemd-tmpfiles[1153]: ACLs are not supported, ignoring. Jan 29 11:14:23.921891 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Jan 29 11:14:23.929966 systemd[1]: Starting systemd-sysusers.service - Create System Users... Jan 29 11:14:23.931769 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Jan 29 11:14:23.932623 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk. Jan 29 11:14:23.934861 udevadm[1164]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation.service, lvm2-activation-early.service not to pull it in. Jan 29 11:14:23.952391 systemd[1]: Finished systemd-sysusers.service - Create System Users. Jan 29 11:14:23.958944 kernel: loop1: detected capacity change from 0 to 189592 Jan 29 11:14:23.963595 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Jan 29 11:14:23.976117 systemd-tmpfiles[1175]: ACLs are not supported, ignoring. Jan 29 11:14:23.976414 systemd-tmpfiles[1175]: ACLs are not supported, ignoring. Jan 29 11:14:23.980243 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. 
Jan 29 11:14:24.004992 kernel: loop2: detected capacity change from 0 to 113536 Jan 29 11:14:24.042763 kernel: loop3: detected capacity change from 0 to 116808 Jan 29 11:14:24.051768 kernel: loop4: detected capacity change from 0 to 189592 Jan 29 11:14:24.068752 kernel: loop5: detected capacity change from 0 to 113536 Jan 29 11:14:24.086330 (sd-merge)[1181]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes'. Jan 29 11:14:24.086811 (sd-merge)[1181]: Merged extensions into '/usr'. Jan 29 11:14:24.092589 systemd[1]: Reloading requested from client PID 1152 ('systemd-sysext') (unit systemd-sysext.service)... Jan 29 11:14:24.092608 systemd[1]: Reloading... Jan 29 11:14:24.129120 zram_generator::config[1203]: No configuration found. Jan 29 11:14:24.191048 ldconfig[1147]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Jan 29 11:14:24.232511 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jan 29 11:14:24.269855 systemd[1]: Reloading finished in 176 ms. Jan 29 11:14:24.301287 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Jan 29 11:14:24.303128 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Jan 29 11:14:24.315085 systemd[1]: Starting ensure-sysext.service... Jan 29 11:14:24.317248 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Jan 29 11:14:24.326832 systemd[1]: Reloading requested from client PID 1241 ('systemctl') (unit ensure-sysext.service)... Jan 29 11:14:24.326846 systemd[1]: Reloading... Jan 29 11:14:24.342959 systemd-tmpfiles[1242]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. 
Jan 29 11:14:24.343210 systemd-tmpfiles[1242]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Jan 29 11:14:24.343859 systemd-tmpfiles[1242]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Jan 29 11:14:24.344073 systemd-tmpfiles[1242]: ACLs are not supported, ignoring. Jan 29 11:14:24.344117 systemd-tmpfiles[1242]: ACLs are not supported, ignoring. Jan 29 11:14:24.347259 systemd-tmpfiles[1242]: Detected autofs mount point /boot during canonicalization of boot. Jan 29 11:14:24.347376 systemd-tmpfiles[1242]: Skipping /boot Jan 29 11:14:24.354781 systemd-tmpfiles[1242]: Detected autofs mount point /boot during canonicalization of boot. Jan 29 11:14:24.354881 systemd-tmpfiles[1242]: Skipping /boot Jan 29 11:14:24.373883 zram_generator::config[1267]: No configuration found. Jan 29 11:14:24.458720 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jan 29 11:14:24.493860 systemd[1]: Reloading finished in 166 ms. Jan 29 11:14:24.507559 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Jan 29 11:14:24.520149 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Jan 29 11:14:24.527741 systemd[1]: Starting audit-rules.service - Load Audit Rules... Jan 29 11:14:24.529861 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Jan 29 11:14:24.532273 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Jan 29 11:14:24.537981 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Jan 29 11:14:24.543082 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... 
Jan 29 11:14:24.546096 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Jan 29 11:14:24.549480 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 29 11:14:24.551143 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jan 29 11:14:24.557875 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jan 29 11:14:24.565567 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jan 29 11:14:24.568961 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 29 11:14:24.570308 systemd-udevd[1316]: Using default interface naming scheme 'v255'. Jan 29 11:14:24.571118 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Jan 29 11:14:24.573972 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Jan 29 11:14:24.577165 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jan 29 11:14:24.577300 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jan 29 11:14:24.578817 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jan 29 11:14:24.578934 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jan 29 11:14:24.580823 systemd[1]: modprobe@loop.service: Deactivated successfully. Jan 29 11:14:24.580975 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jan 29 11:14:24.591804 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Jan 29 11:14:24.595418 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 29 11:14:24.608053 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... 
Jan 29 11:14:24.614899 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Jan 29 11:14:24.617894 augenrules[1356]: No rules Jan 29 11:14:24.619632 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jan 29 11:14:24.622965 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jan 29 11:14:24.624098 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 29 11:14:24.625371 systemd[1]: Starting systemd-update-done.service - Update is Completed... Jan 29 11:14:24.629362 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Jan 29 11:14:24.631058 systemd[1]: Started systemd-userdbd.service - User Database Manager. Jan 29 11:14:24.632660 systemd[1]: audit-rules.service: Deactivated successfully. Jan 29 11:14:24.632856 systemd[1]: Finished audit-rules.service - Load Audit Rules. Jan 29 11:14:24.634310 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Jan 29 11:14:24.639208 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jan 29 11:14:24.639360 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jan 29 11:14:24.641095 systemd[1]: modprobe@drm.service: Deactivated successfully. Jan 29 11:14:24.641227 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Jan 29 11:14:24.646508 systemd[1]: Finished ensure-sysext.service. Jan 29 11:14:24.647756 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 39 scanned by (udev-worker) (1347) Jan 29 11:14:24.652526 systemd[1]: Finished systemd-update-done.service - Update is Completed. Jan 29 11:14:24.657902 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jan 29 11:14:24.658057 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. 
Jan 29 11:14:24.662588 systemd[1]: modprobe@loop.service: Deactivated successfully. Jan 29 11:14:24.662761 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jan 29 11:14:24.673239 systemd[1]: Condition check resulted in dev-ttyAMA0.device - /dev/ttyAMA0 being skipped. Jan 29 11:14:24.683902 systemd[1]: Starting systemd-networkd.service - Network Configuration... Jan 29 11:14:24.687703 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jan 29 11:14:24.687836 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Jan 29 11:14:24.690931 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization... Jan 29 11:14:24.691956 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Jan 29 11:14:24.697905 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Jan 29 11:14:24.706031 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Jan 29 11:14:24.715523 systemd-resolved[1309]: Positive Trust Anchors: Jan 29 11:14:24.717579 systemd-resolved[1309]: . 
IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Jan 29 11:14:24.717677 systemd-resolved[1309]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Jan 29 11:14:24.730443 systemd-resolved[1309]: Defaulting to hostname 'linux'. Jan 29 11:14:24.730714 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Jan 29 11:14:24.739010 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Jan 29 11:14:24.741909 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Jan 29 11:14:24.772877 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 29 11:14:24.774190 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization. Jan 29 11:14:24.776761 systemd[1]: Reached target time-set.target - System Time Set. Jan 29 11:14:24.782681 systemd-networkd[1384]: lo: Link UP Jan 29 11:14:24.782693 systemd-networkd[1384]: lo: Gained carrier Jan 29 11:14:24.783473 systemd-networkd[1384]: Enumeration completed Jan 29 11:14:24.783656 systemd[1]: Started systemd-networkd.service - Network Configuration. Jan 29 11:14:24.783956 systemd-networkd[1384]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jan 29 11:14:24.783964 systemd-networkd[1384]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. 
Jan 29 11:14:24.784697 systemd-networkd[1384]: eth0: Link UP Jan 29 11:14:24.784704 systemd-networkd[1384]: eth0: Gained carrier Jan 29 11:14:24.784717 systemd-networkd[1384]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jan 29 11:14:24.785120 systemd[1]: Reached target network.target - Network. Jan 29 11:14:24.787340 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Jan 29 11:14:24.788952 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization. Jan 29 11:14:24.792423 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes... Jan 29 11:14:24.804819 systemd-networkd[1384]: eth0: DHCPv4 address 10.0.0.125/16, gateway 10.0.0.1 acquired from 10.0.0.1 Jan 29 11:14:24.808087 systemd-timesyncd[1387]: Network configuration changed, trying to establish connection. Jan 29 11:14:24.808645 systemd-timesyncd[1387]: Contacted time server 10.0.0.1:123 (10.0.0.1). Jan 29 11:14:24.808697 systemd-timesyncd[1387]: Initial clock synchronization to Wed 2025-01-29 11:14:24.961525 UTC. Jan 29 11:14:24.818404 lvm[1403]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Jan 29 11:14:24.820380 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jan 29 11:14:24.857375 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes. Jan 29 11:14:24.858944 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Jan 29 11:14:24.860044 systemd[1]: Reached target sysinit.target - System Initialization. Jan 29 11:14:24.861197 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Jan 29 11:14:24.862516 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. 
Jan 29 11:14:24.863973 systemd[1]: Started logrotate.timer - Daily rotation of log files. Jan 29 11:14:24.865105 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Jan 29 11:14:24.866390 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Jan 29 11:14:24.867602 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Jan 29 11:14:24.867639 systemd[1]: Reached target paths.target - Path Units. Jan 29 11:14:24.868581 systemd[1]: Reached target timers.target - Timer Units. Jan 29 11:14:24.870497 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Jan 29 11:14:24.872961 systemd[1]: Starting docker.socket - Docker Socket for the API... Jan 29 11:14:24.881719 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Jan 29 11:14:24.883987 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes... Jan 29 11:14:24.885748 systemd[1]: Listening on docker.socket - Docker Socket for the API. Jan 29 11:14:24.886996 systemd[1]: Reached target sockets.target - Socket Units. Jan 29 11:14:24.887939 systemd[1]: Reached target basic.target - Basic System. Jan 29 11:14:24.888869 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Jan 29 11:14:24.888903 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Jan 29 11:14:24.889863 systemd[1]: Starting containerd.service - containerd container runtime... Jan 29 11:14:24.891868 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Jan 29 11:14:24.892880 lvm[1410]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Jan 29 11:14:24.895857 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... 
Jan 29 11:14:24.897965 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Jan 29 11:14:24.899384 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Jan 29 11:14:24.902911 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Jan 29 11:14:24.904136 jq[1413]: false Jan 29 11:14:24.906014 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Jan 29 11:14:24.909003 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Jan 29 11:14:24.912658 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Jan 29 11:14:24.916851 systemd[1]: Starting systemd-logind.service - User Login Management... Jan 29 11:14:24.917804 extend-filesystems[1414]: Found loop3 Jan 29 11:14:24.918862 extend-filesystems[1414]: Found loop4 Jan 29 11:14:24.920058 extend-filesystems[1414]: Found loop5 Jan 29 11:14:24.920058 extend-filesystems[1414]: Found vda Jan 29 11:14:24.920058 extend-filesystems[1414]: Found vda1 Jan 29 11:14:24.920058 extend-filesystems[1414]: Found vda2 Jan 29 11:14:24.920058 extend-filesystems[1414]: Found vda3 Jan 29 11:14:24.920058 extend-filesystems[1414]: Found usr Jan 29 11:14:24.920058 extend-filesystems[1414]: Found vda4 Jan 29 11:14:24.920058 extend-filesystems[1414]: Found vda6 Jan 29 11:14:24.920058 extend-filesystems[1414]: Found vda7 Jan 29 11:14:24.920058 extend-filesystems[1414]: Found vda9 Jan 29 11:14:24.920058 extend-filesystems[1414]: Checking size of /dev/vda9 Jan 29 11:14:24.919271 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Jan 29 11:14:24.941793 extend-filesystems[1414]: Resized partition /dev/vda9 Jan 29 11:14:24.919699 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. 
See cgroup-compat debug messages for details. Jan 29 11:14:24.920843 systemd[1]: Starting update-engine.service - Update Engine... Jan 29 11:14:24.924189 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Jan 29 11:14:24.926255 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes. Jan 29 11:14:24.943148 jq[1428]: true Jan 29 11:14:24.931372 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Jan 29 11:14:24.932766 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Jan 29 11:14:24.933057 systemd[1]: motdgen.service: Deactivated successfully. Jan 29 11:14:24.933783 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Jan 29 11:14:24.937611 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Jan 29 11:14:24.938005 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Jan 29 11:14:24.958207 extend-filesystems[1438]: resize2fs 1.47.1 (20-May-2024) Jan 29 11:14:24.974635 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 1864699 blocks Jan 29 11:14:24.974699 tar[1432]: linux-arm64/helm Jan 29 11:14:24.958426 dbus-daemon[1412]: [system] SELinux support is enabled Jan 29 11:14:24.960101 systemd[1]: Started dbus.service - D-Bus System Message Bus. Jan 29 11:14:24.975177 jq[1436]: true Jan 29 11:14:24.972531 (ntainerd)[1444]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Jan 29 11:14:24.973584 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Jan 29 11:14:24.973612 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. 
Jan 29 11:14:24.976946 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Jan 29 11:14:24.977004 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Jan 29 11:14:24.989459 update_engine[1427]: I20250129 11:14:24.989317 1427 main.cc:92] Flatcar Update Engine starting Jan 29 11:14:24.996749 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 39 scanned by (udev-worker) (1342) Jan 29 11:14:25.002037 kernel: EXT4-fs (vda9): resized filesystem to 1864699 Jan 29 11:14:25.003489 systemd[1]: Started update-engine.service - Update Engine. Jan 29 11:14:25.016239 extend-filesystems[1438]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required Jan 29 11:14:25.016239 extend-filesystems[1438]: old_desc_blocks = 1, new_desc_blocks = 1 Jan 29 11:14:25.016239 extend-filesystems[1438]: The filesystem on /dev/vda9 is now 1864699 (4k) blocks long. Jan 29 11:14:25.027870 update_engine[1427]: I20250129 11:14:25.003903 1427 update_check_scheduler.cc:74] Next update check in 9m0s Jan 29 11:14:25.017999 systemd[1]: Started locksmithd.service - Cluster reboot manager. Jan 29 11:14:25.028013 extend-filesystems[1414]: Resized filesystem in /dev/vda9 Jan 29 11:14:25.019833 systemd[1]: extend-filesystems.service: Deactivated successfully. Jan 29 11:14:25.021091 systemd-logind[1425]: Watching system buttons on /dev/input/event0 (Power Button) Jan 29 11:14:25.021784 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Jan 29 11:14:25.023367 systemd-logind[1425]: New seat seat0. Jan 29 11:14:25.029616 systemd[1]: Started systemd-logind.service - User Login Management. Jan 29 11:14:25.086879 bash[1468]: Updated "/home/core/.ssh/authorized_keys" Jan 29 11:14:25.091857 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. 
Jan 29 11:14:25.094420 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met. Jan 29 11:14:25.094851 locksmithd[1454]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Jan 29 11:14:25.209267 containerd[1444]: time="2025-01-29T11:14:25.209169056Z" level=info msg="starting containerd" revision=9b2ad7760328148397346d10c7b2004271249db4 version=v1.7.23 Jan 29 11:14:25.236968 containerd[1444]: time="2025-01-29T11:14:25.236844248Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Jan 29 11:14:25.238530 containerd[1444]: time="2025-01-29T11:14:25.238302405Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.74-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Jan 29 11:14:25.238530 containerd[1444]: time="2025-01-29T11:14:25.238342558Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Jan 29 11:14:25.238530 containerd[1444]: time="2025-01-29T11:14:25.238363349Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Jan 29 11:14:25.238530 containerd[1444]: time="2025-01-29T11:14:25.238518540Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1 Jan 29 11:14:25.238530 containerd[1444]: time="2025-01-29T11:14:25.238535988Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1 Jan 29 11:14:25.238695 containerd[1444]: time="2025-01-29T11:14:25.238588656Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." 
error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1 Jan 29 11:14:25.238695 containerd[1444]: time="2025-01-29T11:14:25.238603086Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Jan 29 11:14:25.238818 containerd[1444]: time="2025-01-29T11:14:25.238792602Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Jan 29 11:14:25.238818 containerd[1444]: time="2025-01-29T11:14:25.238815593Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Jan 29 11:14:25.238874 containerd[1444]: time="2025-01-29T11:14:25.238829290Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1 Jan 29 11:14:25.238874 containerd[1444]: time="2025-01-29T11:14:25.238840011Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Jan 29 11:14:25.238977 containerd[1444]: time="2025-01-29T11:14:25.238931528Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Jan 29 11:14:25.239169 containerd[1444]: time="2025-01-29T11:14:25.239148193Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Jan 29 11:14:25.239291 containerd[1444]: time="2025-01-29T11:14:25.239255120Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." 
error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Jan 29 11:14:25.239291 containerd[1444]: time="2025-01-29T11:14:25.239271303Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Jan 29 11:14:25.239379 containerd[1444]: time="2025-01-29T11:14:25.239361638Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 Jan 29 11:14:25.239423 containerd[1444]: time="2025-01-29T11:14:25.239411779Z" level=info msg="metadata content store policy set" policy=shared Jan 29 11:14:25.242797 containerd[1444]: time="2025-01-29T11:14:25.242767905Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Jan 29 11:14:25.242987 containerd[1444]: time="2025-01-29T11:14:25.242816986Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Jan 29 11:14:25.242987 containerd[1444]: time="2025-01-29T11:14:25.242833700Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1 Jan 29 11:14:25.242987 containerd[1444]: time="2025-01-29T11:14:25.242849272Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1 Jan 29 11:14:25.242987 containerd[1444]: time="2025-01-29T11:14:25.242865374Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Jan 29 11:14:25.243099 containerd[1444]: time="2025-01-29T11:14:25.243002751Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Jan 29 11:14:25.243308 containerd[1444]: time="2025-01-29T11:14:25.243250846Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." 
type=io.containerd.runtime.v2 Jan 29 11:14:25.243371 containerd[1444]: time="2025-01-29T11:14:25.243353573Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2 Jan 29 11:14:25.243394 containerd[1444]: time="2025-01-29T11:14:25.243375097Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1 Jan 29 11:14:25.243394 containerd[1444]: time="2025-01-29T11:14:25.243390139Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1 Jan 29 11:14:25.243433 containerd[1444]: time="2025-01-29T11:14:25.243404815Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Jan 29 11:14:25.243433 containerd[1444]: time="2025-01-29T11:14:25.243417737Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Jan 29 11:14:25.243433 containerd[1444]: time="2025-01-29T11:14:25.243429437Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Jan 29 11:14:25.243490 containerd[1444]: time="2025-01-29T11:14:25.243442196Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Jan 29 11:14:25.243490 containerd[1444]: time="2025-01-29T11:14:25.243456912Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Jan 29 11:14:25.243490 containerd[1444]: time="2025-01-29T11:14:25.243469060Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Jan 29 11:14:25.243490 containerd[1444]: time="2025-01-29T11:14:25.243480719Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." 
type=io.containerd.service.v1 Jan 29 11:14:25.243585 containerd[1444]: time="2025-01-29T11:14:25.243491848Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Jan 29 11:14:25.243585 containerd[1444]: time="2025-01-29T11:14:25.243514187Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Jan 29 11:14:25.243585 containerd[1444]: time="2025-01-29T11:14:25.243527761Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Jan 29 11:14:25.243585 containerd[1444]: time="2025-01-29T11:14:25.243541173Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Jan 29 11:14:25.243585 containerd[1444]: time="2025-01-29T11:14:25.243553647Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Jan 29 11:14:25.243585 containerd[1444]: time="2025-01-29T11:14:25.243565306Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Jan 29 11:14:25.243585 containerd[1444]: time="2025-01-29T11:14:25.243579288Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Jan 29 11:14:25.243705 containerd[1444]: time="2025-01-29T11:14:25.243591762Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Jan 29 11:14:25.243705 containerd[1444]: time="2025-01-29T11:14:25.243604807Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Jan 29 11:14:25.243705 containerd[1444]: time="2025-01-29T11:14:25.243617240Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1 Jan 29 11:14:25.243705 containerd[1444]: time="2025-01-29T11:14:25.243631141Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." 
type=io.containerd.grpc.v1 Jan 29 11:14:25.243705 containerd[1444]: time="2025-01-29T11:14:25.243649159Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Jan 29 11:14:25.243705 containerd[1444]: time="2025-01-29T11:14:25.243661837Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1 Jan 29 11:14:25.243705 containerd[1444]: time="2025-01-29T11:14:25.243675371Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Jan 29 11:14:25.243705 containerd[1444]: time="2025-01-29T11:14:25.243690250Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1 Jan 29 11:14:25.243862 containerd[1444]: time="2025-01-29T11:14:25.243709735Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1 Jan 29 11:14:25.243862 containerd[1444]: time="2025-01-29T11:14:25.243722943Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Jan 29 11:14:25.243862 containerd[1444]: time="2025-01-29T11:14:25.243733583Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Jan 29 11:14:25.244059 containerd[1444]: time="2025-01-29T11:14:25.243974788Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Jan 29 11:14:25.244059 containerd[1444]: time="2025-01-29T11:14:25.244000592Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1 Jan 29 11:14:25.244059 containerd[1444]: time="2025-01-29T11:14:25.244012170Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." 
type=io.containerd.internal.v1 Jan 29 11:14:25.244059 containerd[1444]: time="2025-01-29T11:14:25.244024644Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1 Jan 29 11:14:25.244059 containerd[1444]: time="2025-01-29T11:14:25.244033327Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Jan 29 11:14:25.244059 containerd[1444]: time="2025-01-29T11:14:25.244046045Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1 Jan 29 11:14:25.244059 containerd[1444]: time="2025-01-29T11:14:25.244055788Z" level=info msg="NRI interface is disabled by configuration." Jan 29 11:14:25.244059 containerd[1444]: time="2025-01-29T11:14:25.244065898Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1 Jan 29 11:14:25.244489 containerd[1444]: time="2025-01-29T11:14:25.244436368Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 
Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Jan 29 11:14:25.244489 containerd[1444]: time="2025-01-29T11:14:25.244492216Z" level=info msg="Connect containerd service" Jan 29 11:14:25.244770 containerd[1444]: time="2025-01-29T11:14:25.244522708Z" level=info msg="using legacy CRI server" Jan 29 11:14:25.244770 containerd[1444]: time="2025-01-29T11:14:25.244529842Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Jan 29 11:14:25.244810 containerd[1444]: 
time="2025-01-29T11:14:25.244771496Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Jan 29 11:14:25.245422 containerd[1444]: time="2025-01-29T11:14:25.245393526Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Jan 29 11:14:25.246174 containerd[1444]: time="2025-01-29T11:14:25.245694207Z" level=info msg="Start subscribing containerd event" Jan 29 11:14:25.246174 containerd[1444]: time="2025-01-29T11:14:25.245760613Z" level=info msg="Start recovering state" Jan 29 11:14:25.246174 containerd[1444]: time="2025-01-29T11:14:25.245837863Z" level=info msg="Start event monitor" Jan 29 11:14:25.246174 containerd[1444]: time="2025-01-29T11:14:25.245850459Z" level=info msg="Start snapshots syncer" Jan 29 11:14:25.246174 containerd[1444]: time="2025-01-29T11:14:25.245860406Z" level=info msg="Start cni network conf syncer for default" Jan 29 11:14:25.246174 containerd[1444]: time="2025-01-29T11:14:25.245869537Z" level=info msg="Start streaming server" Jan 29 11:14:25.246525 containerd[1444]: time="2025-01-29T11:14:25.246493442Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Jan 29 11:14:25.246624 containerd[1444]: time="2025-01-29T11:14:25.246569020Z" level=info msg=serving... address=/run/containerd/containerd.sock Jan 29 11:14:25.246654 containerd[1444]: time="2025-01-29T11:14:25.246634692Z" level=info msg="containerd successfully booted in 0.040971s" Jan 29 11:14:25.247813 systemd[1]: Started containerd.service - containerd container runtime. Jan 29 11:14:25.365926 tar[1432]: linux-arm64/LICENSE Jan 29 11:14:25.366051 tar[1432]: linux-arm64/README.md Jan 29 11:14:25.381797 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. 
Jan 29 11:14:25.624925 sshd_keygen[1442]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Jan 29 11:14:25.643636 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Jan 29 11:14:25.651044 systemd[1]: Starting issuegen.service - Generate /run/issue... Jan 29 11:14:25.656335 systemd[1]: issuegen.service: Deactivated successfully. Jan 29 11:14:25.656516 systemd[1]: Finished issuegen.service - Generate /run/issue. Jan 29 11:14:25.659016 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Jan 29 11:14:25.672810 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Jan 29 11:14:25.676314 systemd[1]: Started getty@tty1.service - Getty on tty1. Jan 29 11:14:25.678964 systemd[1]: Started serial-getty@ttyAMA0.service - Serial Getty on ttyAMA0. Jan 29 11:14:25.680537 systemd[1]: Reached target getty.target - Login Prompts. Jan 29 11:14:26.395884 systemd-networkd[1384]: eth0: Gained IPv6LL Jan 29 11:14:26.401447 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Jan 29 11:14:26.403455 systemd[1]: Reached target network-online.target - Network is Online. Jan 29 11:14:26.415981 systemd[1]: Starting coreos-metadata.service - QEMU metadata agent... Jan 29 11:14:26.418393 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 29 11:14:26.420589 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Jan 29 11:14:26.436196 systemd[1]: coreos-metadata.service: Deactivated successfully. Jan 29 11:14:26.436411 systemd[1]: Finished coreos-metadata.service - QEMU metadata agent. Jan 29 11:14:26.438195 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Jan 29 11:14:26.440346 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Jan 29 11:14:26.917316 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Jan 29 11:14:26.918825 systemd[1]: Reached target multi-user.target - Multi-User System. Jan 29 11:14:26.920134 systemd[1]: Startup finished in 560ms (kernel) + 4.499s (initrd) + 3.752s (userspace) = 8.812s. Jan 29 11:14:26.920826 (kubelet)[1525]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 29 11:14:27.407378 kubelet[1525]: E0129 11:14:27.407224 1525 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 29 11:14:27.409770 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 29 11:14:27.409922 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 29 11:14:31.517436 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Jan 29 11:14:31.518559 systemd[1]: Started sshd@0-10.0.0.125:22-10.0.0.1:41754.service - OpenSSH per-connection server daemon (10.0.0.1:41754). Jan 29 11:14:31.573962 sshd[1538]: Accepted publickey for core from 10.0.0.1 port 41754 ssh2: RSA SHA256:Bq1DMYRFt3vwSJT5tcC1MQpWKmkwK1uKH+vc+Uts7DI Jan 29 11:14:31.575660 sshd-session[1538]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 11:14:31.584924 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Jan 29 11:14:31.599983 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Jan 29 11:14:31.601935 systemd-logind[1425]: New session 1 of user core. Jan 29 11:14:31.608913 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Jan 29 11:14:31.611117 systemd[1]: Starting user@500.service - User Manager for UID 500... 
Jan 29 11:14:31.617663 (systemd)[1542]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Jan 29 11:14:31.687800 systemd[1542]: Queued start job for default target default.target. Jan 29 11:14:31.702781 systemd[1542]: Created slice app.slice - User Application Slice. Jan 29 11:14:31.702831 systemd[1542]: Reached target paths.target - Paths. Jan 29 11:14:31.702844 systemd[1542]: Reached target timers.target - Timers. Jan 29 11:14:31.704148 systemd[1542]: Starting dbus.socket - D-Bus User Message Bus Socket... Jan 29 11:14:31.713713 systemd[1542]: Listening on dbus.socket - D-Bus User Message Bus Socket. Jan 29 11:14:31.713784 systemd[1542]: Reached target sockets.target - Sockets. Jan 29 11:14:31.713796 systemd[1542]: Reached target basic.target - Basic System. Jan 29 11:14:31.713832 systemd[1542]: Reached target default.target - Main User Target. Jan 29 11:14:31.713866 systemd[1542]: Startup finished in 91ms. Jan 29 11:14:31.714154 systemd[1]: Started user@500.service - User Manager for UID 500. Jan 29 11:14:31.715418 systemd[1]: Started session-1.scope - Session 1 of User core. Jan 29 11:14:31.781082 systemd[1]: Started sshd@1-10.0.0.125:22-10.0.0.1:41768.service - OpenSSH per-connection server daemon (10.0.0.1:41768). Jan 29 11:14:31.829951 sshd[1553]: Accepted publickey for core from 10.0.0.1 port 41768 ssh2: RSA SHA256:Bq1DMYRFt3vwSJT5tcC1MQpWKmkwK1uKH+vc+Uts7DI Jan 29 11:14:31.831324 sshd-session[1553]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 11:14:31.835644 systemd-logind[1425]: New session 2 of user core. Jan 29 11:14:31.846919 systemd[1]: Started session-2.scope - Session 2 of User core. Jan 29 11:14:31.899769 sshd[1555]: Connection closed by 10.0.0.1 port 41768 Jan 29 11:14:31.900250 sshd-session[1553]: pam_unix(sshd:session): session closed for user core Jan 29 11:14:31.909384 systemd[1]: sshd@1-10.0.0.125:22-10.0.0.1:41768.service: Deactivated successfully. 
Jan 29 11:14:31.910954 systemd[1]: session-2.scope: Deactivated successfully. Jan 29 11:14:31.913359 systemd-logind[1425]: Session 2 logged out. Waiting for processes to exit. Jan 29 11:14:31.914960 systemd[1]: Started sshd@2-10.0.0.125:22-10.0.0.1:41784.service - OpenSSH per-connection server daemon (10.0.0.1:41784). Jan 29 11:14:31.915638 systemd-logind[1425]: Removed session 2. Jan 29 11:14:31.954152 sshd[1560]: Accepted publickey for core from 10.0.0.1 port 41784 ssh2: RSA SHA256:Bq1DMYRFt3vwSJT5tcC1MQpWKmkwK1uKH+vc+Uts7DI Jan 29 11:14:31.955327 sshd-session[1560]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 11:14:31.958744 systemd-logind[1425]: New session 3 of user core. Jan 29 11:14:31.964903 systemd[1]: Started session-3.scope - Session 3 of User core. Jan 29 11:14:32.013193 sshd[1562]: Connection closed by 10.0.0.1 port 41784 Jan 29 11:14:32.013705 sshd-session[1560]: pam_unix(sshd:session): session closed for user core Jan 29 11:14:32.022387 systemd[1]: sshd@2-10.0.0.125:22-10.0.0.1:41784.service: Deactivated successfully. Jan 29 11:14:32.025892 systemd[1]: session-3.scope: Deactivated successfully. Jan 29 11:14:32.027278 systemd-logind[1425]: Session 3 logged out. Waiting for processes to exit. Jan 29 11:14:32.028470 systemd[1]: Started sshd@3-10.0.0.125:22-10.0.0.1:41790.service - OpenSSH per-connection server daemon (10.0.0.1:41790). Jan 29 11:14:32.029364 systemd-logind[1425]: Removed session 3. Jan 29 11:14:32.067411 sshd[1567]: Accepted publickey for core from 10.0.0.1 port 41790 ssh2: RSA SHA256:Bq1DMYRFt3vwSJT5tcC1MQpWKmkwK1uKH+vc+Uts7DI Jan 29 11:14:32.068698 sshd-session[1567]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 11:14:32.073157 systemd-logind[1425]: New session 4 of user core. Jan 29 11:14:32.079926 systemd[1]: Started session-4.scope - Session 4 of User core. 
Jan 29 11:14:32.131678 sshd[1569]: Connection closed by 10.0.0.1 port 41790 Jan 29 11:14:32.131992 sshd-session[1567]: pam_unix(sshd:session): session closed for user core Jan 29 11:14:32.151021 systemd[1]: sshd@3-10.0.0.125:22-10.0.0.1:41790.service: Deactivated successfully. Jan 29 11:14:32.152428 systemd[1]: session-4.scope: Deactivated successfully. Jan 29 11:14:32.153661 systemd-logind[1425]: Session 4 logged out. Waiting for processes to exit. Jan 29 11:14:32.154717 systemd[1]: Started sshd@4-10.0.0.125:22-10.0.0.1:41798.service - OpenSSH per-connection server daemon (10.0.0.1:41798). Jan 29 11:14:32.155402 systemd-logind[1425]: Removed session 4. Jan 29 11:14:32.192353 sshd[1574]: Accepted publickey for core from 10.0.0.1 port 41798 ssh2: RSA SHA256:Bq1DMYRFt3vwSJT5tcC1MQpWKmkwK1uKH+vc+Uts7DI Jan 29 11:14:32.193602 sshd-session[1574]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 11:14:32.197393 systemd-logind[1425]: New session 5 of user core. Jan 29 11:14:32.215926 systemd[1]: Started session-5.scope - Session 5 of User core. Jan 29 11:14:32.297164 sudo[1577]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Jan 29 11:14:32.297442 sudo[1577]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 29 11:14:32.641953 systemd[1]: Starting docker.service - Docker Application Container Engine... Jan 29 11:14:32.642086 (dockerd)[1597]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Jan 29 11:14:32.890886 dockerd[1597]: time="2025-01-29T11:14:32.890828154Z" level=info msg="Starting up" Jan 29 11:14:33.031201 dockerd[1597]: time="2025-01-29T11:14:33.031099028Z" level=info msg="Loading containers: start." 
Jan 29 11:14:33.180884 kernel: Initializing XFRM netlink socket
Jan 29 11:14:33.259084 systemd-networkd[1384]: docker0: Link UP
Jan 29 11:14:33.294199 dockerd[1597]: time="2025-01-29T11:14:33.294073756Z" level=info msg="Loading containers: done."
Jan 29 11:14:33.308320 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck4080807557-merged.mount: Deactivated successfully.
Jan 29 11:14:33.309984 dockerd[1597]: time="2025-01-29T11:14:33.309567364Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2
Jan 29 11:14:33.309984 dockerd[1597]: time="2025-01-29T11:14:33.309669349Z" level=info msg="Docker daemon" commit=8b539b8df24032dabeaaa099cf1d0535ef0286a3 containerd-snapshotter=false storage-driver=overlay2 version=27.2.1
Jan 29 11:14:33.309984 dockerd[1597]: time="2025-01-29T11:14:33.309795211Z" level=info msg="Daemon has completed initialization"
Jan 29 11:14:33.340207 dockerd[1597]: time="2025-01-29T11:14:33.339635057Z" level=info msg="API listen on /run/docker.sock"
Jan 29 11:14:33.339820 systemd[1]: Started docker.service - Docker Application Container Engine.
Jan 29 11:14:33.875388 containerd[1444]: time="2025-01-29T11:14:33.875333908Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.31.5\""
Jan 29 11:14:34.608405 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount225127831.mount: Deactivated successfully.
Jan 29 11:14:35.965147 containerd[1444]: time="2025-01-29T11:14:35.965099464Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.31.5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 29 11:14:35.966125 containerd[1444]: time="2025-01-29T11:14:35.965509395Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.31.5: active requests=0, bytes read=25618072"
Jan 29 11:14:35.966668 containerd[1444]: time="2025-01-29T11:14:35.966637481Z" level=info msg="ImageCreate event name:\"sha256:c33b6b5a9aa5348a4f3ab96e0977e49acb8ca86c4ec3973023e12c0083423692\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 29 11:14:35.969767 containerd[1444]: time="2025-01-29T11:14:35.969684411Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:fc4b366c0036b90d147f3b58244cf7d5f1f42b0db539f0fe83a8fc6e25a434ab\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 29 11:14:35.971711 containerd[1444]: time="2025-01-29T11:14:35.971675376Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.31.5\" with image id \"sha256:c33b6b5a9aa5348a4f3ab96e0977e49acb8ca86c4ec3973023e12c0083423692\", repo tag \"registry.k8s.io/kube-apiserver:v1.31.5\", repo digest \"registry.k8s.io/kube-apiserver@sha256:fc4b366c0036b90d147f3b58244cf7d5f1f42b0db539f0fe83a8fc6e25a434ab\", size \"25614870\" in 2.096301339s"
Jan 29 11:14:35.971765 containerd[1444]: time="2025-01-29T11:14:35.971714049Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.31.5\" returns image reference \"sha256:c33b6b5a9aa5348a4f3ab96e0977e49acb8ca86c4ec3973023e12c0083423692\""
Jan 29 11:14:35.972355 containerd[1444]: time="2025-01-29T11:14:35.972328243Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.31.5\""
Jan 29 11:14:37.550556 containerd[1444]: time="2025-01-29T11:14:37.550496262Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.31.5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 29 11:14:37.550969 containerd[1444]: time="2025-01-29T11:14:37.550919807Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.31.5: active requests=0, bytes read=22469469"
Jan 29 11:14:37.551987 containerd[1444]: time="2025-01-29T11:14:37.551926630Z" level=info msg="ImageCreate event name:\"sha256:678a3aee724f5d7904c30cda32c06f842784d67e7bd0cece4225fa7c1dcd0c73\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 29 11:14:37.554911 containerd[1444]: time="2025-01-29T11:14:37.554881807Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:848cf42bf6c3c5ccac232b76c901c309edb3ebeac4d856885af0fc718798207e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 29 11:14:37.556227 containerd[1444]: time="2025-01-29T11:14:37.556170832Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.31.5\" with image id \"sha256:678a3aee724f5d7904c30cda32c06f842784d67e7bd0cece4225fa7c1dcd0c73\", repo tag \"registry.k8s.io/kube-controller-manager:v1.31.5\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:848cf42bf6c3c5ccac232b76c901c309edb3ebeac4d856885af0fc718798207e\", size \"23873257\" in 1.583811724s"
Jan 29 11:14:37.556227 containerd[1444]: time="2025-01-29T11:14:37.556216929Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.31.5\" returns image reference \"sha256:678a3aee724f5d7904c30cda32c06f842784d67e7bd0cece4225fa7c1dcd0c73\""
Jan 29 11:14:37.556940 containerd[1444]: time="2025-01-29T11:14:37.556634893Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.31.5\""
Jan 29 11:14:37.660204 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1.
Jan 29 11:14:37.669932 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jan 29 11:14:37.763911 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
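The containerd entries above record both wall-clock journal timestamps and an internally measured pull duration (e.g. "in 2.096301339s" for kube-apiserver). As a sanity check, a small sketch that computes the elapsed time between the PullImage and Pulled journal timestamps copied from the log (journald truncates to microseconds, so it lands within a fraction of a millisecond of containerd's own figure):

```python
from datetime import datetime, timezone

# Journal timestamps copied from the containerd entries above.
start = datetime(2025, 1, 29, 11, 14, 33, 875333, tzinfo=timezone.utc)  # PullImage kube-apiserver
done = datetime(2025, 1, 29, 11, 14, 35, 971675, tzinfo=timezone.utc)   # Pulled kube-apiserver

elapsed = (done - start).total_seconds()
print(f"pull took ~{elapsed:.3f}s")  # close to containerd's reported 2.096301339s
```

The small residual difference is expected: containerd times the pull internally, while the journal records when each message was emitted.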
Jan 29 11:14:37.767953 (kubelet)[1859]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Jan 29 11:14:37.803927 kubelet[1859]: E0129 11:14:37.803782 1859 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Jan 29 11:14:37.806895 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Jan 29 11:14:37.807048 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Jan 29 11:14:38.989247 containerd[1444]: time="2025-01-29T11:14:38.989184504Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.31.5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 29 11:14:38.989624 containerd[1444]: time="2025-01-29T11:14:38.989562212Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.31.5: active requests=0, bytes read=17024219"
Jan 29 11:14:38.990559 containerd[1444]: time="2025-01-29T11:14:38.990485473Z" level=info msg="ImageCreate event name:\"sha256:066a1dc527aec5b7c19bcf4b81f92b15816afc78e9713266d355333b7eb81050\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 29 11:14:38.993620 containerd[1444]: time="2025-01-29T11:14:38.993559357Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:0e01fd956ba32a7fa08f6b6da24e8c49015905c8e2cf752978d495e44cd4a8a9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 29 11:14:38.994825 containerd[1444]: time="2025-01-29T11:14:38.994768979Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.31.5\" with image id \"sha256:066a1dc527aec5b7c19bcf4b81f92b15816afc78e9713266d355333b7eb81050\", repo tag \"registry.k8s.io/kube-scheduler:v1.31.5\", repo digest \"registry.k8s.io/kube-scheduler@sha256:0e01fd956ba32a7fa08f6b6da24e8c49015905c8e2cf752978d495e44cd4a8a9\", size \"18428025\" in 1.438104978s"
Jan 29 11:14:38.994825 containerd[1444]: time="2025-01-29T11:14:38.994799120Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.31.5\" returns image reference \"sha256:066a1dc527aec5b7c19bcf4b81f92b15816afc78e9713266d355333b7eb81050\""
Jan 29 11:14:38.995995 containerd[1444]: time="2025-01-29T11:14:38.995954521Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.5\""
Jan 29 11:14:40.183510 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount347773672.mount: Deactivated successfully.
Jan 29 11:14:40.392638 containerd[1444]: time="2025-01-29T11:14:40.392583948Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.31.5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 29 11:14:40.393362 containerd[1444]: time="2025-01-29T11:14:40.393300753Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.31.5: active requests=0, bytes read=26772119"
Jan 29 11:14:40.394228 containerd[1444]: time="2025-01-29T11:14:40.394188237Z" level=info msg="ImageCreate event name:\"sha256:571bb7ded0ff97311ed313f069becb58480cd66da04175981cfee2f3affe3e95\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 29 11:14:40.396577 containerd[1444]: time="2025-01-29T11:14:40.396523286Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:c00685cc45c1fb539c5bbd8d24d2577f96e9399efac1670f688f654b30f8c64c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 29 11:14:40.397179 containerd[1444]: time="2025-01-29T11:14:40.397051525Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.31.5\" with image id \"sha256:571bb7ded0ff97311ed313f069becb58480cd66da04175981cfee2f3affe3e95\", repo tag \"registry.k8s.io/kube-proxy:v1.31.5\", repo digest \"registry.k8s.io/kube-proxy@sha256:c00685cc45c1fb539c5bbd8d24d2577f96e9399efac1670f688f654b30f8c64c\", size \"26771136\" in 1.401066508s"
Jan 29 11:14:40.397179 containerd[1444]: time="2025-01-29T11:14:40.397078835Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.5\" returns image reference \"sha256:571bb7ded0ff97311ed313f069becb58480cd66da04175981cfee2f3affe3e95\""
Jan 29 11:14:40.397568 containerd[1444]: time="2025-01-29T11:14:40.397536814Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\""
Jan 29 11:14:41.131780 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount753100033.mount: Deactivated successfully.
Jan 29 11:14:42.065950 containerd[1444]: time="2025-01-29T11:14:42.065899373Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 29 11:14:42.067038 containerd[1444]: time="2025-01-29T11:14:42.066534825Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.1: active requests=0, bytes read=16485383"
Jan 29 11:14:42.070937 containerd[1444]: time="2025-01-29T11:14:42.070896504Z" level=info msg="ImageCreate event name:\"sha256:2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 29 11:14:42.074890 containerd[1444]: time="2025-01-29T11:14:42.074861080Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 29 11:14:42.076012 containerd[1444]: time="2025-01-29T11:14:42.075977561Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.1\" with image id \"sha256:2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.1\", repo digest \"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\", size \"16482581\" in 1.67840867s"
Jan 29 11:14:42.076119 containerd[1444]: time="2025-01-29T11:14:42.076018201Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\" returns image reference \"sha256:2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93\""
Jan 29 11:14:42.076625 containerd[1444]: time="2025-01-29T11:14:42.076584718Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\""
Jan 29 11:14:42.647338 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4020337174.mount: Deactivated successfully.
Jan 29 11:14:42.689375 containerd[1444]: time="2025-01-29T11:14:42.689053903Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 29 11:14:42.690129 containerd[1444]: time="2025-01-29T11:14:42.689858730Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=268705"
Jan 29 11:14:42.691052 containerd[1444]: time="2025-01-29T11:14:42.691011242Z" level=info msg="ImageCreate event name:\"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 29 11:14:42.693513 containerd[1444]: time="2025-01-29T11:14:42.693434218Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 29 11:14:42.694353 containerd[1444]: time="2025-01-29T11:14:42.694183215Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"267933\" in 617.560141ms"
Jan 29 11:14:42.694353 containerd[1444]: time="2025-01-29T11:14:42.694219166Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\""
Jan 29 11:14:42.695124 containerd[1444]: time="2025-01-29T11:14:42.694967481Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.15-0\""
Jan 29 11:14:43.335797 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4282329047.mount: Deactivated successfully.
Jan 29 11:14:45.792782 containerd[1444]: time="2025-01-29T11:14:45.792714866Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.15-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 29 11:14:45.793755 containerd[1444]: time="2025-01-29T11:14:45.793555256Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.15-0: active requests=0, bytes read=66406427"
Jan 29 11:14:45.794517 containerd[1444]: time="2025-01-29T11:14:45.794485886Z" level=info msg="ImageCreate event name:\"sha256:27e3830e1402783674d8b594038967deea9d51f0d91b34c93c8f39d2f68af7da\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 29 11:14:45.801066 containerd[1444]: time="2025-01-29T11:14:45.800991724Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 29 11:14:45.802328 containerd[1444]: time="2025-01-29T11:14:45.802230401Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.15-0\" with image id \"sha256:27e3830e1402783674d8b594038967deea9d51f0d91b34c93c8f39d2f68af7da\", repo tag \"registry.k8s.io/etcd:3.5.15-0\", repo digest \"registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a\", size \"66535646\" in 3.107226454s"
Jan 29 11:14:45.802328 containerd[1444]: time="2025-01-29T11:14:45.802271335Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.15-0\" returns image reference \"sha256:27e3830e1402783674d8b594038967deea9d51f0d91b34c93c8f39d2f68af7da\""
Jan 29 11:14:48.021444 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2.
Jan 29 11:14:48.028925 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jan 29 11:14:48.127697 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Jan 29 11:14:48.132611 (kubelet)[2013]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Jan 29 11:14:48.171949 kubelet[2013]: E0129 11:14:48.171884 2013 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Jan 29 11:14:48.174638 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Jan 29 11:14:48.174805 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Jan 29 11:14:49.801476 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Jan 29 11:14:49.815986 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jan 29 11:14:49.837078 systemd[1]: Reloading requested from client PID 2029 ('systemctl') (unit session-5.scope)...
Jan 29 11:14:49.837097 systemd[1]: Reloading...
Jan 29 11:14:49.904767 zram_generator::config[2068]: No configuration found.
Jan 29 11:14:50.027537 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Jan 29 11:14:50.079276 systemd[1]: Reloading finished in 241 ms.
Jan 29 11:14:50.128789 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM
Jan 29 11:14:50.128880 systemd[1]: kubelet.service: Failed with result 'signal'.
Jan 29 11:14:50.129102 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Jan 29 11:14:50.131869 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jan 29 11:14:50.232452 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Jan 29 11:14:50.237714 (kubelet)[2114]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS
Jan 29 11:14:50.279146 kubelet[2114]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Jan 29 11:14:50.279146 kubelet[2114]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI.
Jan 29 11:14:50.279146 kubelet[2114]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
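The three deprecation warnings above point at the kubelet config file. Two of the flags have direct KubeletConfiguration equivalents (`--pod-infra-container-image` does not; per the warning, kubelet will eventually take the sandbox image from CRI). A minimal, hypothetical fragment of /var/lib/kubelet/config.yaml showing where those settings would live; the runtime socket path is illustrative and not taken from this node, while the volume plugin dir matches the Flexvolume path the kubelet recreates later in this log:

```yaml
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
# replaces the deprecated --container-runtime-endpoint flag (illustrative path)
containerRuntimeEndpoint: unix:///run/containerd/containerd.sock
# replaces the deprecated --volume-plugin-dir flag
volumePluginDir: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/
```

Moving the flags into the config file silences the warnings without changing behavior.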
Jan 29 11:14:50.279513 kubelet[2114]: I0129 11:14:50.279262 2114 server.go:206] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Jan 29 11:14:51.145203 kubelet[2114]: I0129 11:14:51.145150 2114 server.go:486] "Kubelet version" kubeletVersion="v1.31.0"
Jan 29 11:14:51.145203 kubelet[2114]: I0129 11:14:51.145192 2114 server.go:488] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Jan 29 11:14:51.145474 kubelet[2114]: I0129 11:14:51.145445 2114 server.go:929] "Client rotation is on, will bootstrap in background"
Jan 29 11:14:51.207540 kubelet[2114]: E0129 11:14:51.207497 2114 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://10.0.0.125:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.125:6443: connect: connection refused" logger="UnhandledError"
Jan 29 11:14:51.207655 kubelet[2114]: I0129 11:14:51.207579 2114 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Jan 29 11:14:51.214378 kubelet[2114]: E0129 11:14:51.214340 2114 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService"
Jan 29 11:14:51.214378 kubelet[2114]: I0129 11:14:51.214371 2114 server.go:1403] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config."
Jan 29 11:14:51.217756 kubelet[2114]: I0129 11:14:51.217698 2114 server.go:744] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /"
Jan 29 11:14:51.217982 kubelet[2114]: I0129 11:14:51.217957 2114 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority"
Jan 29 11:14:51.218107 kubelet[2114]: I0129 11:14:51.218071 2114 container_manager_linux.go:264] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Jan 29 11:14:51.218294 kubelet[2114]: I0129 11:14:51.218100 2114 container_manager_linux.go:269] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2}
Jan 29 11:14:51.218370 kubelet[2114]: I0129 11:14:51.218297 2114 topology_manager.go:138] "Creating topology manager with none policy"
Jan 29 11:14:51.218370 kubelet[2114]: I0129 11:14:51.218306 2114 container_manager_linux.go:300] "Creating device plugin manager"
Jan 29 11:14:51.218519 kubelet[2114]: I0129 11:14:51.218494 2114 state_mem.go:36] "Initialized new in-memory state store"
Jan 29 11:14:51.220155 kubelet[2114]: I0129 11:14:51.220128 2114 kubelet.go:408] "Attempting to sync node with API server"
Jan 29 11:14:51.220187 kubelet[2114]: I0129 11:14:51.220159 2114 kubelet.go:303] "Adding static pod path" path="/etc/kubernetes/manifests"
Jan 29 11:14:51.220262 kubelet[2114]: I0129 11:14:51.220245 2114 kubelet.go:314] "Adding apiserver pod source"
Jan 29 11:14:51.220293 kubelet[2114]: I0129 11:14:51.220261 2114 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Jan 29 11:14:51.223241 kubelet[2114]: W0129 11:14:51.223182 2114 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.125:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.0.0.125:6443: connect: connection refused
Jan 29 11:14:51.223279 kubelet[2114]: E0129 11:14:51.223251 2114 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.0.0.125:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.125:6443: connect: connection refused" logger="UnhandledError"
Jan 29 11:14:51.223848 kubelet[2114]: I0129 11:14:51.223422 2114 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="containerd" version="v1.7.23" apiVersion="v1"
Jan 29 11:14:51.225247 kubelet[2114]: W0129 11:14:51.225199 2114 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.125:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.125:6443: connect: connection refused
Jan 29 11:14:51.225369 kubelet[2114]: E0129 11:14:51.225351 2114 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.0.0.125:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.125:6443: connect: connection refused" logger="UnhandledError"
Jan 29 11:14:51.228127 kubelet[2114]: I0129 11:14:51.228105 2114 kubelet.go:837] "Not starting ClusterTrustBundle informer because we are in static kubelet mode"
Jan 29 11:14:51.228859 kubelet[2114]: W0129 11:14:51.228831 2114 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating.
Jan 29 11:14:51.229754 kubelet[2114]: I0129 11:14:51.229559 2114 server.go:1269] "Started kubelet"
Jan 29 11:14:51.230783 kubelet[2114]: I0129 11:14:51.230758 2114 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Jan 29 11:14:51.232069 kubelet[2114]: I0129 11:14:51.232023 2114 server.go:163] "Starting to listen" address="0.0.0.0" port=10250
Jan 29 11:14:51.233093 kubelet[2114]: I0129 11:14:51.233065 2114 server.go:460] "Adding debug handlers to kubelet server"
Jan 29 11:14:51.235685 kubelet[2114]: I0129 11:14:51.233805 2114 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Jan 29 11:14:51.235685 kubelet[2114]: I0129 11:14:51.233991 2114 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key"
Jan 29 11:14:51.235685 kubelet[2114]: I0129 11:14:51.234045 2114 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Jan 29 11:14:51.235685 kubelet[2114]: I0129 11:14:51.234469 2114 volume_manager.go:289] "Starting Kubelet Volume Manager"
Jan 29 11:14:51.235685 kubelet[2114]: I0129 11:14:51.234596 2114 desired_state_of_world_populator.go:146] "Desired state populator starts to run"
Jan 29 11:14:51.235685 kubelet[2114]: I0129 11:14:51.234666 2114 reconciler.go:26] "Reconciler: start to sync state"
Jan 29 11:14:51.235685 kubelet[2114]: E0129 11:14:51.234879 2114 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found"
Jan 29 11:14:51.235685 kubelet[2114]: E0129 11:14:51.234960 2114 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.125:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.125:6443: connect: connection refused" interval="200ms"
Jan 29 11:14:51.235685 kubelet[2114]: W0129 11:14:51.234969 2114 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.125:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.125:6443: connect: connection refused
Jan 29 11:14:51.235685 kubelet[2114]: E0129 11:14:51.235018 2114 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.0.0.125:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.125:6443: connect: connection refused" logger="UnhandledError"
Jan 29 11:14:51.235685 kubelet[2114]: I0129 11:14:51.235207 2114 factory.go:221] Registration of the systemd container factory successfully
Jan 29 11:14:51.235984 kubelet[2114]: I0129 11:14:51.235283 2114 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
Jan 29 11:14:51.235984 kubelet[2114]: E0129 11:14:51.235625 2114 kubelet.go:1478] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem"
Jan 29 11:14:51.236801 kubelet[2114]: I0129 11:14:51.236776 2114 factory.go:221] Registration of the containerd container factory successfully
Jan 29 11:14:51.237188 kubelet[2114]: E0129 11:14:51.236116 2114 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.125:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.125:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.181f25923e554d3e default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-01-29 11:14:51.229531454 +0000 UTC m=+0.988328773,LastTimestamp:2025-01-29 11:14:51.229531454 +0000 UTC m=+0.988328773,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}"
Jan 29 11:14:51.248514 kubelet[2114]: I0129 11:14:51.248459 2114 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4"
Jan 29 11:14:51.250829 kubelet[2114]: I0129 11:14:51.250785 2114 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6"
Jan 29 11:14:51.250829 kubelet[2114]: I0129 11:14:51.250818 2114 status_manager.go:217] "Starting to sync pod status with apiserver"
Jan 29 11:14:51.250983 kubelet[2114]: I0129 11:14:51.250840 2114 kubelet.go:2321] "Starting kubelet main sync loop"
Jan 29 11:14:51.250983 kubelet[2114]: E0129 11:14:51.250891 2114 kubelet.go:2345] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
Jan 29 11:14:51.252229 kubelet[2114]: W0129 11:14:51.252166 2114 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.125:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.125:6443: connect: connection refused
Jan 29 11:14:51.252287 kubelet[2114]: E0129 11:14:51.252247 2114 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.0.0.125:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.125:6443: connect: connection refused" logger="UnhandledError"
Jan 29 11:14:51.254412 kubelet[2114]: I0129 11:14:51.254161 2114 cpu_manager.go:214] "Starting CPU manager" policy="none"
Jan 29 11:14:51.254412 kubelet[2114]: I0129 11:14:51.254177 2114 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s"
Jan 29 11:14:51.254412 kubelet[2114]: I0129 11:14:51.254196 2114 state_mem.go:36] "Initialized new in-memory state store"
Jan 29 11:14:51.335099 kubelet[2114]: E0129 11:14:51.335056 2114 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found"
Jan 29 11:14:51.351284 kubelet[2114]: E0129 11:14:51.351239 2114 kubelet.go:2345] "Skipping pod synchronization" err="container runtime status check may not have completed yet"
Jan 29 11:14:51.359412 kubelet[2114]: I0129 11:14:51.359366 2114 policy_none.go:49] "None policy: Start"
Jan 29 11:14:51.360242 kubelet[2114]: I0129 11:14:51.360222 2114 memory_manager.go:170] "Starting memorymanager" policy="None"
Jan 29 11:14:51.360321 kubelet[2114]: I0129 11:14:51.360254 2114 state_mem.go:35] "Initializing new in-memory state store"
Jan 29 11:14:51.365845 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice.
Jan 29 11:14:51.380693 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice.
Jan 29 11:14:51.383588 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice.
Jan 29 11:14:51.391755 kubelet[2114]: I0129 11:14:51.391497 2114 manager.go:510] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found"
Jan 29 11:14:51.391870 kubelet[2114]: I0129 11:14:51.391793 2114 eviction_manager.go:189] "Eviction manager: starting control loop"
Jan 29 11:14:51.391870 kubelet[2114]: I0129 11:14:51.391812 2114 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s"
Jan 29 11:14:51.392082 kubelet[2114]: I0129 11:14:51.392055 2114 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
Jan 29 11:14:51.393132 kubelet[2114]: E0129 11:14:51.393105 2114 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found"
Jan 29 11:14:51.435500 kubelet[2114]: E0129 11:14:51.435367 2114 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.125:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.125:6443: connect: connection refused" interval="400ms"
Jan 29 11:14:51.493704 kubelet[2114]: I0129 11:14:51.493611 2114 kubelet_node_status.go:72] "Attempting to register node" node="localhost"
Jan 29 11:14:51.494227 kubelet[2114]: E0129 11:14:51.494183 2114 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.0.0.125:6443/api/v1/nodes\": dial tcp 10.0.0.125:6443: connect: connection refused" node="localhost"
Jan 29 11:14:51.558925 systemd[1]: Created slice kubepods-burstable-podcdce2dbc8e911bf5b7266d60d3a7bd59.slice - libcontainer container kubepods-burstable-podcdce2dbc8e911bf5b7266d60d3a7bd59.slice.
Jan 29 11:14:51.568395 systemd[1]: Created slice kubepods-burstable-podfa5289f3c0ba7f1736282e713231ffc5.slice - libcontainer container kubepods-burstable-podfa5289f3c0ba7f1736282e713231ffc5.slice.
Jan 29 11:14:51.571636 systemd[1]: Created slice kubepods-burstable-podc988230cd0d49eebfaffbefbe8c74a10.slice - libcontainer container kubepods-burstable-podc988230cd0d49eebfaffbefbe8c74a10.slice.
Jan 29 11:14:51.635552 kubelet[2114]: I0129 11:14:51.635429 2114 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/cdce2dbc8e911bf5b7266d60d3a7bd59-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"cdce2dbc8e911bf5b7266d60d3a7bd59\") " pod="kube-system/kube-apiserver-localhost"
Jan 29 11:14:51.635552 kubelet[2114]: I0129 11:14:51.635467 2114 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/fa5289f3c0ba7f1736282e713231ffc5-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"fa5289f3c0ba7f1736282e713231ffc5\") " pod="kube-system/kube-controller-manager-localhost"
Jan 29 11:14:51.635552 kubelet[2114]: I0129 11:14:51.635489 2114 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/fa5289f3c0ba7f1736282e713231ffc5-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"fa5289f3c0ba7f1736282e713231ffc5\") " pod="kube-system/kube-controller-manager-localhost"
Jan 29 11:14:51.635552 kubelet[2114]: I0129 11:14:51.635506 2114 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/c988230cd0d49eebfaffbefbe8c74a10-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"c988230cd0d49eebfaffbefbe8c74a10\") " pod="kube-system/kube-scheduler-localhost"
Jan 29 11:14:51.635552 kubelet[2114]: I0129 11:14:51.635521 2114 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/cdce2dbc8e911bf5b7266d60d3a7bd59-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"cdce2dbc8e911bf5b7266d60d3a7bd59\") " pod="kube-system/kube-apiserver-localhost"
Jan 29 11:14:51.635885 kubelet[2114]: I0129 11:14:51.635831 2114 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/cdce2dbc8e911bf5b7266d60d3a7bd59-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"cdce2dbc8e911bf5b7266d60d3a7bd59\") " pod="kube-system/kube-apiserver-localhost"
Jan 29 11:14:51.635885 kubelet[2114]: I0129 11:14:51.635879 2114 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/fa5289f3c0ba7f1736282e713231ffc5-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"fa5289f3c0ba7f1736282e713231ffc5\") " pod="kube-system/kube-controller-manager-localhost"
Jan 29 11:14:51.635973 kubelet[2114]: I0129 11:14:51.635898 2114 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/fa5289f3c0ba7f1736282e713231ffc5-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"fa5289f3c0ba7f1736282e713231ffc5\") " pod="kube-system/kube-controller-manager-localhost"
Jan 29 11:14:51.635973 kubelet[2114]: I0129 11:14:51.635916 2114
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/fa5289f3c0ba7f1736282e713231ffc5-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"fa5289f3c0ba7f1736282e713231ffc5\") " pod="kube-system/kube-controller-manager-localhost" Jan 29 11:14:51.696837 kubelet[2114]: I0129 11:14:51.696299 2114 kubelet_node_status.go:72] "Attempting to register node" node="localhost" Jan 29 11:14:51.696837 kubelet[2114]: E0129 11:14:51.696630 2114 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.0.0.125:6443/api/v1/nodes\": dial tcp 10.0.0.125:6443: connect: connection refused" node="localhost" Jan 29 11:14:51.836087 kubelet[2114]: E0129 11:14:51.836043 2114 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.125:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.125:6443: connect: connection refused" interval="800ms" Jan 29 11:14:51.867353 kubelet[2114]: E0129 11:14:51.867324 2114 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 11:14:51.868133 containerd[1444]: time="2025-01-29T11:14:51.868097386Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:cdce2dbc8e911bf5b7266d60d3a7bd59,Namespace:kube-system,Attempt:0,}" Jan 29 11:14:51.871543 kubelet[2114]: E0129 11:14:51.871284 2114 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 11:14:51.871815 containerd[1444]: time="2025-01-29T11:14:51.871770006Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:fa5289f3c0ba7f1736282e713231ffc5,Namespace:kube-system,Attempt:0,}" Jan 29 11:14:51.873489 kubelet[2114]: E0129 11:14:51.873419 2114 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 11:14:51.873877 containerd[1444]: time="2025-01-29T11:14:51.873828828Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:c988230cd0d49eebfaffbefbe8c74a10,Namespace:kube-system,Attempt:0,}" Jan 29 11:14:52.090387 kubelet[2114]: W0129 11:14:52.090272 2114 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.125:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.125:6443: connect: connection refused Jan 29 11:14:52.090387 kubelet[2114]: E0129 11:14:52.090347 2114 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.0.0.125:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.125:6443: connect: connection refused" logger="UnhandledError" Jan 29 11:14:52.098663 kubelet[2114]: I0129 11:14:52.098623 2114 kubelet_node_status.go:72] "Attempting to register node" node="localhost" Jan 29 11:14:52.099016 kubelet[2114]: E0129 11:14:52.098981 2114 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.0.0.125:6443/api/v1/nodes\": dial tcp 10.0.0.125:6443: connect: connection refused" node="localhost" Jan 29 11:14:52.346232 kubelet[2114]: W0129 11:14:52.346108 2114 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.125:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.125:6443: connect: 
connection refused Jan 29 11:14:52.346232 kubelet[2114]: E0129 11:14:52.346154 2114 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.0.0.125:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.125:6443: connect: connection refused" logger="UnhandledError" Jan 29 11:14:52.414614 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3634618688.mount: Deactivated successfully. Jan 29 11:14:52.419714 containerd[1444]: time="2025-01-29T11:14:52.419641354Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 29 11:14:52.421260 containerd[1444]: time="2025-01-29T11:14:52.421181153Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 29 11:14:52.422481 containerd[1444]: time="2025-01-29T11:14:52.422434244Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=269175" Jan 29 11:14:52.423205 containerd[1444]: time="2025-01-29T11:14:52.423153338Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Jan 29 11:14:52.426450 containerd[1444]: time="2025-01-29T11:14:52.426388258Z" level=info msg="ImageCreate event name:\"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 29 11:14:52.427672 containerd[1444]: time="2025-01-29T11:14:52.427596726Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Jan 29 11:14:52.430697 containerd[1444]: time="2025-01-29T11:14:52.430601886Z" 
level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 29 11:14:52.432699 containerd[1444]: time="2025-01-29T11:14:52.432201717Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 564.018882ms" Jan 29 11:14:52.433146 containerd[1444]: time="2025-01-29T11:14:52.433114592Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 559.207278ms" Jan 29 11:14:52.433462 containerd[1444]: time="2025-01-29T11:14:52.433434318Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 29 11:14:52.435013 containerd[1444]: time="2025-01-29T11:14:52.434979320Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 563.128667ms" Jan 29 11:14:52.588758 containerd[1444]: time="2025-01-29T11:14:52.588593905Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 29 11:14:52.588946 containerd[1444]: time="2025-01-29T11:14:52.588751147Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 29 11:14:52.588946 containerd[1444]: time="2025-01-29T11:14:52.588771477Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 11:14:52.588946 containerd[1444]: time="2025-01-29T11:14:52.588913391Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 11:14:52.589889 containerd[1444]: time="2025-01-29T11:14:52.589464597Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 29 11:14:52.589889 containerd[1444]: time="2025-01-29T11:14:52.589537475Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 29 11:14:52.589889 containerd[1444]: time="2025-01-29T11:14:52.589553803Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 11:14:52.589889 containerd[1444]: time="2025-01-29T11:14:52.589631444Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 11:14:52.590814 containerd[1444]: time="2025-01-29T11:14:52.590741620Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 29 11:14:52.590882 containerd[1444]: time="2025-01-29T11:14:52.590794688Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 29 11:14:52.590882 containerd[1444]: time="2025-01-29T11:14:52.590846035Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 11:14:52.590983 containerd[1444]: time="2025-01-29T11:14:52.590949688Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 11:14:52.619948 systemd[1]: Started cri-containerd-4226d8333321351175abc25f55032ced92b739b0ebdddc23ef950b00f7f280eb.scope - libcontainer container 4226d8333321351175abc25f55032ced92b739b0ebdddc23ef950b00f7f280eb. Jan 29 11:14:52.621291 systemd[1]: Started cri-containerd-897b52e7828190b9464c424f6284e6568e43397e30374a721c883d641c9a0168.scope - libcontainer container 897b52e7828190b9464c424f6284e6568e43397e30374a721c883d641c9a0168. Jan 29 11:14:52.622571 systemd[1]: Started cri-containerd-bab53ea46822cefd32189f7737d85e859d5cd56ff603a835e4546f6692bcae5a.scope - libcontainer container bab53ea46822cefd32189f7737d85e859d5cd56ff603a835e4546f6692bcae5a. 
Jan 29 11:14:52.636887 kubelet[2114]: E0129 11:14:52.636839 2114 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.125:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.125:6443: connect: connection refused" interval="1.6s" Jan 29 11:14:52.660110 containerd[1444]: time="2025-01-29T11:14:52.660067787Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:c988230cd0d49eebfaffbefbe8c74a10,Namespace:kube-system,Attempt:0,} returns sandbox id \"4226d8333321351175abc25f55032ced92b739b0ebdddc23ef950b00f7f280eb\"" Jan 29 11:14:52.660595 containerd[1444]: time="2025-01-29T11:14:52.660565085Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:cdce2dbc8e911bf5b7266d60d3a7bd59,Namespace:kube-system,Attempt:0,} returns sandbox id \"bab53ea46822cefd32189f7737d85e859d5cd56ff603a835e4546f6692bcae5a\"" Jan 29 11:14:52.661708 containerd[1444]: time="2025-01-29T11:14:52.661512418Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:fa5289f3c0ba7f1736282e713231ffc5,Namespace:kube-system,Attempt:0,} returns sandbox id \"897b52e7828190b9464c424f6284e6568e43397e30374a721c883d641c9a0168\"" Jan 29 11:14:52.662211 kubelet[2114]: E0129 11:14:52.662189 2114 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 11:14:52.662345 kubelet[2114]: E0129 11:14:52.662191 2114 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 11:14:52.662439 kubelet[2114]: E0129 11:14:52.662194 2114 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 
1.0.0.1 8.8.8.8" Jan 29 11:14:52.664446 containerd[1444]: time="2025-01-29T11:14:52.664415685Z" level=info msg="CreateContainer within sandbox \"897b52e7828190b9464c424f6284e6568e43397e30374a721c883d641c9a0168\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Jan 29 11:14:52.664641 containerd[1444]: time="2025-01-29T11:14:52.664443380Z" level=info msg="CreateContainer within sandbox \"4226d8333321351175abc25f55032ced92b739b0ebdddc23ef950b00f7f280eb\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Jan 29 11:14:52.664752 containerd[1444]: time="2025-01-29T11:14:52.664447022Z" level=info msg="CreateContainer within sandbox \"bab53ea46822cefd32189f7737d85e859d5cd56ff603a835e4546f6692bcae5a\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Jan 29 11:14:52.682696 containerd[1444]: time="2025-01-29T11:14:52.682623582Z" level=info msg="CreateContainer within sandbox \"4226d8333321351175abc25f55032ced92b739b0ebdddc23ef950b00f7f280eb\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"fc68d90ba8886f5e48483ca3a227a64197817d2ed5f8b5b8de8f13bdb6d2616c\"" Jan 29 11:14:52.683348 containerd[1444]: time="2025-01-29T11:14:52.683318823Z" level=info msg="StartContainer for \"fc68d90ba8886f5e48483ca3a227a64197817d2ed5f8b5b8de8f13bdb6d2616c\"" Jan 29 11:14:52.684940 containerd[1444]: time="2025-01-29T11:14:52.684897924Z" level=info msg="CreateContainer within sandbox \"bab53ea46822cefd32189f7737d85e859d5cd56ff603a835e4546f6692bcae5a\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"74abd9921515fee552ce1987ee3498a7442999e8c3bafb6f8e9516a2055dfe13\"" Jan 29 11:14:52.685364 containerd[1444]: time="2025-01-29T11:14:52.685340273Z" level=info msg="StartContainer for \"74abd9921515fee552ce1987ee3498a7442999e8c3bafb6f8e9516a2055dfe13\"" Jan 29 11:14:52.686403 containerd[1444]: time="2025-01-29T11:14:52.686351919Z" level=info msg="CreateContainer within sandbox 
\"897b52e7828190b9464c424f6284e6568e43397e30374a721c883d641c9a0168\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"9ed89e55634d8a8f1bb2a0988d63539215ac22a375dc5e14f433e613debf1de0\"" Jan 29 11:14:52.688356 containerd[1444]: time="2025-01-29T11:14:52.688302292Z" level=info msg="StartContainer for \"9ed89e55634d8a8f1bb2a0988d63539215ac22a375dc5e14f433e613debf1de0\"" Jan 29 11:14:52.712892 systemd[1]: Started cri-containerd-74abd9921515fee552ce1987ee3498a7442999e8c3bafb6f8e9516a2055dfe13.scope - libcontainer container 74abd9921515fee552ce1987ee3498a7442999e8c3bafb6f8e9516a2055dfe13. Jan 29 11:14:52.713863 systemd[1]: Started cri-containerd-fc68d90ba8886f5e48483ca3a227a64197817d2ed5f8b5b8de8f13bdb6d2616c.scope - libcontainer container fc68d90ba8886f5e48483ca3a227a64197817d2ed5f8b5b8de8f13bdb6d2616c. Jan 29 11:14:52.717073 systemd[1]: Started cri-containerd-9ed89e55634d8a8f1bb2a0988d63539215ac22a375dc5e14f433e613debf1de0.scope - libcontainer container 9ed89e55634d8a8f1bb2a0988d63539215ac22a375dc5e14f433e613debf1de0. 
Jan 29 11:14:52.752449 kubelet[2114]: W0129 11:14:52.752311 2114 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.125:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.0.0.125:6443: connect: connection refused Jan 29 11:14:52.752449 kubelet[2114]: E0129 11:14:52.752399 2114 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.0.0.125:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.125:6443: connect: connection refused" logger="UnhandledError" Jan 29 11:14:52.753524 containerd[1444]: time="2025-01-29T11:14:52.753012981Z" level=info msg="StartContainer for \"74abd9921515fee552ce1987ee3498a7442999e8c3bafb6f8e9516a2055dfe13\" returns successfully" Jan 29 11:14:52.801753 containerd[1444]: time="2025-01-29T11:14:52.798162831Z" level=info msg="StartContainer for \"fc68d90ba8886f5e48483ca3a227a64197817d2ed5f8b5b8de8f13bdb6d2616c\" returns successfully" Jan 29 11:14:52.801753 containerd[1444]: time="2025-01-29T11:14:52.798254839Z" level=info msg="StartContainer for \"9ed89e55634d8a8f1bb2a0988d63539215ac22a375dc5e14f433e613debf1de0\" returns successfully" Jan 29 11:14:52.813708 kubelet[2114]: W0129 11:14:52.813608 2114 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.125:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.125:6443: connect: connection refused Jan 29 11:14:52.813708 kubelet[2114]: E0129 11:14:52.813676 2114 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.0.0.125:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.125:6443: connect: connection 
refused" logger="UnhandledError" Jan 29 11:14:52.900774 kubelet[2114]: I0129 11:14:52.900662 2114 kubelet_node_status.go:72] "Attempting to register node" node="localhost" Jan 29 11:14:52.901002 kubelet[2114]: E0129 11:14:52.900967 2114 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.0.0.125:6443/api/v1/nodes\": dial tcp 10.0.0.125:6443: connect: connection refused" node="localhost" Jan 29 11:14:53.261815 kubelet[2114]: E0129 11:14:53.260428 2114 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 11:14:53.263711 kubelet[2114]: E0129 11:14:53.263687 2114 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 11:14:53.265014 kubelet[2114]: E0129 11:14:53.264994 2114 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 11:14:54.267299 kubelet[2114]: E0129 11:14:54.267230 2114 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 11:14:54.503447 kubelet[2114]: I0129 11:14:54.503144 2114 kubelet_node_status.go:72] "Attempting to register node" node="localhost" Jan 29 11:14:54.585025 kubelet[2114]: E0129 11:14:54.584969 2114 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"localhost\" not found" node="localhost" Jan 29 11:14:54.663030 kubelet[2114]: I0129 11:14:54.662873 2114 kubelet_node_status.go:75] "Successfully registered node" node="localhost" Jan 29 11:14:54.663030 kubelet[2114]: E0129 11:14:54.662912 2114 kubelet_node_status.go:535] "Error updating node status, will retry" 
err="error getting node \"localhost\": node \"localhost\" not found" Jan 29 11:14:54.744473 kubelet[2114]: E0129 11:14:54.744421 2114 kubelet.go:1915] "Failed creating a mirror pod for" err="pods \"kube-controller-manager-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-localhost" Jan 29 11:14:54.744648 kubelet[2114]: E0129 11:14:54.744628 2114 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 11:14:55.225922 kubelet[2114]: I0129 11:14:55.225882 2114 apiserver.go:52] "Watching apiserver" Jan 29 11:14:55.235513 kubelet[2114]: I0129 11:14:55.235477 2114 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world" Jan 29 11:14:55.630815 kubelet[2114]: E0129 11:14:55.630773 2114 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 11:14:56.269530 kubelet[2114]: E0129 11:14:56.269501 2114 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 11:14:56.695653 systemd[1]: Reloading requested from client PID 2392 ('systemctl') (unit session-5.scope)... Jan 29 11:14:56.695670 systemd[1]: Reloading... Jan 29 11:14:56.761765 zram_generator::config[2432]: No configuration found. Jan 29 11:14:56.845625 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jan 29 11:14:56.907945 systemd[1]: Reloading finished in 211 ms. Jan 29 11:14:56.943714 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... 
Jan 29 11:14:56.958279 systemd[1]: kubelet.service: Deactivated successfully. Jan 29 11:14:56.958464 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jan 29 11:14:56.958505 systemd[1]: kubelet.service: Consumed 1.387s CPU time, 119.2M memory peak, 0B memory swap peak. Jan 29 11:14:56.968975 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 29 11:14:57.063186 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 29 11:14:57.067719 (kubelet)[2473]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jan 29 11:14:57.109428 kubelet[2473]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 29 11:14:57.109428 kubelet[2473]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Jan 29 11:14:57.109428 kubelet[2473]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Jan 29 11:14:57.109824 kubelet[2473]: I0129 11:14:57.109487 2473 server.go:206] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jan 29 11:14:57.116659 kubelet[2473]: I0129 11:14:57.116603 2473 server.go:486] "Kubelet version" kubeletVersion="v1.31.0" Jan 29 11:14:57.116659 kubelet[2473]: I0129 11:14:57.116639 2473 server.go:488] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jan 29 11:14:57.117244 kubelet[2473]: I0129 11:14:57.117202 2473 server.go:929] "Client rotation is on, will bootstrap in background" Jan 29 11:14:57.118664 kubelet[2473]: I0129 11:14:57.118638 2473 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Jan 29 11:14:57.120769 kubelet[2473]: I0129 11:14:57.120689 2473 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jan 29 11:14:57.124866 kubelet[2473]: E0129 11:14:57.123876 2473 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Jan 29 11:14:57.124866 kubelet[2473]: I0129 11:14:57.123904 2473 server.go:1403] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Jan 29 11:14:57.126520 kubelet[2473]: I0129 11:14:57.126493 2473 server.go:744] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Jan 29 11:14:57.126624 kubelet[2473]: I0129 11:14:57.126611 2473 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority" Jan 29 11:14:57.126764 kubelet[2473]: I0129 11:14:57.126714 2473 container_manager_linux.go:264] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jan 29 11:14:57.127007 kubelet[2473]: I0129 11:14:57.126764 2473 container_manager_linux.go:269] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyO
ptions":null,"CgroupVersion":2} Jan 29 11:14:57.127096 kubelet[2473]: I0129 11:14:57.127010 2473 topology_manager.go:138] "Creating topology manager with none policy" Jan 29 11:14:57.127096 kubelet[2473]: I0129 11:14:57.127020 2473 container_manager_linux.go:300] "Creating device plugin manager" Jan 29 11:14:57.127096 kubelet[2473]: I0129 11:14:57.127057 2473 state_mem.go:36] "Initialized new in-memory state store" Jan 29 11:14:57.128023 kubelet[2473]: I0129 11:14:57.127177 2473 kubelet.go:408] "Attempting to sync node with API server" Jan 29 11:14:57.128023 kubelet[2473]: I0129 11:14:57.127196 2473 kubelet.go:303] "Adding static pod path" path="/etc/kubernetes/manifests" Jan 29 11:14:57.128023 kubelet[2473]: I0129 11:14:57.127223 2473 kubelet.go:314] "Adding apiserver pod source" Jan 29 11:14:57.128023 kubelet[2473]: I0129 11:14:57.127241 2473 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jan 29 11:14:57.128409 kubelet[2473]: I0129 11:14:57.128386 2473 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="containerd" version="v1.7.23" apiVersion="v1" Jan 29 11:14:57.128915 kubelet[2473]: I0129 11:14:57.128899 2473 kubelet.go:837] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Jan 29 11:14:57.129298 kubelet[2473]: I0129 11:14:57.129282 2473 server.go:1269] "Started kubelet" Jan 29 11:14:57.131338 kubelet[2473]: I0129 11:14:57.131186 2473 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jan 29 11:14:57.133290 kubelet[2473]: I0129 11:14:57.133257 2473 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jan 29 11:14:57.134125 kubelet[2473]: I0129 11:14:57.133487 2473 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Jan 29 11:14:57.134125 kubelet[2473]: I0129 11:14:57.133956 2473 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jan 29 
11:14:57.145051 kubelet[2473]: I0129 11:14:57.142706 2473 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Jan 29 11:14:57.145051 kubelet[2473]: I0129 11:14:57.144569 2473 server.go:460] "Adding debug handlers to kubelet server" Jan 29 11:14:57.146602 kubelet[2473]: I0129 11:14:57.146582 2473 volume_manager.go:289] "Starting Kubelet Volume Manager" Jan 29 11:14:57.150155 kubelet[2473]: I0129 11:14:57.150124 2473 factory.go:221] Registration of the systemd container factory successfully Jan 29 11:14:57.150438 kubelet[2473]: I0129 11:14:57.150416 2473 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jan 29 11:14:57.151003 kubelet[2473]: I0129 11:14:57.150718 2473 desired_state_of_world_populator.go:146] "Desired state populator starts to run" Jan 29 11:14:57.151139 kubelet[2473]: I0129 11:14:57.151072 2473 reconciler.go:26] "Reconciler: start to sync state" Jan 29 11:14:57.151492 kubelet[2473]: E0129 11:14:57.151465 2473 kubelet.go:1478] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jan 29 11:14:57.152052 kubelet[2473]: I0129 11:14:57.151867 2473 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Jan 29 11:14:57.153825 kubelet[2473]: I0129 11:14:57.153714 2473 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Jan 29 11:14:57.153884 kubelet[2473]: I0129 11:14:57.153844 2473 status_manager.go:217] "Starting to sync pod status with apiserver" Jan 29 11:14:57.153884 kubelet[2473]: I0129 11:14:57.153869 2473 kubelet.go:2321] "Starting kubelet main sync loop" Jan 29 11:14:57.153960 kubelet[2473]: E0129 11:14:57.153926 2473 kubelet.go:2345] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jan 29 11:14:57.155184 kubelet[2473]: I0129 11:14:57.155075 2473 factory.go:221] Registration of the containerd container factory successfully Jan 29 11:14:57.184417 kubelet[2473]: I0129 11:14:57.184385 2473 cpu_manager.go:214] "Starting CPU manager" policy="none" Jan 29 11:14:57.184417 kubelet[2473]: I0129 11:14:57.184411 2473 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Jan 29 11:14:57.184538 kubelet[2473]: I0129 11:14:57.184434 2473 state_mem.go:36] "Initialized new in-memory state store" Jan 29 11:14:57.184613 kubelet[2473]: I0129 11:14:57.184595 2473 state_mem.go:88] "Updated default CPUSet" cpuSet="" Jan 29 11:14:57.184696 kubelet[2473]: I0129 11:14:57.184611 2473 state_mem.go:96] "Updated CPUSet assignments" assignments={} Jan 29 11:14:57.184696 kubelet[2473]: I0129 11:14:57.184629 2473 policy_none.go:49] "None policy: Start" Jan 29 11:14:57.185305 kubelet[2473]: I0129 11:14:57.185286 2473 memory_manager.go:170] "Starting memorymanager" policy="None" Jan 29 11:14:57.185360 kubelet[2473]: I0129 11:14:57.185316 2473 state_mem.go:35] "Initializing new in-memory state store" Jan 29 11:14:57.185485 kubelet[2473]: I0129 11:14:57.185469 2473 state_mem.go:75] "Updated machine memory state" Jan 29 11:14:57.188876 kubelet[2473]: I0129 11:14:57.188853 2473 manager.go:510] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jan 29 11:14:57.189216 kubelet[2473]: I0129 11:14:57.189014 2473 eviction_manager.go:189] 
"Eviction manager: starting control loop" Jan 29 11:14:57.189216 kubelet[2473]: I0129 11:14:57.189031 2473 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jan 29 11:14:57.189371 kubelet[2473]: I0129 11:14:57.189270 2473 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jan 29 11:14:57.261990 kubelet[2473]: E0129 11:14:57.261875 2473 kubelet.go:1915] "Failed creating a mirror pod for" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost" Jan 29 11:14:57.297198 kubelet[2473]: I0129 11:14:57.297104 2473 kubelet_node_status.go:72] "Attempting to register node" node="localhost" Jan 29 11:14:57.303930 kubelet[2473]: I0129 11:14:57.303894 2473 kubelet_node_status.go:111] "Node was previously registered" node="localhost" Jan 29 11:14:57.304074 kubelet[2473]: I0129 11:14:57.303993 2473 kubelet_node_status.go:75] "Successfully registered node" node="localhost" Jan 29 11:14:57.453098 kubelet[2473]: I0129 11:14:57.452972 2473 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/cdce2dbc8e911bf5b7266d60d3a7bd59-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"cdce2dbc8e911bf5b7266d60d3a7bd59\") " pod="kube-system/kube-apiserver-localhost" Jan 29 11:14:57.453098 kubelet[2473]: I0129 11:14:57.453013 2473 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/fa5289f3c0ba7f1736282e713231ffc5-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"fa5289f3c0ba7f1736282e713231ffc5\") " pod="kube-system/kube-controller-manager-localhost" Jan 29 11:14:57.453098 kubelet[2473]: I0129 11:14:57.453034 2473 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: 
\"kubernetes.io/host-path/fa5289f3c0ba7f1736282e713231ffc5-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"fa5289f3c0ba7f1736282e713231ffc5\") " pod="kube-system/kube-controller-manager-localhost" Jan 29 11:14:57.453098 kubelet[2473]: I0129 11:14:57.453052 2473 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/fa5289f3c0ba7f1736282e713231ffc5-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"fa5289f3c0ba7f1736282e713231ffc5\") " pod="kube-system/kube-controller-manager-localhost" Jan 29 11:14:57.453098 kubelet[2473]: I0129 11:14:57.453069 2473 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/fa5289f3c0ba7f1736282e713231ffc5-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"fa5289f3c0ba7f1736282e713231ffc5\") " pod="kube-system/kube-controller-manager-localhost" Jan 29 11:14:57.453344 kubelet[2473]: I0129 11:14:57.453085 2473 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/cdce2dbc8e911bf5b7266d60d3a7bd59-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"cdce2dbc8e911bf5b7266d60d3a7bd59\") " pod="kube-system/kube-apiserver-localhost" Jan 29 11:14:57.453344 kubelet[2473]: I0129 11:14:57.453099 2473 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/cdce2dbc8e911bf5b7266d60d3a7bd59-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"cdce2dbc8e911bf5b7266d60d3a7bd59\") " pod="kube-system/kube-apiserver-localhost" Jan 29 11:14:57.453344 kubelet[2473]: I0129 11:14:57.453115 2473 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/fa5289f3c0ba7f1736282e713231ffc5-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"fa5289f3c0ba7f1736282e713231ffc5\") " pod="kube-system/kube-controller-manager-localhost" Jan 29 11:14:57.453344 kubelet[2473]: I0129 11:14:57.453130 2473 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/c988230cd0d49eebfaffbefbe8c74a10-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"c988230cd0d49eebfaffbefbe8c74a10\") " pod="kube-system/kube-scheduler-localhost" Jan 29 11:14:57.560557 kubelet[2473]: E0129 11:14:57.560468 2473 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 11:14:57.560706 kubelet[2473]: E0129 11:14:57.560672 2473 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 11:14:57.562843 kubelet[2473]: E0129 11:14:57.562815 2473 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 11:14:58.128519 kubelet[2473]: I0129 11:14:58.128485 2473 apiserver.go:52] "Watching apiserver" Jan 29 11:14:58.152543 kubelet[2473]: I0129 11:14:58.152496 2473 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world" Jan 29 11:14:58.167879 kubelet[2473]: E0129 11:14:58.167800 2473 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 11:14:58.168120 kubelet[2473]: E0129 11:14:58.168080 2473 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have 
been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 11:14:58.174425 kubelet[2473]: E0129 11:14:58.174361 2473 kubelet.go:1915] "Failed creating a mirror pod for" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost" Jan 29 11:14:58.174574 kubelet[2473]: E0129 11:14:58.174551 2473 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 11:14:58.190668 kubelet[2473]: I0129 11:14:58.190424 2473 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=1.190407313 podStartE2EDuration="1.190407313s" podCreationTimestamp="2025-01-29 11:14:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-29 11:14:58.190331975 +0000 UTC m=+1.119478621" watchObservedRunningTime="2025-01-29 11:14:58.190407313 +0000 UTC m=+1.119553959" Jan 29 11:14:58.208640 kubelet[2473]: I0129 11:14:58.208486 2473 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=1.208467341 podStartE2EDuration="1.208467341s" podCreationTimestamp="2025-01-29 11:14:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-29 11:14:58.200005904 +0000 UTC m=+1.129152550" watchObservedRunningTime="2025-01-29 11:14:58.208467341 +0000 UTC m=+1.137613987" Jan 29 11:14:58.220743 kubelet[2473]: I0129 11:14:58.219787 2473 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=3.21976418 podStartE2EDuration="3.21976418s" podCreationTimestamp="2025-01-29 11:14:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" 
lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-29 11:14:58.208903365 +0000 UTC m=+1.138050012" watchObservedRunningTime="2025-01-29 11:14:58.21976418 +0000 UTC m=+1.148910866" Jan 29 11:14:58.595458 sudo[1577]: pam_unix(sudo:session): session closed for user root Jan 29 11:14:58.598776 sshd[1576]: Connection closed by 10.0.0.1 port 41798 Jan 29 11:14:58.599040 sshd-session[1574]: pam_unix(sshd:session): session closed for user core Jan 29 11:14:58.601674 systemd[1]: sshd@4-10.0.0.125:22-10.0.0.1:41798.service: Deactivated successfully. Jan 29 11:14:58.603412 systemd[1]: session-5.scope: Deactivated successfully. Jan 29 11:14:58.603647 systemd[1]: session-5.scope: Consumed 5.260s CPU time, 157.7M memory peak, 0B memory swap peak. Jan 29 11:14:58.605802 systemd-logind[1425]: Session 5 logged out. Waiting for processes to exit. Jan 29 11:14:58.606947 systemd-logind[1425]: Removed session 5. Jan 29 11:14:59.169542 kubelet[2473]: E0129 11:14:59.169504 2473 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 11:15:00.170882 kubelet[2473]: E0129 11:15:00.170786 2473 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 11:15:03.854123 kubelet[2473]: I0129 11:15:03.854087 2473 kuberuntime_manager.go:1633] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Jan 29 11:15:03.854534 containerd[1444]: time="2025-01-29T11:15:03.854478253Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." 
Jan 29 11:15:03.855496 kubelet[2473]: I0129 11:15:03.855004 2473 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Jan 29 11:15:04.543383 kubelet[2473]: E0129 11:15:04.543129 2473 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 11:15:04.804557 systemd[1]: Created slice kubepods-besteffort-pod0120f083_6d0c_4241_bdb9_b2e93be736b0.slice - libcontainer container kubepods-besteffort-pod0120f083_6d0c_4241_bdb9_b2e93be736b0.slice. Jan 29 11:15:04.804980 kubelet[2473]: I0129 11:15:04.804889 2473 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lbmk6\" (UniqueName: \"kubernetes.io/projected/0120f083-6d0c-4241-bdb9-b2e93be736b0-kube-api-access-lbmk6\") pod \"kube-proxy-nsndr\" (UID: \"0120f083-6d0c-4241-bdb9-b2e93be736b0\") " pod="kube-system/kube-proxy-nsndr" Jan 29 11:15:04.804980 kubelet[2473]: I0129 11:15:04.804927 2473 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/0120f083-6d0c-4241-bdb9-b2e93be736b0-lib-modules\") pod \"kube-proxy-nsndr\" (UID: \"0120f083-6d0c-4241-bdb9-b2e93be736b0\") " pod="kube-system/kube-proxy-nsndr" Jan 29 11:15:04.804980 kubelet[2473]: I0129 11:15:04.804947 2473 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/0120f083-6d0c-4241-bdb9-b2e93be736b0-kube-proxy\") pod \"kube-proxy-nsndr\" (UID: \"0120f083-6d0c-4241-bdb9-b2e93be736b0\") " pod="kube-system/kube-proxy-nsndr" Jan 29 11:15:04.804980 kubelet[2473]: I0129 11:15:04.804964 2473 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: 
\"kubernetes.io/host-path/0120f083-6d0c-4241-bdb9-b2e93be736b0-xtables-lock\") pod \"kube-proxy-nsndr\" (UID: \"0120f083-6d0c-4241-bdb9-b2e93be736b0\") " pod="kube-system/kube-proxy-nsndr" Jan 29 11:15:04.818055 systemd[1]: Created slice kubepods-burstable-podd8b72bca_d65c_44ed_be27_7e6bbca632e6.slice - libcontainer container kubepods-burstable-podd8b72bca_d65c_44ed_be27_7e6bbca632e6.slice. Jan 29 11:15:04.905795 kubelet[2473]: I0129 11:15:04.905690 2473 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-plugin\" (UniqueName: \"kubernetes.io/host-path/d8b72bca-d65c-44ed-be27-7e6bbca632e6-cni-plugin\") pod \"kube-flannel-ds-ptmnb\" (UID: \"d8b72bca-d65c-44ed-be27-7e6bbca632e6\") " pod="kube-flannel/kube-flannel-ds-ptmnb" Jan 29 11:15:04.905795 kubelet[2473]: I0129 11:15:04.905781 2473 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7b4r9\" (UniqueName: \"kubernetes.io/projected/d8b72bca-d65c-44ed-be27-7e6bbca632e6-kube-api-access-7b4r9\") pod \"kube-flannel-ds-ptmnb\" (UID: \"d8b72bca-d65c-44ed-be27-7e6bbca632e6\") " pod="kube-flannel/kube-flannel-ds-ptmnb" Jan 29 11:15:04.906142 kubelet[2473]: I0129 11:15:04.905823 2473 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run\" (UniqueName: \"kubernetes.io/host-path/d8b72bca-d65c-44ed-be27-7e6bbca632e6-run\") pod \"kube-flannel-ds-ptmnb\" (UID: \"d8b72bca-d65c-44ed-be27-7e6bbca632e6\") " pod="kube-flannel/kube-flannel-ds-ptmnb" Jan 29 11:15:04.906142 kubelet[2473]: I0129 11:15:04.905839 2473 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flannel-cfg\" (UniqueName: \"kubernetes.io/configmap/d8b72bca-d65c-44ed-be27-7e6bbca632e6-flannel-cfg\") pod \"kube-flannel-ds-ptmnb\" (UID: \"d8b72bca-d65c-44ed-be27-7e6bbca632e6\") " pod="kube-flannel/kube-flannel-ds-ptmnb" Jan 29 11:15:04.906142 kubelet[2473]: I0129 
11:15:04.905871 2473 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni\" (UniqueName: \"kubernetes.io/host-path/d8b72bca-d65c-44ed-be27-7e6bbca632e6-cni\") pod \"kube-flannel-ds-ptmnb\" (UID: \"d8b72bca-d65c-44ed-be27-7e6bbca632e6\") " pod="kube-flannel/kube-flannel-ds-ptmnb" Jan 29 11:15:04.906142 kubelet[2473]: I0129 11:15:04.905888 2473 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/d8b72bca-d65c-44ed-be27-7e6bbca632e6-xtables-lock\") pod \"kube-flannel-ds-ptmnb\" (UID: \"d8b72bca-d65c-44ed-be27-7e6bbca632e6\") " pod="kube-flannel/kube-flannel-ds-ptmnb" Jan 29 11:15:05.116287 kubelet[2473]: E0129 11:15:05.115880 2473 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 11:15:05.116451 containerd[1444]: time="2025-01-29T11:15:05.116414507Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-nsndr,Uid:0120f083-6d0c-4241-bdb9-b2e93be736b0,Namespace:kube-system,Attempt:0,}" Jan 29 11:15:05.123282 kubelet[2473]: E0129 11:15:05.123249 2473 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 11:15:05.123722 containerd[1444]: time="2025-01-29T11:15:05.123687052Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-flannel-ds-ptmnb,Uid:d8b72bca-d65c-44ed-be27-7e6bbca632e6,Namespace:kube-flannel,Attempt:0,}" Jan 29 11:15:05.141584 containerd[1444]: time="2025-01-29T11:15:05.141206307Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 29 11:15:05.141584 containerd[1444]: time="2025-01-29T11:15:05.141255795Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 29 11:15:05.141584 containerd[1444]: time="2025-01-29T11:15:05.141272998Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 11:15:05.141584 containerd[1444]: time="2025-01-29T11:15:05.141341329Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 11:15:05.148480 containerd[1444]: time="2025-01-29T11:15:05.148071826Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 29 11:15:05.148480 containerd[1444]: time="2025-01-29T11:15:05.148152439Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 29 11:15:05.148480 containerd[1444]: time="2025-01-29T11:15:05.148170602Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 11:15:05.148480 containerd[1444]: time="2025-01-29T11:15:05.148260617Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 11:15:05.156919 systemd[1]: Started cri-containerd-5e8f72b657e9ae1741962c69001bc7e250bbb7c0efd88ff28af9d0dcde16d166.scope - libcontainer container 5e8f72b657e9ae1741962c69001bc7e250bbb7c0efd88ff28af9d0dcde16d166. Jan 29 11:15:05.162798 systemd[1]: Started cri-containerd-0eed67205f4e285f09122e086239ef784bff697738da5ffd28cc448d77cbfc3e.scope - libcontainer container 0eed67205f4e285f09122e086239ef784bff697738da5ffd28cc448d77cbfc3e. 
Jan 29 11:15:05.180581 kubelet[2473]: E0129 11:15:05.180455 2473 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 11:15:05.180833 containerd[1444]: time="2025-01-29T11:15:05.180737909Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-nsndr,Uid:0120f083-6d0c-4241-bdb9-b2e93be736b0,Namespace:kube-system,Attempt:0,} returns sandbox id \"5e8f72b657e9ae1741962c69001bc7e250bbb7c0efd88ff28af9d0dcde16d166\"" Jan 29 11:15:05.181654 kubelet[2473]: E0129 11:15:05.181555 2473 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 11:15:05.187579 containerd[1444]: time="2025-01-29T11:15:05.187540858Z" level=info msg="CreateContainer within sandbox \"5e8f72b657e9ae1741962c69001bc7e250bbb7c0efd88ff28af9d0dcde16d166\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Jan 29 11:15:05.201163 containerd[1444]: time="2025-01-29T11:15:05.200870550Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-flannel-ds-ptmnb,Uid:d8b72bca-d65c-44ed-be27-7e6bbca632e6,Namespace:kube-flannel,Attempt:0,} returns sandbox id \"0eed67205f4e285f09122e086239ef784bff697738da5ffd28cc448d77cbfc3e\"" Jan 29 11:15:05.201944 kubelet[2473]: E0129 11:15:05.201576 2473 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 11:15:05.202646 containerd[1444]: time="2025-01-29T11:15:05.202621276Z" level=info msg="PullImage \"docker.io/flannel/flannel-cni-plugin:v1.1.2\"" Jan 29 11:15:05.206662 containerd[1444]: time="2025-01-29T11:15:05.206604765Z" level=info msg="CreateContainer within sandbox \"5e8f72b657e9ae1741962c69001bc7e250bbb7c0efd88ff28af9d0dcde16d166\" for 
&ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"e19f7aa0bd43bf57141c5f39558f7c68f83bf412b0445d38a0396a14405e9c3e\"" Jan 29 11:15:05.207173 containerd[1444]: time="2025-01-29T11:15:05.207146733Z" level=info msg="StartContainer for \"e19f7aa0bd43bf57141c5f39558f7c68f83bf412b0445d38a0396a14405e9c3e\"" Jan 29 11:15:05.229883 systemd[1]: Started cri-containerd-e19f7aa0bd43bf57141c5f39558f7c68f83bf412b0445d38a0396a14405e9c3e.scope - libcontainer container e19f7aa0bd43bf57141c5f39558f7c68f83bf412b0445d38a0396a14405e9c3e. Jan 29 11:15:05.254743 containerd[1444]: time="2025-01-29T11:15:05.254694602Z" level=info msg="StartContainer for \"e19f7aa0bd43bf57141c5f39558f7c68f83bf412b0445d38a0396a14405e9c3e\" returns successfully" Jan 29 11:15:06.182821 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4002279807.mount: Deactivated successfully. Jan 29 11:15:06.184681 kubelet[2473]: E0129 11:15:06.183949 2473 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 11:15:06.194307 kubelet[2473]: I0129 11:15:06.194249 2473 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-nsndr" podStartSLOduration=2.194232627 podStartE2EDuration="2.194232627s" podCreationTimestamp="2025-01-29 11:15:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-29 11:15:06.194008872 +0000 UTC m=+9.123155518" watchObservedRunningTime="2025-01-29 11:15:06.194232627 +0000 UTC m=+9.123379273" Jan 29 11:15:06.212067 containerd[1444]: time="2025-01-29T11:15:06.212023534Z" level=info msg="ImageCreate event name:\"docker.io/flannel/flannel-cni-plugin:v1.1.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 11:15:06.212930 containerd[1444]: time="2025-01-29T11:15:06.212741365Z" level=info msg="stop pulling 
image docker.io/flannel/flannel-cni-plugin:v1.1.2: active requests=0, bytes read=3673532" Jan 29 11:15:06.213682 containerd[1444]: time="2025-01-29T11:15:06.213642545Z" level=info msg="ImageCreate event name:\"sha256:b45062ceea496fc421523388cb91166abc7715a15c2e2cbab4e6f8c9d5dc0ab8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 11:15:06.215808 containerd[1444]: time="2025-01-29T11:15:06.215780755Z" level=info msg="ImageCreate event name:\"docker.io/flannel/flannel-cni-plugin@sha256:bf4b62b131666d040f35a327d906ee5a3418280b68a88d9b9c7e828057210443\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 11:15:06.216746 containerd[1444]: time="2025-01-29T11:15:06.216694816Z" level=info msg="Pulled image \"docker.io/flannel/flannel-cni-plugin:v1.1.2\" with image id \"sha256:b45062ceea496fc421523388cb91166abc7715a15c2e2cbab4e6f8c9d5dc0ab8\", repo tag \"docker.io/flannel/flannel-cni-plugin:v1.1.2\", repo digest \"docker.io/flannel/flannel-cni-plugin@sha256:bf4b62b131666d040f35a327d906ee5a3418280b68a88d9b9c7e828057210443\", size \"3662650\" in 1.014041936s" Jan 29 11:15:06.216746 containerd[1444]: time="2025-01-29T11:15:06.216732942Z" level=info msg="PullImage \"docker.io/flannel/flannel-cni-plugin:v1.1.2\" returns image reference \"sha256:b45062ceea496fc421523388cb91166abc7715a15c2e2cbab4e6f8c9d5dc0ab8\"" Jan 29 11:15:06.219305 containerd[1444]: time="2025-01-29T11:15:06.219271694Z" level=info msg="CreateContainer within sandbox \"0eed67205f4e285f09122e086239ef784bff697738da5ffd28cc448d77cbfc3e\" for container &ContainerMetadata{Name:install-cni-plugin,Attempt:0,}" Jan 29 11:15:06.227794 containerd[1444]: time="2025-01-29T11:15:06.227753604Z" level=info msg="CreateContainer within sandbox \"0eed67205f4e285f09122e086239ef784bff697738da5ffd28cc448d77cbfc3e\" for &ContainerMetadata{Name:install-cni-plugin,Attempt:0,} returns container id \"c8ec2386a2682ec48e6bdb02dcfd0cdb3d5a7a174bde741f4e5958748b9220da\"" Jan 29 11:15:06.228125 
containerd[1444]: time="2025-01-29T11:15:06.228095617Z" level=info msg="StartContainer for \"c8ec2386a2682ec48e6bdb02dcfd0cdb3d5a7a174bde741f4e5958748b9220da\"" Jan 29 11:15:06.261873 systemd[1]: Started cri-containerd-c8ec2386a2682ec48e6bdb02dcfd0cdb3d5a7a174bde741f4e5958748b9220da.scope - libcontainer container c8ec2386a2682ec48e6bdb02dcfd0cdb3d5a7a174bde741f4e5958748b9220da. Jan 29 11:15:06.281712 containerd[1444]: time="2025-01-29T11:15:06.281664531Z" level=info msg="StartContainer for \"c8ec2386a2682ec48e6bdb02dcfd0cdb3d5a7a174bde741f4e5958748b9220da\" returns successfully" Jan 29 11:15:06.290690 systemd[1]: cri-containerd-c8ec2386a2682ec48e6bdb02dcfd0cdb3d5a7a174bde741f4e5958748b9220da.scope: Deactivated successfully. Jan 29 11:15:06.322044 containerd[1444]: time="2025-01-29T11:15:06.321975117Z" level=info msg="shim disconnected" id=c8ec2386a2682ec48e6bdb02dcfd0cdb3d5a7a174bde741f4e5958748b9220da namespace=k8s.io Jan 29 11:15:06.322399 containerd[1444]: time="2025-01-29T11:15:06.322232677Z" level=warning msg="cleaning up after shim disconnected" id=c8ec2386a2682ec48e6bdb02dcfd0cdb3d5a7a174bde741f4e5958748b9220da namespace=k8s.io Jan 29 11:15:06.322399 containerd[1444]: time="2025-01-29T11:15:06.322249159Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 29 11:15:06.986272 kubelet[2473]: E0129 11:15:06.986199 2473 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 11:15:07.188032 kubelet[2473]: E0129 11:15:07.187974 2473 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 11:15:07.188373 kubelet[2473]: E0129 11:15:07.187977 2473 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" 
Jan 29 11:15:07.188672 containerd[1444]: time="2025-01-29T11:15:07.188640676Z" level=info msg="PullImage \"docker.io/flannel/flannel:v0.22.0\"" Jan 29 11:15:08.210591 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount154148724.mount: Deactivated successfully. Jan 29 11:15:08.768194 containerd[1444]: time="2025-01-29T11:15:08.768146650Z" level=info msg="ImageCreate event name:\"docker.io/flannel/flannel:v0.22.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 11:15:08.769063 containerd[1444]: time="2025-01-29T11:15:08.768838426Z" level=info msg="stop pulling image docker.io/flannel/flannel:v0.22.0: active requests=0, bytes read=26874261" Jan 29 11:15:08.769856 containerd[1444]: time="2025-01-29T11:15:08.769798159Z" level=info msg="ImageCreate event name:\"sha256:b3d1319ea6da12d4a1dd21a923f6a71f942a7b6e2c4763b8a3cca0725fb8aadf\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 11:15:08.772795 containerd[1444]: time="2025-01-29T11:15:08.772750370Z" level=info msg="ImageCreate event name:\"docker.io/flannel/flannel@sha256:5f83f1243057458e27249157394e3859cf31cc075354af150d497f2ebc8b54db\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 11:15:08.773956 containerd[1444]: time="2025-01-29T11:15:08.773884047Z" level=info msg="Pulled image \"docker.io/flannel/flannel:v0.22.0\" with image id \"sha256:b3d1319ea6da12d4a1dd21a923f6a71f942a7b6e2c4763b8a3cca0725fb8aadf\", repo tag \"docker.io/flannel/flannel:v0.22.0\", repo digest \"docker.io/flannel/flannel@sha256:5f83f1243057458e27249157394e3859cf31cc075354af150d497f2ebc8b54db\", size \"26863435\" in 1.585204645s" Jan 29 11:15:08.773956 containerd[1444]: time="2025-01-29T11:15:08.773914171Z" level=info msg="PullImage \"docker.io/flannel/flannel:v0.22.0\" returns image reference \"sha256:b3d1319ea6da12d4a1dd21a923f6a71f942a7b6e2c4763b8a3cca0725fb8aadf\"" Jan 29 11:15:08.776296 containerd[1444]: time="2025-01-29T11:15:08.776168365Z" level=info msg="CreateContainer 
within sandbox \"0eed67205f4e285f09122e086239ef784bff697738da5ffd28cc448d77cbfc3e\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Jan 29 11:15:08.785163 containerd[1444]: time="2025-01-29T11:15:08.785128650Z" level=info msg="CreateContainer within sandbox \"0eed67205f4e285f09122e086239ef784bff697738da5ffd28cc448d77cbfc3e\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"7a3bc422ce243a833e6bdc95669d9fb5ffb1fcd831cd0d8c45c2cf70497b397f\"" Jan 29 11:15:08.785775 containerd[1444]: time="2025-01-29T11:15:08.785651483Z" level=info msg="StartContainer for \"7a3bc422ce243a833e6bdc95669d9fb5ffb1fcd831cd0d8c45c2cf70497b397f\"" Jan 29 11:15:08.807856 systemd[1]: Started cri-containerd-7a3bc422ce243a833e6bdc95669d9fb5ffb1fcd831cd0d8c45c2cf70497b397f.scope - libcontainer container 7a3bc422ce243a833e6bdc95669d9fb5ffb1fcd831cd0d8c45c2cf70497b397f. Jan 29 11:15:08.831535 containerd[1444]: time="2025-01-29T11:15:08.829291188Z" level=info msg="StartContainer for \"7a3bc422ce243a833e6bdc95669d9fb5ffb1fcd831cd0d8c45c2cf70497b397f\" returns successfully" Jan 29 11:15:08.832132 systemd[1]: cri-containerd-7a3bc422ce243a833e6bdc95669d9fb5ffb1fcd831cd0d8c45c2cf70497b397f.scope: Deactivated successfully. Jan 29 11:15:08.877972 kubelet[2473]: I0129 11:15:08.877509 2473 kubelet_node_status.go:488] "Fast updating node status as it just became ready" Jan 29 11:15:08.914006 systemd[1]: Created slice kubepods-burstable-pod30afe278_e679_4053_8ea3_2cb78a893cbb.slice - libcontainer container kubepods-burstable-pod30afe278_e679_4053_8ea3_2cb78a893cbb.slice. Jan 29 11:15:08.925546 systemd[1]: Created slice kubepods-burstable-podf63ecc27_e378_42d0_82e6_60ae3f3d15ee.slice - libcontainer container kubepods-burstable-podf63ecc27_e378_42d0_82e6_60ae3f3d15ee.slice. 
Jan 29 11:15:08.931572 kubelet[2473]: I0129 11:15:08.931433 2473 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vl2rm\" (UniqueName: \"kubernetes.io/projected/f63ecc27-e378-42d0-82e6-60ae3f3d15ee-kube-api-access-vl2rm\") pod \"coredns-6f6b679f8f-r9tr2\" (UID: \"f63ecc27-e378-42d0-82e6-60ae3f3d15ee\") " pod="kube-system/coredns-6f6b679f8f-r9tr2"
Jan 29 11:15:08.931572 kubelet[2473]: I0129 11:15:08.931475 2473 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/30afe278-e679-4053-8ea3-2cb78a893cbb-config-volume\") pod \"coredns-6f6b679f8f-479pj\" (UID: \"30afe278-e679-4053-8ea3-2cb78a893cbb\") " pod="kube-system/coredns-6f6b679f8f-479pj"
Jan 29 11:15:08.931572 kubelet[2473]: I0129 11:15:08.931493 2473 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/f63ecc27-e378-42d0-82e6-60ae3f3d15ee-config-volume\") pod \"coredns-6f6b679f8f-r9tr2\" (UID: \"f63ecc27-e378-42d0-82e6-60ae3f3d15ee\") " pod="kube-system/coredns-6f6b679f8f-r9tr2"
Jan 29 11:15:08.931572 kubelet[2473]: I0129 11:15:08.931511 2473 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8strq\" (UniqueName: \"kubernetes.io/projected/30afe278-e679-4053-8ea3-2cb78a893cbb-kube-api-access-8strq\") pod \"coredns-6f6b679f8f-479pj\" (UID: \"30afe278-e679-4053-8ea3-2cb78a893cbb\") " pod="kube-system/coredns-6f6b679f8f-479pj"
Jan 29 11:15:08.934981 containerd[1444]: time="2025-01-29T11:15:08.934844619Z" level=info msg="shim disconnected" id=7a3bc422ce243a833e6bdc95669d9fb5ffb1fcd831cd0d8c45c2cf70497b397f namespace=k8s.io
Jan 29 11:15:08.934981 containerd[1444]: time="2025-01-29T11:15:08.934898467Z" level=warning msg="cleaning up after shim disconnected" id=7a3bc422ce243a833e6bdc95669d9fb5ffb1fcd831cd0d8c45c2cf70497b397f namespace=k8s.io
Jan 29 11:15:08.934981 containerd[1444]: time="2025-01-29T11:15:08.934907388Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jan 29 11:15:09.141617 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-7a3bc422ce243a833e6bdc95669d9fb5ffb1fcd831cd0d8c45c2cf70497b397f-rootfs.mount: Deactivated successfully.
Jan 29 11:15:09.193851 kubelet[2473]: E0129 11:15:09.193811 2473 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 29 11:15:09.196982 containerd[1444]: time="2025-01-29T11:15:09.196948499Z" level=info msg="CreateContainer within sandbox \"0eed67205f4e285f09122e086239ef784bff697738da5ffd28cc448d77cbfc3e\" for container &ContainerMetadata{Name:kube-flannel,Attempt:0,}"
Jan 29 11:15:09.211139 containerd[1444]: time="2025-01-29T11:15:09.211093006Z" level=info msg="CreateContainer within sandbox \"0eed67205f4e285f09122e086239ef784bff697738da5ffd28cc448d77cbfc3e\" for &ContainerMetadata{Name:kube-flannel,Attempt:0,} returns container id \"d38274280e2b3bc9751c7dee864bf68b6c79d986ec1f8a594bf075c1918ba6aa\""
Jan 29 11:15:09.211575 containerd[1444]: time="2025-01-29T11:15:09.211545106Z" level=info msg="StartContainer for \"d38274280e2b3bc9751c7dee864bf68b6c79d986ec1f8a594bf075c1918ba6aa\""
Jan 29 11:15:09.220876 kubelet[2473]: E0129 11:15:09.220450 2473 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 29 11:15:09.221338 containerd[1444]: time="2025-01-29T11:15:09.221206741Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-479pj,Uid:30afe278-e679-4053-8ea3-2cb78a893cbb,Namespace:kube-system,Attempt:0,}"
Jan 29 11:15:09.230407 kubelet[2473]: E0129 11:15:09.230375 2473 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 29 11:15:09.232858 containerd[1444]: time="2025-01-29T11:15:09.232826314Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-r9tr2,Uid:f63ecc27-e378-42d0-82e6-60ae3f3d15ee,Namespace:kube-system,Attempt:0,}"
Jan 29 11:15:09.242003 systemd[1]: Started cri-containerd-d38274280e2b3bc9751c7dee864bf68b6c79d986ec1f8a594bf075c1918ba6aa.scope - libcontainer container d38274280e2b3bc9751c7dee864bf68b6c79d986ec1f8a594bf075c1918ba6aa.
Jan 29 11:15:09.276777 containerd[1444]: time="2025-01-29T11:15:09.274155689Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-479pj,Uid:30afe278-e679-4053-8ea3-2cb78a893cbb,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"dabafe7c3d74673a2cac943bbd19769ce1f515a47fdf54e047778c48216a26df\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory"
Jan 29 11:15:09.276777 containerd[1444]: time="2025-01-29T11:15:09.274787412Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-r9tr2,Uid:f63ecc27-e378-42d0-82e6-60ae3f3d15ee,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"53060764fee97ebfc1b7b150800911070ca57ea5a136d208d788d73e6b7b48d3\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory"
Jan 29 11:15:09.276930 kubelet[2473]: E0129 11:15:09.274398 2473 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"dabafe7c3d74673a2cac943bbd19769ce1f515a47fdf54e047778c48216a26df\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory"
Jan 29 11:15:09.276930 kubelet[2473]: E0129 11:15:09.274558 2473 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"dabafe7c3d74673a2cac943bbd19769ce1f515a47fdf54e047778c48216a26df\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory" pod="kube-system/coredns-6f6b679f8f-479pj"
Jan 29 11:15:09.276930 kubelet[2473]: E0129 11:15:09.274584 2473 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"dabafe7c3d74673a2cac943bbd19769ce1f515a47fdf54e047778c48216a26df\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory" pod="kube-system/coredns-6f6b679f8f-479pj"
Jan 29 11:15:09.276930 kubelet[2473]: E0129 11:15:09.274620 2473 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-6f6b679f8f-479pj_kube-system(30afe278-e679-4053-8ea3-2cb78a893cbb)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-6f6b679f8f-479pj_kube-system(30afe278-e679-4053-8ea3-2cb78a893cbb)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"dabafe7c3d74673a2cac943bbd19769ce1f515a47fdf54e047778c48216a26df\\\": plugin type=\\\"flannel\\\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory\"" pod="kube-system/coredns-6f6b679f8f-479pj" podUID="30afe278-e679-4053-8ea3-2cb78a893cbb"
Jan 29 11:15:09.277478 kubelet[2473]: E0129 11:15:09.274928 2473 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"53060764fee97ebfc1b7b150800911070ca57ea5a136d208d788d73e6b7b48d3\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory"
Jan 29 11:15:09.277478 kubelet[2473]: E0129 11:15:09.274966 2473 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"53060764fee97ebfc1b7b150800911070ca57ea5a136d208d788d73e6b7b48d3\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory" pod="kube-system/coredns-6f6b679f8f-r9tr2"
Jan 29 11:15:09.277478 kubelet[2473]: E0129 11:15:09.274982 2473 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"53060764fee97ebfc1b7b150800911070ca57ea5a136d208d788d73e6b7b48d3\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory" pod="kube-system/coredns-6f6b679f8f-r9tr2"
Jan 29 11:15:09.277478 kubelet[2473]: E0129 11:15:09.275010 2473 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-6f6b679f8f-r9tr2_kube-system(f63ecc27-e378-42d0-82e6-60ae3f3d15ee)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-6f6b679f8f-r9tr2_kube-system(f63ecc27-e378-42d0-82e6-60ae3f3d15ee)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"53060764fee97ebfc1b7b150800911070ca57ea5a136d208d788d73e6b7b48d3\\\": plugin type=\\\"flannel\\\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory\"" pod="kube-system/coredns-6f6b679f8f-r9tr2" podUID="f63ecc27-e378-42d0-82e6-60ae3f3d15ee"
Jan 29 11:15:09.280549 containerd[1444]: time="2025-01-29T11:15:09.280514848Z" level=info msg="StartContainer for \"d38274280e2b3bc9751c7dee864bf68b6c79d986ec1f8a594bf075c1918ba6aa\" returns successfully"
Jan 29 11:15:09.349400 kubelet[2473]: E0129 11:15:09.349193 2473 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 29 11:15:10.109053 update_engine[1427]: I20250129 11:15:10.108974 1427 update_attempter.cc:509] Updating boot flags...
Jan 29 11:15:10.128959 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 39 scanned by (udev-worker) (3041)
Jan 29 11:15:10.141039 systemd[1]: run-netns-cni\x2d6834efc6\x2df1b5\x2d7e25\x2dd00b\x2d0d075b05c294.mount: Deactivated successfully.
Jan 29 11:15:10.141128 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-dabafe7c3d74673a2cac943bbd19769ce1f515a47fdf54e047778c48216a26df-shm.mount: Deactivated successfully.
Jan 29 11:15:10.153805 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 39 scanned by (udev-worker) (3044)
Jan 29 11:15:10.180758 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 39 scanned by (udev-worker) (3044)
Jan 29 11:15:10.199918 kubelet[2473]: E0129 11:15:10.199882 2473 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 29 11:15:10.212263 kubelet[2473]: I0129 11:15:10.211189 2473 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-flannel/kube-flannel-ds-ptmnb" podStartSLOduration=2.63843471 podStartE2EDuration="6.211174417s" podCreationTimestamp="2025-01-29 11:15:04 +0000 UTC" firstStartedPulling="2025-01-29 11:15:05.202180044 +0000 UTC m=+8.131326690" lastFinishedPulling="2025-01-29 11:15:08.774919751 +0000 UTC m=+11.704066397" observedRunningTime="2025-01-29 11:15:10.209462283 +0000 UTC m=+13.138608929" watchObservedRunningTime="2025-01-29 11:15:10.211174417 +0000 UTC m=+13.140321063"
Jan 29 11:15:10.364715 systemd-networkd[1384]: flannel.1: Link UP
Jan 29 11:15:10.364721 systemd-networkd[1384]: flannel.1: Gained carrier
Jan 29 11:15:11.201540 kubelet[2473]: E0129 11:15:11.201507 2473 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 29 11:15:11.578883 systemd-networkd[1384]: flannel.1: Gained IPv6LL
Jan 29 11:15:21.927608 systemd[1]: Started sshd@5-10.0.0.125:22-10.0.0.1:54020.service - OpenSSH per-connection server daemon (10.0.0.1:54020).
Jan 29 11:15:21.968821 sshd[3174]: Accepted publickey for core from 10.0.0.1 port 54020 ssh2: RSA SHA256:Bq1DMYRFt3vwSJT5tcC1MQpWKmkwK1uKH+vc+Uts7DI
Jan 29 11:15:21.970254 sshd-session[3174]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 29 11:15:21.974534 systemd-logind[1425]: New session 6 of user core.
Jan 29 11:15:21.983916 systemd[1]: Started session-6.scope - Session 6 of User core.
Jan 29 11:15:22.104270 sshd[3176]: Connection closed by 10.0.0.1 port 54020
Jan 29 11:15:22.104829 sshd-session[3174]: pam_unix(sshd:session): session closed for user core
Jan 29 11:15:22.107942 systemd[1]: sshd@5-10.0.0.125:22-10.0.0.1:54020.service: Deactivated successfully.
Jan 29 11:15:22.109600 systemd[1]: session-6.scope: Deactivated successfully.
Jan 29 11:15:22.110257 systemd-logind[1425]: Session 6 logged out. Waiting for processes to exit.
Jan 29 11:15:22.111224 systemd-logind[1425]: Removed session 6.
Jan 29 11:15:22.154936 kubelet[2473]: E0129 11:15:22.154891 2473 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 29 11:15:22.155494 containerd[1444]: time="2025-01-29T11:15:22.155302468Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-479pj,Uid:30afe278-e679-4053-8ea3-2cb78a893cbb,Namespace:kube-system,Attempt:0,}"
Jan 29 11:15:22.184833 systemd-networkd[1384]: cni0: Link UP
Jan 29 11:15:22.185317 systemd-networkd[1384]: cni0: Gained carrier
Jan 29 11:15:22.185622 systemd-networkd[1384]: cni0: Lost carrier
Jan 29 11:15:22.190881 systemd-networkd[1384]: vethd51c0a54: Link UP
Jan 29 11:15:22.192033 kernel: cni0: port 1(vethd51c0a54) entered blocking state
Jan 29 11:15:22.192080 kernel: cni0: port 1(vethd51c0a54) entered disabled state
Jan 29 11:15:22.192744 kernel: vethd51c0a54: entered allmulticast mode
Jan 29 11:15:22.194114 kernel: vethd51c0a54: entered promiscuous mode
Jan 29 11:15:22.194168 kernel: cni0: port 1(vethd51c0a54) entered blocking state
Jan 29 11:15:22.194181 kernel: cni0: port 1(vethd51c0a54) entered forwarding state
Jan 29 11:15:22.195757 kernel: cni0: port 1(vethd51c0a54) entered disabled state
Jan 29 11:15:22.206311 kernel: cni0: port 1(vethd51c0a54) entered blocking state
Jan 29 11:15:22.206389 kernel: cni0: port 1(vethd51c0a54) entered forwarding state
Jan 29 11:15:22.206336 systemd-networkd[1384]: vethd51c0a54: Gained carrier
Jan 29 11:15:22.206574 systemd-networkd[1384]: cni0: Gained carrier
Jan 29 11:15:22.208023 containerd[1444]: map[string]interface {}{"cniVersion":"0.3.1", "hairpinMode":true, "ipMasq":false, "ipam":map[string]interface {}{"ranges":[][]map[string]interface {}{[]map[string]interface {}{map[string]interface {}{"subnet":"192.168.0.0/24"}}}, "routes":[]types.Route{types.Route{Dst:net.IPNet{IP:net.IP{0xc0, 0xa8, 0x0, 0x0}, Mask:net.IPMask{0xff, 0xff, 0x80, 0x0}}, GW:net.IP(nil)}}, "type":"host-local"}, "isDefaultGateway":true, "isGateway":true, "mtu":(*uint)(0x4000018938), "name":"cbr0", "type":"bridge"}
Jan 29 11:15:22.208023 containerd[1444]: delegateAdd: netconf sent to delegate plugin:
Jan 29 11:15:22.223955 containerd[1444]: {"cniVersion":"0.3.1","hairpinMode":true,"ipMasq":false,"ipam":{"ranges":[[{"subnet":"192.168.0.0/24"}]],"routes":[{"dst":"192.168.0.0/17"}],"type":"host-local"},"isDefaultGateway":true,"isGateway":true,"mtu":1450,"name":"cbr0","type":"bridge"}time="2025-01-29T11:15:22.223874901Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jan 29 11:15:22.224193 containerd[1444]: time="2025-01-29T11:15:22.223936665Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jan 29 11:15:22.224193 containerd[1444]: time="2025-01-29T11:15:22.223951026Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 29 11:15:22.224193 containerd[1444]: time="2025-01-29T11:15:22.224038993Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 29 11:15:22.248003 systemd[1]: Started cri-containerd-bbd2a2b0dd1d73307cc6f551a9420f7a0bb81361ec2de5ad4d0d181912bd5c51.scope - libcontainer container bbd2a2b0dd1d73307cc6f551a9420f7a0bb81361ec2de5ad4d0d181912bd5c51.
Jan 29 11:15:22.258973 systemd-resolved[1309]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address
Jan 29 11:15:22.275257 containerd[1444]: time="2025-01-29T11:15:22.275171806Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-479pj,Uid:30afe278-e679-4053-8ea3-2cb78a893cbb,Namespace:kube-system,Attempt:0,} returns sandbox id \"bbd2a2b0dd1d73307cc6f551a9420f7a0bb81361ec2de5ad4d0d181912bd5c51\""
Jan 29 11:15:22.275984 kubelet[2473]: E0129 11:15:22.275953 2473 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 29 11:15:22.279315 containerd[1444]: time="2025-01-29T11:15:22.278937798Z" level=info msg="CreateContainer within sandbox \"bbd2a2b0dd1d73307cc6f551a9420f7a0bb81361ec2de5ad4d0d181912bd5c51\" for container &ContainerMetadata{Name:coredns,Attempt:0,}"
Jan 29 11:15:22.300080 containerd[1444]: time="2025-01-29T11:15:22.300027801Z" level=info msg="CreateContainer within sandbox \"bbd2a2b0dd1d73307cc6f551a9420f7a0bb81361ec2de5ad4d0d181912bd5c51\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"1f02ee3012683f632410ed21de7c3d22ce4924c4b7fa0d9d49b104a3383954cb\""
Jan 29 11:15:22.301005 containerd[1444]: time="2025-01-29T11:15:22.300905505Z" level=info msg="StartContainer for \"1f02ee3012683f632410ed21de7c3d22ce4924c4b7fa0d9d49b104a3383954cb\""
Jan 29 11:15:22.329932 systemd[1]: Started cri-containerd-1f02ee3012683f632410ed21de7c3d22ce4924c4b7fa0d9d49b104a3383954cb.scope - libcontainer container 1f02ee3012683f632410ed21de7c3d22ce4924c4b7fa0d9d49b104a3383954cb.
Jan 29 11:15:22.360484 containerd[1444]: time="2025-01-29T11:15:22.360389841Z" level=info msg="StartContainer for \"1f02ee3012683f632410ed21de7c3d22ce4924c4b7fa0d9d49b104a3383954cb\" returns successfully"
Jan 29 11:15:23.155691 kubelet[2473]: E0129 11:15:23.155217 2473 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 29 11:15:23.156838 containerd[1444]: time="2025-01-29T11:15:23.156599874Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-r9tr2,Uid:f63ecc27-e378-42d0-82e6-60ae3f3d15ee,Namespace:kube-system,Attempt:0,}"
Jan 29 11:15:23.184141 systemd-networkd[1384]: veth057592d2: Link UP
Jan 29 11:15:23.187216 kernel: cni0: port 2(veth057592d2) entered blocking state
Jan 29 11:15:23.187325 kernel: cni0: port 2(veth057592d2) entered disabled state
Jan 29 11:15:23.187344 kernel: veth057592d2: entered allmulticast mode
Jan 29 11:15:23.187991 kernel: veth057592d2: entered promiscuous mode
Jan 29 11:15:23.188756 kernel: cni0: port 2(veth057592d2) entered blocking state
Jan 29 11:15:23.188807 kernel: cni0: port 2(veth057592d2) entered forwarding state
Jan 29 11:15:23.192827 kernel: cni0: port 2(veth057592d2) entered disabled state
Jan 29 11:15:23.198150 kernel: cni0: port 2(veth057592d2) entered blocking state
Jan 29 11:15:23.198219 kernel: cni0: port 2(veth057592d2) entered forwarding state
Jan 29 11:15:23.198024 systemd-networkd[1384]: veth057592d2: Gained carrier
Jan 29 11:15:23.199656 containerd[1444]: map[string]interface {}{"cniVersion":"0.3.1", "hairpinMode":true, "ipMasq":false, "ipam":map[string]interface {}{"ranges":[][]map[string]interface {}{[]map[string]interface {}{map[string]interface {}{"subnet":"192.168.0.0/24"}}}, "routes":[]types.Route{types.Route{Dst:net.IPNet{IP:net.IP{0xc0, 0xa8, 0x0, 0x0}, Mask:net.IPMask{0xff, 0xff, 0x80, 0x0}}, GW:net.IP(nil)}}, "type":"host-local"}, "isDefaultGateway":true, "isGateway":true, "mtu":(*uint)(0x4000018938), "name":"cbr0", "type":"bridge"}
Jan 29 11:15:23.199656 containerd[1444]: delegateAdd: netconf sent to delegate plugin:
Jan 29 11:15:23.220214 containerd[1444]: {"cniVersion":"0.3.1","hairpinMode":true,"ipMasq":false,"ipam":{"ranges":[[{"subnet":"192.168.0.0/24"}]],"routes":[{"dst":"192.168.0.0/17"}],"type":"host-local"},"isDefaultGateway":true,"isGateway":true,"mtu":1450,"name":"cbr0","type":"bridge"}time="2025-01-29T11:15:23.219550201Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jan 29 11:15:23.220214 containerd[1444]: time="2025-01-29T11:15:23.219944749Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jan 29 11:15:23.220214 containerd[1444]: time="2025-01-29T11:15:23.219957630Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 29 11:15:23.220214 containerd[1444]: time="2025-01-29T11:15:23.220041916Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 29 11:15:23.245383 kubelet[2473]: E0129 11:15:23.244947 2473 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 29 11:15:23.246930 systemd[1]: Started cri-containerd-baee2487c09b9d8fcd168bc0873af1242a80db7e0cf0a6b8c0017e3d31488e53.scope - libcontainer container baee2487c09b9d8fcd168bc0873af1242a80db7e0cf0a6b8c0017e3d31488e53.
Jan 29 11:15:23.267701 systemd-resolved[1309]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address
Jan 29 11:15:23.268138 kubelet[2473]: I0129 11:15:23.267249 2473 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-6f6b679f8f-479pj" podStartSLOduration=19.26723335 podStartE2EDuration="19.26723335s" podCreationTimestamp="2025-01-29 11:15:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-29 11:15:23.267143144 +0000 UTC m=+26.196289750" watchObservedRunningTime="2025-01-29 11:15:23.26723335 +0000 UTC m=+26.196379956"
Jan 29 11:15:23.293086 containerd[1444]: time="2025-01-29T11:15:23.293049141Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-r9tr2,Uid:f63ecc27-e378-42d0-82e6-60ae3f3d15ee,Namespace:kube-system,Attempt:0,} returns sandbox id \"baee2487c09b9d8fcd168bc0873af1242a80db7e0cf0a6b8c0017e3d31488e53\""
Jan 29 11:15:23.294384 kubelet[2473]: E0129 11:15:23.293951 2473 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 29 11:15:23.297872 containerd[1444]: time="2025-01-29T11:15:23.297710785Z" level=info msg="CreateContainer within sandbox \"baee2487c09b9d8fcd168bc0873af1242a80db7e0cf0a6b8c0017e3d31488e53\" for container &ContainerMetadata{Name:coredns,Attempt:0,}"
Jan 29 11:15:23.309215 containerd[1444]: time="2025-01-29T11:15:23.309167420Z" level=info msg="CreateContainer within sandbox \"baee2487c09b9d8fcd168bc0873af1242a80db7e0cf0a6b8c0017e3d31488e53\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"dc1fa31accbfd60e58a97840ccd73daf3ca00022b3b46e50fd5c980bc7169afe\""
Jan 29 11:15:23.310637 containerd[1444]: time="2025-01-29T11:15:23.310608560Z" level=info msg="StartContainer for \"dc1fa31accbfd60e58a97840ccd73daf3ca00022b3b46e50fd5c980bc7169afe\""
Jan 29 11:15:23.334944 systemd[1]: Started cri-containerd-dc1fa31accbfd60e58a97840ccd73daf3ca00022b3b46e50fd5c980bc7169afe.scope - libcontainer container dc1fa31accbfd60e58a97840ccd73daf3ca00022b3b46e50fd5c980bc7169afe.
Jan 29 11:15:23.363251 containerd[1444]: time="2025-01-29T11:15:23.363206089Z" level=info msg="StartContainer for \"dc1fa31accbfd60e58a97840ccd73daf3ca00022b3b46e50fd5c980bc7169afe\" returns successfully"
Jan 29 11:15:23.802913 systemd-networkd[1384]: cni0: Gained IPv6LL
Jan 29 11:15:23.994885 systemd-networkd[1384]: vethd51c0a54: Gained IPv6LL
Jan 29 11:15:24.260312 kubelet[2473]: E0129 11:15:24.260160 2473 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 29 11:15:24.260312 kubelet[2473]: E0129 11:15:24.260178 2473 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 29 11:15:24.268986 kubelet[2473]: I0129 11:15:24.268828 2473 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-6f6b679f8f-r9tr2" podStartSLOduration=20.268813732 podStartE2EDuration="20.268813732s" podCreationTimestamp="2025-01-29 11:15:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-29 11:15:24.26863384 +0000 UTC m=+27.197780526" watchObservedRunningTime="2025-01-29 11:15:24.268813732 +0000 UTC m=+27.197960338"
Jan 29 11:15:25.018839 systemd-networkd[1384]: veth057592d2: Gained IPv6LL
Jan 29 11:15:25.261910 kubelet[2473]: E0129 11:15:25.261837 2473 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 29 11:15:27.118984 systemd[1]: Started sshd@6-10.0.0.125:22-10.0.0.1:52530.service - OpenSSH per-connection server daemon (10.0.0.1:52530).
Jan 29 11:15:27.160188 sshd[3448]: Accepted publickey for core from 10.0.0.1 port 52530 ssh2: RSA SHA256:Bq1DMYRFt3vwSJT5tcC1MQpWKmkwK1uKH+vc+Uts7DI
Jan 29 11:15:27.163631 sshd-session[3448]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 29 11:15:27.167765 systemd-logind[1425]: New session 7 of user core.
Jan 29 11:15:27.178911 systemd[1]: Started session-7.scope - Session 7 of User core.
Jan 29 11:15:27.291662 sshd[3450]: Connection closed by 10.0.0.1 port 52530
Jan 29 11:15:27.292027 sshd-session[3448]: pam_unix(sshd:session): session closed for user core
Jan 29 11:15:27.295686 systemd[1]: sshd@6-10.0.0.125:22-10.0.0.1:52530.service: Deactivated successfully.
Jan 29 11:15:27.297335 systemd[1]: session-7.scope: Deactivated successfully.
Jan 29 11:15:27.299400 systemd-logind[1425]: Session 7 logged out. Waiting for processes to exit.
Jan 29 11:15:27.300220 systemd-logind[1425]: Removed session 7.
Jan 29 11:15:29.231386 kubelet[2473]: E0129 11:15:29.231339 2473 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 29 11:15:29.268503 kubelet[2473]: E0129 11:15:29.268477 2473 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 29 11:15:32.305388 systemd[1]: Started sshd@7-10.0.0.125:22-10.0.0.1:52542.service - OpenSSH per-connection server daemon (10.0.0.1:52542).
Jan 29 11:15:32.343959 sshd[3491]: Accepted publickey for core from 10.0.0.1 port 52542 ssh2: RSA SHA256:Bq1DMYRFt3vwSJT5tcC1MQpWKmkwK1uKH+vc+Uts7DI
Jan 29 11:15:32.345155 sshd-session[3491]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 29 11:15:32.348862 systemd-logind[1425]: New session 8 of user core.
Jan 29 11:15:32.359927 systemd[1]: Started session-8.scope - Session 8 of User core.
Jan 29 11:15:32.470077 sshd[3493]: Connection closed by 10.0.0.1 port 52542
Jan 29 11:15:32.470580 sshd-session[3491]: pam_unix(sshd:session): session closed for user core
Jan 29 11:15:32.481373 systemd[1]: sshd@7-10.0.0.125:22-10.0.0.1:52542.service: Deactivated successfully.
Jan 29 11:15:32.483520 systemd[1]: session-8.scope: Deactivated successfully.
Jan 29 11:15:32.484765 systemd-logind[1425]: Session 8 logged out. Waiting for processes to exit.
Jan 29 11:15:32.487956 systemd[1]: Started sshd@8-10.0.0.125:22-10.0.0.1:58216.service - OpenSSH per-connection server daemon (10.0.0.1:58216).
Jan 29 11:15:32.490138 systemd-logind[1425]: Removed session 8.
Jan 29 11:15:32.539868 sshd[3506]: Accepted publickey for core from 10.0.0.1 port 58216 ssh2: RSA SHA256:Bq1DMYRFt3vwSJT5tcC1MQpWKmkwK1uKH+vc+Uts7DI
Jan 29 11:15:32.541082 sshd-session[3506]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 29 11:15:32.544788 systemd-logind[1425]: New session 9 of user core.
Jan 29 11:15:32.556916 systemd[1]: Started session-9.scope - Session 9 of User core.
Jan 29 11:15:32.699667 sshd[3508]: Connection closed by 10.0.0.1 port 58216
Jan 29 11:15:32.700189 sshd-session[3506]: pam_unix(sshd:session): session closed for user core
Jan 29 11:15:32.708551 systemd[1]: sshd@8-10.0.0.125:22-10.0.0.1:58216.service: Deactivated successfully.
Jan 29 11:15:32.710994 systemd[1]: session-9.scope: Deactivated successfully.
Jan 29 11:15:32.714111 systemd-logind[1425]: Session 9 logged out. Waiting for processes to exit.
Jan 29 11:15:32.719187 systemd[1]: Started sshd@9-10.0.0.125:22-10.0.0.1:58218.service - OpenSSH per-connection server daemon (10.0.0.1:58218).
Jan 29 11:15:32.719951 systemd-logind[1425]: Removed session 9.
Jan 29 11:15:32.766710 sshd[3519]: Accepted publickey for core from 10.0.0.1 port 58218 ssh2: RSA SHA256:Bq1DMYRFt3vwSJT5tcC1MQpWKmkwK1uKH+vc+Uts7DI
Jan 29 11:15:32.767244 sshd-session[3519]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 29 11:15:32.771160 systemd-logind[1425]: New session 10 of user core.
Jan 29 11:15:32.781888 systemd[1]: Started session-10.scope - Session 10 of User core.
Jan 29 11:15:32.894079 sshd[3521]: Connection closed by 10.0.0.1 port 58218
Jan 29 11:15:32.893271 sshd-session[3519]: pam_unix(sshd:session): session closed for user core
Jan 29 11:15:32.897072 systemd[1]: sshd@9-10.0.0.125:22-10.0.0.1:58218.service: Deactivated successfully.
Jan 29 11:15:32.899192 systemd[1]: session-10.scope: Deactivated successfully.
Jan 29 11:15:32.899924 systemd-logind[1425]: Session 10 logged out. Waiting for processes to exit.
Jan 29 11:15:32.900707 systemd-logind[1425]: Removed session 10.
Jan 29 11:15:37.905220 systemd[1]: Started sshd@10-10.0.0.125:22-10.0.0.1:58228.service - OpenSSH per-connection server daemon (10.0.0.1:58228).
Jan 29 11:15:37.943365 sshd[3558]: Accepted publickey for core from 10.0.0.1 port 58228 ssh2: RSA SHA256:Bq1DMYRFt3vwSJT5tcC1MQpWKmkwK1uKH+vc+Uts7DI
Jan 29 11:15:37.944543 sshd-session[3558]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 29 11:15:37.948448 systemd-logind[1425]: New session 11 of user core.
Jan 29 11:15:37.961935 systemd[1]: Started session-11.scope - Session 11 of User core.
Jan 29 11:15:38.073553 sshd[3560]: Connection closed by 10.0.0.1 port 58228
Jan 29 11:15:38.074051 sshd-session[3558]: pam_unix(sshd:session): session closed for user core
Jan 29 11:15:38.084410 systemd[1]: sshd@10-10.0.0.125:22-10.0.0.1:58228.service: Deactivated successfully.
Jan 29 11:15:38.086089 systemd[1]: session-11.scope: Deactivated successfully.
Jan 29 11:15:38.088481 systemd-logind[1425]: Session 11 logged out. Waiting for processes to exit.
Jan 29 11:15:38.098003 systemd[1]: Started sshd@11-10.0.0.125:22-10.0.0.1:58232.service - OpenSSH per-connection server daemon (10.0.0.1:58232).
Jan 29 11:15:38.099313 systemd-logind[1425]: Removed session 11.
Jan 29 11:15:38.134096 sshd[3573]: Accepted publickey for core from 10.0.0.1 port 58232 ssh2: RSA SHA256:Bq1DMYRFt3vwSJT5tcC1MQpWKmkwK1uKH+vc+Uts7DI
Jan 29 11:15:38.135216 sshd-session[3573]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 29 11:15:38.138653 systemd-logind[1425]: New session 12 of user core.
Jan 29 11:15:38.146900 systemd[1]: Started session-12.scope - Session 12 of User core.
Jan 29 11:15:38.330787 sshd[3575]: Connection closed by 10.0.0.1 port 58232
Jan 29 11:15:38.331275 sshd-session[3573]: pam_unix(sshd:session): session closed for user core
Jan 29 11:15:38.343154 systemd[1]: sshd@11-10.0.0.125:22-10.0.0.1:58232.service: Deactivated successfully.
Jan 29 11:15:38.344682 systemd[1]: session-12.scope: Deactivated successfully.
Jan 29 11:15:38.346973 systemd-logind[1425]: Session 12 logged out. Waiting for processes to exit.
Jan 29 11:15:38.349008 systemd[1]: Started sshd@12-10.0.0.125:22-10.0.0.1:58244.service - OpenSSH per-connection server daemon (10.0.0.1:58244).
Jan 29 11:15:38.349822 systemd-logind[1425]: Removed session 12.
Jan 29 11:15:38.385918 sshd[3585]: Accepted publickey for core from 10.0.0.1 port 58244 ssh2: RSA SHA256:Bq1DMYRFt3vwSJT5tcC1MQpWKmkwK1uKH+vc+Uts7DI
Jan 29 11:15:38.387084 sshd-session[3585]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 29 11:15:38.390823 systemd-logind[1425]: New session 13 of user core.
Jan 29 11:15:38.400888 systemd[1]: Started session-13.scope - Session 13 of User core.
Jan 29 11:15:39.525414 sshd[3587]: Connection closed by 10.0.0.1 port 58244
Jan 29 11:15:39.525897 sshd-session[3585]: pam_unix(sshd:session): session closed for user core
Jan 29 11:15:39.534492 systemd[1]: sshd@12-10.0.0.125:22-10.0.0.1:58244.service: Deactivated successfully.
Jan 29 11:15:39.538284 systemd[1]: session-13.scope: Deactivated successfully.
Jan 29 11:15:39.544849 systemd-logind[1425]: Session 13 logged out. Waiting for processes to exit.
Jan 29 11:15:39.548014 systemd[1]: Started sshd@13-10.0.0.125:22-10.0.0.1:58256.service - OpenSSH per-connection server daemon (10.0.0.1:58256).
Jan 29 11:15:39.550837 systemd-logind[1425]: Removed session 13.
Jan 29 11:15:39.607489 sshd[3606]: Accepted publickey for core from 10.0.0.1 port 58256 ssh2: RSA SHA256:Bq1DMYRFt3vwSJT5tcC1MQpWKmkwK1uKH+vc+Uts7DI
Jan 29 11:15:39.608780 sshd-session[3606]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 29 11:15:39.612237 systemd-logind[1425]: New session 14 of user core.
Jan 29 11:15:39.623934 systemd[1]: Started session-14.scope - Session 14 of User core.
Jan 29 11:15:39.836708 sshd[3609]: Connection closed by 10.0.0.1 port 58256
Jan 29 11:15:39.837044 sshd-session[3606]: pam_unix(sshd:session): session closed for user core
Jan 29 11:15:39.853334 systemd[1]: sshd@13-10.0.0.125:22-10.0.0.1:58256.service: Deactivated successfully.
Jan 29 11:15:39.854939 systemd[1]: session-14.scope: Deactivated successfully.
Jan 29 11:15:39.856216 systemd-logind[1425]: Session 14 logged out. Waiting for processes to exit.
Jan 29 11:15:39.857419 systemd[1]: Started sshd@14-10.0.0.125:22-10.0.0.1:58266.service - OpenSSH per-connection server daemon (10.0.0.1:58266).
Jan 29 11:15:39.858402 systemd-logind[1425]: Removed session 14.
Jan 29 11:15:39.896475 sshd[3620]: Accepted publickey for core from 10.0.0.1 port 58266 ssh2: RSA SHA256:Bq1DMYRFt3vwSJT5tcC1MQpWKmkwK1uKH+vc+Uts7DI
Jan 29 11:15:39.897738 sshd-session[3620]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 29 11:15:39.901349 systemd-logind[1425]: New session 15 of user core.
Jan 29 11:15:39.915908 systemd[1]: Started session-15.scope - Session 15 of User core.
Jan 29 11:15:40.024362 sshd[3622]: Connection closed by 10.0.0.1 port 58266
Jan 29 11:15:40.024905 sshd-session[3620]: pam_unix(sshd:session): session closed for user core
Jan 29 11:15:40.027376 systemd[1]: sshd@14-10.0.0.125:22-10.0.0.1:58266.service: Deactivated successfully.
Jan 29 11:15:40.029856 systemd[1]: session-15.scope: Deactivated successfully.
Jan 29 11:15:40.031571 systemd-logind[1425]: Session 15 logged out. Waiting for processes to exit.
Jan 29 11:15:40.032579 systemd-logind[1425]: Removed session 15.
Jan 29 11:15:45.036956 systemd[1]: Started sshd@15-10.0.0.125:22-10.0.0.1:32874.service - OpenSSH per-connection server daemon (10.0.0.1:32874).
Jan 29 11:15:45.076438 sshd[3659]: Accepted publickey for core from 10.0.0.1 port 32874 ssh2: RSA SHA256:Bq1DMYRFt3vwSJT5tcC1MQpWKmkwK1uKH+vc+Uts7DI
Jan 29 11:15:45.077598 sshd-session[3659]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 29 11:15:45.081129 systemd-logind[1425]: New session 16 of user core.
Jan 29 11:15:45.089973 systemd[1]: Started session-16.scope - Session 16 of User core.
Jan 29 11:15:45.199290 sshd[3661]: Connection closed by 10.0.0.1 port 32874
Jan 29 11:15:45.199813 sshd-session[3659]: pam_unix(sshd:session): session closed for user core
Jan 29 11:15:45.203112 systemd[1]: sshd@15-10.0.0.125:22-10.0.0.1:32874.service: Deactivated successfully.
Jan 29 11:15:45.205972 systemd[1]: session-16.scope: Deactivated successfully.
Jan 29 11:15:45.206564 systemd-logind[1425]: Session 16 logged out. Waiting for processes to exit.
Jan 29 11:15:45.207557 systemd-logind[1425]: Removed session 16.
Jan 29 11:15:50.210263 systemd[1]: Started sshd@16-10.0.0.125:22-10.0.0.1:32880.service - OpenSSH per-connection server daemon (10.0.0.1:32880).
Jan 29 11:15:50.248160 sshd[3694]: Accepted publickey for core from 10.0.0.1 port 32880 ssh2: RSA SHA256:Bq1DMYRFt3vwSJT5tcC1MQpWKmkwK1uKH+vc+Uts7DI
Jan 29 11:15:50.249410 sshd-session[3694]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 29 11:15:50.252790 systemd-logind[1425]: New session 17 of user core.
Jan 29 11:15:50.261881 systemd[1]: Started session-17.scope - Session 17 of User core.
Jan 29 11:15:50.370580 sshd[3696]: Connection closed by 10.0.0.1 port 32880
Jan 29 11:15:50.371113 sshd-session[3694]: pam_unix(sshd:session): session closed for user core
Jan 29 11:15:50.374405 systemd[1]: sshd@16-10.0.0.125:22-10.0.0.1:32880.service: Deactivated successfully.
Jan 29 11:15:50.376199 systemd[1]: session-17.scope: Deactivated successfully.
Jan 29 11:15:50.377187 systemd-logind[1425]: Session 17 logged out. Waiting for processes to exit.
Jan 29 11:15:50.378107 systemd-logind[1425]: Removed session 17.
Jan 29 11:15:55.382323 systemd[1]: Started sshd@17-10.0.0.125:22-10.0.0.1:58992.service - OpenSSH per-connection server daemon (10.0.0.1:58992).
Jan 29 11:15:55.421557 sshd[3729]: Accepted publickey for core from 10.0.0.1 port 58992 ssh2: RSA SHA256:Bq1DMYRFt3vwSJT5tcC1MQpWKmkwK1uKH+vc+Uts7DI
Jan 29 11:15:55.422849 sshd-session[3729]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 29 11:15:55.426623 systemd-logind[1425]: New session 18 of user core.
Jan 29 11:15:55.434897 systemd[1]: Started session-18.scope - Session 18 of User core.
Jan 29 11:15:55.545643 sshd[3731]: Connection closed by 10.0.0.1 port 58992
Jan 29 11:15:55.546358 sshd-session[3729]: pam_unix(sshd:session): session closed for user core
Jan 29 11:15:55.550113 systemd[1]: sshd@17-10.0.0.125:22-10.0.0.1:58992.service: Deactivated successfully.
Jan 29 11:15:55.551949 systemd[1]: session-18.scope: Deactivated successfully.
Jan 29 11:15:55.552684 systemd-logind[1425]: Session 18 logged out. Waiting for processes to exit.
Jan 29 11:15:55.553629 systemd-logind[1425]: Removed session 18.