Jan 13 21:18:13.910255 kernel: Booting Linux on physical CPU 0x0000000000 [0x413fd0c1]
Jan 13 21:18:13.910275 kernel: Linux version 6.6.71-flatcar (build@pony-truck.infra.kinvolk.io) (aarch64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p17) 13.3.1 20240614, GNU ld (Gentoo 2.42 p3) 2.42.0) #1 SMP PREEMPT Mon Jan 13 19:43:39 -00 2025
Jan 13 21:18:13.910285 kernel: KASLR enabled
Jan 13 21:18:13.910291 kernel: efi: EFI v2.7 by EDK II
Jan 13 21:18:13.910297 kernel: efi: SMBIOS 3.0=0xdced0000 MEMATTR=0xdba86018 ACPI 2.0=0xd9710018 RNG=0xd971e498 MEMRESERVE=0xd9b43d18
Jan 13 21:18:13.910303 kernel: random: crng init done
Jan 13 21:18:13.910310 kernel: ACPI: Early table checksum verification disabled
Jan 13 21:18:13.910316 kernel: ACPI: RSDP 0x00000000D9710018 000024 (v02 BOCHS )
Jan 13 21:18:13.910322 kernel: ACPI: XSDT 0x00000000D971FE98 000064 (v01 BOCHS BXPC 00000001 01000013)
Jan 13 21:18:13.910329 kernel: ACPI: FACP 0x00000000D971FA98 000114 (v06 BOCHS BXPC 00000001 BXPC 00000001)
Jan 13 21:18:13.910335 kernel: ACPI: DSDT 0x00000000D9717518 0014A2 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Jan 13 21:18:13.910341 kernel: ACPI: APIC 0x00000000D971FC18 0001A8 (v04 BOCHS BXPC 00000001 BXPC 00000001)
Jan 13 21:18:13.910347 kernel: ACPI: PPTT 0x00000000D971D898 00009C (v02 BOCHS BXPC 00000001 BXPC 00000001)
Jan 13 21:18:13.910353 kernel: ACPI: GTDT 0x00000000D971E818 000060 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Jan 13 21:18:13.910361 kernel: ACPI: MCFG 0x00000000D971E918 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jan 13 21:18:13.910369 kernel: ACPI: SPCR 0x00000000D971FF98 000050 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Jan 13 21:18:13.910376 kernel: ACPI: DBG2 0x00000000D971E418 000057 (v00 BOCHS BXPC 00000001 BXPC 00000001)
Jan 13 21:18:13.910382 kernel: ACPI: IORT 0x00000000D971E718 000080 (v03 BOCHS BXPC 00000001 BXPC 00000001)
Jan 13 21:18:13.910388 kernel: ACPI: SPCR: console: pl011,mmio,0x9000000,9600
Jan 13 21:18:13.910394 kernel: NUMA: Failed to initialise from firmware
Jan 13 21:18:13.910401 kernel: NUMA: Faking a node at [mem 0x0000000040000000-0x00000000dcffffff]
Jan 13 21:18:13.910407 kernel: NUMA: NODE_DATA [mem 0xdc958800-0xdc95dfff]
Jan 13 21:18:13.910413 kernel: Zone ranges:
Jan 13 21:18:13.910420 kernel: DMA [mem 0x0000000040000000-0x00000000dcffffff]
Jan 13 21:18:13.910426 kernel: DMA32 empty
Jan 13 21:18:13.910434 kernel: Normal empty
Jan 13 21:18:13.910440 kernel: Movable zone start for each node
Jan 13 21:18:13.910446 kernel: Early memory node ranges
Jan 13 21:18:13.910453 kernel: node 0: [mem 0x0000000040000000-0x00000000d976ffff]
Jan 13 21:18:13.910459 kernel: node 0: [mem 0x00000000d9770000-0x00000000d9b3ffff]
Jan 13 21:18:13.910465 kernel: node 0: [mem 0x00000000d9b40000-0x00000000dce1ffff]
Jan 13 21:18:13.910472 kernel: node 0: [mem 0x00000000dce20000-0x00000000dceaffff]
Jan 13 21:18:13.910478 kernel: node 0: [mem 0x00000000dceb0000-0x00000000dcebffff]
Jan 13 21:18:13.910484 kernel: node 0: [mem 0x00000000dcec0000-0x00000000dcfdffff]
Jan 13 21:18:13.910490 kernel: node 0: [mem 0x00000000dcfe0000-0x00000000dcffffff]
Jan 13 21:18:13.910497 kernel: Initmem setup node 0 [mem 0x0000000040000000-0x00000000dcffffff]
Jan 13 21:18:13.910503 kernel: On node 0, zone DMA: 12288 pages in unavailable ranges
Jan 13 21:18:13.910511 kernel: psci: probing for conduit method from ACPI.
Jan 13 21:18:13.910517 kernel: psci: PSCIv1.1 detected in firmware.
Jan 13 21:18:13.910524 kernel: psci: Using standard PSCI v0.2 function IDs
Jan 13 21:18:13.910532 kernel: psci: Trusted OS migration not required
Jan 13 21:18:13.910539 kernel: psci: SMC Calling Convention v1.1
Jan 13 21:18:13.910546 kernel: smccc: KVM: hypervisor services detected (0x00000000 0x00000000 0x00000000 0x00000003)
Jan 13 21:18:13.910554 kernel: percpu: Embedded 31 pages/cpu s86696 r8192 d32088 u126976
Jan 13 21:18:13.910561 kernel: pcpu-alloc: s86696 r8192 d32088 u126976 alloc=31*4096
Jan 13 21:18:13.910568 kernel: pcpu-alloc: [0] 0 [0] 1 [0] 2 [0] 3
Jan 13 21:18:13.910574 kernel: Detected PIPT I-cache on CPU0
Jan 13 21:18:13.910581 kernel: CPU features: detected: GIC system register CPU interface
Jan 13 21:18:13.910588 kernel: CPU features: detected: Hardware dirty bit management
Jan 13 21:18:13.910655 kernel: CPU features: detected: Spectre-v4
Jan 13 21:18:13.910663 kernel: CPU features: detected: Spectre-BHB
Jan 13 21:18:13.910670 kernel: CPU features: kernel page table isolation forced ON by KASLR
Jan 13 21:18:13.910676 kernel: CPU features: detected: Kernel page table isolation (KPTI)
Jan 13 21:18:13.910686 kernel: CPU features: detected: ARM erratum 1418040
Jan 13 21:18:13.910693 kernel: CPU features: detected: SSBS not fully self-synchronizing
Jan 13 21:18:13.910700 kernel: alternatives: applying boot alternatives
Jan 13 21:18:13.910708 kernel: Kernel command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected acpi=force verity.usrhash=c6a3a48cbc65bf640516dc59d6b026e304001b7b3125ecbabbbe9ce0bd8888f0
Jan 13 21:18:13.910715 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Jan 13 21:18:13.910722 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Jan 13 21:18:13.910729 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Jan 13 21:18:13.910736 kernel: Fallback order for Node 0: 0
Jan 13 21:18:13.910743 kernel: Built 1 zonelists, mobility grouping on. Total pages: 633024
Jan 13 21:18:13.910749 kernel: Policy zone: DMA
Jan 13 21:18:13.910756 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Jan 13 21:18:13.910764 kernel: software IO TLB: area num 4.
Jan 13 21:18:13.910771 kernel: software IO TLB: mapped [mem 0x00000000d2e00000-0x00000000d6e00000] (64MB)
Jan 13 21:18:13.910779 kernel: Memory: 2386532K/2572288K available (10240K kernel code, 2184K rwdata, 8096K rodata, 39360K init, 897K bss, 185756K reserved, 0K cma-reserved)
Jan 13 21:18:13.910786 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1
Jan 13 21:18:13.910792 kernel: rcu: Preemptible hierarchical RCU implementation.
Jan 13 21:18:13.910800 kernel: rcu: RCU event tracing is enabled.
Jan 13 21:18:13.910807 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4.
Jan 13 21:18:13.910814 kernel: Trampoline variant of Tasks RCU enabled.
Jan 13 21:18:13.910821 kernel: Tracing variant of Tasks RCU enabled.
Jan 13 21:18:13.910827 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Jan 13 21:18:13.910834 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4
Jan 13 21:18:13.910841 kernel: NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0
Jan 13 21:18:13.910849 kernel: GICv3: 256 SPIs implemented
Jan 13 21:18:13.910856 kernel: GICv3: 0 Extended SPIs implemented
Jan 13 21:18:13.910863 kernel: Root IRQ handler: gic_handle_irq
Jan 13 21:18:13.910869 kernel: GICv3: GICv3 features: 16 PPIs, DirectLPI
Jan 13 21:18:13.910876 kernel: GICv3: CPU0: found redistributor 0 region 0:0x00000000080a0000
Jan 13 21:18:13.910883 kernel: ITS [mem 0x08080000-0x0809ffff]
Jan 13 21:18:13.910890 kernel: ITS@0x0000000008080000: allocated 8192 Devices @400c0000 (indirect, esz 8, psz 64K, shr 1)
Jan 13 21:18:13.910897 kernel: ITS@0x0000000008080000: allocated 8192 Interrupt Collections @400d0000 (flat, esz 8, psz 64K, shr 1)
Jan 13 21:18:13.910904 kernel: GICv3: using LPI property table @0x00000000400f0000
Jan 13 21:18:13.910910 kernel: GICv3: CPU0: using allocated LPI pending table @0x0000000040100000
Jan 13 21:18:13.910917 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Jan 13 21:18:13.910926 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Jan 13 21:18:13.910933 kernel: arch_timer: cp15 timer(s) running at 25.00MHz (virt).
Jan 13 21:18:13.910940 kernel: clocksource: arch_sys_counter: mask: 0xffffffffffffff max_cycles: 0x5c40939b5, max_idle_ns: 440795202646 ns
Jan 13 21:18:13.910947 kernel: sched_clock: 56 bits at 25MHz, resolution 40ns, wraps every 4398046511100ns
Jan 13 21:18:13.910954 kernel: arm-pv: using stolen time PV
Jan 13 21:18:13.910961 kernel: Console: colour dummy device 80x25
Jan 13 21:18:13.910980 kernel: ACPI: Core revision 20230628
Jan 13 21:18:13.910988 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 50.00 BogoMIPS (lpj=25000)
Jan 13 21:18:13.910996 kernel: pid_max: default: 32768 minimum: 301
Jan 13 21:18:13.911003 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity
Jan 13 21:18:13.911011 kernel: landlock: Up and running.
Jan 13 21:18:13.911018 kernel: SELinux: Initializing.
Jan 13 21:18:13.911025 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Jan 13 21:18:13.911032 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Jan 13 21:18:13.911039 kernel: RCU Tasks: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Jan 13 21:18:13.911047 kernel: RCU Tasks Trace: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Jan 13 21:18:13.911054 kernel: rcu: Hierarchical SRCU implementation.
Jan 13 21:18:13.911061 kernel: rcu: Max phase no-delay instances is 400.
Jan 13 21:18:13.911068 kernel: Platform MSI: ITS@0x8080000 domain created
Jan 13 21:18:13.911076 kernel: PCI/MSI: ITS@0x8080000 domain created
Jan 13 21:18:13.911083 kernel: Remapping and enabling EFI services.
Jan 13 21:18:13.911090 kernel: smp: Bringing up secondary CPUs ...
Jan 13 21:18:13.911098 kernel: Detected PIPT I-cache on CPU1
Jan 13 21:18:13.911105 kernel: GICv3: CPU1: found redistributor 1 region 0:0x00000000080c0000
Jan 13 21:18:13.911112 kernel: GICv3: CPU1: using allocated LPI pending table @0x0000000040110000
Jan 13 21:18:13.911119 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Jan 13 21:18:13.911126 kernel: CPU1: Booted secondary processor 0x0000000001 [0x413fd0c1]
Jan 13 21:18:13.911133 kernel: Detected PIPT I-cache on CPU2
Jan 13 21:18:13.911139 kernel: GICv3: CPU2: found redistributor 2 region 0:0x00000000080e0000
Jan 13 21:18:13.911148 kernel: GICv3: CPU2: using allocated LPI pending table @0x0000000040120000
Jan 13 21:18:13.911156 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Jan 13 21:18:13.911167 kernel: CPU2: Booted secondary processor 0x0000000002 [0x413fd0c1]
Jan 13 21:18:13.911176 kernel: Detected PIPT I-cache on CPU3
Jan 13 21:18:13.911183 kernel: GICv3: CPU3: found redistributor 3 region 0:0x0000000008100000
Jan 13 21:18:13.911190 kernel: GICv3: CPU3: using allocated LPI pending table @0x0000000040130000
Jan 13 21:18:13.911204 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Jan 13 21:18:13.911212 kernel: CPU3: Booted secondary processor 0x0000000003 [0x413fd0c1]
Jan 13 21:18:13.911220 kernel: smp: Brought up 1 node, 4 CPUs
Jan 13 21:18:13.911229 kernel: SMP: Total of 4 processors activated.
Jan 13 21:18:13.911236 kernel: CPU features: detected: 32-bit EL0 Support
Jan 13 21:18:13.911244 kernel: CPU features: detected: Data cache clean to the PoU not required for I/D coherence
Jan 13 21:18:13.911251 kernel: CPU features: detected: Common not Private translations
Jan 13 21:18:13.911258 kernel: CPU features: detected: CRC32 instructions
Jan 13 21:18:13.911265 kernel: CPU features: detected: Enhanced Virtualization Traps
Jan 13 21:18:13.911273 kernel: CPU features: detected: RCpc load-acquire (LDAPR)
Jan 13 21:18:13.911280 kernel: CPU features: detected: LSE atomic instructions
Jan 13 21:18:13.911289 kernel: CPU features: detected: Privileged Access Never
Jan 13 21:18:13.911296 kernel: CPU features: detected: RAS Extension Support
Jan 13 21:18:13.911303 kernel: CPU features: detected: Speculative Store Bypassing Safe (SSBS)
Jan 13 21:18:13.911311 kernel: CPU: All CPU(s) started at EL1
Jan 13 21:18:13.911318 kernel: alternatives: applying system-wide alternatives
Jan 13 21:18:13.911325 kernel: devtmpfs: initialized
Jan 13 21:18:13.911332 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Jan 13 21:18:13.911340 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear)
Jan 13 21:18:13.911347 kernel: pinctrl core: initialized pinctrl subsystem
Jan 13 21:18:13.911355 kernel: SMBIOS 3.0.0 present.
Jan 13 21:18:13.911363 kernel: DMI: QEMU KVM Virtual Machine, BIOS edk2-20230524-3.fc38 05/24/2023
Jan 13 21:18:13.911370 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Jan 13 21:18:13.911378 kernel: DMA: preallocated 512 KiB GFP_KERNEL pool for atomic allocations
Jan 13 21:18:13.911385 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations
Jan 13 21:18:13.911392 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations
Jan 13 21:18:13.911399 kernel: audit: initializing netlink subsys (disabled)
Jan 13 21:18:13.911407 kernel: audit: type=2000 audit(0.013:1): state=initialized audit_enabled=0 res=1
Jan 13 21:18:13.911414 kernel: thermal_sys: Registered thermal governor 'step_wise'
Jan 13 21:18:13.911423 kernel: cpuidle: using governor menu
Jan 13 21:18:13.911430 kernel: hw-breakpoint: found 6 breakpoint and 4 watchpoint registers.
Jan 13 21:18:13.911437 kernel: ASID allocator initialised with 32768 entries
Jan 13 21:18:13.911444 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Jan 13 21:18:13.911451 kernel: Serial: AMBA PL011 UART driver
Jan 13 21:18:13.911458 kernel: Modules: 2G module region forced by RANDOMIZE_MODULE_REGION_FULL
Jan 13 21:18:13.911466 kernel: Modules: 0 pages in range for non-PLT usage
Jan 13 21:18:13.911473 kernel: Modules: 509040 pages in range for PLT usage
Jan 13 21:18:13.911480 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Jan 13 21:18:13.911489 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 1.00 GiB page
Jan 13 21:18:13.911497 kernel: HugeTLB: registered 32.0 MiB page size, pre-allocated 0 pages
Jan 13 21:18:13.911504 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 32.0 MiB page
Jan 13 21:18:13.911511 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Jan 13 21:18:13.911518 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 2.00 MiB page
Jan 13 21:18:13.911526 kernel: HugeTLB: registered 64.0 KiB page size, pre-allocated 0 pages
Jan 13 21:18:13.911533 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 64.0 KiB page
Jan 13 21:18:13.911540 kernel: ACPI: Added _OSI(Module Device)
Jan 13 21:18:13.911547 kernel: ACPI: Added _OSI(Processor Device)
Jan 13 21:18:13.911556 kernel: ACPI: Added _OSI(3.0 _SCP Extensions)
Jan 13 21:18:13.911563 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Jan 13 21:18:13.911570 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Jan 13 21:18:13.911578 kernel: ACPI: Interpreter enabled
Jan 13 21:18:13.911585 kernel: ACPI: Using GIC for interrupt routing
Jan 13 21:18:13.911598 kernel: ACPI: MCFG table detected, 1 entries
Jan 13 21:18:13.911606 kernel: ARMH0011:00: ttyAMA0 at MMIO 0x9000000 (irq = 12, base_baud = 0) is a SBSA
Jan 13 21:18:13.911614 kernel: printk: console [ttyAMA0] enabled
Jan 13 21:18:13.911621 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Jan 13 21:18:13.911755 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Jan 13 21:18:13.911831 kernel: acpi PNP0A08:00: _OSC: platform does not support [LTR]
Jan 13 21:18:13.911897 kernel: acpi PNP0A08:00: _OSC: OS now controls [PCIeHotplug PME AER PCIeCapability]
Jan 13 21:18:13.911960 kernel: acpi PNP0A08:00: ECAM area [mem 0x4010000000-0x401fffffff] reserved by PNP0C02:00
Jan 13 21:18:13.912022 kernel: acpi PNP0A08:00: ECAM at [mem 0x4010000000-0x401fffffff] for [bus 00-ff]
Jan 13 21:18:13.912032 kernel: ACPI: Remapped I/O 0x000000003eff0000 to [io 0x0000-0xffff window]
Jan 13 21:18:13.912039 kernel: PCI host bridge to bus 0000:00
Jan 13 21:18:13.912112 kernel: pci_bus 0000:00: root bus resource [mem 0x10000000-0x3efeffff window]
Jan 13 21:18:13.912171 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0xffff window]
Jan 13 21:18:13.912241 kernel: pci_bus 0000:00: root bus resource [mem 0x8000000000-0xffffffffff window]
Jan 13 21:18:13.912300 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Jan 13 21:18:13.912388 kernel: pci 0000:00:00.0: [1b36:0008] type 00 class 0x060000
Jan 13 21:18:13.912462 kernel: pci 0000:00:01.0: [1af4:1005] type 00 class 0x00ff00
Jan 13 21:18:13.912532 kernel: pci 0000:00:01.0: reg 0x10: [io 0x0000-0x001f]
Jan 13 21:18:13.912636 kernel: pci 0000:00:01.0: reg 0x14: [mem 0x10000000-0x10000fff]
Jan 13 21:18:13.912708 kernel: pci 0000:00:01.0: reg 0x20: [mem 0x8000000000-0x8000003fff 64bit pref]
Jan 13 21:18:13.912771 kernel: pci 0000:00:01.0: BAR 4: assigned [mem 0x8000000000-0x8000003fff 64bit pref]
Jan 13 21:18:13.912834 kernel: pci 0000:00:01.0: BAR 1: assigned [mem 0x10000000-0x10000fff]
Jan 13 21:18:13.912896 kernel: pci 0000:00:01.0: BAR 0: assigned [io 0x1000-0x101f]
Jan 13 21:18:13.912954 kernel: pci_bus 0000:00: resource 4 [mem 0x10000000-0x3efeffff window]
Jan 13 21:18:13.913015 kernel: pci_bus 0000:00: resource 5 [io 0x0000-0xffff window]
Jan 13 21:18:13.913071 kernel: pci_bus 0000:00: resource 6 [mem 0x8000000000-0xffffffffff window]
Jan 13 21:18:13.913080 kernel: ACPI: PCI: Interrupt link GSI0 configured for IRQ 35
Jan 13 21:18:13.913088 kernel: ACPI: PCI: Interrupt link GSI1 configured for IRQ 36
Jan 13 21:18:13.913096 kernel: ACPI: PCI: Interrupt link GSI2 configured for IRQ 37
Jan 13 21:18:13.913103 kernel: ACPI: PCI: Interrupt link GSI3 configured for IRQ 38
Jan 13 21:18:13.913110 kernel: iommu: Default domain type: Translated
Jan 13 21:18:13.913118 kernel: iommu: DMA domain TLB invalidation policy: strict mode
Jan 13 21:18:13.913125 kernel: efivars: Registered efivars operations
Jan 13 21:18:13.913135 kernel: vgaarb: loaded
Jan 13 21:18:13.913142 kernel: clocksource: Switched to clocksource arch_sys_counter
Jan 13 21:18:13.913149 kernel: VFS: Disk quotas dquot_6.6.0
Jan 13 21:18:13.913157 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Jan 13 21:18:13.913164 kernel: pnp: PnP ACPI init
Jan 13 21:18:13.913246 kernel: system 00:00: [mem 0x4010000000-0x401fffffff window] could not be reserved
Jan 13 21:18:13.913258 kernel: pnp: PnP ACPI: found 1 devices
Jan 13 21:18:13.913265 kernel: NET: Registered PF_INET protocol family
Jan 13 21:18:13.913275 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Jan 13 21:18:13.913282 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Jan 13 21:18:13.913290 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Jan 13 21:18:13.913297 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Jan 13 21:18:13.913305 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
Jan 13 21:18:13.913312 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Jan 13 21:18:13.913319 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Jan 13 21:18:13.913327 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Jan 13 21:18:13.913334 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Jan 13 21:18:13.913343 kernel: PCI: CLS 0 bytes, default 64
Jan 13 21:18:13.913350 kernel: kvm [1]: HYP mode not available
Jan 13 21:18:13.913357 kernel: Initialise system trusted keyrings
Jan 13 21:18:13.913365 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Jan 13 21:18:13.913372 kernel: Key type asymmetric registered
Jan 13 21:18:13.913379 kernel: Asymmetric key parser 'x509' registered
Jan 13 21:18:13.913387 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250)
Jan 13 21:18:13.913394 kernel: io scheduler mq-deadline registered
Jan 13 21:18:13.913401 kernel: io scheduler kyber registered
Jan 13 21:18:13.913410 kernel: io scheduler bfq registered
Jan 13 21:18:13.913417 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0C:00/input/input0
Jan 13 21:18:13.913425 kernel: ACPI: button: Power Button [PWRB]
Jan 13 21:18:13.913432 kernel: ACPI: \_SB_.PCI0.GSI1: Enabled at IRQ 36
Jan 13 21:18:13.913497 kernel: virtio-pci 0000:00:01.0: enabling device (0005 -> 0007)
Jan 13 21:18:13.913507 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Jan 13 21:18:13.913514 kernel: thunder_xcv, ver 1.0
Jan 13 21:18:13.913522 kernel: thunder_bgx, ver 1.0
Jan 13 21:18:13.913529 kernel: nicpf, ver 1.0
Jan 13 21:18:13.913538 kernel: nicvf, ver 1.0
Jan 13 21:18:13.913618 kernel: rtc-efi rtc-efi.0: registered as rtc0
Jan 13 21:18:13.913681 kernel: rtc-efi rtc-efi.0: setting system clock to 2025-01-13T21:18:13 UTC (1736803093)
Jan 13 21:18:13.913691 kernel: hid: raw HID events driver (C) Jiri Kosina
Jan 13 21:18:13.913698 kernel: hw perfevents: enabled with armv8_pmuv3_0 PMU driver, 7 counters available
Jan 13 21:18:13.913706 kernel: watchdog: Delayed init of the lockup detector failed: -19
Jan 13 21:18:13.913713 kernel: watchdog: Hard watchdog permanently disabled
Jan 13 21:18:13.913721 kernel: NET: Registered PF_INET6 protocol family
Jan 13 21:18:13.913730 kernel: Segment Routing with IPv6
Jan 13 21:18:13.913737 kernel: In-situ OAM (IOAM) with IPv6
Jan 13 21:18:13.913745 kernel: NET: Registered PF_PACKET protocol family
Jan 13 21:18:13.913752 kernel: Key type dns_resolver registered
Jan 13 21:18:13.913759 kernel: registered taskstats version 1
Jan 13 21:18:13.913767 kernel: Loading compiled-in X.509 certificates
Jan 13 21:18:13.913774 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.71-flatcar: 4d59b6166d6886703230c188f8df863190489638'
Jan 13 21:18:13.913781 kernel: Key type .fscrypt registered
Jan 13 21:18:13.913789 kernel: Key type fscrypt-provisioning registered
Jan 13 21:18:13.913797 kernel: ima: No TPM chip found, activating TPM-bypass!
Jan 13 21:18:13.913805 kernel: ima: Allocated hash algorithm: sha1
Jan 13 21:18:13.913812 kernel: ima: No architecture policies found
Jan 13 21:18:13.913820 kernel: alg: No test for fips(ansi_cprng) (fips_ansi_cprng)
Jan 13 21:18:13.913827 kernel: clk: Disabling unused clocks
Jan 13 21:18:13.913834 kernel: Freeing unused kernel memory: 39360K
Jan 13 21:18:13.913842 kernel: Run /init as init process
Jan 13 21:18:13.913849 kernel: with arguments:
Jan 13 21:18:13.913856 kernel: /init
Jan 13 21:18:13.913864 kernel: with environment:
Jan 13 21:18:13.913871 kernel: HOME=/
Jan 13 21:18:13.913879 kernel: TERM=linux
Jan 13 21:18:13.913886 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
Jan 13 21:18:13.913895 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Jan 13 21:18:13.913904 systemd[1]: Detected virtualization kvm.
Jan 13 21:18:13.913912 systemd[1]: Detected architecture arm64.
Jan 13 21:18:13.913920 systemd[1]: Running in initrd.
Jan 13 21:18:13.913929 systemd[1]: No hostname configured, using default hostname.
Jan 13 21:18:13.913936 systemd[1]: Hostname set to .
Jan 13 21:18:13.913944 systemd[1]: Initializing machine ID from VM UUID.
Jan 13 21:18:13.913952 systemd[1]: Queued start job for default target initrd.target.
Jan 13 21:18:13.913960 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Jan 13 21:18:13.913968 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Jan 13 21:18:13.913976 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Jan 13 21:18:13.913984 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Jan 13 21:18:13.913993 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Jan 13 21:18:13.914001 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Jan 13 21:18:13.914011 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Jan 13 21:18:13.914019 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Jan 13 21:18:13.914027 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Jan 13 21:18:13.914035 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Jan 13 21:18:13.914044 systemd[1]: Reached target paths.target - Path Units.
Jan 13 21:18:13.914052 systemd[1]: Reached target slices.target - Slice Units.
Jan 13 21:18:13.914060 systemd[1]: Reached target swap.target - Swaps.
Jan 13 21:18:13.914068 systemd[1]: Reached target timers.target - Timer Units.
Jan 13 21:18:13.914076 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Jan 13 21:18:13.914084 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Jan 13 21:18:13.914092 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Jan 13 21:18:13.914100 systemd[1]: Listening on systemd-journald.socket - Journal Socket.
Jan 13 21:18:13.914108 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Jan 13 21:18:13.914118 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Jan 13 21:18:13.914126 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Jan 13 21:18:13.914134 systemd[1]: Reached target sockets.target - Socket Units.
Jan 13 21:18:13.914142 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Jan 13 21:18:13.914150 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Jan 13 21:18:13.914158 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Jan 13 21:18:13.914166 systemd[1]: Starting systemd-fsck-usr.service...
Jan 13 21:18:13.914174 systemd[1]: Starting systemd-journald.service - Journal Service...
Jan 13 21:18:13.914181 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Jan 13 21:18:13.914191 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jan 13 21:18:13.914205 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Jan 13 21:18:13.914214 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Jan 13 21:18:13.914221 systemd[1]: Finished systemd-fsck-usr.service.
Jan 13 21:18:13.914230 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Jan 13 21:18:13.914240 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Jan 13 21:18:13.914264 systemd-journald[238]: Collecting audit messages is disabled.
Jan 13 21:18:13.914283 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Jan 13 21:18:13.914293 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Jan 13 21:18:13.914302 systemd-journald[238]: Journal started
Jan 13 21:18:13.914321 systemd-journald[238]: Runtime Journal (/run/log/journal/8fee9192b0ae4819bcc3789b69ba3a83) is 5.9M, max 47.3M, 41.4M free.
Jan 13 21:18:13.904678 systemd-modules-load[239]: Inserted module 'overlay'
Jan 13 21:18:13.918645 systemd[1]: Started systemd-journald.service - Journal Service.
Jan 13 21:18:13.920670 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Jan 13 21:18:13.921092 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Jan 13 21:18:13.923955 kernel: Bridge firewalling registered
Jan 13 21:18:13.922761 systemd-modules-load[239]: Inserted module 'br_netfilter'
Jan 13 21:18:13.923739 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Jan 13 21:18:13.925933 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Jan 13 21:18:13.934023 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Jan 13 21:18:13.935274 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jan 13 21:18:13.937431 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Jan 13 21:18:13.940757 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Jan 13 21:18:13.946567 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Jan 13 21:18:13.950054 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Jan 13 21:18:13.956615 dracut-cmdline[274]: dracut-dracut-053
Jan 13 21:18:13.957857 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Jan 13 21:18:13.965723 dracut-cmdline[274]: Using kernel command line parameters: rd.driver.pre=btrfs BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected acpi=force verity.usrhash=c6a3a48cbc65bf640516dc59d6b026e304001b7b3125ecbabbbe9ce0bd8888f0
Jan 13 21:18:13.986381 systemd-resolved[283]: Positive Trust Anchors:
Jan 13 21:18:13.986400 systemd-resolved[283]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Jan 13 21:18:13.986433 systemd-resolved[283]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Jan 13 21:18:13.992856 systemd-resolved[283]: Defaulting to hostname 'linux'.
Jan 13 21:18:13.993924 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Jan 13 21:18:13.996134 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Jan 13 21:18:14.039631 kernel: SCSI subsystem initialized
Jan 13 21:18:14.044613 kernel: Loading iSCSI transport class v2.0-870.
Jan 13 21:18:14.051612 kernel: iscsi: registered transport (tcp)
Jan 13 21:18:14.064638 kernel: iscsi: registered transport (qla4xxx)
Jan 13 21:18:14.064673 kernel: QLogic iSCSI HBA Driver
Jan 13 21:18:14.109228 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Jan 13 21:18:14.121826 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Jan 13 21:18:14.140934 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Jan 13 21:18:14.140978 kernel: device-mapper: uevent: version 1.0.3
Jan 13 21:18:14.140996 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com
Jan 13 21:18:14.188640 kernel: raid6: neonx8 gen() 15780 MB/s
Jan 13 21:18:14.205627 kernel: raid6: neonx4 gen() 15571 MB/s
Jan 13 21:18:14.222616 kernel: raid6: neonx2 gen() 13268 MB/s
Jan 13 21:18:14.239620 kernel: raid6: neonx1 gen() 10417 MB/s
Jan 13 21:18:14.256614 kernel: raid6: int64x8 gen() 6969 MB/s
Jan 13 21:18:14.273625 kernel: raid6: int64x4 gen() 7346 MB/s
Jan 13 21:18:14.290617 kernel: raid6: int64x2 gen() 6115 MB/s
Jan 13 21:18:14.307790 kernel: raid6: int64x1 gen() 5015 MB/s
Jan 13 21:18:14.307813 kernel: raid6: using algorithm neonx8 gen() 15780 MB/s
Jan 13 21:18:14.325769 kernel: raid6: .... xor() 11853 MB/s, rmw enabled
Jan 13 21:18:14.325802 kernel: raid6: using neon recovery algorithm
Jan 13 21:18:14.332039 kernel: xor: measuring software checksum speed
Jan 13 21:18:14.332073 kernel: 8regs : 19816 MB/sec
Jan 13 21:18:14.332083 kernel: 32regs : 19617 MB/sec
Jan 13 21:18:14.332665 kernel: arm64_neon : 26656 MB/sec
Jan 13 21:18:14.332680 kernel: xor: using function: arm64_neon (26656 MB/sec)
Jan 13 21:18:14.383627 kernel: Btrfs loaded, zoned=no, fsverity=no
Jan 13 21:18:14.398683 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Jan 13 21:18:14.411813 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Jan 13 21:18:14.425009 systemd-udevd[463]: Using default interface naming scheme 'v255'.
Jan 13 21:18:14.428258 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Jan 13 21:18:14.438865 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Jan 13 21:18:14.452315 dracut-pre-trigger[471]: rd.md=0: removing MD RAID activation
Jan 13 21:18:14.483105 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Jan 13 21:18:14.494831 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Jan 13 21:18:14.544951 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Jan 13 21:18:14.553024 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Jan 13 21:18:14.566838 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Jan 13 21:18:14.568559 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Jan 13 21:18:14.570434 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Jan 13 21:18:14.572795 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Jan 13 21:18:14.579916 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Jan 13 21:18:14.592810 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Jan 13 21:18:14.609247 kernel: virtio_blk virtio1: 1/0/0 default/read/poll queues
Jan 13 21:18:14.628640 kernel: virtio_blk virtio1: [vda] 19775488 512-byte logical blocks (10.1 GB/9.43 GiB)
Jan 13 21:18:14.628751 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Jan 13 21:18:14.628763 kernel: GPT:9289727 != 19775487
Jan 13 21:18:14.628772 kernel: GPT:Alternate GPT header not at the end of the disk.
Jan 13 21:18:14.628782 kernel: GPT:9289727 != 19775487
Jan 13 21:18:14.628791 kernel: GPT: Use GNU Parted to correct GPT errors.
Jan 13 21:18:14.628808 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Jan 13 21:18:14.611849 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Jan 13 21:18:14.611952 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jan 13 21:18:14.622127 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Jan 13 21:18:14.625072 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Jan 13 21:18:14.625210 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Jan 13 21:18:14.629004 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Jan 13 21:18:14.639890 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jan 13 21:18:14.657105 kernel: BTRFS: device fsid 475b4555-939b-441c-9b47-b8244f532234 devid 1 transid 39 /dev/vda3 scanned by (udev-worker) (510)
Jan 13 21:18:14.658623 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/vda6 scanned by (udev-worker) (507)
Jan 13 21:18:14.661082 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT.
Jan 13 21:18:14.663707 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Jan 13 21:18:14.669798 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM.
Jan 13 21:18:14.680662 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A.
Jan 13 21:18:14.681989 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132.
Jan 13 21:18:14.689748 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Jan 13 21:18:14.701745 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Jan 13 21:18:14.704031 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Jan 13 21:18:14.709987 disk-uuid[553]: Primary Header is updated.
Jan 13 21:18:14.709987 disk-uuid[553]: Secondary Entries is updated.
Jan 13 21:18:14.709987 disk-uuid[553]: Secondary Header is updated.
Jan 13 21:18:14.713618 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Jan 13 21:18:14.727862 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jan 13 21:18:15.727620 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Jan 13 21:18:15.728210 disk-uuid[555]: The operation has completed successfully.
Jan 13 21:18:15.749228 systemd[1]: disk-uuid.service: Deactivated successfully.
Jan 13 21:18:15.749348 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Jan 13 21:18:15.774737 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
Jan 13 21:18:15.777691 sh[575]: Success
Jan 13 21:18:15.790612 kernel: device-mapper: verity: sha256 using implementation "sha256-ce"
Jan 13 21:18:15.819633 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
Jan 13 21:18:15.835961 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
Jan 13 21:18:15.837661 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
Jan 13 21:18:15.850146 kernel: BTRFS info (device dm-0): first mount of filesystem 475b4555-939b-441c-9b47-b8244f532234
Jan 13 21:18:15.850212 kernel: BTRFS info (device dm-0): using crc32c (crc32c-generic) checksum algorithm
Jan 13 21:18:15.850224 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead
Jan 13 21:18:15.851037 kernel: BTRFS info (device dm-0): disabling log replay at mount time
Jan 13 21:18:15.851764 kernel: BTRFS info (device dm-0): using free space tree
Jan 13 21:18:15.855500 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
Jan 13 21:18:15.856920 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Jan 13 21:18:15.868752 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Jan 13 21:18:15.870427 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Jan 13 21:18:15.882188 kernel: BTRFS info (device vda6): first mount of filesystem 1a82fd1a-1cbb-4d3a-bbb2-d4650cd9e9cd
Jan 13 21:18:15.882226 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm
Jan 13 21:18:15.882238 kernel: BTRFS info (device vda6): using free space tree
Jan 13 21:18:15.884625 kernel: BTRFS info (device vda6): auto enabling async discard
Jan 13 21:18:15.894976 systemd[1]: mnt-oem.mount: Deactivated successfully.
Jan 13 21:18:15.896549 kernel: BTRFS info (device vda6): last unmount of filesystem 1a82fd1a-1cbb-4d3a-bbb2-d4650cd9e9cd
Jan 13 21:18:15.953623 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Jan 13 21:18:15.965744 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Jan 13 21:18:15.986749 systemd-networkd[754]: lo: Link UP
Jan 13 21:18:15.986761 systemd-networkd[754]: lo: Gained carrier
Jan 13 21:18:15.987417 systemd-networkd[754]: Enumeration completed
Jan 13 21:18:15.987519 systemd[1]: Started systemd-networkd.service - Network Configuration.
Jan 13 21:18:15.987902 systemd-networkd[754]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jan 13 21:18:15.987905 systemd-networkd[754]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Jan 13 21:18:15.988688 systemd-networkd[754]: eth0: Link UP
Jan 13 21:18:15.988690 systemd-networkd[754]: eth0: Gained carrier
Jan 13 21:18:15.988697 systemd-networkd[754]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jan 13 21:18:15.988897 systemd[1]: Reached target network.target - Network.
Jan 13 21:18:16.011642 systemd-networkd[754]: eth0: DHCPv4 address 10.0.0.74/16, gateway 10.0.0.1 acquired from 10.0.0.1
Jan 13 21:18:16.035123 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Jan 13 21:18:16.042814 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Jan 13 21:18:16.144581 ignition[759]: Ignition 2.19.0
Jan 13 21:18:16.144608 ignition[759]: Stage: fetch-offline
Jan 13 21:18:16.144646 ignition[759]: no configs at "/usr/lib/ignition/base.d"
Jan 13 21:18:16.144655 ignition[759]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Jan 13 21:18:16.144814 ignition[759]: parsed url from cmdline: ""
Jan 13 21:18:16.144817 ignition[759]: no config URL provided
Jan 13 21:18:16.144821 ignition[759]: reading system config file "/usr/lib/ignition/user.ign"
Jan 13 21:18:16.144827 ignition[759]: no config at "/usr/lib/ignition/user.ign"
Jan 13 21:18:16.144851 ignition[759]: op(1): [started] loading QEMU firmware config module
Jan 13 21:18:16.144855 ignition[759]: op(1): executing: "modprobe" "qemu_fw_cfg"
Jan 13 21:18:16.153107 ignition[759]: op(1): [finished] loading QEMU firmware config module
Jan 13 21:18:16.173570 ignition[759]: parsing config with SHA512: b2eddaa9e59eebfbe518aeb03f820b6fac7aaf62729c2ac27d673616179da6e4660eb0c2e7bdc72c97d4349eb2e00a8403aad68ff3a2d2fd0a0e8c7544f7dffb
Jan 13 21:18:16.179113 unknown[759]: fetched base config from "system"
Jan 13 21:18:16.179124 unknown[759]: fetched user config from "qemu"
Jan 13 21:18:16.179536 ignition[759]: fetch-offline: fetch-offline passed
Jan 13 21:18:16.181698 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Jan 13 21:18:16.179613 ignition[759]: Ignition finished successfully
Jan 13 21:18:16.184075 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json).
Jan 13 21:18:16.189799 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Jan 13 21:18:16.199573 ignition[771]: Ignition 2.19.0
Jan 13 21:18:16.199583 ignition[771]: Stage: kargs
Jan 13 21:18:16.199768 ignition[771]: no configs at "/usr/lib/ignition/base.d"
Jan 13 21:18:16.199778 ignition[771]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Jan 13 21:18:16.202987 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Jan 13 21:18:16.200583 ignition[771]: kargs: kargs passed
Jan 13 21:18:16.200682 ignition[771]: Ignition finished successfully
Jan 13 21:18:16.210778 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Jan 13 21:18:16.219836 ignition[779]: Ignition 2.19.0
Jan 13 21:18:16.219846 ignition[779]: Stage: disks
Jan 13 21:18:16.220000 ignition[779]: no configs at "/usr/lib/ignition/base.d"
Jan 13 21:18:16.220009 ignition[779]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Jan 13 21:18:16.220829 ignition[779]: disks: disks passed
Jan 13 21:18:16.220870 ignition[779]: Ignition finished successfully
Jan 13 21:18:16.223663 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Jan 13 21:18:16.225426 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Jan 13 21:18:16.227074 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Jan 13 21:18:16.229054 systemd[1]: Reached target local-fs.target - Local File Systems.
Jan 13 21:18:16.230958 systemd[1]: Reached target sysinit.target - System Initialization.
Jan 13 21:18:16.232715 systemd[1]: Reached target basic.target - Basic System.
Jan 13 21:18:16.245757 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Jan 13 21:18:16.255843 systemd-fsck[790]: ROOT: clean, 14/553520 files, 52654/553472 blocks
Jan 13 21:18:16.300661 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Jan 13 21:18:16.313709 systemd[1]: Mounting sysroot.mount - /sysroot...
Jan 13 21:18:16.358627 kernel: EXT4-fs (vda9): mounted filesystem 238cddae-3c4d-4696-a666-660fd149aa3e r/w with ordered data mode. Quota mode: none.
Jan 13 21:18:16.359150 systemd[1]: Mounted sysroot.mount - /sysroot.
Jan 13 21:18:16.360415 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Jan 13 21:18:16.376914 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Jan 13 21:18:16.378884 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Jan 13 21:18:16.379843 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met.
Jan 13 21:18:16.379882 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Jan 13 21:18:16.379902 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Jan 13 21:18:16.386205 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Jan 13 21:18:16.389345 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Jan 13 21:18:16.393613 kernel: BTRFS: device label OEM devid 1 transid 13 /dev/vda6 scanned by mount (798)
Jan 13 21:18:16.396545 kernel: BTRFS info (device vda6): first mount of filesystem 1a82fd1a-1cbb-4d3a-bbb2-d4650cd9e9cd
Jan 13 21:18:16.396582 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm
Jan 13 21:18:16.396604 kernel: BTRFS info (device vda6): using free space tree
Jan 13 21:18:16.401408 kernel: BTRFS info (device vda6): auto enabling async discard
Jan 13 21:18:16.401372 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Jan 13 21:18:16.439401 initrd-setup-root[822]: cut: /sysroot/etc/passwd: No such file or directory
Jan 13 21:18:16.442408 initrd-setup-root[829]: cut: /sysroot/etc/group: No such file or directory
Jan 13 21:18:16.447747 initrd-setup-root[836]: cut: /sysroot/etc/shadow: No such file or directory
Jan 13 21:18:16.454497 initrd-setup-root[843]: cut: /sysroot/etc/gshadow: No such file or directory
Jan 13 21:18:16.527739 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Jan 13 21:18:16.542752 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Jan 13 21:18:16.544738 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Jan 13 21:18:16.550621 kernel: BTRFS info (device vda6): last unmount of filesystem 1a82fd1a-1cbb-4d3a-bbb2-d4650cd9e9cd
Jan 13 21:18:16.570247 ignition[912]: INFO : Ignition 2.19.0
Jan 13 21:18:16.570247 ignition[912]: INFO : Stage: mount
Jan 13 21:18:16.570247 ignition[912]: INFO : no configs at "/usr/lib/ignition/base.d"
Jan 13 21:18:16.575288 ignition[912]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Jan 13 21:18:16.573191 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Jan 13 21:18:16.577483 ignition[912]: INFO : mount: mount passed
Jan 13 21:18:16.578681 ignition[912]: INFO : Ignition finished successfully
Jan 13 21:18:16.580512 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Jan 13 21:18:16.590707 systemd[1]: Starting ignition-files.service - Ignition (files)...
Jan 13 21:18:16.847557 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Jan 13 21:18:16.861763 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Jan 13 21:18:16.867615 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/vda6 scanned by mount (925)
Jan 13 21:18:16.870114 kernel: BTRFS info (device vda6): first mount of filesystem 1a82fd1a-1cbb-4d3a-bbb2-d4650cd9e9cd
Jan 13 21:18:16.870157 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm
Jan 13 21:18:16.870169 kernel: BTRFS info (device vda6): using free space tree
Jan 13 21:18:16.874617 kernel: BTRFS info (device vda6): auto enabling async discard
Jan 13 21:18:16.875504 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Jan 13 21:18:16.897411 ignition[942]: INFO : Ignition 2.19.0
Jan 13 21:18:16.897411 ignition[942]: INFO : Stage: files
Jan 13 21:18:16.899257 ignition[942]: INFO : no configs at "/usr/lib/ignition/base.d"
Jan 13 21:18:16.899257 ignition[942]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Jan 13 21:18:16.899257 ignition[942]: DEBUG : files: compiled without relabeling support, skipping
Jan 13 21:18:16.902778 ignition[942]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Jan 13 21:18:16.902778 ignition[942]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Jan 13 21:18:16.905693 ignition[942]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Jan 13 21:18:16.905693 ignition[942]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Jan 13 21:18:16.905693 ignition[942]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Jan 13 21:18:16.905693 ignition[942]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-arm64.tar.gz"
Jan 13 21:18:16.905693 ignition[942]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.13.2-linux-arm64.tar.gz: attempt #1
Jan 13 21:18:16.903559 unknown[942]: wrote ssh authorized keys file for user: core
Jan 13 21:18:16.963955 ignition[942]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
Jan 13 21:18:17.184810 systemd-networkd[754]: eth0: Gained IPv6LL
Jan 13 21:18:17.277496 ignition[942]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-arm64.tar.gz"
Jan 13 21:18:17.277496 ignition[942]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh"
Jan 13 21:18:17.282139 ignition[942]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh"
Jan 13 21:18:17.282139 ignition[942]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml"
Jan 13 21:18:17.282139 ignition[942]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml"
Jan 13 21:18:17.282139 ignition[942]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Jan 13 21:18:17.282139 ignition[942]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Jan 13 21:18:17.282139 ignition[942]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Jan 13 21:18:17.282139 ignition[942]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Jan 13 21:18:17.282139 ignition[942]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf"
Jan 13 21:18:17.282139 ignition[942]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Jan 13 21:18:17.282139 ignition[942]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.0-arm64.raw"
Jan 13 21:18:17.282139 ignition[942]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.0-arm64.raw"
Jan 13 21:18:17.282139 ignition[942]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.0-arm64.raw"
Jan 13 21:18:17.282139 ignition[942]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.31.0-arm64.raw: attempt #1
Jan 13 21:18:17.609795 ignition[942]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK
Jan 13 21:18:17.895349 ignition[942]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.0-arm64.raw"
Jan 13 21:18:17.895349 ignition[942]: INFO : files: op(b): [started] processing unit "prepare-helm.service"
Jan 13 21:18:17.898845 ignition[942]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Jan 13 21:18:17.898845 ignition[942]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Jan 13 21:18:17.898845 ignition[942]: INFO : files: op(b): [finished] processing unit "prepare-helm.service"
Jan 13 21:18:17.898845 ignition[942]: INFO : files: op(d): [started] processing unit "coreos-metadata.service"
Jan 13 21:18:17.898845 ignition[942]: INFO : files: op(d): op(e): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Jan 13 21:18:17.898845 ignition[942]: INFO : files: op(d): op(e): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Jan 13 21:18:17.898845 ignition[942]: INFO : files: op(d): [finished] processing unit "coreos-metadata.service"
Jan 13 21:18:17.898845 ignition[942]: INFO : files: op(f): [started] setting preset to disabled for "coreos-metadata.service"
Jan 13 21:18:17.924351 ignition[942]: INFO : files: op(f): op(10): [started] removing enablement symlink(s) for "coreos-metadata.service"
Jan 13 21:18:17.927965 ignition[942]: INFO : files: op(f): op(10): [finished] removing enablement symlink(s) for "coreos-metadata.service"
Jan 13 21:18:17.930667 ignition[942]: INFO : files: op(f): [finished] setting preset to disabled for "coreos-metadata.service"
Jan 13 21:18:17.930667 ignition[942]: INFO : files: op(11): [started] setting preset to enabled for "prepare-helm.service"
Jan 13 21:18:17.930667 ignition[942]: INFO : files: op(11): [finished] setting preset to enabled for "prepare-helm.service"
Jan 13 21:18:17.930667 ignition[942]: INFO : files: createResultFile: createFiles: op(12): [started] writing file "/sysroot/etc/.ignition-result.json"
Jan 13 21:18:17.930667 ignition[942]: INFO : files: createResultFile: createFiles: op(12): [finished] writing file "/sysroot/etc/.ignition-result.json"
Jan 13 21:18:17.930667 ignition[942]: INFO : files: files passed
Jan 13 21:18:17.930667 ignition[942]: INFO : Ignition finished successfully
Jan 13 21:18:17.932349 systemd[1]: Finished ignition-files.service - Ignition (files).
Jan 13 21:18:17.942734 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Jan 13 21:18:17.945199 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Jan 13 21:18:17.946534 systemd[1]: ignition-quench.service: Deactivated successfully.
Jan 13 21:18:17.946633 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Jan 13 21:18:17.953003 initrd-setup-root-after-ignition[970]: grep: /sysroot/oem/oem-release: No such file or directory
Jan 13 21:18:17.958286 initrd-setup-root-after-ignition[972]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Jan 13 21:18:17.958286 initrd-setup-root-after-ignition[972]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Jan 13 21:18:17.961270 initrd-setup-root-after-ignition[976]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Jan 13 21:18:17.960208 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Jan 13 21:18:17.962831 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Jan 13 21:18:17.974822 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Jan 13 21:18:17.994024 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Jan 13 21:18:17.995043 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Jan 13 21:18:17.996417 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Jan 13 21:18:17.998299 systemd[1]: Reached target initrd.target - Initrd Default Target.
Jan 13 21:18:18.000065 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Jan 13 21:18:18.000768 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Jan 13 21:18:18.015677 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Jan 13 21:18:18.024746 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Jan 13 21:18:18.032098 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Jan 13 21:18:18.033337 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Jan 13 21:18:18.035377 systemd[1]: Stopped target timers.target - Timer Units.
Jan 13 21:18:18.037144 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Jan 13 21:18:18.037262 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Jan 13 21:18:18.039732 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Jan 13 21:18:18.041690 systemd[1]: Stopped target basic.target - Basic System.
Jan 13 21:18:18.043364 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Jan 13 21:18:18.045111 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Jan 13 21:18:18.047031 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Jan 13 21:18:18.048956 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Jan 13 21:18:18.050774 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
Jan 13 21:18:18.052720 systemd[1]: Stopped target sysinit.target - System Initialization.
Jan 13 21:18:18.054677 systemd[1]: Stopped target local-fs.target - Local File Systems.
Jan 13 21:18:18.056425 systemd[1]: Stopped target swap.target - Swaps.
Jan 13 21:18:18.057951 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Jan 13 21:18:18.058066 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
Jan 13 21:18:18.060336 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
Jan 13 21:18:18.061486 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jan 13 21:18:18.063395 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Jan 13 21:18:18.064243 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jan 13 21:18:18.065502 systemd[1]: dracut-initqueue.service: Deactivated successfully. Jan 13 21:18:18.065626 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Jan 13 21:18:18.068277 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Jan 13 21:18:18.068424 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Jan 13 21:18:18.070838 systemd[1]: Stopped target paths.target - Path Units. Jan 13 21:18:18.072353 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Jan 13 21:18:18.073630 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jan 13 21:18:18.075395 systemd[1]: Stopped target slices.target - Slice Units. Jan 13 21:18:18.077161 systemd[1]: Stopped target sockets.target - Socket Units. Jan 13 21:18:18.078681 systemd[1]: iscsid.socket: Deactivated successfully. Jan 13 21:18:18.078802 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Jan 13 21:18:18.080483 systemd[1]: iscsiuio.socket: Deactivated successfully. Jan 13 21:18:18.080614 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Jan 13 21:18:18.082721 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Jan 13 21:18:18.082863 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Jan 13 21:18:18.084568 systemd[1]: ignition-files.service: Deactivated successfully. Jan 13 21:18:18.084723 systemd[1]: Stopped ignition-files.service - Ignition (files). Jan 13 21:18:18.094799 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Jan 13 21:18:18.095686 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Jan 13 21:18:18.095860 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Jan 13 21:18:18.098455 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Jan 13 21:18:18.099322 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Jan 13 21:18:18.099498 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Jan 13 21:18:18.101651 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Jan 13 21:18:18.101807 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Jan 13 21:18:18.108439 systemd[1]: initrd-cleanup.service: Deactivated successfully. Jan 13 21:18:18.108529 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Jan 13 21:18:18.112025 ignition[997]: INFO : Ignition 2.19.0 Jan 13 21:18:18.112025 ignition[997]: INFO : Stage: umount Jan 13 21:18:18.112025 ignition[997]: INFO : no configs at "/usr/lib/ignition/base.d" Jan 13 21:18:18.112025 ignition[997]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Jan 13 21:18:18.117492 ignition[997]: INFO : umount: umount passed Jan 13 21:18:18.117492 ignition[997]: INFO : Ignition finished successfully Jan 13 21:18:18.112645 systemd[1]: sysroot-boot.mount: Deactivated successfully. Jan 13 21:18:18.114516 systemd[1]: ignition-mount.service: Deactivated successfully. Jan 13 21:18:18.114676 systemd[1]: Stopped ignition-mount.service - Ignition (mount). 
Jan 13 21:18:18.116689 systemd[1]: Stopped target network.target - Network. Jan 13 21:18:18.118327 systemd[1]: ignition-disks.service: Deactivated successfully. Jan 13 21:18:18.118390 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Jan 13 21:18:18.120068 systemd[1]: ignition-kargs.service: Deactivated successfully. Jan 13 21:18:18.120112 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Jan 13 21:18:18.121721 systemd[1]: ignition-setup.service: Deactivated successfully. Jan 13 21:18:18.121763 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Jan 13 21:18:18.123419 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Jan 13 21:18:18.123463 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Jan 13 21:18:18.125291 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Jan 13 21:18:18.127795 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Jan 13 21:18:18.135646 systemd-networkd[754]: eth0: DHCPv6 lease lost Jan 13 21:18:18.137256 systemd[1]: systemd-networkd.service: Deactivated successfully. Jan 13 21:18:18.137374 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Jan 13 21:18:18.138818 systemd[1]: systemd-networkd.socket: Deactivated successfully. Jan 13 21:18:18.138849 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Jan 13 21:18:18.151712 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Jan 13 21:18:18.152566 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Jan 13 21:18:18.152643 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Jan 13 21:18:18.154755 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Jan 13 21:18:18.157664 systemd[1]: systemd-resolved.service: Deactivated successfully. Jan 13 21:18:18.157836 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Jan 13 21:18:18.161748 systemd[1]: systemd-sysctl.service: Deactivated successfully. Jan 13 21:18:18.161833 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Jan 13 21:18:18.164840 systemd[1]: systemd-modules-load.service: Deactivated successfully. Jan 13 21:18:18.164886 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Jan 13 21:18:18.166841 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Jan 13 21:18:18.166888 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Jan 13 21:18:18.169266 systemd[1]: systemd-udevd.service: Deactivated successfully. Jan 13 21:18:18.169401 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Jan 13 21:18:18.171417 systemd[1]: network-cleanup.service: Deactivated successfully. Jan 13 21:18:18.171508 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Jan 13 21:18:18.173563 systemd[1]: sysroot-boot.service: Deactivated successfully. Jan 13 21:18:18.173684 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Jan 13 21:18:18.175665 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Jan 13 21:18:18.175719 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Jan 13 21:18:18.176845 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Jan 13 21:18:18.176876 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. 
Jan 13 21:18:18.178889 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Jan 13 21:18:18.178933 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Jan 13 21:18:18.181556 systemd[1]: dracut-cmdline.service: Deactivated successfully. Jan 13 21:18:18.181619 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Jan 13 21:18:18.184150 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Jan 13 21:18:18.184196 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jan 13 21:18:18.186986 systemd[1]: initrd-setup-root.service: Deactivated successfully. Jan 13 21:18:18.187026 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Jan 13 21:18:18.200730 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Jan 13 21:18:18.201742 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Jan 13 21:18:18.201794 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jan 13 21:18:18.203894 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jan 13 21:18:18.203937 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jan 13 21:18:18.206136 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Jan 13 21:18:18.206230 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Jan 13 21:18:18.208295 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Jan 13 21:18:18.210401 systemd[1]: Starting initrd-switch-root.service - Switch Root... Jan 13 21:18:18.219841 systemd[1]: Switching root. Jan 13 21:18:18.251554 systemd-journald[238]: Journal stopped Jan 13 21:18:18.939558 systemd-journald[238]: Received SIGTERM from PID 1 (systemd). Jan 13 21:18:18.939630 kernel: SELinux: policy capability network_peer_controls=1 Jan 13 21:18:18.939643 kernel: SELinux: policy capability open_perms=1 Jan 13 21:18:18.939656 kernel: SELinux: policy capability extended_socket_class=1 Jan 13 21:18:18.939666 kernel: SELinux: policy capability always_check_network=0 Jan 13 21:18:18.939679 kernel: SELinux: policy capability cgroup_seclabel=1 Jan 13 21:18:18.939692 kernel: SELinux: policy capability nnp_nosuid_transition=1 Jan 13 21:18:18.939701 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Jan 13 21:18:18.939710 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Jan 13 21:18:18.939720 kernel: audit: type=1403 audit(1736803098.388:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 Jan 13 21:18:18.939731 systemd[1]: Successfully loaded SELinux policy in 30.551ms. Jan 13 21:18:18.939751 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 9.251ms. Jan 13 21:18:18.939762 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Jan 13 21:18:18.939774 systemd[1]: Detected virtualization kvm. Jan 13 21:18:18.939786 systemd[1]: Detected architecture arm64. Jan 13 21:18:18.939796 systemd[1]: Detected first boot. Jan 13 21:18:18.939806 systemd[1]: Initializing machine ID from VM UUID. Jan 13 21:18:18.939817 zram_generator::config[1041]: No configuration found. Jan 13 21:18:18.939827 systemd[1]: Populated /etc with preset unit settings. 
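The zram_generator line above means no compressed-RAM device is configured, so the generator creates nothing. Opting in would take a config like the following sketch (values illustrative; syntax per zram-generator's documented format):

    # /etc/systemd/zram-generator.conf
    [zram0]
    zram-size = min(ram / 2, 4096)    # device size in MiB, capped at 4 GiB
    compression-algorithm = zstd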
Jan 13 21:18:18.939838 systemd[1]: initrd-switch-root.service: Deactivated successfully. Jan 13 21:18:18.939848 systemd[1]: Stopped initrd-switch-root.service - Switch Root. Jan 13 21:18:18.939860 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Jan 13 21:18:18.939873 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Jan 13 21:18:18.939884 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Jan 13 21:18:18.939894 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Jan 13 21:18:18.939905 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Jan 13 21:18:18.939915 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Jan 13 21:18:18.939926 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Jan 13 21:18:18.939936 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Jan 13 21:18:18.939947 systemd[1]: Created slice user.slice - User and Session Slice. Jan 13 21:18:18.939957 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jan 13 21:18:18.939970 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jan 13 21:18:18.939981 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Jan 13 21:18:18.939991 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Jan 13 21:18:18.940002 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. Jan 13 21:18:18.940012 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Jan 13 21:18:18.940023 systemd[1]: Expecting device dev-ttyAMA0.device - /dev/ttyAMA0... Jan 13 21:18:18.940033 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jan 13 21:18:18.940043 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. Jan 13 21:18:18.940054 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. Jan 13 21:18:18.940066 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. Jan 13 21:18:18.940077 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Jan 13 21:18:18.940088 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Jan 13 21:18:18.940099 systemd[1]: Reached target remote-fs.target - Remote File Systems. Jan 13 21:18:18.940109 systemd[1]: Reached target slices.target - Slice Units. Jan 13 21:18:18.940129 systemd[1]: Reached target swap.target - Swaps. Jan 13 21:18:18.940141 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Jan 13 21:18:18.940152 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Jan 13 21:18:18.940165 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Jan 13 21:18:18.940176 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Jan 13 21:18:18.940187 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Jan 13 21:18:18.940197 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Jan 13 21:18:18.940208 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... 
Jan 13 21:18:18.940218 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Jan 13 21:18:18.940228 systemd[1]: Mounting media.mount - External Media Directory... Jan 13 21:18:18.940238 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Jan 13 21:18:18.940249 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Jan 13 21:18:18.940261 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... Jan 13 21:18:18.940272 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Jan 13 21:18:18.940283 systemd[1]: Reached target machines.target - Containers. Jan 13 21:18:18.940293 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Jan 13 21:18:18.940304 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 13 21:18:18.940315 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Jan 13 21:18:18.940326 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Jan 13 21:18:18.940337 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jan 13 21:18:18.940349 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Jan 13 21:18:18.940359 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jan 13 21:18:18.940370 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Jan 13 21:18:18.940381 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jan 13 21:18:18.940392 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Jan 13 21:18:18.940402 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Jan 13 21:18:18.940413 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. Jan 13 21:18:18.940423 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Jan 13 21:18:18.940435 systemd[1]: Stopped systemd-fsck-usr.service. Jan 13 21:18:18.940445 kernel: fuse: init (API version 7.39) Jan 13 21:18:18.940454 systemd[1]: Starting systemd-journald.service - Journal Service... Jan 13 21:18:18.940465 kernel: loop: module loaded Jan 13 21:18:18.940474 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Jan 13 21:18:18.940485 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Jan 13 21:18:18.940495 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Jan 13 21:18:18.940505 kernel: ACPI: bus type drm_connector registered Jan 13 21:18:18.940529 systemd-journald[1108]: Collecting audit messages is disabled. Jan 13 21:18:18.940553 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Jan 13 21:18:18.940564 systemd-journald[1108]: Journal started Jan 13 21:18:18.940586 systemd-journald[1108]: Runtime Journal (/run/log/journal/8fee9192b0ae4819bcc3789b69ba3a83) is 5.9M, max 47.3M, 41.4M free. Jan 13 21:18:18.744104 systemd[1]: Queued start job for default target multi-user.target. Jan 13 21:18:18.760985 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6. Jan 13 21:18:18.761327 systemd[1]: systemd-journald.service: Deactivated successfully. 
Jan 13 21:18:18.943069 systemd[1]: verity-setup.service: Deactivated successfully. Jan 13 21:18:18.943099 systemd[1]: Stopped verity-setup.service. Jan 13 21:18:18.946375 systemd[1]: Started systemd-journald.service - Journal Service. Jan 13 21:18:18.947019 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Jan 13 21:18:18.948180 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Jan 13 21:18:18.949421 systemd[1]: Mounted media.mount - External Media Directory. Jan 13 21:18:18.950523 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Jan 13 21:18:18.951824 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Jan 13 21:18:18.953079 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Jan 13 21:18:18.955623 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Jan 13 21:18:18.956972 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Jan 13 21:18:18.958489 systemd[1]: modprobe@configfs.service: Deactivated successfully. Jan 13 21:18:18.958651 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Jan 13 21:18:18.960019 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jan 13 21:18:18.960173 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jan 13 21:18:18.962909 systemd[1]: modprobe@drm.service: Deactivated successfully. Jan 13 21:18:18.963053 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Jan 13 21:18:18.964356 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jan 13 21:18:18.964491 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jan 13 21:18:18.965947 systemd[1]: modprobe@fuse.service: Deactivated successfully. Jan 13 21:18:18.966088 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Jan 13 21:18:18.967433 systemd[1]: modprobe@loop.service: Deactivated successfully. Jan 13 21:18:18.967569 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jan 13 21:18:18.968913 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Jan 13 21:18:18.970243 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Jan 13 21:18:18.971899 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Jan 13 21:18:18.983767 systemd[1]: Reached target network-pre.target - Preparation for Network. Jan 13 21:18:18.998691 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Jan 13 21:18:19.000695 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Jan 13 21:18:19.001772 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Jan 13 21:18:19.001813 systemd[1]: Reached target local-fs.target - Local File Systems. Jan 13 21:18:19.003742 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink). Jan 13 21:18:19.005875 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Jan 13 21:18:19.007938 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Jan 13 21:18:19.009056 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 13 21:18:19.010409 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... 
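The modprobe@dm_mod, modprobe@drm, modprobe@efi_pstore, modprobe@fuse and modprobe@loop services above are all instances of systemd's single modprobe@.service template, which expands the instance name into a modprobe call. Paraphrased from the stock template (verify against /usr/lib/systemd/system/modprobe@.service on the image):

    [Unit]
    Description=Load Kernel Module %i
    DefaultDependencies=no
    Before=sysinit.target
    ConditionCapability=CAP_SYS_MODULE

    [Service]
    Type=oneshot
    ExecStart=-/sbin/modprobe -abq %I    # leading '-': a missing module is not a failure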
Jan 13 21:18:19.012338 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Jan 13 21:18:19.013613 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jan 13 21:18:19.016777 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Jan 13 21:18:19.020255 systemd-journald[1108]: Time spent on flushing to /var/log/journal/8fee9192b0ae4819bcc3789b69ba3a83 is 51.084ms for 850 entries. Jan 13 21:18:19.020255 systemd-journald[1108]: System Journal (/var/log/journal/8fee9192b0ae4819bcc3789b69ba3a83) is 8.0M, max 195.6M, 187.6M free. Jan 13 21:18:19.082033 systemd-journald[1108]: Received client request to flush runtime journal. Jan 13 21:18:19.082084 kernel: loop0: detected capacity change from 0 to 114328 Jan 13 21:18:19.082101 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Jan 13 21:18:19.019449 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Jan 13 21:18:19.021852 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jan 13 21:18:19.024875 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Jan 13 21:18:19.028824 systemd[1]: Starting systemd-sysusers.service - Create System Users... Jan 13 21:18:19.031391 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Jan 13 21:18:19.032943 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Jan 13 21:18:19.034428 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Jan 13 21:18:19.037723 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Jan 13 21:18:19.040005 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Jan 13 21:18:19.047140 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Jan 13 21:18:19.052778 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk... Jan 13 21:18:19.055325 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization... Jan 13 21:18:19.057039 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Jan 13 21:18:19.072840 systemd[1]: Finished systemd-sysusers.service - Create System Users. Jan 13 21:18:19.079662 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Jan 13 21:18:19.085641 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Jan 13 21:18:19.089296 udevadm[1163]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation.service, lvm2-activation-early.service not to pull it in. Jan 13 21:18:19.089304 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Jan 13 21:18:19.090573 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk. Jan 13 21:18:19.100661 kernel: loop1: detected capacity change from 0 to 114432 Jan 13 21:18:19.108058 systemd-tmpfiles[1166]: ACLs are not supported, ignoring. Jan 13 21:18:19.108077 systemd-tmpfiles[1166]: ACLs are not supported, ignoring. Jan 13 21:18:19.112452 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. 
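The flush request above moves the volatile journal out of /run into /var/log/journal; the 47.3M runtime and 195.6M system caps shown in these lines are computed from filesystem sizes rather than set by hand. Making the same limits explicit would look like this sketch (illustrative; the host relies on the computed defaults, not on such a file):

    # /etc/systemd/journald.conf.d/10-size.conf
    [Journal]
    Storage=persistent    # keep the journal under /var/log/journal
    RuntimeMaxUse=47M     # cap for the volatile /run journal
    SystemMaxUse=195M     # cap for the persistent /var journal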
Jan 13 21:18:19.135624 kernel: loop2: detected capacity change from 0 to 189592 Jan 13 21:18:19.191690 kernel: loop3: detected capacity change from 0 to 114328 Jan 13 21:18:19.197627 kernel: loop4: detected capacity change from 0 to 114432 Jan 13 21:18:19.203565 kernel: loop5: detected capacity change from 0 to 189592 Jan 13 21:18:19.206579 (sd-merge)[1178]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes'. Jan 13 21:18:19.206958 (sd-merge)[1178]: Merged extensions into '/usr'. Jan 13 21:18:19.213276 systemd[1]: Reloading requested from client PID 1152 ('systemd-sysext') (unit systemd-sysext.service)... Jan 13 21:18:19.213292 systemd[1]: Reloading... Jan 13 21:18:19.262727 zram_generator::config[1201]: No configuration found. Jan 13 21:18:19.283703 ldconfig[1147]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Jan 13 21:18:19.362962 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jan 13 21:18:19.402005 systemd[1]: Reloading finished in 188 ms. Jan 13 21:18:19.429192 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Jan 13 21:18:19.430752 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Jan 13 21:18:19.445754 systemd[1]: Starting ensure-sysext.service... Jan 13 21:18:19.447574 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Jan 13 21:18:19.463642 systemd[1]: Reloading requested from client PID 1238 ('systemctl') (unit ensure-sysext.service)... Jan 13 21:18:19.463658 systemd[1]: Reloading... Jan 13 21:18:19.466774 systemd-tmpfiles[1239]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Jan 13 21:18:19.467021 systemd-tmpfiles[1239]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Jan 13 21:18:19.467671 systemd-tmpfiles[1239]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Jan 13 21:18:19.467882 systemd-tmpfiles[1239]: ACLs are not supported, ignoring. Jan 13 21:18:19.467929 systemd-tmpfiles[1239]: ACLs are not supported, ignoring. Jan 13 21:18:19.470486 systemd-tmpfiles[1239]: Detected autofs mount point /boot during canonicalization of boot. Jan 13 21:18:19.470581 systemd-tmpfiles[1239]: Skipping /boot Jan 13 21:18:19.477412 systemd-tmpfiles[1239]: Detected autofs mount point /boot during canonicalization of boot. Jan 13 21:18:19.477511 systemd-tmpfiles[1239]: Skipping /boot Jan 13 21:18:19.508686 zram_generator::config[1263]: No configuration found. Jan 13 21:18:19.593299 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jan 13 21:18:19.632694 systemd[1]: Reloading finished in 168 ms. Jan 13 21:18:19.649636 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Jan 13 21:18:19.661039 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Jan 13 21:18:19.668838 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Jan 13 21:18:19.671388 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... 
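The (sd-merge) lines above are systemd-sysext overlaying the containerd-flatcar, docker-flatcar and kubernetes images onto /usr. An image is only eligible for merging if it carries an extension-release file whose fields match the host; a minimal sketch for the kubernetes image (field values illustrative, and Flatcar-built images may pin ID=flatcar with a SYSEXT_LEVEL instead of the wildcard):

    # inside the image: /usr/lib/extension-release.d/extension-release.kubernetes
    ID=_any               # match any distro; alternatively ID=flatcar
    ARCHITECTURE=arm64    # optional, must match the host if present

On a running host, `systemd-sysext status` lists which hierarchies are merged and from which images.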
Jan 13 21:18:19.673565 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Jan 13 21:18:19.677865 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Jan 13 21:18:19.680854 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Jan 13 21:18:19.685847 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Jan 13 21:18:19.689662 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 13 21:18:19.690739 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jan 13 21:18:19.693922 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jan 13 21:18:19.698673 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jan 13 21:18:19.699966 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 13 21:18:19.705264 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Jan 13 21:18:19.707487 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Jan 13 21:18:19.710402 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jan 13 21:18:19.710524 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jan 13 21:18:19.712322 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jan 13 21:18:19.712474 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jan 13 21:18:19.714354 systemd[1]: modprobe@loop.service: Deactivated successfully. Jan 13 21:18:19.714471 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jan 13 21:18:19.722942 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 13 21:18:19.738878 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jan 13 21:18:19.741883 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jan 13 21:18:19.742570 systemd-udevd[1313]: Using default interface naming scheme 'v255'. Jan 13 21:18:19.746768 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jan 13 21:18:19.747857 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 13 21:18:19.749384 systemd[1]: Starting systemd-update-done.service - Update is Completed... Jan 13 21:18:19.753705 systemd[1]: Started systemd-userdbd.service - User Database Manager. Jan 13 21:18:19.756047 augenrules[1334]: No rules Jan 13 21:18:19.766189 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Jan 13 21:18:19.768074 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Jan 13 21:18:19.772295 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Jan 13 21:18:19.774220 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jan 13 21:18:19.774372 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jan 13 21:18:19.776154 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jan 13 21:18:19.776666 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jan 13 21:18:19.778754 systemd[1]: modprobe@loop.service: Deactivated successfully. 
Jan 13 21:18:19.778891 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jan 13 21:18:19.781226 systemd[1]: Finished systemd-update-done.service - Update is Completed. Jan 13 21:18:19.786423 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Jan 13 21:18:19.797119 systemd[1]: Finished ensure-sysext.service. Jan 13 21:18:19.804264 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 13 21:18:19.810812 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jan 13 21:18:19.813879 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Jan 13 21:18:19.815871 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jan 13 21:18:19.818668 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jan 13 21:18:19.819753 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 13 21:18:19.822403 systemd[1]: Starting systemd-networkd.service - Network Configuration... Jan 13 21:18:19.833140 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 39 scanned by (udev-worker) (1349) Jan 13 21:18:19.836130 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization... Jan 13 21:18:19.837788 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Jan 13 21:18:19.838235 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jan 13 21:18:19.839946 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jan 13 21:18:19.842082 systemd[1]: modprobe@drm.service: Deactivated successfully. Jan 13 21:18:19.842224 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Jan 13 21:18:19.845348 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jan 13 21:18:19.845472 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jan 13 21:18:19.847013 systemd[1]: modprobe@loop.service: Deactivated successfully. Jan 13 21:18:19.847154 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jan 13 21:18:19.849773 systemd-resolved[1306]: Positive Trust Anchors: Jan 13 21:18:19.850874 systemd-resolved[1306]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Jan 13 21:18:19.850959 systemd-resolved[1306]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Jan 13 21:18:19.859521 systemd[1]: Condition check resulted in dev-ttyAMA0.device - /dev/ttyAMA0 being skipped. Jan 13 21:18:19.863191 systemd-resolved[1306]: Defaulting to hostname 'linux'. Jan 13 21:18:19.868301 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Jan 13 21:18:19.872014 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. 
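The "Positive Trust Anchors" entry above is systemd-resolved loading its compiled-in DNSSEC root trust anchor, the DS record of the 2017 root KSK, together with negative anchors for private and reverse zones. The built-in anchor can be overridden, though that is rarely necessary, with a drop-in in ordinary zone-file syntax:

    # /etc/dnssec-trust-anchors.d/root.positive
    . IN DS 20326 8 2 E06D44B80B8F1D39A95C0B0D7C65D08458E880409BBC683457104237C7F8EC8D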
Jan 13 21:18:19.873606 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Jan 13 21:18:19.881762 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Jan 13 21:18:19.883218 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jan 13 21:18:19.883276 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Jan 13 21:18:19.906687 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Jan 13 21:18:19.908852 systemd-networkd[1376]: lo: Link UP Jan 13 21:18:19.909073 systemd-networkd[1376]: lo: Gained carrier Jan 13 21:18:19.909326 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization. Jan 13 21:18:19.909960 systemd-networkd[1376]: Enumeration completed Jan 13 21:18:19.910983 systemd-networkd[1376]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jan 13 21:18:19.910990 systemd-networkd[1376]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Jan 13 21:18:19.911146 systemd[1]: Started systemd-networkd.service - Network Configuration. Jan 13 21:18:19.912499 systemd-networkd[1376]: eth0: Link UP Jan 13 21:18:19.912568 systemd-networkd[1376]: eth0: Gained carrier Jan 13 21:18:19.912633 systemd-networkd[1376]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jan 13 21:18:19.912952 systemd[1]: Reached target network.target - Network. Jan 13 21:18:19.914196 systemd[1]: Reached target time-set.target - System Time Set. Jan 13 21:18:19.922057 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Jan 13 21:18:19.925661 systemd-networkd[1376]: eth0: DHCPv4 address 10.0.0.74/16, gateway 10.0.0.1 acquired from 10.0.0.1 Jan 13 21:18:19.926483 systemd-timesyncd[1377]: Network configuration changed, trying to establish connection. Jan 13 21:18:20.417101 systemd-resolved[1306]: Clock change detected. Flushing caches. Jan 13 21:18:20.417220 systemd-timesyncd[1377]: Contacted time server 10.0.0.1:123 (10.0.0.1). Jan 13 21:18:20.417282 systemd-timesyncd[1377]: Initial clock synchronization to Mon 2025-01-13 21:18:20.417059 UTC. Jan 13 21:18:20.439679 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 13 21:18:20.448836 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization. Jan 13 21:18:20.451796 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes... Jan 13 21:18:20.468128 lvm[1401]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Jan 13 21:18:20.478530 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jan 13 21:18:20.497804 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes. Jan 13 21:18:20.499222 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Jan 13 21:18:20.500367 systemd[1]: Reached target sysinit.target - System Initialization. Jan 13 21:18:20.501533 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. 
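eth0 above matched Flatcar's catch-all unit /usr/lib/systemd/network/zz-default.network and took a DHCPv4 lease (10.0.0.74/16); the "zz-" prefix sorts the unit last so any more specific .network file wins. Its effective content is approximately the following sketch (check the file on the image for the authoritative version):

    # /usr/lib/systemd/network/zz-default.network (approximate)
    [Match]
    Name=*

    [Network]
    DHCP=yes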
Jan 13 21:18:20.502751 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Jan 13 21:18:20.504117 systemd[1]: Started logrotate.timer - Daily rotation of log files. Jan 13 21:18:20.505300 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Jan 13 21:18:20.506567 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Jan 13 21:18:20.507750 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Jan 13 21:18:20.507786 systemd[1]: Reached target paths.target - Path Units. Jan 13 21:18:20.508660 systemd[1]: Reached target timers.target - Timer Units. Jan 13 21:18:20.510240 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Jan 13 21:18:20.512640 systemd[1]: Starting docker.socket - Docker Socket for the API... Jan 13 21:18:20.520468 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Jan 13 21:18:20.522620 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes... Jan 13 21:18:20.524148 systemd[1]: Listening on docker.socket - Docker Socket for the API. Jan 13 21:18:20.525290 systemd[1]: Reached target sockets.target - Socket Units. Jan 13 21:18:20.526244 systemd[1]: Reached target basic.target - Basic System. Jan 13 21:18:20.527196 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Jan 13 21:18:20.527231 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Jan 13 21:18:20.528105 systemd[1]: Starting containerd.service - containerd container runtime... Jan 13 21:18:20.529868 lvm[1408]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Jan 13 21:18:20.531835 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Jan 13 21:18:20.533619 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Jan 13 21:18:20.536667 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Jan 13 21:18:20.540589 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Jan 13 21:18:20.541590 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Jan 13 21:18:20.544754 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Jan 13 21:18:20.546184 jq[1411]: false Jan 13 21:18:20.546656 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Jan 13 21:18:20.549854 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Jan 13 21:18:20.556656 systemd[1]: Starting systemd-logind.service - User Login Management... 
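"Listening on sshd.socket" above reflects socket-activated SSH: no sshd process runs until a client connects, at which point systemd spawns a per-connection instance. A socket unit of this shape produces that behavior (a sketch; the unit shipped on the image may differ in details):

    [Unit]
    Description=OpenSSH Server Socket

    [Socket]
    ListenStream=22
    Accept=yes        # fork one sshd@.service instance per connection

    [Install]
    WantedBy=sockets.target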
Jan 13 21:18:20.562966 extend-filesystems[1412]: Found loop3 Jan 13 21:18:20.563881 extend-filesystems[1412]: Found loop4 Jan 13 21:18:20.563881 extend-filesystems[1412]: Found loop5 Jan 13 21:18:20.563881 extend-filesystems[1412]: Found vda Jan 13 21:18:20.563881 extend-filesystems[1412]: Found vda1 Jan 13 21:18:20.563881 extend-filesystems[1412]: Found vda2 Jan 13 21:18:20.563881 extend-filesystems[1412]: Found vda3 Jan 13 21:18:20.563881 extend-filesystems[1412]: Found usr Jan 13 21:18:20.563881 extend-filesystems[1412]: Found vda4 Jan 13 21:18:20.563881 extend-filesystems[1412]: Found vda6 Jan 13 21:18:20.563881 extend-filesystems[1412]: Found vda7 Jan 13 21:18:20.563881 extend-filesystems[1412]: Found vda9 Jan 13 21:18:20.563881 extend-filesystems[1412]: Checking size of /dev/vda9 Jan 13 21:18:20.598713 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 1864699 blocks Jan 13 21:18:20.598741 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 39 scanned by (udev-worker) (1355) Jan 13 21:18:20.563155 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Jan 13 21:18:20.598844 extend-filesystems[1412]: Resized partition /dev/vda9 Jan 13 21:18:20.574956 dbus-daemon[1410]: [system] SELinux support is enabled Jan 13 21:18:20.563563 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Jan 13 21:18:20.602120 extend-filesystems[1434]: resize2fs 1.47.1 (20-May-2024) Jan 13 21:18:20.564145 systemd[1]: Starting update-engine.service - Update Engine... Jan 13 21:18:20.566844 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Jan 13 21:18:20.604878 jq[1429]: true Jan 13 21:18:20.568743 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes. Jan 13 21:18:20.573934 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Jan 13 21:18:20.574120 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Jan 13 21:18:20.574379 systemd[1]: motdgen.service: Deactivated successfully. Jan 13 21:18:20.574523 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Jan 13 21:18:20.584672 systemd[1]: Started dbus.service - D-Bus System Message Bus. Jan 13 21:18:20.598804 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Jan 13 21:18:20.598990 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Jan 13 21:18:20.615577 update_engine[1427]: I20250113 21:18:20.614228 1427 main.cc:92] Flatcar Update Engine starting Jan 13 21:18:20.620749 update_engine[1427]: I20250113 21:18:20.620640 1427 update_check_scheduler.cc:74] Next update check in 8m51s Jan 13 21:18:20.623503 kernel: EXT4-fs (vda9): resized filesystem to 1864699 Jan 13 21:18:20.620912 (ntainerd)[1438]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Jan 13 21:18:20.627966 jq[1437]: true Jan 13 21:18:20.633333 tar[1435]: linux-arm64/helm Jan 13 21:18:20.637001 extend-filesystems[1434]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required Jan 13 21:18:20.637001 extend-filesystems[1434]: old_desc_blocks = 1, new_desc_blocks = 1 Jan 13 21:18:20.637001 extend-filesystems[1434]: The filesystem on /dev/vda9 is now 1864699 (4k) blocks long. 
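extend-filesystems above grows the root filesystem on first boot from 553472 to 1864699 4k blocks so it fills the virtual disk (the partition itself was already enlarged earlier in boot). The equivalent manual steps, with device names taken from the log, would be roughly:

    lsblk /dev/vda          # inspect the partition layout
    resize2fs /dev/vda9     # online-grow the mounted ext4 root filesystem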
Jan 13 21:18:20.636995 systemd[1]: Started update-engine.service - Update Engine. Jan 13 21:18:20.650925 extend-filesystems[1412]: Resized filesystem in /dev/vda9 Jan 13 21:18:20.638330 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Jan 13 21:18:20.638354 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Jan 13 21:18:20.644641 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Jan 13 21:18:20.644659 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Jan 13 21:18:20.657665 systemd[1]: Started locksmithd.service - Cluster reboot manager. Jan 13 21:18:20.659186 systemd[1]: extend-filesystems.service: Deactivated successfully. Jan 13 21:18:20.659577 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Jan 13 21:18:20.667227 systemd-logind[1420]: Watching system buttons on /dev/input/event0 (Power Button) Jan 13 21:18:20.667414 systemd-logind[1420]: New seat seat0. Jan 13 21:18:20.669472 systemd[1]: Started systemd-logind.service - User Login Management. Jan 13 21:18:20.686478 bash[1466]: Updated "/home/core/.ssh/authorized_keys" Jan 13 21:18:20.689879 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Jan 13 21:18:20.693108 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met. Jan 13 21:18:20.714985 locksmithd[1456]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Jan 13 21:18:20.830511 containerd[1438]: time="2025-01-13T21:18:20.829797034Z" level=info msg="starting containerd" revision=174e0d1785eeda18dc2beba45e1d5a188771636b version=v1.7.21 Jan 13 21:18:20.863630 containerd[1438]: time="2025-01-13T21:18:20.863544794Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Jan 13 21:18:20.864953 containerd[1438]: time="2025-01-13T21:18:20.864919354Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.71-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Jan 13 21:18:20.864990 containerd[1438]: time="2025-01-13T21:18:20.864956234Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Jan 13 21:18:20.864990 containerd[1438]: time="2025-01-13T21:18:20.864973034Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Jan 13 21:18:20.866051 containerd[1438]: time="2025-01-13T21:18:20.865115594Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1 Jan 13 21:18:20.866051 containerd[1438]: time="2025-01-13T21:18:20.865140714Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1 Jan 13 21:18:20.866051 containerd[1438]: time="2025-01-13T21:18:20.865189314Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." 
error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1 Jan 13 21:18:20.866051 containerd[1438]: time="2025-01-13T21:18:20.865201114Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Jan 13 21:18:20.866051 containerd[1438]: time="2025-01-13T21:18:20.865344634Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Jan 13 21:18:20.866051 containerd[1438]: time="2025-01-13T21:18:20.865359074Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Jan 13 21:18:20.866051 containerd[1438]: time="2025-01-13T21:18:20.865372874Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1 Jan 13 21:18:20.866051 containerd[1438]: time="2025-01-13T21:18:20.865382474Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Jan 13 21:18:20.866051 containerd[1438]: time="2025-01-13T21:18:20.865445274Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Jan 13 21:18:20.866051 containerd[1438]: time="2025-01-13T21:18:20.865641394Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Jan 13 21:18:20.866051 containerd[1438]: time="2025-01-13T21:18:20.865732114Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Jan 13 21:18:20.866352 containerd[1438]: time="2025-01-13T21:18:20.865778754Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Jan 13 21:18:20.866352 containerd[1438]: time="2025-01-13T21:18:20.865852714Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 Jan 13 21:18:20.866352 containerd[1438]: time="2025-01-13T21:18:20.865890754Z" level=info msg="metadata content store policy set" policy=shared Jan 13 21:18:20.872689 containerd[1438]: time="2025-01-13T21:18:20.872512074Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Jan 13 21:18:20.872689 containerd[1438]: time="2025-01-13T21:18:20.872562354Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Jan 13 21:18:20.872689 containerd[1438]: time="2025-01-13T21:18:20.872578394Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1 Jan 13 21:18:20.872689 containerd[1438]: time="2025-01-13T21:18:20.872592194Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1 Jan 13 21:18:20.872689 containerd[1438]: time="2025-01-13T21:18:20.872605594Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Jan 13 21:18:20.873071 containerd[1438]: time="2025-01-13T21:18:20.872895114Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." 
type=io.containerd.monitor.v1 Jan 13 21:18:20.873208 containerd[1438]: time="2025-01-13T21:18:20.873176234Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Jan 13 21:18:20.873340 containerd[1438]: time="2025-01-13T21:18:20.873319954Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2 Jan 13 21:18:20.873366 containerd[1438]: time="2025-01-13T21:18:20.873344314Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1 Jan 13 21:18:20.873366 containerd[1438]: time="2025-01-13T21:18:20.873358554Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1 Jan 13 21:18:20.873404 containerd[1438]: time="2025-01-13T21:18:20.873372914Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Jan 13 21:18:20.873404 containerd[1438]: time="2025-01-13T21:18:20.873385914Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Jan 13 21:18:20.873404 containerd[1438]: time="2025-01-13T21:18:20.873398754Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Jan 13 21:18:20.873458 containerd[1438]: time="2025-01-13T21:18:20.873413394Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Jan 13 21:18:20.873458 containerd[1438]: time="2025-01-13T21:18:20.873427794Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Jan 13 21:18:20.873458 containerd[1438]: time="2025-01-13T21:18:20.873440314Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Jan 13 21:18:20.873458 containerd[1438]: time="2025-01-13T21:18:20.873452154Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Jan 13 21:18:20.873557 containerd[1438]: time="2025-01-13T21:18:20.873463034Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Jan 13 21:18:20.873635 containerd[1438]: time="2025-01-13T21:18:20.873614954Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Jan 13 21:18:20.873665 containerd[1438]: time="2025-01-13T21:18:20.873641994Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Jan 13 21:18:20.873665 containerd[1438]: time="2025-01-13T21:18:20.873657594Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Jan 13 21:18:20.873704 containerd[1438]: time="2025-01-13T21:18:20.873670114Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Jan 13 21:18:20.873704 containerd[1438]: time="2025-01-13T21:18:20.873683074Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Jan 13 21:18:20.873704 containerd[1438]: time="2025-01-13T21:18:20.873697634Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Jan 13 21:18:20.873758 containerd[1438]: time="2025-01-13T21:18:20.873710594Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." 
type=io.containerd.grpc.v1 Jan 13 21:18:20.873758 containerd[1438]: time="2025-01-13T21:18:20.873723794Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Jan 13 21:18:20.873805 containerd[1438]: time="2025-01-13T21:18:20.873736634Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1 Jan 13 21:18:20.873825 containerd[1438]: time="2025-01-13T21:18:20.873813434Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1 Jan 13 21:18:20.873843 containerd[1438]: time="2025-01-13T21:18:20.873828754Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Jan 13 21:18:20.873865 containerd[1438]: time="2025-01-13T21:18:20.873841634Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1 Jan 13 21:18:20.873865 containerd[1438]: time="2025-01-13T21:18:20.873854834Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Jan 13 21:18:20.873899 containerd[1438]: time="2025-01-13T21:18:20.873870754Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1 Jan 13 21:18:20.873953 containerd[1438]: time="2025-01-13T21:18:20.873896514Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1 Jan 13 21:18:20.873973 containerd[1438]: time="2025-01-13T21:18:20.873958994Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Jan 13 21:18:20.874002 containerd[1438]: time="2025-01-13T21:18:20.873981914Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Jan 13 21:18:20.874108 containerd[1438]: time="2025-01-13T21:18:20.874095354Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Jan 13 21:18:20.874138 containerd[1438]: time="2025-01-13T21:18:20.874115674Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1 Jan 13 21:18:20.874294 containerd[1438]: time="2025-01-13T21:18:20.874127874Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Jan 13 21:18:20.874294 containerd[1438]: time="2025-01-13T21:18:20.874207594Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1 Jan 13 21:18:20.874294 containerd[1438]: time="2025-01-13T21:18:20.874220074Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Jan 13 21:18:20.874294 containerd[1438]: time="2025-01-13T21:18:20.874233194Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1 Jan 13 21:18:20.874294 containerd[1438]: time="2025-01-13T21:18:20.874243354Z" level=info msg="NRI interface is disabled by configuration." Jan 13 21:18:20.874294 containerd[1438]: time="2025-01-13T21:18:20.874254314Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." 
type=io.containerd.grpc.v1 Jan 13 21:18:20.874746 containerd[1438]: time="2025-01-13T21:18:20.874685394Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Jan 13 21:18:20.874846 containerd[1438]: time="2025-01-13T21:18:20.874754114Z" level=info msg="Connect containerd service" Jan 13 21:18:20.874846 containerd[1438]: time="2025-01-13T21:18:20.874834354Z" level=info msg="using legacy CRI server" Jan 13 21:18:20.874846 containerd[1438]: time="2025-01-13T21:18:20.874842354Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Jan 13 21:18:20.874998 containerd[1438]: time="2025-01-13T21:18:20.874921914Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Jan 13 21:18:20.875804 containerd[1438]: time="2025-01-13T21:18:20.875776594Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Jan 13 21:18:20.876099 
containerd[1438]: time="2025-01-13T21:18:20.876063434Z" level=info msg="Start subscribing containerd event" Jan 13 21:18:20.876133 containerd[1438]: time="2025-01-13T21:18:20.876116714Z" level=info msg="Start recovering state" Jan 13 21:18:20.876203 containerd[1438]: time="2025-01-13T21:18:20.876187714Z" level=info msg="Start event monitor" Jan 13 21:18:20.876228 containerd[1438]: time="2025-01-13T21:18:20.876204314Z" level=info msg="Start snapshots syncer" Jan 13 21:18:20.876228 containerd[1438]: time="2025-01-13T21:18:20.876213234Z" level=info msg="Start cni network conf syncer for default" Jan 13 21:18:20.876228 containerd[1438]: time="2025-01-13T21:18:20.876220354Z" level=info msg="Start streaming server" Jan 13 21:18:20.878156 containerd[1438]: time="2025-01-13T21:18:20.878128274Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Jan 13 21:18:20.878391 containerd[1438]: time="2025-01-13T21:18:20.878274314Z" level=info msg=serving... address=/run/containerd/containerd.sock Jan 13 21:18:20.878391 containerd[1438]: time="2025-01-13T21:18:20.878373914Z" level=info msg="containerd successfully booted in 0.050145s" Jan 13 21:18:20.878466 systemd[1]: Started containerd.service - containerd container runtime. Jan 13 21:18:20.902588 sshd_keygen[1433]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Jan 13 21:18:20.921560 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Jan 13 21:18:20.935714 systemd[1]: Starting issuegen.service - Generate /run/issue... Jan 13 21:18:20.942232 systemd[1]: issuegen.service: Deactivated successfully. Jan 13 21:18:20.942408 systemd[1]: Finished issuegen.service - Generate /run/issue. Jan 13 21:18:20.945067 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Jan 13 21:18:20.958614 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Jan 13 21:18:20.961248 systemd[1]: Started getty@tty1.service - Getty on tty1. Jan 13 21:18:20.964811 systemd[1]: Started serial-getty@ttyAMA0.service - Serial Getty on ttyAMA0. Jan 13 21:18:20.968219 systemd[1]: Reached target getty.target - Login Prompts. Jan 13 21:18:20.986476 tar[1435]: linux-arm64/LICENSE Jan 13 21:18:20.986576 tar[1435]: linux-arm64/README.md Jan 13 21:18:21.000732 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Jan 13 21:18:22.409703 systemd-networkd[1376]: eth0: Gained IPv6LL Jan 13 21:18:22.412181 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Jan 13 21:18:22.414036 systemd[1]: Reached target network-online.target - Network is Online. Jan 13 21:18:22.421690 systemd[1]: Starting coreos-metadata.service - QEMU metadata agent... Jan 13 21:18:22.424015 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 13 21:18:22.426092 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Jan 13 21:18:22.440423 systemd[1]: coreos-metadata.service: Deactivated successfully. Jan 13 21:18:22.440584 systemd[1]: Finished coreos-metadata.service - QEMU metadata agent. Jan 13 21:18:22.442565 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Jan 13 21:18:22.447564 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Jan 13 21:18:22.910770 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 13 21:18:22.912355 systemd[1]: Reached target multi-user.target - Multi-User System. 
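Note: the "failed to load cni during init" error above is expected at this point in boot; containerd's CRI plugin found nothing under /etc/cni/net.d because no pod network add-on has been installed yet. For reference, a minimal bridge config of the following shape would satisfy the loader (file name and subnet are illustrative, not taken from this boot):

    /etc/cni/net.d/10-bridge.conf
    {
      "cniVersion": "0.4.0",
      "name": "bridge",
      "type": "bridge",
      "bridge": "cni0",
      "isGateway": true,
      "ipMasq": true,
      "ipam": {
        "type": "host-local",
        "subnet": "10.244.0.0/16"
      }
    }

In a kubeadm-style cluster this file is normally written by the network add-on itself rather than by hand.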
Jan 13 21:18:22.914889 (kubelet)[1522]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 13 21:18:22.916575 systemd[1]: Startup finished in 549ms (kernel) + 4.681s (initrd) + 4.075s (userspace) = 9.306s. Jan 13 21:18:23.361551 kubelet[1522]: E0113 21:18:23.361423 1522 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 13 21:18:23.364109 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 13 21:18:23.364260 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 13 21:18:26.971336 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Jan 13 21:18:26.972548 systemd[1]: Started sshd@0-10.0.0.74:22-10.0.0.1:38544.service - OpenSSH per-connection server daemon (10.0.0.1:38544). Jan 13 21:18:27.025609 sshd[1535]: Accepted publickey for core from 10.0.0.1 port 38544 ssh2: RSA SHA256:yd4gyStb+mhc+KSvOhXa4vXVFZWeTXZvH887VPDApJg Jan 13 21:18:27.027508 sshd[1535]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 21:18:27.046682 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Jan 13 21:18:27.057149 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Jan 13 21:18:27.063305 systemd-logind[1420]: New session 1 of user core. Jan 13 21:18:27.068525 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Jan 13 21:18:27.084853 systemd[1]: Starting user@500.service - User Manager for UID 500... Jan 13 21:18:27.087586 (systemd)[1540]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Jan 13 21:18:27.162890 systemd[1540]: Queued start job for default target default.target. Jan 13 21:18:27.177521 systemd[1540]: Created slice app.slice - User Application Slice. Jan 13 21:18:27.177566 systemd[1540]: Reached target paths.target - Paths. Jan 13 21:18:27.177578 systemd[1540]: Reached target timers.target - Timers. Jan 13 21:18:27.178874 systemd[1540]: Starting dbus.socket - D-Bus User Message Bus Socket... Jan 13 21:18:27.189111 systemd[1540]: Listening on dbus.socket - D-Bus User Message Bus Socket. Jan 13 21:18:27.189176 systemd[1540]: Reached target sockets.target - Sockets. Jan 13 21:18:27.189188 systemd[1540]: Reached target basic.target - Basic System. Jan 13 21:18:27.189253 systemd[1540]: Reached target default.target - Main User Target. Jan 13 21:18:27.189281 systemd[1540]: Startup finished in 95ms. Jan 13 21:18:27.189552 systemd[1]: Started user@500.service - User Manager for UID 500. Jan 13 21:18:27.191074 systemd[1]: Started session-1.scope - Session 1 of User core. Jan 13 21:18:27.253203 systemd[1]: Started sshd@1-10.0.0.74:22-10.0.0.1:38546.service - OpenSSH per-connection server daemon (10.0.0.1:38546). Jan 13 21:18:27.292222 sshd[1551]: Accepted publickey for core from 10.0.0.1 port 38546 ssh2: RSA SHA256:yd4gyStb+mhc+KSvOhXa4vXVFZWeTXZvH887VPDApJg Jan 13 21:18:27.293721 sshd[1551]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 21:18:27.298556 systemd-logind[1420]: New session 2 of user core. Jan 13 21:18:27.304644 systemd[1]: Started session-2.scope - Session 2 of User core. 
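Note: the kubelet failure above ("open /var/lib/kubelet/config.yaml: no such file or directory", exit status 1) is the normal state of a node before kubeadm init or kubeadm join has run; kubeadm writes that file during bootstrap. A minimal sketch of its shape, consistent with the cgroup driver and static pod path that appear later in this log (values illustrative):

    /var/lib/kubelet/config.yaml
    apiVersion: kubelet.config.k8s.io/v1beta1
    kind: KubeletConfiguration
    cgroupDriver: systemd
    staticPodPath: /etc/kubernetes/manifests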
Jan 13 21:18:27.356589 sshd[1551]: pam_unix(sshd:session): session closed for user core Jan 13 21:18:27.369890 systemd[1]: sshd@1-10.0.0.74:22-10.0.0.1:38546.service: Deactivated successfully. Jan 13 21:18:27.372766 systemd[1]: session-2.scope: Deactivated successfully. Jan 13 21:18:27.373966 systemd-logind[1420]: Session 2 logged out. Waiting for processes to exit. Jan 13 21:18:27.387843 systemd[1]: Started sshd@2-10.0.0.74:22-10.0.0.1:38558.service - OpenSSH per-connection server daemon (10.0.0.1:38558). Jan 13 21:18:27.388736 systemd-logind[1420]: Removed session 2. Jan 13 21:18:27.421924 sshd[1558]: Accepted publickey for core from 10.0.0.1 port 38558 ssh2: RSA SHA256:yd4gyStb+mhc+KSvOhXa4vXVFZWeTXZvH887VPDApJg Jan 13 21:18:27.423123 sshd[1558]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 21:18:27.427218 systemd-logind[1420]: New session 3 of user core. Jan 13 21:18:27.437632 systemd[1]: Started session-3.scope - Session 3 of User core. Jan 13 21:18:27.486328 sshd[1558]: pam_unix(sshd:session): session closed for user core Jan 13 21:18:27.503968 systemd[1]: sshd@2-10.0.0.74:22-10.0.0.1:38558.service: Deactivated successfully. Jan 13 21:18:27.505397 systemd[1]: session-3.scope: Deactivated successfully. Jan 13 21:18:27.507728 systemd-logind[1420]: Session 3 logged out. Waiting for processes to exit. Jan 13 21:18:27.508909 systemd[1]: Started sshd@3-10.0.0.74:22-10.0.0.1:38574.service - OpenSSH per-connection server daemon (10.0.0.1:38574). Jan 13 21:18:27.509665 systemd-logind[1420]: Removed session 3. Jan 13 21:18:27.547199 sshd[1565]: Accepted publickey for core from 10.0.0.1 port 38574 ssh2: RSA SHA256:yd4gyStb+mhc+KSvOhXa4vXVFZWeTXZvH887VPDApJg Jan 13 21:18:27.548602 sshd[1565]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 21:18:27.553174 systemd-logind[1420]: New session 4 of user core. Jan 13 21:18:27.563652 systemd[1]: Started session-4.scope - Session 4 of User core. Jan 13 21:18:27.616914 sshd[1565]: pam_unix(sshd:session): session closed for user core Jan 13 21:18:27.625903 systemd[1]: sshd@3-10.0.0.74:22-10.0.0.1:38574.service: Deactivated successfully. Jan 13 21:18:27.628715 systemd[1]: session-4.scope: Deactivated successfully. Jan 13 21:18:27.629919 systemd-logind[1420]: Session 4 logged out. Waiting for processes to exit. Jan 13 21:18:27.631019 systemd[1]: Started sshd@4-10.0.0.74:22-10.0.0.1:38588.service - OpenSSH per-connection server daemon (10.0.0.1:38588). Jan 13 21:18:27.631749 systemd-logind[1420]: Removed session 4. Jan 13 21:18:27.668976 sshd[1572]: Accepted publickey for core from 10.0.0.1 port 38588 ssh2: RSA SHA256:yd4gyStb+mhc+KSvOhXa4vXVFZWeTXZvH887VPDApJg Jan 13 21:18:27.670195 sshd[1572]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 21:18:27.674334 systemd-logind[1420]: New session 5 of user core. Jan 13 21:18:27.686659 systemd[1]: Started session-5.scope - Session 5 of User core. Jan 13 21:18:27.749621 sudo[1575]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Jan 13 21:18:27.749917 sudo[1575]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 13 21:18:28.092746 systemd[1]: Starting docker.service - Docker Application Container Engine... 
Jan 13 21:18:28.092837 (dockerd)[1593]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Jan 13 21:18:28.382387 dockerd[1593]: time="2025-01-13T21:18:28.382001954Z" level=info msg="Starting up" Jan 13 21:18:28.551814 dockerd[1593]: time="2025-01-13T21:18:28.551770834Z" level=info msg="Loading containers: start." Jan 13 21:18:28.696508 kernel: Initializing XFRM netlink socket Jan 13 21:18:28.760215 systemd-networkd[1376]: docker0: Link UP Jan 13 21:18:28.778611 dockerd[1593]: time="2025-01-13T21:18:28.778571314Z" level=info msg="Loading containers: done." Jan 13 21:18:28.796448 dockerd[1593]: time="2025-01-13T21:18:28.796291634Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Jan 13 21:18:28.796448 dockerd[1593]: time="2025-01-13T21:18:28.796410874Z" level=info msg="Docker daemon" commit=061aa95809be396a6b5542618d8a34b02a21ff77 containerd-snapshotter=false storage-driver=overlay2 version=26.1.0 Jan 13 21:18:28.796676 dockerd[1593]: time="2025-01-13T21:18:28.796544154Z" level=info msg="Daemon has completed initialization" Jan 13 21:18:28.854384 dockerd[1593]: time="2025-01-13T21:18:28.853991874Z" level=info msg="API listen on /run/docker.sock" Jan 13 21:18:28.854685 systemd[1]: Started docker.service - Docker Application Container Engine. Jan 13 21:18:29.483626 containerd[1438]: time="2025-01-13T21:18:29.483335554Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.31.4\"" Jan 13 21:18:30.325375 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1824734448.mount: Deactivated successfully. 
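Note: the "Not using native diff for overlay2" warning above is informational and only affects image-build performance on this kernel. The storage driver itself can be pinned explicitly in dockerd's config if desired (a sketch, not taken from this host):

    /etc/docker/daemon.json
    {
      "storage-driver": "overlay2"
    }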
Jan 13 21:18:32.069146 containerd[1438]: time="2025-01-13T21:18:32.069078714Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 21:18:32.069621 containerd[1438]: time="2025-01-13T21:18:32.069587874Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.31.4: active requests=0, bytes read=25615587" Jan 13 21:18:32.070872 containerd[1438]: time="2025-01-13T21:18:32.070274314Z" level=info msg="ImageCreate event name:\"sha256:3e1123d6ebadbafa6eb77a9047f23f20befbbe2f177eb473a81b27a5de8c2ec5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 21:18:32.073313 containerd[1438]: time="2025-01-13T21:18:32.073249714Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:ace6a943b058439bd6daeb74f152e7c36e6fc0b5e481cdff9364cd6ca0473e5e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 21:18:32.074384 containerd[1438]: time="2025-01-13T21:18:32.074352434Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.31.4\" with image id \"sha256:3e1123d6ebadbafa6eb77a9047f23f20befbbe2f177eb473a81b27a5de8c2ec5\", repo tag \"registry.k8s.io/kube-apiserver:v1.31.4\", repo digest \"registry.k8s.io/kube-apiserver@sha256:ace6a943b058439bd6daeb74f152e7c36e6fc0b5e481cdff9364cd6ca0473e5e\", size \"25612385\" in 2.59097196s" Jan 13 21:18:32.074442 containerd[1438]: time="2025-01-13T21:18:32.074390594Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.31.4\" returns image reference \"sha256:3e1123d6ebadbafa6eb77a9047f23f20befbbe2f177eb473a81b27a5de8c2ec5\"" Jan 13 21:18:32.075389 containerd[1438]: time="2025-01-13T21:18:32.075192994Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.31.4\"" Jan 13 21:18:33.614660 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Jan 13 21:18:33.628678 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 13 21:18:33.716651 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 13 21:18:33.720439 (kubelet)[1807]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 13 21:18:33.842375 kubelet[1807]: E0113 21:18:33.842276 1807 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 13 21:18:33.845268 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 13 21:18:33.845417 systemd[1]: kubelet.service: Failed with result 'exit-code'. 
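Note: the "Referenced but unset environment variable" messages for KUBELET_EXTRA_ARGS and KUBELET_KUBEADM_ARGS above are harmless; the kubelet unit references variables that kubeadm only populates during bootstrap. KUBELET_KUBEADM_ARGS normally ends up in an environment file of roughly this shape (flags illustrative, chosen to match the deprecation warnings later in this log):

    /var/lib/kubelet/kubeadm-flags.env
    KUBELET_KUBEADM_ARGS="--container-runtime-endpoint=unix:///run/containerd/containerd.sock --pod-infra-container-image=registry.k8s.io/pause:3.10"

KUBELET_EXTRA_ARGS is left for the administrator (e.g. via /etc/default/kubelet) and may legitimately stay empty.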
Jan 13 21:18:34.075064 containerd[1438]: time="2025-01-13T21:18:34.074953434Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 21:18:34.076224 containerd[1438]: time="2025-01-13T21:18:34.076178234Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.31.4: active requests=0, bytes read=22470098" Jan 13 21:18:34.077511 containerd[1438]: time="2025-01-13T21:18:34.077257234Z" level=info msg="ImageCreate event name:\"sha256:d5369864a42bf2c01d3ad462832526b7d3e40620c0e75fecefbffc203562ad55\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 21:18:34.083515 containerd[1438]: time="2025-01-13T21:18:34.082861394Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:4bd1d4a449e7a1a4f375bd7c71abf48a95f8949b38f725ded255077329f21f7b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 21:18:34.084989 containerd[1438]: time="2025-01-13T21:18:34.084954754Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.31.4\" with image id \"sha256:d5369864a42bf2c01d3ad462832526b7d3e40620c0e75fecefbffc203562ad55\", repo tag \"registry.k8s.io/kube-controller-manager:v1.31.4\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:4bd1d4a449e7a1a4f375bd7c71abf48a95f8949b38f725ded255077329f21f7b\", size \"23872417\" in 2.00970428s" Jan 13 21:18:34.085063 containerd[1438]: time="2025-01-13T21:18:34.084990274Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.31.4\" returns image reference \"sha256:d5369864a42bf2c01d3ad462832526b7d3e40620c0e75fecefbffc203562ad55\"" Jan 13 21:18:34.085446 containerd[1438]: time="2025-01-13T21:18:34.085404394Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.31.4\"" Jan 13 21:18:35.596941 containerd[1438]: time="2025-01-13T21:18:35.596893394Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 21:18:35.597851 containerd[1438]: time="2025-01-13T21:18:35.597590234Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.31.4: active requests=0, bytes read=17024204" Jan 13 21:18:35.598750 containerd[1438]: time="2025-01-13T21:18:35.598685514Z" level=info msg="ImageCreate event name:\"sha256:d99fc9a32f6b42ab5537eec09d599efae0f61c109406dae1ba255cec288fcb95\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 21:18:35.601471 containerd[1438]: time="2025-01-13T21:18:35.601444114Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:1a3081cb7d21763d22eb2c0781cc462d89f501ed523ad558dea1226f128fbfdd\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 21:18:35.602722 containerd[1438]: time="2025-01-13T21:18:35.602681554Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.31.4\" with image id \"sha256:d99fc9a32f6b42ab5537eec09d599efae0f61c109406dae1ba255cec288fcb95\", repo tag \"registry.k8s.io/kube-scheduler:v1.31.4\", repo digest \"registry.k8s.io/kube-scheduler@sha256:1a3081cb7d21763d22eb2c0781cc462d89f501ed523ad558dea1226f128fbfdd\", size \"18426541\" in 1.51724784s" Jan 13 21:18:35.602722 containerd[1438]: time="2025-01-13T21:18:35.602715234Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.31.4\" returns image reference \"sha256:d99fc9a32f6b42ab5537eec09d599efae0f61c109406dae1ba255cec288fcb95\"" Jan 13 21:18:35.603881 
containerd[1438]: time="2025-01-13T21:18:35.603842754Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.4\"" Jan 13 21:18:36.677908 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1663224247.mount: Deactivated successfully. Jan 13 21:18:37.286752 containerd[1438]: time="2025-01-13T21:18:37.286692834Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 21:18:37.287556 containerd[1438]: time="2025-01-13T21:18:37.287517794Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.31.4: active requests=0, bytes read=26771428" Jan 13 21:18:37.288864 containerd[1438]: time="2025-01-13T21:18:37.288827634Z" level=info msg="ImageCreate event name:\"sha256:34e142197cb996099cc1e98902c112642b3fb3dc559140c0a95279aa8d254d3a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 21:18:37.290842 containerd[1438]: time="2025-01-13T21:18:37.290811274Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:1739b3febca392035bf6edfe31efdfa55226be7b57389b2001ae357f7dcb99cf\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 21:18:37.291502 containerd[1438]: time="2025-01-13T21:18:37.291432714Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.31.4\" with image id \"sha256:34e142197cb996099cc1e98902c112642b3fb3dc559140c0a95279aa8d254d3a\", repo tag \"registry.k8s.io/kube-proxy:v1.31.4\", repo digest \"registry.k8s.io/kube-proxy@sha256:1739b3febca392035bf6edfe31efdfa55226be7b57389b2001ae357f7dcb99cf\", size \"26770445\" in 1.68754372s" Jan 13 21:18:37.291502 containerd[1438]: time="2025-01-13T21:18:37.291462794Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.4\" returns image reference \"sha256:34e142197cb996099cc1e98902c112642b3fb3dc559140c0a95279aa8d254d3a\"" Jan 13 21:18:37.291992 containerd[1438]: time="2025-01-13T21:18:37.291964594Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\"" Jan 13 21:18:37.947980 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1606998830.mount: Deactivated successfully. 
Jan 13 21:18:38.812523 containerd[1438]: time="2025-01-13T21:18:38.812460194Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 21:18:38.815966 containerd[1438]: time="2025-01-13T21:18:38.815925234Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.1: active requests=0, bytes read=16485383" Jan 13 21:18:38.817030 containerd[1438]: time="2025-01-13T21:18:38.816992554Z" level=info msg="ImageCreate event name:\"sha256:2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 21:18:38.820344 containerd[1438]: time="2025-01-13T21:18:38.820294034Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 21:18:38.821649 containerd[1438]: time="2025-01-13T21:18:38.821555514Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.1\" with image id \"sha256:2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.1\", repo digest \"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\", size \"16482581\" in 1.52955776s" Jan 13 21:18:38.821649 containerd[1438]: time="2025-01-13T21:18:38.821593154Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\" returns image reference \"sha256:2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93\"" Jan 13 21:18:38.822125 containerd[1438]: time="2025-01-13T21:18:38.822023114Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" Jan 13 21:18:39.301932 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2124516074.mount: Deactivated successfully. 
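Note: the image pulls above and the ones that follow can be cross-checked directly against containerd, assuming the crictl client is installed on the node:

    crictl --runtime-endpoint unix:///run/containerd/containerd.sock images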
Jan 13 21:18:39.306290 containerd[1438]: time="2025-01-13T21:18:39.306230714Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 21:18:39.306755 containerd[1438]: time="2025-01-13T21:18:39.306719914Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=268705" Jan 13 21:18:39.307558 containerd[1438]: time="2025-01-13T21:18:39.307529594Z" level=info msg="ImageCreate event name:\"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 21:18:39.310435 containerd[1438]: time="2025-01-13T21:18:39.310381834Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 21:18:39.311524 containerd[1438]: time="2025-01-13T21:18:39.311166834Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"267933\" in 488.8946ms" Jan 13 21:18:39.311524 containerd[1438]: time="2025-01-13T21:18:39.311200474Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\"" Jan 13 21:18:39.311707 containerd[1438]: time="2025-01-13T21:18:39.311600754Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.15-0\"" Jan 13 21:18:40.046680 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount105130576.mount: Deactivated successfully. Jan 13 21:18:42.684088 containerd[1438]: time="2025-01-13T21:18:42.684028914Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.15-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 21:18:42.685157 containerd[1438]: time="2025-01-13T21:18:42.685084114Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.15-0: active requests=0, bytes read=66406427" Jan 13 21:18:42.685915 containerd[1438]: time="2025-01-13T21:18:42.685882714Z" level=info msg="ImageCreate event name:\"sha256:27e3830e1402783674d8b594038967deea9d51f0d91b34c93c8f39d2f68af7da\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 21:18:42.692405 containerd[1438]: time="2025-01-13T21:18:42.692319114Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 21:18:42.693744 containerd[1438]: time="2025-01-13T21:18:42.693696954Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.15-0\" with image id \"sha256:27e3830e1402783674d8b594038967deea9d51f0d91b34c93c8f39d2f68af7da\", repo tag \"registry.k8s.io/etcd:3.5.15-0\", repo digest \"registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a\", size \"66535646\" in 3.38206624s" Jan 13 21:18:42.693744 containerd[1438]: time="2025-01-13T21:18:42.693740554Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.15-0\" returns image reference \"sha256:27e3830e1402783674d8b594038967deea9d51f0d91b34c93c8f39d2f68af7da\"" Jan 13 21:18:44.095871 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. 
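Note: registry.k8s.io/pause:3.10 is pre-pulled above, while the CRI config dumped earlier in this log shows SandboxImage:registry.k8s.io/pause:3.8 — and it is pause:3.8 that containerd later uses for the pod sandboxes. The sandbox image is pinned in containerd's own config, roughly (version 2 config schema, value taken from the dump above):

    /etc/containerd/config.toml
    [plugins."io.containerd.grpc.v1.cri"]
      sandbox_image = "registry.k8s.io/pause:3.8"

The mismatch is benign, but it is why both pause images end up on the node.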
Jan 13 21:18:44.107016 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 13 21:18:44.194668 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 13 21:18:44.196698 (kubelet)[1964]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 13 21:18:44.230466 kubelet[1964]: E0113 21:18:44.230368 1964 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 13 21:18:44.232943 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 13 21:18:44.233083 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 13 21:18:46.523448 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jan 13 21:18:46.535855 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 13 21:18:46.560945 systemd[1]: Reloading requested from client PID 1979 ('systemctl') (unit session-5.scope)... Jan 13 21:18:46.560963 systemd[1]: Reloading... Jan 13 21:18:46.619605 zram_generator::config[2018]: No configuration found. Jan 13 21:18:46.728226 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jan 13 21:18:46.785599 systemd[1]: Reloading finished in 224 ms. Jan 13 21:18:46.829650 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Jan 13 21:18:46.829725 systemd[1]: kubelet.service: Failed with result 'signal'. Jan 13 21:18:46.830580 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jan 13 21:18:46.832792 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 13 21:18:46.921560 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 13 21:18:46.927174 (kubelet)[2064]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jan 13 21:18:46.968644 kubelet[2064]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 13 21:18:46.968644 kubelet[2064]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Jan 13 21:18:46.968644 kubelet[2064]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
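Note: the systemd hint above about /var/run/docker.sock is resolved by updating the socket unit to the non-legacy path, i.e. in docker.socket:

    [Socket]
    ListenStream=/run/docker.sock

The kubelet deprecation warnings above point the other way: --container-runtime-endpoint and --volume-plugin-dir should migrate into the kubelet config file, e.g. containerRuntimeEndpoint: unix:///run/containerd/containerd.sock in /var/lib/kubelet/config.yaml.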
Jan 13 21:18:46.968980 kubelet[2064]: I0113 21:18:46.968815 2064 server.go:206] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jan 13 21:18:48.686739 kubelet[2064]: I0113 21:18:48.686676 2064 server.go:486] "Kubelet version" kubeletVersion="v1.31.0" Jan 13 21:18:48.686739 kubelet[2064]: I0113 21:18:48.686714 2064 server.go:488] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jan 13 21:18:48.687114 kubelet[2064]: I0113 21:18:48.686987 2064 server.go:929] "Client rotation is on, will bootstrap in background" Jan 13 21:18:48.723371 kubelet[2064]: E0113 21:18:48.723336 2064 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://10.0.0.74:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.74:6443: connect: connection refused" logger="UnhandledError" Jan 13 21:18:48.724192 kubelet[2064]: I0113 21:18:48.724101 2064 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jan 13 21:18:48.735111 kubelet[2064]: E0113 21:18:48.735059 2064 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Jan 13 21:18:48.735111 kubelet[2064]: I0113 21:18:48.735096 2064 server.go:1403] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Jan 13 21:18:48.738575 kubelet[2064]: I0113 21:18:48.738473 2064 server.go:744] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Jan 13 21:18:48.738841 kubelet[2064]: I0113 21:18:48.738828 2064 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority" Jan 13 21:18:48.738959 kubelet[2064]: I0113 21:18:48.738939 2064 container_manager_linux.go:264] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jan 13 21:18:48.739236 kubelet[2064]: I0113 21:18:48.738959 2064 container_manager_linux.go:269] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Jan 13 21:18:48.739335 kubelet[2064]: I0113 21:18:48.739259 2064 topology_manager.go:138] "Creating topology manager with none policy" Jan 13 21:18:48.739335 kubelet[2064]: I0113 21:18:48.739270 2064 container_manager_linux.go:300] "Creating device plugin manager" Jan 13 21:18:48.739453 kubelet[2064]: I0113 21:18:48.739438 2064 state_mem.go:36] "Initialized new in-memory state store" Jan 13 21:18:48.740947 kubelet[2064]: I0113 21:18:48.740879 2064 kubelet.go:408] "Attempting to sync node with API server" Jan 13 21:18:48.740947 kubelet[2064]: I0113 21:18:48.740910 2064 kubelet.go:303] "Adding static pod path" path="/etc/kubernetes/manifests" Jan 13 21:18:48.741922 kubelet[2064]: I0113 21:18:48.741549 2064 kubelet.go:314] "Adding apiserver pod source" Jan 13 21:18:48.741922 kubelet[2064]: I0113 21:18:48.741590 2064 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jan 13 21:18:48.743679 kubelet[2064]: I0113 21:18:48.743660 2064 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Jan 13 21:18:48.745589 kubelet[2064]: I0113 21:18:48.745567 2064 kubelet.go:837] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Jan 13 21:18:48.746212 kubelet[2064]: W0113 21:18:48.746195 2064 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. 
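Note: the HardEvictionThresholds embedded in the node config dump above are the kubelet defaults; in KubeletConfiguration terms they correspond to:

    evictionHard:
      memory.available: "100Mi"
      nodefs.available: "10%"
      nodefs.inodesFree: "5%"
      imagefs.available: "15%"
      imagefs.inodesFree: "5%"

(the Percentage values 0.1/0.05/0.15 in the dump are the same thresholds expressed as fractions).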
Jan 13 21:18:48.746941 kubelet[2064]: I0113 21:18:48.746920 2064 server.go:1269] "Started kubelet" Jan 13 21:18:48.747266 kubelet[2064]: W0113 21:18:48.747217 2064 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.74:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.0.0.74:6443: connect: connection refused Jan 13 21:18:48.747318 kubelet[2064]: E0113 21:18:48.747275 2064 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.0.0.74:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.74:6443: connect: connection refused" logger="UnhandledError" Jan 13 21:18:48.747942 kubelet[2064]: I0113 21:18:48.747912 2064 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Jan 13 21:18:48.748830 kubelet[2064]: W0113 21:18:48.748723 2064 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.74:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.74:6443: connect: connection refused Jan 13 21:18:48.748830 kubelet[2064]: E0113 21:18:48.748781 2064 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.0.0.74:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.74:6443: connect: connection refused" logger="UnhandledError" Jan 13 21:18:48.749225 kubelet[2064]: I0113 21:18:48.749207 2064 server.go:460] "Adding debug handlers to kubelet server" Jan 13 21:18:48.750285 kubelet[2064]: I0113 21:18:48.750237 2064 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jan 13 21:18:48.750689 kubelet[2064]: I0113 21:18:48.750588 2064 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jan 13 21:18:48.750761 kubelet[2064]: I0113 21:18:48.750692 2064 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jan 13 21:18:48.750892 kubelet[2064]: I0113 21:18:48.750875 2064 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Jan 13 21:18:48.752960 kubelet[2064]: I0113 21:18:48.752924 2064 volume_manager.go:289] "Starting Kubelet Volume Manager" Jan 13 21:18:48.753056 kubelet[2064]: I0113 21:18:48.753038 2064 desired_state_of_world_populator.go:146] "Desired state populator starts to run" Jan 13 21:18:48.753107 kubelet[2064]: I0113 21:18:48.753093 2064 reconciler.go:26] "Reconciler: start to sync state" Jan 13 21:18:48.753392 kubelet[2064]: E0113 21:18:48.753373 2064 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 13 21:18:48.753782 kubelet[2064]: W0113 21:18:48.753361 2064 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.74:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.74:6443: connect: connection refused Jan 13 21:18:48.753782 kubelet[2064]: E0113 21:18:48.753498 2064 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get 
\"https://10.0.0.74:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.74:6443: connect: connection refused" logger="UnhandledError" Jan 13 21:18:48.753782 kubelet[2064]: E0113 21:18:48.753425 2064 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.74:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.74:6443: connect: connection refused" interval="200ms" Jan 13 21:18:48.754840 kubelet[2064]: E0113 21:18:48.754818 2064 kubelet.go:1478] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jan 13 21:18:48.754923 kubelet[2064]: I0113 21:18:48.754873 2064 factory.go:221] Registration of the containerd container factory successfully Jan 13 21:18:48.754973 kubelet[2064]: I0113 21:18:48.754964 2064 factory.go:221] Registration of the systemd container factory successfully Jan 13 21:18:48.755116 kubelet[2064]: I0113 21:18:48.755094 2064 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jan 13 21:18:48.756196 kubelet[2064]: E0113 21:18:48.755245 2064 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.74:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.74:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.181a5d3e5dbef9f2 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-01-13 21:18:48.746899954 +0000 UTC m=+1.816425521,LastTimestamp:2025-01-13 21:18:48.746899954 +0000 UTC m=+1.816425521,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Jan 13 21:18:48.766872 kubelet[2064]: I0113 21:18:48.766852 2064 cpu_manager.go:214] "Starting CPU manager" policy="none" Jan 13 21:18:48.766965 kubelet[2064]: I0113 21:18:48.766953 2064 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Jan 13 21:18:48.767020 kubelet[2064]: I0113 21:18:48.767012 2064 state_mem.go:36] "Initialized new in-memory state store" Jan 13 21:18:48.768018 kubelet[2064]: I0113 21:18:48.767974 2064 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Jan 13 21:18:48.769323 kubelet[2064]: I0113 21:18:48.769280 2064 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Jan 13 21:18:48.769323 kubelet[2064]: I0113 21:18:48.769311 2064 status_manager.go:217] "Starting to sync pod status with apiserver" Jan 13 21:18:48.769323 kubelet[2064]: I0113 21:18:48.769327 2064 kubelet.go:2321] "Starting kubelet main sync loop" Jan 13 21:18:48.769430 kubelet[2064]: E0113 21:18:48.769364 2064 kubelet.go:2345] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jan 13 21:18:48.770433 kubelet[2064]: W0113 21:18:48.769847 2064 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.74:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.74:6443: connect: connection refused Jan 13 21:18:48.770433 kubelet[2064]: E0113 21:18:48.769906 2064 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.0.0.74:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.74:6443: connect: connection refused" logger="UnhandledError" Jan 13 21:18:48.853614 kubelet[2064]: E0113 21:18:48.853544 2064 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 13 21:18:48.869916 kubelet[2064]: E0113 21:18:48.869864 2064 kubelet.go:2345] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Jan 13 21:18:48.873155 kubelet[2064]: I0113 21:18:48.873111 2064 policy_none.go:49] "None policy: Start" Jan 13 21:18:48.874042 kubelet[2064]: I0113 21:18:48.874020 2064 memory_manager.go:170] "Starting memorymanager" policy="None" Jan 13 21:18:48.874113 kubelet[2064]: I0113 21:18:48.874051 2064 state_mem.go:35] "Initializing new in-memory state store" Jan 13 21:18:48.880518 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Jan 13 21:18:48.898532 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Jan 13 21:18:48.901521 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. 
Jan 13 21:18:48.908699 kubelet[2064]: I0113 21:18:48.908445 2064 manager.go:510] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jan 13 21:18:48.908699 kubelet[2064]: I0113 21:18:48.908673 2064 eviction_manager.go:189] "Eviction manager: starting control loop" Jan 13 21:18:48.908699 kubelet[2064]: I0113 21:18:48.908685 2064 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jan 13 21:18:48.909333 kubelet[2064]: I0113 21:18:48.908940 2064 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jan 13 21:18:48.910459 kubelet[2064]: E0113 21:18:48.910432 2064 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Jan 13 21:18:48.955176 kubelet[2064]: E0113 21:18:48.955055 2064 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.74:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.74:6443: connect: connection refused" interval="400ms" Jan 13 21:18:49.010214 kubelet[2064]: I0113 21:18:49.010185 2064 kubelet_node_status.go:72] "Attempting to register node" node="localhost" Jan 13 21:18:49.010650 kubelet[2064]: E0113 21:18:49.010620 2064 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.0.0.74:6443/api/v1/nodes\": dial tcp 10.0.0.74:6443: connect: connection refused" node="localhost" Jan 13 21:18:49.079048 systemd[1]: Created slice kubepods-burstable-pod50a9ae38ddb3bec3278d8dc73a6a7009.slice - libcontainer container kubepods-burstable-pod50a9ae38ddb3bec3278d8dc73a6a7009.slice. Jan 13 21:18:49.102266 systemd[1]: Created slice kubepods-burstable-poda52b86ce975f496e6002ba953fa9b888.slice - libcontainer container kubepods-burstable-poda52b86ce975f496e6002ba953fa9b888.slice. Jan 13 21:18:49.116992 systemd[1]: Created slice kubepods-burstable-pod1e7033b7b56c4708631f7af5fcb94253.slice - libcontainer container kubepods-burstable-pod1e7033b7b56c4708631f7af5fcb94253.slice. 
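Note: the kubepods-burstable-pod<UID>.slice units above are the cgroups for the three control-plane static pods; the kubelet creates them from manifests under /etc/kubernetes/manifests, the static pod path registered earlier. An abridged sketch of what such a manifest looks like, using a volume name that appears in the entries that follow (mount paths illustrative):

    /etc/kubernetes/manifests/kube-apiserver.yaml
    apiVersion: v1
    kind: Pod
    metadata:
      name: kube-apiserver
      namespace: kube-system
    spec:
      hostNetwork: true
      containers:
      - name: kube-apiserver
        image: registry.k8s.io/kube-apiserver:v1.31.4
        volumeMounts:
        - name: k8s-certs
          mountPath: /etc/kubernetes/pki
          readOnly: true
      volumes:
      - name: k8s-certs
        hostPath:
          path: /etc/kubernetes/pki
          type: DirectoryOrCreate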
Jan 13 21:18:49.155138 kubelet[2064]: I0113 21:18:49.155070 2064 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/1e7033b7b56c4708631f7af5fcb94253-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"1e7033b7b56c4708631f7af5fcb94253\") " pod="kube-system/kube-apiserver-localhost" Jan 13 21:18:49.155138 kubelet[2064]: I0113 21:18:49.155114 2064 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/1e7033b7b56c4708631f7af5fcb94253-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"1e7033b7b56c4708631f7af5fcb94253\") " pod="kube-system/kube-apiserver-localhost" Jan 13 21:18:49.155138 kubelet[2064]: I0113 21:18:49.155141 2064 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/50a9ae38ddb3bec3278d8dc73a6a7009-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"50a9ae38ddb3bec3278d8dc73a6a7009\") " pod="kube-system/kube-controller-manager-localhost" Jan 13 21:18:49.155265 kubelet[2064]: I0113 21:18:49.155156 2064 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/50a9ae38ddb3bec3278d8dc73a6a7009-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"50a9ae38ddb3bec3278d8dc73a6a7009\") " pod="kube-system/kube-controller-manager-localhost" Jan 13 21:18:49.155265 kubelet[2064]: I0113 21:18:49.155172 2064 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/50a9ae38ddb3bec3278d8dc73a6a7009-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"50a9ae38ddb3bec3278d8dc73a6a7009\") " pod="kube-system/kube-controller-manager-localhost" Jan 13 21:18:49.155265 kubelet[2064]: I0113 21:18:49.155190 2064 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/a52b86ce975f496e6002ba953fa9b888-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"a52b86ce975f496e6002ba953fa9b888\") " pod="kube-system/kube-scheduler-localhost" Jan 13 21:18:49.155265 kubelet[2064]: I0113 21:18:49.155216 2064 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/1e7033b7b56c4708631f7af5fcb94253-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"1e7033b7b56c4708631f7af5fcb94253\") " pod="kube-system/kube-apiserver-localhost" Jan 13 21:18:49.155265 kubelet[2064]: I0113 21:18:49.155232 2064 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/50a9ae38ddb3bec3278d8dc73a6a7009-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"50a9ae38ddb3bec3278d8dc73a6a7009\") " pod="kube-system/kube-controller-manager-localhost" Jan 13 21:18:49.155384 kubelet[2064]: I0113 21:18:49.155246 2064 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/50a9ae38ddb3bec3278d8dc73a6a7009-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"50a9ae38ddb3bec3278d8dc73a6a7009\") " 
pod="kube-system/kube-controller-manager-localhost" Jan 13 21:18:49.216280 kubelet[2064]: I0113 21:18:49.216084 2064 kubelet_node_status.go:72] "Attempting to register node" node="localhost" Jan 13 21:18:49.217280 kubelet[2064]: E0113 21:18:49.217186 2064 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.0.0.74:6443/api/v1/nodes\": dial tcp 10.0.0.74:6443: connect: connection refused" node="localhost" Jan 13 21:18:49.356575 kubelet[2064]: E0113 21:18:49.356519 2064 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.74:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.74:6443: connect: connection refused" interval="800ms" Jan 13 21:18:49.402412 kubelet[2064]: E0113 21:18:49.402195 2064 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 21:18:49.403066 containerd[1438]: time="2025-01-13T21:18:49.402811834Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:50a9ae38ddb3bec3278d8dc73a6a7009,Namespace:kube-system,Attempt:0,}" Jan 13 21:18:49.416015 kubelet[2064]: E0113 21:18:49.415791 2064 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 21:18:49.417259 containerd[1438]: time="2025-01-13T21:18:49.417204434Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:a52b86ce975f496e6002ba953fa9b888,Namespace:kube-system,Attempt:0,}" Jan 13 21:18:49.419649 kubelet[2064]: E0113 21:18:49.419540 2064 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 21:18:49.420122 containerd[1438]: time="2025-01-13T21:18:49.419886834Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:1e7033b7b56c4708631f7af5fcb94253,Namespace:kube-system,Attempt:0,}" Jan 13 21:18:49.561978 kubelet[2064]: W0113 21:18:49.561834 2064 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.74:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.74:6443: connect: connection refused Jan 13 21:18:49.561978 kubelet[2064]: E0113 21:18:49.561902 2064 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.0.0.74:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.74:6443: connect: connection refused" logger="UnhandledError" Jan 13 21:18:49.618619 kubelet[2064]: I0113 21:18:49.618365 2064 kubelet_node_status.go:72] "Attempting to register node" node="localhost" Jan 13 21:18:49.618801 kubelet[2064]: E0113 21:18:49.618757 2064 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.0.0.74:6443/api/v1/nodes\": dial tcp 10.0.0.74:6443: connect: connection refused" node="localhost" Jan 13 21:18:49.686173 kubelet[2064]: W0113 21:18:49.686102 2064 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.74:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 
10.0.0.74:6443: connect: connection refused Jan 13 21:18:49.686173 kubelet[2064]: E0113 21:18:49.686170 2064 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.0.0.74:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.74:6443: connect: connection refused" logger="UnhandledError" Jan 13 21:18:50.050664 kubelet[2064]: W0113 21:18:50.050513 2064 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.74:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.74:6443: connect: connection refused Jan 13 21:18:50.050664 kubelet[2064]: E0113 21:18:50.050585 2064 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.0.0.74:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.74:6443: connect: connection refused" logger="UnhandledError" Jan 13 21:18:50.136930 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2167178647.mount: Deactivated successfully. Jan 13 21:18:50.143084 containerd[1438]: time="2025-01-13T21:18:50.142966794Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 13 21:18:50.145831 containerd[1438]: time="2025-01-13T21:18:50.145514314Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 13 21:18:50.146878 containerd[1438]: time="2025-01-13T21:18:50.146726074Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=269175" Jan 13 21:18:50.148095 containerd[1438]: time="2025-01-13T21:18:50.147409794Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 13 21:18:50.148095 containerd[1438]: time="2025-01-13T21:18:50.148051114Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Jan 13 21:18:50.149729 containerd[1438]: time="2025-01-13T21:18:50.149319434Z" level=info msg="ImageCreate event name:\"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 13 21:18:50.149729 containerd[1438]: time="2025-01-13T21:18:50.149425594Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Jan 13 21:18:50.152496 containerd[1438]: time="2025-01-13T21:18:50.152442594Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 13 21:18:50.155209 containerd[1438]: time="2025-01-13T21:18:50.155175394Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest 
\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 752.2834ms" Jan 13 21:18:50.157322 kubelet[2064]: E0113 21:18:50.157280 2064 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.74:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.74:6443: connect: connection refused" interval="1.6s" Jan 13 21:18:50.158844 containerd[1438]: time="2025-01-13T21:18:50.158695634Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 741.42708ms" Jan 13 21:18:50.159462 containerd[1438]: time="2025-01-13T21:18:50.159421514Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 739.46972ms" Jan 13 21:18:50.231470 kubelet[2064]: W0113 21:18:50.231344 2064 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.74:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.74:6443: connect: connection refused Jan 13 21:18:50.231470 kubelet[2064]: E0113 21:18:50.231390 2064 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.0.0.74:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.74:6443: connect: connection refused" logger="UnhandledError" Jan 13 21:18:50.277236 containerd[1438]: time="2025-01-13T21:18:50.276762474Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 13 21:18:50.277236 containerd[1438]: time="2025-01-13T21:18:50.276813354Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 13 21:18:50.277236 containerd[1438]: time="2025-01-13T21:18:50.276824234Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 13 21:18:50.277236 containerd[1438]: time="2025-01-13T21:18:50.276905914Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 13 21:18:50.277732 containerd[1438]: time="2025-01-13T21:18:50.277471754Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 13 21:18:50.277732 containerd[1438]: time="2025-01-13T21:18:50.277574714Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 13 21:18:50.277732 containerd[1438]: time="2025-01-13T21:18:50.277590394Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 13 21:18:50.278376 containerd[1438]: time="2025-01-13T21:18:50.278170674Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 13 21:18:50.278376 containerd[1438]: time="2025-01-13T21:18:50.278211234Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 13 21:18:50.278376 containerd[1438]: time="2025-01-13T21:18:50.278222314Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 13 21:18:50.278376 containerd[1438]: time="2025-01-13T21:18:50.278286394Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 13 21:18:50.278376 containerd[1438]: time="2025-01-13T21:18:50.278161234Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 13 21:18:50.297715 systemd[1]: Started cri-containerd-3669785b9d6cad79a0113ea6d453b4eea0dbdac6a1360c425aaabc44efecb6b5.scope - libcontainer container 3669785b9d6cad79a0113ea6d453b4eea0dbdac6a1360c425aaabc44efecb6b5. Jan 13 21:18:50.299067 systemd[1]: Started cri-containerd-a554ec717beee34251a43239038d131e4ec79dc6b4c8a9def23176fe1d1d9d33.scope - libcontainer container a554ec717beee34251a43239038d131e4ec79dc6b4c8a9def23176fe1d1d9d33. Jan 13 21:18:50.302895 systemd[1]: Started cri-containerd-546668f05d6da78ffee18b5b3ea7a08c05cbbfe7094edf31d310ec2b00f7603a.scope - libcontainer container 546668f05d6da78ffee18b5b3ea7a08c05cbbfe7094edf31d310ec2b00f7603a. Jan 13 21:18:50.336942 containerd[1438]: time="2025-01-13T21:18:50.336871914Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:1e7033b7b56c4708631f7af5fcb94253,Namespace:kube-system,Attempt:0,} returns sandbox id \"3669785b9d6cad79a0113ea6d453b4eea0dbdac6a1360c425aaabc44efecb6b5\"" Jan 13 21:18:50.338477 containerd[1438]: time="2025-01-13T21:18:50.338449274Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:a52b86ce975f496e6002ba953fa9b888,Namespace:kube-system,Attempt:0,} returns sandbox id \"a554ec717beee34251a43239038d131e4ec79dc6b4c8a9def23176fe1d1d9d33\"" Jan 13 21:18:50.338614 kubelet[2064]: E0113 21:18:50.338517 2064 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 21:18:50.339382 kubelet[2064]: E0113 21:18:50.339353 2064 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 21:18:50.341389 containerd[1438]: time="2025-01-13T21:18:50.341288274Z" level=info msg="CreateContainer within sandbox \"3669785b9d6cad79a0113ea6d453b4eea0dbdac6a1360c425aaabc44efecb6b5\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Jan 13 21:18:50.342037 containerd[1438]: time="2025-01-13T21:18:50.341992714Z" level=info msg="CreateContainer within sandbox \"a554ec717beee34251a43239038d131e4ec79dc6b4c8a9def23176fe1d1d9d33\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Jan 13 21:18:50.345119 containerd[1438]: time="2025-01-13T21:18:50.345040154Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:50a9ae38ddb3bec3278d8dc73a6a7009,Namespace:kube-system,Attempt:0,} returns sandbox id \"546668f05d6da78ffee18b5b3ea7a08c05cbbfe7094edf31d310ec2b00f7603a\"" Jan 13 21:18:50.345715 kubelet[2064]: E0113 21:18:50.345692 2064 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 21:18:50.347210 containerd[1438]: time="2025-01-13T21:18:50.347179594Z" level=info msg="CreateContainer within sandbox \"546668f05d6da78ffee18b5b3ea7a08c05cbbfe7094edf31d310ec2b00f7603a\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Jan 13 21:18:50.360366 containerd[1438]: time="2025-01-13T21:18:50.360302994Z" level=info msg="CreateContainer within sandbox \"3669785b9d6cad79a0113ea6d453b4eea0dbdac6a1360c425aaabc44efecb6b5\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"c6e9d447863bbd4300c8fbd3a5ba61212558e472983fb94c602c124377e9c57a\"" Jan 13 21:18:50.361267 containerd[1438]: time="2025-01-13T21:18:50.361214274Z" level=info msg="StartContainer for \"c6e9d447863bbd4300c8fbd3a5ba61212558e472983fb94c602c124377e9c57a\"" Jan 13 21:18:50.367953 containerd[1438]: time="2025-01-13T21:18:50.367916354Z" level=info msg="CreateContainer within sandbox \"a554ec717beee34251a43239038d131e4ec79dc6b4c8a9def23176fe1d1d9d33\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"c32ac3fd7ca9cce9b2d518133bcd2958fddbf22f4eb63848873b912f8947ecd4\"" Jan 13 21:18:50.368389 containerd[1438]: time="2025-01-13T21:18:50.368362034Z" level=info msg="StartContainer for \"c32ac3fd7ca9cce9b2d518133bcd2958fddbf22f4eb63848873b912f8947ecd4\"" Jan 13 21:18:50.371148 containerd[1438]: time="2025-01-13T21:18:50.369668074Z" level=info msg="CreateContainer within sandbox \"546668f05d6da78ffee18b5b3ea7a08c05cbbfe7094edf31d310ec2b00f7603a\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"a68793c895903a043a53ad91d83434c808d9db0abbf90b56e426e2669038d746\"" Jan 13 21:18:50.371601 containerd[1438]: time="2025-01-13T21:18:50.371568634Z" level=info msg="StartContainer for \"a68793c895903a043a53ad91d83434c808d9db0abbf90b56e426e2669038d746\"" Jan 13 21:18:50.385657 systemd[1]: Started cri-containerd-c6e9d447863bbd4300c8fbd3a5ba61212558e472983fb94c602c124377e9c57a.scope - libcontainer container c6e9d447863bbd4300c8fbd3a5ba61212558e472983fb94c602c124377e9c57a. Jan 13 21:18:50.391628 systemd[1]: Started cri-containerd-c32ac3fd7ca9cce9b2d518133bcd2958fddbf22f4eb63848873b912f8947ecd4.scope - libcontainer container c32ac3fd7ca9cce9b2d518133bcd2958fddbf22f4eb63848873b912f8947ecd4. Jan 13 21:18:50.396300 systemd[1]: Started cri-containerd-a68793c895903a043a53ad91d83434c808d9db0abbf90b56e426e2669038d746.scope - libcontainer container a68793c895903a043a53ad91d83434c808d9db0abbf90b56e426e2669038d746. 
Jan 13 21:18:50.419653 kubelet[2064]: I0113 21:18:50.419617 2064 kubelet_node_status.go:72] "Attempting to register node" node="localhost"
Jan 13 21:18:50.419985 kubelet[2064]: E0113 21:18:50.419954 2064 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.0.0.74:6443/api/v1/nodes\": dial tcp 10.0.0.74:6443: connect: connection refused" node="localhost"
Jan 13 21:18:50.428369 containerd[1438]: time="2025-01-13T21:18:50.428318914Z" level=info msg="StartContainer for \"c32ac3fd7ca9cce9b2d518133bcd2958fddbf22f4eb63848873b912f8947ecd4\" returns successfully"
Jan 13 21:18:50.438579 containerd[1438]: time="2025-01-13T21:18:50.438420314Z" level=info msg="StartContainer for \"c6e9d447863bbd4300c8fbd3a5ba61212558e472983fb94c602c124377e9c57a\" returns successfully"
Jan 13 21:18:50.463793 containerd[1438]: time="2025-01-13T21:18:50.463679674Z" level=info msg="StartContainer for \"a68793c895903a043a53ad91d83434c808d9db0abbf90b56e426e2669038d746\" returns successfully"
Jan 13 21:18:50.779256 kubelet[2064]: E0113 21:18:50.778599 2064 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 13 21:18:50.779256 kubelet[2064]: E0113 21:18:50.778681 2064 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 13 21:18:50.781330 kubelet[2064]: E0113 21:18:50.781302 2064 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 13 21:18:51.781996 kubelet[2064]: E0113 21:18:51.781908 2064 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 13 21:18:52.023742 kubelet[2064]: I0113 21:18:52.023472 2064 kubelet_node_status.go:72] "Attempting to register node" node="localhost"
Jan 13 21:18:52.108596 kubelet[2064]: E0113 21:18:52.107743 2064 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"localhost\" not found" node="localhost"
Jan 13 21:18:52.295324 kubelet[2064]: I0113 21:18:52.295287 2064 kubelet_node_status.go:75] "Successfully registered node" node="localhost"
Jan 13 21:18:52.295324 kubelet[2064]: E0113 21:18:52.295327 2064 kubelet_node_status.go:535] "Error updating node status, will retry" err="error getting node \"localhost\": node \"localhost\" not found"
Jan 13 21:18:52.304004 kubelet[2064]: E0113 21:18:52.303961 2064 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found"
Jan 13 21:18:52.404428 kubelet[2064]: E0113 21:18:52.404316 2064 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found"
Jan 13 21:18:52.505024 kubelet[2064]: E0113 21:18:52.504968 2064 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found"
Jan 13 21:18:52.605504 kubelet[2064]: E0113 21:18:52.605426 2064 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found"
Jan 13 21:18:52.706286 kubelet[2064]: E0113 21:18:52.706173 2064 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found"
Jan 13 21:18:52.806725 kubelet[2064]: E0113 21:18:52.806684 2064 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found"
Jan 13 21:18:52.907203 kubelet[2064]: E0113 21:18:52.907161 2064 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found"
Jan 13 21:18:53.007948 kubelet[2064]: E0113 21:18:53.007853 2064 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found"
Jan 13 21:18:53.108584 kubelet[2064]: E0113 21:18:53.108541 2064 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found"
Jan 13 21:18:53.209532 kubelet[2064]: E0113 21:18:53.209454 2064 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found"
Jan 13 21:18:53.746127 kubelet[2064]: I0113 21:18:53.746091 2064 apiserver.go:52] "Watching apiserver"
Jan 13 21:18:53.753491 kubelet[2064]: I0113 21:18:53.753446 2064 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world"
Jan 13 21:18:54.387662 systemd[1]: Reloading requested from client PID 2343 ('systemctl') (unit session-5.scope)...
Jan 13 21:18:54.387678 systemd[1]: Reloading...
Jan 13 21:18:54.451529 zram_generator::config[2385]: No configuration found.
Jan 13 21:18:54.534120 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Jan 13 21:18:54.603727 systemd[1]: Reloading finished in 215 ms.
Jan 13 21:18:54.634768 kubelet[2064]: I0113 21:18:54.634626 2064 dynamic_cafile_content.go:174] "Shutting down controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Jan 13 21:18:54.634791 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
Jan 13 21:18:54.652306 systemd[1]: kubelet.service: Deactivated successfully.
Jan 13 21:18:54.653116 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Jan 13 21:18:54.653176 systemd[1]: kubelet.service: Consumed 2.160s CPU time, 122.4M memory peak, 0B memory swap peak.
Jan 13 21:18:54.668740 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jan 13 21:18:54.761289 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Jan 13 21:18:54.766264 (kubelet)[2424]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS
Jan 13 21:18:54.818871 kubelet[2424]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Jan 13 21:18:54.818871 kubelet[2424]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI.
Jan 13 21:18:54.818871 kubelet[2424]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
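The recurring dns.go:153 warnings in this stretch are benign: the host resolv.conf lists more nameservers than the kubelet will propagate into pod DNS config (three, the classic resolv.conf limit), so it keeps the first three and logs the applied line "1.1.1.1 1.0.0.1 8.8.8.8". A simplified Go stand-in for that trimming step — file parsing only, not the kubelet's actual dns.go:

    package main

    import (
        "bufio"
        "fmt"
        "os"
        "strings"
    )

    // maxNameservers mirrors the three-entry resolv.conf limit the kubelet
    // applies when building per-pod DNS config.
    const maxNameservers = 3

    func main() {
        f, err := os.Open("/etc/resolv.conf")
        if err != nil {
            fmt.Fprintln(os.Stderr, err)
            os.Exit(1)
        }
        defer f.Close()

        var servers []string
        sc := bufio.NewScanner(f)
        for sc.Scan() {
            fields := strings.Fields(sc.Text())
            if len(fields) >= 2 && fields[0] == "nameserver" {
                servers = append(servers, fields[1])
            }
        }
        if len(servers) > maxNameservers {
            // The condition behind "Nameserver limits exceeded": surplus
            // entries are omitted, and pods get only the first three.
            fmt.Fprintf(os.Stderr, "omitting %d nameserver(s)\n", len(servers)-maxNameservers)
            servers = servers[:maxNameservers]
        }
        fmt.Println("applied nameserver line:", strings.Join(servers, " "))
    }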
Jan 13 21:18:54.819215 kubelet[2424]: I0113 21:18:54.818914 2424 server.go:206] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Jan 13 21:18:54.824383 kubelet[2424]: I0113 21:18:54.824336 2424 server.go:486] "Kubelet version" kubeletVersion="v1.31.0"
Jan 13 21:18:54.824383 kubelet[2424]: I0113 21:18:54.824372 2424 server.go:488] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Jan 13 21:18:54.824623 kubelet[2424]: I0113 21:18:54.824592 2424 server.go:929] "Client rotation is on, will bootstrap in background"
Jan 13 21:18:54.825923 kubelet[2424]: I0113 21:18:54.825895 2424 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem".
Jan 13 21:18:54.829666 kubelet[2424]: I0113 21:18:54.829108 2424 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Jan 13 21:18:54.834338 kubelet[2424]: E0113 21:18:54.834304 2424 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService"
Jan 13 21:18:54.834338 kubelet[2424]: I0113 21:18:54.834334 2424 server.go:1403] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config."
Jan 13 21:18:54.836763 kubelet[2424]: I0113 21:18:54.836739 2424 server.go:744] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /"
Jan 13 21:18:54.836893 kubelet[2424]: I0113 21:18:54.836879 2424 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority"
Jan 13 21:18:54.837015 kubelet[2424]: I0113 21:18:54.836987 2424 container_manager_linux.go:264] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Jan 13 21:18:54.837170 kubelet[2424]: I0113 21:18:54.837018 2424 container_manager_linux.go:269] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2}
Jan 13 21:18:54.837237 kubelet[2424]: I0113 21:18:54.837182 2424 topology_manager.go:138] "Creating topology manager with none policy"
Jan 13 21:18:54.837237 kubelet[2424]: I0113 21:18:54.837191 2424 container_manager_linux.go:300] "Creating device plugin manager"
Jan 13 21:18:54.837237 kubelet[2424]: I0113 21:18:54.837221 2424 state_mem.go:36] "Initialized new in-memory state store"
Jan 13 21:18:54.837336 kubelet[2424]: I0113 21:18:54.837325 2424 kubelet.go:408] "Attempting to sync node with API server"
Jan 13 21:18:54.837365 kubelet[2424]: I0113 21:18:54.837340 2424 kubelet.go:303] "Adding static pod path" path="/etc/kubernetes/manifests"
Jan 13 21:18:54.837365 kubelet[2424]: I0113 21:18:54.837360 2424 kubelet.go:314] "Adding apiserver pod source"
Jan 13 21:18:54.837420 kubelet[2424]: I0113 21:18:54.837369 2424 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Jan 13 21:18:54.838367 kubelet[2424]: I0113 21:18:54.838331 2424 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1"
Jan 13 21:18:54.838824 kubelet[2424]: I0113 21:18:54.838798 2424 kubelet.go:837] "Not starting ClusterTrustBundle informer because we are in static kubelet mode"
Jan 13 21:18:54.839189 kubelet[2424]: I0113 21:18:54.839167 2424 server.go:1269] "Started kubelet"
Jan 13 21:18:54.839652 kubelet[2424]: I0113 21:18:54.839605 2424 server.go:163] "Starting to listen" address="0.0.0.0" port=10250
Jan 13 21:18:54.841099 kubelet[2424]: I0113 21:18:54.841072 2424 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Jan 13 21:18:54.841251 kubelet[2424]: I0113 21:18:54.841212 2424 server.go:460] "Adding debug handlers to kubelet server"
Jan 13 21:18:54.843284 kubelet[2424]: I0113 21:18:54.841332 2424 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Jan 13 21:18:54.850903 kubelet[2424]: I0113 21:18:54.850771 2424 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Jan 13 21:18:54.851936 kubelet[2424]: I0113 21:18:54.851903 2424 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key"
Jan 13 21:18:54.852328 kubelet[2424]: I0113 21:18:54.852304 2424 volume_manager.go:289] "Starting Kubelet Volume Manager"
Jan 13 21:18:54.852516 kubelet[2424]: E0113 21:18:54.852460 2424 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found"
Jan 13 21:18:54.852831 kubelet[2424]: I0113 21:18:54.852803 2424 desired_state_of_world_populator.go:146] "Desired state populator starts to run"
Jan 13 21:18:54.856700 kubelet[2424]: I0113 21:18:54.856670 2424 reconciler.go:26] "Reconciler: start to sync state"
Jan 13 21:18:54.864518 kubelet[2424]: E0113 21:18:54.863683 2424 kubelet.go:1478] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem"
Jan 13 21:18:54.866916 kubelet[2424]: I0113 21:18:54.866647 2424 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
Jan 13 21:18:54.867745 kubelet[2424]: I0113 21:18:54.867721 2424 factory.go:221] Registration of the containerd container factory successfully
Jan 13 21:18:54.867745 kubelet[2424]: I0113 21:18:54.867739 2424 factory.go:221] Registration of the systemd container factory successfully
Jan 13 21:18:54.872055 kubelet[2424]: I0113 21:18:54.872020 2424 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4"
Jan 13 21:18:54.873834 kubelet[2424]: I0113 21:18:54.873640 2424 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6"
Jan 13 21:18:54.873834 kubelet[2424]: I0113 21:18:54.873669 2424 status_manager.go:217] "Starting to sync pod status with apiserver"
Jan 13 21:18:54.873834 kubelet[2424]: I0113 21:18:54.873684 2424 kubelet.go:2321] "Starting kubelet main sync loop"
Jan 13 21:18:54.873834 kubelet[2424]: E0113 21:18:54.873723 2424 kubelet.go:2345] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
Jan 13 21:18:54.899692 kubelet[2424]: I0113 21:18:54.899663 2424 cpu_manager.go:214] "Starting CPU manager" policy="none"
Jan 13 21:18:54.899692 kubelet[2424]: I0113 21:18:54.899680 2424 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s"
Jan 13 21:18:54.899692 kubelet[2424]: I0113 21:18:54.899697 2424 state_mem.go:36] "Initialized new in-memory state store"
Jan 13 21:18:54.899840 kubelet[2424]: I0113 21:18:54.899827 2424 state_mem.go:88] "Updated default CPUSet" cpuSet=""
Jan 13 21:18:54.899864 kubelet[2424]: I0113 21:18:54.899837 2424 state_mem.go:96] "Updated CPUSet assignments" assignments={}
Jan 13 21:18:54.899864 kubelet[2424]: I0113 21:18:54.899853 2424 policy_none.go:49] "None policy: Start"
Jan 13 21:18:54.900336 kubelet[2424]: I0113 21:18:54.900320 2424 memory_manager.go:170] "Starting memorymanager" policy="None"
Jan 13 21:18:54.900336 kubelet[2424]: I0113 21:18:54.900342 2424 state_mem.go:35] "Initializing new in-memory state store"
Jan 13 21:18:54.900565 kubelet[2424]: I0113 21:18:54.900554 2424 state_mem.go:75] "Updated machine memory state"
Jan 13 21:18:54.904170 kubelet[2424]: I0113 21:18:54.904096 2424 manager.go:510] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found"
Jan 13 21:18:54.904266 kubelet[2424]: I0113 21:18:54.904239 2424 eviction_manager.go:189] "Eviction manager: starting control loop"
Jan 13 21:18:54.904301 kubelet[2424]: I0113 21:18:54.904257 2424 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s"
Jan 13 21:18:54.904888 kubelet[2424]: I0113 21:18:54.904700 2424 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
Jan 13 21:18:55.008158 kubelet[2424]: I0113 21:18:55.008122 2424 kubelet_node_status.go:72] "Attempting to register node" node="localhost"
Jan 13 21:18:55.015014 kubelet[2424]: I0113 21:18:55.014981 2424 kubelet_node_status.go:111] "Node was previously registered" node="localhost"
Jan 13 21:18:55.015014 kubelet[2424]: I0113 21:18:55.015066 2424 kubelet_node_status.go:75] "Successfully registered node" node="localhost"
Jan 13 21:18:55.057355 kubelet[2424]: I0113 21:18:55.057317 2424 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/50a9ae38ddb3bec3278d8dc73a6a7009-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"50a9ae38ddb3bec3278d8dc73a6a7009\") " pod="kube-system/kube-controller-manager-localhost"
Jan 13 21:18:55.057355 kubelet[2424]: I0113 21:18:55.057358 2424 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/50a9ae38ddb3bec3278d8dc73a6a7009-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"50a9ae38ddb3bec3278d8dc73a6a7009\") " pod="kube-system/kube-controller-manager-localhost"
Jan 13 21:18:55.057535 kubelet[2424]: I0113 21:18:55.057376 2424 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/50a9ae38ddb3bec3278d8dc73a6a7009-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"50a9ae38ddb3bec3278d8dc73a6a7009\") " pod="kube-system/kube-controller-manager-localhost"
Jan 13 21:18:55.057535 kubelet[2424]: I0113 21:18:55.057392 2424 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/50a9ae38ddb3bec3278d8dc73a6a7009-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"50a9ae38ddb3bec3278d8dc73a6a7009\") " pod="kube-system/kube-controller-manager-localhost"
Jan 13 21:18:55.057535 kubelet[2424]: I0113 21:18:55.057418 2424 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/a52b86ce975f496e6002ba953fa9b888-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"a52b86ce975f496e6002ba953fa9b888\") " pod="kube-system/kube-scheduler-localhost"
Jan 13 21:18:55.057535 kubelet[2424]: I0113 21:18:55.057436 2424 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/1e7033b7b56c4708631f7af5fcb94253-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"1e7033b7b56c4708631f7af5fcb94253\") " pod="kube-system/kube-apiserver-localhost"
Jan 13 21:18:55.057535 kubelet[2424]: I0113 21:18:55.057450 2424 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/1e7033b7b56c4708631f7af5fcb94253-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"1e7033b7b56c4708631f7af5fcb94253\") " pod="kube-system/kube-apiserver-localhost"
Jan 13 21:18:55.057640 kubelet[2424]: I0113 21:18:55.057465 2424 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/1e7033b7b56c4708631f7af5fcb94253-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"1e7033b7b56c4708631f7af5fcb94253\") " pod="kube-system/kube-apiserver-localhost"
Jan 13 21:18:55.057640 kubelet[2424]: I0113 21:18:55.057501 2424 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/50a9ae38ddb3bec3278d8dc73a6a7009-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"50a9ae38ddb3bec3278d8dc73a6a7009\") " pod="kube-system/kube-controller-manager-localhost"
Jan 13 21:18:55.283008 kubelet[2424]: E0113 21:18:55.282739 2424 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 13 21:18:55.283008 kubelet[2424]: E0113 21:18:55.282876 2424 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 13 21:18:55.283008 kubelet[2424]: E0113 21:18:55.282887 2424 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 13 21:18:55.837676 kubelet[2424]: I0113 21:18:55.837630 2424 apiserver.go:52] "Watching apiserver"
Jan 13 21:18:55.858073 kubelet[2424]: I0113 21:18:55.858022 2424 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world"
Jan 13 21:18:55.883691 kubelet[2424]: E0113 21:18:55.883543 2424 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 13 21:18:55.883790 kubelet[2424]: E0113 21:18:55.883780 2424 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 13 21:18:55.892624 kubelet[2424]: E0113 21:18:55.892326 2424 kubelet.go:1915] "Failed creating a mirror pod for" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost"
Jan 13 21:18:55.892624 kubelet[2424]: E0113 21:18:55.892477 2424 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 13 21:18:55.908496 kubelet[2424]: I0113 21:18:55.908307 2424 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=1.908292791 podStartE2EDuration="1.908292791s" podCreationTimestamp="2025-01-13 21:18:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-13 21:18:55.908078271 +0000 UTC m=+1.138604878" watchObservedRunningTime="2025-01-13 21:18:55.908292791 +0000 UTC m=+1.138819438"
Jan 13 21:18:55.915874 kubelet[2424]: I0113 21:18:55.915695 2424 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=1.915682881 podStartE2EDuration="1.915682881s" podCreationTimestamp="2025-01-13 21:18:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-13 21:18:55.915595481 +0000 UTC m=+1.146122128" watchObservedRunningTime="2025-01-13 21:18:55.915682881 +0000 UTC m=+1.146209528"
Jan 13 21:18:55.922013 kubelet[2424]: I0113 21:18:55.921685 2424 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=1.921675809 podStartE2EDuration="1.921675809s" podCreationTimestamp="2025-01-13 21:18:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-13 21:18:55.921617529 +0000 UTC m=+1.152144176" watchObservedRunningTime="2025-01-13 21:18:55.921675809 +0000 UTC m=+1.152202456"
Jan 13 21:18:56.220558 sudo[1575]: pam_unix(sudo:session): session closed for user root
Jan 13 21:18:56.222321 sshd[1572]: pam_unix(sshd:session): session closed for user core
Jan 13 21:18:56.225738 systemd[1]: sshd@4-10.0.0.74:22-10.0.0.1:38588.service: Deactivated successfully.
Jan 13 21:18:56.227371 systemd[1]: session-5.scope: Deactivated successfully.
Jan 13 21:18:56.228740 systemd[1]: session-5.scope: Consumed 5.067s CPU time, 153.4M memory peak, 0B memory swap peak.
Jan 13 21:18:56.229528 systemd-logind[1420]: Session 5 logged out. Waiting for processes to exit.
Jan 13 21:18:56.230866 systemd-logind[1420]: Removed session 5.
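The three pod_startup_latency_tracker entries above all report podStartSLOduration just under two seconds; since firstStartedPulling/lastFinishedPulling are zero timestamps (the images were already present), the SLO duration reduces to observed running time minus pod creation time. A small Go check of that arithmetic, with both timestamps copied from the kube-scheduler-localhost entry:

    package main

    import (
        "fmt"
        "time"
    )

    func main() {
        const layout = "2006-01-02 15:04:05 -0700 MST"
        created, err := time.Parse(layout, "2025-01-13 21:18:54 +0000 UTC")
        if err != nil {
            panic(err)
        }
        observed, err := time.Parse(layout, "2025-01-13 21:18:55.908292791 +0000 UTC")
        if err != nil {
            panic(err)
        }
        // podStartSLOduration = observed running time - pod creation time.
        fmt.Println(observed.Sub(created)) // 1.908292791s
    }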
Jan 13 21:18:56.885113 kubelet[2424]: E0113 21:18:56.885079 2424 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 13 21:18:57.275659 kubelet[2424]: E0113 21:18:57.275556 2424 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 13 21:18:58.064222 kubelet[2424]: E0113 21:18:58.064189 2424 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 13 21:18:59.316040 kubelet[2424]: I0113 21:18:59.316008 2424 kuberuntime_manager.go:1633] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24"
Jan 13 21:18:59.316454 containerd[1438]: time="2025-01-13T21:18:59.316338966Z" level=info msg="No cni config template is specified, wait for other system components to drop the config."
Jan 13 21:18:59.316725 kubelet[2424]: I0113 21:18:59.316606 2424 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24"
Jan 13 21:18:59.831345 systemd[1]: Created slice kubepods-burstable-pod8a615fbd_737d_4681_9e6a_21478caaf9ca.slice - libcontainer container kubepods-burstable-pod8a615fbd_737d_4681_9e6a_21478caaf9ca.slice.
Jan 13 21:18:59.841340 systemd[1]: Created slice kubepods-besteffort-poda45504b2_0c33_43c6_a7d1_2b0d016231e2.slice - libcontainer container kubepods-besteffort-poda45504b2_0c33_43c6_a7d1_2b0d016231e2.slice.
Jan 13 21:18:59.885708 kubelet[2424]: I0113 21:18:59.885662 2424 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni\" (UniqueName: \"kubernetes.io/host-path/8a615fbd-737d-4681-9e6a-21478caaf9ca-cni\") pod \"kube-flannel-ds-n57jr\" (UID: \"8a615fbd-737d-4681-9e6a-21478caaf9ca\") " pod="kube-flannel/kube-flannel-ds-n57jr"
Jan 13 21:18:59.885708 kubelet[2424]: I0113 21:18:59.885707 2424 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9pqlp\" (UniqueName: \"kubernetes.io/projected/8a615fbd-737d-4681-9e6a-21478caaf9ca-kube-api-access-9pqlp\") pod \"kube-flannel-ds-n57jr\" (UID: \"8a615fbd-737d-4681-9e6a-21478caaf9ca\") " pod="kube-flannel/kube-flannel-ds-n57jr"
Jan 13 21:18:59.885873 kubelet[2424]: I0113 21:18:59.885728 2424 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6t54t\" (UniqueName: \"kubernetes.io/projected/a45504b2-0c33-43c6-a7d1-2b0d016231e2-kube-api-access-6t54t\") pod \"kube-proxy-wtjgs\" (UID: \"a45504b2-0c33-43c6-a7d1-2b0d016231e2\") " pod="kube-system/kube-proxy-wtjgs"
Jan 13 21:18:59.885873 kubelet[2424]: I0113 21:18:59.885745 2424 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/a45504b2-0c33-43c6-a7d1-2b0d016231e2-xtables-lock\") pod \"kube-proxy-wtjgs\" (UID: \"a45504b2-0c33-43c6-a7d1-2b0d016231e2\") " pod="kube-system/kube-proxy-wtjgs"
Jan 13 21:18:59.885873 kubelet[2424]: I0113 21:18:59.885763 2424 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-plugin\" (UniqueName: \"kubernetes.io/host-path/8a615fbd-737d-4681-9e6a-21478caaf9ca-cni-plugin\") pod \"kube-flannel-ds-n57jr\" (UID: \"8a615fbd-737d-4681-9e6a-21478caaf9ca\") " pod="kube-flannel/kube-flannel-ds-n57jr"
Jan 13 21:18:59.885873 kubelet[2424]: I0113 21:18:59.885781 2424 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flannel-cfg\" (UniqueName: \"kubernetes.io/configmap/8a615fbd-737d-4681-9e6a-21478caaf9ca-flannel-cfg\") pod \"kube-flannel-ds-n57jr\" (UID: \"8a615fbd-737d-4681-9e6a-21478caaf9ca\") " pod="kube-flannel/kube-flannel-ds-n57jr"
Jan 13 21:18:59.885873 kubelet[2424]: I0113 21:18:59.885796 2424 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/a45504b2-0c33-43c6-a7d1-2b0d016231e2-lib-modules\") pod \"kube-proxy-wtjgs\" (UID: \"a45504b2-0c33-43c6-a7d1-2b0d016231e2\") " pod="kube-system/kube-proxy-wtjgs"
Jan 13 21:18:59.885982 kubelet[2424]: I0113 21:18:59.885810 2424 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run\" (UniqueName: \"kubernetes.io/host-path/8a615fbd-737d-4681-9e6a-21478caaf9ca-run\") pod \"kube-flannel-ds-n57jr\" (UID: \"8a615fbd-737d-4681-9e6a-21478caaf9ca\") " pod="kube-flannel/kube-flannel-ds-n57jr"
Jan 13 21:18:59.885982 kubelet[2424]: I0113 21:18:59.885826 2424 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/8a615fbd-737d-4681-9e6a-21478caaf9ca-xtables-lock\") pod \"kube-flannel-ds-n57jr\" (UID: \"8a615fbd-737d-4681-9e6a-21478caaf9ca\") " pod="kube-flannel/kube-flannel-ds-n57jr"
Jan 13 21:18:59.885982 kubelet[2424]: I0113 21:18:59.885841 2424 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/a45504b2-0c33-43c6-a7d1-2b0d016231e2-kube-proxy\") pod \"kube-proxy-wtjgs\" (UID: \"a45504b2-0c33-43c6-a7d1-2b0d016231e2\") " pod="kube-system/kube-proxy-wtjgs"
Jan 13 21:18:59.993872 kubelet[2424]: E0113 21:18:59.993826 2424 projected.go:288] Couldn't get configMap kube-system/kube-root-ca.crt: configmap "kube-root-ca.crt" not found
Jan 13 21:18:59.993872 kubelet[2424]: E0113 21:18:59.993860 2424 projected.go:194] Error preparing data for projected volume kube-api-access-6t54t for pod kube-system/kube-proxy-wtjgs: configmap "kube-root-ca.crt" not found
Jan 13 21:18:59.994074 kubelet[2424]: E0113 21:18:59.993920 2424 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/a45504b2-0c33-43c6-a7d1-2b0d016231e2-kube-api-access-6t54t podName:a45504b2-0c33-43c6-a7d1-2b0d016231e2 nodeName:}" failed. No retries permitted until 2025-01-13 21:19:00.493901272 +0000 UTC m=+5.724427919 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-6t54t" (UniqueName: "kubernetes.io/projected/a45504b2-0c33-43c6-a7d1-2b0d016231e2-kube-api-access-6t54t") pod "kube-proxy-wtjgs" (UID: "a45504b2-0c33-43c6-a7d1-2b0d016231e2") : configmap "kube-root-ca.crt" not found
Jan 13 21:18:59.996175 kubelet[2424]: E0113 21:18:59.996134 2424 projected.go:288] Couldn't get configMap kube-flannel/kube-root-ca.crt: configmap "kube-root-ca.crt" not found
Jan 13 21:18:59.996175 kubelet[2424]: E0113 21:18:59.996161 2424 projected.go:194] Error preparing data for projected volume kube-api-access-9pqlp for pod kube-flannel/kube-flannel-ds-n57jr: configmap "kube-root-ca.crt" not found
Jan 13 21:18:59.996326 kubelet[2424]: E0113 21:18:59.996212 2424 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/8a615fbd-737d-4681-9e6a-21478caaf9ca-kube-api-access-9pqlp podName:8a615fbd-737d-4681-9e6a-21478caaf9ca nodeName:}" failed. No retries permitted until 2025-01-13 21:19:00.496198355 +0000 UTC m=+5.726725002 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-9pqlp" (UniqueName: "kubernetes.io/projected/8a615fbd-737d-4681-9e6a-21478caaf9ca-kube-api-access-9pqlp") pod "kube-flannel-ds-n57jr" (UID: "8a615fbd-737d-4681-9e6a-21478caaf9ca") : configmap "kube-root-ca.crt" not found
Jan 13 21:19:00.735187 kubelet[2424]: E0113 21:19:00.735146 2424 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 13 21:19:00.735853 containerd[1438]: time="2025-01-13T21:19:00.735815518Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-flannel-ds-n57jr,Uid:8a615fbd-737d-4681-9e6a-21478caaf9ca,Namespace:kube-flannel,Attempt:0,}"
Jan 13 21:19:00.755732 containerd[1438]: time="2025-01-13T21:19:00.755607937Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jan 13 21:19:00.755732 containerd[1438]: time="2025-01-13T21:19:00.755664337Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jan 13 21:19:00.755732 containerd[1438]: time="2025-01-13T21:19:00.755686417Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 13 21:19:00.755938 containerd[1438]: time="2025-01-13T21:19:00.755781297Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 13 21:19:00.756409 kubelet[2424]: E0113 21:19:00.756336 2424 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 13 21:19:00.757867 containerd[1438]: time="2025-01-13T21:19:00.757551099Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-wtjgs,Uid:a45504b2-0c33-43c6-a7d1-2b0d016231e2,Namespace:kube-system,Attempt:0,}"
Jan 13 21:19:00.776704 systemd[1]: Started cri-containerd-ab882c3a016bc36f2b8c9c4c77874ae309b87f4d51d449688fbbdfefcc10edb8.scope - libcontainer container ab882c3a016bc36f2b8c9c4c77874ae309b87f4d51d449688fbbdfefcc10edb8.
Jan 13 21:19:00.790954 containerd[1438]: time="2025-01-13T21:19:00.790848211Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jan 13 21:19:00.790954 containerd[1438]: time="2025-01-13T21:19:00.790916611Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jan 13 21:19:00.790954 containerd[1438]: time="2025-01-13T21:19:00.790928691Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 13 21:19:00.791120 containerd[1438]: time="2025-01-13T21:19:00.791041212Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 13 21:19:00.808727 systemd[1]: Started cri-containerd-ed322b8246849972ba030ad4f502a8143332603a79bc35f37f9f4dc2d7bd683b.scope - libcontainer container ed322b8246849972ba030ad4f502a8143332603a79bc35f37f9f4dc2d7bd683b.
Jan 13 21:19:00.809289 containerd[1438]: time="2025-01-13T21:19:00.808988069Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-flannel-ds-n57jr,Uid:8a615fbd-737d-4681-9e6a-21478caaf9ca,Namespace:kube-flannel,Attempt:0,} returns sandbox id \"ab882c3a016bc36f2b8c9c4c77874ae309b87f4d51d449688fbbdfefcc10edb8\""
Jan 13 21:19:00.810422 kubelet[2424]: E0113 21:19:00.810397 2424 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 13 21:19:00.812249 containerd[1438]: time="2025-01-13T21:19:00.812195712Z" level=info msg="PullImage \"docker.io/flannel/flannel-cni-plugin:v1.1.2\""
Jan 13 21:19:00.831044 containerd[1438]: time="2025-01-13T21:19:00.831006931Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-wtjgs,Uid:a45504b2-0c33-43c6-a7d1-2b0d016231e2,Namespace:kube-system,Attempt:0,} returns sandbox id \"ed322b8246849972ba030ad4f502a8143332603a79bc35f37f9f4dc2d7bd683b\""
Jan 13 21:19:00.831791 kubelet[2424]: E0113 21:19:00.831768 2424 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 13 21:19:00.833278 containerd[1438]: time="2025-01-13T21:19:00.833250173Z" level=info msg="CreateContainer within sandbox \"ed322b8246849972ba030ad4f502a8143332603a79bc35f37f9f4dc2d7bd683b\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}"
Jan 13 21:19:00.843897 containerd[1438]: time="2025-01-13T21:19:00.843847903Z" level=info msg="CreateContainer within sandbox \"ed322b8246849972ba030ad4f502a8143332603a79bc35f37f9f4dc2d7bd683b\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"b73b971787f8ecfc0f1cfa73af466ab841b2e55384f8d46e8fa874f03ad929f6\""
Jan 13 21:19:00.845582 containerd[1438]: time="2025-01-13T21:19:00.844615064Z" level=info msg="StartContainer for \"b73b971787f8ecfc0f1cfa73af466ab841b2e55384f8d46e8fa874f03ad929f6\""
Jan 13 21:19:00.869646 systemd[1]: Started cri-containerd-b73b971787f8ecfc0f1cfa73af466ab841b2e55384f8d46e8fa874f03ad929f6.scope - libcontainer container b73b971787f8ecfc0f1cfa73af466ab841b2e55384f8d46e8fa874f03ad929f6.
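At 21:18:59 above, the kubelet pushed podcidr 192.168.0.0/24 to containerd but deliberately wrote no CNI config ("wait for other system components to drop the config."): materializing that range on the node is flannel's job. A quick stdlib sanity check of the advertised range, purely illustrative:

    package main

    import (
        "fmt"
        "net"
    )

    func main() {
        // CIDR taken from the "Updating runtime config through cri with podcidr" entry.
        _, ipnet, err := net.ParseCIDR("192.168.0.0/24")
        if err != nil {
            panic(err)
        }
        ones, bits := ipnet.Mask.Size()
        fmt.Printf("pod CIDR %s: %d addresses for this node\n", ipnet, 1<<(bits-ones)) // 256
    }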
Jan 13 21:19:00.898662 containerd[1438]: time="2025-01-13T21:19:00.898609077Z" level=info msg="StartContainer for \"b73b971787f8ecfc0f1cfa73af466ab841b2e55384f8d46e8fa874f03ad929f6\" returns successfully"
Jan 13 21:19:01.905912 kubelet[2424]: E0113 21:19:01.905802 2424 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 13 21:19:01.918079 kubelet[2424]: I0113 21:19:01.918023 2424 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-wtjgs" podStartSLOduration=2.918008097 podStartE2EDuration="2.918008097s" podCreationTimestamp="2025-01-13 21:18:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-13 21:19:01.917609336 +0000 UTC m=+7.148135983" watchObservedRunningTime="2025-01-13 21:19:01.918008097 +0000 UTC m=+7.148534744"
Jan 13 21:19:01.928454 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2492562582.mount: Deactivated successfully.
Jan 13 21:19:01.954339 containerd[1438]: time="2025-01-13T21:19:01.954286930Z" level=info msg="ImageCreate event name:\"docker.io/flannel/flannel-cni-plugin:v1.1.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 13 21:19:01.954775 containerd[1438]: time="2025-01-13T21:19:01.954743050Z" level=info msg="stop pulling image docker.io/flannel/flannel-cni-plugin:v1.1.2: active requests=0, bytes read=3673531"
Jan 13 21:19:01.955563 containerd[1438]: time="2025-01-13T21:19:01.955526691Z" level=info msg="ImageCreate event name:\"sha256:b45062ceea496fc421523388cb91166abc7715a15c2e2cbab4e6f8c9d5dc0ab8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 13 21:19:01.957687 containerd[1438]: time="2025-01-13T21:19:01.957637253Z" level=info msg="ImageCreate event name:\"docker.io/flannel/flannel-cni-plugin@sha256:bf4b62b131666d040f35a327d906ee5a3418280b68a88d9b9c7e828057210443\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 13 21:19:01.958580 containerd[1438]: time="2025-01-13T21:19:01.958540814Z" level=info msg="Pulled image \"docker.io/flannel/flannel-cni-plugin:v1.1.2\" with image id \"sha256:b45062ceea496fc421523388cb91166abc7715a15c2e2cbab4e6f8c9d5dc0ab8\", repo tag \"docker.io/flannel/flannel-cni-plugin:v1.1.2\", repo digest \"docker.io/flannel/flannel-cni-plugin@sha256:bf4b62b131666d040f35a327d906ee5a3418280b68a88d9b9c7e828057210443\", size \"3662650\" in 1.146190862s"
Jan 13 21:19:01.958621 containerd[1438]: time="2025-01-13T21:19:01.958580334Z" level=info msg="PullImage \"docker.io/flannel/flannel-cni-plugin:v1.1.2\" returns image reference \"sha256:b45062ceea496fc421523388cb91166abc7715a15c2e2cbab4e6f8c9d5dc0ab8\""
Jan 13 21:19:01.960355 containerd[1438]: time="2025-01-13T21:19:01.960323415Z" level=info msg="CreateContainer within sandbox \"ab882c3a016bc36f2b8c9c4c77874ae309b87f4d51d449688fbbdfefcc10edb8\" for container &ContainerMetadata{Name:install-cni-plugin,Attempt:0,}"
Jan 13 21:19:01.969357 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1950527638.mount: Deactivated successfully.
Jan 13 21:19:01.969908 containerd[1438]: time="2025-01-13T21:19:01.969587944Z" level=info msg="CreateContainer within sandbox \"ab882c3a016bc36f2b8c9c4c77874ae309b87f4d51d449688fbbdfefcc10edb8\" for &ContainerMetadata{Name:install-cni-plugin,Attempt:0,} returns container id \"cc1064bf04fe28ae569c0544910178e01c07e21fad55c5b96026857e4844b38b\""
Jan 13 21:19:01.970065 containerd[1438]: time="2025-01-13T21:19:01.970029904Z" level=info msg="StartContainer for \"cc1064bf04fe28ae569c0544910178e01c07e21fad55c5b96026857e4844b38b\""
Jan 13 21:19:01.993657 systemd[1]: Started cri-containerd-cc1064bf04fe28ae569c0544910178e01c07e21fad55c5b96026857e4844b38b.scope - libcontainer container cc1064bf04fe28ae569c0544910178e01c07e21fad55c5b96026857e4844b38b.
Jan 13 21:19:02.015287 containerd[1438]: time="2025-01-13T21:19:02.015228625Z" level=info msg="StartContainer for \"cc1064bf04fe28ae569c0544910178e01c07e21fad55c5b96026857e4844b38b\" returns successfully"
Jan 13 21:19:02.016614 systemd[1]: cri-containerd-cc1064bf04fe28ae569c0544910178e01c07e21fad55c5b96026857e4844b38b.scope: Deactivated successfully.
Jan 13 21:19:02.059416 containerd[1438]: time="2025-01-13T21:19:02.059349623Z" level=info msg="shim disconnected" id=cc1064bf04fe28ae569c0544910178e01c07e21fad55c5b96026857e4844b38b namespace=k8s.io
Jan 13 21:19:02.059416 containerd[1438]: time="2025-01-13T21:19:02.059403943Z" level=warning msg="cleaning up after shim disconnected" id=cc1064bf04fe28ae569c0544910178e01c07e21fad55c5b96026857e4844b38b namespace=k8s.io
Jan 13 21:19:02.059416 containerd[1438]: time="2025-01-13T21:19:02.059412103Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jan 13 21:19:02.908355 kubelet[2424]: E0113 21:19:02.908316 2424 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 13 21:19:02.909140 kubelet[2424]: E0113 21:19:02.908697 2424 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 13 21:19:02.909741 containerd[1438]: time="2025-01-13T21:19:02.909696433Z" level=info msg="PullImage \"docker.io/flannel/flannel:v0.22.0\""
Jan 13 21:19:04.025188 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3284716954.mount: Deactivated successfully.
Jan 13 21:19:04.475167 containerd[1438]: time="2025-01-13T21:19:04.475039754Z" level=info msg="ImageCreate event name:\"docker.io/flannel/flannel:v0.22.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 13 21:19:04.476141 containerd[1438]: time="2025-01-13T21:19:04.475997394Z" level=info msg="stop pulling image docker.io/flannel/flannel:v0.22.0: active requests=0, bytes read=26874260"
Jan 13 21:19:04.476763 containerd[1438]: time="2025-01-13T21:19:04.476707715Z" level=info msg="ImageCreate event name:\"sha256:b3d1319ea6da12d4a1dd21a923f6a71f942a7b6e2c4763b8a3cca0725fb8aadf\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 13 21:19:04.479882 containerd[1438]: time="2025-01-13T21:19:04.479844717Z" level=info msg="ImageCreate event name:\"docker.io/flannel/flannel@sha256:5f83f1243057458e27249157394e3859cf31cc075354af150d497f2ebc8b54db\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 13 21:19:04.481280 containerd[1438]: time="2025-01-13T21:19:04.481234798Z" level=info msg="Pulled image \"docker.io/flannel/flannel:v0.22.0\" with image id \"sha256:b3d1319ea6da12d4a1dd21a923f6a71f942a7b6e2c4763b8a3cca0725fb8aadf\", repo tag \"docker.io/flannel/flannel:v0.22.0\", repo digest \"docker.io/flannel/flannel@sha256:5f83f1243057458e27249157394e3859cf31cc075354af150d497f2ebc8b54db\", size \"26863435\" in 1.571497405s"
Jan 13 21:19:04.481329 containerd[1438]: time="2025-01-13T21:19:04.481281038Z" level=info msg="PullImage \"docker.io/flannel/flannel:v0.22.0\" returns image reference \"sha256:b3d1319ea6da12d4a1dd21a923f6a71f942a7b6e2c4763b8a3cca0725fb8aadf\""
Jan 13 21:19:04.484052 containerd[1438]: time="2025-01-13T21:19:04.483926400Z" level=info msg="CreateContainer within sandbox \"ab882c3a016bc36f2b8c9c4c77874ae309b87f4d51d449688fbbdfefcc10edb8\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}"
Jan 13 21:19:04.492555 containerd[1438]: time="2025-01-13T21:19:04.492452407Z" level=info msg="CreateContainer within sandbox \"ab882c3a016bc36f2b8c9c4c77874ae309b87f4d51d449688fbbdfefcc10edb8\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"bf0f105d16ffb8dbb604808543fa2f3c85f73417f447a81bd5e10e31fe9e84b9\""
Jan 13 21:19:04.492996 containerd[1438]: time="2025-01-13T21:19:04.492968607Z" level=info msg="StartContainer for \"bf0f105d16ffb8dbb604808543fa2f3c85f73417f447a81bd5e10e31fe9e84b9\""
Jan 13 21:19:04.530686 systemd[1]: Started cri-containerd-bf0f105d16ffb8dbb604808543fa2f3c85f73417f447a81bd5e10e31fe9e84b9.scope - libcontainer container bf0f105d16ffb8dbb604808543fa2f3c85f73417f447a81bd5e10e31fe9e84b9.
Jan 13 21:19:04.559979 systemd[1]: cri-containerd-bf0f105d16ffb8dbb604808543fa2f3c85f73417f447a81bd5e10e31fe9e84b9.scope: Deactivated successfully.
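The install-cni-plugin and install-cni containers above only stage the flannel CNI binary and its config; pod networking still depends on the flannel daemon leasing a subnet and writing /run/flannel/subnet.env, the file the CNI plugin's loadFlannelSubnetEnv reads and exactly the file the coredns sandbox failures further below cannot find yet. A hypothetical standalone reader for that KEY=VALUE file, assuming flannel's conventional keys:

    package main

    import (
        "bufio"
        "fmt"
        "os"
        "strings"
    )

    func main() {
        f, err := os.Open("/run/flannel/subnet.env")
        if err != nil {
            // The state during the sandbox failures below: flannel has not
            // leased a subnet yet, so the file does not exist.
            fmt.Fprintln(os.Stderr, "loadFlannelSubnetEnv failed:", err)
            os.Exit(1)
        }
        defer f.Close()

        env := map[string]string{}
        sc := bufio.NewScanner(f)
        for sc.Scan() {
            if k, v, ok := strings.Cut(sc.Text(), "="); ok {
                env[k] = v
            }
        }
        // FLANNEL_NETWORK, FLANNEL_SUBNET, FLANNEL_MTU and FLANNEL_IPMASQ are
        // the keys flannel conventionally writes to this file.
        fmt.Println("network:", env["FLANNEL_NETWORK"], "subnet:", env["FLANNEL_SUBNET"])
    }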
Jan 13 21:19:04.612405 containerd[1438]: time="2025-01-13T21:19:04.612277977Z" level=info msg="StartContainer for \"bf0f105d16ffb8dbb604808543fa2f3c85f73417f447a81bd5e10e31fe9e84b9\" returns successfully"
Jan 13 21:19:04.633902 kubelet[2424]: I0113 21:19:04.633811 2424 kubelet_node_status.go:488] "Fast updating node status as it just became ready"
Jan 13 21:19:04.635056 containerd[1438]: time="2025-01-13T21:19:04.634711314Z" level=info msg="shim disconnected" id=bf0f105d16ffb8dbb604808543fa2f3c85f73417f447a81bd5e10e31fe9e84b9 namespace=k8s.io
Jan 13 21:19:04.635056 containerd[1438]: time="2025-01-13T21:19:04.634761634Z" level=warning msg="cleaning up after shim disconnected" id=bf0f105d16ffb8dbb604808543fa2f3c85f73417f447a81bd5e10e31fe9e84b9 namespace=k8s.io
Jan 13 21:19:04.635056 containerd[1438]: time="2025-01-13T21:19:04.634771434Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jan 13 21:19:04.663422 systemd[1]: Created slice kubepods-burstable-pod6c44d2b0_957a_4f1f_8c9d_fe9bef4f2f65.slice - libcontainer container kubepods-burstable-pod6c44d2b0_957a_4f1f_8c9d_fe9bef4f2f65.slice.
Jan 13 21:19:04.670174 systemd[1]: Created slice kubepods-burstable-pod571933e3_16f0_4e82_9867_e2d4431d18ed.slice - libcontainer container kubepods-burstable-pod571933e3_16f0_4e82_9867_e2d4431d18ed.slice.
Jan 13 21:19:04.721500 kubelet[2424]: I0113 21:19:04.721386 2424 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/6c44d2b0-957a-4f1f-8c9d-fe9bef4f2f65-config-volume\") pod \"coredns-6f6b679f8f-4v788\" (UID: \"6c44d2b0-957a-4f1f-8c9d-fe9bef4f2f65\") " pod="kube-system/coredns-6f6b679f8f-4v788"
Jan 13 21:19:04.721500 kubelet[2424]: I0113 21:19:04.721436 2424 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4nlkq\" (UniqueName: \"kubernetes.io/projected/6c44d2b0-957a-4f1f-8c9d-fe9bef4f2f65-kube-api-access-4nlkq\") pod \"coredns-6f6b679f8f-4v788\" (UID: \"6c44d2b0-957a-4f1f-8c9d-fe9bef4f2f65\") " pod="kube-system/coredns-6f6b679f8f-4v788"
Jan 13 21:19:04.721500 kubelet[2424]: I0113 21:19:04.721460 2424 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/571933e3-16f0-4e82-9867-e2d4431d18ed-config-volume\") pod \"coredns-6f6b679f8f-xjpvx\" (UID: \"571933e3-16f0-4e82-9867-e2d4431d18ed\") " pod="kube-system/coredns-6f6b679f8f-xjpvx"
Jan 13 21:19:04.721500 kubelet[2424]: I0113 21:19:04.721496 2424 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sh68t\" (UniqueName: \"kubernetes.io/projected/571933e3-16f0-4e82-9867-e2d4431d18ed-kube-api-access-sh68t\") pod \"coredns-6f6b679f8f-xjpvx\" (UID: \"571933e3-16f0-4e82-9867-e2d4431d18ed\") " pod="kube-system/coredns-6f6b679f8f-xjpvx"
Jan 13 21:19:04.912921 kubelet[2424]: E0113 21:19:04.912810 2424 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 13 21:19:04.917512 containerd[1438]: time="2025-01-13T21:19:04.917421808Z" level=info msg="CreateContainer within sandbox \"ab882c3a016bc36f2b8c9c4c77874ae309b87f4d51d449688fbbdfefcc10edb8\" for container &ContainerMetadata{Name:kube-flannel,Attempt:0,}"
Jan 13 21:19:04.929158 containerd[1438]: time="2025-01-13T21:19:04.929098296Z" level=info msg="CreateContainer within sandbox \"ab882c3a016bc36f2b8c9c4c77874ae309b87f4d51d449688fbbdfefcc10edb8\" for &ContainerMetadata{Name:kube-flannel,Attempt:0,} returns container id \"e05a2fd8de8b1e4bb6ce303ca90c13fcb96c84854ac63be7bb0e497dd84cecec\""
Jan 13 21:19:04.929710 containerd[1438]: time="2025-01-13T21:19:04.929683497Z" level=info msg="StartContainer for \"e05a2fd8de8b1e4bb6ce303ca90c13fcb96c84854ac63be7bb0e497dd84cecec\""
Jan 13 21:19:04.952669 systemd[1]: Started cri-containerd-e05a2fd8de8b1e4bb6ce303ca90c13fcb96c84854ac63be7bb0e497dd84cecec.scope - libcontainer container e05a2fd8de8b1e4bb6ce303ca90c13fcb96c84854ac63be7bb0e497dd84cecec.
Jan 13 21:19:04.960933 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-bf0f105d16ffb8dbb604808543fa2f3c85f73417f447a81bd5e10e31fe9e84b9-rootfs.mount: Deactivated successfully.
Jan 13 21:19:04.967759 kubelet[2424]: E0113 21:19:04.967714 2424 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 13 21:19:04.969629 containerd[1438]: time="2025-01-13T21:19:04.969543327Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-4v788,Uid:6c44d2b0-957a-4f1f-8c9d-fe9bef4f2f65,Namespace:kube-system,Attempt:0,}"
Jan 13 21:19:04.974231 kubelet[2424]: E0113 21:19:04.974203 2424 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 13 21:19:04.975245 containerd[1438]: time="2025-01-13T21:19:04.975209811Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-xjpvx,Uid:571933e3-16f0-4e82-9867-e2d4431d18ed,Namespace:kube-system,Attempt:0,}"
Jan 13 21:19:04.985316 containerd[1438]: time="2025-01-13T21:19:04.985262499Z" level=info msg="StartContainer for \"e05a2fd8de8b1e4bb6ce303ca90c13fcb96c84854ac63be7bb0e497dd84cecec\" returns successfully"
Jan 13 21:19:05.075950 containerd[1438]: time="2025-01-13T21:19:05.073571362Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-4v788,Uid:6c44d2b0-957a-4f1f-8c9d-fe9bef4f2f65,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"74a763e2791c77f66c528e2dcda25abdc30f40435d453dc55f2075ac88fe36e4\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory"
Jan 13 21:19:05.076141 kubelet[2424]: E0113 21:19:05.073820 2424 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"74a763e2791c77f66c528e2dcda25abdc30f40435d453dc55f2075ac88fe36e4\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory"
Jan 13 21:19:05.076141 kubelet[2424]: E0113 21:19:05.073889 2424 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"74a763e2791c77f66c528e2dcda25abdc30f40435d453dc55f2075ac88fe36e4\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory" pod="kube-system/coredns-6f6b679f8f-4v788"
Jan 13 21:19:05.076809 kubelet[2424]: E0113 21:19:05.076760 2424 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"74a763e2791c77f66c528e2dcda25abdc30f40435d453dc55f2075ac88fe36e4\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory" pod="kube-system/coredns-6f6b679f8f-4v788"
Jan 13 21:19:05.076899 kubelet[2424]: E0113 21:19:05.076838 2424 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-6f6b679f8f-4v788_kube-system(6c44d2b0-957a-4f1f-8c9d-fe9bef4f2f65)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-6f6b679f8f-4v788_kube-system(6c44d2b0-957a-4f1f-8c9d-fe9bef4f2f65)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"74a763e2791c77f66c528e2dcda25abdc30f40435d453dc55f2075ac88fe36e4\\\": plugin type=\\\"flannel\\\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory\"" pod="kube-system/coredns-6f6b679f8f-4v788" podUID="6c44d2b0-957a-4f1f-8c9d-fe9bef4f2f65"
Jan 13 21:19:05.085378 containerd[1438]: time="2025-01-13T21:19:05.083474089Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-xjpvx,Uid:571933e3-16f0-4e82-9867-e2d4431d18ed,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"9c55ec6fb38e3ecf272f45a3534cb0e1b653304455b4f1cc807d3177b3dfaf67\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory"
Jan 13 21:19:05.085471 kubelet[2424]: E0113 21:19:05.083677 2424 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9c55ec6fb38e3ecf272f45a3534cb0e1b653304455b4f1cc807d3177b3dfaf67\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory"
Jan 13 21:19:05.085471 kubelet[2424]: E0113 21:19:05.083737 2424 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9c55ec6fb38e3ecf272f45a3534cb0e1b653304455b4f1cc807d3177b3dfaf67\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory" pod="kube-system/coredns-6f6b679f8f-xjpvx"
Jan 13 21:19:05.085471 kubelet[2424]: E0113 21:19:05.083753 2424 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9c55ec6fb38e3ecf272f45a3534cb0e1b653304455b4f1cc807d3177b3dfaf67\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory" pod="kube-system/coredns-6f6b679f8f-xjpvx"
Jan 13 21:19:05.085471 kubelet[2424]: E0113 21:19:05.083794 2424 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-6f6b679f8f-xjpvx_kube-system(571933e3-16f0-4e82-9867-e2d4431d18ed)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-6f6b679f8f-xjpvx_kube-system(571933e3-16f0-4e82-9867-e2d4431d18ed)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"9c55ec6fb38e3ecf272f45a3534cb0e1b653304455b4f1cc807d3177b3dfaf67\\\": plugin type=\\\"flannel\\\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory\"" pod="kube-system/coredns-6f6b679f8f-xjpvx" podUID="571933e3-16f0-4e82-9867-e2d4431d18ed"
Jan 13 21:19:05.155886 kubelet[2424]: E0113 21:19:05.155855 2424 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 13 21:19:05.611625 update_engine[1427]: I20250113 21:19:05.611531 1427 update_attempter.cc:509] Updating boot flags...
Jan 13 21:19:05.631604 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 39 scanned by (udev-worker) (2992)
Jan 13 21:19:05.657562 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 39 scanned by (udev-worker) (2990)
Jan 13 21:19:05.684416 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 39 scanned by (udev-worker) (2990)
Jan 13 21:19:05.916858 kubelet[2424]: E0113 21:19:05.916449 2424 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 13 21:19:05.916858 kubelet[2424]: E0113 21:19:05.916580 2424 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 13 21:19:05.936620 kubelet[2424]: I0113 21:19:05.936569 2424 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-flannel/kube-flannel-ds-n57jr" podStartSLOduration=3.265535405 podStartE2EDuration="6.936551053s" podCreationTimestamp="2025-01-13 21:18:59 +0000 UTC" firstStartedPulling="2025-01-13 21:19:00.811408111 +0000 UTC m=+6.041934758" lastFinishedPulling="2025-01-13 21:19:04.482423759 +0000 UTC m=+9.712950406" observedRunningTime="2025-01-13 21:19:05.928235327 +0000 UTC m=+11.158761974" watchObservedRunningTime="2025-01-13 21:19:05.936551053 +0000 UTC m=+11.167077700"
Jan 13 21:19:05.957950 systemd[1]: run-netns-cni\x2d15c89efc\x2ddb69\x2d3753\x2de033\x2da94416550eb1.mount: Deactivated successfully.
Jan 13 21:19:05.958047 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-9c55ec6fb38e3ecf272f45a3534cb0e1b653304455b4f1cc807d3177b3dfaf67-shm.mount: Deactivated successfully.
Jan 13 21:19:05.958103 systemd[1]: run-netns-cni\x2dfcc69e2f\x2d4001\x2defab\x2d5c12\x2dc2553f55eeb8.mount: Deactivated successfully.
Jan 13 21:19:05.958151 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-74a763e2791c77f66c528e2dcda25abdc30f40435d453dc55f2075ac88fe36e4-shm.mount: Deactivated successfully.
Jan 13 21:19:06.124876 systemd-networkd[1376]: flannel.1: Link UP
Jan 13 21:19:06.124888 systemd-networkd[1376]: flannel.1: Gained carrier
Jan 13 21:19:06.917919 kubelet[2424]: E0113 21:19:06.917879 2424 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 13 21:19:07.283529 kubelet[2424]: E0113 21:19:07.282605 2424 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 13 21:19:07.977929 systemd-networkd[1376]: flannel.1: Gained IPv6LL
Jan 13 21:19:08.071373 kubelet[2424]: E0113 21:19:08.071307 2424 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 13 21:19:15.875009 kubelet[2424]: E0113 21:19:15.874877 2424 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 13 21:19:15.875382 containerd[1438]: time="2025-01-13T21:19:15.875268059Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-4v788,Uid:6c44d2b0-957a-4f1f-8c9d-fe9bef4f2f65,Namespace:kube-system,Attempt:0,}"
Jan 13 21:19:15.907938 systemd-networkd[1376]: cni0: Link UP
Jan 13 21:19:15.907944 systemd-networkd[1376]: cni0: Gained carrier
Jan 13 21:19:15.910758 systemd-networkd[1376]: cni0: Lost carrier
Jan 13 21:19:15.914071 systemd-networkd[1376]: veth1ef7ed09: Link UP
Jan 13 21:19:15.917539 kernel: cni0: port 1(veth1ef7ed09) entered blocking state
Jan 13 21:19:15.917602 kernel: cni0: port 1(veth1ef7ed09) entered disabled state
Jan 13 21:19:15.917619 kernel: veth1ef7ed09: entered allmulticast mode
Jan 13 21:19:15.917633 kernel: veth1ef7ed09: entered promiscuous mode
Jan 13 21:19:15.923098 kernel: cni0: port 1(veth1ef7ed09) entered blocking state
Jan 13 21:19:15.923147 kernel: cni0: port 1(veth1ef7ed09) entered forwarding state
Jan 13 21:19:15.925506 kernel: cni0: port 1(veth1ef7ed09) entered disabled state
Jan 13 21:19:15.933514 kernel: cni0: port 1(veth1ef7ed09) entered blocking state
Jan 13 21:19:15.933559 kernel: cni0: port 1(veth1ef7ed09) entered forwarding state
Jan 13 21:19:15.933643 systemd-networkd[1376]: veth1ef7ed09: Gained carrier
Jan 13 21:19:15.933871 systemd-networkd[1376]: cni0: Gained carrier
Jan 13 21:19:15.935439 containerd[1438]: map[string]interface {}{"cniVersion":"0.3.1", "hairpinMode":true, "ipMasq":false, "ipam":map[string]interface {}{"ranges":[][]map[string]interface {}{[]map[string]interface {}{map[string]interface {}{"subnet":"192.168.0.0/24"}}}, "routes":[]types.Route{types.Route{Dst:net.IPNet{IP:net.IP{0xc0, 0xa8, 0x0, 0x0}, Mask:net.IPMask{0xff, 0xff, 0x80, 0x0}}, GW:net.IP(nil)}}, "type":"host-local"}, "isDefaultGateway":true, "isGateway":true, "mtu":(*uint)(0x4000018938), "name":"cbr0", "type":"bridge"}
Jan 13 21:19:15.935439 containerd[1438]: delegateAdd: netconf sent to delegate plugin:
Jan 13 21:19:15.950245 containerd[1438]: {"cniVersion":"0.3.1","hairpinMode":true,"ipMasq":false,"ipam":{"ranges":[[{"subnet":"192.168.0.0/24"}]],"routes":[{"dst":"192.168.0.0/17"}],"type":"host-local"},"isDefaultGateway":true,"isGateway":true,"mtu":1450,"name":"cbr0","type":"bridge"}time="2025-01-13T21:19:15.950022206Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jan 13 21:19:15.950245 containerd[1438]: time="2025-01-13T21:19:15.950071526Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jan 13 21:19:15.950245 containerd[1438]: time="2025-01-13T21:19:15.950089686Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 13 21:19:15.950245 containerd[1438]: time="2025-01-13T21:19:15.950169046Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 13 21:19:15.964449 systemd[1]: run-containerd-runc-k8s.io-1791eb2fab5980384bf4d0fec094eee831b2c5c329336579a2eb79fc05547b92-runc.y7lPF5.mount: Deactivated successfully.
Jan 13 21:19:15.973655 systemd[1]: Started cri-containerd-1791eb2fab5980384bf4d0fec094eee831b2c5c329336579a2eb79fc05547b92.scope - libcontainer container 1791eb2fab5980384bf4d0fec094eee831b2c5c329336579a2eb79fc05547b92.
Jan 13 21:19:15.983588 systemd-resolved[1306]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address
Jan 13 21:19:15.999204 containerd[1438]: time="2025-01-13T21:19:15.999024585Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-4v788,Uid:6c44d2b0-957a-4f1f-8c9d-fe9bef4f2f65,Namespace:kube-system,Attempt:0,} returns sandbox id \"1791eb2fab5980384bf4d0fec094eee831b2c5c329336579a2eb79fc05547b92\""
Jan 13 21:19:15.999886 kubelet[2424]: E0113 21:19:15.999669 2424 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 13 21:19:16.002735 containerd[1438]: time="2025-01-13T21:19:16.002692106Z" level=info msg="CreateContainer within sandbox \"1791eb2fab5980384bf4d0fec094eee831b2c5c329336579a2eb79fc05547b92\" for container &ContainerMetadata{Name:coredns,Attempt:0,}"
Jan 13 21:19:16.012972 containerd[1438]: time="2025-01-13T21:19:16.012929749Z" level=info msg="CreateContainer within sandbox \"1791eb2fab5980384bf4d0fec094eee831b2c5c329336579a2eb79fc05547b92\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"6fe00d514e19ee91f578bfb70fa2cb5f9c51236d51fc09e933c5d936e3cc35d5\""
Jan 13 21:19:16.013554 containerd[1438]: time="2025-01-13T21:19:16.013515270Z" level=info msg="StartContainer for \"6fe00d514e19ee91f578bfb70fa2cb5f9c51236d51fc09e933c5d936e3cc35d5\""
Jan 13 21:19:16.038703 systemd[1]: Started cri-containerd-6fe00d514e19ee91f578bfb70fa2cb5f9c51236d51fc09e933c5d936e3cc35d5.scope - libcontainer container 6fe00d514e19ee91f578bfb70fa2cb5f9c51236d51fc09e933c5d936e3cc35d5.
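The delegateAdd netconf printed at 21:19:15.950245 above is one unbroken line, fused with the shim's first log write; the same JSON re-indented here for readability, content unchanged (the Go dump at 21:19:15.935439 shows this config before marshalling, with the mtu still an unresolved pointer that serializes to 1450):

    {
      "cniVersion": "0.3.1",
      "hairpinMode": true,
      "ipMasq": false,
      "ipam": {
        "ranges": [[{"subnet": "192.168.0.0/24"}]],
        "routes": [{"dst": "192.168.0.0/17"}],
        "type": "host-local"
      },
      "isDefaultGateway": true,
      "isGateway": true,
      "mtu": 1450,
      "name": "cbr0",
      "type": "bridge"
    }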
Jan 13 21:19:16.065458 containerd[1438]: time="2025-01-13T21:19:16.065324128Z" level=info msg="StartContainer for \"6fe00d514e19ee91f578bfb70fa2cb5f9c51236d51fc09e933c5d936e3cc35d5\" returns successfully"
Jan 13 21:19:16.875057 kubelet[2424]: E0113 21:19:16.875018 2424 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 13 21:19:16.875518 containerd[1438]: time="2025-01-13T21:19:16.875412169Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-xjpvx,Uid:571933e3-16f0-4e82-9867-e2d4431d18ed,Namespace:kube-system,Attempt:0,}"
Jan 13 21:19:16.900518 kernel: cni0: port 2(veth0afd9743) entered blocking state
Jan 13 21:19:16.900605 kernel: cni0: port 2(veth0afd9743) entered disabled state
Jan 13 21:19:16.902028 kernel: veth0afd9743: entered allmulticast mode
Jan 13 21:19:16.902834 kernel: veth0afd9743: entered promiscuous mode
Jan 13 21:19:16.902883 kernel: cni0: port 2(veth0afd9743) entered blocking state
Jan 13 21:19:16.904563 kernel: cni0: port 2(veth0afd9743) entered forwarding state
Jan 13 21:19:16.905033 systemd-networkd[1376]: veth0afd9743: Link UP
Jan 13 21:19:16.909101 systemd-networkd[1376]: veth0afd9743: Gained carrier
Jan 13 21:19:16.910586 containerd[1438]: map[string]interface {}{"cniVersion":"0.3.1", "hairpinMode":true, "ipMasq":false, "ipam":map[string]interface {}{"ranges":[][]map[string]interface {}{[]map[string]interface {}{map[string]interface {}{"subnet":"192.168.0.0/24"}}}, "routes":[]types.Route{types.Route{Dst:net.IPNet{IP:net.IP{0xc0, 0xa8, 0x0, 0x0}, Mask:net.IPMask{0xff, 0xff, 0x80, 0x0}}, GW:net.IP(nil)}}, "type":"host-local"}, "isDefaultGateway":true, "isGateway":true, "mtu":(*uint)(0x4000012938), "name":"cbr0", "type":"bridge"}
Jan 13 21:19:16.910586 containerd[1438]: delegateAdd: netconf sent to delegate plugin:
Jan 13 21:19:16.929336 containerd[1438]: {"cniVersion":"0.3.1","hairpinMode":true,"ipMasq":false,"ipam":{"ranges":[[{"subnet":"192.168.0.0/24"}]],"routes":[{"dst":"192.168.0.0/17"}],"type":"host-local"},"isDefaultGateway":true,"isGateway":true,"mtu":1450,"name":"cbr0","type":"bridge"}time="2025-01-13T21:19:16.929205868Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jan 13 21:19:16.929336 containerd[1438]: time="2025-01-13T21:19:16.929269988Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jan 13 21:19:16.929336 containerd[1438]: time="2025-01-13T21:19:16.929282068Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 13 21:19:16.929525 containerd[1438]: time="2025-01-13T21:19:16.929371028Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 13 21:19:16.940642 kubelet[2424]: E0113 21:19:16.940616 2424 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 13 21:19:16.949677 systemd[1]: Started cri-containerd-aab5688a84d038ed7fc00d2e10a2151f490058a66109038a6991782fd5b560d2.scope - libcontainer container aab5688a84d038ed7fc00d2e10a2151f490058a66109038a6991782fd5b560d2.
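The dns.go:153 warning that recurs throughout this log means the host's resolver configuration lists more nameservers than the limit of three that kubelet (following the glibc MAXNS resolver limit) will pass through to pods; the "applied nameserver line" shows the three that survived. A hypothetical /etc/resolv.conf that would produce exactly this warning; only the first three entries are attested by the log, the fourth is invented for illustration:

    # /etc/resolv.conf (hypothetical)
    nameserver 1.1.1.1
    nameserver 1.0.0.1
    nameserver 8.8.8.8
    nameserver 8.8.4.4   # one entry too many; kubelet omits it and logs the warning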
Jan 13 21:19:16.951989 kubelet[2424]: I0113 21:19:16.951929 2424 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-6f6b679f8f-4v788" podStartSLOduration=16.951914756 podStartE2EDuration="16.951914756s" podCreationTimestamp="2025-01-13 21:19:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-13 21:19:16.951030236 +0000 UTC m=+22.181556883" watchObservedRunningTime="2025-01-13 21:19:16.951914756 +0000 UTC m=+22.182441403"
Jan 13 21:19:16.966700 systemd-resolved[1306]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address
Jan 13 21:19:16.985080 containerd[1438]: time="2025-01-13T21:19:16.985031408Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-xjpvx,Uid:571933e3-16f0-4e82-9867-e2d4431d18ed,Namespace:kube-system,Attempt:0,} returns sandbox id \"aab5688a84d038ed7fc00d2e10a2151f490058a66109038a6991782fd5b560d2\""
Jan 13 21:19:16.986101 kubelet[2424]: E0113 21:19:16.985991 2424 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 13 21:19:16.992174 containerd[1438]: time="2025-01-13T21:19:16.992135970Z" level=info msg="CreateContainer within sandbox \"aab5688a84d038ed7fc00d2e10a2151f490058a66109038a6991782fd5b560d2\" for container &ContainerMetadata{Name:coredns,Attempt:0,}"
Jan 13 21:19:17.007910 containerd[1438]: time="2025-01-13T21:19:17.007859855Z" level=info msg="CreateContainer within sandbox \"aab5688a84d038ed7fc00d2e10a2151f490058a66109038a6991782fd5b560d2\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"e2e9c40a352e439554df2ea5fe2be6e19344a33e15a70f1edf6ce5f56f14a0dd\""
Jan 13 21:19:17.008405 containerd[1438]: time="2025-01-13T21:19:17.008344495Z" level=info msg="StartContainer for \"e2e9c40a352e439554df2ea5fe2be6e19344a33e15a70f1edf6ce5f56f14a0dd\""
Jan 13 21:19:17.035693 systemd[1]: Started cri-containerd-e2e9c40a352e439554df2ea5fe2be6e19344a33e15a70f1edf6ce5f56f14a0dd.scope - libcontainer container e2e9c40a352e439554df2ea5fe2be6e19344a33e15a70f1edf6ce5f56f14a0dd.
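A worked reading of the pod_startup_latency_tracker entry above, and of the kube-flannel one at 21:19:05.936620, using only timestamps already present in this log: podStartE2EDuration is observedRunningTime minus podCreationTimestamp, and podStartSLOduration additionally excludes image-pull time:

    kube-flannel-ds-n57jr:
      pull time = 21:19:04.482423759 - 21:19:00.811408111 = 3.671015648s
      E2E       = 21:19:05.936551053 - 21:18:59           = 6.936551053s
      SLO       = 6.936551053s - 3.671015648s             = 3.265535405s
    coredns-6f6b679f8f-4v788:
      the pull timestamps are the zero sentinel (0001-01-01), i.e. no image
      pull occurred, so SLO = E2E = 16.951914756s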
Jan 13 21:19:17.058736 containerd[1438]: time="2025-01-13T21:19:17.058654152Z" level=info msg="StartContainer for \"e2e9c40a352e439554df2ea5fe2be6e19344a33e15a70f1edf6ce5f56f14a0dd\" returns successfully"
Jan 13 21:19:17.577653 systemd-networkd[1376]: cni0: Gained IPv6LL
Jan 13 21:19:17.944216 kubelet[2424]: E0113 21:19:17.944113 2424 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 13 21:19:17.944216 kubelet[2424]: E0113 21:19:17.944158 2424 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 13 21:19:17.961648 systemd-networkd[1376]: veth0afd9743: Gained IPv6LL
Jan 13 21:19:17.961918 systemd-networkd[1376]: veth1ef7ed09: Gained IPv6LL
Jan 13 21:19:18.008069 kubelet[2424]: I0113 21:19:18.007960 2424 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-6f6b679f8f-xjpvx" podStartSLOduration=18.007912581 podStartE2EDuration="18.007912581s" podCreationTimestamp="2025-01-13 21:19:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-13 21:19:18.007878941 +0000 UTC m=+23.238405588" watchObservedRunningTime="2025-01-13 21:19:18.007912581 +0000 UTC m=+23.238439228"
Jan 13 21:19:18.945440 kubelet[2424]: E0113 21:19:18.945408 2424 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 13 21:19:23.668134 systemd[1]: Started sshd@5-10.0.0.74:22-10.0.0.1:58856.service - OpenSSH per-connection server daemon (10.0.0.1:58856).
Jan 13 21:19:23.711523 sshd[3380]: Accepted publickey for core from 10.0.0.1 port 58856 ssh2: RSA SHA256:yd4gyStb+mhc+KSvOhXa4vXVFZWeTXZvH887VPDApJg
Jan 13 21:19:23.713206 sshd[3380]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 13 21:19:23.716949 systemd-logind[1420]: New session 6 of user core.
Jan 13 21:19:23.726667 systemd[1]: Started session-6.scope - Session 6 of User core.
Jan 13 21:19:23.868030 sshd[3380]: pam_unix(sshd:session): session closed for user core
Jan 13 21:19:23.872669 systemd[1]: sshd@5-10.0.0.74:22-10.0.0.1:58856.service: Deactivated successfully.
Jan 13 21:19:23.875303 systemd[1]: session-6.scope: Deactivated successfully.
Jan 13 21:19:23.879707 systemd-logind[1420]: Session 6 logged out. Waiting for processes to exit.
Jan 13 21:19:23.881019 systemd-logind[1420]: Removed session 6.
Jan 13 21:19:24.975207 kubelet[2424]: E0113 21:19:24.974842 2424 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 13 21:19:25.955462 kubelet[2424]: E0113 21:19:25.955434 2424 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 13 21:19:28.881065 systemd[1]: Started sshd@6-10.0.0.74:22-10.0.0.1:58868.service - OpenSSH per-connection server daemon (10.0.0.1:58868).
Jan 13 21:19:28.920690 sshd[3421]: Accepted publickey for core from 10.0.0.1 port 58868 ssh2: RSA SHA256:yd4gyStb+mhc+KSvOhXa4vXVFZWeTXZvH887VPDApJg
Jan 13 21:19:28.922004 sshd[3421]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 13 21:19:28.925711 systemd-logind[1420]: New session 7 of user core.
Jan 13 21:19:28.942641 systemd[1]: Started session-7.scope - Session 7 of User core.
Jan 13 21:19:29.050695 sshd[3421]: pam_unix(sshd:session): session closed for user core
Jan 13 21:19:29.054798 systemd[1]: sshd@6-10.0.0.74:22-10.0.0.1:58868.service: Deactivated successfully.
Jan 13 21:19:29.056983 systemd[1]: session-7.scope: Deactivated successfully.
Jan 13 21:19:29.057692 systemd-logind[1420]: Session 7 logged out. Waiting for processes to exit.
Jan 13 21:19:29.058617 systemd-logind[1420]: Removed session 7.
Jan 13 21:19:34.064089 systemd[1]: Started sshd@7-10.0.0.74:22-10.0.0.1:45472.service - OpenSSH per-connection server daemon (10.0.0.1:45472).
Jan 13 21:19:34.103554 sshd[3459]: Accepted publickey for core from 10.0.0.1 port 45472 ssh2: RSA SHA256:yd4gyStb+mhc+KSvOhXa4vXVFZWeTXZvH887VPDApJg
Jan 13 21:19:34.105024 sshd[3459]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 13 21:19:34.109182 systemd-logind[1420]: New session 8 of user core.
Jan 13 21:19:34.121630 systemd[1]: Started session-8.scope - Session 8 of User core.
Jan 13 21:19:34.227036 sshd[3459]: pam_unix(sshd:session): session closed for user core
Jan 13 21:19:34.237891 systemd[1]: sshd@7-10.0.0.74:22-10.0.0.1:45472.service: Deactivated successfully.
Jan 13 21:19:34.239375 systemd[1]: session-8.scope: Deactivated successfully.
Jan 13 21:19:34.242684 systemd-logind[1420]: Session 8 logged out. Waiting for processes to exit.
Jan 13 21:19:34.252817 systemd[1]: Started sshd@8-10.0.0.74:22-10.0.0.1:45486.service - OpenSSH per-connection server daemon (10.0.0.1:45486).
Jan 13 21:19:34.253812 systemd-logind[1420]: Removed session 8.
Jan 13 21:19:34.287107 sshd[3475]: Accepted publickey for core from 10.0.0.1 port 45486 ssh2: RSA SHA256:yd4gyStb+mhc+KSvOhXa4vXVFZWeTXZvH887VPDApJg
Jan 13 21:19:34.288329 sshd[3475]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 13 21:19:34.292383 systemd-logind[1420]: New session 9 of user core.
Jan 13 21:19:34.307643 systemd[1]: Started session-9.scope - Session 9 of User core.
Jan 13 21:19:34.443958 sshd[3475]: pam_unix(sshd:session): session closed for user core
Jan 13 21:19:34.452565 systemd[1]: sshd@8-10.0.0.74:22-10.0.0.1:45486.service: Deactivated successfully.
Jan 13 21:19:34.454865 systemd[1]: session-9.scope: Deactivated successfully.
Jan 13 21:19:34.455531 systemd-logind[1420]: Session 9 logged out. Waiting for processes to exit.
Jan 13 21:19:34.462010 systemd[1]: Started sshd@9-10.0.0.74:22-10.0.0.1:45490.service - OpenSSH per-connection server daemon (10.0.0.1:45490).
Jan 13 21:19:34.464738 systemd-logind[1420]: Removed session 9.
Jan 13 21:19:34.511495 sshd[3487]: Accepted publickey for core from 10.0.0.1 port 45490 ssh2: RSA SHA256:yd4gyStb+mhc+KSvOhXa4vXVFZWeTXZvH887VPDApJg
Jan 13 21:19:34.513049 sshd[3487]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 13 21:19:34.517120 systemd-logind[1420]: New session 10 of user core.
Jan 13 21:19:34.532652 systemd[1]: Started session-10.scope - Session 10 of User core.
Jan 13 21:19:34.639971 sshd[3487]: pam_unix(sshd:session): session closed for user core
Jan 13 21:19:34.643116 systemd[1]: sshd@9-10.0.0.74:22-10.0.0.1:45490.service: Deactivated successfully.
Jan 13 21:19:34.644866 systemd[1]: session-10.scope: Deactivated successfully.
Jan 13 21:19:34.645502 systemd-logind[1420]: Session 10 logged out. Waiting for processes to exit.
Jan 13 21:19:34.646351 systemd-logind[1420]: Removed session 10.
Jan 13 21:19:39.651027 systemd[1]: Started sshd@10-10.0.0.74:22-10.0.0.1:45504.service - OpenSSH per-connection server daemon (10.0.0.1:45504).
Jan 13 21:19:39.690673 sshd[3522]: Accepted publickey for core from 10.0.0.1 port 45504 ssh2: RSA SHA256:yd4gyStb+mhc+KSvOhXa4vXVFZWeTXZvH887VPDApJg
Jan 13 21:19:39.691888 sshd[3522]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 13 21:19:39.696527 systemd-logind[1420]: New session 11 of user core.
Jan 13 21:19:39.706673 systemd[1]: Started session-11.scope - Session 11 of User core.
Jan 13 21:19:39.817208 sshd[3522]: pam_unix(sshd:session): session closed for user core
Jan 13 21:19:39.823052 systemd[1]: sshd@10-10.0.0.74:22-10.0.0.1:45504.service: Deactivated successfully.
Jan 13 21:19:39.824759 systemd[1]: session-11.scope: Deactivated successfully.
Jan 13 21:19:39.826347 systemd-logind[1420]: Session 11 logged out. Waiting for processes to exit.
Jan 13 21:19:39.837835 systemd[1]: Started sshd@11-10.0.0.74:22-10.0.0.1:45512.service - OpenSSH per-connection server daemon (10.0.0.1:45512).
Jan 13 21:19:39.839444 systemd-logind[1420]: Removed session 11.
Jan 13 21:19:39.871842 sshd[3537]: Accepted publickey for core from 10.0.0.1 port 45512 ssh2: RSA SHA256:yd4gyStb+mhc+KSvOhXa4vXVFZWeTXZvH887VPDApJg
Jan 13 21:19:39.873260 sshd[3537]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 13 21:19:39.877787 systemd-logind[1420]: New session 12 of user core.
Jan 13 21:19:39.885656 systemd[1]: Started session-12.scope - Session 12 of User core.
Jan 13 21:19:40.151284 sshd[3537]: pam_unix(sshd:session): session closed for user core
Jan 13 21:19:40.163113 systemd[1]: sshd@11-10.0.0.74:22-10.0.0.1:45512.service: Deactivated successfully.
Jan 13 21:19:40.164553 systemd[1]: session-12.scope: Deactivated successfully.
Jan 13 21:19:40.166422 systemd-logind[1420]: Session 12 logged out. Waiting for processes to exit.
Jan 13 21:19:40.167905 systemd[1]: Started sshd@12-10.0.0.74:22-10.0.0.1:45522.service - OpenSSH per-connection server daemon (10.0.0.1:45522).
Jan 13 21:19:40.168815 systemd-logind[1420]: Removed session 12.
Jan 13 21:19:40.209914 sshd[3549]: Accepted publickey for core from 10.0.0.1 port 45522 ssh2: RSA SHA256:yd4gyStb+mhc+KSvOhXa4vXVFZWeTXZvH887VPDApJg
Jan 13 21:19:40.211261 sshd[3549]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 13 21:19:40.216446 systemd-logind[1420]: New session 13 of user core.
Jan 13 21:19:40.231690 systemd[1]: Started session-13.scope - Session 13 of User core.
Jan 13 21:19:41.399318 sshd[3549]: pam_unix(sshd:session): session closed for user core
Jan 13 21:19:41.421315 systemd[1]: sshd@12-10.0.0.74:22-10.0.0.1:45522.service: Deactivated successfully.
Jan 13 21:19:41.424355 systemd[1]: session-13.scope: Deactivated successfully.
Jan 13 21:19:41.425937 systemd-logind[1420]: Session 13 logged out. Waiting for processes to exit.
Jan 13 21:19:41.436136 systemd[1]: Started sshd@13-10.0.0.74:22-10.0.0.1:45530.service - OpenSSH per-connection server daemon (10.0.0.1:45530).
Jan 13 21:19:41.438262 systemd-logind[1420]: Removed session 13.
Jan 13 21:19:41.489062 sshd[3590]: Accepted publickey for core from 10.0.0.1 port 45530 ssh2: RSA SHA256:yd4gyStb+mhc+KSvOhXa4vXVFZWeTXZvH887VPDApJg
Jan 13 21:19:41.490757 sshd[3590]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 13 21:19:41.495283 systemd-logind[1420]: New session 14 of user core.
Jan 13 21:19:41.504728 systemd[1]: Started session-14.scope - Session 14 of User core.
Jan 13 21:19:41.716847 sshd[3590]: pam_unix(sshd:session): session closed for user core
Jan 13 21:19:41.726099 systemd[1]: sshd@13-10.0.0.74:22-10.0.0.1:45530.service: Deactivated successfully.
Jan 13 21:19:41.728794 systemd[1]: session-14.scope: Deactivated successfully.
Jan 13 21:19:41.730223 systemd-logind[1420]: Session 14 logged out. Waiting for processes to exit.
Jan 13 21:19:41.744117 systemd[1]: Started sshd@14-10.0.0.74:22-10.0.0.1:45542.service - OpenSSH per-connection server daemon (10.0.0.1:45542).
Jan 13 21:19:41.745593 systemd-logind[1420]: Removed session 14.
Jan 13 21:19:41.778074 sshd[3603]: Accepted publickey for core from 10.0.0.1 port 45542 ssh2: RSA SHA256:yd4gyStb+mhc+KSvOhXa4vXVFZWeTXZvH887VPDApJg
Jan 13 21:19:41.779575 sshd[3603]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 13 21:19:41.783544 systemd-logind[1420]: New session 15 of user core.
Jan 13 21:19:41.794653 systemd[1]: Started session-15.scope - Session 15 of User core.
Jan 13 21:19:41.901874 sshd[3603]: pam_unix(sshd:session): session closed for user core
Jan 13 21:19:41.905407 systemd[1]: sshd@14-10.0.0.74:22-10.0.0.1:45542.service: Deactivated successfully.
Jan 13 21:19:41.907132 systemd[1]: session-15.scope: Deactivated successfully.
Jan 13 21:19:41.907810 systemd-logind[1420]: Session 15 logged out. Waiting for processes to exit.
Jan 13 21:19:41.909204 systemd-logind[1420]: Removed session 15.
Jan 13 21:19:46.913249 systemd[1]: Started sshd@15-10.0.0.74:22-10.0.0.1:55296.service - OpenSSH per-connection server daemon (10.0.0.1:55296).
Jan 13 21:19:46.953316 sshd[3642]: Accepted publickey for core from 10.0.0.1 port 55296 ssh2: RSA SHA256:yd4gyStb+mhc+KSvOhXa4vXVFZWeTXZvH887VPDApJg
Jan 13 21:19:46.954519 sshd[3642]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 13 21:19:46.958296 systemd-logind[1420]: New session 16 of user core.
Jan 13 21:19:46.964650 systemd[1]: Started session-16.scope - Session 16 of User core.
Jan 13 21:19:47.067102 sshd[3642]: pam_unix(sshd:session): session closed for user core
Jan 13 21:19:47.070331 systemd[1]: sshd@15-10.0.0.74:22-10.0.0.1:55296.service: Deactivated successfully.
Jan 13 21:19:47.073944 systemd[1]: session-16.scope: Deactivated successfully.
Jan 13 21:19:47.074512 systemd-logind[1420]: Session 16 logged out. Waiting for processes to exit.
Jan 13 21:19:47.075373 systemd-logind[1420]: Removed session 16.
Jan 13 21:19:52.087296 systemd[1]: Started sshd@16-10.0.0.74:22-10.0.0.1:55310.service - OpenSSH per-connection server daemon (10.0.0.1:55310).
Jan 13 21:19:52.127390 sshd[3679]: Accepted publickey for core from 10.0.0.1 port 55310 ssh2: RSA SHA256:yd4gyStb+mhc+KSvOhXa4vXVFZWeTXZvH887VPDApJg
Jan 13 21:19:52.128638 sshd[3679]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 13 21:19:52.132640 systemd-logind[1420]: New session 17 of user core.
Jan 13 21:19:52.144639 systemd[1]: Started session-17.scope - Session 17 of User core.
Jan 13 21:19:52.247326 sshd[3679]: pam_unix(sshd:session): session closed for user core
Jan 13 21:19:52.250594 systemd[1]: sshd@16-10.0.0.74:22-10.0.0.1:55310.service: Deactivated successfully.
Jan 13 21:19:52.253941 systemd[1]: session-17.scope: Deactivated successfully.
Jan 13 21:19:52.254461 systemd-logind[1420]: Session 17 logged out. Waiting for processes to exit.
Jan 13 21:19:52.255415 systemd-logind[1420]: Removed session 17.
Jan 13 21:19:57.258085 systemd[1]: Started sshd@17-10.0.0.74:22-10.0.0.1:44316.service - OpenSSH per-connection server daemon (10.0.0.1:44316).
Jan 13 21:19:57.296160 sshd[3717]: Accepted publickey for core from 10.0.0.1 port 44316 ssh2: RSA SHA256:yd4gyStb+mhc+KSvOhXa4vXVFZWeTXZvH887VPDApJg
Jan 13 21:19:57.297667 sshd[3717]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 13 21:19:57.302299 systemd-logind[1420]: New session 18 of user core.
Jan 13 21:19:57.311698 systemd[1]: Started session-18.scope - Session 18 of User core.
Jan 13 21:19:57.424152 sshd[3717]: pam_unix(sshd:session): session closed for user core
Jan 13 21:19:57.427715 systemd[1]: sshd@17-10.0.0.74:22-10.0.0.1:44316.service: Deactivated successfully.
Jan 13 21:19:57.429340 systemd[1]: session-18.scope: Deactivated successfully.
Jan 13 21:19:57.432189 systemd-logind[1420]: Session 18 logged out. Waiting for processes to exit.
Jan 13 21:19:57.433595 systemd-logind[1420]: Removed session 18.