Feb 13 19:00:09.951491 kernel: Booting Linux on physical CPU 0x0000000000 [0x413fd0c1]
Feb 13 19:00:09.951514 kernel: Linux version 6.6.74-flatcar (build@pony-truck.infra.kinvolk.io) (aarch64-cros-linux-gnu-gcc (Gentoo Hardened 14.2.1_p20241116 p3) 14.2.1 20241116, GNU ld (Gentoo 2.42 p6) 2.42.0) #1 SMP PREEMPT Thu Feb 13 17:29:42 -00 2025
Feb 13 19:00:09.951524 kernel: KASLR enabled
Feb 13 19:00:09.951529 kernel: efi: EFI v2.7 by EDK II
Feb 13 19:00:09.951535 kernel: efi: SMBIOS 3.0=0xdced0000 MEMATTR=0xdbbae018 ACPI 2.0=0xd9b43018 RNG=0xd9b43a18 MEMRESERVE=0xd9b40218
Feb 13 19:00:09.951540 kernel: random: crng init done
Feb 13 19:00:09.951547 kernel: secureboot: Secure boot disabled
Feb 13 19:00:09.951553 kernel: ACPI: Early table checksum verification disabled
Feb 13 19:00:09.951559 kernel: ACPI: RSDP 0x00000000D9B43018 000024 (v02 BOCHS )
Feb 13 19:00:09.951566 kernel: ACPI: XSDT 0x00000000D9B43F18 000064 (v01 BOCHS BXPC 00000001 01000013)
Feb 13 19:00:09.951572 kernel: ACPI: FACP 0x00000000D9B43B18 000114 (v06 BOCHS BXPC 00000001 BXPC 00000001)
Feb 13 19:00:09.951578 kernel: ACPI: DSDT 0x00000000D9B41018 0014A2 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Feb 13 19:00:09.951584 kernel: ACPI: APIC 0x00000000D9B43C98 0001A8 (v04 BOCHS BXPC 00000001 BXPC 00000001)
Feb 13 19:00:09.951590 kernel: ACPI: PPTT 0x00000000D9B43098 00009C (v02 BOCHS BXPC 00000001 BXPC 00000001)
Feb 13 19:00:09.951597 kernel: ACPI: GTDT 0x00000000D9B43818 000060 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Feb 13 19:00:09.951605 kernel: ACPI: MCFG 0x00000000D9B43A98 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001)
Feb 13 19:00:09.951611 kernel: ACPI: SPCR 0x00000000D9B43918 000050 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Feb 13 19:00:09.951617 kernel: ACPI: DBG2 0x00000000D9B43998 000057 (v00 BOCHS BXPC 00000001 BXPC 00000001)
Feb 13 19:00:09.951623 kernel: ACPI: IORT 0x00000000D9B43198 000080 (v03 BOCHS BXPC 00000001 BXPC 00000001)
Feb 13 19:00:09.951629 kernel: ACPI: SPCR: console: pl011,mmio,0x9000000,9600
Feb 13 19:00:09.951635 kernel: NUMA: Failed to initialise from firmware
Feb 13 19:00:09.951641 kernel: NUMA: Faking a node at [mem 0x0000000040000000-0x00000000dcffffff]
Feb 13 19:00:09.951647 kernel: NUMA: NODE_DATA [mem 0xdc958800-0xdc95dfff]
Feb 13 19:00:09.951653 kernel: Zone ranges:
Feb 13 19:00:09.951659 kernel: DMA [mem 0x0000000040000000-0x00000000dcffffff]
Feb 13 19:00:09.951667 kernel: DMA32 empty
Feb 13 19:00:09.951673 kernel: Normal empty
Feb 13 19:00:09.951679 kernel: Movable zone start for each node
Feb 13 19:00:09.951684 kernel: Early memory node ranges
Feb 13 19:00:09.951690 kernel: node 0: [mem 0x0000000040000000-0x00000000d967ffff]
Feb 13 19:00:09.951696 kernel: node 0: [mem 0x00000000d9680000-0x00000000d968ffff]
Feb 13 19:00:09.951703 kernel: node 0: [mem 0x00000000d9690000-0x00000000d976ffff]
Feb 13 19:00:09.951709 kernel: node 0: [mem 0x00000000d9770000-0x00000000d9b3ffff]
Feb 13 19:00:09.951715 kernel: node 0: [mem 0x00000000d9b40000-0x00000000dce1ffff]
Feb 13 19:00:09.951721 kernel: node 0: [mem 0x00000000dce20000-0x00000000dceaffff]
Feb 13 19:00:09.951727 kernel: node 0: [mem 0x00000000dceb0000-0x00000000dcebffff]
Feb 13 19:00:09.951733 kernel: node 0: [mem 0x00000000dcec0000-0x00000000dcfdffff]
Feb 13 19:00:09.951740 kernel: node 0: [mem 0x00000000dcfe0000-0x00000000dcffffff]
Feb 13 19:00:09.951746 kernel: Initmem setup node 0 [mem 0x0000000040000000-0x00000000dcffffff]
Feb 13 19:00:09.951753 kernel: On node 0, zone DMA: 12288 pages in unavailable ranges
Feb 13 19:00:09.951761 kernel: psci: probing for conduit method from ACPI.
Feb 13 19:00:09.951768 kernel: psci: PSCIv1.1 detected in firmware.
Feb 13 19:00:09.951774 kernel: psci: Using standard PSCI v0.2 function IDs
Feb 13 19:00:09.951782 kernel: psci: Trusted OS migration not required
Feb 13 19:00:09.951789 kernel: psci: SMC Calling Convention v1.1
Feb 13 19:00:09.951795 kernel: smccc: KVM: hypervisor services detected (0x00000000 0x00000000 0x00000000 0x00000003)
Feb 13 19:00:09.951802 kernel: percpu: Embedded 31 pages/cpu s86696 r8192 d32088 u126976
Feb 13 19:00:09.951808 kernel: pcpu-alloc: s86696 r8192 d32088 u126976 alloc=31*4096
Feb 13 19:00:09.951815 kernel: pcpu-alloc: [0] 0 [0] 1 [0] 2 [0] 3
Feb 13 19:00:09.951822 kernel: Detected PIPT I-cache on CPU0
Feb 13 19:00:09.951828 kernel: CPU features: detected: GIC system register CPU interface
Feb 13 19:00:09.951834 kernel: CPU features: detected: Hardware dirty bit management
Feb 13 19:00:09.951841 kernel: CPU features: detected: Spectre-v4
Feb 13 19:00:09.951848 kernel: CPU features: detected: Spectre-BHB
Feb 13 19:00:09.951855 kernel: CPU features: kernel page table isolation forced ON by KASLR
Feb 13 19:00:09.951861 kernel: CPU features: detected: Kernel page table isolation (KPTI)
Feb 13 19:00:09.951868 kernel: CPU features: detected: ARM erratum 1418040
Feb 13 19:00:09.951874 kernel: CPU features: detected: SSBS not fully self-synchronizing
Feb 13 19:00:09.951881 kernel: alternatives: applying boot alternatives
Feb 13 19:00:09.951888 kernel: Kernel command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected acpi=force verity.usrhash=539c350343a869939e6505090036e362452d8f971fd4cfbad5e8b7882835b31b
Feb 13 19:00:09.951895 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Feb 13 19:00:09.951901 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Feb 13 19:00:09.951908 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Feb 13 19:00:09.951914 kernel: Fallback order for Node 0: 0
Feb 13 19:00:09.951922 kernel: Built 1 zonelists, mobility grouping on. Total pages: 633024
Feb 13 19:00:09.951929 kernel: Policy zone: DMA
Feb 13 19:00:09.951935 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Feb 13 19:00:09.951942 kernel: software IO TLB: area num 4.
Feb 13 19:00:09.951948 kernel: software IO TLB: mapped [mem 0x00000000d2e00000-0x00000000d6e00000] (64MB)
Feb 13 19:00:09.951955 kernel: Memory: 2385940K/2572288K available (10304K kernel code, 2186K rwdata, 8092K rodata, 39936K init, 897K bss, 186348K reserved, 0K cma-reserved)
Feb 13 19:00:09.951961 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1
Feb 13 19:00:09.951978 kernel: rcu: Preemptible hierarchical RCU implementation.
Feb 13 19:00:09.951985 kernel: rcu: RCU event tracing is enabled.
Feb 13 19:00:09.951992 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4.
Feb 13 19:00:09.951999 kernel: Trampoline variant of Tasks RCU enabled.
Feb 13 19:00:09.952006 kernel: Tracing variant of Tasks RCU enabled.
Feb 13 19:00:09.952016 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Feb 13 19:00:09.952023 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4
Feb 13 19:00:09.952029 kernel: NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0
Feb 13 19:00:09.952035 kernel: GICv3: 256 SPIs implemented
Feb 13 19:00:09.952057 kernel: GICv3: 0 Extended SPIs implemented
Feb 13 19:00:09.952063 kernel: Root IRQ handler: gic_handle_irq
Feb 13 19:00:09.952070 kernel: GICv3: GICv3 features: 16 PPIs, DirectLPI
Feb 13 19:00:09.952077 kernel: GICv3: CPU0: found redistributor 0 region 0:0x00000000080a0000
Feb 13 19:00:09.952084 kernel: ITS [mem 0x08080000-0x0809ffff]
Feb 13 19:00:09.952091 kernel: ITS@0x0000000008080000: allocated 8192 Devices @400c0000 (indirect, esz 8, psz 64K, shr 1)
Feb 13 19:00:09.952097 kernel: ITS@0x0000000008080000: allocated 8192 Interrupt Collections @400d0000 (flat, esz 8, psz 64K, shr 1)
Feb 13 19:00:09.952106 kernel: GICv3: using LPI property table @0x00000000400f0000
Feb 13 19:00:09.952112 kernel: GICv3: CPU0: using allocated LPI pending table @0x0000000040100000
Feb 13 19:00:09.952119 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Feb 13 19:00:09.952126 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Feb 13 19:00:09.952132 kernel: arch_timer: cp15 timer(s) running at 25.00MHz (virt).
Feb 13 19:00:09.952139 kernel: clocksource: arch_sys_counter: mask: 0xffffffffffffff max_cycles: 0x5c40939b5, max_idle_ns: 440795202646 ns
Feb 13 19:00:09.952145 kernel: sched_clock: 56 bits at 25MHz, resolution 40ns, wraps every 4398046511100ns
Feb 13 19:00:09.952152 kernel: arm-pv: using stolen time PV
Feb 13 19:00:09.952159 kernel: Console: colour dummy device 80x25
Feb 13 19:00:09.952165 kernel: ACPI: Core revision 20230628
Feb 13 19:00:09.952172 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 50.00 BogoMIPS (lpj=25000)
Feb 13 19:00:09.952181 kernel: pid_max: default: 32768 minimum: 301
Feb 13 19:00:09.952188 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity
Feb 13 19:00:09.952195 kernel: landlock: Up and running.
Feb 13 19:00:09.952202 kernel: SELinux: Initializing.
Feb 13 19:00:09.952209 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Feb 13 19:00:09.952216 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Feb 13 19:00:09.952223 kernel: RCU Tasks: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Feb 13 19:00:09.952230 kernel: RCU Tasks Trace: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Feb 13 19:00:09.952236 kernel: rcu: Hierarchical SRCU implementation.
Feb 13 19:00:09.952244 kernel: rcu: Max phase no-delay instances is 400.
Feb 13 19:00:09.952251 kernel: Platform MSI: ITS@0x8080000 domain created
Feb 13 19:00:09.952258 kernel: PCI/MSI: ITS@0x8080000 domain created
Feb 13 19:00:09.952270 kernel: Remapping and enabling EFI services.
Feb 13 19:00:09.952277 kernel: smp: Bringing up secondary CPUs ...
Feb 13 19:00:09.952284 kernel: Detected PIPT I-cache on CPU1
Feb 13 19:00:09.952291 kernel: GICv3: CPU1: found redistributor 1 region 0:0x00000000080c0000
Feb 13 19:00:09.952297 kernel: GICv3: CPU1: using allocated LPI pending table @0x0000000040110000
Feb 13 19:00:09.952304 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Feb 13 19:00:09.952312 kernel: CPU1: Booted secondary processor 0x0000000001 [0x413fd0c1]
Feb 13 19:00:09.952319 kernel: Detected PIPT I-cache on CPU2
Feb 13 19:00:09.952330 kernel: GICv3: CPU2: found redistributor 2 region 0:0x00000000080e0000
Feb 13 19:00:09.952339 kernel: GICv3: CPU2: using allocated LPI pending table @0x0000000040120000
Feb 13 19:00:09.952346 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Feb 13 19:00:09.952352 kernel: CPU2: Booted secondary processor 0x0000000002 [0x413fd0c1]
Feb 13 19:00:09.952359 kernel: Detected PIPT I-cache on CPU3
Feb 13 19:00:09.952366 kernel: GICv3: CPU3: found redistributor 3 region 0:0x0000000008100000
Feb 13 19:00:09.952373 kernel: GICv3: CPU3: using allocated LPI pending table @0x0000000040130000
Feb 13 19:00:09.952382 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Feb 13 19:00:09.952388 kernel: CPU3: Booted secondary processor 0x0000000003 [0x413fd0c1]
Feb 13 19:00:09.952395 kernel: smp: Brought up 1 node, 4 CPUs
Feb 13 19:00:09.952402 kernel: SMP: Total of 4 processors activated.
Feb 13 19:00:09.952409 kernel: CPU features: detected: 32-bit EL0 Support
Feb 13 19:00:09.952416 kernel: CPU features: detected: Data cache clean to the PoU not required for I/D coherence
Feb 13 19:00:09.952423 kernel: CPU features: detected: Common not Private translations
Feb 13 19:00:09.952430 kernel: CPU features: detected: CRC32 instructions
Feb 13 19:00:09.952438 kernel: CPU features: detected: Enhanced Virtualization Traps
Feb 13 19:00:09.952445 kernel: CPU features: detected: RCpc load-acquire (LDAPR)
Feb 13 19:00:09.952452 kernel: CPU features: detected: LSE atomic instructions
Feb 13 19:00:09.952459 kernel: CPU features: detected: Privileged Access Never
Feb 13 19:00:09.952466 kernel: CPU features: detected: RAS Extension Support
Feb 13 19:00:09.952473 kernel: CPU features: detected: Speculative Store Bypassing Safe (SSBS)
Feb 13 19:00:09.952480 kernel: CPU: All CPU(s) started at EL1
Feb 13 19:00:09.952487 kernel: alternatives: applying system-wide alternatives
Feb 13 19:00:09.952494 kernel: devtmpfs: initialized
Feb 13 19:00:09.952501 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Feb 13 19:00:09.952509 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear)
Feb 13 19:00:09.952516 kernel: pinctrl core: initialized pinctrl subsystem
Feb 13 19:00:09.952523 kernel: SMBIOS 3.0.0 present.
Feb 13 19:00:09.952530 kernel: DMI: QEMU KVM Virtual Machine, BIOS unknown 02/02/2022
Feb 13 19:00:09.952537 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Feb 13 19:00:09.952544 kernel: DMA: preallocated 512 KiB GFP_KERNEL pool for atomic allocations
Feb 13 19:00:09.952551 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations
Feb 13 19:00:09.952558 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations
Feb 13 19:00:09.952566 kernel: audit: initializing netlink subsys (disabled)
Feb 13 19:00:09.952574 kernel: audit: type=2000 audit(0.018:1): state=initialized audit_enabled=0 res=1
Feb 13 19:00:09.952581 kernel: thermal_sys: Registered thermal governor 'step_wise'
Feb 13 19:00:09.952588 kernel: cpuidle: using governor menu
Feb 13 19:00:09.952595 kernel: hw-breakpoint: found 6 breakpoint and 4 watchpoint registers.
Feb 13 19:00:09.952602 kernel: ASID allocator initialised with 32768 entries
Feb 13 19:00:09.952609 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Feb 13 19:00:09.952616 kernel: Serial: AMBA PL011 UART driver
Feb 13 19:00:09.952623 kernel: Modules: 2G module region forced by RANDOMIZE_MODULE_REGION_FULL
Feb 13 19:00:09.952630 kernel: Modules: 0 pages in range for non-PLT usage
Feb 13 19:00:09.952638 kernel: Modules: 508880 pages in range for PLT usage
Feb 13 19:00:09.952645 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Feb 13 19:00:09.952652 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 1.00 GiB page
Feb 13 19:00:09.952659 kernel: HugeTLB: registered 32.0 MiB page size, pre-allocated 0 pages
Feb 13 19:00:09.952666 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 32.0 MiB page
Feb 13 19:00:09.952672 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Feb 13 19:00:09.952679 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 2.00 MiB page
Feb 13 19:00:09.952686 kernel: HugeTLB: registered 64.0 KiB page size, pre-allocated 0 pages
Feb 13 19:00:09.952693 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 64.0 KiB page
Feb 13 19:00:09.952702 kernel: ACPI: Added _OSI(Module Device)
Feb 13 19:00:09.952709 kernel: ACPI: Added _OSI(Processor Device)
Feb 13 19:00:09.952716 kernel: ACPI: Added _OSI(3.0 _SCP Extensions)
Feb 13 19:00:09.952723 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Feb 13 19:00:09.952730 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Feb 13 19:00:09.952736 kernel: ACPI: Interpreter enabled
Feb 13 19:00:09.952743 kernel: ACPI: Using GIC for interrupt routing
Feb 13 19:00:09.952750 kernel: ACPI: MCFG table detected, 1 entries
Feb 13 19:00:09.952757 kernel: ARMH0011:00: ttyAMA0 at MMIO 0x9000000 (irq = 12, base_baud = 0) is a SBSA
Feb 13 19:00:09.952765 kernel: printk: console [ttyAMA0] enabled
Feb 13 19:00:09.952772 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Feb 13 19:00:09.952915 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Feb 13 19:00:09.953013 kernel: acpi PNP0A08:00: _OSC: platform does not support [LTR]
Feb 13 19:00:09.953088 kernel: acpi PNP0A08:00: _OSC: OS now controls [PCIeHotplug PME AER PCIeCapability]
Feb 13 19:00:09.953154 kernel: acpi PNP0A08:00: ECAM area [mem 0x4010000000-0x401fffffff] reserved by PNP0C02:00
Feb 13 19:00:09.953219 kernel: acpi PNP0A08:00: ECAM at [mem 0x4010000000-0x401fffffff] for [bus 00-ff]
Feb 13 19:00:09.953232 kernel: ACPI: Remapped I/O 0x000000003eff0000 to [io 0x0000-0xffff window]
Feb 13 19:00:09.953239 kernel: PCI host bridge to bus 0000:00
Feb 13 19:00:09.953323 kernel: pci_bus 0000:00: root bus resource [mem 0x10000000-0x3efeffff window]
Feb 13 19:00:09.953393 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0xffff window]
Feb 13 19:00:09.953455 kernel: pci_bus 0000:00: root bus resource [mem 0x8000000000-0xffffffffff window]
Feb 13 19:00:09.953515 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Feb 13 19:00:09.953597 kernel: pci 0000:00:00.0: [1b36:0008] type 00 class 0x060000
Feb 13 19:00:09.953682 kernel: pci 0000:00:01.0: [1af4:1005] type 00 class 0x00ff00
Feb 13 19:00:09.953750 kernel: pci 0000:00:01.0: reg 0x10: [io 0x0000-0x001f]
Feb 13 19:00:09.953818 kernel: pci 0000:00:01.0: reg 0x14: [mem 0x10000000-0x10000fff]
Feb 13 19:00:09.953885 kernel: pci 0000:00:01.0: reg 0x20: [mem 0x8000000000-0x8000003fff 64bit pref]
Feb 13 19:00:09.953952 kernel: pci 0000:00:01.0: BAR 4: assigned [mem 0x8000000000-0x8000003fff 64bit pref]
Feb 13 19:00:09.954030 kernel: pci 0000:00:01.0: BAR 1: assigned [mem 0x10000000-0x10000fff]
Feb 13 19:00:09.954098 kernel: pci 0000:00:01.0: BAR 0: assigned [io 0x1000-0x101f]
Feb 13 19:00:09.954162 kernel: pci_bus 0000:00: resource 4 [mem 0x10000000-0x3efeffff window]
Feb 13 19:00:09.954219 kernel: pci_bus 0000:00: resource 5 [io 0x0000-0xffff window]
Feb 13 19:00:09.954285 kernel: pci_bus 0000:00: resource 6 [mem 0x8000000000-0xffffffffff window]
Feb 13 19:00:09.954296 kernel: ACPI: PCI: Interrupt link GSI0 configured for IRQ 35
Feb 13 19:00:09.954304 kernel: ACPI: PCI: Interrupt link GSI1 configured for IRQ 36
Feb 13 19:00:09.954311 kernel: ACPI: PCI: Interrupt link GSI2 configured for IRQ 37
Feb 13 19:00:09.954318 kernel: ACPI: PCI: Interrupt link GSI3 configured for IRQ 38
Feb 13 19:00:09.954327 kernel: iommu: Default domain type: Translated
Feb 13 19:00:09.954334 kernel: iommu: DMA domain TLB invalidation policy: strict mode
Feb 13 19:00:09.954342 kernel: efivars: Registered efivars operations
Feb 13 19:00:09.954348 kernel: vgaarb: loaded
Feb 13 19:00:09.954356 kernel: clocksource: Switched to clocksource arch_sys_counter
Feb 13 19:00:09.954363 kernel: VFS: Disk quotas dquot_6.6.0
Feb 13 19:00:09.954370 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Feb 13 19:00:09.954377 kernel: pnp: PnP ACPI init
Feb 13 19:00:09.954451 kernel: system 00:00: [mem 0x4010000000-0x401fffffff window] could not be reserved
Feb 13 19:00:09.954463 kernel: pnp: PnP ACPI: found 1 devices
Feb 13 19:00:09.954471 kernel: NET: Registered PF_INET protocol family
Feb 13 19:00:09.954478 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Feb 13 19:00:09.954485 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Feb 13 19:00:09.954492 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Feb 13 19:00:09.954499 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Feb 13 19:00:09.954506 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
Feb 13 19:00:09.954513 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Feb 13 19:00:09.954520 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Feb 13 19:00:09.954529 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Feb 13 19:00:09.954536 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Feb 13 19:00:09.954543 kernel: PCI: CLS 0 bytes, default 64
Feb 13 19:00:09.954550 kernel: kvm [1]: HYP mode not available
Feb 13 19:00:09.954557 kernel: Initialise system trusted keyrings
Feb 13 19:00:09.954564 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Feb 13 19:00:09.954571 kernel: Key type asymmetric registered
Feb 13 19:00:09.954578 kernel: Asymmetric key parser 'x509' registered
Feb 13 19:00:09.954585 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250)
Feb 13 19:00:09.954593 kernel: io scheduler mq-deadline registered
Feb 13 19:00:09.954600 kernel: io scheduler kyber registered
Feb 13 19:00:09.954607 kernel: io scheduler bfq registered
Feb 13 19:00:09.954614 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0C:00/input/input0
Feb 13 19:00:09.954621 kernel: ACPI: button: Power Button [PWRB]
Feb 13 19:00:09.954628 kernel: ACPI: \_SB_.PCI0.GSI1: Enabled at IRQ 36
Feb 13 19:00:09.954695 kernel: virtio-pci 0000:00:01.0: enabling device (0005 -> 0007)
Feb 13 19:00:09.954705 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Feb 13 19:00:09.954712 kernel: thunder_xcv, ver 1.0
Feb 13 19:00:09.954721 kernel: thunder_bgx, ver 1.0
Feb 13 19:00:09.954728 kernel: nicpf, ver 1.0
Feb 13 19:00:09.954735 kernel: nicvf, ver 1.0
Feb 13 19:00:09.954809 kernel: rtc-efi rtc-efi.0: registered as rtc0
Feb 13 19:00:09.954872 kernel: rtc-efi rtc-efi.0: setting system clock to 2025-02-13T19:00:09 UTC (1739473209)
Feb 13 19:00:09.954882 kernel: hid: raw HID events driver (C) Jiri Kosina
Feb 13 19:00:09.954890 kernel: hw perfevents: enabled with armv8_pmuv3_0 PMU driver, 7 counters available
Feb 13 19:00:09.954897 kernel: watchdog: Delayed init of the lockup detector failed: -19
Feb 13 19:00:09.954906 kernel: watchdog: Hard watchdog permanently disabled
Feb 13 19:00:09.954913 kernel: NET: Registered PF_INET6 protocol family
Feb 13 19:00:09.954920 kernel: Segment Routing with IPv6
Feb 13 19:00:09.954927 kernel: In-situ OAM (IOAM) with IPv6
Feb 13 19:00:09.954934 kernel: NET: Registered PF_PACKET protocol family
Feb 13 19:00:09.954941 kernel: Key type dns_resolver registered
Feb 13 19:00:09.954948 kernel: registered taskstats version 1
Feb 13 19:00:09.954955 kernel: Loading compiled-in X.509 certificates
Feb 13 19:00:09.954969 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.74-flatcar: 987d382bd4f498c8030ef29b348ef5d6fcf1f0e3'
Feb 13 19:00:09.954980 kernel: Key type .fscrypt registered
Feb 13 19:00:09.954987 kernel: Key type fscrypt-provisioning registered
Feb 13 19:00:09.954994 kernel: ima: No TPM chip found, activating TPM-bypass!
Feb 13 19:00:09.955001 kernel: ima: Allocated hash algorithm: sha1
Feb 13 19:00:09.955008 kernel: ima: No architecture policies found
Feb 13 19:00:09.955015 kernel: alg: No test for fips(ansi_cprng) (fips_ansi_cprng)
Feb 13 19:00:09.955022 kernel: clk: Disabling unused clocks
Feb 13 19:00:09.955029 kernel: Freeing unused kernel memory: 39936K
Feb 13 19:00:09.955036 kernel: Run /init as init process
Feb 13 19:00:09.955045 kernel: with arguments:
Feb 13 19:00:09.955052 kernel: /init
Feb 13 19:00:09.955059 kernel: with environment:
Feb 13 19:00:09.955066 kernel: HOME=/
Feb 13 19:00:09.955072 kernel: TERM=linux
Feb 13 19:00:09.955079 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
Feb 13 19:00:09.955088 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Feb 13 19:00:09.955097 systemd[1]: Detected virtualization kvm.
Feb 13 19:00:09.955107 systemd[1]: Detected architecture arm64.
Feb 13 19:00:09.955114 systemd[1]: Running in initrd.
Feb 13 19:00:09.955121 systemd[1]: No hostname configured, using default hostname.
Feb 13 19:00:09.955129 systemd[1]: Hostname set to .
Feb 13 19:00:09.955137 systemd[1]: Initializing machine ID from VM UUID.
Feb 13 19:00:09.955144 systemd[1]: Queued start job for default target initrd.target.
Feb 13 19:00:09.955152 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Feb 13 19:00:09.955161 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Feb 13 19:00:09.955169 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Feb 13 19:00:09.955177 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Feb 13 19:00:09.955185 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Feb 13 19:00:09.955193 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Feb 13 19:00:09.955203 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Feb 13 19:00:09.955211 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Feb 13 19:00:09.955220 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Feb 13 19:00:09.955228 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Feb 13 19:00:09.955235 systemd[1]: Reached target paths.target - Path Units.
Feb 13 19:00:09.955243 systemd[1]: Reached target slices.target - Slice Units.
Feb 13 19:00:09.955250 systemd[1]: Reached target swap.target - Swaps.
Feb 13 19:00:09.955258 systemd[1]: Reached target timers.target - Timer Units.
Feb 13 19:00:09.955273 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Feb 13 19:00:09.955280 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Feb 13 19:00:09.955288 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Feb 13 19:00:09.955298 systemd[1]: Listening on systemd-journald.socket - Journal Socket.
Feb 13 19:00:09.955306 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Feb 13 19:00:09.955313 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Feb 13 19:00:09.955321 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Feb 13 19:00:09.955329 systemd[1]: Reached target sockets.target - Socket Units.
Feb 13 19:00:09.955337 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Feb 13 19:00:09.955345 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Feb 13 19:00:09.955352 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Feb 13 19:00:09.955361 systemd[1]: Starting systemd-fsck-usr.service...
Feb 13 19:00:09.955369 systemd[1]: Starting systemd-journald.service - Journal Service...
Feb 13 19:00:09.955377 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Feb 13 19:00:09.955384 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Feb 13 19:00:09.955392 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Feb 13 19:00:09.955400 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Feb 13 19:00:09.955407 systemd[1]: Finished systemd-fsck-usr.service.
Feb 13 19:00:09.955438 systemd-journald[239]: Collecting audit messages is disabled.
Feb 13 19:00:09.955459 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Feb 13 19:00:09.955469 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Feb 13 19:00:09.955478 systemd-journald[239]: Journal started
Feb 13 19:00:09.955502 systemd-journald[239]: Runtime Journal (/run/log/journal/fc844ed9c558439c8eb15225fea9a5d6) is 5.9M, max 47.3M, 41.4M free.
Feb 13 19:00:09.947883 systemd-modules-load[240]: Inserted module 'overlay'
Feb 13 19:00:09.957543 systemd[1]: Started systemd-journald.service - Journal Service.
Feb 13 19:00:09.959994 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Feb 13 19:00:09.963725 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Feb 13 19:00:09.967831 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Feb 13 19:00:09.968647 systemd-modules-load[240]: Inserted module 'br_netfilter'
Feb 13 19:00:09.969570 kernel: Bridge firewalling registered
Feb 13 19:00:09.969463 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Feb 13 19:00:09.972159 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Feb 13 19:00:09.973658 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Feb 13 19:00:09.979581 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Feb 13 19:00:09.982004 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Feb 13 19:00:09.983524 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Feb 13 19:00:09.989895 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Feb 13 19:00:09.992067 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Feb 13 19:00:09.993550 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Feb 13 19:00:09.996936 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Feb 13 19:00:10.006611 dracut-cmdline[275]: dracut-dracut-053
Feb 13 19:00:10.009134 dracut-cmdline[275]: Using kernel command line parameters: rd.driver.pre=btrfs BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected acpi=force verity.usrhash=539c350343a869939e6505090036e362452d8f971fd4cfbad5e8b7882835b31b
Feb 13 19:00:10.024684 systemd-resolved[277]: Positive Trust Anchors:
Feb 13 19:00:10.024702 systemd-resolved[277]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Feb 13 19:00:10.024734 systemd-resolved[277]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Feb 13 19:00:10.029437 systemd-resolved[277]: Defaulting to hostname 'linux'.
Feb 13 19:00:10.030414 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Feb 13 19:00:10.035524 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Feb 13 19:00:10.082000 kernel: SCSI subsystem initialized
Feb 13 19:00:10.086982 kernel: Loading iSCSI transport class v2.0-870.
Feb 13 19:00:10.095006 kernel: iscsi: registered transport (tcp)
Feb 13 19:00:10.107991 kernel: iscsi: registered transport (qla4xxx)
Feb 13 19:00:10.108028 kernel: QLogic iSCSI HBA Driver
Feb 13 19:00:10.151146 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Feb 13 19:00:10.159162 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Feb 13 19:00:10.176999 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Feb 13 19:00:10.177071 kernel: device-mapper: uevent: version 1.0.3
Feb 13 19:00:10.178991 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com
Feb 13 19:00:10.225999 kernel: raid6: neonx8 gen() 15785 MB/s
Feb 13 19:00:10.242988 kernel: raid6: neonx4 gen() 15814 MB/s
Feb 13 19:00:10.259986 kernel: raid6: neonx2 gen() 13180 MB/s
Feb 13 19:00:10.276984 kernel: raid6: neonx1 gen() 10541 MB/s
Feb 13 19:00:10.293989 kernel: raid6: int64x8 gen() 6489 MB/s
Feb 13 19:00:10.310986 kernel: raid6: int64x4 gen() 7341 MB/s
Feb 13 19:00:10.327988 kernel: raid6: int64x2 gen() 6061 MB/s
Feb 13 19:00:10.345111 kernel: raid6: int64x1 gen() 5044 MB/s
Feb 13 19:00:10.345130 kernel: raid6: using algorithm neonx4 gen() 15814 MB/s
Feb 13 19:00:10.363083 kernel: raid6: .... xor() 12418 MB/s, rmw enabled
Feb 13 19:00:10.363100 kernel: raid6: using neon recovery algorithm
Feb 13 19:00:10.368467 kernel: xor: measuring software checksum speed
Feb 13 19:00:10.368486 kernel: 8regs : 21511 MB/sec
Feb 13 19:00:10.369130 kernel: 32regs : 21664 MB/sec
Feb 13 19:00:10.370386 kernel: arm64_neon : 27775 MB/sec
Feb 13 19:00:10.370401 kernel: xor: using function: arm64_neon (27775 MB/sec)
Feb 13 19:00:10.419987 kernel: Btrfs loaded, zoned=no, fsverity=no
Feb 13 19:00:10.432052 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Feb 13 19:00:10.444178 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Feb 13 19:00:10.457218 systemd-udevd[461]: Using default interface naming scheme 'v255'.
Feb 13 19:00:10.461037 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Feb 13 19:00:10.469176 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Feb 13 19:00:10.483709 dracut-pre-trigger[469]: rd.md=0: removing MD RAID activation
Feb 13 19:00:10.520700 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Feb 13 19:00:10.532177 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Feb 13 19:00:10.575607 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Feb 13 19:00:10.592257 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... Feb 13 19:00:10.613063 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Feb 13 19:00:10.615149 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. Feb 13 19:00:10.619592 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Feb 13 19:00:10.622255 systemd[1]: Reached target remote-fs.target - Remote File Systems. Feb 13 19:00:10.636095 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Feb 13 19:00:10.639709 kernel: virtio_blk virtio1: 1/0/0 default/read/poll queues Feb 13 19:00:10.649621 kernel: virtio_blk virtio1: [vda] 19775488 512-byte logical blocks (10.1 GB/9.43 GiB) Feb 13 19:00:10.649738 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. Feb 13 19:00:10.649749 kernel: GPT:9289727 != 19775487 Feb 13 19:00:10.649759 kernel: GPT:Alternate GPT header not at the end of the disk. Feb 13 19:00:10.649776 kernel: GPT:9289727 != 19775487 Feb 13 19:00:10.649787 kernel: GPT: Use GNU Parted to correct GPT errors. Feb 13 19:00:10.649796 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Feb 13 19:00:10.650883 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Feb 13 19:00:10.654342 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Feb 13 19:00:10.654475 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Feb 13 19:00:10.658762 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Feb 13 19:00:10.659943 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. 
Feb 13 19:00:10.660137 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Feb 13 19:00:10.662743 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Feb 13 19:00:10.675000 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/vda6 scanned by (udev-worker) (518) Feb 13 19:00:10.676541 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Feb 13 19:00:10.680183 kernel: BTRFS: device fsid 55beb02a-1d0d-4a3e-812c-2737f0301ec8 devid 1 transid 39 /dev/vda3 scanned by (udev-worker) (521) Feb 13 19:00:10.687944 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT. Feb 13 19:00:10.692886 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Feb 13 19:00:10.699033 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM. Feb 13 19:00:10.706841 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Feb 13 19:00:10.711071 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A. Feb 13 19:00:10.712405 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132. Feb 13 19:00:10.726176 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Feb 13 19:00:10.728227 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Feb 13 19:00:10.734395 disk-uuid[552]: Primary Header is updated. Feb 13 19:00:10.734395 disk-uuid[552]: Secondary Entries is updated. Feb 13 19:00:10.734395 disk-uuid[552]: Secondary Header is updated. Feb 13 19:00:10.740774 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Feb 13 19:00:10.757300 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. 
Feb 13 19:00:11.763998 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Feb 13 19:00:11.764658 disk-uuid[553]: The operation has completed successfully. Feb 13 19:00:11.785590 systemd[1]: disk-uuid.service: Deactivated successfully. Feb 13 19:00:11.785720 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Feb 13 19:00:11.819210 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... Feb 13 19:00:11.822157 sh[573]: Success Feb 13 19:00:11.835998 kernel: device-mapper: verity: sha256 using implementation "sha256-ce" Feb 13 19:00:11.869452 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. Feb 13 19:00:11.887550 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... Feb 13 19:00:11.891615 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. Feb 13 19:00:11.902865 kernel: BTRFS info (device dm-0): first mount of filesystem 55beb02a-1d0d-4a3e-812c-2737f0301ec8 Feb 13 19:00:11.902920 kernel: BTRFS info (device dm-0): using crc32c (crc32c-generic) checksum algorithm Feb 13 19:00:11.902932 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead Feb 13 19:00:11.905410 kernel: BTRFS info (device dm-0): disabling log replay at mount time Feb 13 19:00:11.905432 kernel: BTRFS info (device dm-0): using free space tree Feb 13 19:00:11.909333 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. Feb 13 19:00:11.910753 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Feb 13 19:00:11.919124 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Feb 13 19:00:11.920772 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... 
Feb 13 19:00:11.930029 kernel: BTRFS info (device vda6): first mount of filesystem 0d7adf00-1aa3-4485-af0a-91514918afd0 Feb 13 19:00:11.930069 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm Feb 13 19:00:11.930080 kernel: BTRFS info (device vda6): using free space tree Feb 13 19:00:11.932998 kernel: BTRFS info (device vda6): auto enabling async discard Feb 13 19:00:11.940385 systemd[1]: mnt-oem.mount: Deactivated successfully. Feb 13 19:00:11.942588 kernel: BTRFS info (device vda6): last unmount of filesystem 0d7adf00-1aa3-4485-af0a-91514918afd0 Feb 13 19:00:11.947740 systemd[1]: Finished ignition-setup.service - Ignition (setup). Feb 13 19:00:11.955180 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... Feb 13 19:00:12.032013 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Feb 13 19:00:12.045181 systemd[1]: Starting systemd-networkd.service - Network Configuration... Feb 13 19:00:12.051068 ignition[664]: Ignition 2.20.0 Feb 13 19:00:12.051081 ignition[664]: Stage: fetch-offline Feb 13 19:00:12.051123 ignition[664]: no configs at "/usr/lib/ignition/base.d" Feb 13 19:00:12.051132 ignition[664]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Feb 13 19:00:12.051307 ignition[664]: parsed url from cmdline: "" Feb 13 19:00:12.051311 ignition[664]: no config URL provided Feb 13 19:00:12.051325 ignition[664]: reading system config file "/usr/lib/ignition/user.ign" Feb 13 19:00:12.051333 ignition[664]: no config at "/usr/lib/ignition/user.ign" Feb 13 19:00:12.051364 ignition[664]: op(1): [started] loading QEMU firmware config module Feb 13 19:00:12.051369 ignition[664]: op(1): executing: "modprobe" "qemu_fw_cfg" Feb 13 19:00:12.058137 ignition[664]: op(1): [finished] loading QEMU firmware config module Feb 13 19:00:12.075915 systemd-networkd[767]: lo: Link UP Feb 13 19:00:12.075929 systemd-networkd[767]: lo: Gained carrier Feb 13 19:00:12.076750 
systemd-networkd[767]: Enumeration completed Feb 13 19:00:12.076833 systemd[1]: Started systemd-networkd.service - Network Configuration. Feb 13 19:00:12.078095 systemd-networkd[767]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Feb 13 19:00:12.078099 systemd-networkd[767]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Feb 13 19:00:12.079101 systemd[1]: Reached target network.target - Network. Feb 13 19:00:12.079277 systemd-networkd[767]: eth0: Link UP Feb 13 19:00:12.079281 systemd-networkd[767]: eth0: Gained carrier Feb 13 19:00:12.079288 systemd-networkd[767]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Feb 13 19:00:12.092021 systemd-networkd[767]: eth0: DHCPv4 address 10.0.0.86/16, gateway 10.0.0.1 acquired from 10.0.0.1 Feb 13 19:00:12.099019 ignition[664]: parsing config with SHA512: 4f4a957026fd86cf26fa213c8f220350841dbf65d8ff09d8dca9f07b2bf3849f199ddcf9a802f1f7802facca853a67026c3daf3c45fa06684e7308ea77f7f0c6 Feb 13 19:00:12.103642 unknown[664]: fetched base config from "system" Feb 13 19:00:12.103653 unknown[664]: fetched user config from "qemu" Feb 13 19:00:12.104032 ignition[664]: fetch-offline: fetch-offline passed Feb 13 19:00:12.106043 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). Feb 13 19:00:12.104105 ignition[664]: Ignition finished successfully Feb 13 19:00:12.107382 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json). Feb 13 19:00:12.118149 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... 
Feb 13 19:00:12.128783 ignition[774]: Ignition 2.20.0 Feb 13 19:00:12.128794 ignition[774]: Stage: kargs Feb 13 19:00:12.128959 ignition[774]: no configs at "/usr/lib/ignition/base.d" Feb 13 19:00:12.128987 ignition[774]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Feb 13 19:00:12.132639 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). Feb 13 19:00:12.129806 ignition[774]: kargs: kargs passed Feb 13 19:00:12.129848 ignition[774]: Ignition finished successfully Feb 13 19:00:12.145215 systemd[1]: Starting ignition-disks.service - Ignition (disks)... Feb 13 19:00:12.158571 ignition[782]: Ignition 2.20.0 Feb 13 19:00:12.158581 ignition[782]: Stage: disks Feb 13 19:00:12.158754 ignition[782]: no configs at "/usr/lib/ignition/base.d" Feb 13 19:00:12.161357 systemd[1]: Finished ignition-disks.service - Ignition (disks). Feb 13 19:00:12.158763 ignition[782]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Feb 13 19:00:12.163029 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Feb 13 19:00:12.159642 ignition[782]: disks: disks passed Feb 13 19:00:12.164684 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Feb 13 19:00:12.159687 ignition[782]: Ignition finished successfully Feb 13 19:00:12.166922 systemd[1]: Reached target local-fs.target - Local File Systems. Feb 13 19:00:12.168986 systemd[1]: Reached target sysinit.target - System Initialization. Feb 13 19:00:12.170574 systemd[1]: Reached target basic.target - Basic System. Feb 13 19:00:12.187203 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... Feb 13 19:00:12.195679 systemd-resolved[277]: Detected conflict on linux IN A 10.0.0.86 Feb 13 19:00:12.195693 systemd-resolved[277]: Hostname conflict, changing published hostname from 'linux' to 'linux6'. 
Feb 13 19:00:12.198551 systemd-fsck[794]: ROOT: clean, 14/553520 files, 52654/553472 blocks Feb 13 19:00:12.203175 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Feb 13 19:00:12.205398 systemd[1]: Mounting sysroot.mount - /sysroot... Feb 13 19:00:12.253985 kernel: EXT4-fs (vda9): mounted filesystem 005a6458-8fd3-46f1-ab43-85ef18df7ccd r/w with ordered data mode. Quota mode: none. Feb 13 19:00:12.254002 systemd[1]: Mounted sysroot.mount - /sysroot. Feb 13 19:00:12.255317 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Feb 13 19:00:12.268076 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Feb 13 19:00:12.269951 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Feb 13 19:00:12.272374 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met. Feb 13 19:00:12.272426 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Feb 13 19:00:12.272453 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Feb 13 19:00:12.279480 kernel: BTRFS: device label OEM devid 1 transid 15 /dev/vda6 scanned by mount (802) Feb 13 19:00:12.279170 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Feb 13 19:00:12.283689 kernel: BTRFS info (device vda6): first mount of filesystem 0d7adf00-1aa3-4485-af0a-91514918afd0 Feb 13 19:00:12.283712 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm Feb 13 19:00:12.283722 kernel: BTRFS info (device vda6): using free space tree Feb 13 19:00:12.283200 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... Feb 13 19:00:12.286773 kernel: BTRFS info (device vda6): auto enabling async discard Feb 13 19:00:12.288582 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. 
Feb 13 19:00:12.329209 initrd-setup-root[826]: cut: /sysroot/etc/passwd: No such file or directory Feb 13 19:00:12.333889 initrd-setup-root[833]: cut: /sysroot/etc/group: No such file or directory Feb 13 19:00:12.337433 initrd-setup-root[840]: cut: /sysroot/etc/shadow: No such file or directory Feb 13 19:00:12.340815 initrd-setup-root[847]: cut: /sysroot/etc/gshadow: No such file or directory Feb 13 19:00:12.416141 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Feb 13 19:00:12.429087 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Feb 13 19:00:12.431446 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Feb 13 19:00:12.436985 kernel: BTRFS info (device vda6): last unmount of filesystem 0d7adf00-1aa3-4485-af0a-91514918afd0 Feb 13 19:00:12.454392 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. Feb 13 19:00:12.456208 ignition[915]: INFO : Ignition 2.20.0 Feb 13 19:00:12.456208 ignition[915]: INFO : Stage: mount Feb 13 19:00:12.456208 ignition[915]: INFO : no configs at "/usr/lib/ignition/base.d" Feb 13 19:00:12.456208 ignition[915]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Feb 13 19:00:12.461722 ignition[915]: INFO : mount: mount passed Feb 13 19:00:12.461722 ignition[915]: INFO : Ignition finished successfully Feb 13 19:00:12.458327 systemd[1]: Finished ignition-mount.service - Ignition (mount). Feb 13 19:00:12.464119 systemd[1]: Starting ignition-files.service - Ignition (files)... Feb 13 19:00:12.901557 systemd[1]: sysroot-oem.mount: Deactivated successfully. Feb 13 19:00:12.910176 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... 
Feb 13 19:00:12.916992 kernel: BTRFS: device label OEM devid 1 transid 16 /dev/vda6 scanned by mount (930) Feb 13 19:00:12.919101 kernel: BTRFS info (device vda6): first mount of filesystem 0d7adf00-1aa3-4485-af0a-91514918afd0 Feb 13 19:00:12.919118 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm Feb 13 19:00:12.919128 kernel: BTRFS info (device vda6): using free space tree Feb 13 19:00:12.921982 kernel: BTRFS info (device vda6): auto enabling async discard Feb 13 19:00:12.923273 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Feb 13 19:00:12.952546 ignition[947]: INFO : Ignition 2.20.0 Feb 13 19:00:12.952546 ignition[947]: INFO : Stage: files Feb 13 19:00:12.954225 ignition[947]: INFO : no configs at "/usr/lib/ignition/base.d" Feb 13 19:00:12.954225 ignition[947]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Feb 13 19:00:12.954225 ignition[947]: DEBUG : files: compiled without relabeling support, skipping Feb 13 19:00:12.959013 ignition[947]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Feb 13 19:00:12.959013 ignition[947]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Feb 13 19:00:12.959013 ignition[947]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Feb 13 19:00:12.959013 ignition[947]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Feb 13 19:00:12.959013 ignition[947]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Feb 13 19:00:12.958870 unknown[947]: wrote ssh authorized keys file for user: core Feb 13 19:00:12.968027 ignition[947]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-arm64.tar.gz" Feb 13 19:00:12.968027 ignition[947]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.13.2-linux-arm64.tar.gz: attempt #1 Feb 13 19:00:13.026524 
ignition[947]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK Feb 13 19:00:13.472202 systemd-networkd[767]: eth0: Gained IPv6LL Feb 13 19:00:13.768447 ignition[947]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-arm64.tar.gz" Feb 13 19:00:13.768447 ignition[947]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh" Feb 13 19:00:13.772447 ignition[947]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh" Feb 13 19:00:13.772447 ignition[947]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml" Feb 13 19:00:13.772447 ignition[947]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml" Feb 13 19:00:13.772447 ignition[947]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Feb 13 19:00:13.772447 ignition[947]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Feb 13 19:00:13.772447 ignition[947]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Feb 13 19:00:13.772447 ignition[947]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Feb 13 19:00:13.772447 ignition[947]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf" Feb 13 19:00:13.772447 ignition[947]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf" Feb 13 19:00:13.772447 ignition[947]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link 
"/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.30.1-arm64.raw" Feb 13 19:00:13.772447 ignition[947]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.30.1-arm64.raw" Feb 13 19:00:13.772447 ignition[947]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.30.1-arm64.raw" Feb 13 19:00:13.772447 ignition[947]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.30.1-arm64.raw: attempt #1 Feb 13 19:00:13.936160 ignition[947]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK Feb 13 19:00:14.159707 ignition[947]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.30.1-arm64.raw" Feb 13 19:00:14.159707 ignition[947]: INFO : files: op(b): [started] processing unit "prepare-helm.service" Feb 13 19:00:14.163930 ignition[947]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Feb 13 19:00:14.163930 ignition[947]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Feb 13 19:00:14.163930 ignition[947]: INFO : files: op(b): [finished] processing unit "prepare-helm.service" Feb 13 19:00:14.163930 ignition[947]: INFO : files: op(d): [started] processing unit "coreos-metadata.service" Feb 13 19:00:14.163930 ignition[947]: INFO : files: op(d): op(e): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Feb 13 19:00:14.163930 ignition[947]: INFO : files: op(d): op(e): [finished] writing unit "coreos-metadata.service" at 
"/sysroot/etc/systemd/system/coreos-metadata.service" Feb 13 19:00:14.163930 ignition[947]: INFO : files: op(d): [finished] processing unit "coreos-metadata.service" Feb 13 19:00:14.163930 ignition[947]: INFO : files: op(f): [started] setting preset to disabled for "coreos-metadata.service" Feb 13 19:00:14.192145 ignition[947]: INFO : files: op(f): op(10): [started] removing enablement symlink(s) for "coreos-metadata.service" Feb 13 19:00:14.196275 ignition[947]: INFO : files: op(f): op(10): [finished] removing enablement symlink(s) for "coreos-metadata.service" Feb 13 19:00:14.197990 ignition[947]: INFO : files: op(f): [finished] setting preset to disabled for "coreos-metadata.service" Feb 13 19:00:14.197990 ignition[947]: INFO : files: op(11): [started] setting preset to enabled for "prepare-helm.service" Feb 13 19:00:14.197990 ignition[947]: INFO : files: op(11): [finished] setting preset to enabled for "prepare-helm.service" Feb 13 19:00:14.197990 ignition[947]: INFO : files: createResultFile: createFiles: op(12): [started] writing file "/sysroot/etc/.ignition-result.json" Feb 13 19:00:14.197990 ignition[947]: INFO : files: createResultFile: createFiles: op(12): [finished] writing file "/sysroot/etc/.ignition-result.json" Feb 13 19:00:14.197990 ignition[947]: INFO : files: files passed Feb 13 19:00:14.197990 ignition[947]: INFO : Ignition finished successfully Feb 13 19:00:14.198318 systemd[1]: Finished ignition-files.service - Ignition (files). Feb 13 19:00:14.209150 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Feb 13 19:00:14.211235 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Feb 13 19:00:14.212906 systemd[1]: ignition-quench.service: Deactivated successfully. Feb 13 19:00:14.213015 systemd[1]: Finished ignition-quench.service - Ignition (record completion). 
Feb 13 19:00:14.220402 initrd-setup-root-after-ignition[976]: grep: /sysroot/oem/oem-release: No such file or directory Feb 13 19:00:14.224010 initrd-setup-root-after-ignition[978]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Feb 13 19:00:14.224010 initrd-setup-root-after-ignition[978]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Feb 13 19:00:14.227233 initrd-setup-root-after-ignition[982]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Feb 13 19:00:14.226858 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Feb 13 19:00:14.228681 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Feb 13 19:00:14.248273 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Feb 13 19:00:14.271067 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Feb 13 19:00:14.271177 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Feb 13 19:00:14.273591 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Feb 13 19:00:14.275522 systemd[1]: Reached target initrd.target - Initrd Default Target. Feb 13 19:00:14.277630 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Feb 13 19:00:14.285147 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Feb 13 19:00:14.299957 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Feb 13 19:00:14.312165 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Feb 13 19:00:14.322207 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Feb 13 19:00:14.325087 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Feb 13 19:00:14.326519 systemd[1]: Stopped target timers.target - Timer Units. 
Feb 13 19:00:14.328468 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Feb 13 19:00:14.328606 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Feb 13 19:00:14.331383 systemd[1]: Stopped target initrd.target - Initrd Default Target. Feb 13 19:00:14.333619 systemd[1]: Stopped target basic.target - Basic System. Feb 13 19:00:14.335446 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Feb 13 19:00:14.337300 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Feb 13 19:00:14.339331 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Feb 13 19:00:14.341421 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Feb 13 19:00:14.343550 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Feb 13 19:00:14.345676 systemd[1]: Stopped target sysinit.target - System Initialization. Feb 13 19:00:14.347840 systemd[1]: Stopped target local-fs.target - Local File Systems. Feb 13 19:00:14.349742 systemd[1]: Stopped target swap.target - Swaps. Feb 13 19:00:14.351459 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Feb 13 19:00:14.351596 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Feb 13 19:00:14.354119 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Feb 13 19:00:14.356278 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Feb 13 19:00:14.358491 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Feb 13 19:00:14.362035 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Feb 13 19:00:14.363411 systemd[1]: dracut-initqueue.service: Deactivated successfully. Feb 13 19:00:14.363550 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Feb 13 19:00:14.366599 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. 
Feb 13 19:00:14.366723 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Feb 13 19:00:14.368910 systemd[1]: Stopped target paths.target - Path Units. Feb 13 19:00:14.370810 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Feb 13 19:00:14.370921 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Feb 13 19:00:14.373090 systemd[1]: Stopped target slices.target - Slice Units. Feb 13 19:00:14.374843 systemd[1]: Stopped target sockets.target - Socket Units. Feb 13 19:00:14.376658 systemd[1]: iscsid.socket: Deactivated successfully. Feb 13 19:00:14.376752 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Feb 13 19:00:14.378913 systemd[1]: iscsiuio.socket: Deactivated successfully. Feb 13 19:00:14.379007 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Feb 13 19:00:14.380686 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Feb 13 19:00:14.380798 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Feb 13 19:00:14.382730 systemd[1]: ignition-files.service: Deactivated successfully. Feb 13 19:00:14.382839 systemd[1]: Stopped ignition-files.service - Ignition (files). Feb 13 19:00:14.395186 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Feb 13 19:00:14.396152 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Feb 13 19:00:14.396305 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Feb 13 19:00:14.402269 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Feb 13 19:00:14.403931 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Feb 13 19:00:14.404088 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Feb 13 19:00:14.407895 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. 
Feb 13 19:00:14.410669 ignition[1002]: INFO : Ignition 2.20.0 Feb 13 19:00:14.410669 ignition[1002]: INFO : Stage: umount Feb 13 19:00:14.410669 ignition[1002]: INFO : no configs at "/usr/lib/ignition/base.d" Feb 13 19:00:14.410669 ignition[1002]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Feb 13 19:00:14.410669 ignition[1002]: INFO : umount: umount passed Feb 13 19:00:14.410669 ignition[1002]: INFO : Ignition finished successfully Feb 13 19:00:14.408027 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Feb 13 19:00:14.412351 systemd[1]: ignition-mount.service: Deactivated successfully. Feb 13 19:00:14.412436 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Feb 13 19:00:14.416299 systemd[1]: sysroot-boot.mount: Deactivated successfully. Feb 13 19:00:14.416823 systemd[1]: initrd-cleanup.service: Deactivated successfully. Feb 13 19:00:14.416912 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Feb 13 19:00:14.420335 systemd[1]: Stopped target network.target - Network. Feb 13 19:00:14.422143 systemd[1]: ignition-disks.service: Deactivated successfully. Feb 13 19:00:14.422208 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Feb 13 19:00:14.424070 systemd[1]: ignition-kargs.service: Deactivated successfully. Feb 13 19:00:14.424122 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Feb 13 19:00:14.425991 systemd[1]: ignition-setup.service: Deactivated successfully. Feb 13 19:00:14.426033 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Feb 13 19:00:14.428157 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Feb 13 19:00:14.428203 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Feb 13 19:00:14.430161 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Feb 13 19:00:14.432387 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... 
Feb 13 19:00:14.438014 systemd-networkd[767]: eth0: DHCPv6 lease lost Feb 13 19:00:14.440232 systemd[1]: systemd-resolved.service: Deactivated successfully. Feb 13 19:00:14.442005 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Feb 13 19:00:14.444756 systemd[1]: systemd-networkd.service: Deactivated successfully. Feb 13 19:00:14.445032 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Feb 13 19:00:14.449658 systemd[1]: systemd-networkd.socket: Deactivated successfully. Feb 13 19:00:14.449721 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Feb 13 19:00:14.461129 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Feb 13 19:00:14.462344 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Feb 13 19:00:14.462415 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Feb 13 19:00:14.464661 systemd[1]: systemd-sysctl.service: Deactivated successfully. Feb 13 19:00:14.464713 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Feb 13 19:00:14.466699 systemd[1]: systemd-modules-load.service: Deactivated successfully. Feb 13 19:00:14.466746 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Feb 13 19:00:14.469225 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Feb 13 19:00:14.469284 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Feb 13 19:00:14.471955 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Feb 13 19:00:14.475828 systemd[1]: sysroot-boot.service: Deactivated successfully. Feb 13 19:00:14.475911 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Feb 13 19:00:14.480225 systemd[1]: initrd-setup-root.service: Deactivated successfully. Feb 13 19:00:14.480345 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. 
Feb 13 19:00:14.484038 systemd[1]: network-cleanup.service: Deactivated successfully.
Feb 13 19:00:14.484155 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
Feb 13 19:00:14.485864 systemd[1]: systemd-udevd.service: Deactivated successfully.
Feb 13 19:00:14.486011 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
Feb 13 19:00:14.488873 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Feb 13 19:00:14.488935 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
Feb 13 19:00:14.490171 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Feb 13 19:00:14.490201 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
Feb 13 19:00:14.492111 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Feb 13 19:00:14.492163 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
Feb 13 19:00:14.494918 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Feb 13 19:00:14.494975 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
Feb 13 19:00:14.497168 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Feb 13 19:00:14.497218 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Feb 13 19:00:14.513158 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
Feb 13 19:00:14.514219 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Feb 13 19:00:14.514308 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Feb 13 19:00:14.516479 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Feb 13 19:00:14.516531 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Feb 13 19:00:14.523162 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Feb 13 19:00:14.524317 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
Feb 13 19:00:14.525720 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
Feb 13 19:00:14.528463 systemd[1]: Starting initrd-switch-root.service - Switch Root...
Feb 13 19:00:14.538776 systemd[1]: Switching root.
Feb 13 19:00:14.569268 systemd-journald[239]: Journal stopped
Feb 13 19:00:15.385783 systemd-journald[239]: Received SIGTERM from PID 1 (systemd).
Feb 13 19:00:15.385833 kernel: SELinux: policy capability network_peer_controls=1
Feb 13 19:00:15.385846 kernel: SELinux: policy capability open_perms=1
Feb 13 19:00:15.385857 kernel: SELinux: policy capability extended_socket_class=1
Feb 13 19:00:15.385869 kernel: SELinux: policy capability always_check_network=0
Feb 13 19:00:15.385882 kernel: SELinux: policy capability cgroup_seclabel=1
Feb 13 19:00:15.385895 kernel: SELinux: policy capability nnp_nosuid_transition=1
Feb 13 19:00:15.385905 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
Feb 13 19:00:15.385914 kernel: SELinux: policy capability ioctl_skip_cloexec=0
Feb 13 19:00:15.385923 kernel: audit: type=1403 audit(1739473214.716:2): auid=4294967295 ses=4294967295 lsm=selinux res=1
Feb 13 19:00:15.385934 systemd[1]: Successfully loaded SELinux policy in 33.174ms.
Feb 13 19:00:15.385949 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 9.559ms.
Feb 13 19:00:15.385960 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Feb 13 19:00:15.386027 systemd[1]: Detected virtualization kvm.
Feb 13 19:00:15.386041 systemd[1]: Detected architecture arm64.
Feb 13 19:00:15.386051 systemd[1]: Detected first boot.
Feb 13 19:00:15.386061 systemd[1]: Initializing machine ID from VM UUID.
Feb 13 19:00:15.386071 zram_generator::config[1047]: No configuration found.
Feb 13 19:00:15.386083 systemd[1]: Populated /etc with preset unit settings.
Feb 13 19:00:15.386093 systemd[1]: initrd-switch-root.service: Deactivated successfully.
Feb 13 19:00:15.386103 systemd[1]: Stopped initrd-switch-root.service - Switch Root.
Feb 13 19:00:15.386115 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
Feb 13 19:00:15.386127 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
Feb 13 19:00:15.386138 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
Feb 13 19:00:15.386150 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
Feb 13 19:00:15.386160 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
Feb 13 19:00:15.386171 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
Feb 13 19:00:15.386181 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
Feb 13 19:00:15.386191 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
Feb 13 19:00:15.386201 systemd[1]: Created slice user.slice - User and Session Slice.
Feb 13 19:00:15.386213 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Feb 13 19:00:15.386224 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Feb 13 19:00:15.386234 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
Feb 13 19:00:15.386244 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
Feb 13 19:00:15.386263 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
Feb 13 19:00:15.386275 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Feb 13 19:00:15.386285 systemd[1]: Expecting device dev-ttyAMA0.device - /dev/ttyAMA0...
Feb 13 19:00:15.386295 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Feb 13 19:00:15.386305 systemd[1]: Stopped target initrd-switch-root.target - Switch Root.
Feb 13 19:00:15.386316 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems.
Feb 13 19:00:15.386327 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System.
Feb 13 19:00:15.386337 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
Feb 13 19:00:15.386347 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Feb 13 19:00:15.386358 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Feb 13 19:00:15.386369 systemd[1]: Reached target slices.target - Slice Units.
Feb 13 19:00:15.386380 systemd[1]: Reached target swap.target - Swaps.
Feb 13 19:00:15.386391 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
Feb 13 19:00:15.386403 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
Feb 13 19:00:15.386413 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Feb 13 19:00:15.386423 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Feb 13 19:00:15.386433 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Feb 13 19:00:15.386444 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
Feb 13 19:00:15.386455 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
Feb 13 19:00:15.386464 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
Feb 13 19:00:15.386475 systemd[1]: Mounting media.mount - External Media Directory...
Feb 13 19:00:15.386485 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
Feb 13 19:00:15.386496 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
Feb 13 19:00:15.386506 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
Feb 13 19:00:15.386517 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Feb 13 19:00:15.386527 systemd[1]: Reached target machines.target - Containers.
Feb 13 19:00:15.386537 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
Feb 13 19:00:15.386547 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Feb 13 19:00:15.386557 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Feb 13 19:00:15.386568 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
Feb 13 19:00:15.386580 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Feb 13 19:00:15.386591 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Feb 13 19:00:15.386602 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Feb 13 19:00:15.386612 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
Feb 13 19:00:15.386622 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Feb 13 19:00:15.386633 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Feb 13 19:00:15.386643 systemd[1]: systemd-fsck-root.service: Deactivated successfully.
Feb 13 19:00:15.386653 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device.
Feb 13 19:00:15.386665 systemd[1]: systemd-fsck-usr.service: Deactivated successfully.
Feb 13 19:00:15.386675 systemd[1]: Stopped systemd-fsck-usr.service.
Feb 13 19:00:15.386685 kernel: loop: module loaded
Feb 13 19:00:15.386695 systemd[1]: Starting systemd-journald.service - Journal Service...
Feb 13 19:00:15.386705 kernel: ACPI: bus type drm_connector registered
Feb 13 19:00:15.386715 kernel: fuse: init (API version 7.39)
Feb 13 19:00:15.386724 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Feb 13 19:00:15.386735 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Feb 13 19:00:15.386745 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
Feb 13 19:00:15.386755 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Feb 13 19:00:15.386766 systemd[1]: verity-setup.service: Deactivated successfully.
Feb 13 19:00:15.386776 systemd[1]: Stopped verity-setup.service.
Feb 13 19:00:15.386811 systemd-journald[1111]: Collecting audit messages is disabled.
Feb 13 19:00:15.386834 systemd-journald[1111]: Journal started
Feb 13 19:00:15.386861 systemd-journald[1111]: Runtime Journal (/run/log/journal/fc844ed9c558439c8eb15225fea9a5d6) is 5.9M, max 47.3M, 41.4M free.
Feb 13 19:00:15.129059 systemd[1]: Queued start job for default target multi-user.target.
Feb 13 19:00:15.155617 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6.
Feb 13 19:00:15.156074 systemd[1]: systemd-journald.service: Deactivated successfully.
Feb 13 19:00:15.390906 systemd[1]: Started systemd-journald.service - Journal Service.
Feb 13 19:00:15.391634 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
Feb 13 19:00:15.392978 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
Feb 13 19:00:15.394438 systemd[1]: Mounted media.mount - External Media Directory.
Feb 13 19:00:15.395654 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
Feb 13 19:00:15.396987 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System.
Feb 13 19:00:15.398297 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp.
Feb 13 19:00:15.399573 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files.
Feb 13 19:00:15.401115 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Feb 13 19:00:15.402730 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Feb 13 19:00:15.402874 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
Feb 13 19:00:15.404405 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Feb 13 19:00:15.404558 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Feb 13 19:00:15.406122 systemd[1]: modprobe@drm.service: Deactivated successfully.
Feb 13 19:00:15.406268 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Feb 13 19:00:15.407834 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Feb 13 19:00:15.407996 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Feb 13 19:00:15.409589 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Feb 13 19:00:15.409742 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse.
Feb 13 19:00:15.411194 systemd[1]: modprobe@loop.service: Deactivated successfully.
Feb 13 19:00:15.411342 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Feb 13 19:00:15.413107 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Feb 13 19:00:15.414578 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Feb 13 19:00:15.416325 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems.
Feb 13 19:00:15.429905 systemd[1]: Reached target network-pre.target - Preparation for Network.
Feb 13 19:00:15.445139 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System...
Feb 13 19:00:15.447636 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System...
Feb 13 19:00:15.448952 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Feb 13 19:00:15.449007 systemd[1]: Reached target local-fs.target - Local File Systems.
Feb 13 19:00:15.451168 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink).
Feb 13 19:00:15.453862 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown...
Feb 13 19:00:15.456422 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache...
Feb 13 19:00:15.457788 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Feb 13 19:00:15.459926 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database...
Feb 13 19:00:15.462430 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage...
Feb 13 19:00:15.463921 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Feb 13 19:00:15.466284 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed...
Feb 13 19:00:15.467597 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Feb 13 19:00:15.470851 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Feb 13 19:00:15.474522 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/...
Feb 13 19:00:15.478668 systemd[1]: Starting systemd-sysusers.service - Create System Users...
Feb 13 19:00:15.483284 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Feb 13 19:00:15.485008 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System.
Feb 13 19:00:15.486190 systemd-journald[1111]: Time spent on flushing to /var/log/journal/fc844ed9c558439c8eb15225fea9a5d6 is 12.998ms for 861 entries.
Feb 13 19:00:15.486190 systemd-journald[1111]: System Journal (/var/log/journal/fc844ed9c558439c8eb15225fea9a5d6) is 8.0M, max 195.6M, 187.6M free.
Feb 13 19:00:15.505483 systemd-journald[1111]: Received client request to flush runtime journal.
Feb 13 19:00:15.491136 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System.
Feb 13 19:00:15.492962 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown.
Feb 13 19:00:15.494772 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed.
Feb 13 19:00:15.500688 systemd[1]: Reached target first-boot-complete.target - First Boot Complete.
Feb 13 19:00:15.510174 kernel: loop0: detected capacity change from 0 to 116784
Feb 13 19:00:15.511729 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk...
Feb 13 19:00:15.516565 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization...
Feb 13 19:00:15.519025 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage.
Feb 13 19:00:15.521429 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Feb 13 19:00:15.528983 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
Feb 13 19:00:15.536775 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
Feb 13 19:00:15.538073 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk.
Feb 13 19:00:15.540921 udevadm[1170]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation.service, lvm2-activation-early.service not to pull it in.
Feb 13 19:00:15.556797 systemd[1]: Finished systemd-sysusers.service - Create System Users.
Feb 13 19:00:15.565167 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Feb 13 19:00:15.571594 kernel: loop1: detected capacity change from 0 to 113552
Feb 13 19:00:15.592991 systemd-tmpfiles[1177]: ACLs are not supported, ignoring.
Feb 13 19:00:15.593010 systemd-tmpfiles[1177]: ACLs are not supported, ignoring.
Feb 13 19:00:15.602020 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Feb 13 19:00:15.605997 kernel: loop2: detected capacity change from 0 to 194096
Feb 13 19:00:15.644188 kernel: loop3: detected capacity change from 0 to 116784
Feb 13 19:00:15.651990 kernel: loop4: detected capacity change from 0 to 113552
Feb 13 19:00:15.676002 kernel: loop5: detected capacity change from 0 to 194096
Feb 13 19:00:15.689989 (sd-merge)[1182]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes'.
Feb 13 19:00:15.691426 (sd-merge)[1182]: Merged extensions into '/usr'.
Feb 13 19:00:15.695053 systemd[1]: Reloading requested from client PID 1158 ('systemd-sysext') (unit systemd-sysext.service)...
Feb 13 19:00:15.695074 systemd[1]: Reloading...
Feb 13 19:00:15.757017 zram_generator::config[1208]: No configuration found.
Feb 13 19:00:15.814835 ldconfig[1153]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
Feb 13 19:00:15.871821 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Feb 13 19:00:15.909407 systemd[1]: Reloading finished in 213 ms.
Feb 13 19:00:15.938438 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache.
Feb 13 19:00:15.940212 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/.
Feb 13 19:00:15.959273 systemd[1]: Starting ensure-sysext.service...
Feb 13 19:00:15.961901 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Feb 13 19:00:15.971252 systemd[1]: Reloading requested from client PID 1243 ('systemctl') (unit ensure-sysext.service)...
Feb 13 19:00:15.971271 systemd[1]: Reloading...
Feb 13 19:00:15.980713 systemd-tmpfiles[1244]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
Feb 13 19:00:15.980924 systemd-tmpfiles[1244]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring.
Feb 13 19:00:15.981733 systemd-tmpfiles[1244]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
Feb 13 19:00:15.981938 systemd-tmpfiles[1244]: ACLs are not supported, ignoring.
Feb 13 19:00:15.982005 systemd-tmpfiles[1244]: ACLs are not supported, ignoring.
Feb 13 19:00:15.984609 systemd-tmpfiles[1244]: Detected autofs mount point /boot during canonicalization of boot.
Feb 13 19:00:15.984623 systemd-tmpfiles[1244]: Skipping /boot
Feb 13 19:00:15.993361 systemd-tmpfiles[1244]: Detected autofs mount point /boot during canonicalization of boot.
Feb 13 19:00:15.993378 systemd-tmpfiles[1244]: Skipping /boot
Feb 13 19:00:16.030018 zram_generator::config[1269]: No configuration found.
Feb 13 19:00:16.127422 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Feb 13 19:00:16.163726 systemd[1]: Reloading finished in 192 ms.
Feb 13 19:00:16.184322 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database.
Feb 13 19:00:16.198530 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Feb 13 19:00:16.206158 systemd[1]: Starting audit-rules.service - Load Audit Rules...
Feb 13 19:00:16.208654 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs...
Feb 13 19:00:16.211173 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog...
Feb 13 19:00:16.214283 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Feb 13 19:00:16.218203 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Feb 13 19:00:16.223066 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP...
Feb 13 19:00:16.227446 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Feb 13 19:00:16.229045 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Feb 13 19:00:16.232057 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Feb 13 19:00:16.235435 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Feb 13 19:00:16.236649 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Feb 13 19:00:16.237412 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Feb 13 19:00:16.238989 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Feb 13 19:00:16.240830 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Feb 13 19:00:16.240989 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Feb 13 19:00:16.251506 systemd[1]: modprobe@loop.service: Deactivated successfully.
Feb 13 19:00:16.251726 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Feb 13 19:00:16.254769 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog.
Feb 13 19:00:16.264031 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP.
Feb 13 19:00:16.267957 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Feb 13 19:00:16.271770 systemd-udevd[1312]: Using default interface naming scheme 'v255'.
Feb 13 19:00:16.281974 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Feb 13 19:00:16.285159 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Feb 13 19:00:16.288004 augenrules[1342]: No rules
Feb 13 19:00:16.287490 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Feb 13 19:00:16.291149 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Feb 13 19:00:16.292334 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Feb 13 19:00:16.294632 systemd[1]: Starting systemd-update-done.service - Update is Completed...
Feb 13 19:00:16.306844 systemd[1]: Starting systemd-userdbd.service - User Database Manager...
Feb 13 19:00:16.310232 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Feb 13 19:00:16.313565 systemd[1]: Finished ensure-sysext.service.
Feb 13 19:00:16.314720 systemd[1]: audit-rules.service: Deactivated successfully.
Feb 13 19:00:16.314881 systemd[1]: Finished audit-rules.service - Load Audit Rules.
Feb 13 19:00:16.317417 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs.
Feb 13 19:00:16.321668 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Feb 13 19:00:16.321811 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Feb 13 19:00:16.323349 systemd[1]: modprobe@drm.service: Deactivated successfully.
Feb 13 19:00:16.323480 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Feb 13 19:00:16.324833 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Feb 13 19:00:16.324962 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Feb 13 19:00:16.330987 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 39 scanned by (udev-worker) (1368)
Feb 13 19:00:16.334611 systemd[1]: modprobe@loop.service: Deactivated successfully.
Feb 13 19:00:16.334766 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Feb 13 19:00:16.363411 systemd[1]: Condition check resulted in dev-ttyAMA0.device - /dev/ttyAMA0 being skipped.
Feb 13 19:00:16.368189 systemd[1]: Finished systemd-update-done.service - Update is Completed.
Feb 13 19:00:16.378832 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Feb 13 19:00:16.394614 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM...
Feb 13 19:00:16.403698 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Feb 13 19:00:16.404787 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Feb 13 19:00:16.404861 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Feb 13 19:00:16.406700 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization...
Feb 13 19:00:16.411075 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Feb 13 19:00:16.411323 systemd[1]: Started systemd-userdbd.service - User Database Manager.
Feb 13 19:00:16.424388 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM.
Feb 13 19:00:16.434889 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Feb 13 19:00:16.458232 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization.
Feb 13 19:00:16.471791 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes...
Feb 13 19:00:16.474225 systemd-networkd[1385]: lo: Link UP
Feb 13 19:00:16.474233 systemd-networkd[1385]: lo: Gained carrier
Feb 13 19:00:16.478674 systemd-networkd[1385]: Enumeration completed
Feb 13 19:00:16.479009 systemd[1]: Started systemd-networkd.service - Network Configuration.
Feb 13 19:00:16.482976 systemd-networkd[1385]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Feb 13 19:00:16.482982 systemd-networkd[1385]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Feb 13 19:00:16.483788 systemd-networkd[1385]: eth0: Link UP
Feb 13 19:00:16.483798 systemd-networkd[1385]: eth0: Gained carrier
Feb 13 19:00:16.483812 systemd-networkd[1385]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Feb 13 19:00:16.494930 systemd-resolved[1310]: Positive Trust Anchors:
Feb 13 19:00:16.494950 systemd-resolved[1310]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Feb 13 19:00:16.494991 systemd-resolved[1310]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Feb 13 19:00:16.497210 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured...
Feb 13 19:00:16.498572 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization.
Feb 13 19:00:16.500042 systemd[1]: Reached target time-set.target - System Time Set.
Feb 13 19:00:16.502200 systemd-resolved[1310]: Defaulting to hostname 'linux'.
Feb 13 19:00:16.504130 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Feb 13 19:00:16.505337 systemd[1]: Reached target network.target - Network.
Feb 13 19:00:16.506282 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Feb 13 19:00:16.514088 systemd-networkd[1385]: eth0: DHCPv4 address 10.0.0.86/16, gateway 10.0.0.1 acquired from 10.0.0.1
Feb 13 19:00:16.514895 systemd-timesyncd[1386]: Network configuration changed, trying to establish connection.
Feb 13 19:00:16.515814 lvm[1402]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Feb 13 19:00:16.082389 systemd-resolved[1310]: Clock change detected. Flushing caches.
Feb 13 19:00:16.087902 systemd-journald[1111]: Time jumped backwards, rotating.
Feb 13 19:00:16.082499 systemd-timesyncd[1386]: Contacted time server 10.0.0.1:123 (10.0.0.1).
Feb 13 19:00:16.082570 systemd-timesyncd[1386]: Initial clock synchronization to Thu 2025-02-13 19:00:16.082339 UTC.
Feb 13 19:00:16.101325 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Feb 13 19:00:16.117870 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes.
Feb 13 19:00:16.119581 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Feb 13 19:00:16.120771 systemd[1]: Reached target sysinit.target - System Initialization.
Feb 13 19:00:16.121968 systemd[1]: Started motdgen.path - Watch for update engine configuration changes.
Feb 13 19:00:16.123301 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data.
Feb 13 19:00:16.124759 systemd[1]: Started logrotate.timer - Daily rotation of log files.
Feb 13 19:00:16.125954 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information..
Feb 13 19:00:16.127211 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories.
Feb 13 19:00:16.128650 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate).
Feb 13 19:00:16.128693 systemd[1]: Reached target paths.target - Path Units.
Feb 13 19:00:16.129587 systemd[1]: Reached target timers.target - Timer Units.
Feb 13 19:00:16.131480 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket.
Feb 13 19:00:16.133965 systemd[1]: Starting docker.socket - Docker Socket for the API...
Feb 13 19:00:16.151317 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket.
Feb 13 19:00:16.153810 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes...
Feb 13 19:00:16.155639 systemd[1]: Listening on docker.socket - Docker Socket for the API.
Feb 13 19:00:16.156880 systemd[1]: Reached target sockets.target - Socket Units.
Feb 13 19:00:16.157902 systemd[1]: Reached target basic.target - Basic System. Feb 13 19:00:16.158952 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Feb 13 19:00:16.158981 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Feb 13 19:00:16.160055 systemd[1]: Starting containerd.service - containerd container runtime... Feb 13 19:00:16.162307 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Feb 13 19:00:16.163401 lvm[1412]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Feb 13 19:00:16.166181 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Feb 13 19:00:16.173960 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Feb 13 19:00:16.175348 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Feb 13 19:00:16.177438 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Feb 13 19:00:16.181379 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Feb 13 19:00:16.183595 jq[1415]: false Feb 13 19:00:16.184227 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Feb 13 19:00:16.187050 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Feb 13 19:00:16.195029 systemd[1]: Starting systemd-logind.service - User Login Management... 
Feb 13 19:00:16.207617 extend-filesystems[1416]: Found loop3 Feb 13 19:00:16.207617 extend-filesystems[1416]: Found loop4 Feb 13 19:00:16.207617 extend-filesystems[1416]: Found loop5 Feb 13 19:00:16.207617 extend-filesystems[1416]: Found vda Feb 13 19:00:16.207617 extend-filesystems[1416]: Found vda1 Feb 13 19:00:16.207617 extend-filesystems[1416]: Found vda2 Feb 13 19:00:16.207617 extend-filesystems[1416]: Found vda3 Feb 13 19:00:16.207617 extend-filesystems[1416]: Found usr Feb 13 19:00:16.207617 extend-filesystems[1416]: Found vda4 Feb 13 19:00:16.207617 extend-filesystems[1416]: Found vda6 Feb 13 19:00:16.207617 extend-filesystems[1416]: Found vda7 Feb 13 19:00:16.207617 extend-filesystems[1416]: Found vda9 Feb 13 19:00:16.207617 extend-filesystems[1416]: Checking size of /dev/vda9 Feb 13 19:00:16.206878 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Feb 13 19:00:16.236417 extend-filesystems[1416]: Resized partition /dev/vda9 Feb 13 19:00:16.215667 dbus-daemon[1414]: [system] SELinux support is enabled Feb 13 19:00:16.207414 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Feb 13 19:00:16.209491 systemd[1]: Starting update-engine.service - Update Engine... Feb 13 19:00:16.241065 jq[1433]: true Feb 13 19:00:16.214962 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Feb 13 19:00:16.245041 extend-filesystems[1437]: resize2fs 1.47.1 (20-May-2024) Feb 13 19:00:16.265331 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 39 scanned by (udev-worker) (1360) Feb 13 19:00:16.265379 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 1864699 blocks Feb 13 19:00:16.221007 systemd[1]: Started dbus.service - D-Bus System Message Bus. 
Feb 13 19:00:16.225928 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes. Feb 13 19:00:16.235724 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Feb 13 19:00:16.235880 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Feb 13 19:00:16.236172 systemd[1]: motdgen.service: Deactivated successfully. Feb 13 19:00:16.236327 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Feb 13 19:00:16.242432 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Feb 13 19:00:16.242615 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Feb 13 19:00:16.266420 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Feb 13 19:00:16.266449 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Feb 13 19:00:16.267508 (ntainerd)[1442]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Feb 13 19:00:16.268562 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Feb 13 19:00:16.268588 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Feb 13 19:00:16.275141 jq[1441]: true Feb 13 19:00:16.281096 systemd-logind[1424]: Watching system buttons on /dev/input/event0 (Power Button) Feb 13 19:00:16.281586 systemd-logind[1424]: New seat seat0. Feb 13 19:00:16.285440 systemd[1]: Started systemd-logind.service - User Login Management. 
Feb 13 19:00:16.293781 kernel: EXT4-fs (vda9): resized filesystem to 1864699 Feb 13 19:00:16.294024 tar[1440]: linux-arm64/helm Feb 13 19:00:16.313582 extend-filesystems[1437]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required Feb 13 19:00:16.313582 extend-filesystems[1437]: old_desc_blocks = 1, new_desc_blocks = 1 Feb 13 19:00:16.313582 extend-filesystems[1437]: The filesystem on /dev/vda9 is now 1864699 (4k) blocks long. Feb 13 19:00:16.309331 systemd[1]: extend-filesystems.service: Deactivated successfully. Feb 13 19:00:16.321538 extend-filesystems[1416]: Resized filesystem in /dev/vda9 Feb 13 19:00:16.311305 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Feb 13 19:00:16.324586 update_engine[1431]: I20250213 19:00:16.321157 1431 main.cc:92] Flatcar Update Engine starting Feb 13 19:00:16.326586 systemd[1]: Started update-engine.service - Update Engine. Feb 13 19:00:16.329740 update_engine[1431]: I20250213 19:00:16.329456 1431 update_check_scheduler.cc:74] Next update check in 8m57s Feb 13 19:00:16.338505 systemd[1]: Started locksmithd.service - Cluster reboot manager. Feb 13 19:00:16.348479 bash[1469]: Updated "/home/core/.ssh/authorized_keys" Feb 13 19:00:16.351793 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Feb 13 19:00:16.354371 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met. Feb 13 19:00:16.402984 locksmithd[1470]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Feb 13 19:00:16.505745 containerd[1442]: time="2025-02-13T19:00:16.505188292Z" level=info msg="starting containerd" revision=9b2ad7760328148397346d10c7b2004271249db4 version=v1.7.23 Feb 13 19:00:16.516245 sshd_keygen[1436]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Feb 13 19:00:16.533374 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. 
Feb 13 19:00:16.540094 containerd[1442]: time="2025-02-13T19:00:16.540026412Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Feb 13 19:00:16.541745 containerd[1442]: time="2025-02-13T19:00:16.541664972Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.74-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Feb 13 19:00:16.541745 containerd[1442]: time="2025-02-13T19:00:16.541713012Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Feb 13 19:00:16.541745 containerd[1442]: time="2025-02-13T19:00:16.541731092Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Feb 13 19:00:16.541919 containerd[1442]: time="2025-02-13T19:00:16.541899092Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1 Feb 13 19:00:16.541944 containerd[1442]: time="2025-02-13T19:00:16.541924252Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1 Feb 13 19:00:16.541997 containerd[1442]: time="2025-02-13T19:00:16.541981252Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1 Feb 13 19:00:16.542016 containerd[1442]: time="2025-02-13T19:00:16.541997212Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Feb 13 19:00:16.542200 containerd[1442]: time="2025-02-13T19:00:16.542169612Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." 
error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Feb 13 19:00:16.542200 containerd[1442]: time="2025-02-13T19:00:16.542189332Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Feb 13 19:00:16.542297 containerd[1442]: time="2025-02-13T19:00:16.542204092Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1 Feb 13 19:00:16.542297 containerd[1442]: time="2025-02-13T19:00:16.542213532Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Feb 13 19:00:16.542343 containerd[1442]: time="2025-02-13T19:00:16.542324812Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Feb 13 19:00:16.542573 containerd[1442]: time="2025-02-13T19:00:16.542539012Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Feb 13 19:00:16.542669 containerd[1442]: time="2025-02-13T19:00:16.542653132Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Feb 13 19:00:16.542690 containerd[1442]: time="2025-02-13T19:00:16.542670452Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Feb 13 19:00:16.542762 containerd[1442]: time="2025-02-13T19:00:16.542749092Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." 
type=io.containerd.metadata.v1 Feb 13 19:00:16.542809 containerd[1442]: time="2025-02-13T19:00:16.542797452Z" level=info msg="metadata content store policy set" policy=shared Feb 13 19:00:16.547524 containerd[1442]: time="2025-02-13T19:00:16.547350212Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Feb 13 19:00:16.547524 containerd[1442]: time="2025-02-13T19:00:16.547408652Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Feb 13 19:00:16.547524 containerd[1442]: time="2025-02-13T19:00:16.547425452Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1 Feb 13 19:00:16.547524 containerd[1442]: time="2025-02-13T19:00:16.547442012Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1 Feb 13 19:00:16.547524 containerd[1442]: time="2025-02-13T19:00:16.547462972Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Feb 13 19:00:16.547674 containerd[1442]: time="2025-02-13T19:00:16.547623332Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Feb 13 19:00:16.547854 systemd[1]: Starting issuegen.service - Generate /run/issue... Feb 13 19:00:16.549168 containerd[1442]: time="2025-02-13T19:00:16.547851652Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Feb 13 19:00:16.549168 containerd[1442]: time="2025-02-13T19:00:16.547948252Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2 Feb 13 19:00:16.549168 containerd[1442]: time="2025-02-13T19:00:16.547963972Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." 
type=io.containerd.sandbox.store.v1 Feb 13 19:00:16.549168 containerd[1442]: time="2025-02-13T19:00:16.547978132Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1 Feb 13 19:00:16.549168 containerd[1442]: time="2025-02-13T19:00:16.547992732Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Feb 13 19:00:16.549168 containerd[1442]: time="2025-02-13T19:00:16.548005932Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Feb 13 19:00:16.549168 containerd[1442]: time="2025-02-13T19:00:16.548018572Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Feb 13 19:00:16.549168 containerd[1442]: time="2025-02-13T19:00:16.548032252Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Feb 13 19:00:16.549168 containerd[1442]: time="2025-02-13T19:00:16.548050692Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Feb 13 19:00:16.549168 containerd[1442]: time="2025-02-13T19:00:16.548064332Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Feb 13 19:00:16.549168 containerd[1442]: time="2025-02-13T19:00:16.548076812Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Feb 13 19:00:16.549168 containerd[1442]: time="2025-02-13T19:00:16.548089972Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Feb 13 19:00:16.549168 containerd[1442]: time="2025-02-13T19:00:16.548113932Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." 
type=io.containerd.grpc.v1 Feb 13 19:00:16.549168 containerd[1442]: time="2025-02-13T19:00:16.548128812Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Feb 13 19:00:16.549954 containerd[1442]: time="2025-02-13T19:00:16.548142092Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Feb 13 19:00:16.549954 containerd[1442]: time="2025-02-13T19:00:16.548155252Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Feb 13 19:00:16.549954 containerd[1442]: time="2025-02-13T19:00:16.548166972Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Feb 13 19:00:16.549954 containerd[1442]: time="2025-02-13T19:00:16.548181012Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Feb 13 19:00:16.549954 containerd[1442]: time="2025-02-13T19:00:16.548192452Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Feb 13 19:00:16.549954 containerd[1442]: time="2025-02-13T19:00:16.548204532Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Feb 13 19:00:16.549954 containerd[1442]: time="2025-02-13T19:00:16.548217932Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1 Feb 13 19:00:16.549954 containerd[1442]: time="2025-02-13T19:00:16.548254372Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1 Feb 13 19:00:16.549954 containerd[1442]: time="2025-02-13T19:00:16.548268292Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Feb 13 19:00:16.549954 containerd[1442]: time="2025-02-13T19:00:16.548280412Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." 
type=io.containerd.grpc.v1 Feb 13 19:00:16.549954 containerd[1442]: time="2025-02-13T19:00:16.548292532Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Feb 13 19:00:16.549954 containerd[1442]: time="2025-02-13T19:00:16.548307652Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1 Feb 13 19:00:16.549954 containerd[1442]: time="2025-02-13T19:00:16.548332692Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1 Feb 13 19:00:16.549954 containerd[1442]: time="2025-02-13T19:00:16.548346492Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Feb 13 19:00:16.549954 containerd[1442]: time="2025-02-13T19:00:16.548357012Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Feb 13 19:00:16.550226 containerd[1442]: time="2025-02-13T19:00:16.548541892Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Feb 13 19:00:16.550226 containerd[1442]: time="2025-02-13T19:00:16.548563212Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1 Feb 13 19:00:16.550226 containerd[1442]: time="2025-02-13T19:00:16.548574252Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Feb 13 19:00:16.550226 containerd[1442]: time="2025-02-13T19:00:16.548586652Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1 Feb 13 19:00:16.550226 containerd[1442]: time="2025-02-13T19:00:16.548595732Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." 
type=io.containerd.grpc.v1 Feb 13 19:00:16.550226 containerd[1442]: time="2025-02-13T19:00:16.548613412Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1 Feb 13 19:00:16.550226 containerd[1442]: time="2025-02-13T19:00:16.548624652Z" level=info msg="NRI interface is disabled by configuration." Feb 13 19:00:16.550226 containerd[1442]: time="2025-02-13T19:00:16.548638132Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1 Feb 13 19:00:16.550375 containerd[1442]: time="2025-02-13T19:00:16.548998492Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] 
Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Feb 13 19:00:16.550375 containerd[1442]: time="2025-02-13T19:00:16.549046732Z" level=info msg="Connect containerd service" Feb 13 19:00:16.550375 containerd[1442]: time="2025-02-13T19:00:16.549086052Z" level=info msg="using legacy CRI server" Feb 13 19:00:16.550375 containerd[1442]: time="2025-02-13T19:00:16.549098332Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Feb 13 19:00:16.550375 containerd[1442]: time="2025-02-13T19:00:16.549359492Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Feb 13 19:00:16.550375 containerd[1442]: time="2025-02-13T19:00:16.550024212Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: 
failed to load cni config" Feb 13 19:00:16.550568 containerd[1442]: time="2025-02-13T19:00:16.550486932Z" level=info msg="Start subscribing containerd event" Feb 13 19:00:16.550568 containerd[1442]: time="2025-02-13T19:00:16.550553372Z" level=info msg="Start recovering state" Feb 13 19:00:16.550729 containerd[1442]: time="2025-02-13T19:00:16.550671252Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Feb 13 19:00:16.550729 containerd[1442]: time="2025-02-13T19:00:16.550722492Z" level=info msg=serving... address=/run/containerd/containerd.sock Feb 13 19:00:16.550779 containerd[1442]: time="2025-02-13T19:00:16.550671772Z" level=info msg="Start event monitor" Feb 13 19:00:16.550779 containerd[1442]: time="2025-02-13T19:00:16.550758812Z" level=info msg="Start snapshots syncer" Feb 13 19:00:16.550779 containerd[1442]: time="2025-02-13T19:00:16.550769052Z" level=info msg="Start cni network conf syncer for default" Feb 13 19:00:16.550779 containerd[1442]: time="2025-02-13T19:00:16.550775372Z" level=info msg="Start streaming server" Feb 13 19:00:16.552110 containerd[1442]: time="2025-02-13T19:00:16.550894212Z" level=info msg="containerd successfully booted in 0.047051s" Feb 13 19:00:16.550996 systemd[1]: Started containerd.service - containerd container runtime. Feb 13 19:00:16.555061 systemd[1]: issuegen.service: Deactivated successfully. Feb 13 19:00:16.557275 systemd[1]: Finished issuegen.service - Generate /run/issue. Feb 13 19:00:16.565617 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Feb 13 19:00:16.577555 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Feb 13 19:00:16.588656 systemd[1]: Started getty@tty1.service - Getty on tty1. Feb 13 19:00:16.591282 systemd[1]: Started serial-getty@ttyAMA0.service - Serial Getty on ttyAMA0. Feb 13 19:00:16.592616 systemd[1]: Reached target getty.target - Login Prompts. 
Feb 13 19:00:16.678321 tar[1440]: linux-arm64/LICENSE Feb 13 19:00:16.678418 tar[1440]: linux-arm64/README.md Feb 13 19:00:16.690501 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Feb 13 19:00:17.774410 systemd-networkd[1385]: eth0: Gained IPv6LL Feb 13 19:00:17.777324 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Feb 13 19:00:17.779071 systemd[1]: Reached target network-online.target - Network is Online. Feb 13 19:00:17.791538 systemd[1]: Starting coreos-metadata.service - QEMU metadata agent... Feb 13 19:00:17.794220 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Feb 13 19:00:17.796522 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Feb 13 19:00:17.814341 systemd[1]: coreos-metadata.service: Deactivated successfully. Feb 13 19:00:17.814533 systemd[1]: Finished coreos-metadata.service - QEMU metadata agent. Feb 13 19:00:17.816365 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Feb 13 19:00:17.818814 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Feb 13 19:00:18.323560 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Feb 13 19:00:18.325845 systemd[1]: Reached target multi-user.target - Multi-User System. Feb 13 19:00:18.328131 (kubelet)[1524]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Feb 13 19:00:18.330352 systemd[1]: Startup finished in 605ms (kernel) + 4.980s (initrd) + 4.089s (userspace) = 9.676s. 
Feb 13 19:00:18.344146 agetty[1498]: failed to open credentials directory Feb 13 19:00:18.344863 agetty[1499]: failed to open credentials directory Feb 13 19:00:18.855005 kubelet[1524]: E0213 19:00:18.854903 1524 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Feb 13 19:00:18.857458 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Feb 13 19:00:18.857623 systemd[1]: kubelet.service: Failed with result 'exit-code'. Feb 13 19:00:21.753017 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Feb 13 19:00:21.754186 systemd[1]: Started sshd@0-10.0.0.86:22-10.0.0.1:43050.service - OpenSSH per-connection server daemon (10.0.0.1:43050). Feb 13 19:00:21.874939 sshd[1539]: Accepted publickey for core from 10.0.0.1 port 43050 ssh2: RSA SHA256:eK9a9kwNbEFhVjkl7Sg3YVwfSWYuGJlM/sx2i90U/0s Feb 13 19:00:21.876823 sshd-session[1539]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 19:00:21.890727 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Feb 13 19:00:21.897534 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Feb 13 19:00:21.899527 systemd-logind[1424]: New session 1 of user core. Feb 13 19:00:21.911265 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Feb 13 19:00:21.913804 systemd[1]: Starting user@500.service - User Manager for UID 500... Feb 13 19:00:21.923370 (systemd)[1543]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Feb 13 19:00:22.021206 systemd[1543]: Queued start job for default target default.target. 
Feb 13 19:00:22.029301 systemd[1543]: Created slice app.slice - User Application Slice. Feb 13 19:00:22.029354 systemd[1543]: Reached target paths.target - Paths. Feb 13 19:00:22.029367 systemd[1543]: Reached target timers.target - Timers. Feb 13 19:00:22.030721 systemd[1543]: Starting dbus.socket - D-Bus User Message Bus Socket... Feb 13 19:00:22.044382 systemd[1543]: Listening on dbus.socket - D-Bus User Message Bus Socket. Feb 13 19:00:22.044510 systemd[1543]: Reached target sockets.target - Sockets. Feb 13 19:00:22.044524 systemd[1543]: Reached target basic.target - Basic System. Feb 13 19:00:22.044562 systemd[1543]: Reached target default.target - Main User Target. Feb 13 19:00:22.044589 systemd[1543]: Startup finished in 114ms. Feb 13 19:00:22.044805 systemd[1]: Started user@500.service - User Manager for UID 500. Feb 13 19:00:22.046269 systemd[1]: Started session-1.scope - Session 1 of User core. Feb 13 19:00:22.150943 systemd[1]: Started sshd@1-10.0.0.86:22-10.0.0.1:43052.service - OpenSSH per-connection server daemon (10.0.0.1:43052). Feb 13 19:00:22.189922 sshd[1554]: Accepted publickey for core from 10.0.0.1 port 43052 ssh2: RSA SHA256:eK9a9kwNbEFhVjkl7Sg3YVwfSWYuGJlM/sx2i90U/0s Feb 13 19:00:22.191171 sshd-session[1554]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 19:00:22.196260 systemd-logind[1424]: New session 2 of user core. Feb 13 19:00:22.202415 systemd[1]: Started session-2.scope - Session 2 of User core. Feb 13 19:00:22.254586 sshd[1556]: Connection closed by 10.0.0.1 port 43052 Feb 13 19:00:22.254938 sshd-session[1554]: pam_unix(sshd:session): session closed for user core Feb 13 19:00:22.266923 systemd[1]: sshd@1-10.0.0.86:22-10.0.0.1:43052.service: Deactivated successfully. Feb 13 19:00:22.268373 systemd[1]: session-2.scope: Deactivated successfully. Feb 13 19:00:22.269725 systemd-logind[1424]: Session 2 logged out. Waiting for processes to exit. 
Feb 13 19:00:22.271034 systemd[1]: Started sshd@2-10.0.0.86:22-10.0.0.1:43058.service - OpenSSH per-connection server daemon (10.0.0.1:43058). Feb 13 19:00:22.271996 systemd-logind[1424]: Removed session 2. Feb 13 19:00:22.314375 sshd[1561]: Accepted publickey for core from 10.0.0.1 port 43058 ssh2: RSA SHA256:eK9a9kwNbEFhVjkl7Sg3YVwfSWYuGJlM/sx2i90U/0s Feb 13 19:00:22.315833 sshd-session[1561]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 19:00:22.320118 systemd-logind[1424]: New session 3 of user core. Feb 13 19:00:22.335444 systemd[1]: Started session-3.scope - Session 3 of User core. Feb 13 19:00:22.386568 sshd[1563]: Connection closed by 10.0.0.1 port 43058 Feb 13 19:00:22.386441 sshd-session[1561]: pam_unix(sshd:session): session closed for user core Feb 13 19:00:22.399704 systemd[1]: sshd@2-10.0.0.86:22-10.0.0.1:43058.service: Deactivated successfully. Feb 13 19:00:22.402547 systemd[1]: session-3.scope: Deactivated successfully. Feb 13 19:00:22.404099 systemd-logind[1424]: Session 3 logged out. Waiting for processes to exit. Feb 13 19:00:22.405676 systemd[1]: Started sshd@3-10.0.0.86:22-10.0.0.1:43062.service - OpenSSH per-connection server daemon (10.0.0.1:43062). Feb 13 19:00:22.407432 systemd-logind[1424]: Removed session 3. Feb 13 19:00:22.463199 sshd[1568]: Accepted publickey for core from 10.0.0.1 port 43062 ssh2: RSA SHA256:eK9a9kwNbEFhVjkl7Sg3YVwfSWYuGJlM/sx2i90U/0s Feb 13 19:00:22.464571 sshd-session[1568]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 19:00:22.468865 systemd-logind[1424]: New session 4 of user core. Feb 13 19:00:22.476474 systemd[1]: Started session-4.scope - Session 4 of User core. Feb 13 19:00:22.534521 sshd[1570]: Connection closed by 10.0.0.1 port 43062 Feb 13 19:00:22.535045 sshd-session[1568]: pam_unix(sshd:session): session closed for user core Feb 13 19:00:22.543872 systemd[1]: sshd@3-10.0.0.86:22-10.0.0.1:43062.service: Deactivated successfully. 
Feb 13 19:00:22.545554 systemd[1]: session-4.scope: Deactivated successfully. Feb 13 19:00:22.546783 systemd-logind[1424]: Session 4 logged out. Waiting for processes to exit. Feb 13 19:00:22.548155 systemd[1]: Started sshd@4-10.0.0.86:22-10.0.0.1:59714.service - OpenSSH per-connection server daemon (10.0.0.1:59714). Feb 13 19:00:22.549084 systemd-logind[1424]: Removed session 4. Feb 13 19:00:22.602362 sshd[1575]: Accepted publickey for core from 10.0.0.1 port 59714 ssh2: RSA SHA256:eK9a9kwNbEFhVjkl7Sg3YVwfSWYuGJlM/sx2i90U/0s Feb 13 19:00:22.604046 sshd-session[1575]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 19:00:22.611628 systemd-logind[1424]: New session 5 of user core. Feb 13 19:00:22.627470 systemd[1]: Started session-5.scope - Session 5 of User core. Feb 13 19:00:22.690603 sudo[1578]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Feb 13 19:00:22.691168 sudo[1578]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Feb 13 19:00:23.113540 systemd[1]: Starting docker.service - Docker Application Container Engine... Feb 13 19:00:23.113628 (dockerd)[1598]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Feb 13 19:00:23.462643 dockerd[1598]: time="2025-02-13T19:00:23.462506532Z" level=info msg="Starting up" Feb 13 19:00:23.621644 dockerd[1598]: time="2025-02-13T19:00:23.621601492Z" level=info msg="Loading containers: start." Feb 13 19:00:23.773285 kernel: Initializing XFRM netlink socket Feb 13 19:00:23.836116 systemd-networkd[1385]: docker0: Link UP Feb 13 19:00:23.883805 dockerd[1598]: time="2025-02-13T19:00:23.883760052Z" level=info msg="Loading containers: done." Feb 13 19:00:23.904309 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck4234796219-merged.mount: Deactivated successfully. 
Feb 13 19:00:23.912144 dockerd[1598]: time="2025-02-13T19:00:23.912086772Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Feb 13 19:00:23.912277 dockerd[1598]: time="2025-02-13T19:00:23.912206132Z" level=info msg="Docker daemon" commit=41ca978a0a5400cc24b274137efa9f25517fcc0b containerd-snapshotter=false storage-driver=overlay2 version=27.3.1 Feb 13 19:00:23.912850 dockerd[1598]: time="2025-02-13T19:00:23.912535412Z" level=info msg="Daemon has completed initialization" Feb 13 19:00:23.947844 dockerd[1598]: time="2025-02-13T19:00:23.947776852Z" level=info msg="API listen on /run/docker.sock" Feb 13 19:00:23.947969 systemd[1]: Started docker.service - Docker Application Container Engine. Feb 13 19:00:24.637205 containerd[1442]: time="2025-02-13T19:00:24.637157492Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.30.10\"" Feb 13 19:00:25.276651 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount48655758.mount: Deactivated successfully. 
Feb 13 19:00:26.735123 containerd[1442]: time="2025-02-13T19:00:26.735070132Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.30.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:00:26.736113 containerd[1442]: time="2025-02-13T19:00:26.735895612Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.30.10: active requests=0, bytes read=29865209" Feb 13 19:00:26.738865 containerd[1442]: time="2025-02-13T19:00:26.737445732Z" level=info msg="ImageCreate event name:\"sha256:deaeae5e8513d8c5921aee5b515f0fc2ac63b71dfe965318f71eb49468e74a4f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:00:26.741404 containerd[1442]: time="2025-02-13T19:00:26.741356492Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:63b2b4b4e9b5dcb5b1b6cec9d5f5f538791a40cd8cb273ef530e6d6535aa0b43\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:00:26.742842 containerd[1442]: time="2025-02-13T19:00:26.742796532Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.30.10\" with image id \"sha256:deaeae5e8513d8c5921aee5b515f0fc2ac63b71dfe965318f71eb49468e74a4f\", repo tag \"registry.k8s.io/kube-apiserver:v1.30.10\", repo digest \"registry.k8s.io/kube-apiserver@sha256:63b2b4b4e9b5dcb5b1b6cec9d5f5f538791a40cd8cb273ef530e6d6535aa0b43\", size \"29862007\" in 2.10559708s" Feb 13 19:00:26.742889 containerd[1442]: time="2025-02-13T19:00:26.742845252Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.30.10\" returns image reference \"sha256:deaeae5e8513d8c5921aee5b515f0fc2ac63b71dfe965318f71eb49468e74a4f\"" Feb 13 19:00:26.764365 containerd[1442]: time="2025-02-13T19:00:26.764319612Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.30.10\"" Feb 13 19:00:28.358448 containerd[1442]: time="2025-02-13T19:00:28.358396412Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.30.10\" 
labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:00:28.359554 containerd[1442]: time="2025-02-13T19:00:28.359477652Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.30.10: active requests=0, bytes read=26898596" Feb 13 19:00:28.360800 containerd[1442]: time="2025-02-13T19:00:28.360752412Z" level=info msg="ImageCreate event name:\"sha256:e31753dd49b05da8fcb7deb26f2a5942a6747a0e6d4492f3dc8544123b97a3a2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:00:28.363574 containerd[1442]: time="2025-02-13T19:00:28.363513972Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:99b3336343ea48be24f1e64774825e9f8d5170bd2ed482ff336548eb824f5f58\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:00:28.364747 containerd[1442]: time="2025-02-13T19:00:28.364700652Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.30.10\" with image id \"sha256:e31753dd49b05da8fcb7deb26f2a5942a6747a0e6d4492f3dc8544123b97a3a2\", repo tag \"registry.k8s.io/kube-controller-manager:v1.30.10\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:99b3336343ea48be24f1e64774825e9f8d5170bd2ed482ff336548eb824f5f58\", size \"28302323\" in 1.60033352s" Feb 13 19:00:28.364747 containerd[1442]: time="2025-02-13T19:00:28.364735692Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.30.10\" returns image reference \"sha256:e31753dd49b05da8fcb7deb26f2a5942a6747a0e6d4492f3dc8544123b97a3a2\"" Feb 13 19:00:28.384079 containerd[1442]: time="2025-02-13T19:00:28.384023572Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.30.10\"" Feb 13 19:00:29.107912 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Feb 13 19:00:29.117456 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Feb 13 19:00:29.209630 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Feb 13 19:00:29.214795 (kubelet)[1885]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Feb 13 19:00:29.322140 kubelet[1885]: E0213 19:00:29.322085 1885 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Feb 13 19:00:29.326139 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Feb 13 19:00:29.326327 systemd[1]: kubelet.service: Failed with result 'exit-code'. Feb 13 19:00:29.490059 containerd[1442]: time="2025-02-13T19:00:29.489952932Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.30.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:00:29.491395 containerd[1442]: time="2025-02-13T19:00:29.491351612Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.30.10: active requests=0, bytes read=16164936" Feb 13 19:00:29.492518 containerd[1442]: time="2025-02-13T19:00:29.492474372Z" level=info msg="ImageCreate event name:\"sha256:ea60c047fad7c01bf50f1f0259a4aeea2cc4401850d5a95802cc1d07d9021eb4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:00:29.495194 containerd[1442]: time="2025-02-13T19:00:29.495136692Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:cf7eb256192f1f51093fe278c209a9368f0675eb61ed01b148af47d2f21c002d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:00:29.497028 containerd[1442]: time="2025-02-13T19:00:29.496990412Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.30.10\" with image id \"sha256:ea60c047fad7c01bf50f1f0259a4aeea2cc4401850d5a95802cc1d07d9021eb4\", repo tag \"registry.k8s.io/kube-scheduler:v1.30.10\", 
repo digest \"registry.k8s.io/kube-scheduler@sha256:cf7eb256192f1f51093fe278c209a9368f0675eb61ed01b148af47d2f21c002d\", size \"17568681\" in 1.11292964s" Feb 13 19:00:29.497091 containerd[1442]: time="2025-02-13T19:00:29.497028732Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.30.10\" returns image reference \"sha256:ea60c047fad7c01bf50f1f0259a4aeea2cc4401850d5a95802cc1d07d9021eb4\"" Feb 13 19:00:29.515459 containerd[1442]: time="2025-02-13T19:00:29.515425572Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.30.10\"" Feb 13 19:00:30.561161 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3753424790.mount: Deactivated successfully. Feb 13 19:00:30.763121 containerd[1442]: time="2025-02-13T19:00:30.763060452Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.30.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:00:30.766620 containerd[1442]: time="2025-02-13T19:00:30.766242532Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.30.10: active requests=0, bytes read=25663372" Feb 13 19:00:30.767885 containerd[1442]: time="2025-02-13T19:00:30.767847052Z" level=info msg="ImageCreate event name:\"sha256:fa8af75a6512774cc93242474a9841ace82a7d0646001149fc65d92a8bb0c00a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:00:30.770063 containerd[1442]: time="2025-02-13T19:00:30.770016532Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:d112e804e548fce28d9f1e3282c9ce54e374451e6a2c41b1ca9d7fca5d1fcc48\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:00:30.771339 containerd[1442]: time="2025-02-13T19:00:30.770928172Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.30.10\" with image id \"sha256:fa8af75a6512774cc93242474a9841ace82a7d0646001149fc65d92a8bb0c00a\", repo tag \"registry.k8s.io/kube-proxy:v1.30.10\", repo digest 
\"registry.k8s.io/kube-proxy@sha256:d112e804e548fce28d9f1e3282c9ce54e374451e6a2c41b1ca9d7fca5d1fcc48\", size \"25662389\" in 1.25535048s" Feb 13 19:00:30.771339 containerd[1442]: time="2025-02-13T19:00:30.770966012Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.30.10\" returns image reference \"sha256:fa8af75a6512774cc93242474a9841ace82a7d0646001149fc65d92a8bb0c00a\"" Feb 13 19:00:30.790336 containerd[1442]: time="2025-02-13T19:00:30.790291732Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\"" Feb 13 19:00:31.405898 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1409332064.mount: Deactivated successfully. Feb 13 19:00:31.915958 containerd[1442]: time="2025-02-13T19:00:31.915908092Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:00:31.917090 containerd[1442]: time="2025-02-13T19:00:31.917040772Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.1: active requests=0, bytes read=16485383" Feb 13 19:00:31.918264 containerd[1442]: time="2025-02-13T19:00:31.917908292Z" level=info msg="ImageCreate event name:\"sha256:2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:00:31.921893 containerd[1442]: time="2025-02-13T19:00:31.921848932Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:00:31.922762 containerd[1442]: time="2025-02-13T19:00:31.922602132Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.1\" with image id \"sha256:2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.1\", repo digest 
\"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\", size \"16482581\" in 1.13226924s" Feb 13 19:00:31.922762 containerd[1442]: time="2025-02-13T19:00:31.922634772Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\" returns image reference \"sha256:2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93\"" Feb 13 19:00:31.943935 containerd[1442]: time="2025-02-13T19:00:31.943723572Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\"" Feb 13 19:00:32.588436 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount736510163.mount: Deactivated successfully. Feb 13 19:00:32.602951 containerd[1442]: time="2025-02-13T19:00:32.602445412Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:00:32.604112 containerd[1442]: time="2025-02-13T19:00:32.603870292Z" level=info msg="stop pulling image registry.k8s.io/pause:3.9: active requests=0, bytes read=268823" Feb 13 19:00:32.606637 containerd[1442]: time="2025-02-13T19:00:32.606392372Z" level=info msg="ImageCreate event name:\"sha256:829e9de338bd5fdd3f16f68f83a9fb288fbc8453e881e5d5cfd0f6f2ff72b43e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:00:32.610673 containerd[1442]: time="2025-02-13T19:00:32.609909772Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:00:32.610673 containerd[1442]: time="2025-02-13T19:00:32.610659412Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.9\" with image id \"sha256:829e9de338bd5fdd3f16f68f83a9fb288fbc8453e881e5d5cfd0f6f2ff72b43e\", repo tag \"registry.k8s.io/pause:3.9\", repo digest \"registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097\", size \"268051\" in 666.89248ms" Feb 13 
19:00:32.610782 containerd[1442]: time="2025-02-13T19:00:32.610682172Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\" returns image reference \"sha256:829e9de338bd5fdd3f16f68f83a9fb288fbc8453e881e5d5cfd0f6f2ff72b43e\"" Feb 13 19:00:32.632933 containerd[1442]: time="2025-02-13T19:00:32.632879652Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.12-0\"" Feb 13 19:00:33.168612 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3001452453.mount: Deactivated successfully. Feb 13 19:00:34.660962 containerd[1442]: time="2025-02-13T19:00:34.660908772Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.12-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:00:34.662484 containerd[1442]: time="2025-02-13T19:00:34.662402092Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.12-0: active requests=0, bytes read=66191474" Feb 13 19:00:34.663415 containerd[1442]: time="2025-02-13T19:00:34.663356572Z" level=info msg="ImageCreate event name:\"sha256:014faa467e29798aeef733fe6d1a3b5e382688217b053ad23410e6cccd5d22fd\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:00:34.666162 containerd[1442]: time="2025-02-13T19:00:34.666126252Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:44a8e24dcbba3470ee1fee21d5e88d128c936e9b55d4bc51fbef8086f8ed123b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:00:34.668312 containerd[1442]: time="2025-02-13T19:00:34.668001972Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.12-0\" with image id \"sha256:014faa467e29798aeef733fe6d1a3b5e382688217b053ad23410e6cccd5d22fd\", repo tag \"registry.k8s.io/etcd:3.5.12-0\", repo digest \"registry.k8s.io/etcd@sha256:44a8e24dcbba3470ee1fee21d5e88d128c936e9b55d4bc51fbef8086f8ed123b\", size \"66189079\" in 2.0350856s" Feb 13 19:00:34.668312 containerd[1442]: time="2025-02-13T19:00:34.668041692Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.12-0\" returns image 
reference \"sha256:014faa467e29798aeef733fe6d1a3b5e382688217b053ad23410e6cccd5d22fd\"" Feb 13 19:00:39.576615 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Feb 13 19:00:39.586469 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Feb 13 19:00:39.683605 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Feb 13 19:00:39.688573 (kubelet)[2108]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Feb 13 19:00:39.734875 kubelet[2108]: E0213 19:00:39.734797 2108 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Feb 13 19:00:39.736984 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Feb 13 19:00:39.737125 systemd[1]: kubelet.service: Failed with result 'exit-code'. Feb 13 19:00:39.750896 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Feb 13 19:00:39.763858 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Feb 13 19:00:39.788046 systemd[1]: Reloading requested from client PID 2124 ('systemctl') (unit session-5.scope)... Feb 13 19:00:39.788062 systemd[1]: Reloading... Feb 13 19:00:39.870338 zram_generator::config[2166]: No configuration found. Feb 13 19:00:39.988397 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Feb 13 19:00:40.045478 systemd[1]: Reloading finished in 257 ms. 
Feb 13 19:00:40.087717 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Feb 13 19:00:40.087818 systemd[1]: kubelet.service: Failed with result 'signal'. Feb 13 19:00:40.088100 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Feb 13 19:00:40.090278 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Feb 13 19:00:40.188304 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Feb 13 19:00:40.192872 (kubelet)[2209]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Feb 13 19:00:40.235695 kubelet[2209]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Feb 13 19:00:40.235695 kubelet[2209]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Feb 13 19:00:40.235695 kubelet[2209]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Feb 13 19:00:40.236577 kubelet[2209]: I0213 19:00:40.236533 2209 server.go:205] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Feb 13 19:00:40.748019 kubelet[2209]: I0213 19:00:40.747983 2209 server.go:484] "Kubelet version" kubeletVersion="v1.30.1" Feb 13 19:00:40.749254 kubelet[2209]: I0213 19:00:40.748154 2209 server.go:486] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Feb 13 19:00:40.749254 kubelet[2209]: I0213 19:00:40.748385 2209 server.go:927] "Client rotation is on, will bootstrap in background" Feb 13 19:00:40.788866 kubelet[2209]: E0213 19:00:40.788805 2209 certificate_manager.go:562] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post "https://10.0.0.86:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 10.0.0.86:6443: connect: connection refused Feb 13 19:00:40.789225 kubelet[2209]: I0213 19:00:40.789201 2209 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Feb 13 19:00:40.799737 kubelet[2209]: I0213 19:00:40.798665 2209 server.go:742] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Feb 13 19:00:40.799737 kubelet[2209]: I0213 19:00:40.798990 2209 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Feb 13 19:00:40.799737 kubelet[2209]: I0213 19:00:40.799017 2209 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null} Feb 13 19:00:40.799737 kubelet[2209]: I0213 19:00:40.799388 2209 topology_manager.go:138] "Creating topology manager with none policy" Feb 13 
19:00:40.799977 kubelet[2209]: I0213 19:00:40.799398 2209 container_manager_linux.go:301] "Creating device plugin manager" Feb 13 19:00:40.799977 kubelet[2209]: I0213 19:00:40.799692 2209 state_mem.go:36] "Initialized new in-memory state store" Feb 13 19:00:40.800833 kubelet[2209]: I0213 19:00:40.800801 2209 kubelet.go:400] "Attempting to sync node with API server" Feb 13 19:00:40.800833 kubelet[2209]: I0213 19:00:40.800834 2209 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests" Feb 13 19:00:40.801086 kubelet[2209]: I0213 19:00:40.801068 2209 kubelet.go:312] "Adding apiserver pod source" Feb 13 19:00:40.802610 kubelet[2209]: I0213 19:00:40.801371 2209 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Feb 13 19:00:40.802895 kubelet[2209]: W0213 19:00:40.802747 2209 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.86:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.86:6443: connect: connection refused Feb 13 19:00:40.802895 kubelet[2209]: E0213 19:00:40.802817 2209 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://10.0.0.86:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.86:6443: connect: connection refused Feb 13 19:00:40.802895 kubelet[2209]: W0213 19:00:40.802826 2209 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.86:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.0.0.86:6443: connect: connection refused Feb 13 19:00:40.802895 kubelet[2209]: E0213 19:00:40.802873 2209 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.0.0.86:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.0.0.86:6443: connect: connection 
refused Feb 13 19:00:40.803341 kubelet[2209]: I0213 19:00:40.803318 2209 kuberuntime_manager.go:261] "Container runtime initialized" containerRuntime="containerd" version="v1.7.23" apiVersion="v1" Feb 13 19:00:40.803745 kubelet[2209]: I0213 19:00:40.803730 2209 kubelet.go:815] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Feb 13 19:00:40.803878 kubelet[2209]: W0213 19:00:40.803863 2209 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Feb 13 19:00:40.804897 kubelet[2209]: I0213 19:00:40.804775 2209 server.go:1264] "Started kubelet" Feb 13 19:00:40.806148 kubelet[2209]: I0213 19:00:40.805893 2209 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Feb 13 19:00:40.806148 kubelet[2209]: I0213 19:00:40.805900 2209 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Feb 13 19:00:40.806980 kubelet[2209]: I0213 19:00:40.806959 2209 server.go:227] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Feb 13 19:00:40.807155 kubelet[2209]: I0213 19:00:40.807128 2209 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Feb 13 19:00:40.807792 kubelet[2209]: E0213 19:00:40.807612 2209 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.86:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.86:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.1823d9b24676e6dc default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-02-13 19:00:40.804746972 +0000 UTC m=+0.608641481,LastTimestamp:2025-02-13 19:00:40.804746972 +0000 UTC 
m=+0.608641481,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Feb 13 19:00:40.811263 kubelet[2209]: I0213 19:00:40.808638 2209 server.go:455] "Adding debug handlers to kubelet server" Feb 13 19:00:40.811263 kubelet[2209]: E0213 19:00:40.809512 2209 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" Feb 13 19:00:40.811263 kubelet[2209]: I0213 19:00:40.809610 2209 volume_manager.go:291] "Starting Kubelet Volume Manager" Feb 13 19:00:40.811263 kubelet[2209]: I0213 19:00:40.809697 2209 desired_state_of_world_populator.go:149] "Desired state populator starts to run" Feb 13 19:00:40.811263 kubelet[2209]: I0213 19:00:40.809842 2209 reconciler.go:26] "Reconciler: start to sync state" Feb 13 19:00:40.811263 kubelet[2209]: W0213 19:00:40.810195 2209 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.86:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.86:6443: connect: connection refused Feb 13 19:00:40.811263 kubelet[2209]: E0213 19:00:40.810254 2209 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://10.0.0.86:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.86:6443: connect: connection refused Feb 13 19:00:40.811263 kubelet[2209]: E0213 19:00:40.810933 2209 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.86:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.86:6443: connect: connection refused" interval="200ms" Feb 13 19:00:40.811263 kubelet[2209]: I0213 19:00:40.811094 2209 factory.go:221] Registration of the systemd container factory successfully Feb 13 19:00:40.811263 kubelet[2209]: I0213 19:00:40.811197 
2209 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Feb 13 19:00:40.811551 kubelet[2209]: E0213 19:00:40.811343 2209 kubelet.go:1467] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Feb 13 19:00:40.812433 kubelet[2209]: I0213 19:00:40.812415 2209 factory.go:221] Registration of the containerd container factory successfully Feb 13 19:00:40.824300 kubelet[2209]: I0213 19:00:40.824187 2209 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Feb 13 19:00:40.825536 kubelet[2209]: I0213 19:00:40.825508 2209 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Feb 13 19:00:40.825685 kubelet[2209]: I0213 19:00:40.825677 2209 status_manager.go:217] "Starting to sync pod status with apiserver" Feb 13 19:00:40.825726 kubelet[2209]: I0213 19:00:40.825701 2209 kubelet.go:2337] "Starting kubelet main sync loop" Feb 13 19:00:40.825758 kubelet[2209]: E0213 19:00:40.825746 2209 kubelet.go:2361] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Feb 13 19:00:40.829185 kubelet[2209]: I0213 19:00:40.828962 2209 cpu_manager.go:214] "Starting CPU manager" policy="none" Feb 13 19:00:40.829185 kubelet[2209]: W0213 19:00:40.828956 2209 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.86:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.86:6443: connect: connection refused Feb 13 19:00:40.829185 kubelet[2209]: I0213 19:00:40.828982 2209 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Feb 13 19:00:40.829185 kubelet[2209]: E0213 19:00:40.829006 2209 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to 
watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://10.0.0.86:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.86:6443: connect: connection refused Feb 13 19:00:40.829185 kubelet[2209]: I0213 19:00:40.829020 2209 state_mem.go:36] "Initialized new in-memory state store" Feb 13 19:00:40.892809 kubelet[2209]: I0213 19:00:40.892777 2209 policy_none.go:49] "None policy: Start" Feb 13 19:00:40.893753 kubelet[2209]: I0213 19:00:40.893649 2209 memory_manager.go:170] "Starting memorymanager" policy="None" Feb 13 19:00:40.893753 kubelet[2209]: I0213 19:00:40.893683 2209 state_mem.go:35] "Initializing new in-memory state store" Feb 13 19:00:40.900354 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Feb 13 19:00:40.911311 kubelet[2209]: I0213 19:00:40.911278 2209 kubelet_node_status.go:73] "Attempting to register node" node="localhost" Feb 13 19:00:40.911733 kubelet[2209]: E0213 19:00:40.911684 2209 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.0.0.86:6443/api/v1/nodes\": dial tcp 10.0.0.86:6443: connect: connection refused" node="localhost" Feb 13 19:00:40.912757 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Feb 13 19:00:40.915707 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. 
Feb 13 19:00:40.923204 kubelet[2209]: I0213 19:00:40.923169 2209 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found"
Feb 13 19:00:40.923783 kubelet[2209]: I0213 19:00:40.923440 2209 container_log_manager.go:186] "Initializing container log rotate workers" workers=1 monitorPeriod="10s"
Feb 13 19:00:40.923783 kubelet[2209]: I0213 19:00:40.923601 2209 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
Feb 13 19:00:40.925308 kubelet[2209]: E0213 19:00:40.925276 2209 eviction_manager.go:282] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found"
Feb 13 19:00:40.926512 kubelet[2209]: I0213 19:00:40.926451 2209 topology_manager.go:215] "Topology Admit Handler" podUID="dd3721fb1a67092819e35b40473f4063" podNamespace="kube-system" podName="kube-controller-manager-localhost"
Feb 13 19:00:40.927804 kubelet[2209]: I0213 19:00:40.927753 2209 topology_manager.go:215] "Topology Admit Handler" podUID="8d610d6c43052dbc8df47eb68906a982" podNamespace="kube-system" podName="kube-scheduler-localhost"
Feb 13 19:00:40.929037 kubelet[2209]: I0213 19:00:40.928969 2209 topology_manager.go:215] "Topology Admit Handler" podUID="82d777c2c6e9244b11ebfaec22876740" podNamespace="kube-system" podName="kube-apiserver-localhost"
Feb 13 19:00:40.937692 systemd[1]: Created slice kubepods-burstable-poddd3721fb1a67092819e35b40473f4063.slice - libcontainer container kubepods-burstable-poddd3721fb1a67092819e35b40473f4063.slice.
Feb 13 19:00:40.950107 systemd[1]: Created slice kubepods-burstable-pod8d610d6c43052dbc8df47eb68906a982.slice - libcontainer container kubepods-burstable-pod8d610d6c43052dbc8df47eb68906a982.slice.
Feb 13 19:00:40.967181 systemd[1]: Created slice kubepods-burstable-pod82d777c2c6e9244b11ebfaec22876740.slice - libcontainer container kubepods-burstable-pod82d777c2c6e9244b11ebfaec22876740.slice.
Feb 13 19:00:41.011137 kubelet[2209]: I0213 19:00:41.011026 2209 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/dd3721fb1a67092819e35b40473f4063-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"dd3721fb1a67092819e35b40473f4063\") " pod="kube-system/kube-controller-manager-localhost"
Feb 13 19:00:41.011137 kubelet[2209]: I0213 19:00:41.011079 2209 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/8d610d6c43052dbc8df47eb68906a982-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"8d610d6c43052dbc8df47eb68906a982\") " pod="kube-system/kube-scheduler-localhost"
Feb 13 19:00:41.011137 kubelet[2209]: I0213 19:00:41.011100 2209 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/82d777c2c6e9244b11ebfaec22876740-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"82d777c2c6e9244b11ebfaec22876740\") " pod="kube-system/kube-apiserver-localhost"
Feb 13 19:00:41.011137 kubelet[2209]: I0213 19:00:41.011119 2209 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/dd3721fb1a67092819e35b40473f4063-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"dd3721fb1a67092819e35b40473f4063\") " pod="kube-system/kube-controller-manager-localhost"
Feb 13 19:00:41.011137 kubelet[2209]: I0213 19:00:41.011136 2209 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/82d777c2c6e9244b11ebfaec22876740-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"82d777c2c6e9244b11ebfaec22876740\") " pod="kube-system/kube-apiserver-localhost"
Feb 13 19:00:41.011343 kubelet[2209]: I0213 19:00:41.011162 2209 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/82d777c2c6e9244b11ebfaec22876740-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"82d777c2c6e9244b11ebfaec22876740\") " pod="kube-system/kube-apiserver-localhost"
Feb 13 19:00:41.011343 kubelet[2209]: I0213 19:00:41.011179 2209 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/dd3721fb1a67092819e35b40473f4063-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"dd3721fb1a67092819e35b40473f4063\") " pod="kube-system/kube-controller-manager-localhost"
Feb 13 19:00:41.011343 kubelet[2209]: I0213 19:00:41.011195 2209 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/dd3721fb1a67092819e35b40473f4063-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"dd3721fb1a67092819e35b40473f4063\") " pod="kube-system/kube-controller-manager-localhost"
Feb 13 19:00:41.011343 kubelet[2209]: I0213 19:00:41.011209 2209 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/dd3721fb1a67092819e35b40473f4063-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"dd3721fb1a67092819e35b40473f4063\") " pod="kube-system/kube-controller-manager-localhost"
Feb 13 19:00:41.011558 kubelet[2209]: E0213 19:00:41.011510 2209 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.86:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.86:6443: connect: connection refused" interval="400ms"
Feb 13 19:00:41.112972 kubelet[2209]: I0213 19:00:41.112924 2209 kubelet_node_status.go:73] "Attempting to register node" node="localhost"
Feb 13 19:00:41.113304 kubelet[2209]: E0213 19:00:41.113277 2209 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.0.0.86:6443/api/v1/nodes\": dial tcp 10.0.0.86:6443: connect: connection refused" node="localhost"
Feb 13 19:00:41.249563 kubelet[2209]: E0213 19:00:41.249512 2209 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 13 19:00:41.250355 containerd[1442]: time="2025-02-13T19:00:41.250300532Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:dd3721fb1a67092819e35b40473f4063,Namespace:kube-system,Attempt:0,}"
Feb 13 19:00:41.265604 kubelet[2209]: E0213 19:00:41.265507 2209 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 13 19:00:41.266031 containerd[1442]: time="2025-02-13T19:00:41.265989572Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:8d610d6c43052dbc8df47eb68906a982,Namespace:kube-system,Attempt:0,}"
Feb 13 19:00:41.269649 kubelet[2209]: E0213 19:00:41.269625 2209 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 13 19:00:41.270140 containerd[1442]: time="2025-02-13T19:00:41.270107092Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:82d777c2c6e9244b11ebfaec22876740,Namespace:kube-system,Attempt:0,}"
Feb 13 19:00:41.412969 kubelet[2209]: E0213 19:00:41.412920 2209 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.86:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.86:6443: connect: connection refused" interval="800ms"
Feb 13 19:00:41.515581 kubelet[2209]: I0213 19:00:41.515540 2209 kubelet_node_status.go:73] "Attempting to register node" node="localhost"
Feb 13 19:00:41.516038 kubelet[2209]: E0213 19:00:41.515946 2209 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.0.0.86:6443/api/v1/nodes\": dial tcp 10.0.0.86:6443: connect: connection refused" node="localhost"
Feb 13 19:00:41.710274 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount170527750.mount: Deactivated successfully.
Feb 13 19:00:41.716119 containerd[1442]: time="2025-02-13T19:00:41.716069972Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Feb 13 19:00:41.718281 containerd[1442]: time="2025-02-13T19:00:41.718128212Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=269175"
Feb 13 19:00:41.719187 containerd[1442]: time="2025-02-13T19:00:41.719114292Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Feb 13 19:00:41.720038 containerd[1442]: time="2025-02-13T19:00:41.719992252Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Feb 13 19:00:41.720806 containerd[1442]: time="2025-02-13T19:00:41.720765092Z" level=info msg="ImageCreate event name:\"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Feb 13 19:00:41.721842 containerd[1442]: time="2025-02-13T19:00:41.721779612Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0"
Feb 13 19:00:41.722179 containerd[1442]: time="2025-02-13T19:00:41.722157572Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0"
Feb 13 19:00:41.724097 containerd[1442]: time="2025-02-13T19:00:41.724057212Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Feb 13 19:00:41.726431 containerd[1442]: time="2025-02-13T19:00:41.726402652Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 456.21584ms"
Feb 13 19:00:41.727197 containerd[1442]: time="2025-02-13T19:00:41.727035812Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 476.63932ms"
Feb 13 19:00:41.730507 containerd[1442]: time="2025-02-13T19:00:41.730426212Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 464.35344ms"
Feb 13 19:00:41.872052 containerd[1442]: time="2025-02-13T19:00:41.871799212Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Feb 13 19:00:41.872052 containerd[1442]: time="2025-02-13T19:00:41.871883412Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Feb 13 19:00:41.872052 containerd[1442]: time="2025-02-13T19:00:41.871897212Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Feb 13 19:00:41.872250 containerd[1442]: time="2025-02-13T19:00:41.871979132Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Feb 13 19:00:41.876244 containerd[1442]: time="2025-02-13T19:00:41.874763452Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Feb 13 19:00:41.876244 containerd[1442]: time="2025-02-13T19:00:41.875784052Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Feb 13 19:00:41.876244 containerd[1442]: time="2025-02-13T19:00:41.875798332Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Feb 13 19:00:41.876244 containerd[1442]: time="2025-02-13T19:00:41.875893012Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Feb 13 19:00:41.879845 containerd[1442]: time="2025-02-13T19:00:41.879409412Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Feb 13 19:00:41.879845 containerd[1442]: time="2025-02-13T19:00:41.879761452Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Feb 13 19:00:41.879845 containerd[1442]: time="2025-02-13T19:00:41.879782092Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Feb 13 19:00:41.880717 containerd[1442]: time="2025-02-13T19:00:41.880645172Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Feb 13 19:00:41.901450 systemd[1]: Started cri-containerd-511db0e24d273eb73107190d83830192e8966074ab14cc920cb7251fcba159e6.scope - libcontainer container 511db0e24d273eb73107190d83830192e8966074ab14cc920cb7251fcba159e6.
Feb 13 19:00:41.906663 systemd[1]: Started cri-containerd-328d526fa71acc22043791406db190e93be581eec03ef5cfdb0c674f072420f4.scope - libcontainer container 328d526fa71acc22043791406db190e93be581eec03ef5cfdb0c674f072420f4.
Feb 13 19:00:41.907962 systemd[1]: Started cri-containerd-a1f403bfaceed09d5aab5148d2d86282846ea5debdde8a30e92c5544de8407a9.scope - libcontainer container a1f403bfaceed09d5aab5148d2d86282846ea5debdde8a30e92c5544de8407a9.
Feb 13 19:00:41.944955 containerd[1442]: time="2025-02-13T19:00:41.944806972Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:8d610d6c43052dbc8df47eb68906a982,Namespace:kube-system,Attempt:0,} returns sandbox id \"328d526fa71acc22043791406db190e93be581eec03ef5cfdb0c674f072420f4\""
Feb 13 19:00:41.945445 containerd[1442]: time="2025-02-13T19:00:41.945358812Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:dd3721fb1a67092819e35b40473f4063,Namespace:kube-system,Attempt:0,} returns sandbox id \"511db0e24d273eb73107190d83830192e8966074ab14cc920cb7251fcba159e6\""
Feb 13 19:00:41.947370 kubelet[2209]: E0213 19:00:41.947339 2209 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 13 19:00:41.947732 kubelet[2209]: E0213 19:00:41.947415 2209 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 13 19:00:41.950569 containerd[1442]: time="2025-02-13T19:00:41.950509572Z" level=info msg="CreateContainer within sandbox \"328d526fa71acc22043791406db190e93be581eec03ef5cfdb0c674f072420f4\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}"
Feb 13 19:00:41.950854 containerd[1442]: time="2025-02-13T19:00:41.950684612Z" level=info msg="CreateContainer within sandbox \"511db0e24d273eb73107190d83830192e8966074ab14cc920cb7251fcba159e6\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}"
Feb 13 19:00:41.953639 containerd[1442]: time="2025-02-13T19:00:41.953502092Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:82d777c2c6e9244b11ebfaec22876740,Namespace:kube-system,Attempt:0,} returns sandbox id \"a1f403bfaceed09d5aab5148d2d86282846ea5debdde8a30e92c5544de8407a9\""
Feb 13 19:00:41.955268 kubelet[2209]: E0213 19:00:41.955221 2209 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 13 19:00:41.956986 kubelet[2209]: W0213 19:00:41.956925 2209 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.86:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.86:6443: connect: connection refused
Feb 13 19:00:41.956986 kubelet[2209]: E0213 19:00:41.956984 2209 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://10.0.0.86:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.86:6443: connect: connection refused
Feb 13 19:00:41.958042 containerd[1442]: time="2025-02-13T19:00:41.957925372Z" level=info msg="CreateContainer within sandbox \"a1f403bfaceed09d5aab5148d2d86282846ea5debdde8a30e92c5544de8407a9\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}"
Feb 13 19:00:41.970209 containerd[1442]: time="2025-02-13T19:00:41.970164652Z" level=info msg="CreateContainer within sandbox \"328d526fa71acc22043791406db190e93be581eec03ef5cfdb0c674f072420f4\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"f67b5c23b88a7e7be845f8e67ad7d059be6591b9877e0102bc6cd24d74baa0e8\""
Feb 13 19:00:41.970600 containerd[1442]: time="2025-02-13T19:00:41.970568132Z" level=info msg="CreateContainer within sandbox \"511db0e24d273eb73107190d83830192e8966074ab14cc920cb7251fcba159e6\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"5bec9b7aa118d6ee090f1035cae8b808e9c9e865f0b39dc8c1aa9dc32723b3fe\""
Feb 13 19:00:41.971664 containerd[1442]: time="2025-02-13T19:00:41.971359452Z" level=info msg="StartContainer for \"5bec9b7aa118d6ee090f1035cae8b808e9c9e865f0b39dc8c1aa9dc32723b3fe\""
Feb 13 19:00:41.971761 containerd[1442]: time="2025-02-13T19:00:41.971721092Z" level=info msg="StartContainer for \"f67b5c23b88a7e7be845f8e67ad7d059be6591b9877e0102bc6cd24d74baa0e8\""
Feb 13 19:00:41.984163 containerd[1442]: time="2025-02-13T19:00:41.984105252Z" level=info msg="CreateContainer within sandbox \"a1f403bfaceed09d5aab5148d2d86282846ea5debdde8a30e92c5544de8407a9\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"c419404195a46b8f3a46408a5c871ec4d4f678603e2b6812ca2ea976315a9a46\""
Feb 13 19:00:41.984743 containerd[1442]: time="2025-02-13T19:00:41.984718772Z" level=info msg="StartContainer for \"c419404195a46b8f3a46408a5c871ec4d4f678603e2b6812ca2ea976315a9a46\""
Feb 13 19:00:41.996476 systemd[1]: Started cri-containerd-f67b5c23b88a7e7be845f8e67ad7d059be6591b9877e0102bc6cd24d74baa0e8.scope - libcontainer container f67b5c23b88a7e7be845f8e67ad7d059be6591b9877e0102bc6cd24d74baa0e8.
Feb 13 19:00:42.000430 systemd[1]: Started cri-containerd-5bec9b7aa118d6ee090f1035cae8b808e9c9e865f0b39dc8c1aa9dc32723b3fe.scope - libcontainer container 5bec9b7aa118d6ee090f1035cae8b808e9c9e865f0b39dc8c1aa9dc32723b3fe.
Feb 13 19:00:42.018413 systemd[1]: Started cri-containerd-c419404195a46b8f3a46408a5c871ec4d4f678603e2b6812ca2ea976315a9a46.scope - libcontainer container c419404195a46b8f3a46408a5c871ec4d4f678603e2b6812ca2ea976315a9a46.
Feb 13 19:00:42.054840 containerd[1442]: time="2025-02-13T19:00:42.054739612Z" level=info msg="StartContainer for \"5bec9b7aa118d6ee090f1035cae8b808e9c9e865f0b39dc8c1aa9dc32723b3fe\" returns successfully"
Feb 13 19:00:42.054840 containerd[1442]: time="2025-02-13T19:00:42.054734732Z" level=info msg="StartContainer for \"f67b5c23b88a7e7be845f8e67ad7d059be6591b9877e0102bc6cd24d74baa0e8\" returns successfully"
Feb 13 19:00:42.069430 containerd[1442]: time="2025-02-13T19:00:42.069336412Z" level=info msg="StartContainer for \"c419404195a46b8f3a46408a5c871ec4d4f678603e2b6812ca2ea976315a9a46\" returns successfully"
Feb 13 19:00:42.214295 kubelet[2209]: E0213 19:00:42.214141 2209 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.86:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.86:6443: connect: connection refused" interval="1.6s"
Feb 13 19:00:42.271038 kubelet[2209]: W0213 19:00:42.270938 2209 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.86:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.86:6443: connect: connection refused
Feb 13 19:00:42.271038 kubelet[2209]: E0213 19:00:42.271013 2209 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://10.0.0.86:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.86:6443: connect: connection refused
Feb 13 19:00:42.318348 kubelet[2209]: I0213 19:00:42.318013 2209 kubelet_node_status.go:73] "Attempting to register node" node="localhost"
Feb 13 19:00:42.838695 kubelet[2209]: E0213 19:00:42.838658 2209 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 13 19:00:42.839025 kubelet[2209]: E0213 19:00:42.838998 2209 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 13 19:00:42.840741 kubelet[2209]: E0213 19:00:42.840716 2209 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 13 19:00:43.818779 kubelet[2209]: E0213 19:00:43.818742 2209 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"localhost\" not found" node="localhost"
Feb 13 19:00:43.842906 kubelet[2209]: E0213 19:00:43.842876 2209 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 13 19:00:43.945004 kubelet[2209]: I0213 19:00:43.944946 2209 kubelet_node_status.go:76] "Successfully registered node" node="localhost"
Feb 13 19:00:43.957760 kubelet[2209]: E0213 19:00:43.957720 2209 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found"
Feb 13 19:00:44.058099 kubelet[2209]: E0213 19:00:44.058055 2209 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found"
Feb 13 19:00:44.158683 kubelet[2209]: E0213 19:00:44.158558 2209 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found"
Feb 13 19:00:44.259317 kubelet[2209]: E0213 19:00:44.259221 2209 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found"
Feb 13 19:00:44.360116 kubelet[2209]: E0213 19:00:44.360067 2209 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found"
Feb 13 19:00:44.460708 kubelet[2209]: E0213 19:00:44.460598 2209 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found"
Feb 13 19:00:44.561124 kubelet[2209]: E0213 19:00:44.561076 2209 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found"
Feb 13 19:00:44.661635 kubelet[2209]: E0213 19:00:44.661585 2209 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found"
Feb 13 19:00:44.762331 kubelet[2209]: E0213 19:00:44.762199 2209 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found"
Feb 13 19:00:44.805272 kubelet[2209]: I0213 19:00:44.805216 2209 apiserver.go:52] "Watching apiserver"
Feb 13 19:00:44.810659 kubelet[2209]: I0213 19:00:44.810621 2209 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world"
Feb 13 19:00:46.162859 systemd[1]: Reloading requested from client PID 2492 ('systemctl') (unit session-5.scope)...
Feb 13 19:00:46.162874 systemd[1]: Reloading...
Feb 13 19:00:46.232279 zram_generator::config[2534]: No configuration found.
Feb 13 19:00:46.315380 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Feb 13 19:00:46.382434 systemd[1]: Reloading finished in 219 ms.
Feb 13 19:00:46.417973 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
Feb 13 19:00:46.422942 systemd[1]: kubelet.service: Deactivated successfully.
Feb 13 19:00:46.423194 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Feb 13 19:00:46.439534 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Feb 13 19:00:46.532887 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Feb 13 19:00:46.539394 (kubelet)[2573]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS
Feb 13 19:00:46.578091 kubelet[2573]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Feb 13 19:00:46.578091 kubelet[2573]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI.
Feb 13 19:00:46.578091 kubelet[2573]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Feb 13 19:00:46.578504 kubelet[2573]: I0213 19:00:46.578127 2573 server.go:205] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Feb 13 19:00:46.582269 kubelet[2573]: I0213 19:00:46.582159 2573 server.go:484] "Kubelet version" kubeletVersion="v1.30.1"
Feb 13 19:00:46.582269 kubelet[2573]: I0213 19:00:46.582187 2573 server.go:486] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Feb 13 19:00:46.582433 kubelet[2573]: I0213 19:00:46.582410 2573 server.go:927] "Client rotation is on, will bootstrap in background"
Feb 13 19:00:46.583791 kubelet[2573]: I0213 19:00:46.583763 2573 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem".
Feb 13 19:00:46.585199 kubelet[2573]: I0213 19:00:46.585166 2573 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Feb 13 19:00:46.592827 kubelet[2573]: I0213 19:00:46.592792 2573 server.go:742] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /"
Feb 13 19:00:46.593020 kubelet[2573]: I0213 19:00:46.592991 2573 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Feb 13 19:00:46.593199 kubelet[2573]: I0213 19:00:46.593020 2573 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null}
Feb 13 19:00:46.593283 kubelet[2573]: I0213 19:00:46.593207 2573 topology_manager.go:138] "Creating topology manager with none policy"
Feb 13 19:00:46.593283 kubelet[2573]: I0213 19:00:46.593216 2573 container_manager_linux.go:301] "Creating device plugin manager"
Feb 13 19:00:46.593283 kubelet[2573]: I0213 19:00:46.593274 2573 state_mem.go:36] "Initialized new in-memory state store"
Feb 13 19:00:46.594300 kubelet[2573]: I0213 19:00:46.593380 2573 kubelet.go:400] "Attempting to sync node with API server"
Feb 13 19:00:46.594300 kubelet[2573]: I0213 19:00:46.593398 2573 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests"
Feb 13 19:00:46.594300 kubelet[2573]: I0213 19:00:46.593426 2573 kubelet.go:312] "Adding apiserver pod source"
Feb 13 19:00:46.594300 kubelet[2573]: I0213 19:00:46.593453 2573 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Feb 13 19:00:46.594300 kubelet[2573]: I0213 19:00:46.594189 2573 kuberuntime_manager.go:261] "Container runtime initialized" containerRuntime="containerd" version="v1.7.23" apiVersion="v1"
Feb 13 19:00:46.594476 kubelet[2573]: I0213 19:00:46.594400 2573 kubelet.go:815] "Not starting ClusterTrustBundle informer because we are in static kubelet mode"
Feb 13 19:00:46.596521 kubelet[2573]: I0213 19:00:46.594945 2573 server.go:1264] "Started kubelet"
Feb 13 19:00:46.596521 kubelet[2573]: I0213 19:00:46.595084 2573 server.go:163] "Starting to listen" address="0.0.0.0" port=10250
Feb 13 19:00:46.596521 kubelet[2573]: I0213 19:00:46.595127 2573 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Feb 13 19:00:46.596521 kubelet[2573]: I0213 19:00:46.595961 2573 server.go:455] "Adding debug handlers to kubelet server"
Feb 13 19:00:46.597026 kubelet[2573]: I0213 19:00:46.596755 2573 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Feb 13 19:00:46.602337 kubelet[2573]: I0213 19:00:46.599380 2573 volume_manager.go:291] "Starting Kubelet Volume Manager"
Feb 13 19:00:46.602337 kubelet[2573]: I0213 19:00:46.599997 2573 desired_state_of_world_populator.go:149] "Desired state populator starts to run"
Feb 13 19:00:46.602337 kubelet[2573]: I0213 19:00:46.600195 2573 reconciler.go:26] "Reconciler: start to sync state"
Feb 13 19:00:46.602337 kubelet[2573]: I0213 19:00:46.602294 2573 server.go:227] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Feb 13 19:00:46.616251 kubelet[2573]: E0213 19:00:46.616188 2573 kubelet.go:1467] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem"
Feb 13 19:00:46.616551 kubelet[2573]: I0213 19:00:46.616520 2573 factory.go:221] Registration of the systemd container factory successfully
Feb 13 19:00:46.617364 kubelet[2573]: I0213 19:00:46.617279 2573 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
Feb 13 19:00:46.620302 kubelet[2573]: I0213 19:00:46.620152 2573 factory.go:221] Registration of the containerd container factory successfully
Feb 13 19:00:46.621026 kubelet[2573]: I0213 19:00:46.620982 2573 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4"
Feb 13 19:00:46.623528 kubelet[2573]: I0213 19:00:46.623035 2573 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6"
Feb 13 19:00:46.623528 kubelet[2573]: I0213 19:00:46.623084 2573 status_manager.go:217] "Starting to sync pod status with apiserver"
Feb 13 19:00:46.623528 kubelet[2573]: I0213 19:00:46.623114 2573 kubelet.go:2337] "Starting kubelet main sync loop"
Feb 13 19:00:46.623528 kubelet[2573]: E0213 19:00:46.623166 2573 kubelet.go:2361] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
Feb 13 19:00:46.656776 kubelet[2573]: I0213 19:00:46.656733 2573 cpu_manager.go:214] "Starting CPU manager" policy="none"
Feb 13 19:00:46.656776 kubelet[2573]: I0213 19:00:46.656751 2573 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s"
Feb 13 19:00:46.656776 kubelet[2573]: I0213 19:00:46.656772 2573 state_mem.go:36] "Initialized new in-memory state store"
Feb 13 19:00:46.656990 kubelet[2573]: I0213 19:00:46.656923 2573 state_mem.go:88] "Updated default CPUSet" cpuSet=""
Feb 13 19:00:46.656990 kubelet[2573]: I0213 19:00:46.656936 2573 state_mem.go:96] "Updated CPUSet assignments" assignments={}
Feb 13 19:00:46.656990 kubelet[2573]: I0213 19:00:46.656952 2573 policy_none.go:49] "None policy: Start"
Feb 13 19:00:46.657553 kubelet[2573]: I0213 19:00:46.657535 2573 memory_manager.go:170] "Starting memorymanager" policy="None"
Feb 13 19:00:46.657553 kubelet[2573]: I0213 19:00:46.657557 2573 state_mem.go:35] "Initializing new in-memory state store"
Feb 13 19:00:46.657693 kubelet[2573]: I0213 19:00:46.657679 2573 state_mem.go:75] "Updated machine memory state"
Feb 13 19:00:46.661898 kubelet[2573]: I0213 19:00:46.661866 2573 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found"
Feb 13 19:00:46.662249 kubelet[2573]: I0213 19:00:46.662051 2573 container_log_manager.go:186] "Initializing container log rotate workers" workers=1 monitorPeriod="10s"
Feb 13 19:00:46.662249 kubelet[2573]: I0213 19:00:46.662177 2573 
plugin_manager.go:118] "Starting Kubelet Plugin Manager" Feb 13 19:00:46.704065 kubelet[2573]: I0213 19:00:46.703952 2573 kubelet_node_status.go:73] "Attempting to register node" node="localhost" Feb 13 19:00:46.715507 kubelet[2573]: I0213 19:00:46.715431 2573 kubelet_node_status.go:112] "Node was previously registered" node="localhost" Feb 13 19:00:46.715638 kubelet[2573]: I0213 19:00:46.715534 2573 kubelet_node_status.go:76] "Successfully registered node" node="localhost" Feb 13 19:00:46.724160 kubelet[2573]: I0213 19:00:46.724105 2573 topology_manager.go:215] "Topology Admit Handler" podUID="82d777c2c6e9244b11ebfaec22876740" podNamespace="kube-system" podName="kube-apiserver-localhost" Feb 13 19:00:46.724319 kubelet[2573]: I0213 19:00:46.724247 2573 topology_manager.go:215] "Topology Admit Handler" podUID="dd3721fb1a67092819e35b40473f4063" podNamespace="kube-system" podName="kube-controller-manager-localhost" Feb 13 19:00:46.724319 kubelet[2573]: I0213 19:00:46.724289 2573 topology_manager.go:215] "Topology Admit Handler" podUID="8d610d6c43052dbc8df47eb68906a982" podNamespace="kube-system" podName="kube-scheduler-localhost" Feb 13 19:00:46.902194 kubelet[2573]: I0213 19:00:46.902141 2573 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/dd3721fb1a67092819e35b40473f4063-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"dd3721fb1a67092819e35b40473f4063\") " pod="kube-system/kube-controller-manager-localhost" Feb 13 19:00:46.902194 kubelet[2573]: I0213 19:00:46.902197 2573 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/dd3721fb1a67092819e35b40473f4063-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"dd3721fb1a67092819e35b40473f4063\") " pod="kube-system/kube-controller-manager-localhost" Feb 13 19:00:46.902360 
kubelet[2573]: I0213 19:00:46.902220 2573 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/8d610d6c43052dbc8df47eb68906a982-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"8d610d6c43052dbc8df47eb68906a982\") " pod="kube-system/kube-scheduler-localhost" Feb 13 19:00:46.902360 kubelet[2573]: I0213 19:00:46.902257 2573 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/82d777c2c6e9244b11ebfaec22876740-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"82d777c2c6e9244b11ebfaec22876740\") " pod="kube-system/kube-apiserver-localhost" Feb 13 19:00:46.902360 kubelet[2573]: I0213 19:00:46.902273 2573 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/dd3721fb1a67092819e35b40473f4063-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"dd3721fb1a67092819e35b40473f4063\") " pod="kube-system/kube-controller-manager-localhost" Feb 13 19:00:46.902360 kubelet[2573]: I0213 19:00:46.902289 2573 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/dd3721fb1a67092819e35b40473f4063-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"dd3721fb1a67092819e35b40473f4063\") " pod="kube-system/kube-controller-manager-localhost" Feb 13 19:00:46.902360 kubelet[2573]: I0213 19:00:46.902303 2573 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/82d777c2c6e9244b11ebfaec22876740-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"82d777c2c6e9244b11ebfaec22876740\") " pod="kube-system/kube-apiserver-localhost" Feb 13 19:00:46.902479 kubelet[2573]: 
I0213 19:00:46.902321 2573 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/82d777c2c6e9244b11ebfaec22876740-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"82d777c2c6e9244b11ebfaec22876740\") " pod="kube-system/kube-apiserver-localhost" Feb 13 19:00:46.902479 kubelet[2573]: I0213 19:00:46.902336 2573 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/dd3721fb1a67092819e35b40473f4063-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"dd3721fb1a67092819e35b40473f4063\") " pod="kube-system/kube-controller-manager-localhost" Feb 13 19:00:47.041287 kubelet[2573]: E0213 19:00:47.041161 2573 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:00:47.041391 kubelet[2573]: E0213 19:00:47.041341 2573 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:00:47.041939 kubelet[2573]: E0213 19:00:47.041804 2573 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:00:47.594687 kubelet[2573]: I0213 19:00:47.594640 2573 apiserver.go:52] "Watching apiserver" Feb 13 19:00:47.601036 kubelet[2573]: I0213 19:00:47.601001 2573 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world" Feb 13 19:00:47.642397 kubelet[2573]: E0213 19:00:47.640894 2573 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:00:47.642397 kubelet[2573]: E0213 
19:00:47.640915 2573 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:00:47.642397 kubelet[2573]: E0213 19:00:47.641412 2573 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:00:47.698848 kubelet[2573]: I0213 19:00:47.698641 2573 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=1.698620332 podStartE2EDuration="1.698620332s" podCreationTimestamp="2025-02-13 19:00:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-02-13 19:00:47.687589332 +0000 UTC m=+1.144876681" watchObservedRunningTime="2025-02-13 19:00:47.698620332 +0000 UTC m=+1.155907681" Feb 13 19:00:47.698848 kubelet[2573]: I0213 19:00:47.698768 2573 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=1.6987631319999998 podStartE2EDuration="1.698763132s" podCreationTimestamp="2025-02-13 19:00:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-02-13 19:00:47.698511052 +0000 UTC m=+1.155798401" watchObservedRunningTime="2025-02-13 19:00:47.698763132 +0000 UTC m=+1.156050481" Feb 13 19:00:47.708509 kubelet[2573]: I0213 19:00:47.708436 2573 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=1.708421972 podStartE2EDuration="1.708421972s" podCreationTimestamp="2025-02-13 19:00:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-02-13 
19:00:47.708296332 +0000 UTC m=+1.165583681" watchObservedRunningTime="2025-02-13 19:00:47.708421972 +0000 UTC m=+1.165709281" Feb 13 19:00:48.028480 sudo[1578]: pam_unix(sudo:session): session closed for user root Feb 13 19:00:48.030049 sshd[1577]: Connection closed by 10.0.0.1 port 59714 Feb 13 19:00:48.030366 sshd-session[1575]: pam_unix(sshd:session): session closed for user core Feb 13 19:00:48.034635 systemd[1]: sshd@4-10.0.0.86:22-10.0.0.1:59714.service: Deactivated successfully. Feb 13 19:00:48.037118 systemd[1]: session-5.scope: Deactivated successfully. Feb 13 19:00:48.037650 systemd[1]: session-5.scope: Consumed 6.596s CPU time, 193.2M memory peak, 0B memory swap peak. Feb 13 19:00:48.038609 systemd-logind[1424]: Session 5 logged out. Waiting for processes to exit. Feb 13 19:00:48.039559 systemd-logind[1424]: Removed session 5. Feb 13 19:00:48.642614 kubelet[2573]: E0213 19:00:48.642585 2573 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:00:51.088569 kubelet[2573]: E0213 19:00:51.088528 2573 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:00:51.289807 kubelet[2573]: E0213 19:00:51.289773 2573 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:00:54.243978 kubelet[2573]: E0213 19:00:54.243948 2573 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:00:54.651025 kubelet[2573]: E0213 19:00:54.650258 2573 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied 
nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:00:59.330882 kubelet[2573]: I0213 19:00:59.330847 2573 kuberuntime_manager.go:1523] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Feb 13 19:00:59.331541 kubelet[2573]: I0213 19:00:59.331367 2573 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Feb 13 19:00:59.331587 containerd[1442]: time="2025-02-13T19:00:59.331183342Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Feb 13 19:01:00.421509 kubelet[2573]: I0213 19:01:00.421446 2573 topology_manager.go:215] "Topology Admit Handler" podUID="474e5997-4d7f-4fa9-a635-d71764c96015" podNamespace="kube-system" podName="kube-proxy-4pzhn" Feb 13 19:01:00.428078 systemd[1]: Created slice kubepods-besteffort-pod474e5997_4d7f_4fa9_a635_d71764c96015.slice - libcontainer container kubepods-besteffort-pod474e5997_4d7f_4fa9_a635_d71764c96015.slice. Feb 13 19:01:00.432675 kubelet[2573]: I0213 19:01:00.432407 2573 topology_manager.go:215] "Topology Admit Handler" podUID="ca79753f-4910-4a82-8d30-608cfbeebc72" podNamespace="kube-flannel" podName="kube-flannel-ds-n6xsv" Feb 13 19:01:00.448390 systemd[1]: Created slice kubepods-burstable-podca79753f_4910_4a82_8d30_608cfbeebc72.slice - libcontainer container kubepods-burstable-podca79753f_4910_4a82_8d30_608cfbeebc72.slice. 
Feb 13 19:01:00.500103 kubelet[2573]: I0213 19:01:00.499279 2573 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d295x\" (UniqueName: \"kubernetes.io/projected/474e5997-4d7f-4fa9-a635-d71764c96015-kube-api-access-d295x\") pod \"kube-proxy-4pzhn\" (UID: \"474e5997-4d7f-4fa9-a635-d71764c96015\") " pod="kube-system/kube-proxy-4pzhn" Feb 13 19:01:00.500103 kubelet[2573]: I0213 19:01:00.499761 2573 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-plugin\" (UniqueName: \"kubernetes.io/host-path/ca79753f-4910-4a82-8d30-608cfbeebc72-cni-plugin\") pod \"kube-flannel-ds-n6xsv\" (UID: \"ca79753f-4910-4a82-8d30-608cfbeebc72\") " pod="kube-flannel/kube-flannel-ds-n6xsv" Feb 13 19:01:00.500103 kubelet[2573]: I0213 19:01:00.499881 2573 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/ca79753f-4910-4a82-8d30-608cfbeebc72-xtables-lock\") pod \"kube-flannel-ds-n6xsv\" (UID: \"ca79753f-4910-4a82-8d30-608cfbeebc72\") " pod="kube-flannel/kube-flannel-ds-n6xsv" Feb 13 19:01:00.500103 kubelet[2573]: I0213 19:01:00.499914 2573 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/474e5997-4d7f-4fa9-a635-d71764c96015-lib-modules\") pod \"kube-proxy-4pzhn\" (UID: \"474e5997-4d7f-4fa9-a635-d71764c96015\") " pod="kube-system/kube-proxy-4pzhn" Feb 13 19:01:00.500103 kubelet[2573]: I0213 19:01:00.499936 2573 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/474e5997-4d7f-4fa9-a635-d71764c96015-kube-proxy\") pod \"kube-proxy-4pzhn\" (UID: \"474e5997-4d7f-4fa9-a635-d71764c96015\") " pod="kube-system/kube-proxy-4pzhn" Feb 13 19:01:00.500391 kubelet[2573]: I0213 19:01:00.499951 2573 
reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run\" (UniqueName: \"kubernetes.io/host-path/ca79753f-4910-4a82-8d30-608cfbeebc72-run\") pod \"kube-flannel-ds-n6xsv\" (UID: \"ca79753f-4910-4a82-8d30-608cfbeebc72\") " pod="kube-flannel/kube-flannel-ds-n6xsv" Feb 13 19:01:00.500391 kubelet[2573]: I0213 19:01:00.499968 2573 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/474e5997-4d7f-4fa9-a635-d71764c96015-xtables-lock\") pod \"kube-proxy-4pzhn\" (UID: \"474e5997-4d7f-4fa9-a635-d71764c96015\") " pod="kube-system/kube-proxy-4pzhn" Feb 13 19:01:00.500391 kubelet[2573]: I0213 19:01:00.499983 2573 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni\" (UniqueName: \"kubernetes.io/host-path/ca79753f-4910-4a82-8d30-608cfbeebc72-cni\") pod \"kube-flannel-ds-n6xsv\" (UID: \"ca79753f-4910-4a82-8d30-608cfbeebc72\") " pod="kube-flannel/kube-flannel-ds-n6xsv" Feb 13 19:01:00.500391 kubelet[2573]: I0213 19:01:00.499999 2573 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flannel-cfg\" (UniqueName: \"kubernetes.io/configmap/ca79753f-4910-4a82-8d30-608cfbeebc72-flannel-cfg\") pod \"kube-flannel-ds-n6xsv\" (UID: \"ca79753f-4910-4a82-8d30-608cfbeebc72\") " pod="kube-flannel/kube-flannel-ds-n6xsv" Feb 13 19:01:00.500391 kubelet[2573]: I0213 19:01:00.500015 2573 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q28x8\" (UniqueName: \"kubernetes.io/projected/ca79753f-4910-4a82-8d30-608cfbeebc72-kube-api-access-q28x8\") pod \"kube-flannel-ds-n6xsv\" (UID: \"ca79753f-4910-4a82-8d30-608cfbeebc72\") " pod="kube-flannel/kube-flannel-ds-n6xsv" Feb 13 19:01:00.739036 kubelet[2573]: E0213 19:01:00.738932 2573 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, 
some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:01:00.739921 containerd[1442]: time="2025-02-13T19:01:00.739869909Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-4pzhn,Uid:474e5997-4d7f-4fa9-a635-d71764c96015,Namespace:kube-system,Attempt:0,}" Feb 13 19:01:00.752708 kubelet[2573]: E0213 19:01:00.752641 2573 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:01:00.753225 containerd[1442]: time="2025-02-13T19:01:00.753079142Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-flannel-ds-n6xsv,Uid:ca79753f-4910-4a82-8d30-608cfbeebc72,Namespace:kube-flannel,Attempt:0,}" Feb 13 19:01:00.769546 containerd[1442]: time="2025-02-13T19:01:00.769155981Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 19:01:00.769546 containerd[1442]: time="2025-02-13T19:01:00.769208701Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 19:01:00.769546 containerd[1442]: time="2025-02-13T19:01:00.769225101Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 19:01:00.769546 containerd[1442]: time="2025-02-13T19:01:00.769308421Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 19:01:00.793481 systemd[1]: Started cri-containerd-314b0ea47e6575aa78c670a21774746db935c262b8dbcef624711c166b35dc68.scope - libcontainer container 314b0ea47e6575aa78c670a21774746db935c262b8dbcef624711c166b35dc68. 
Feb 13 19:01:00.798313 containerd[1442]: time="2025-02-13T19:01:00.797582091Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 19:01:00.798313 containerd[1442]: time="2025-02-13T19:01:00.797678931Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 19:01:00.798313 containerd[1442]: time="2025-02-13T19:01:00.797692771Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 19:01:00.798313 containerd[1442]: time="2025-02-13T19:01:00.797767451Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 19:01:00.815515 systemd[1]: Started cri-containerd-f38b3328f490f1f6865f425d0f43e4a457400f074c9bcf812422f22b533aee95.scope - libcontainer container f38b3328f490f1f6865f425d0f43e4a457400f074c9bcf812422f22b533aee95. 
Feb 13 19:01:00.819962 containerd[1442]: time="2025-02-13T19:01:00.819583865Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-4pzhn,Uid:474e5997-4d7f-4fa9-a635-d71764c96015,Namespace:kube-system,Attempt:0,} returns sandbox id \"314b0ea47e6575aa78c670a21774746db935c262b8dbcef624711c166b35dc68\"" Feb 13 19:01:00.820406 kubelet[2573]: E0213 19:01:00.820381 2573 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:01:00.824339 containerd[1442]: time="2025-02-13T19:01:00.824290276Z" level=info msg="CreateContainer within sandbox \"314b0ea47e6575aa78c670a21774746db935c262b8dbcef624711c166b35dc68\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Feb 13 19:01:00.838756 containerd[1442]: time="2025-02-13T19:01:00.838709592Z" level=info msg="CreateContainer within sandbox \"314b0ea47e6575aa78c670a21774746db935c262b8dbcef624711c166b35dc68\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"5d9b70e962d148200231110327cc8c89ef11e5b525dcfb79bdb918c26fe6573c\"" Feb 13 19:01:00.841072 containerd[1442]: time="2025-02-13T19:01:00.839501194Z" level=info msg="StartContainer for \"5d9b70e962d148200231110327cc8c89ef11e5b525dcfb79bdb918c26fe6573c\"" Feb 13 19:01:00.855894 containerd[1442]: time="2025-02-13T19:01:00.855832794Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-flannel-ds-n6xsv,Uid:ca79753f-4910-4a82-8d30-608cfbeebc72,Namespace:kube-flannel,Attempt:0,} returns sandbox id \"f38b3328f490f1f6865f425d0f43e4a457400f074c9bcf812422f22b533aee95\"" Feb 13 19:01:00.856717 kubelet[2573]: E0213 19:01:00.856698 2573 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:01:00.858354 containerd[1442]: time="2025-02-13T19:01:00.858323120Z" level=info 
msg="PullImage \"docker.io/flannel/flannel-cni-plugin:v1.1.2\"" Feb 13 19:01:00.881479 systemd[1]: Started cri-containerd-5d9b70e962d148200231110327cc8c89ef11e5b525dcfb79bdb918c26fe6573c.scope - libcontainer container 5d9b70e962d148200231110327cc8c89ef11e5b525dcfb79bdb918c26fe6573c. Feb 13 19:01:00.909573 containerd[1442]: time="2025-02-13T19:01:00.909444845Z" level=info msg="StartContainer for \"5d9b70e962d148200231110327cc8c89ef11e5b525dcfb79bdb918c26fe6573c\" returns successfully" Feb 13 19:01:01.097258 kubelet[2573]: E0213 19:01:01.096268 2573 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:01:01.299734 kubelet[2573]: E0213 19:01:01.299702 2573 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:01:01.655667 update_engine[1431]: I20250213 19:01:01.655589 1431 update_attempter.cc:509] Updating boot flags... 
Feb 13 19:01:01.667891 kubelet[2573]: E0213 19:01:01.667821 2573 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:01:01.673298 kubelet[2573]: E0213 19:01:01.669645 2573 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:01:01.682261 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 39 scanned by (udev-worker) (2884) Feb 13 19:01:01.691217 kubelet[2573]: I0213 19:01:01.690907 2573 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-4pzhn" podStartSLOduration=1.690889978 podStartE2EDuration="1.690889978s" podCreationTimestamp="2025-02-13 19:01:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-02-13 19:01:01.690743777 +0000 UTC m=+15.148031166" watchObservedRunningTime="2025-02-13 19:01:01.690889978 +0000 UTC m=+15.148177287" Feb 13 19:01:02.271691 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3821464364.mount: Deactivated successfully. 
Feb 13 19:01:02.394416 containerd[1442]: time="2025-02-13T19:01:02.393395858Z" level=info msg="ImageCreate event name:\"docker.io/flannel/flannel-cni-plugin:v1.1.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:01:02.394416 containerd[1442]: time="2025-02-13T19:01:02.393710179Z" level=info msg="stop pulling image docker.io/flannel/flannel-cni-plugin:v1.1.2: active requests=0, bytes read=3673531" Feb 13 19:01:02.394977 containerd[1442]: time="2025-02-13T19:01:02.394937222Z" level=info msg="ImageCreate event name:\"sha256:b45062ceea496fc421523388cb91166abc7715a15c2e2cbab4e6f8c9d5dc0ab8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:01:02.397770 containerd[1442]: time="2025-02-13T19:01:02.397738068Z" level=info msg="ImageCreate event name:\"docker.io/flannel/flannel-cni-plugin@sha256:bf4b62b131666d040f35a327d906ee5a3418280b68a88d9b9c7e828057210443\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:01:02.398920 containerd[1442]: time="2025-02-13T19:01:02.398883390Z" level=info msg="Pulled image \"docker.io/flannel/flannel-cni-plugin:v1.1.2\" with image id \"sha256:b45062ceea496fc421523388cb91166abc7715a15c2e2cbab4e6f8c9d5dc0ab8\", repo tag \"docker.io/flannel/flannel-cni-plugin:v1.1.2\", repo digest \"docker.io/flannel/flannel-cni-plugin@sha256:bf4b62b131666d040f35a327d906ee5a3418280b68a88d9b9c7e828057210443\", size \"3662650\" in 1.54052427s" Feb 13 19:01:02.398920 containerd[1442]: time="2025-02-13T19:01:02.398917110Z" level=info msg="PullImage \"docker.io/flannel/flannel-cni-plugin:v1.1.2\" returns image reference \"sha256:b45062ceea496fc421523388cb91166abc7715a15c2e2cbab4e6f8c9d5dc0ab8\"" Feb 13 19:01:02.401537 containerd[1442]: time="2025-02-13T19:01:02.401507716Z" level=info msg="CreateContainer within sandbox \"f38b3328f490f1f6865f425d0f43e4a457400f074c9bcf812422f22b533aee95\" for container &ContainerMetadata{Name:install-cni-plugin,Attempt:0,}" Feb 13 19:01:02.414375 containerd[1442]: 
time="2025-02-13T19:01:02.414333784Z" level=info msg="CreateContainer within sandbox \"f38b3328f490f1f6865f425d0f43e4a457400f074c9bcf812422f22b533aee95\" for &ContainerMetadata{Name:install-cni-plugin,Attempt:0,} returns container id \"71fbc9387c049124ef7ca29c11b87d873a19809e7e6f6601b135418bd0934858\"" Feb 13 19:01:02.414854 containerd[1442]: time="2025-02-13T19:01:02.414824505Z" level=info msg="StartContainer for \"71fbc9387c049124ef7ca29c11b87d873a19809e7e6f6601b135418bd0934858\"" Feb 13 19:01:02.443431 systemd[1]: Started cri-containerd-71fbc9387c049124ef7ca29c11b87d873a19809e7e6f6601b135418bd0934858.scope - libcontainer container 71fbc9387c049124ef7ca29c11b87d873a19809e7e6f6601b135418bd0934858. Feb 13 19:01:02.464737 systemd[1]: cri-containerd-71fbc9387c049124ef7ca29c11b87d873a19809e7e6f6601b135418bd0934858.scope: Deactivated successfully. Feb 13 19:01:02.468183 containerd[1442]: time="2025-02-13T19:01:02.468152220Z" level=info msg="StartContainer for \"71fbc9387c049124ef7ca29c11b87d873a19809e7e6f6601b135418bd0934858\" returns successfully" Feb 13 19:01:02.498286 containerd[1442]: time="2025-02-13T19:01:02.498137324Z" level=info msg="shim disconnected" id=71fbc9387c049124ef7ca29c11b87d873a19809e7e6f6601b135418bd0934858 namespace=k8s.io Feb 13 19:01:02.498286 containerd[1442]: time="2025-02-13T19:01:02.498193604Z" level=warning msg="cleaning up after shim disconnected" id=71fbc9387c049124ef7ca29c11b87d873a19809e7e6f6601b135418bd0934858 namespace=k8s.io Feb 13 19:01:02.498286 containerd[1442]: time="2025-02-13T19:01:02.498201324Z" level=info msg="cleaning up dead shim" namespace=k8s.io Feb 13 19:01:02.673962 kubelet[2573]: E0213 19:01:02.673936 2573 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:01:02.674870 containerd[1442]: time="2025-02-13T19:01:02.674839145Z" level=info msg="PullImage \"docker.io/flannel/flannel:v0.22.0\"" Feb 
13 19:01:04.172417 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2166669578.mount: Deactivated successfully. Feb 13 19:01:05.773022 containerd[1442]: time="2025-02-13T19:01:05.772724059Z" level=info msg="stop pulling image docker.io/flannel/flannel:v0.22.0: active requests=0, bytes read=26874261" Feb 13 19:01:05.778683 containerd[1442]: time="2025-02-13T19:01:05.778622710Z" level=info msg="Pulled image \"docker.io/flannel/flannel:v0.22.0\" with image id \"sha256:b3d1319ea6da12d4a1dd21a923f6a71f942a7b6e2c4763b8a3cca0725fb8aadf\", repo tag \"docker.io/flannel/flannel:v0.22.0\", repo digest \"docker.io/flannel/flannel@sha256:5f83f1243057458e27249157394e3859cf31cc075354af150d497f2ebc8b54db\", size \"26863435\" in 3.103740564s" Feb 13 19:01:05.778683 containerd[1442]: time="2025-02-13T19:01:05.778668030Z" level=info msg="PullImage \"docker.io/flannel/flannel:v0.22.0\" returns image reference \"sha256:b3d1319ea6da12d4a1dd21a923f6a71f942a7b6e2c4763b8a3cca0725fb8aadf\"" Feb 13 19:01:05.789252 containerd[1442]: time="2025-02-13T19:01:05.788445127Z" level=info msg="CreateContainer within sandbox \"f38b3328f490f1f6865f425d0f43e4a457400f074c9bcf812422f22b533aee95\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Feb 13 19:01:05.802068 containerd[1442]: time="2025-02-13T19:01:05.802005431Z" level=info msg="ImageCreate event name:\"docker.io/flannel/flannel:v0.22.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:01:05.803146 containerd[1442]: time="2025-02-13T19:01:05.803110353Z" level=info msg="ImageCreate event name:\"sha256:b3d1319ea6da12d4a1dd21a923f6a71f942a7b6e2c4763b8a3cca0725fb8aadf\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:01:05.804064 containerd[1442]: time="2025-02-13T19:01:05.804033955Z" level=info msg="ImageCreate event name:\"docker.io/flannel/flannel@sha256:5f83f1243057458e27249157394e3859cf31cc075354af150d497f2ebc8b54db\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" 
Feb 13 19:01:05.840152 containerd[1442]: time="2025-02-13T19:01:05.840096219Z" level=info msg="CreateContainer within sandbox \"f38b3328f490f1f6865f425d0f43e4a457400f074c9bcf812422f22b533aee95\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"29b053b71396decd7ec9c5b4f2201b1a9f7a9ac3ffc29bcf4058d62ec47409cf\"" Feb 13 19:01:05.842267 containerd[1442]: time="2025-02-13T19:01:05.842102782Z" level=info msg="StartContainer for \"29b053b71396decd7ec9c5b4f2201b1a9f7a9ac3ffc29bcf4058d62ec47409cf\"" Feb 13 19:01:05.874463 systemd[1]: Started cri-containerd-29b053b71396decd7ec9c5b4f2201b1a9f7a9ac3ffc29bcf4058d62ec47409cf.scope - libcontainer container 29b053b71396decd7ec9c5b4f2201b1a9f7a9ac3ffc29bcf4058d62ec47409cf. Feb 13 19:01:05.906990 containerd[1442]: time="2025-02-13T19:01:05.906342297Z" level=info msg="StartContainer for \"29b053b71396decd7ec9c5b4f2201b1a9f7a9ac3ffc29bcf4058d62ec47409cf\" returns successfully" Feb 13 19:01:05.908994 systemd[1]: cri-containerd-29b053b71396decd7ec9c5b4f2201b1a9f7a9ac3ffc29bcf4058d62ec47409cf.scope: Deactivated successfully. Feb 13 19:01:05.926250 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-29b053b71396decd7ec9c5b4f2201b1a9f7a9ac3ffc29bcf4058d62ec47409cf-rootfs.mount: Deactivated successfully. 
Feb 13 19:01:06.026702 kubelet[2573]: I0213 19:01:06.026058 2573 kubelet_node_status.go:497] "Fast updating node status as it just became ready" Feb 13 19:01:06.028621 containerd[1442]: time="2025-02-13T19:01:06.026713268Z" level=info msg="shim disconnected" id=29b053b71396decd7ec9c5b4f2201b1a9f7a9ac3ffc29bcf4058d62ec47409cf namespace=k8s.io Feb 13 19:01:06.028621 containerd[1442]: time="2025-02-13T19:01:06.026768228Z" level=warning msg="cleaning up after shim disconnected" id=29b053b71396decd7ec9c5b4f2201b1a9f7a9ac3ffc29bcf4058d62ec47409cf namespace=k8s.io Feb 13 19:01:06.028621 containerd[1442]: time="2025-02-13T19:01:06.026775948Z" level=info msg="cleaning up dead shim" namespace=k8s.io Feb 13 19:01:06.055579 kubelet[2573]: I0213 19:01:06.055531 2573 topology_manager.go:215] "Topology Admit Handler" podUID="f8a81e99-145d-4fc4-96b4-34ed3f664fe3" podNamespace="kube-system" podName="coredns-7db6d8ff4d-4t9p6" Feb 13 19:01:06.055848 kubelet[2573]: I0213 19:01:06.055833 2573 topology_manager.go:215] "Topology Admit Handler" podUID="aabade59-19d2-4e3c-a99c-efa06485b1e0" podNamespace="kube-system" podName="coredns-7db6d8ff4d-h8mrc" Feb 13 19:01:06.066000 systemd[1]: Created slice kubepods-burstable-podaabade59_19d2_4e3c_a99c_efa06485b1e0.slice - libcontainer container kubepods-burstable-podaabade59_19d2_4e3c_a99c_efa06485b1e0.slice. Feb 13 19:01:06.074623 systemd[1]: Created slice kubepods-burstable-podf8a81e99_145d_4fc4_96b4_34ed3f664fe3.slice - libcontainer container kubepods-burstable-podf8a81e99_145d_4fc4_96b4_34ed3f664fe3.slice. 
Feb 13 19:01:06.140661 kubelet[2573]: I0213 19:01:06.140618 2573 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rmbvm\" (UniqueName: \"kubernetes.io/projected/aabade59-19d2-4e3c-a99c-efa06485b1e0-kube-api-access-rmbvm\") pod \"coredns-7db6d8ff4d-h8mrc\" (UID: \"aabade59-19d2-4e3c-a99c-efa06485b1e0\") " pod="kube-system/coredns-7db6d8ff4d-h8mrc" Feb 13 19:01:06.140951 kubelet[2573]: I0213 19:01:06.140770 2573 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/aabade59-19d2-4e3c-a99c-efa06485b1e0-config-volume\") pod \"coredns-7db6d8ff4d-h8mrc\" (UID: \"aabade59-19d2-4e3c-a99c-efa06485b1e0\") " pod="kube-system/coredns-7db6d8ff4d-h8mrc" Feb 13 19:01:06.140951 kubelet[2573]: I0213 19:01:06.140800 2573 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xc9hf\" (UniqueName: \"kubernetes.io/projected/f8a81e99-145d-4fc4-96b4-34ed3f664fe3-kube-api-access-xc9hf\") pod \"coredns-7db6d8ff4d-4t9p6\" (UID: \"f8a81e99-145d-4fc4-96b4-34ed3f664fe3\") " pod="kube-system/coredns-7db6d8ff4d-4t9p6" Feb 13 19:01:06.140951 kubelet[2573]: I0213 19:01:06.140819 2573 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/f8a81e99-145d-4fc4-96b4-34ed3f664fe3-config-volume\") pod \"coredns-7db6d8ff4d-4t9p6\" (UID: \"f8a81e99-145d-4fc4-96b4-34ed3f664fe3\") " pod="kube-system/coredns-7db6d8ff4d-4t9p6" Feb 13 19:01:06.369963 kubelet[2573]: E0213 19:01:06.369375 2573 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:01:06.370221 containerd[1442]: time="2025-02-13T19:01:06.370153720Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:coredns-7db6d8ff4d-h8mrc,Uid:aabade59-19d2-4e3c-a99c-efa06485b1e0,Namespace:kube-system,Attempt:0,}" Feb 13 19:01:06.378177 kubelet[2573]: E0213 19:01:06.378118 2573 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:01:06.378757 containerd[1442]: time="2025-02-13T19:01:06.378658174Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-4t9p6,Uid:f8a81e99-145d-4fc4-96b4-34ed3f664fe3,Namespace:kube-system,Attempt:0,}" Feb 13 19:01:06.440385 containerd[1442]: time="2025-02-13T19:01:06.440337237Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-h8mrc,Uid:aabade59-19d2-4e3c-a99c-efa06485b1e0,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"bcbe4a6347666a1d1ec778b803ad7420262dbbfcec2d2864d28db0b4b25db15c\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory" Feb 13 19:01:06.440898 kubelet[2573]: E0213 19:01:06.440668 2573 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"bcbe4a6347666a1d1ec778b803ad7420262dbbfcec2d2864d28db0b4b25db15c\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory" Feb 13 19:01:06.440898 kubelet[2573]: E0213 19:01:06.440745 2573 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"bcbe4a6347666a1d1ec778b803ad7420262dbbfcec2d2864d28db0b4b25db15c\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory" pod="kube-system/coredns-7db6d8ff4d-h8mrc" Feb 13 19:01:06.440898 kubelet[2573]: E0213 19:01:06.440763 2573 
kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"bcbe4a6347666a1d1ec778b803ad7420262dbbfcec2d2864d28db0b4b25db15c\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory" pod="kube-system/coredns-7db6d8ff4d-h8mrc" Feb 13 19:01:06.441071 kubelet[2573]: E0213 19:01:06.440809 2573 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7db6d8ff4d-h8mrc_kube-system(aabade59-19d2-4e3c-a99c-efa06485b1e0)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7db6d8ff4d-h8mrc_kube-system(aabade59-19d2-4e3c-a99c-efa06485b1e0)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"bcbe4a6347666a1d1ec778b803ad7420262dbbfcec2d2864d28db0b4b25db15c\\\": plugin type=\\\"flannel\\\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory\"" pod="kube-system/coredns-7db6d8ff4d-h8mrc" podUID="aabade59-19d2-4e3c-a99c-efa06485b1e0" Feb 13 19:01:06.443002 containerd[1442]: time="2025-02-13T19:01:06.442920561Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-4t9p6,Uid:f8a81e99-145d-4fc4-96b4-34ed3f664fe3,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"0ad11fb98145108ce3630ef485e2e2621d892b0e902c01d7f3dee48e232e62cd\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory" Feb 13 19:01:06.443089 kubelet[2573]: E0213 19:01:06.443071 2573 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0ad11fb98145108ce3630ef485e2e2621d892b0e902c01d7f3dee48e232e62cd\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or 
directory" Feb 13 19:01:06.443120 kubelet[2573]: E0213 19:01:06.443108 2573 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0ad11fb98145108ce3630ef485e2e2621d892b0e902c01d7f3dee48e232e62cd\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory" pod="kube-system/coredns-7db6d8ff4d-4t9p6" Feb 13 19:01:06.443143 kubelet[2573]: E0213 19:01:06.443123 2573 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0ad11fb98145108ce3630ef485e2e2621d892b0e902c01d7f3dee48e232e62cd\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory" pod="kube-system/coredns-7db6d8ff4d-4t9p6" Feb 13 19:01:06.443221 kubelet[2573]: E0213 19:01:06.443153 2573 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7db6d8ff4d-4t9p6_kube-system(f8a81e99-145d-4fc4-96b4-34ed3f664fe3)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7db6d8ff4d-4t9p6_kube-system(f8a81e99-145d-4fc4-96b4-34ed3f664fe3)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"0ad11fb98145108ce3630ef485e2e2621d892b0e902c01d7f3dee48e232e62cd\\\": plugin type=\\\"flannel\\\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory\"" pod="kube-system/coredns-7db6d8ff4d-4t9p6" podUID="f8a81e99-145d-4fc4-96b4-34ed3f664fe3" Feb 13 19:01:06.687514 kubelet[2573]: E0213 19:01:06.687444 2573 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:01:06.691397 containerd[1442]: time="2025-02-13T19:01:06.691348615Z" level=info msg="CreateContainer within sandbox 
\"f38b3328f490f1f6865f425d0f43e4a457400f074c9bcf812422f22b533aee95\" for container &ContainerMetadata{Name:kube-flannel,Attempt:0,}" Feb 13 19:01:06.703820 containerd[1442]: time="2025-02-13T19:01:06.703706316Z" level=info msg="CreateContainer within sandbox \"f38b3328f490f1f6865f425d0f43e4a457400f074c9bcf812422f22b533aee95\" for &ContainerMetadata{Name:kube-flannel,Attempt:0,} returns container id \"700b91845253311756209541186f3b8bcf7429ca9d85081a4823c069c5e0cf93\"" Feb 13 19:01:06.704358 containerd[1442]: time="2025-02-13T19:01:06.704333517Z" level=info msg="StartContainer for \"700b91845253311756209541186f3b8bcf7429ca9d85081a4823c069c5e0cf93\"" Feb 13 19:01:06.728399 systemd[1]: Started cri-containerd-700b91845253311756209541186f3b8bcf7429ca9d85081a4823c069c5e0cf93.scope - libcontainer container 700b91845253311756209541186f3b8bcf7429ca9d85081a4823c069c5e0cf93. Feb 13 19:01:06.755460 containerd[1442]: time="2025-02-13T19:01:06.755406482Z" level=info msg="StartContainer for \"700b91845253311756209541186f3b8bcf7429ca9d85081a4823c069c5e0cf93\" returns successfully" Feb 13 19:01:07.691147 kubelet[2573]: E0213 19:01:07.691099 2573 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:01:07.701960 kubelet[2573]: I0213 19:01:07.701891 2573 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-flannel/kube-flannel-ds-n6xsv" podStartSLOduration=2.780246595 podStartE2EDuration="7.701865347s" podCreationTimestamp="2025-02-13 19:01:00 +0000 UTC" firstStartedPulling="2025-02-13 19:01:00.857828479 +0000 UTC m=+14.315115828" lastFinishedPulling="2025-02-13 19:01:05.779447231 +0000 UTC m=+19.236734580" observedRunningTime="2025-02-13 19:01:07.701758147 +0000 UTC m=+21.159045496" watchObservedRunningTime="2025-02-13 19:01:07.701865347 +0000 UTC m=+21.159152696" Feb 13 19:01:07.848713 systemd-networkd[1385]: flannel.1: Link UP Feb 
13 19:01:07.848722 systemd-networkd[1385]: flannel.1: Gained carrier Feb 13 19:01:08.693400 kubelet[2573]: E0213 19:01:08.692957 2573 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:01:09.102403 systemd-networkd[1385]: flannel.1: Gained IPv6LL Feb 13 19:01:12.712550 systemd[1]: Started sshd@5-10.0.0.86:22-10.0.0.1:46758.service - OpenSSH per-connection server daemon (10.0.0.1:46758). Feb 13 19:01:12.768312 sshd[3223]: Accepted publickey for core from 10.0.0.1 port 46758 ssh2: RSA SHA256:eK9a9kwNbEFhVjkl7Sg3YVwfSWYuGJlM/sx2i90U/0s Feb 13 19:01:12.770137 sshd-session[3223]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 19:01:12.774932 systemd-logind[1424]: New session 6 of user core. Feb 13 19:01:12.791435 systemd[1]: Started session-6.scope - Session 6 of User core. Feb 13 19:01:12.929512 sshd[3225]: Connection closed by 10.0.0.1 port 46758 Feb 13 19:01:12.930440 sshd-session[3223]: pam_unix(sshd:session): session closed for user core Feb 13 19:01:12.933623 systemd[1]: sshd@5-10.0.0.86:22-10.0.0.1:46758.service: Deactivated successfully. Feb 13 19:01:12.938190 systemd[1]: session-6.scope: Deactivated successfully. Feb 13 19:01:12.940338 systemd-logind[1424]: Session 6 logged out. Waiting for processes to exit. Feb 13 19:01:12.942643 systemd-logind[1424]: Removed session 6. Feb 13 19:01:17.944930 systemd[1]: Started sshd@6-10.0.0.86:22-10.0.0.1:46770.service - OpenSSH per-connection server daemon (10.0.0.1:46770). Feb 13 19:01:17.991305 sshd[3267]: Accepted publickey for core from 10.0.0.1 port 46770 ssh2: RSA SHA256:eK9a9kwNbEFhVjkl7Sg3YVwfSWYuGJlM/sx2i90U/0s Feb 13 19:01:17.992603 sshd-session[3267]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 19:01:17.996223 systemd-logind[1424]: New session 7 of user core. 
Feb 13 19:01:18.002378 systemd[1]: Started session-7.scope - Session 7 of User core. Feb 13 19:01:18.112198 sshd[3284]: Connection closed by 10.0.0.1 port 46770 Feb 13 19:01:18.111648 sshd-session[3267]: pam_unix(sshd:session): session closed for user core Feb 13 19:01:18.115030 systemd[1]: sshd@6-10.0.0.86:22-10.0.0.1:46770.service: Deactivated successfully. Feb 13 19:01:18.116617 systemd[1]: session-7.scope: Deactivated successfully. Feb 13 19:01:18.117164 systemd-logind[1424]: Session 7 logged out. Waiting for processes to exit. Feb 13 19:01:18.117889 systemd-logind[1424]: Removed session 7. Feb 13 19:01:18.624477 kubelet[2573]: E0213 19:01:18.624385 2573 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:01:18.625995 containerd[1442]: time="2025-02-13T19:01:18.625741681Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-h8mrc,Uid:aabade59-19d2-4e3c-a99c-efa06485b1e0,Namespace:kube-system,Attempt:0,}" Feb 13 19:01:18.655813 systemd-networkd[1385]: cni0: Link UP Feb 13 19:01:18.655824 systemd-networkd[1385]: cni0: Gained carrier Feb 13 19:01:18.658131 systemd-networkd[1385]: cni0: Lost carrier Feb 13 19:01:18.660890 systemd-networkd[1385]: vethe8e63973: Link UP Feb 13 19:01:18.662741 kernel: cni0: port 1(vethe8e63973) entered blocking state Feb 13 19:01:18.662809 kernel: cni0: port 1(vethe8e63973) entered disabled state Feb 13 19:01:18.662824 kernel: vethe8e63973: entered allmulticast mode Feb 13 19:01:18.665903 kernel: vethe8e63973: entered promiscuous mode Feb 13 19:01:18.665953 kernel: cni0: port 1(vethe8e63973) entered blocking state Feb 13 19:01:18.665968 kernel: cni0: port 1(vethe8e63973) entered forwarding state Feb 13 19:01:18.669249 kernel: cni0: port 1(vethe8e63973) entered disabled state Feb 13 19:01:18.674259 kernel: cni0: port 1(vethe8e63973) entered blocking state Feb 13 19:01:18.674299 
kernel: cni0: port 1(vethe8e63973) entered forwarding state Feb 13 19:01:18.674392 systemd-networkd[1385]: vethe8e63973: Gained carrier Feb 13 19:01:18.674680 systemd-networkd[1385]: cni0: Gained carrier Feb 13 19:01:18.676916 containerd[1442]: map[string]interface {}{"cniVersion":"0.3.1", "hairpinMode":true, "ipMasq":false, "ipam":map[string]interface {}{"ranges":[][]map[string]interface {}{[]map[string]interface {}{map[string]interface {}{"subnet":"192.168.0.0/24"}}}, "routes":[]types.Route{types.Route{Dst:net.IPNet{IP:net.IP{0xc0, 0xa8, 0x0, 0x0}, Mask:net.IPMask{0xff, 0xff, 0x80, 0x0}}, GW:net.IP(nil)}}, "type":"host-local"}, "isDefaultGateway":true, "isGateway":true, "mtu":(*uint)(0x4000012938), "name":"cbr0", "type":"bridge"} Feb 13 19:01:18.676916 containerd[1442]: delegateAdd: netconf sent to delegate plugin: Feb 13 19:01:18.692330 containerd[1442]: {"cniVersion":"0.3.1","hairpinMode":true,"ipMasq":false,"ipam":{"ranges":[[{"subnet":"192.168.0.0/24"}]],"routes":[{"dst":"192.168.0.0/17"}],"type":"host-local"},"isDefaultGateway":true,"isGateway":true,"mtu":1450,"name":"cbr0","type":"bridge"}time="2025-02-13T19:01:18.692222132Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 19:01:18.692330 containerd[1442]: time="2025-02-13T19:01:18.692301372Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 19:01:18.692793 containerd[1442]: time="2025-02-13T19:01:18.692372332Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 19:01:18.692990 containerd[1442]: time="2025-02-13T19:01:18.692958012Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 19:01:18.713413 systemd[1]: Started cri-containerd-442b28b41767c0e33c4f12abc22b5fc157c31df1a0aa42f9251eec07f8d574fd.scope - libcontainer container 442b28b41767c0e33c4f12abc22b5fc157c31df1a0aa42f9251eec07f8d574fd. Feb 13 19:01:18.723891 systemd-resolved[1310]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Feb 13 19:01:18.741302 containerd[1442]: time="2025-02-13T19:01:18.741264049Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-h8mrc,Uid:aabade59-19d2-4e3c-a99c-efa06485b1e0,Namespace:kube-system,Attempt:0,} returns sandbox id \"442b28b41767c0e33c4f12abc22b5fc157c31df1a0aa42f9251eec07f8d574fd\"" Feb 13 19:01:18.742326 kubelet[2573]: E0213 19:01:18.742301 2573 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:01:18.745013 containerd[1442]: time="2025-02-13T19:01:18.744979132Z" level=info msg="CreateContainer within sandbox \"442b28b41767c0e33c4f12abc22b5fc157c31df1a0aa42f9251eec07f8d574fd\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Feb 13 19:01:18.759142 containerd[1442]: time="2025-02-13T19:01:18.759080983Z" level=info msg="CreateContainer within sandbox \"442b28b41767c0e33c4f12abc22b5fc157c31df1a0aa42f9251eec07f8d574fd\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"dc1edd76d130c1f914b5ecd141febb4944ca5d9a693c3facf6bdacd19fde1d68\"" Feb 13 19:01:18.760124 containerd[1442]: time="2025-02-13T19:01:18.759969784Z" level=info msg="StartContainer for \"dc1edd76d130c1f914b5ecd141febb4944ca5d9a693c3facf6bdacd19fde1d68\"" Feb 13 19:01:18.785447 systemd[1]: Started cri-containerd-dc1edd76d130c1f914b5ecd141febb4944ca5d9a693c3facf6bdacd19fde1d68.scope - libcontainer container dc1edd76d130c1f914b5ecd141febb4944ca5d9a693c3facf6bdacd19fde1d68. 
Feb 13 19:01:18.813941 containerd[1442]: time="2025-02-13T19:01:18.813496585Z" level=info msg="StartContainer for \"dc1edd76d130c1f914b5ecd141febb4944ca5d9a693c3facf6bdacd19fde1d68\" returns successfully" Feb 13 19:01:19.635668 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3241150370.mount: Deactivated successfully. Feb 13 19:01:19.717038 kubelet[2573]: E0213 19:01:19.717010 2573 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:01:19.729103 kubelet[2573]: I0213 19:01:19.728416 2573 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7db6d8ff4d-h8mrc" podStartSLOduration=19.728400653 podStartE2EDuration="19.728400653s" podCreationTimestamp="2025-02-13 19:01:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-02-13 19:01:19.728298293 +0000 UTC m=+33.185585642" watchObservedRunningTime="2025-02-13 19:01:19.728400653 +0000 UTC m=+33.185688002" Feb 13 19:01:20.558422 systemd-networkd[1385]: vethe8e63973: Gained IPv6LL Feb 13 19:01:20.686478 systemd-networkd[1385]: cni0: Gained IPv6LL Feb 13 19:01:20.731244 kubelet[2573]: E0213 19:01:20.730870 2573 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:01:21.624514 kubelet[2573]: E0213 19:01:21.624334 2573 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:01:21.625383 containerd[1442]: time="2025-02-13T19:01:21.624906959Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-4t9p6,Uid:f8a81e99-145d-4fc4-96b4-34ed3f664fe3,Namespace:kube-system,Attempt:0,}" Feb 13 
19:01:21.657443 systemd-networkd[1385]: veth060bb5b6: Link UP Feb 13 19:01:21.660445 kernel: cni0: port 2(veth060bb5b6) entered blocking state Feb 13 19:01:21.660516 kernel: cni0: port 2(veth060bb5b6) entered disabled state Feb 13 19:01:21.661267 kernel: veth060bb5b6: entered allmulticast mode Feb 13 19:01:21.676145 kernel: veth060bb5b6: entered promiscuous mode Feb 13 19:01:21.683457 kernel: cni0: port 2(veth060bb5b6) entered blocking state Feb 13 19:01:21.683573 kernel: cni0: port 2(veth060bb5b6) entered forwarding state Feb 13 19:01:21.681918 systemd-networkd[1385]: veth060bb5b6: Gained carrier Feb 13 19:01:21.686540 containerd[1442]: map[string]interface {}{"cniVersion":"0.3.1", "hairpinMode":true, "ipMasq":false, "ipam":map[string]interface {}{"ranges":[][]map[string]interface {}{[]map[string]interface {}{map[string]interface {}{"subnet":"192.168.0.0/24"}}}, "routes":[]types.Route{types.Route{Dst:net.IPNet{IP:net.IP{0xc0, 0xa8, 0x0, 0x0}, Mask:net.IPMask{0xff, 0xff, 0x80, 0x0}}, GW:net.IP(nil)}}, "type":"host-local"}, "isDefaultGateway":true, "isGateway":true, "mtu":(*uint)(0x40000a68e8), "name":"cbr0", "type":"bridge"} Feb 13 19:01:21.686540 containerd[1442]: delegateAdd: netconf sent to delegate plugin: Feb 13 19:01:21.714778 containerd[1442]: {"cniVersion":"0.3.1","hairpinMode":true,"ipMasq":false,"ipam":{"ranges":[[{"subnet":"192.168.0.0/24"}]],"routes":[{"dst":"192.168.0.0/17"}],"type":"host-local"},"isDefaultGateway":true,"isGateway":true,"mtu":1450,"name":"cbr0","type":"bridge"}time="2025-02-13T19:01:21.714343696Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 19:01:21.714778 containerd[1442]: time="2025-02-13T19:01:21.714540336Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 19:01:21.714778 containerd[1442]: time="2025-02-13T19:01:21.714571536Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 19:01:21.714778 containerd[1442]: time="2025-02-13T19:01:21.714703376Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 19:01:21.732217 kubelet[2573]: E0213 19:01:21.732185 2573 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:01:21.745452 systemd[1]: Started cri-containerd-2b1fbd8aab3d036798f784a3f53ccb5d58ba93fd106ecf53c4a4b14b14165535.scope - libcontainer container 2b1fbd8aab3d036798f784a3f53ccb5d58ba93fd106ecf53c4a4b14b14165535. Feb 13 19:01:21.755828 systemd-resolved[1310]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Feb 13 19:01:21.775998 containerd[1442]: time="2025-02-13T19:01:21.775956055Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-4t9p6,Uid:f8a81e99-145d-4fc4-96b4-34ed3f664fe3,Namespace:kube-system,Attempt:0,} returns sandbox id \"2b1fbd8aab3d036798f784a3f53ccb5d58ba93fd106ecf53c4a4b14b14165535\"" Feb 13 19:01:21.777205 kubelet[2573]: E0213 19:01:21.776702 2573 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:01:21.779633 containerd[1442]: time="2025-02-13T19:01:21.779561057Z" level=info msg="CreateContainer within sandbox \"2b1fbd8aab3d036798f784a3f53ccb5d58ba93fd106ecf53c4a4b14b14165535\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Feb 13 19:01:21.797106 containerd[1442]: time="2025-02-13T19:01:21.797053148Z" level=info msg="CreateContainer 
within sandbox \"2b1fbd8aab3d036798f784a3f53ccb5d58ba93fd106ecf53c4a4b14b14165535\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"6693617ab59d7318a36921fa3f63213b34aae6be8d50ea2bbf13eb6beebe432a\"" Feb 13 19:01:21.799053 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1358483802.mount: Deactivated successfully. Feb 13 19:01:21.801149 containerd[1442]: time="2025-02-13T19:01:21.799444590Z" level=info msg="StartContainer for \"6693617ab59d7318a36921fa3f63213b34aae6be8d50ea2bbf13eb6beebe432a\"" Feb 13 19:01:21.827440 systemd[1]: Started cri-containerd-6693617ab59d7318a36921fa3f63213b34aae6be8d50ea2bbf13eb6beebe432a.scope - libcontainer container 6693617ab59d7318a36921fa3f63213b34aae6be8d50ea2bbf13eb6beebe432a. Feb 13 19:01:21.850679 containerd[1442]: time="2025-02-13T19:01:21.850637582Z" level=info msg="StartContainer for \"6693617ab59d7318a36921fa3f63213b34aae6be8d50ea2bbf13eb6beebe432a\" returns successfully" Feb 13 19:01:22.736064 kubelet[2573]: E0213 19:01:22.735215 2573 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:01:22.756367 kubelet[2573]: I0213 19:01:22.756303 2573 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7db6d8ff4d-4t9p6" podStartSLOduration=22.756273365 podStartE2EDuration="22.756273365s" podCreationTimestamp="2025-02-13 19:01:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-02-13 19:01:22.745205519 +0000 UTC m=+36.202492868" watchObservedRunningTime="2025-02-13 19:01:22.756273365 +0000 UTC m=+36.213560714" Feb 13 19:01:23.121939 systemd[1]: Started sshd@7-10.0.0.86:22-10.0.0.1:45266.service - OpenSSH per-connection server daemon (10.0.0.1:45266). 
Feb 13 19:01:23.168469 sshd[3556]: Accepted publickey for core from 10.0.0.1 port 45266 ssh2: RSA SHA256:eK9a9kwNbEFhVjkl7Sg3YVwfSWYuGJlM/sx2i90U/0s Feb 13 19:01:23.170047 sshd-session[3556]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 19:01:23.174106 systemd-logind[1424]: New session 8 of user core. Feb 13 19:01:23.179399 systemd[1]: Started session-8.scope - Session 8 of User core. Feb 13 19:01:23.296291 sshd[3558]: Connection closed by 10.0.0.1 port 45266 Feb 13 19:01:23.296675 sshd-session[3556]: pam_unix(sshd:session): session closed for user core Feb 13 19:01:23.307893 systemd[1]: sshd@7-10.0.0.86:22-10.0.0.1:45266.service: Deactivated successfully. Feb 13 19:01:23.310019 systemd[1]: session-8.scope: Deactivated successfully. Feb 13 19:01:23.314066 systemd-logind[1424]: Session 8 logged out. Waiting for processes to exit. Feb 13 19:01:23.325407 systemd[1]: Started sshd@8-10.0.0.86:22-10.0.0.1:45278.service - OpenSSH per-connection server daemon (10.0.0.1:45278). Feb 13 19:01:23.326909 systemd-logind[1424]: Removed session 8. Feb 13 19:01:23.371642 sshd[3571]: Accepted publickey for core from 10.0.0.1 port 45278 ssh2: RSA SHA256:eK9a9kwNbEFhVjkl7Sg3YVwfSWYuGJlM/sx2i90U/0s Feb 13 19:01:23.372953 sshd-session[3571]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 19:01:23.382707 systemd-logind[1424]: New session 9 of user core. Feb 13 19:01:23.398701 systemd[1]: Started session-9.scope - Session 9 of User core. Feb 13 19:01:23.559828 sshd[3573]: Connection closed by 10.0.0.1 port 45278 Feb 13 19:01:23.560946 sshd-session[3571]: pam_unix(sshd:session): session closed for user core Feb 13 19:01:23.567567 systemd-networkd[1385]: veth060bb5b6: Gained IPv6LL Feb 13 19:01:23.571091 systemd[1]: sshd@8-10.0.0.86:22-10.0.0.1:45278.service: Deactivated successfully. Feb 13 19:01:23.575488 systemd[1]: session-9.scope: Deactivated successfully. 
Feb 13 19:01:23.578556 systemd-logind[1424]: Session 9 logged out. Waiting for processes to exit.
Feb 13 19:01:23.595589 systemd[1]: Started sshd@9-10.0.0.86:22-10.0.0.1:45288.service - OpenSSH per-connection server daemon (10.0.0.1:45288).
Feb 13 19:01:23.596693 systemd-logind[1424]: Removed session 9.
Feb 13 19:01:23.634916 sshd[3585]: Accepted publickey for core from 10.0.0.1 port 45288 ssh2: RSA SHA256:eK9a9kwNbEFhVjkl7Sg3YVwfSWYuGJlM/sx2i90U/0s
Feb 13 19:01:23.636184 sshd-session[3585]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 19:01:23.639982 systemd-logind[1424]: New session 10 of user core.
Feb 13 19:01:23.652415 systemd[1]: Started session-10.scope - Session 10 of User core.
Feb 13 19:01:23.737349 kubelet[2573]: E0213 19:01:23.737322 2573 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 13 19:01:23.771871 sshd[3587]: Connection closed by 10.0.0.1 port 45288
Feb 13 19:01:23.772434 sshd-session[3585]: pam_unix(sshd:session): session closed for user core
Feb 13 19:01:23.776039 systemd[1]: sshd@9-10.0.0.86:22-10.0.0.1:45288.service: Deactivated successfully.
Feb 13 19:01:23.777754 systemd[1]: session-10.scope: Deactivated successfully.
Feb 13 19:01:23.781926 systemd-logind[1424]: Session 10 logged out. Waiting for processes to exit.
Feb 13 19:01:23.785704 systemd-logind[1424]: Removed session 10.
Feb 13 19:01:28.783037 systemd[1]: Started sshd@10-10.0.0.86:22-10.0.0.1:45298.service - OpenSSH per-connection server daemon (10.0.0.1:45298).
Feb 13 19:01:28.833290 sshd[3620]: Accepted publickey for core from 10.0.0.1 port 45298 ssh2: RSA SHA256:eK9a9kwNbEFhVjkl7Sg3YVwfSWYuGJlM/sx2i90U/0s
Feb 13 19:01:28.834176 sshd-session[3620]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 19:01:28.839444 systemd-logind[1424]: New session 11 of user core.
Feb 13 19:01:28.845411 systemd[1]: Started session-11.scope - Session 11 of User core.
Feb 13 19:01:28.976737 sshd[3622]: Connection closed by 10.0.0.1 port 45298
Feb 13 19:01:28.977129 sshd-session[3620]: pam_unix(sshd:session): session closed for user core
Feb 13 19:01:28.987823 systemd[1]: sshd@10-10.0.0.86:22-10.0.0.1:45298.service: Deactivated successfully.
Feb 13 19:01:28.989434 systemd[1]: session-11.scope: Deactivated successfully.
Feb 13 19:01:28.991163 systemd-logind[1424]: Session 11 logged out. Waiting for processes to exit.
Feb 13 19:01:28.999569 systemd[1]: Started sshd@11-10.0.0.86:22-10.0.0.1:45312.service - OpenSSH per-connection server daemon (10.0.0.1:45312).
Feb 13 19:01:29.001314 systemd-logind[1424]: Removed session 11.
Feb 13 19:01:29.041383 sshd[3635]: Accepted publickey for core from 10.0.0.1 port 45312 ssh2: RSA SHA256:eK9a9kwNbEFhVjkl7Sg3YVwfSWYuGJlM/sx2i90U/0s
Feb 13 19:01:29.043566 sshd-session[3635]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 19:01:29.047954 systemd-logind[1424]: New session 12 of user core.
Feb 13 19:01:29.060458 systemd[1]: Started session-12.scope - Session 12 of User core.
Feb 13 19:01:29.270731 sshd[3637]: Connection closed by 10.0.0.1 port 45312
Feb 13 19:01:29.270986 sshd-session[3635]: pam_unix(sshd:session): session closed for user core
Feb 13 19:01:29.283993 systemd[1]: sshd@11-10.0.0.86:22-10.0.0.1:45312.service: Deactivated successfully.
Feb 13 19:01:29.286193 systemd[1]: session-12.scope: Deactivated successfully.
Feb 13 19:01:29.287876 systemd-logind[1424]: Session 12 logged out. Waiting for processes to exit.
Feb 13 19:01:29.290372 systemd[1]: Started sshd@12-10.0.0.86:22-10.0.0.1:45326.service - OpenSSH per-connection server daemon (10.0.0.1:45326).
Feb 13 19:01:29.291078 systemd-logind[1424]: Removed session 12.
Feb 13 19:01:29.334602 sshd[3647]: Accepted publickey for core from 10.0.0.1 port 45326 ssh2: RSA SHA256:eK9a9kwNbEFhVjkl7Sg3YVwfSWYuGJlM/sx2i90U/0s
Feb 13 19:01:29.335927 sshd-session[3647]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 19:01:29.340291 systemd-logind[1424]: New session 13 of user core.
Feb 13 19:01:29.349435 systemd[1]: Started session-13.scope - Session 13 of User core.
Feb 13 19:01:30.530053 sshd[3649]: Connection closed by 10.0.0.1 port 45326
Feb 13 19:01:30.530715 sshd-session[3647]: pam_unix(sshd:session): session closed for user core
Feb 13 19:01:30.541829 systemd[1]: sshd@12-10.0.0.86:22-10.0.0.1:45326.service: Deactivated successfully.
Feb 13 19:01:30.546764 systemd[1]: session-13.scope: Deactivated successfully.
Feb 13 19:01:30.548523 systemd-logind[1424]: Session 13 logged out. Waiting for processes to exit.
Feb 13 19:01:30.562782 systemd[1]: Started sshd@13-10.0.0.86:22-10.0.0.1:45342.service - OpenSSH per-connection server daemon (10.0.0.1:45342).
Feb 13 19:01:30.563856 systemd-logind[1424]: Removed session 13.
Feb 13 19:01:30.609488 sshd[3667]: Accepted publickey for core from 10.0.0.1 port 45342 ssh2: RSA SHA256:eK9a9kwNbEFhVjkl7Sg3YVwfSWYuGJlM/sx2i90U/0s
Feb 13 19:01:30.611024 sshd-session[3667]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 19:01:30.615450 systemd-logind[1424]: New session 14 of user core.
Feb 13 19:01:30.625436 systemd[1]: Started session-14.scope - Session 14 of User core.
Feb 13 19:01:30.866170 sshd[3669]: Connection closed by 10.0.0.1 port 45342
Feb 13 19:01:30.866833 sshd-session[3667]: pam_unix(sshd:session): session closed for user core
Feb 13 19:01:30.881804 systemd[1]: sshd@13-10.0.0.86:22-10.0.0.1:45342.service: Deactivated successfully.
Feb 13 19:01:30.884711 systemd[1]: session-14.scope: Deactivated successfully.
Feb 13 19:01:30.887722 systemd-logind[1424]: Session 14 logged out. Waiting for processes to exit.
Feb 13 19:01:30.896635 systemd[1]: Started sshd@14-10.0.0.86:22-10.0.0.1:45356.service - OpenSSH per-connection server daemon (10.0.0.1:45356).
Feb 13 19:01:30.897708 systemd-logind[1424]: Removed session 14.
Feb 13 19:01:30.938272 sshd[3681]: Accepted publickey for core from 10.0.0.1 port 45356 ssh2: RSA SHA256:eK9a9kwNbEFhVjkl7Sg3YVwfSWYuGJlM/sx2i90U/0s
Feb 13 19:01:30.939646 sshd-session[3681]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 19:01:30.945328 systemd-logind[1424]: New session 15 of user core.
Feb 13 19:01:30.957489 systemd[1]: Started session-15.scope - Session 15 of User core.
Feb 13 19:01:31.083739 sshd[3683]: Connection closed by 10.0.0.1 port 45356
Feb 13 19:01:31.084725 sshd-session[3681]: pam_unix(sshd:session): session closed for user core
Feb 13 19:01:31.089377 systemd[1]: sshd@14-10.0.0.86:22-10.0.0.1:45356.service: Deactivated successfully.
Feb 13 19:01:31.092139 systemd[1]: session-15.scope: Deactivated successfully.
Feb 13 19:01:31.093107 systemd-logind[1424]: Session 15 logged out. Waiting for processes to exit.
Feb 13 19:01:31.094182 systemd-logind[1424]: Removed session 15.
Feb 13 19:01:36.099471 systemd[1]: Started sshd@15-10.0.0.86:22-10.0.0.1:47298.service - OpenSSH per-connection server daemon (10.0.0.1:47298).
Feb 13 19:01:36.153280 sshd[3721]: Accepted publickey for core from 10.0.0.1 port 47298 ssh2: RSA SHA256:eK9a9kwNbEFhVjkl7Sg3YVwfSWYuGJlM/sx2i90U/0s
Feb 13 19:01:36.153950 sshd-session[3721]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 19:01:36.158027 systemd-logind[1424]: New session 16 of user core.
Feb 13 19:01:36.167512 systemd[1]: Started session-16.scope - Session 16 of User core.
Feb 13 19:01:36.289412 sshd[3723]: Connection closed by 10.0.0.1 port 47298
Feb 13 19:01:36.289770 sshd-session[3721]: pam_unix(sshd:session): session closed for user core
Feb 13 19:01:36.293737 systemd[1]: sshd@15-10.0.0.86:22-10.0.0.1:47298.service: Deactivated successfully.
Feb 13 19:01:36.297069 systemd[1]: session-16.scope: Deactivated successfully.
Feb 13 19:01:36.297955 systemd-logind[1424]: Session 16 logged out. Waiting for processes to exit.
Feb 13 19:01:36.298880 systemd-logind[1424]: Removed session 16.
Feb 13 19:01:41.300525 systemd[1]: Started sshd@16-10.0.0.86:22-10.0.0.1:47312.service - OpenSSH per-connection server daemon (10.0.0.1:47312).
Feb 13 19:01:41.352209 sshd[3757]: Accepted publickey for core from 10.0.0.1 port 47312 ssh2: RSA SHA256:eK9a9kwNbEFhVjkl7Sg3YVwfSWYuGJlM/sx2i90U/0s
Feb 13 19:01:41.353786 sshd-session[3757]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 19:01:41.358889 systemd-logind[1424]: New session 17 of user core.
Feb 13 19:01:41.368533 systemd[1]: Started session-17.scope - Session 17 of User core.
Feb 13 19:01:41.486441 sshd[3759]: Connection closed by 10.0.0.1 port 47312
Feb 13 19:01:41.487037 sshd-session[3757]: pam_unix(sshd:session): session closed for user core
Feb 13 19:01:41.489930 systemd[1]: sshd@16-10.0.0.86:22-10.0.0.1:47312.service: Deactivated successfully.
Feb 13 19:01:41.492505 systemd[1]: session-17.scope: Deactivated successfully.
Feb 13 19:01:41.494203 systemd-logind[1424]: Session 17 logged out. Waiting for processes to exit.
Feb 13 19:01:41.495310 systemd-logind[1424]: Removed session 17.
Feb 13 19:01:46.497282 systemd[1]: Started sshd@17-10.0.0.86:22-10.0.0.1:48550.service - OpenSSH per-connection server daemon (10.0.0.1:48550).
Feb 13 19:01:46.550663 sshd[3792]: Accepted publickey for core from 10.0.0.1 port 48550 ssh2: RSA SHA256:eK9a9kwNbEFhVjkl7Sg3YVwfSWYuGJlM/sx2i90U/0s
Feb 13 19:01:46.552041 sshd-session[3792]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 19:01:46.557406 systemd-logind[1424]: New session 18 of user core.
Feb 13 19:01:46.564431 systemd[1]: Started session-18.scope - Session 18 of User core.
Feb 13 19:01:46.679792 sshd[3794]: Connection closed by 10.0.0.1 port 48550
Feb 13 19:01:46.680202 sshd-session[3792]: pam_unix(sshd:session): session closed for user core
Feb 13 19:01:46.685249 systemd[1]: sshd@17-10.0.0.86:22-10.0.0.1:48550.service: Deactivated successfully.
Feb 13 19:01:46.687309 systemd[1]: session-18.scope: Deactivated successfully.
Feb 13 19:01:46.688485 systemd-logind[1424]: Session 18 logged out. Waiting for processes to exit.
Feb 13 19:01:46.689427 systemd-logind[1424]: Removed session 18.