Feb 13 20:27:53.976005 kernel: Booting Linux on physical CPU 0x0000000000 [0x413fd0c1]
Feb 13 20:27:53.976025 kernel: Linux version 6.6.74-flatcar (build@pony-truck.infra.kinvolk.io) (aarch64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p17) 13.3.1 20240614, GNU ld (Gentoo 2.42 p3) 2.42.0) #1 SMP PREEMPT Thu Feb 13 18:13:29 -00 2025
Feb 13 20:27:53.976034 kernel: KASLR enabled
Feb 13 20:27:53.976040 kernel: efi: EFI v2.7 by EDK II
Feb 13 20:27:53.976046 kernel: efi: SMBIOS 3.0=0xdced0000 MEMATTR=0xdba86018 ACPI 2.0=0xd9710018 RNG=0xd971e498 MEMRESERVE=0xd9b43d18
Feb 13 20:27:53.976052 kernel: random: crng init done
Feb 13 20:27:53.976059 kernel: ACPI: Early table checksum verification disabled
Feb 13 20:27:53.976065 kernel: ACPI: RSDP 0x00000000D9710018 000024 (v02 BOCHS )
Feb 13 20:27:53.976071 kernel: ACPI: XSDT 0x00000000D971FE98 000064 (v01 BOCHS BXPC 00000001 01000013)
Feb 13 20:27:53.976079 kernel: ACPI: FACP 0x00000000D971FA98 000114 (v06 BOCHS BXPC 00000001 BXPC 00000001)
Feb 13 20:27:53.976086 kernel: ACPI: DSDT 0x00000000D9717518 0014A2 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Feb 13 20:27:53.976096 kernel: ACPI: APIC 0x00000000D971FC18 0001A8 (v04 BOCHS BXPC 00000001 BXPC 00000001)
Feb 13 20:27:53.976104 kernel: ACPI: PPTT 0x00000000D971D898 00009C (v02 BOCHS BXPC 00000001 BXPC 00000001)
Feb 13 20:27:53.976113 kernel: ACPI: GTDT 0x00000000D971E818 000060 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Feb 13 20:27:53.976121 kernel: ACPI: MCFG 0x00000000D971E918 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001)
Feb 13 20:27:53.976129 kernel: ACPI: SPCR 0x00000000D971FF98 000050 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Feb 13 20:27:53.976136 kernel: ACPI: DBG2 0x00000000D971E418 000057 (v00 BOCHS BXPC 00000001 BXPC 00000001)
Feb 13 20:27:53.976143 kernel: ACPI: IORT 0x00000000D971E718 000080 (v03 BOCHS BXPC 00000001 BXPC 00000001)
Feb 13 20:27:53.976149 kernel: ACPI: SPCR: console: pl011,mmio,0x9000000,9600
Feb 13 20:27:53.976156 kernel: NUMA: Failed to initialise from firmware
Feb 13 20:27:53.976165 kernel: NUMA: Faking a node at [mem 0x0000000040000000-0x00000000dcffffff]
Feb 13 20:27:53.976171 kernel: NUMA: NODE_DATA [mem 0xdc959800-0xdc95efff]
Feb 13 20:27:53.976178 kernel: Zone ranges:
Feb 13 20:27:53.976185 kernel: DMA [mem 0x0000000040000000-0x00000000dcffffff]
Feb 13 20:27:53.976191 kernel: DMA32 empty
Feb 13 20:27:53.976199 kernel: Normal empty
Feb 13 20:27:53.976206 kernel: Movable zone start for each node
Feb 13 20:27:53.976212 kernel: Early memory node ranges
Feb 13 20:27:53.976219 kernel: node 0: [mem 0x0000000040000000-0x00000000d976ffff]
Feb 13 20:27:53.976225 kernel: node 0: [mem 0x00000000d9770000-0x00000000d9b3ffff]
Feb 13 20:27:53.976231 kernel: node 0: [mem 0x00000000d9b40000-0x00000000dce1ffff]
Feb 13 20:27:53.976238 kernel: node 0: [mem 0x00000000dce20000-0x00000000dceaffff]
Feb 13 20:27:53.976244 kernel: node 0: [mem 0x00000000dceb0000-0x00000000dcebffff]
Feb 13 20:27:53.976251 kernel: node 0: [mem 0x00000000dcec0000-0x00000000dcfdffff]
Feb 13 20:27:53.976257 kernel: node 0: [mem 0x00000000dcfe0000-0x00000000dcffffff]
Feb 13 20:27:53.976263 kernel: Initmem setup node 0 [mem 0x0000000040000000-0x00000000dcffffff]
Feb 13 20:27:53.976270 kernel: On node 0, zone DMA: 12288 pages in unavailable ranges
Feb 13 20:27:53.976277 kernel: psci: probing for conduit method from ACPI.
Feb 13 20:27:53.976284 kernel: psci: PSCIv1.1 detected in firmware.
Feb 13 20:27:53.976290 kernel: psci: Using standard PSCI v0.2 function IDs
Feb 13 20:27:53.976299 kernel: psci: Trusted OS migration not required
Feb 13 20:27:53.976306 kernel: psci: SMC Calling Convention v1.1
Feb 13 20:27:53.976313 kernel: smccc: KVM: hypervisor services detected (0x00000000 0x00000000 0x00000000 0x00000003)
Feb 13 20:27:53.976321 kernel: percpu: Embedded 31 pages/cpu s86696 r8192 d32088 u126976
Feb 13 20:27:53.976329 kernel: pcpu-alloc: s86696 r8192 d32088 u126976 alloc=31*4096
Feb 13 20:27:53.976336 kernel: pcpu-alloc: [0] 0 [0] 1 [0] 2 [0] 3
Feb 13 20:27:53.976343 kernel: Detected PIPT I-cache on CPU0
Feb 13 20:27:53.976349 kernel: CPU features: detected: GIC system register CPU interface
Feb 13 20:27:53.976356 kernel: CPU features: detected: Hardware dirty bit management
Feb 13 20:27:53.976363 kernel: CPU features: detected: Spectre-v4
Feb 13 20:27:53.976370 kernel: CPU features: detected: Spectre-BHB
Feb 13 20:27:53.976377 kernel: CPU features: kernel page table isolation forced ON by KASLR
Feb 13 20:27:53.976384 kernel: CPU features: detected: Kernel page table isolation (KPTI)
Feb 13 20:27:53.976392 kernel: CPU features: detected: ARM erratum 1418040
Feb 13 20:27:53.976398 kernel: CPU features: detected: SSBS not fully self-synchronizing
Feb 13 20:27:53.976405 kernel: alternatives: applying boot alternatives
Feb 13 20:27:53.976413 kernel: Kernel command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected acpi=force verity.usrhash=c15c751c06cfb933aa98417326b93d899c08a83ce060a940cd01082629c201a7
Feb 13 20:27:53.976420 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Feb 13 20:27:53.976427 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Feb 13 20:27:53.976434 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Feb 13 20:27:53.976441 kernel: Fallback order for Node 0: 0
Feb 13 20:27:53.976448 kernel: Built 1 zonelists, mobility grouping on. Total pages: 633024
Feb 13 20:27:53.976455 kernel: Policy zone: DMA
Feb 13 20:27:53.976461 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Feb 13 20:27:53.976469 kernel: software IO TLB: area num 4.
Feb 13 20:27:53.976476 kernel: software IO TLB: mapped [mem 0x00000000d2e00000-0x00000000d6e00000] (64MB)
Feb 13 20:27:53.976484 kernel: Memory: 2386536K/2572288K available (10240K kernel code, 2186K rwdata, 8096K rodata, 39360K init, 897K bss, 185752K reserved, 0K cma-reserved)
Feb 13 20:27:53.976491 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1
Feb 13 20:27:53.976498 kernel: rcu: Preemptible hierarchical RCU implementation.
Feb 13 20:27:53.976505 kernel: rcu: RCU event tracing is enabled.
Feb 13 20:27:53.976512 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4.
Feb 13 20:27:53.976519 kernel: Trampoline variant of Tasks RCU enabled.
Feb 13 20:27:53.976540 kernel: Tracing variant of Tasks RCU enabled.
Feb 13 20:27:53.976547 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Feb 13 20:27:53.976554 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4
Feb 13 20:27:53.976562 kernel: NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0
Feb 13 20:27:53.976570 kernel: GICv3: 256 SPIs implemented
Feb 13 20:27:53.976577 kernel: GICv3: 0 Extended SPIs implemented
Feb 13 20:27:53.976583 kernel: Root IRQ handler: gic_handle_irq
Feb 13 20:27:53.976590 kernel: GICv3: GICv3 features: 16 PPIs, DirectLPI
Feb 13 20:27:53.976597 kernel: GICv3: CPU0: found redistributor 0 region 0:0x00000000080a0000
Feb 13 20:27:53.976604 kernel: ITS [mem 0x08080000-0x0809ffff]
Feb 13 20:27:53.976611 kernel: ITS@0x0000000008080000: allocated 8192 Devices @400c0000 (indirect, esz 8, psz 64K, shr 1)
Feb 13 20:27:53.976619 kernel: ITS@0x0000000008080000: allocated 8192 Interrupt Collections @400d0000 (flat, esz 8, psz 64K, shr 1)
Feb 13 20:27:53.976633 kernel: GICv3: using LPI property table @0x00000000400f0000
Feb 13 20:27:53.976640 kernel: GICv3: CPU0: using allocated LPI pending table @0x0000000040100000
Feb 13 20:27:53.976647 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Feb 13 20:27:53.976655 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Feb 13 20:27:53.976662 kernel: arch_timer: cp15 timer(s) running at 25.00MHz (virt).
Feb 13 20:27:53.976670 kernel: clocksource: arch_sys_counter: mask: 0xffffffffffffff max_cycles: 0x5c40939b5, max_idle_ns: 440795202646 ns
Feb 13 20:27:53.976677 kernel: sched_clock: 56 bits at 25MHz, resolution 40ns, wraps every 4398046511100ns
Feb 13 20:27:53.976685 kernel: arm-pv: using stolen time PV
Feb 13 20:27:53.976692 kernel: Console: colour dummy device 80x25
Feb 13 20:27:53.976699 kernel: ACPI: Core revision 20230628
Feb 13 20:27:53.976706 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 50.00 BogoMIPS (lpj=25000)
Feb 13 20:27:53.976714 kernel: pid_max: default: 32768 minimum: 301
Feb 13 20:27:53.976721 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity
Feb 13 20:27:53.976730 kernel: landlock: Up and running.
Feb 13 20:27:53.976737 kernel: SELinux: Initializing.
Feb 13 20:27:53.976744 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Feb 13 20:27:53.976751 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Feb 13 20:27:53.976759 kernel: RCU Tasks: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Feb 13 20:27:53.976766 kernel: RCU Tasks Trace: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Feb 13 20:27:53.976773 kernel: rcu: Hierarchical SRCU implementation.
Feb 13 20:27:53.976780 kernel: rcu: Max phase no-delay instances is 400.
Feb 13 20:27:53.976787 kernel: Platform MSI: ITS@0x8080000 domain created
Feb 13 20:27:53.976795 kernel: PCI/MSI: ITS@0x8080000 domain created
Feb 13 20:27:53.976802 kernel: Remapping and enabling EFI services.
Feb 13 20:27:53.976809 kernel: smp: Bringing up secondary CPUs ...
Feb 13 20:27:53.976816 kernel: Detected PIPT I-cache on CPU1
Feb 13 20:27:53.976823 kernel: GICv3: CPU1: found redistributor 1 region 0:0x00000000080c0000
Feb 13 20:27:53.976830 kernel: GICv3: CPU1: using allocated LPI pending table @0x0000000040110000
Feb 13 20:27:53.976837 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Feb 13 20:27:53.976844 kernel: CPU1: Booted secondary processor 0x0000000001 [0x413fd0c1]
Feb 13 20:27:53.976851 kernel: Detected PIPT I-cache on CPU2
Feb 13 20:27:53.976858 kernel: GICv3: CPU2: found redistributor 2 region 0:0x00000000080e0000
Feb 13 20:27:53.976871 kernel: GICv3: CPU2: using allocated LPI pending table @0x0000000040120000
Feb 13 20:27:53.976879 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Feb 13 20:27:53.976890 kernel: CPU2: Booted secondary processor 0x0000000002 [0x413fd0c1]
Feb 13 20:27:53.976899 kernel: Detected PIPT I-cache on CPU3
Feb 13 20:27:53.976906 kernel: GICv3: CPU3: found redistributor 3 region 0:0x0000000008100000
Feb 13 20:27:53.976913 kernel: GICv3: CPU3: using allocated LPI pending table @0x0000000040130000
Feb 13 20:27:53.976921 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Feb 13 20:27:53.976928 kernel: CPU3: Booted secondary processor 0x0000000003 [0x413fd0c1]
Feb 13 20:27:53.976935 kernel: smp: Brought up 1 node, 4 CPUs
Feb 13 20:27:53.976943 kernel: SMP: Total of 4 processors activated.
Feb 13 20:27:53.976951 kernel: CPU features: detected: 32-bit EL0 Support
Feb 13 20:27:53.976958 kernel: CPU features: detected: Data cache clean to the PoU not required for I/D coherence
Feb 13 20:27:53.976965 kernel: CPU features: detected: Common not Private translations
Feb 13 20:27:53.976972 kernel: CPU features: detected: CRC32 instructions
Feb 13 20:27:53.976980 kernel: CPU features: detected: Enhanced Virtualization Traps
Feb 13 20:27:53.976987 kernel: CPU features: detected: RCpc load-acquire (LDAPR)
Feb 13 20:27:53.976994 kernel: CPU features: detected: LSE atomic instructions
Feb 13 20:27:53.977002 kernel: CPU features: detected: Privileged Access Never
Feb 13 20:27:53.977010 kernel: CPU features: detected: RAS Extension Support
Feb 13 20:27:53.977017 kernel: CPU features: detected: Speculative Store Bypassing Safe (SSBS)
Feb 13 20:27:53.977024 kernel: CPU: All CPU(s) started at EL1
Feb 13 20:27:53.977032 kernel: alternatives: applying system-wide alternatives
Feb 13 20:27:53.977039 kernel: devtmpfs: initialized
Feb 13 20:27:53.977047 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Feb 13 20:27:53.977054 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear)
Feb 13 20:27:53.977062 kernel: pinctrl core: initialized pinctrl subsystem
Feb 13 20:27:53.977071 kernel: SMBIOS 3.0.0 present.
Feb 13 20:27:53.977079 kernel: DMI: QEMU KVM Virtual Machine, BIOS edk2-20230524-3.fc38 05/24/2023
Feb 13 20:27:53.977086 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Feb 13 20:27:53.977094 kernel: DMA: preallocated 512 KiB GFP_KERNEL pool for atomic allocations
Feb 13 20:27:53.977101 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations
Feb 13 20:27:53.977109 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations
Feb 13 20:27:53.977117 kernel: audit: initializing netlink subsys (disabled)
Feb 13 20:27:53.977128 kernel: audit: type=2000 audit(0.023:1): state=initialized audit_enabled=0 res=1
Feb 13 20:27:53.977137 kernel: thermal_sys: Registered thermal governor 'step_wise'
Feb 13 20:27:53.977150 kernel: cpuidle: using governor menu
Feb 13 20:27:53.977158 kernel: hw-breakpoint: found 6 breakpoint and 4 watchpoint registers.
Feb 13 20:27:53.977165 kernel: ASID allocator initialised with 32768 entries
Feb 13 20:27:53.977172 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Feb 13 20:27:53.977180 kernel: Serial: AMBA PL011 UART driver
Feb 13 20:27:53.977187 kernel: Modules: 2G module region forced by RANDOMIZE_MODULE_REGION_FULL
Feb 13 20:27:53.977194 kernel: Modules: 0 pages in range for non-PLT usage
Feb 13 20:27:53.977201 kernel: Modules: 509040 pages in range for PLT usage
Feb 13 20:27:53.977209 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Feb 13 20:27:53.977217 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 1.00 GiB page
Feb 13 20:27:53.977225 kernel: HugeTLB: registered 32.0 MiB page size, pre-allocated 0 pages
Feb 13 20:27:53.977232 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 32.0 MiB page
Feb 13 20:27:53.977239 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Feb 13 20:27:53.977246 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 2.00 MiB page
Feb 13 20:27:53.977253 kernel: HugeTLB: registered 64.0 KiB page size, pre-allocated 0 pages
Feb 13 20:27:53.977261 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 64.0 KiB page
Feb 13 20:27:53.977268 kernel: ACPI: Added _OSI(Module Device)
Feb 13 20:27:53.977275 kernel: ACPI: Added _OSI(Processor Device)
Feb 13 20:27:53.977283 kernel: ACPI: Added _OSI(3.0 _SCP Extensions)
Feb 13 20:27:53.977291 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Feb 13 20:27:53.977298 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Feb 13 20:27:53.977305 kernel: ACPI: Interpreter enabled
Feb 13 20:27:53.977312 kernel: ACPI: Using GIC for interrupt routing
Feb 13 20:27:53.977320 kernel: ACPI: MCFG table detected, 1 entries
Feb 13 20:27:53.977327 kernel: ARMH0011:00: ttyAMA0 at MMIO 0x9000000 (irq = 12, base_baud = 0) is a SBSA
Feb 13 20:27:53.977335 kernel: printk: console [ttyAMA0] enabled
Feb 13 20:27:53.977345 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Feb 13 20:27:53.977484 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Feb 13 20:27:53.977561 kernel: acpi PNP0A08:00: _OSC: platform does not support [LTR]
Feb 13 20:27:53.977641 kernel: acpi PNP0A08:00: _OSC: OS now controls [PCIeHotplug PME AER PCIeCapability]
Feb 13 20:27:53.977711 kernel: acpi PNP0A08:00: ECAM area [mem 0x4010000000-0x401fffffff] reserved by PNP0C02:00
Feb 13 20:27:53.977780 kernel: acpi PNP0A08:00: ECAM at [mem 0x4010000000-0x401fffffff] for [bus 00-ff]
Feb 13 20:27:53.977790 kernel: ACPI: Remapped I/O 0x000000003eff0000 to [io 0x0000-0xffff window]
Feb 13 20:27:53.977797 kernel: PCI host bridge to bus 0000:00
Feb 13 20:27:53.977881 kernel: pci_bus 0000:00: root bus resource [mem 0x10000000-0x3efeffff window]
Feb 13 20:27:53.977945 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0xffff window]
Feb 13 20:27:53.978007 kernel: pci_bus 0000:00: root bus resource [mem 0x8000000000-0xffffffffff window]
Feb 13 20:27:53.978067 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Feb 13 20:27:53.978149 kernel: pci 0000:00:00.0: [1b36:0008] type 00 class 0x060000
Feb 13 20:27:53.978230 kernel: pci 0000:00:01.0: [1af4:1005] type 00 class 0x00ff00
Feb 13 20:27:53.978307 kernel: pci 0000:00:01.0: reg 0x10: [io 0x0000-0x001f]
Feb 13 20:27:53.978377 kernel: pci 0000:00:01.0: reg 0x14: [mem 0x10000000-0x10000fff]
Feb 13 20:27:53.978445 kernel: pci 0000:00:01.0: reg 0x20: [mem 0x8000000000-0x8000003fff 64bit pref]
Feb 13 20:27:53.978514 kernel: pci 0000:00:01.0: BAR 4: assigned [mem 0x8000000000-0x8000003fff 64bit pref]
Feb 13 20:27:53.978583 kernel: pci 0000:00:01.0: BAR 1: assigned [mem 0x10000000-0x10000fff]
Feb 13 20:27:53.978663 kernel: pci 0000:00:01.0: BAR 0: assigned [io 0x1000-0x101f]
Feb 13 20:27:53.978726 kernel: pci_bus 0000:00: resource 4 [mem 0x10000000-0x3efeffff window]
Feb 13 20:27:53.978786 kernel: pci_bus 0000:00: resource 5 [io 0x0000-0xffff window]
Feb 13 20:27:53.978850 kernel: pci_bus 0000:00: resource 6 [mem 0x8000000000-0xffffffffff window]
Feb 13 20:27:53.978860 kernel: ACPI: PCI: Interrupt link GSI0 configured for IRQ 35
Feb 13 20:27:53.978873 kernel: ACPI: PCI: Interrupt link GSI1 configured for IRQ 36
Feb 13 20:27:53.978881 kernel: ACPI: PCI: Interrupt link GSI2 configured for IRQ 37
Feb 13 20:27:53.978888 kernel: ACPI: PCI: Interrupt link GSI3 configured for IRQ 38
Feb 13 20:27:53.978896 kernel: iommu: Default domain type: Translated
Feb 13 20:27:53.978903 kernel: iommu: DMA domain TLB invalidation policy: strict mode
Feb 13 20:27:53.978910 kernel: efivars: Registered efivars operations
Feb 13 20:27:53.978920 kernel: vgaarb: loaded
Feb 13 20:27:53.978928 kernel: clocksource: Switched to clocksource arch_sys_counter
Feb 13 20:27:53.978935 kernel: VFS: Disk quotas dquot_6.6.0
Feb 13 20:27:53.978943 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Feb 13 20:27:53.978950 kernel: pnp: PnP ACPI init
Feb 13 20:27:53.979032 kernel: system 00:00: [mem 0x4010000000-0x401fffffff window] could not be reserved
Feb 13 20:27:53.979043 kernel: pnp: PnP ACPI: found 1 devices
Feb 13 20:27:53.979050 kernel: NET: Registered PF_INET protocol family
Feb 13 20:27:53.979060 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Feb 13 20:27:53.979067 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Feb 13 20:27:53.979075 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Feb 13 20:27:53.979082 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Feb 13 20:27:53.979089 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
Feb 13 20:27:53.979097 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Feb 13 20:27:53.979105 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Feb 13 20:27:53.979112 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Feb 13 20:27:53.979119 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Feb 13 20:27:53.979128 kernel: PCI: CLS 0 bytes, default 64
Feb 13 20:27:53.979136 kernel: kvm [1]: HYP mode not available
Feb 13 20:27:53.979143 kernel: Initialise system trusted keyrings
Feb 13 20:27:53.979150 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Feb 13 20:27:53.979158 kernel: Key type asymmetric registered
Feb 13 20:27:53.979165 kernel: Asymmetric key parser 'x509' registered
Feb 13 20:27:53.979172 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250)
Feb 13 20:27:53.979180 kernel: io scheduler mq-deadline registered
Feb 13 20:27:53.979187 kernel: io scheduler kyber registered
Feb 13 20:27:53.979196 kernel: io scheduler bfq registered
Feb 13 20:27:53.979203 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0C:00/input/input0
Feb 13 20:27:53.979210 kernel: ACPI: button: Power Button [PWRB]
Feb 13 20:27:53.979232 kernel: ACPI: \_SB_.PCI0.GSI1: Enabled at IRQ 36
Feb 13 20:27:53.979299 kernel: virtio-pci 0000:00:01.0: enabling device (0005 -> 0007)
Feb 13 20:27:53.979309 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Feb 13 20:27:53.979317 kernel: thunder_xcv, ver 1.0
Feb 13 20:27:53.979324 kernel: thunder_bgx, ver 1.0
Feb 13 20:27:53.979331 kernel: nicpf, ver 1.0
Feb 13 20:27:53.979340 kernel: nicvf, ver 1.0
Feb 13 20:27:53.979415 kernel: rtc-efi rtc-efi.0: registered as rtc0
Feb 13 20:27:53.979478 kernel: rtc-efi rtc-efi.0: setting system clock to 2025-02-13T20:27:53 UTC (1739478473)
Feb 13 20:27:53.979488 kernel: hid: raw HID events driver (C) Jiri Kosina
Feb 13 20:27:53.979495 kernel: hw perfevents: enabled with armv8_pmuv3_0 PMU driver, 7 counters available
Feb 13 20:27:53.979503 kernel: watchdog: Delayed init of the lockup detector failed: -19
Feb 13 20:27:53.979510 kernel: watchdog: Hard watchdog permanently disabled
Feb 13 20:27:53.979517 kernel: NET: Registered PF_INET6 protocol family
Feb 13 20:27:53.979527 kernel: Segment Routing with IPv6
Feb 13 20:27:53.979534 kernel: In-situ OAM (IOAM) with IPv6
Feb 13 20:27:53.979541 kernel: NET: Registered PF_PACKET protocol family
Feb 13 20:27:53.979549 kernel: Key type dns_resolver registered
Feb 13 20:27:53.979556 kernel: registered taskstats version 1
Feb 13 20:27:53.979563 kernel: Loading compiled-in X.509 certificates
Feb 13 20:27:53.979570 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.74-flatcar: 8bd805622262697b24b0fa7c407ae82c4289ceec'
Feb 13 20:27:53.979577 kernel: Key type .fscrypt registered
Feb 13 20:27:53.979585 kernel: Key type fscrypt-provisioning registered
Feb 13 20:27:53.979593 kernel: ima: No TPM chip found, activating TPM-bypass!
Feb 13 20:27:53.979601 kernel: ima: Allocated hash algorithm: sha1
Feb 13 20:27:53.979608 kernel: ima: No architecture policies found
Feb 13 20:27:53.979615 kernel: alg: No test for fips(ansi_cprng) (fips_ansi_cprng)
Feb 13 20:27:53.979693 kernel: clk: Disabling unused clocks
Feb 13 20:27:53.979702 kernel: Freeing unused kernel memory: 39360K
Feb 13 20:27:53.979709 kernel: Run /init as init process
Feb 13 20:27:53.979716 kernel: with arguments:
Feb 13 20:27:53.979723 kernel: /init
Feb 13 20:27:53.979732 kernel: with environment:
Feb 13 20:27:53.979739 kernel: HOME=/
Feb 13 20:27:53.979747 kernel: TERM=linux
Feb 13 20:27:53.979754 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
Feb 13 20:27:53.979763 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Feb 13 20:27:53.979772 systemd[1]: Detected virtualization kvm.
Feb 13 20:27:53.979780 systemd[1]: Detected architecture arm64.
Feb 13 20:27:53.979788 systemd[1]: Running in initrd.
Feb 13 20:27:53.979797 systemd[1]: No hostname configured, using default hostname.
Feb 13 20:27:53.979805 systemd[1]: Hostname set to <localhost>.
Feb 13 20:27:53.979813 systemd[1]: Initializing machine ID from VM UUID.
Feb 13 20:27:53.979821 systemd[1]: Queued start job for default target initrd.target.
Feb 13 20:27:53.979828 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Feb 13 20:27:53.979836 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Feb 13 20:27:53.979844 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Feb 13 20:27:53.979852 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Feb 13 20:27:53.979862 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Feb 13 20:27:53.979876 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Feb 13 20:27:53.979886 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Feb 13 20:27:53.979894 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Feb 13 20:27:53.979902 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Feb 13 20:27:53.979910 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Feb 13 20:27:53.979920 systemd[1]: Reached target paths.target - Path Units.
Feb 13 20:27:53.979928 systemd[1]: Reached target slices.target - Slice Units.
Feb 13 20:27:53.979936 systemd[1]: Reached target swap.target - Swaps.
Feb 13 20:27:53.979944 systemd[1]: Reached target timers.target - Timer Units.
Feb 13 20:27:53.979951 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Feb 13 20:27:53.979959 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Feb 13 20:27:53.979967 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Feb 13 20:27:53.979975 systemd[1]: Listening on systemd-journald.socket - Journal Socket.
Feb 13 20:27:53.979983 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Feb 13 20:27:53.979992 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Feb 13 20:27:53.980000 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Feb 13 20:27:53.980008 systemd[1]: Reached target sockets.target - Socket Units.
Feb 13 20:27:53.980016 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Feb 13 20:27:53.980024 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Feb 13 20:27:53.980032 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Feb 13 20:27:53.980039 systemd[1]: Starting systemd-fsck-usr.service...
Feb 13 20:27:53.980047 systemd[1]: Starting systemd-journald.service - Journal Service...
Feb 13 20:27:53.980055 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Feb 13 20:27:53.980064 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Feb 13 20:27:53.980072 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Feb 13 20:27:53.980080 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Feb 13 20:27:53.980087 systemd[1]: Finished systemd-fsck-usr.service.
Feb 13 20:27:53.980114 systemd-journald[238]: Collecting audit messages is disabled.
Feb 13 20:27:53.980135 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Feb 13 20:27:53.980143 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Feb 13 20:27:53.980151 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Feb 13 20:27:53.980161 systemd-journald[238]: Journal started
Feb 13 20:27:53.980180 systemd-journald[238]: Runtime Journal (/run/log/journal/d4a136a0c76c441683bb8e47e88a6ab2) is 5.9M, max 47.3M, 41.4M free.
Feb 13 20:27:53.965398 systemd-modules-load[239]: Inserted module 'overlay'
Feb 13 20:27:53.982428 systemd[1]: Started systemd-journald.service - Journal Service.
Feb 13 20:27:53.983277 kernel: Bridge firewalling registered
Feb 13 20:27:53.983785 systemd-modules-load[239]: Inserted module 'br_netfilter'
Feb 13 20:27:53.984136 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Feb 13 20:27:53.985934 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Feb 13 20:27:53.995751 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Feb 13 20:27:53.997412 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Feb 13 20:27:53.999446 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Feb 13 20:27:54.002491 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Feb 13 20:27:54.011840 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Feb 13 20:27:54.013244 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Feb 13 20:27:54.016346 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Feb 13 20:27:54.027760 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Feb 13 20:27:54.029003 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Feb 13 20:27:54.031728 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Feb 13 20:27:54.044456 dracut-cmdline[282]: dracut-dracut-053
Feb 13 20:27:54.046909 dracut-cmdline[282]: Using kernel command line parameters: rd.driver.pre=btrfs BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected acpi=force verity.usrhash=c15c751c06cfb933aa98417326b93d899c08a83ce060a940cd01082629c201a7
Feb 13 20:27:54.055258 systemd-resolved[278]: Positive Trust Anchors:
Feb 13 20:27:54.055274 systemd-resolved[278]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Feb 13 20:27:54.055306 systemd-resolved[278]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Feb 13 20:27:54.060074 systemd-resolved[278]: Defaulting to hostname 'linux'.
Feb 13 20:27:54.061039 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Feb 13 20:27:54.064830 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Feb 13 20:27:54.116644 kernel: SCSI subsystem initialized
Feb 13 20:27:54.120640 kernel: Loading iSCSI transport class v2.0-870.
Feb 13 20:27:54.129660 kernel: iscsi: registered transport (tcp)
Feb 13 20:27:54.144037 kernel: iscsi: registered transport (qla4xxx)
Feb 13 20:27:54.144090 kernel: QLogic iSCSI HBA Driver
Feb 13 20:27:54.187646 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Feb 13 20:27:54.203810 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Feb 13 20:27:54.221805 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Feb 13 20:27:54.222882 kernel: device-mapper: uevent: version 1.0.3
Feb 13 20:27:54.222896 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com
Feb 13 20:27:54.270656 kernel: raid6: neonx8 gen() 15777 MB/s
Feb 13 20:27:54.287653 kernel: raid6: neonx4 gen() 15638 MB/s
Feb 13 20:27:54.304643 kernel: raid6: neonx2 gen() 13226 MB/s
Feb 13 20:27:54.321644 kernel: raid6: neonx1 gen() 10475 MB/s
Feb 13 20:27:54.338642 kernel: raid6: int64x8 gen() 6950 MB/s
Feb 13 20:27:54.355644 kernel: raid6: int64x4 gen() 7308 MB/s
Feb 13 20:27:54.372647 kernel: raid6: int64x2 gen() 6117 MB/s
Feb 13 20:27:54.389812 kernel: raid6: int64x1 gen() 5044 MB/s
Feb 13 20:27:54.389833 kernel: raid6: using algorithm neonx8 gen() 15777 MB/s
Feb 13 20:27:54.407737 kernel: raid6: .... xor() 11915 MB/s, rmw enabled
Feb 13 20:27:54.407752 kernel: raid6: using neon recovery algorithm
Feb 13 20:27:54.412646 kernel: xor: measuring software checksum speed
Feb 13 20:27:54.413837 kernel: 8regs : 17373 MB/sec
Feb 13 20:27:54.413849 kernel: 32regs : 19669 MB/sec
Feb 13 20:27:54.415077 kernel: arm64_neon : 25688 MB/sec
Feb 13 20:27:54.415091 kernel: xor: using function: arm64_neon (25688 MB/sec)
Feb 13 20:27:54.465656 kernel: Btrfs loaded, zoned=no, fsverity=no
Feb 13 20:27:54.477096 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Feb 13 20:27:54.492837 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Feb 13 20:27:54.504832 systemd-udevd[464]: Using default interface naming scheme 'v255'.
Feb 13 20:27:54.508730 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Feb 13 20:27:54.515776 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Feb 13 20:27:54.527745 dracut-pre-trigger[473]: rd.md=0: removing MD RAID activation
Feb 13 20:27:54.557292 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Feb 13 20:27:54.564797 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Feb 13 20:27:54.603705 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Feb 13 20:27:54.614835 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Feb 13 20:27:54.629171 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Feb 13 20:27:54.631137 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Feb 13 20:27:54.633422 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Feb 13 20:27:54.634561 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Feb 13 20:27:54.643832 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Feb 13 20:27:54.655402 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Feb 13 20:27:54.662750 kernel: virtio_blk virtio1: 1/0/0 default/read/poll queues
Feb 13 20:27:54.669523 kernel: virtio_blk virtio1: [vda] 19775488 512-byte logical blocks (10.1 GB/9.43 GiB)
Feb 13 20:27:54.669620 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Feb 13 20:27:54.669651 kernel: GPT:9289727 != 19775487
Feb 13 20:27:54.669661 kernel: GPT:Alternate GPT header not at the end of the disk.
Feb 13 20:27:54.669676 kernel: GPT:9289727 != 19775487
Feb 13 20:27:54.669685 kernel: GPT: Use GNU Parted to correct GPT errors.
Feb 13 20:27:54.669694 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Feb 13 20:27:54.664562 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Feb 13 20:27:54.664694 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Feb 13 20:27:54.666102 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Feb 13 20:27:54.667290 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Feb 13 20:27:54.667469 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Feb 13 20:27:54.672149 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Feb 13 20:27:54.679826 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Feb 13 20:27:54.691606 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Feb 13 20:27:54.696718 kernel: BTRFS: device fsid 4bb2b262-8ef2-48e3-80f4-24f9d7a85bf6 devid 1 transid 40 /dev/vda3 scanned by (udev-worker) (525)
Feb 13 20:27:54.696740 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/vda6 scanned by (udev-worker) (514)
Feb 13 20:27:54.701297 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT.
Feb 13 20:27:54.708352 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM.
Feb 13 20:27:54.712259 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A.
Feb 13 20:27:54.713451 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132.
Feb 13 20:27:54.718969 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Feb 13 20:27:54.733812 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Feb 13 20:27:54.735532 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Feb 13 20:27:54.740647 disk-uuid[552]: Primary Header is updated.
Feb 13 20:27:54.740647 disk-uuid[552]: Secondary Entries is updated.
Feb 13 20:27:54.740647 disk-uuid[552]: Secondary Header is updated.
Feb 13 20:27:54.752659 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Feb 13 20:27:54.753588 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Feb 13 20:27:54.757652 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Feb 13 20:27:54.760667 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Feb 13 20:27:55.761647 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Feb 13 20:27:55.762874 disk-uuid[553]: The operation has completed successfully.
Feb 13 20:27:55.782417 systemd[1]: disk-uuid.service: Deactivated successfully.
Feb 13 20:27:55.782515 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Feb 13 20:27:55.806791 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
Feb 13 20:27:55.809828 sh[574]: Success
Feb 13 20:27:55.830079 kernel: device-mapper: verity: sha256 using implementation "sha256-ce"
Feb 13 20:27:55.860073 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
Feb 13 20:27:55.872148 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
Feb 13 20:27:55.876084 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
Feb 13 20:27:55.884122 kernel: BTRFS info (device dm-0): first mount of filesystem 4bb2b262-8ef2-48e3-80f4-24f9d7a85bf6
Feb 13 20:27:55.884157 kernel: BTRFS info (device dm-0): using crc32c (crc32c-generic) checksum algorithm
Feb 13 20:27:55.884168 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead
Feb 13 20:27:55.885974 kernel: BTRFS info (device dm-0): disabling log replay at mount time
Feb 13 20:27:55.885997 kernel: BTRFS info (device dm-0): using free space tree
Feb 13 20:27:55.890594 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
Feb 13 20:27:55.891694 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Feb 13 20:27:55.901775 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Feb 13 20:27:55.903363 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Feb 13 20:27:55.910826 kernel: BTRFS info (device vda6): first mount of filesystem 896fb6d3-4143-43a6-a44b-ca1ce10817e1
Feb 13 20:27:55.910874 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm
Feb 13 20:27:55.910885 kernel: BTRFS info (device vda6): using free space tree
Feb 13 20:27:55.913649 kernel: BTRFS info (device vda6): auto enabling async discard
Feb 13 20:27:55.921300 systemd[1]: mnt-oem.mount: Deactivated successfully.
Feb 13 20:27:55.923648 kernel: BTRFS info (device vda6): last unmount of filesystem 896fb6d3-4143-43a6-a44b-ca1ce10817e1
Feb 13 20:27:55.929551 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Feb 13 20:27:55.939817 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Feb 13 20:27:56.004269 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Feb 13 20:27:56.014799 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Feb 13 20:27:56.045047 ignition[665]: Ignition 2.19.0
Feb 13 20:27:56.045056 ignition[665]: Stage: fetch-offline
Feb 13 20:27:56.045520 systemd-networkd[766]: lo: Link UP
Feb 13 20:27:56.045096 ignition[665]: no configs at "/usr/lib/ignition/base.d"
Feb 13 20:27:56.045524 systemd-networkd[766]: lo: Gained carrier
Feb 13 20:27:56.045105 ignition[665]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Feb 13 20:27:56.046603 systemd-networkd[766]: Enumeration completed
Feb 13 20:27:56.045290 ignition[665]: parsed url from cmdline: ""
Feb 13 20:27:56.046819 systemd[1]: Started systemd-networkd.service - Network Configuration.
Feb 13 20:27:56.045293 ignition[665]: no config URL provided
Feb 13 20:27:56.047134 systemd-networkd[766]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Feb 13 20:27:56.045298 ignition[665]: reading system config file "/usr/lib/ignition/user.ign"
Feb 13 20:27:56.047138 systemd-networkd[766]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Feb 13 20:27:56.045305 ignition[665]: no config at "/usr/lib/ignition/user.ign"
Feb 13 20:27:56.048303 systemd-networkd[766]: eth0: Link UP
Feb 13 20:27:56.045327 ignition[665]: op(1): [started] loading QEMU firmware config module
Feb 13 20:27:56.048307 systemd-networkd[766]: eth0: Gained carrier
Feb 13 20:27:56.045336 ignition[665]: op(1): executing: "modprobe" "qemu_fw_cfg"
Feb 13 20:27:56.048314 systemd-networkd[766]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Feb 13 20:27:56.065629 ignition[665]: op(1): [finished] loading QEMU firmware config module
Feb 13 20:27:56.048841 systemd[1]: Reached target network.target - Network.
Feb 13 20:27:56.072672 systemd-networkd[766]: eth0: DHCPv4 address 10.0.0.9/16, gateway 10.0.0.1 acquired from 10.0.0.1
Feb 13 20:27:56.091781 ignition[665]: parsing config with SHA512: bb0e9989f597fe3835d364eaf3cb3177fd90a837b3ba4924f804882ec398cd972733d4409e675d468f125cd1ad35aa7007cb260c042a4b499a7d9ae6d08b8475
Feb 13 20:27:56.096154 unknown[665]: fetched base config from "system"
Feb 13 20:27:56.096166 unknown[665]: fetched user config from "qemu"
Feb 13 20:27:56.096614 ignition[665]: fetch-offline: fetch-offline passed
Feb 13 20:27:56.098362 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Feb 13 20:27:56.096699 ignition[665]: Ignition finished successfully
Feb 13 20:27:56.099764 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json).
Feb 13 20:27:56.109818 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Feb 13 20:27:56.121090 ignition[772]: Ignition 2.19.0
Feb 13 20:27:56.121100 ignition[772]: Stage: kargs
Feb 13 20:27:56.121269 ignition[772]: no configs at "/usr/lib/ignition/base.d"
Feb 13 20:27:56.121279 ignition[772]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Feb 13 20:27:56.122202 ignition[772]: kargs: kargs passed
Feb 13 20:27:56.125924 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Feb 13 20:27:56.122248 ignition[772]: Ignition finished successfully
Feb 13 20:27:56.143816 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Feb 13 20:27:56.153174 ignition[780]: Ignition 2.19.0
Feb 13 20:27:56.153189 ignition[780]: Stage: disks
Feb 13 20:27:56.153353 ignition[780]: no configs at "/usr/lib/ignition/base.d"
Feb 13 20:27:56.156224 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Feb 13 20:27:56.153362 ignition[780]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Feb 13 20:27:56.157459 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Feb 13 20:27:56.154255 ignition[780]: disks: disks passed
Feb 13 20:27:56.159177 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Feb 13 20:27:56.154298 ignition[780]: Ignition finished successfully
Feb 13 20:27:56.161295 systemd[1]: Reached target local-fs.target - Local File Systems.
Feb 13 20:27:56.163124 systemd[1]: Reached target sysinit.target - System Initialization.
Feb 13 20:27:56.164588 systemd[1]: Reached target basic.target - Basic System.
Feb 13 20:27:56.178764 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Feb 13 20:27:56.187277 systemd-resolved[278]: Detected conflict on linux IN A 10.0.0.9
Feb 13 20:27:56.187289 systemd-resolved[278]: Hostname conflict, changing published hostname from 'linux' to 'linux7'.
Feb 13 20:27:56.190250 systemd-fsck[790]: ROOT: clean, 14/553520 files, 52654/553472 blocks
Feb 13 20:27:56.194899 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Feb 13 20:27:56.197139 systemd[1]: Mounting sysroot.mount - /sysroot...
Feb 13 20:27:56.240643 kernel: EXT4-fs (vda9): mounted filesystem 9957d679-c6c4-49f4-b1b2-c3c1f3ba5699 r/w with ordered data mode. Quota mode: none.
Feb 13 20:27:56.240602 systemd[1]: Mounted sysroot.mount - /sysroot.
Feb 13 20:27:56.241922 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Feb 13 20:27:56.256714 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Feb 13 20:27:56.258997 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Feb 13 20:27:56.260002 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met.
Feb 13 20:27:56.260041 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Feb 13 20:27:56.260064 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Feb 13 20:27:56.266316 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Feb 13 20:27:56.268430 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Feb 13 20:27:56.274373 kernel: BTRFS: device label OEM devid 1 transid 13 /dev/vda6 scanned by mount (798)
Feb 13 20:27:56.274416 kernel: BTRFS info (device vda6): first mount of filesystem 896fb6d3-4143-43a6-a44b-ca1ce10817e1
Feb 13 20:27:56.274428 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm
Feb 13 20:27:56.274444 kernel: BTRFS info (device vda6): using free space tree
Feb 13 20:27:56.275643 kernel: BTRFS info (device vda6): auto enabling async discard
Feb 13 20:27:56.286567 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Feb 13 20:27:56.326519 initrd-setup-root[822]: cut: /sysroot/etc/passwd: No such file or directory
Feb 13 20:27:56.330651 initrd-setup-root[829]: cut: /sysroot/etc/group: No such file or directory
Feb 13 20:27:56.334640 initrd-setup-root[836]: cut: /sysroot/etc/shadow: No such file or directory
Feb 13 20:27:56.338319 initrd-setup-root[843]: cut: /sysroot/etc/gshadow: No such file or directory
Feb 13 20:27:56.402799 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Feb 13 20:27:56.414749 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Feb 13 20:27:56.416771 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Feb 13 20:27:56.420639 kernel: BTRFS info (device vda6): last unmount of filesystem 896fb6d3-4143-43a6-a44b-ca1ce10817e1
Feb 13 20:27:56.436812 ignition[911]: INFO : Ignition 2.19.0
Feb 13 20:27:56.436812 ignition[911]: INFO : Stage: mount
Feb 13 20:27:56.438306 ignition[911]: INFO : no configs at "/usr/lib/ignition/base.d"
Feb 13 20:27:56.438306 ignition[911]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Feb 13 20:27:56.438306 ignition[911]: INFO : mount: mount passed
Feb 13 20:27:56.438306 ignition[911]: INFO : Ignition finished successfully
Feb 13 20:27:56.437684 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Feb 13 20:27:56.439400 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Feb 13 20:27:56.445710 systemd[1]: Starting ignition-files.service - Ignition (files)...
Feb 13 20:27:56.883035 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Feb 13 20:27:56.893796 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Feb 13 20:27:56.899692 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/vda6 scanned by mount (926)
Feb 13 20:27:56.899721 kernel: BTRFS info (device vda6): first mount of filesystem 896fb6d3-4143-43a6-a44b-ca1ce10817e1
Feb 13 20:27:56.902194 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm
Feb 13 20:27:56.902248 kernel: BTRFS info (device vda6): using free space tree
Feb 13 20:27:56.904648 kernel: BTRFS info (device vda6): auto enabling async discard
Feb 13 20:27:56.905872 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Feb 13 20:27:56.931098 ignition[943]: INFO : Ignition 2.19.0
Feb 13 20:27:56.931098 ignition[943]: INFO : Stage: files
Feb 13 20:27:56.932664 ignition[943]: INFO : no configs at "/usr/lib/ignition/base.d"
Feb 13 20:27:56.932664 ignition[943]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Feb 13 20:27:56.934999 ignition[943]: DEBUG : files: compiled without relabeling support, skipping
Feb 13 20:27:56.934999 ignition[943]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Feb 13 20:27:56.934999 ignition[943]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Feb 13 20:27:56.938896 ignition[943]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Feb 13 20:27:56.938896 ignition[943]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Feb 13 20:27:56.938896 ignition[943]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Feb 13 20:27:56.938896 ignition[943]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/etc/flatcar-cgroupv1"
Feb 13 20:27:56.938896 ignition[943]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/etc/flatcar-cgroupv1"
Feb 13 20:27:56.938896 ignition[943]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-arm64.tar.gz"
Feb 13 20:27:56.938896 ignition[943]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://get.helm.sh/helm-v3.13.2-linux-arm64.tar.gz: attempt #1
Feb 13 20:27:56.936672 unknown[943]: wrote ssh authorized keys file for user: core
Feb 13 20:27:56.981967 ignition[943]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK
Feb 13 20:27:57.143564 ignition[943]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-arm64.tar.gz"
Feb 13 20:27:57.143564 ignition[943]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/install.sh"
Feb 13 20:27:57.147325 ignition[943]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh"
Feb 13 20:27:57.147325 ignition[943]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nginx.yaml"
Feb 13 20:27:57.147325 ignition[943]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml"
Feb 13 20:27:57.147325 ignition[943]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Feb 13 20:27:57.147325 ignition[943]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Feb 13 20:27:57.147325 ignition[943]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Feb 13 20:27:57.147325 ignition[943]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Feb 13 20:27:57.147325 ignition[943]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf"
Feb 13 20:27:57.147325 ignition[943]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Feb 13 20:27:57.147325 ignition[943]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.30.1-arm64.raw"
Feb 13 20:27:57.147325 ignition[943]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.30.1-arm64.raw"
Feb 13 20:27:57.147325 ignition[943]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.30.1-arm64.raw"
Feb 13 20:27:57.147325 ignition[943]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.30.1-arm64.raw: attempt #1
Feb 13 20:27:57.350772 ignition[943]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET result: OK
Feb 13 20:27:57.557020 ignition[943]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.30.1-arm64.raw"
Feb 13 20:27:57.557020 ignition[943]: INFO : files: op(c): [started] processing unit "containerd.service"
Feb 13 20:27:57.560638 ignition[943]: INFO : files: op(c): op(d): [started] writing systemd drop-in "10-use-cgroupfs.conf" at "/sysroot/etc/systemd/system/containerd.service.d/10-use-cgroupfs.conf"
Feb 13 20:27:57.560638 ignition[943]: INFO : files: op(c): op(d): [finished] writing systemd drop-in "10-use-cgroupfs.conf" at "/sysroot/etc/systemd/system/containerd.service.d/10-use-cgroupfs.conf"
Feb 13 20:27:57.560638 ignition[943]: INFO : files: op(c): [finished] processing unit "containerd.service"
Feb 13 20:27:57.560638 ignition[943]: INFO : files: op(e): [started] processing unit "prepare-helm.service"
Feb 13 20:27:57.560638 ignition[943]: INFO : files: op(e): op(f): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Feb 13 20:27:57.560638 ignition[943]: INFO : files: op(e): op(f): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Feb 13 20:27:57.560638 ignition[943]: INFO : files: op(e): [finished] processing unit "prepare-helm.service"
Feb 13 20:27:57.560638 ignition[943]: INFO : files: op(10): [started] processing unit "coreos-metadata.service"
Feb 13 20:27:57.560638 ignition[943]: INFO : files: op(10): op(11): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Feb 13 20:27:57.560638 ignition[943]: INFO : files: op(10): op(11): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Feb 13 20:27:57.560638 ignition[943]: INFO : files: op(10): [finished] processing unit "coreos-metadata.service"
Feb 13 20:27:57.560638 ignition[943]: INFO : files: op(12): [started] setting preset to disabled for "coreos-metadata.service"
Feb 13 20:27:57.581606 ignition[943]: INFO : files: op(12): op(13): [started] removing enablement symlink(s) for "coreos-metadata.service"
Feb 13 20:27:57.585167 ignition[943]: INFO : files: op(12): op(13): [finished] removing enablement symlink(s) for "coreos-metadata.service"
Feb 13 20:27:57.587872 ignition[943]: INFO : files: op(12): [finished] setting preset to disabled for "coreos-metadata.service"
Feb 13 20:27:57.587872 ignition[943]: INFO : files: op(14): [started] setting preset to enabled for "prepare-helm.service"
Feb 13 20:27:57.587872 ignition[943]: INFO : files: op(14): [finished] setting preset to enabled for "prepare-helm.service"
Feb 13 20:27:57.587872 ignition[943]: INFO : files: createResultFile: createFiles: op(15): [started] writing file "/sysroot/etc/.ignition-result.json"
Feb 13 20:27:57.587872 ignition[943]: INFO : files: createResultFile: createFiles: op(15): [finished] writing file "/sysroot/etc/.ignition-result.json"
Feb 13 20:27:57.587872 ignition[943]: INFO : files: files passed
Feb 13 20:27:57.587872 ignition[943]: INFO : Ignition finished successfully
Feb 13 20:27:57.588410 systemd[1]: Finished ignition-files.service - Ignition (files).
Feb 13 20:27:57.597753 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Feb 13 20:27:57.599913 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Feb 13 20:27:57.601348 systemd[1]: ignition-quench.service: Deactivated successfully.
Feb 13 20:27:57.601427 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Feb 13 20:27:57.607942 initrd-setup-root-after-ignition[971]: grep: /sysroot/oem/oem-release: No such file or directory
Feb 13 20:27:57.611208 initrd-setup-root-after-ignition[973]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Feb 13 20:27:57.611208 initrd-setup-root-after-ignition[973]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Feb 13 20:27:57.614363 initrd-setup-root-after-ignition[977]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Feb 13 20:27:57.615764 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Feb 13 20:27:57.617289 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Feb 13 20:27:57.631829 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Feb 13 20:27:57.649024 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Feb 13 20:27:57.649123 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Feb 13 20:27:57.651399 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Feb 13 20:27:57.652500 systemd[1]: Reached target initrd.target - Initrd Default Target.
Feb 13 20:27:57.654702 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Feb 13 20:27:57.655400 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Feb 13 20:27:57.671465 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Feb 13 20:27:57.673906 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Feb 13 20:27:57.684441 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Feb 13 20:27:57.685724 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Feb 13 20:27:57.687868 systemd[1]: Stopped target timers.target - Timer Units.
Feb 13 20:27:57.689732 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Feb 13 20:27:57.689864 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Feb 13 20:27:57.692398 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Feb 13 20:27:57.693589 systemd[1]: Stopped target basic.target - Basic System.
Feb 13 20:27:57.695580 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Feb 13 20:27:57.697619 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Feb 13 20:27:57.699569 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Feb 13 20:27:57.701748 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Feb 13 20:27:57.703773 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Feb 13 20:27:57.705931 systemd[1]: Stopped target sysinit.target - System Initialization. Feb 13 20:27:57.707793 systemd[1]: Stopped target local-fs.target - Local File Systems. Feb 13 20:27:57.709886 systemd[1]: Stopped target swap.target - Swaps. Feb 13 20:27:57.711524 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Feb 13 20:27:57.711677 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Feb 13 20:27:57.714132 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Feb 13 20:27:57.715387 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Feb 13 20:27:57.717400 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Feb 13 20:27:57.720702 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Feb 13 20:27:57.722906 systemd[1]: dracut-initqueue.service: Deactivated successfully. Feb 13 20:27:57.723044 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Feb 13 20:27:57.726136 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Feb 13 20:27:57.726258 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Feb 13 20:27:57.728427 systemd[1]: Stopped target paths.target - Path Units. Feb 13 20:27:57.730200 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Feb 13 20:27:57.733686 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Feb 13 20:27:57.735029 systemd[1]: Stopped target slices.target - Slice Units. Feb 13 20:27:57.737277 systemd[1]: Stopped target sockets.target - Socket Units. Feb 13 20:27:57.738888 systemd[1]: iscsid.socket: Deactivated successfully. Feb 13 20:27:57.738977 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Feb 13 20:27:57.740504 systemd[1]: iscsiuio.socket: Deactivated successfully. Feb 13 20:27:57.740586 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Feb 13 20:27:57.742306 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Feb 13 20:27:57.742413 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Feb 13 20:27:57.744248 systemd[1]: ignition-files.service: Deactivated successfully. Feb 13 20:27:57.744345 systemd[1]: Stopped ignition-files.service - Ignition (files). Feb 13 20:27:57.752778 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Feb 13 20:27:57.753738 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Feb 13 20:27:57.753874 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Feb 13 20:27:57.757113 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Feb 13 20:27:57.758785 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Feb 13 20:27:57.758928 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Feb 13 20:27:57.760937 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. 
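This teardown (stopping targets, path, and socket units before pivoting out of the initrd) can be replayed from the journal on any booted machine; both invocations below are standard journalctl usage, with the time window taken from this log:

    # Only the cleanup service's own messages, with microsecond timestamps:
    journalctl -b -u initrd-cleanup.service -o short-precise
    # Everything PID 1 logged between ignition-files and switch-root:
    journalctl -b _PID=1 --since "20:27:57" --until "20:27:58"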
Feb 13 20:27:57.765350 ignition[998]: INFO : Ignition 2.19.0 Feb 13 20:27:57.765350 ignition[998]: INFO : Stage: umount Feb 13 20:27:57.761245 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Feb 13 20:27:57.768608 ignition[998]: INFO : no configs at "/usr/lib/ignition/base.d" Feb 13 20:27:57.768608 ignition[998]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Feb 13 20:27:57.768608 ignition[998]: INFO : umount: umount passed Feb 13 20:27:57.768608 ignition[998]: INFO : Ignition finished successfully Feb 13 20:27:57.768046 systemd[1]: initrd-cleanup.service: Deactivated successfully. Feb 13 20:27:57.769665 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Feb 13 20:27:57.771820 systemd[1]: sysroot-boot.mount: Deactivated successfully. Feb 13 20:27:57.772295 systemd[1]: ignition-mount.service: Deactivated successfully. Feb 13 20:27:57.772388 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Feb 13 20:27:57.774749 systemd[1]: Stopped target network.target - Network. Feb 13 20:27:57.780125 systemd[1]: ignition-disks.service: Deactivated successfully. Feb 13 20:27:57.780202 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Feb 13 20:27:57.782527 systemd[1]: ignition-kargs.service: Deactivated successfully. Feb 13 20:27:57.782579 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Feb 13 20:27:57.785326 systemd[1]: ignition-setup.service: Deactivated successfully. Feb 13 20:27:57.785368 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Feb 13 20:27:57.787286 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Feb 13 20:27:57.787330 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Feb 13 20:27:57.789277 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Feb 13 20:27:57.791018 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Feb 13 20:27:57.795970 systemd-networkd[766]: eth0: DHCPv6 lease lost Feb 13 20:27:57.798511 systemd[1]: systemd-resolved.service: Deactivated successfully. Feb 13 20:27:57.798637 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Feb 13 20:27:57.800892 systemd[1]: systemd-networkd.service: Deactivated successfully. Feb 13 20:27:57.801021 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Feb 13 20:27:57.802911 systemd[1]: systemd-networkd.socket: Deactivated successfully. Feb 13 20:27:57.802965 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Feb 13 20:27:57.814746 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Feb 13 20:27:57.815710 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Feb 13 20:27:57.815772 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Feb 13 20:27:57.817860 systemd[1]: systemd-sysctl.service: Deactivated successfully. Feb 13 20:27:57.817905 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Feb 13 20:27:57.819945 systemd[1]: systemd-modules-load.service: Deactivated successfully. Feb 13 20:27:57.819992 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Feb 13 20:27:57.822356 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Feb 13 20:27:57.822402 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. 
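The two "no configs" lines show where Ignition looks for baseline config fragments before running its umount stage: a generic drop-in directory plus a per-platform one (qemu on this boot). Checking them on a Flatcar image is a one-liner, with both paths verbatim from the log:

    # Both directories were empty here; images that ship baseline Ignition
    # fragments would have *.ign files in them.
    ls -l /usr/lib/ignition/base.d /usr/lib/ignition/base.platform.d/qemu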
Feb 13 20:27:57.824591 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Feb 13 20:27:57.834102 systemd[1]: network-cleanup.service: Deactivated successfully. Feb 13 20:27:57.834201 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Feb 13 20:27:57.836087 systemd[1]: sysroot-boot.service: Deactivated successfully. Feb 13 20:27:57.836162 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Feb 13 20:27:57.838084 systemd[1]: initrd-setup-root.service: Deactivated successfully. Feb 13 20:27:57.838162 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Feb 13 20:27:57.843220 systemd[1]: systemd-udevd.service: Deactivated successfully. Feb 13 20:27:57.843349 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Feb 13 20:27:57.844872 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Feb 13 20:27:57.844912 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Feb 13 20:27:57.846501 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Feb 13 20:27:57.846538 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Feb 13 20:27:57.848592 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Feb 13 20:27:57.848657 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Feb 13 20:27:57.851320 systemd[1]: dracut-cmdline.service: Deactivated successfully. Feb 13 20:27:57.851367 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Feb 13 20:27:57.854085 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Feb 13 20:27:57.854132 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Feb 13 20:27:57.866798 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Feb 13 20:27:57.867835 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Feb 13 20:27:57.867895 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Feb 13 20:27:57.870005 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Feb 13 20:27:57.870050 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Feb 13 20:27:57.872263 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Feb 13 20:27:57.872369 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Feb 13 20:27:57.874613 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Feb 13 20:27:57.876483 systemd[1]: Starting initrd-switch-root.service - Switch Root... Feb 13 20:27:57.887217 systemd[1]: Switching root. Feb 13 20:27:57.910601 systemd-journald[238]: Journal stopped Feb 13 20:27:58.688363 systemd-journald[238]: Received SIGTERM from PID 1 (systemd). 
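"Switching root" is PID 1 isolating initrd-switch-root.target, which ends with the equivalent of systemctl switch-root: the mounted /sysroot becomes /, systemd re-executes itself from the real root, and journald is SIGTERMed (the line above) to be restarted afterwards. As an illustration of the mechanism only, not something to run by hand on a live system:

    # What the initrd's final step amounts to: pivot /sysroot to / and
    # re-execute systemd from the real root filesystem.
    systemctl switch-root /sysroot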
Feb 13 20:27:58.688417 kernel: SELinux: policy capability network_peer_controls=1 Feb 13 20:27:58.688430 kernel: SELinux: policy capability open_perms=1 Feb 13 20:27:58.688443 kernel: SELinux: policy capability extended_socket_class=1 Feb 13 20:27:58.688453 kernel: SELinux: policy capability always_check_network=0 Feb 13 20:27:58.688463 kernel: SELinux: policy capability cgroup_seclabel=1 Feb 13 20:27:58.688474 kernel: SELinux: policy capability nnp_nosuid_transition=1 Feb 13 20:27:58.688488 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Feb 13 20:27:58.688499 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Feb 13 20:27:58.688509 kernel: audit: type=1403 audit(1739478478.130:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 Feb 13 20:27:58.688524 systemd[1]: Successfully loaded SELinux policy in 32.348ms. Feb 13 20:27:58.688542 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 9.664ms. Feb 13 20:27:58.688557 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Feb 13 20:27:58.688568 systemd[1]: Detected virtualization kvm. Feb 13 20:27:58.688579 systemd[1]: Detected architecture arm64. Feb 13 20:27:58.688591 systemd[1]: Detected first boot. Feb 13 20:27:58.688604 systemd[1]: Initializing machine ID from VM UUID. Feb 13 20:27:58.688615 zram_generator::config[1065]: No configuration found. Feb 13 20:27:58.688645 systemd[1]: Populated /etc with preset unit settings. Feb 13 20:27:58.688657 systemd[1]: Queued start job for default target multi-user.target. Feb 13 20:27:58.688669 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6. Feb 13 20:27:58.688681 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Feb 13 20:27:58.688693 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Feb 13 20:27:58.688705 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Feb 13 20:27:58.688719 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Feb 13 20:27:58.688731 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Feb 13 20:27:58.688743 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Feb 13 20:27:58.688755 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Feb 13 20:27:58.688767 systemd[1]: Created slice user.slice - User and Session Slice. Feb 13 20:27:58.688779 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Feb 13 20:27:58.688798 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Feb 13 20:27:58.688812 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Feb 13 20:27:58.688824 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Feb 13 20:27:58.688840 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. Feb 13 20:27:58.688852 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... 
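The policy-capability lines printed by the kernel can be read back from selinuxfs once the system is up; a quick check using the standard /sys/fs/selinux layout:

    # Each file under policy_capabilities mirrors one "SELinux: policy
    # capability ..." boot line, e.g. prints network_peer_controls=1.
    for f in /sys/fs/selinux/policy_capabilities/*; do
        printf '%s=%s\n' "${f##*/}" "$(cat "$f")"
    done
    getenforce   # overall enforcement mode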
Feb 13 20:27:58.688863 systemd[1]: Expecting device dev-ttyAMA0.device - /dev/ttyAMA0... Feb 13 20:27:58.688874 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Feb 13 20:27:58.688886 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Feb 13 20:27:58.688902 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Feb 13 20:27:58.688913 systemd[1]: Reached target remote-fs.target - Remote File Systems. Feb 13 20:27:58.688925 systemd[1]: Reached target slices.target - Slice Units. Feb 13 20:27:58.688937 systemd[1]: Reached target swap.target - Swaps. Feb 13 20:27:58.688949 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Feb 13 20:27:58.688961 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Feb 13 20:27:58.688972 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Feb 13 20:27:58.688985 systemd[1]: Listening on systemd-journald.socket - Journal Socket. Feb 13 20:27:58.688997 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Feb 13 20:27:58.689008 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Feb 13 20:27:58.689019 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Feb 13 20:27:58.689031 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Feb 13 20:27:58.689044 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Feb 13 20:27:58.689056 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Feb 13 20:27:58.689067 systemd[1]: Mounting media.mount - External Media Directory... Feb 13 20:27:58.689078 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Feb 13 20:27:58.689089 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Feb 13 20:27:58.689101 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... Feb 13 20:27:58.689112 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Feb 13 20:27:58.689124 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Feb 13 20:27:58.689136 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Feb 13 20:27:58.689150 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Feb 13 20:27:58.689161 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Feb 13 20:27:58.689172 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Feb 13 20:27:58.689184 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Feb 13 20:27:58.689196 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Feb 13 20:27:58.689207 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Feb 13 20:27:58.689219 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Feb 13 20:27:58.689232 systemd[1]: systemd-journald.service: unit configures an IP firewall, but the local system does not support BPF/cgroup firewalling. Feb 13 20:27:58.689246 systemd[1]: systemd-journald.service: (This warning is only shown for the first unit using IP firewalling.) 
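The several modprobe@*.service jobs queued above all instantiate a single template unit: modprobe@.service runs modprobe against its instance name, which is why configfs, dm_mod, drm, efi_pstore, fuse and loop each get their own start/finish pair in the lines that follow. To see the mechanism:

    systemctl cat modprobe@.service        # the template; modprobe runs the instance name
    systemctl start modprobe@fuse.service  # equivalent in effect to: modprobe fuse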
Feb 13 20:27:58.689257 kernel: loop: module loaded Feb 13 20:27:58.689268 kernel: ACPI: bus type drm_connector registered Feb 13 20:27:58.689279 kernel: fuse: init (API version 7.39) Feb 13 20:27:58.689290 systemd[1]: Starting systemd-journald.service - Journal Service... Feb 13 20:27:58.689302 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Feb 13 20:27:58.689314 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Feb 13 20:27:58.689326 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Feb 13 20:27:58.689337 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Feb 13 20:27:58.689375 systemd-journald[1143]: Collecting audit messages is disabled. Feb 13 20:27:58.689398 systemd-journald[1143]: Journal started Feb 13 20:27:58.689422 systemd-journald[1143]: Runtime Journal (/run/log/journal/d4a136a0c76c441683bb8e47e88a6ab2) is 5.9M, max 47.3M, 41.4M free. Feb 13 20:27:58.693265 systemd[1]: Started systemd-journald.service - Journal Service. Feb 13 20:27:58.694297 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Feb 13 20:27:58.695502 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Feb 13 20:27:58.696927 systemd[1]: Mounted media.mount - External Media Directory. Feb 13 20:27:58.698198 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Feb 13 20:27:58.699559 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Feb 13 20:27:58.700935 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Feb 13 20:27:58.702264 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Feb 13 20:27:58.703925 systemd[1]: modprobe@configfs.service: Deactivated successfully. Feb 13 20:27:58.704098 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Feb 13 20:27:58.705854 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Feb 13 20:27:58.707668 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Feb 13 20:27:58.707854 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Feb 13 20:27:58.709471 systemd[1]: modprobe@drm.service: Deactivated successfully. Feb 13 20:27:58.709666 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Feb 13 20:27:58.711112 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Feb 13 20:27:58.711289 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Feb 13 20:27:58.713078 systemd[1]: modprobe@fuse.service: Deactivated successfully. Feb 13 20:27:58.713240 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Feb 13 20:27:58.714747 systemd[1]: modprobe@loop.service: Deactivated successfully. Feb 13 20:27:58.714977 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Feb 13 20:27:58.716477 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Feb 13 20:27:58.718163 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Feb 13 20:27:58.719984 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Feb 13 20:27:58.732618 systemd[1]: Reached target network-pre.target - Preparation for Network. Feb 13 20:27:58.743730 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... 
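The "Runtime Journal ... is 5.9M, max 47.3M" line reflects journald's automatic sizing of /run/log/journal. Current usage is queryable at any time, and the cap can be pinned in journald.conf (the 47M value below is hypothetical, chosen only to mirror this boot):

    journalctl --disk-usage
    # /etc/systemd/journald.conf excerpt (hypothetical values):
    #   [Journal]
    #   RuntimeMaxUse=47M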
Feb 13 20:27:58.746054 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Feb 13 20:27:58.747242 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Feb 13 20:27:58.769855 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Feb 13 20:27:58.773383 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Feb 13 20:27:58.774816 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Feb 13 20:27:58.776229 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Feb 13 20:27:58.777591 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Feb 13 20:27:58.779195 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Feb 13 20:27:58.783140 systemd-journald[1143]: Time spent on flushing to /var/log/journal/d4a136a0c76c441683bb8e47e88a6ab2 is 16.113ms for 845 entries. Feb 13 20:27:58.783140 systemd-journald[1143]: System Journal (/var/log/journal/d4a136a0c76c441683bb8e47e88a6ab2) is 8.0M, max 195.6M, 187.6M free. Feb 13 20:27:58.807995 systemd-journald[1143]: Received client request to flush runtime journal. Feb 13 20:27:58.784662 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Feb 13 20:27:58.792220 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Feb 13 20:27:58.793827 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Feb 13 20:27:58.795167 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Feb 13 20:27:58.796754 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Feb 13 20:27:58.799848 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Feb 13 20:27:58.812362 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization... Feb 13 20:27:58.814153 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Feb 13 20:27:58.816462 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Feb 13 20:27:58.822342 udevadm[1204]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation-early.service, lvm2-activation.service not to pull it in. Feb 13 20:27:58.826740 systemd-tmpfiles[1196]: ACLs are not supported, ignoring. Feb 13 20:27:58.826758 systemd-tmpfiles[1196]: ACLs are not supported, ignoring. Feb 13 20:27:58.831199 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Feb 13 20:27:58.839917 systemd[1]: Starting systemd-sysusers.service - Create System Users... Feb 13 20:27:58.865644 systemd[1]: Finished systemd-sysusers.service - Create System Users. Feb 13 20:27:58.873895 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Feb 13 20:27:58.886233 systemd-tmpfiles[1218]: ACLs are not supported, ignoring. Feb 13 20:27:58.886255 systemd-tmpfiles[1218]: ACLs are not supported, ignoring. Feb 13 20:27:58.890273 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Feb 13 20:27:59.242524 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. 
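Two of the services in this stretch have direct command-line counterparts, which is useful when reproducing the sequence outside of boot: systemd-journal-flush moves the runtime journal onto /var, and systemd-tmpfiles-setup replays tmpfiles.d (the "ACLs are not supported, ignoring" warnings above come from that same pass):

    journalctl --flush        # what systemd-journal-flush.service performs
    systemd-tmpfiles --create # what systemd-tmpfiles-setup.service performs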
Feb 13 20:27:59.253784 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Feb 13 20:27:59.273603 systemd-udevd[1224]: Using default interface naming scheme 'v255'. Feb 13 20:27:59.287382 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Feb 13 20:27:59.298811 systemd[1]: Starting systemd-networkd.service - Network Configuration... Feb 13 20:27:59.311833 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Feb 13 20:27:59.313558 systemd[1]: Found device dev-ttyAMA0.device - /dev/ttyAMA0. Feb 13 20:27:59.351549 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 40 scanned by (udev-worker) (1237) Feb 13 20:27:59.374910 systemd[1]: Started systemd-userdbd.service - User Database Manager. Feb 13 20:27:59.388921 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Feb 13 20:27:59.445926 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Feb 13 20:27:59.456999 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization. Feb 13 20:27:59.460866 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes... Feb 13 20:27:59.464349 systemd-networkd[1234]: lo: Link UP Feb 13 20:27:59.464360 systemd-networkd[1234]: lo: Gained carrier Feb 13 20:27:59.465155 systemd-networkd[1234]: Enumeration completed Feb 13 20:27:59.465320 systemd[1]: Started systemd-networkd.service - Network Configuration. Feb 13 20:27:59.465586 systemd-networkd[1234]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Feb 13 20:27:59.465589 systemd-networkd[1234]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Feb 13 20:27:59.466196 systemd-networkd[1234]: eth0: Link UP Feb 13 20:27:59.466207 systemd-networkd[1234]: eth0: Gained carrier Feb 13 20:27:59.466218 systemd-networkd[1234]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Feb 13 20:27:59.468480 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Feb 13 20:27:59.475494 lvm[1261]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Feb 13 20:27:59.486721 systemd-networkd[1234]: eth0: DHCPv4 address 10.0.0.9/16, gateway 10.0.0.1 acquired from 10.0.0.1 Feb 13 20:27:59.489877 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Feb 13 20:27:59.501285 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes. Feb 13 20:27:59.502888 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Feb 13 20:27:59.523963 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes... Feb 13 20:27:59.527946 lvm[1270]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Feb 13 20:27:59.560208 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes. Feb 13 20:27:59.561743 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Feb 13 20:27:59.563104 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Feb 13 20:27:59.563153 systemd[1]: Reached target local-fs.target - Local File Systems. 
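The DHCPv4 lease (10.0.0.9/16 via gateway 10.0.0.1) and the zz-default.network match can both be confirmed after boot with networkd's own tooling; all three commands are standard:

    networkctl list           # link states; eth0 should show "routable"
    networkctl status eth0    # shows the leased address and gateway above
    cat /usr/lib/systemd/network/zz-default.network   # the matched profile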
Feb 13 20:27:59.564199 systemd[1]: Reached target machines.target - Containers. Feb 13 20:27:59.566896 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink). Feb 13 20:27:59.580788 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Feb 13 20:27:59.583206 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Feb 13 20:27:59.584478 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Feb 13 20:27:59.585792 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Feb 13 20:27:59.589657 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk... Feb 13 20:27:59.594765 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Feb 13 20:27:59.599087 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Feb 13 20:27:59.608663 kernel: loop0: detected capacity change from 0 to 114328 Feb 13 20:27:59.617559 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Feb 13 20:27:59.618484 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Feb 13 20:27:59.621654 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Feb 13 20:27:59.621906 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk. Feb 13 20:27:59.653646 kernel: loop1: detected capacity change from 0 to 194096 Feb 13 20:27:59.689675 kernel: loop2: detected capacity change from 0 to 114432 Feb 13 20:27:59.731661 kernel: loop3: detected capacity change from 0 to 114328 Feb 13 20:27:59.736927 kernel: loop4: detected capacity change from 0 to 194096 Feb 13 20:27:59.743655 kernel: loop5: detected capacity change from 0 to 114432 Feb 13 20:27:59.747331 (sd-merge)[1292]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes'. Feb 13 20:27:59.747761 (sd-merge)[1292]: Merged extensions into '/usr'. Feb 13 20:27:59.752901 systemd[1]: Reloading requested from client PID 1278 ('systemd-sysext') (unit systemd-sysext.service)... Feb 13 20:27:59.752919 systemd[1]: Reloading... Feb 13 20:27:59.800655 zram_generator::config[1320]: No configuration found. Feb 13 20:27:59.856510 ldconfig[1274]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Feb 13 20:27:59.910748 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Feb 13 20:27:59.955250 systemd[1]: Reloading finished in 201 ms. Feb 13 20:27:59.969784 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Feb 13 20:27:59.971480 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Feb 13 20:27:59.988810 systemd[1]: Starting ensure-sysext.service... Feb 13 20:27:59.991257 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Feb 13 20:27:59.995427 systemd[1]: Reloading requested from client PID 1361 ('systemctl') (unit ensure-sysext.service)... Feb 13 20:27:59.995534 systemd[1]: Reloading... Feb 13 20:28:00.009244 systemd-tmpfiles[1362]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. 
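The (sd-merge) lines are systemd-sysext combining the three extension images into the /usr overlay; each "loopN: detected capacity change" above is one image being attached to a loop device. The merge state is inspectable and re-runnable:

    systemd-sysext status    # lists containerd-flatcar, docker-flatcar, kubernetes
    systemd-sysext refresh   # unmerge and merge again after images change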
Feb 13 20:28:00.009520 systemd-tmpfiles[1362]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Feb 13 20:28:00.010230 systemd-tmpfiles[1362]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Feb 13 20:28:00.010450 systemd-tmpfiles[1362]: ACLs are not supported, ignoring. Feb 13 20:28:00.010496 systemd-tmpfiles[1362]: ACLs are not supported, ignoring. Feb 13 20:28:00.012836 systemd-tmpfiles[1362]: Detected autofs mount point /boot during canonicalization of boot. Feb 13 20:28:00.012849 systemd-tmpfiles[1362]: Skipping /boot Feb 13 20:28:00.020206 systemd-tmpfiles[1362]: Detected autofs mount point /boot during canonicalization of boot. Feb 13 20:28:00.020222 systemd-tmpfiles[1362]: Skipping /boot Feb 13 20:28:00.040725 zram_generator::config[1390]: No configuration found. Feb 13 20:28:00.134601 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Feb 13 20:28:00.178016 systemd[1]: Reloading finished in 182 ms. Feb 13 20:28:00.193446 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Feb 13 20:28:00.204548 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Feb 13 20:28:00.206983 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Feb 13 20:28:00.209469 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Feb 13 20:28:00.213320 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Feb 13 20:28:00.216741 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Feb 13 20:28:00.225387 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Feb 13 20:28:00.231221 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Feb 13 20:28:00.236907 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Feb 13 20:28:00.240197 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Feb 13 20:28:00.241865 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Feb 13 20:28:00.242988 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Feb 13 20:28:00.246464 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Feb 13 20:28:00.246614 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Feb 13 20:28:00.248173 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Feb 13 20:28:00.248312 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Feb 13 20:28:00.253429 systemd[1]: modprobe@loop.service: Deactivated successfully. Feb 13 20:28:00.253640 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Feb 13 20:28:00.257279 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Feb 13 20:28:00.264793 augenrules[1467]: No rules Feb 13 20:28:00.268833 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Feb 13 20:28:00.271888 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... 
Feb 13 20:28:00.273091 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Feb 13 20:28:00.274581 systemd[1]: Starting systemd-update-done.service - Update is Completed... Feb 13 20:28:00.276541 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Feb 13 20:28:00.278195 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Feb 13 20:28:00.280022 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Feb 13 20:28:00.281813 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Feb 13 20:28:00.281965 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Feb 13 20:28:00.283701 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Feb 13 20:28:00.283957 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Feb 13 20:28:00.291934 systemd[1]: Finished systemd-update-done.service - Update is Completed. Feb 13 20:28:00.294636 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Feb 13 20:28:00.303876 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Feb 13 20:28:00.304942 systemd-resolved[1437]: Positive Trust Anchors: Feb 13 20:28:00.306104 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Feb 13 20:28:00.306704 systemd-resolved[1437]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Feb 13 20:28:00.306736 systemd-resolved[1437]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Feb 13 20:28:00.308930 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Feb 13 20:28:00.311904 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Feb 13 20:28:00.312829 systemd-resolved[1437]: Defaulting to hostname 'linux'. Feb 13 20:28:00.314899 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Feb 13 20:28:00.315061 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Feb 13 20:28:00.316019 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Feb 13 20:28:00.316180 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Feb 13 20:28:00.317747 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Feb 13 20:28:00.319309 systemd[1]: modprobe@drm.service: Deactivated successfully. Feb 13 20:28:00.319461 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Feb 13 20:28:00.321066 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Feb 13 20:28:00.321208 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. 
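The "Positive Trust Anchors" dump is systemd-resolved loading the root-zone DNSSEC key (DS 20326) together with its built-in negative anchors for private and special-use domains; "Defaulting to hostname 'linux'" only means no hostname had been set yet. Runtime resolver state is available via resolvectl:

    resolvectl status              # servers, DNSSEC setting, per-link config
    resolvectl query flatcar.org   # exercise the resolver path end to end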
Feb 13 20:28:00.322977 systemd[1]: modprobe@loop.service: Deactivated successfully. Feb 13 20:28:00.323183 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Feb 13 20:28:00.327062 systemd[1]: Finished ensure-sysext.service. Feb 13 20:28:00.330847 systemd[1]: Reached target network.target - Network. Feb 13 20:28:00.332069 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Feb 13 20:28:00.333301 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Feb 13 20:28:00.333379 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Feb 13 20:28:00.344844 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization... Feb 13 20:28:00.385469 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization. Feb 13 20:28:00.387078 systemd-timesyncd[1501]: Contacted time server 10.0.0.1:123 (10.0.0.1). Feb 13 20:28:00.387134 systemd-timesyncd[1501]: Initial clock synchronization to Thu 2025-02-13 20:28:00.295155 UTC. Feb 13 20:28:00.387276 systemd[1]: Reached target sysinit.target - System Initialization. Feb 13 20:28:00.388493 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Feb 13 20:28:00.389797 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Feb 13 20:28:00.391103 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Feb 13 20:28:00.392351 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Feb 13 20:28:00.392387 systemd[1]: Reached target paths.target - Path Units. Feb 13 20:28:00.393341 systemd[1]: Reached target time-set.target - System Time Set. Feb 13 20:28:00.394483 systemd[1]: Started logrotate.timer - Daily rotation of log files. Feb 13 20:28:00.395660 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Feb 13 20:28:00.396852 systemd[1]: Reached target timers.target - Timer Units. Feb 13 20:28:00.398453 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Feb 13 20:28:00.401064 systemd[1]: Starting docker.socket - Docker Socket for the API... Feb 13 20:28:00.403277 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Feb 13 20:28:00.407618 systemd[1]: Listening on docker.socket - Docker Socket for the API. Feb 13 20:28:00.408687 systemd[1]: Reached target sockets.target - Socket Units. Feb 13 20:28:00.409640 systemd[1]: Reached target basic.target - Basic System. Feb 13 20:28:00.410721 systemd[1]: System is tainted: cgroupsv1 Feb 13 20:28:00.410781 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Feb 13 20:28:00.410802 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Feb 13 20:28:00.411849 systemd[1]: Starting containerd.service - containerd container runtime... Feb 13 20:28:00.413877 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Feb 13 20:28:00.416749 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Feb 13 20:28:00.420836 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... 
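systemd-timesyncd reached the NTP server at 10.0.0.1:123 and performed the initial clock synchronization noted above. Peer and poll status can be checked afterwards with timedatectl:

    timedatectl timesync-status   # server 10.0.0.1:123, stratum, poll interval
    timedatectl status            # should report "NTP service: active"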
Feb 13 20:28:00.421810 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Feb 13 20:28:00.425788 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Feb 13 20:28:00.431742 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Feb 13 20:28:00.435007 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Feb 13 20:28:00.436812 jq[1507]: false Feb 13 20:28:00.446845 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Feb 13 20:28:00.450507 extend-filesystems[1509]: Found loop3 Feb 13 20:28:00.450507 extend-filesystems[1509]: Found loop4 Feb 13 20:28:00.450507 extend-filesystems[1509]: Found loop5 Feb 13 20:28:00.450507 extend-filesystems[1509]: Found vda Feb 13 20:28:00.450507 extend-filesystems[1509]: Found vda1 Feb 13 20:28:00.450507 extend-filesystems[1509]: Found vda2 Feb 13 20:28:00.450507 extend-filesystems[1509]: Found vda3 Feb 13 20:28:00.450507 extend-filesystems[1509]: Found usr Feb 13 20:28:00.450507 extend-filesystems[1509]: Found vda4 Feb 13 20:28:00.450507 extend-filesystems[1509]: Found vda6 Feb 13 20:28:00.450507 extend-filesystems[1509]: Found vda7 Feb 13 20:28:00.450507 extend-filesystems[1509]: Found vda9 Feb 13 20:28:00.450507 extend-filesystems[1509]: Checking size of /dev/vda9 Feb 13 20:28:00.464206 dbus-daemon[1506]: [system] SELinux support is enabled Feb 13 20:28:00.452797 systemd[1]: Starting systemd-logind.service - User Login Management... Feb 13 20:28:00.471357 extend-filesystems[1509]: Resized partition /dev/vda9 Feb 13 20:28:00.459265 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Feb 13 20:28:00.462805 systemd[1]: Starting update-engine.service - Update Engine... Feb 13 20:28:00.465255 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Feb 13 20:28:00.466869 systemd[1]: Started dbus.service - D-Bus System Message Bus. Feb 13 20:28:00.482886 extend-filesystems[1534]: resize2fs 1.47.1 (20-May-2024) Feb 13 20:28:00.483337 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Feb 13 20:28:00.483551 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Feb 13 20:28:00.483839 systemd[1]: motdgen.service: Deactivated successfully. Feb 13 20:28:00.484027 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Feb 13 20:28:00.488715 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 40 scanned by (udev-worker) (1242) Feb 13 20:28:00.488768 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 1864699 blocks Feb 13 20:28:00.488795 jq[1533]: true Feb 13 20:28:00.490074 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Feb 13 20:28:00.490288 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. 
Feb 13 20:28:00.533207 kernel: EXT4-fs (vda9): resized filesystem to 1864699 Feb 13 20:28:00.514047 (ntainerd)[1543]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Feb 13 20:28:00.533528 tar[1537]: linux-arm64/helm Feb 13 20:28:00.533708 update_engine[1530]: I20250213 20:28:00.532850 1530 main.cc:92] Flatcar Update Engine starting Feb 13 20:28:00.529089 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Feb 13 20:28:00.540301 extend-filesystems[1534]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required Feb 13 20:28:00.540301 extend-filesystems[1534]: old_desc_blocks = 1, new_desc_blocks = 1 Feb 13 20:28:00.540301 extend-filesystems[1534]: The filesystem on /dev/vda9 is now 1864699 (4k) blocks long. Feb 13 20:28:00.544934 update_engine[1530]: I20250213 20:28:00.539846 1530 update_check_scheduler.cc:74] Next update check in 2m13s Feb 13 20:28:00.544967 jq[1542]: true Feb 13 20:28:00.529124 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Feb 13 20:28:00.545257 extend-filesystems[1509]: Resized filesystem in /dev/vda9 Feb 13 20:28:00.530528 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Feb 13 20:28:00.530543 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Feb 13 20:28:00.538036 systemd[1]: extend-filesystems.service: Deactivated successfully. Feb 13 20:28:00.538264 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Feb 13 20:28:00.543250 systemd[1]: Started update-engine.service - Update Engine. Feb 13 20:28:00.552316 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Feb 13 20:28:00.564341 systemd[1]: Started locksmithd.service - Cluster reboot manager. Feb 13 20:28:00.566738 systemd-logind[1523]: Watching system buttons on /dev/input/event0 (Power Button) Feb 13 20:28:00.566995 systemd-logind[1523]: New seat seat0. Feb 13 20:28:00.568417 systemd[1]: Started systemd-logind.service - User Login Management. Feb 13 20:28:00.579734 sshd_keygen[1529]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Feb 13 20:28:00.605423 bash[1571]: Updated "/home/core/.ssh/authorized_keys" Feb 13 20:28:00.607669 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Feb 13 20:28:00.609694 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met. Feb 13 20:28:00.620176 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Feb 13 20:28:00.629716 locksmithd[1557]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Feb 13 20:28:00.637062 systemd[1]: Starting issuegen.service - Generate /run/issue... Feb 13 20:28:00.643340 systemd[1]: issuegen.service: Deactivated successfully. Feb 13 20:28:00.643595 systemd[1]: Finished issuegen.service - Generate /run/issue. Feb 13 20:28:00.646877 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Feb 13 20:28:00.659209 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. 
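extend-filesystems grew the root filesystem on-line: the two kernel EXT4-fs lines bracket resize2fs taking /dev/vda9 from 553472 to 1864699 4k blocks (roughly 2.1 GiB to 7.1 GiB). The equivalent manual invocation, with the device name from the log; ext4 grows on-line while mounted:

    lsblk -o NAME,SIZE,FSSIZE /dev/vda9   # block device vs. filesystem size
    resize2fs /dev/vda9                   # grow the mounted ext4 to fill vda9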
Feb 13 20:28:00.669915 systemd[1]: Started getty@tty1.service - Getty on tty1. Feb 13 20:28:00.675138 systemd[1]: Started serial-getty@ttyAMA0.service - Serial Getty on ttyAMA0. Feb 13 20:28:00.676831 systemd[1]: Reached target getty.target - Login Prompts. Feb 13 20:28:00.746084 containerd[1543]: time="2025-02-13T20:28:00.745980120Z" level=info msg="starting containerd" revision=174e0d1785eeda18dc2beba45e1d5a188771636b version=v1.7.21 Feb 13 20:28:00.769547 containerd[1543]: time="2025-02-13T20:28:00.769477360Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Feb 13 20:28:00.772317 containerd[1543]: time="2025-02-13T20:28:00.771117440Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.74-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Feb 13 20:28:00.772317 containerd[1543]: time="2025-02-13T20:28:00.771154800Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Feb 13 20:28:00.772317 containerd[1543]: time="2025-02-13T20:28:00.771171880Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Feb 13 20:28:00.772317 containerd[1543]: time="2025-02-13T20:28:00.771323200Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1 Feb 13 20:28:00.772317 containerd[1543]: time="2025-02-13T20:28:00.771340520Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1 Feb 13 20:28:00.772317 containerd[1543]: time="2025-02-13T20:28:00.771390680Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1 Feb 13 20:28:00.772317 containerd[1543]: time="2025-02-13T20:28:00.771402880Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Feb 13 20:28:00.772317 containerd[1543]: time="2025-02-13T20:28:00.771602320Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Feb 13 20:28:00.772317 containerd[1543]: time="2025-02-13T20:28:00.771618320Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Feb 13 20:28:00.772317 containerd[1543]: time="2025-02-13T20:28:00.771657440Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1 Feb 13 20:28:00.772317 containerd[1543]: time="2025-02-13T20:28:00.771668640Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Feb 13 20:28:00.772598 containerd[1543]: time="2025-02-13T20:28:00.771740600Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Feb 13 20:28:00.772598 containerd[1543]: time="2025-02-13T20:28:00.771932400Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." 
type=io.containerd.snapshotter.v1 Feb 13 20:28:00.772598 containerd[1543]: time="2025-02-13T20:28:00.772059960Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Feb 13 20:28:00.772598 containerd[1543]: time="2025-02-13T20:28:00.772075000Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Feb 13 20:28:00.772598 containerd[1543]: time="2025-02-13T20:28:00.772142520Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 Feb 13 20:28:00.772598 containerd[1543]: time="2025-02-13T20:28:00.772182080Z" level=info msg="metadata content store policy set" policy=shared Feb 13 20:28:00.775769 containerd[1543]: time="2025-02-13T20:28:00.775733400Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Feb 13 20:28:00.775880 containerd[1543]: time="2025-02-13T20:28:00.775864560Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Feb 13 20:28:00.775955 containerd[1543]: time="2025-02-13T20:28:00.775941880Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1 Feb 13 20:28:00.776020 containerd[1543]: time="2025-02-13T20:28:00.776006680Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1 Feb 13 20:28:00.776074 containerd[1543]: time="2025-02-13T20:28:00.776062920Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Feb 13 20:28:00.776266 containerd[1543]: time="2025-02-13T20:28:00.776247760Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Feb 13 20:28:00.776743 containerd[1543]: time="2025-02-13T20:28:00.776713720Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Feb 13 20:28:00.776887 containerd[1543]: time="2025-02-13T20:28:00.776867840Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2 Feb 13 20:28:00.776911 containerd[1543]: time="2025-02-13T20:28:00.776891680Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1 Feb 13 20:28:00.776911 containerd[1543]: time="2025-02-13T20:28:00.776906320Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1 Feb 13 20:28:00.776971 containerd[1543]: time="2025-02-13T20:28:00.776922280Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Feb 13 20:28:00.776971 containerd[1543]: time="2025-02-13T20:28:00.776944160Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Feb 13 20:28:00.776971 containerd[1543]: time="2025-02-13T20:28:00.776957240Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Feb 13 20:28:00.777024 containerd[1543]: time="2025-02-13T20:28:00.776970800Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." 
type=io.containerd.service.v1 Feb 13 20:28:00.777024 containerd[1543]: time="2025-02-13T20:28:00.776985320Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Feb 13 20:28:00.777024 containerd[1543]: time="2025-02-13T20:28:00.776998040Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Feb 13 20:28:00.777024 containerd[1543]: time="2025-02-13T20:28:00.777010600Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Feb 13 20:28:00.777989 containerd[1543]: time="2025-02-13T20:28:00.777179320Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Feb 13 20:28:00.777989 containerd[1543]: time="2025-02-13T20:28:00.777291600Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Feb 13 20:28:00.777989 containerd[1543]: time="2025-02-13T20:28:00.777339280Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Feb 13 20:28:00.777989 containerd[1543]: time="2025-02-13T20:28:00.777355960Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Feb 13 20:28:00.777989 containerd[1543]: time="2025-02-13T20:28:00.777375920Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Feb 13 20:28:00.777989 containerd[1543]: time="2025-02-13T20:28:00.777394520Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Feb 13 20:28:00.777989 containerd[1543]: time="2025-02-13T20:28:00.777411920Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Feb 13 20:28:00.777989 containerd[1543]: time="2025-02-13T20:28:00.777428280Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Feb 13 20:28:00.777989 containerd[1543]: time="2025-02-13T20:28:00.777452480Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Feb 13 20:28:00.777989 containerd[1543]: time="2025-02-13T20:28:00.777471560Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1 Feb 13 20:28:00.777989 containerd[1543]: time="2025-02-13T20:28:00.777496840Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1 Feb 13 20:28:00.777989 containerd[1543]: time="2025-02-13T20:28:00.777510880Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Feb 13 20:28:00.777989 containerd[1543]: time="2025-02-13T20:28:00.777528360Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1 Feb 13 20:28:00.777989 containerd[1543]: time="2025-02-13T20:28:00.777545520Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Feb 13 20:28:00.777989 containerd[1543]: time="2025-02-13T20:28:00.777572680Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1 Feb 13 20:28:00.778309 containerd[1543]: time="2025-02-13T20:28:00.777607640Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." 
type=io.containerd.grpc.v1 Feb 13 20:28:00.778309 containerd[1543]: time="2025-02-13T20:28:00.777666360Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Feb 13 20:28:00.778309 containerd[1543]: time="2025-02-13T20:28:00.777690680Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Feb 13 20:28:00.778309 containerd[1543]: time="2025-02-13T20:28:00.777967600Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Feb 13 20:28:00.778309 containerd[1543]: time="2025-02-13T20:28:00.778007360Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1 Feb 13 20:28:00.778309 containerd[1543]: time="2025-02-13T20:28:00.778022280Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Feb 13 20:28:00.778309 containerd[1543]: time="2025-02-13T20:28:00.778041440Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1 Feb 13 20:28:00.778309 containerd[1543]: time="2025-02-13T20:28:00.778277040Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Feb 13 20:28:00.778309 containerd[1543]: time="2025-02-13T20:28:00.778296880Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1 Feb 13 20:28:00.778309 containerd[1543]: time="2025-02-13T20:28:00.778308560Z" level=info msg="NRI interface is disabled by configuration." Feb 13 20:28:00.778495 containerd[1543]: time="2025-02-13T20:28:00.778324600Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." 
type=io.containerd.grpc.v1 Feb 13 20:28:00.779025 containerd[1543]: time="2025-02-13T20:28:00.778958360Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:false] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:false SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Feb 13 20:28:00.779025 containerd[1543]: time="2025-02-13T20:28:00.779029200Z" level=info msg="Connect containerd service" Feb 13 20:28:00.779175 containerd[1543]: time="2025-02-13T20:28:00.779087320Z" level=info msg="using legacy CRI server" Feb 13 20:28:00.779175 containerd[1543]: time="2025-02-13T20:28:00.779096360Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Feb 13 20:28:00.779218 containerd[1543]: time="2025-02-13T20:28:00.779179800Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Feb 13 20:28:00.779978 containerd[1543]: time="2025-02-13T20:28:00.779939720Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Feb 13 
20:28:00.780409 containerd[1543]: time="2025-02-13T20:28:00.780284400Z" level=info msg="Start subscribing containerd event" Feb 13 20:28:00.780409 containerd[1543]: time="2025-02-13T20:28:00.780341720Z" level=info msg="Start recovering state" Feb 13 20:28:00.780535 containerd[1543]: time="2025-02-13T20:28:00.780519440Z" level=info msg="Start event monitor" Feb 13 20:28:00.780596 containerd[1543]: time="2025-02-13T20:28:00.780583080Z" level=info msg="Start snapshots syncer" Feb 13 20:28:00.780695 containerd[1543]: time="2025-02-13T20:28:00.780682480Z" level=info msg="Start cni network conf syncer for default" Feb 13 20:28:00.780913 containerd[1543]: time="2025-02-13T20:28:00.780896520Z" level=info msg="Start streaming server" Feb 13 20:28:00.781056 containerd[1543]: time="2025-02-13T20:28:00.780613280Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Feb 13 20:28:00.781202 containerd[1543]: time="2025-02-13T20:28:00.781185400Z" level=info msg=serving... address=/run/containerd/containerd.sock Feb 13 20:28:00.781322 containerd[1543]: time="2025-02-13T20:28:00.781307360Z" level=info msg="containerd successfully booted in 0.036932s" Feb 13 20:28:00.781429 systemd[1]: Started containerd.service - containerd container runtime. Feb 13 20:28:00.902538 tar[1537]: linux-arm64/LICENSE Feb 13 20:28:00.902699 tar[1537]: linux-arm64/README.md Feb 13 20:28:00.912823 systemd-networkd[1234]: eth0: Gained IPv6LL Feb 13 20:28:00.914616 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Feb 13 20:28:00.916221 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Feb 13 20:28:00.918596 systemd[1]: Reached target network-online.target - Network is Online. Feb 13 20:28:00.921207 systemd[1]: Starting coreos-metadata.service - QEMU metadata agent... Feb 13 20:28:00.923802 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Feb 13 20:28:00.925897 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Feb 13 20:28:00.948301 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Feb 13 20:28:00.949928 systemd[1]: coreos-metadata.service: Deactivated successfully. Feb 13 20:28:00.950149 systemd[1]: Finished coreos-metadata.service - QEMU metadata agent. Feb 13 20:28:00.952651 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Feb 13 20:28:01.405969 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Feb 13 20:28:01.407517 systemd[1]: Reached target multi-user.target - Multi-User System. Feb 13 20:28:01.409716 (kubelet)[1643]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Feb 13 20:28:01.411843 systemd[1]: Startup finished in 5.022s (kernel) + 3.316s (userspace) = 8.338s. Feb 13 20:28:01.876182 kubelet[1643]: E0213 20:28:01.876035 1643 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Feb 13 20:28:01.878704 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Feb 13 20:28:01.878895 systemd[1]: kubelet.service: Failed with result 'exit-code'. 
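The kubelet exit above is caused by a missing /var/lib/kubelet/config.yaml, which is expected on a node that has not yet been bootstrapped (e.g., before kubeadm writes that file); systemd will keep restarting the unit until it appears. A minimal sketch of the same pre-flight check — only the path is taken from the log, everything else is illustrative:

```go
// Sketch: reproduce the kubelet's config-file check from the error above.
package main

import (
	"fmt"
	"os"
)

func main() {
	const path = "/var/lib/kubelet/config.yaml"
	if _, err := os.Stat(path); err != nil {
		if os.IsNotExist(err) {
			fmt.Printf("kubelet would exit: %s not found (expected before node bootstrap)\n", path)
			return
		}
		fmt.Printf("cannot stat %s: %v\n", path, err)
		return
	}
	fmt.Println("kubelet config file present")
}
```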
Feb 13 20:28:06.972280 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Feb 13 20:28:06.984846 systemd[1]: Started sshd@0-10.0.0.9:22-10.0.0.1:35010.service - OpenSSH per-connection server daemon (10.0.0.1:35010). Feb 13 20:28:07.041579 sshd[1658]: Accepted publickey for core from 10.0.0.1 port 35010 ssh2: RSA SHA256:TdhA+b7AFfsR49yUYzSzEma9Q+UWNVnDR75LIS9Grbw Feb 13 20:28:07.042707 sshd[1658]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 20:28:07.056491 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Feb 13 20:28:07.067930 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Feb 13 20:28:07.069760 systemd-logind[1523]: New session 1 of user core. Feb 13 20:28:07.077441 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Feb 13 20:28:07.091963 systemd[1]: Starting user@500.service - User Manager for UID 500... Feb 13 20:28:07.094614 (systemd)[1664]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Feb 13 20:28:07.164878 systemd[1664]: Queued start job for default target default.target. Feb 13 20:28:07.165612 systemd[1664]: Created slice app.slice - User Application Slice. Feb 13 20:28:07.165667 systemd[1664]: Reached target paths.target - Paths. Feb 13 20:28:07.165679 systemd[1664]: Reached target timers.target - Timers. Feb 13 20:28:07.173719 systemd[1664]: Starting dbus.socket - D-Bus User Message Bus Socket... Feb 13 20:28:07.179149 systemd[1664]: Listening on dbus.socket - D-Bus User Message Bus Socket. Feb 13 20:28:07.179209 systemd[1664]: Reached target sockets.target - Sockets. Feb 13 20:28:07.179220 systemd[1664]: Reached target basic.target - Basic System. Feb 13 20:28:07.179255 systemd[1664]: Reached target default.target - Main User Target. Feb 13 20:28:07.179279 systemd[1664]: Startup finished in 79ms. Feb 13 20:28:07.179638 systemd[1]: Started user@500.service - User Manager for UID 500. Feb 13 20:28:07.180939 systemd[1]: Started session-1.scope - Session 1 of User core. Feb 13 20:28:07.238874 systemd[1]: Started sshd@1-10.0.0.9:22-10.0.0.1:35012.service - OpenSSH per-connection server daemon (10.0.0.1:35012). Feb 13 20:28:07.273720 sshd[1676]: Accepted publickey for core from 10.0.0.1 port 35012 ssh2: RSA SHA256:TdhA+b7AFfsR49yUYzSzEma9Q+UWNVnDR75LIS9Grbw Feb 13 20:28:07.275068 sshd[1676]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 20:28:07.279580 systemd-logind[1523]: New session 2 of user core. Feb 13 20:28:07.286923 systemd[1]: Started session-2.scope - Session 2 of User core. Feb 13 20:28:07.338206 sshd[1676]: pam_unix(sshd:session): session closed for user core Feb 13 20:28:07.354915 systemd[1]: Started sshd@2-10.0.0.9:22-10.0.0.1:35026.service - OpenSSH per-connection server daemon (10.0.0.1:35026). Feb 13 20:28:07.355292 systemd[1]: sshd@1-10.0.0.9:22-10.0.0.1:35012.service: Deactivated successfully. Feb 13 20:28:07.357549 systemd[1]: session-2.scope: Deactivated successfully. Feb 13 20:28:07.357968 systemd-logind[1523]: Session 2 logged out. Waiting for processes to exit. Feb 13 20:28:07.358907 systemd-logind[1523]: Removed session 2. 
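Each connection above gets its own per-connection systemd unit (sshd@N-10.0.0.9:22-...) plus a PAM session for user core authenticated by public key. A hedged sketch of an equivalent client using golang.org/x/crypto/ssh — the address and username come from the log, the key path is a hypothetical stand-in:

```go
// Sketch only: open an SSH session like the ones logged above.
package main

import (
	"log"
	"os"

	"golang.org/x/crypto/ssh"
)

func main() {
	keyBytes, err := os.ReadFile("/home/core/.ssh/id_rsa") // hypothetical path
	if err != nil {
		log.Fatal(err)
	}
	signer, err := ssh.ParsePrivateKey(keyBytes)
	if err != nil {
		log.Fatal(err)
	}
	cfg := &ssh.ClientConfig{
		User:            "core", // user from the log
		Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
		HostKeyCallback: ssh.InsecureIgnoreHostKey(), // fine for a sketch, not for production
	}
	client, err := ssh.Dial("tcp", "10.0.0.9:22", cfg) // endpoint from the log
	if err != nil {
		log.Fatal(err)
	}
	defer client.Close()
	log.Println("session established, like session-1.scope above")
}
```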
Feb 13 20:28:07.389615 sshd[1681]: Accepted publickey for core from 10.0.0.1 port 35026 ssh2: RSA SHA256:TdhA+b7AFfsR49yUYzSzEma9Q+UWNVnDR75LIS9Grbw Feb 13 20:28:07.390905 sshd[1681]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 20:28:07.394656 systemd-logind[1523]: New session 3 of user core. Feb 13 20:28:07.410845 systemd[1]: Started session-3.scope - Session 3 of User core. Feb 13 20:28:07.457638 sshd[1681]: pam_unix(sshd:session): session closed for user core Feb 13 20:28:07.467864 systemd[1]: Started sshd@3-10.0.0.9:22-10.0.0.1:35038.service - OpenSSH per-connection server daemon (10.0.0.1:35038). Feb 13 20:28:07.468260 systemd[1]: sshd@2-10.0.0.9:22-10.0.0.1:35026.service: Deactivated successfully. Feb 13 20:28:07.470041 systemd-logind[1523]: Session 3 logged out. Waiting for processes to exit. Feb 13 20:28:07.470543 systemd[1]: session-3.scope: Deactivated successfully. Feb 13 20:28:07.472024 systemd-logind[1523]: Removed session 3. Feb 13 20:28:07.502570 sshd[1689]: Accepted publickey for core from 10.0.0.1 port 35038 ssh2: RSA SHA256:TdhA+b7AFfsR49yUYzSzEma9Q+UWNVnDR75LIS9Grbw Feb 13 20:28:07.504144 sshd[1689]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 20:28:07.508251 systemd-logind[1523]: New session 4 of user core. Feb 13 20:28:07.515849 systemd[1]: Started session-4.scope - Session 4 of User core. Feb 13 20:28:07.567007 sshd[1689]: pam_unix(sshd:session): session closed for user core Feb 13 20:28:07.575863 systemd[1]: Started sshd@4-10.0.0.9:22-10.0.0.1:35052.service - OpenSSH per-connection server daemon (10.0.0.1:35052). Feb 13 20:28:07.576222 systemd[1]: sshd@3-10.0.0.9:22-10.0.0.1:35038.service: Deactivated successfully. Feb 13 20:28:07.578599 systemd[1]: session-4.scope: Deactivated successfully. Feb 13 20:28:07.578670 systemd-logind[1523]: Session 4 logged out. Waiting for processes to exit. Feb 13 20:28:07.579970 systemd-logind[1523]: Removed session 4. Feb 13 20:28:07.611004 sshd[1697]: Accepted publickey for core from 10.0.0.1 port 35052 ssh2: RSA SHA256:TdhA+b7AFfsR49yUYzSzEma9Q+UWNVnDR75LIS9Grbw Feb 13 20:28:07.612144 sshd[1697]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 20:28:07.616423 systemd-logind[1523]: New session 5 of user core. Feb 13 20:28:07.628852 systemd[1]: Started session-5.scope - Session 5 of User core. Feb 13 20:28:07.693856 sudo[1704]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Feb 13 20:28:07.694133 sudo[1704]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Feb 13 20:28:08.058976 systemd[1]: Starting docker.service - Docker Application Container Engine... Feb 13 20:28:08.059091 (dockerd)[1722]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Feb 13 20:28:08.338765 dockerd[1722]: time="2025-02-13T20:28:08.338486416Z" level=info msg="Starting up" Feb 13 20:28:08.579021 dockerd[1722]: time="2025-02-13T20:28:08.578741120Z" level=info msg="Loading containers: start." Feb 13 20:28:08.658652 kernel: Initializing XFRM netlink socket Feb 13 20:28:08.717229 systemd-networkd[1234]: docker0: Link UP Feb 13 20:28:08.735038 dockerd[1722]: time="2025-02-13T20:28:08.734993512Z" level=info msg="Loading containers: done." Feb 13 20:28:08.746004 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck2240533941-merged.mount: Deactivated successfully. 
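dockerd above has brought up the docker0 bridge and finished loading containers; in the entries that follow it reports completed initialization and an API listening on /run/docker.sock. A minimal sketch of a client round-trip against that socket using the standard Moby Go client (github.com/docker/docker/client) — only the socket path is taken from the log:

```go
// Sketch: query the daemon once it reports "API listen on /run/docker.sock".
package main

import (
	"context"
	"fmt"
	"log"

	"github.com/docker/docker/client"
)

func main() {
	cli, err := client.NewClientWithOpts(
		client.WithHost("unix:///run/docker.sock"),
		client.WithAPIVersionNegotiation(),
	)
	if err != nil {
		log.Fatal(err)
	}
	defer cli.Close()

	info, err := cli.Info(context.Background())
	if err != nil {
		log.Fatal(err)
	}
	// The daemon entries below report version=26.1.0 and storage-driver=overlay2.
	fmt.Println(info.ServerVersion, info.Driver)
}
```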
Feb 13 20:28:08.747197 dockerd[1722]: time="2025-02-13T20:28:08.747160066Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Feb 13 20:28:08.747279 dockerd[1722]: time="2025-02-13T20:28:08.747250975Z" level=info msg="Docker daemon" commit=061aa95809be396a6b5542618d8a34b02a21ff77 containerd-snapshotter=false storage-driver=overlay2 version=26.1.0 Feb 13 20:28:08.747377 dockerd[1722]: time="2025-02-13T20:28:08.747358329Z" level=info msg="Daemon has completed initialization" Feb 13 20:28:08.776705 dockerd[1722]: time="2025-02-13T20:28:08.776566486Z" level=info msg="API listen on /run/docker.sock" Feb 13 20:28:08.776855 systemd[1]: Started docker.service - Docker Application Container Engine. Feb 13 20:28:09.487297 containerd[1543]: time="2025-02-13T20:28:09.487253683Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.30.10\"" Feb 13 20:28:10.004691 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3463116318.mount: Deactivated successfully. Feb 13 20:28:10.869066 containerd[1543]: time="2025-02-13T20:28:10.869022420Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.30.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 20:28:10.870096 containerd[1543]: time="2025-02-13T20:28:10.870049344Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.30.10: active requests=0, bytes read=29865209" Feb 13 20:28:10.871282 containerd[1543]: time="2025-02-13T20:28:10.871224515Z" level=info msg="ImageCreate event name:\"sha256:deaeae5e8513d8c5921aee5b515f0fc2ac63b71dfe965318f71eb49468e74a4f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 20:28:10.873969 containerd[1543]: time="2025-02-13T20:28:10.873933734Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:63b2b4b4e9b5dcb5b1b6cec9d5f5f538791a40cd8cb273ef530e6d6535aa0b43\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 20:28:10.875220 containerd[1543]: time="2025-02-13T20:28:10.875172883Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.30.10\" with image id \"sha256:deaeae5e8513d8c5921aee5b515f0fc2ac63b71dfe965318f71eb49468e74a4f\", repo tag \"registry.k8s.io/kube-apiserver:v1.30.10\", repo digest \"registry.k8s.io/kube-apiserver@sha256:63b2b4b4e9b5dcb5b1b6cec9d5f5f538791a40cd8cb273ef530e6d6535aa0b43\", size \"29862007\" in 1.387875089s" Feb 13 20:28:10.875220 containerd[1543]: time="2025-02-13T20:28:10.875209994Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.30.10\" returns image reference \"sha256:deaeae5e8513d8c5921aee5b515f0fc2ac63b71dfe965318f71eb49468e74a4f\"" Feb 13 20:28:10.893316 containerd[1543]: time="2025-02-13T20:28:10.893227763Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.30.10\"" Feb 13 20:28:12.129146 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Feb 13 20:28:12.138798 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Feb 13 20:28:12.226821 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Feb 13 20:28:12.230474 (kubelet)[1948]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Feb 13 20:28:12.285640 kubelet[1948]: E0213 20:28:12.285574 1948 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Feb 13 20:28:12.288569 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Feb 13 20:28:12.288795 systemd[1]: kubelet.service: Failed with result 'exit-code'. Feb 13 20:28:13.050538 containerd[1543]: time="2025-02-13T20:28:13.050487054Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.30.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 20:28:13.051533 containerd[1543]: time="2025-02-13T20:28:13.051009284Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.30.10: active requests=0, bytes read=26898596" Feb 13 20:28:13.052307 containerd[1543]: time="2025-02-13T20:28:13.052265334Z" level=info msg="ImageCreate event name:\"sha256:e31753dd49b05da8fcb7deb26f2a5942a6747a0e6d4492f3dc8544123b97a3a2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 20:28:13.055344 containerd[1543]: time="2025-02-13T20:28:13.055285975Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:99b3336343ea48be24f1e64774825e9f8d5170bd2ed482ff336548eb824f5f58\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 20:28:13.057601 containerd[1543]: time="2025-02-13T20:28:13.057558070Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.30.10\" with image id \"sha256:e31753dd49b05da8fcb7deb26f2a5942a6747a0e6d4492f3dc8544123b97a3a2\", repo tag \"registry.k8s.io/kube-controller-manager:v1.30.10\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:99b3336343ea48be24f1e64774825e9f8d5170bd2ed482ff336548eb824f5f58\", size \"28302323\" in 2.164288877s" Feb 13 20:28:13.057601 containerd[1543]: time="2025-02-13T20:28:13.057598057Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.30.10\" returns image reference \"sha256:e31753dd49b05da8fcb7deb26f2a5942a6747a0e6d4492f3dc8544123b97a3a2\"" Feb 13 20:28:13.077615 containerd[1543]: time="2025-02-13T20:28:13.077520215Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.30.10\"" Feb 13 20:28:14.313580 containerd[1543]: time="2025-02-13T20:28:14.313521903Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.30.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 20:28:14.314060 containerd[1543]: time="2025-02-13T20:28:14.314018776Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.30.10: active requests=0, bytes read=16164936" Feb 13 20:28:14.314865 containerd[1543]: time="2025-02-13T20:28:14.314836598Z" level=info msg="ImageCreate event name:\"sha256:ea60c047fad7c01bf50f1f0259a4aeea2cc4401850d5a95802cc1d07d9021eb4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 20:28:14.317790 containerd[1543]: time="2025-02-13T20:28:14.317740392Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:cf7eb256192f1f51093fe278c209a9368f0675eb61ed01b148af47d2f21c002d\" labels:{key:\"io.cri-containerd.image\" 
value:\"managed\"}" Feb 13 20:28:14.319079 containerd[1543]: time="2025-02-13T20:28:14.319009700Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.30.10\" with image id \"sha256:ea60c047fad7c01bf50f1f0259a4aeea2cc4401850d5a95802cc1d07d9021eb4\", repo tag \"registry.k8s.io/kube-scheduler:v1.30.10\", repo digest \"registry.k8s.io/kube-scheduler@sha256:cf7eb256192f1f51093fe278c209a9368f0675eb61ed01b148af47d2f21c002d\", size \"17568681\" in 1.241445743s" Feb 13 20:28:14.319079 containerd[1543]: time="2025-02-13T20:28:14.319048102Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.30.10\" returns image reference \"sha256:ea60c047fad7c01bf50f1f0259a4aeea2cc4401850d5a95802cc1d07d9021eb4\"" Feb 13 20:28:14.337551 containerd[1543]: time="2025-02-13T20:28:14.337480860Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.30.10\"" Feb 13 20:28:15.544816 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1991281731.mount: Deactivated successfully. Feb 13 20:28:15.909583 containerd[1543]: time="2025-02-13T20:28:15.909447221Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.30.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 20:28:15.923398 containerd[1543]: time="2025-02-13T20:28:15.923332721Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.30.10: active requests=0, bytes read=25663372" Feb 13 20:28:15.936782 containerd[1543]: time="2025-02-13T20:28:15.936735556Z" level=info msg="ImageCreate event name:\"sha256:fa8af75a6512774cc93242474a9841ace82a7d0646001149fc65d92a8bb0c00a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 20:28:15.951120 containerd[1543]: time="2025-02-13T20:28:15.951013041Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:d112e804e548fce28d9f1e3282c9ce54e374451e6a2c41b1ca9d7fca5d1fcc48\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 20:28:15.951831 containerd[1543]: time="2025-02-13T20:28:15.951681416Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.30.10\" with image id \"sha256:fa8af75a6512774cc93242474a9841ace82a7d0646001149fc65d92a8bb0c00a\", repo tag \"registry.k8s.io/kube-proxy:v1.30.10\", repo digest \"registry.k8s.io/kube-proxy@sha256:d112e804e548fce28d9f1e3282c9ce54e374451e6a2c41b1ca9d7fca5d1fcc48\", size \"25662389\" in 1.614118121s" Feb 13 20:28:15.951831 containerd[1543]: time="2025-02-13T20:28:15.951714437Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.30.10\" returns image reference \"sha256:fa8af75a6512774cc93242474a9841ace82a7d0646001149fc65d92a8bb0c00a\"" Feb 13 20:28:15.969818 containerd[1543]: time="2025-02-13T20:28:15.969757565Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\"" Feb 13 20:28:16.602645 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1109131364.mount: Deactivated successfully. 
Feb 13 20:28:17.322178 containerd[1543]: time="2025-02-13T20:28:17.322130724Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 20:28:17.323230 containerd[1543]: time="2025-02-13T20:28:17.322980731Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.1: active requests=0, bytes read=16485383" Feb 13 20:28:17.324241 containerd[1543]: time="2025-02-13T20:28:17.324165004Z" level=info msg="ImageCreate event name:\"sha256:2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 20:28:17.327184 containerd[1543]: time="2025-02-13T20:28:17.327152789Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 20:28:17.329439 containerd[1543]: time="2025-02-13T20:28:17.329395626Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.1\" with image id \"sha256:2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.1\", repo digest \"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\", size \"16482581\" in 1.359597768s" Feb 13 20:28:17.329439 containerd[1543]: time="2025-02-13T20:28:17.329436491Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\" returns image reference \"sha256:2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93\"" Feb 13 20:28:17.347020 containerd[1543]: time="2025-02-13T20:28:17.346981363Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\"" Feb 13 20:28:17.787735 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3340308284.mount: Deactivated successfully. 
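The "size X in Y" results above allow a back-of-envelope throughput check: kube-proxy (25,662,389 bytes in 1.614s) comes in around 15 MiB/s, while coredns (16,482,581 bytes in 1.360s) is closer to 11.5 MiB/s; very small images like pause are dominated by registry round-trips rather than bandwidth, so their MiB/s figure is not meaningful. A sketch of the arithmetic, with the numbers copied from the log:

```go
// Sketch: pull throughput from the sizes and durations logged above
// (MiB = 2^20 bytes; small images are latency-bound, not bandwidth-bound).
package main

import "fmt"

func main() {
	pulls := []struct {
		name    string
		bytes   float64
		seconds float64
	}{
		{"kube-proxy:v1.30.10", 25662389, 1.614118121},
		{"coredns:v1.11.1", 16482581, 1.359597768},
	}
	for _, p := range pulls {
		fmt.Printf("%-22s %6.2f MiB/s\n", p.name, p.bytes/p.seconds/(1<<20))
	}
}
```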
Feb 13 20:28:17.791808 containerd[1543]: time="2025-02-13T20:28:17.791765568Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 20:28:17.792658 containerd[1543]: time="2025-02-13T20:28:17.792592925Z" level=info msg="stop pulling image registry.k8s.io/pause:3.9: active requests=0, bytes read=268823" Feb 13 20:28:17.793258 containerd[1543]: time="2025-02-13T20:28:17.793211486Z" level=info msg="ImageCreate event name:\"sha256:829e9de338bd5fdd3f16f68f83a9fb288fbc8453e881e5d5cfd0f6f2ff72b43e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 20:28:17.795710 containerd[1543]: time="2025-02-13T20:28:17.795652733Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 20:28:17.796639 containerd[1543]: time="2025-02-13T20:28:17.796481768Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.9\" with image id \"sha256:829e9de338bd5fdd3f16f68f83a9fb288fbc8453e881e5d5cfd0f6f2ff72b43e\", repo tag \"registry.k8s.io/pause:3.9\", repo digest \"registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097\", size \"268051\" in 449.463256ms" Feb 13 20:28:17.796639 containerd[1543]: time="2025-02-13T20:28:17.796513206Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\" returns image reference \"sha256:829e9de338bd5fdd3f16f68f83a9fb288fbc8453e881e5d5cfd0f6f2ff72b43e\"" Feb 13 20:28:17.813993 containerd[1543]: time="2025-02-13T20:28:17.813938361Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.12-0\"" Feb 13 20:28:18.444027 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2574860682.mount: Deactivated successfully. Feb 13 20:28:21.609669 containerd[1543]: time="2025-02-13T20:28:21.609560606Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.12-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 20:28:21.610284 containerd[1543]: time="2025-02-13T20:28:21.610243184Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.12-0: active requests=0, bytes read=66191474" Feb 13 20:28:21.611057 containerd[1543]: time="2025-02-13T20:28:21.611026121Z" level=info msg="ImageCreate event name:\"sha256:014faa467e29798aeef733fe6d1a3b5e382688217b053ad23410e6cccd5d22fd\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 20:28:21.614242 containerd[1543]: time="2025-02-13T20:28:21.614196281Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:44a8e24dcbba3470ee1fee21d5e88d128c936e9b55d4bc51fbef8086f8ed123b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 20:28:21.615665 containerd[1543]: time="2025-02-13T20:28:21.615496647Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.12-0\" with image id \"sha256:014faa467e29798aeef733fe6d1a3b5e382688217b053ad23410e6cccd5d22fd\", repo tag \"registry.k8s.io/etcd:3.5.12-0\", repo digest \"registry.k8s.io/etcd@sha256:44a8e24dcbba3470ee1fee21d5e88d128c936e9b55d4bc51fbef8086f8ed123b\", size \"66189079\" in 3.801473716s" Feb 13 20:28:21.615665 containerd[1543]: time="2025-02-13T20:28:21.615532499Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.12-0\" returns image reference \"sha256:014faa467e29798aeef733fe6d1a3b5e382688217b053ad23410e6cccd5d22fd\"" Feb 13 20:28:22.440352 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. 
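The "restart counter is at 2" line is systemd's own restart bookkeeping for kubelet.service, which it keeps rescheduling after each config-file failure. A hedged sketch of reading that same counter over systemd's D-Bus API (github.com/coreos/go-systemd/v22; NRestarts is the Service-level property behind the logged counter, API names assumed from that library):

```go
// Sketch: read the restart counter systemd logs as "restart counter is at N".
package main

import (
	"context"
	"fmt"
	"log"

	"github.com/coreos/go-systemd/v22/dbus"
)

func main() {
	ctx := context.Background()
	conn, err := dbus.NewSystemConnectionContext(ctx)
	if err != nil {
		log.Fatal(err)
	}
	defer conn.Close()

	// NRestarts lives on the Service interface, not the generic Unit one.
	props, err := conn.GetUnitTypePropertiesContext(ctx, "kubelet.service", "Service")
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println("kubelet.service NRestarts:", props["NRestarts"])
}
```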
Feb 13 20:28:22.453840 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Feb 13 20:28:22.661918 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Feb 13 20:28:22.663921 (kubelet)[2185]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Feb 13 20:28:22.700728 kubelet[2185]: E0213 20:28:22.700607 2185 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Feb 13 20:28:22.702852 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Feb 13 20:28:22.702975 systemd[1]: kubelet.service: Failed with result 'exit-code'. Feb 13 20:28:26.129527 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Feb 13 20:28:26.143855 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Feb 13 20:28:26.162241 systemd[1]: Reloading requested from client PID 2202 ('systemctl') (unit session-5.scope)... Feb 13 20:28:26.162362 systemd[1]: Reloading... Feb 13 20:28:26.215658 zram_generator::config[2242]: No configuration found. Feb 13 20:28:26.481049 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Feb 13 20:28:26.529795 systemd[1]: Reloading finished in 367 ms. Feb 13 20:28:26.570371 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Feb 13 20:28:26.570448 systemd[1]: kubelet.service: Failed with result 'signal'. Feb 13 20:28:26.570818 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Feb 13 20:28:26.572869 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Feb 13 20:28:26.658187 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Feb 13 20:28:26.663049 (kubelet)[2298]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Feb 13 20:28:26.703894 kubelet[2298]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Feb 13 20:28:26.704940 kubelet[2298]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Feb 13 20:28:26.704940 kubelet[2298]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
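The deprecation warnings above say --container-runtime-endpoint and --volume-plugin-dir should move into the kubelet config file. A hedged sketch of the corresponding KubeletConfiguration fields (Go types from k8s.io/kubelet/config/v1beta1; field availability assumes kubelet >= 1.27, and the kubelet below reports v1.30.1 — the endpoint value is an assumption based on the containerd socket logged earlier, the plugin dir matches the flexvolume path the kubelet logs below):

```go
// Sketch: config-file equivalents of the deprecated kubelet flags warned
// about above.
package main

import (
	"fmt"

	kubeletconfig "k8s.io/kubelet/config/v1beta1"
)

func main() {
	cfg := kubeletconfig.KubeletConfiguration{
		// replaces --container-runtime-endpoint (socket assumed from earlier entries)
		ContainerRuntimeEndpoint: "unix:///run/containerd/containerd.sock",
		// replaces --volume-plugin-dir (path from the flexvolume entry below)
		VolumePluginDir: "/opt/libexec/kubernetes/kubelet-plugins/volume/exec/",
	}
	fmt.Println(cfg.ContainerRuntimeEndpoint, cfg.VolumePluginDir)
}
```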
Feb 13 20:28:26.704940 kubelet[2298]: I0213 20:28:26.704042 2298 server.go:205] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Feb 13 20:28:27.498894 kubelet[2298]: I0213 20:28:27.498847 2298 server.go:484] "Kubelet version" kubeletVersion="v1.30.1" Feb 13 20:28:27.498894 kubelet[2298]: I0213 20:28:27.498881 2298 server.go:486] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Feb 13 20:28:27.499093 kubelet[2298]: I0213 20:28:27.499080 2298 server.go:927] "Client rotation is on, will bootstrap in background" Feb 13 20:28:27.520780 kubelet[2298]: E0213 20:28:27.520740 2298 certificate_manager.go:562] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post "https://10.0.0.9:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 10.0.0.9:6443: connect: connection refused Feb 13 20:28:27.520897 kubelet[2298]: I0213 20:28:27.520854 2298 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Feb 13 20:28:27.530039 kubelet[2298]: I0213 20:28:27.530006 2298 server.go:742] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Feb 13 20:28:27.530569 kubelet[2298]: I0213 20:28:27.530535 2298 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Feb 13 20:28:27.530757 kubelet[2298]: I0213 20:28:27.530564 2298 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"cgroupfs","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null} Feb 13 20:28:27.530836 kubelet[2298]: I0213 20:28:27.530827 2298 topology_manager.go:138] "Creating topology manager with none policy" Feb 13 20:28:27.530869 kubelet[2298]: I0213 20:28:27.530836 2298 container_manager_linux.go:301] "Creating device plugin manager" Feb 13 20:28:27.531104 kubelet[2298]: I0213 20:28:27.531082 2298 state_mem.go:36] "Initialized new in-memory state store" Feb 13 
20:28:27.533847 kubelet[2298]: I0213 20:28:27.533822 2298 kubelet.go:400] "Attempting to sync node with API server" Feb 13 20:28:27.533889 kubelet[2298]: I0213 20:28:27.533851 2298 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests" Feb 13 20:28:27.534153 kubelet[2298]: I0213 20:28:27.534141 2298 kubelet.go:312] "Adding apiserver pod source" Feb 13 20:28:27.534501 kubelet[2298]: I0213 20:28:27.534290 2298 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Feb 13 20:28:27.534555 kubelet[2298]: W0213 20:28:27.534484 2298 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.9:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.9:6443: connect: connection refused Feb 13 20:28:27.534555 kubelet[2298]: E0213 20:28:27.534544 2298 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://10.0.0.9:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.9:6443: connect: connection refused Feb 13 20:28:27.534931 kubelet[2298]: W0213 20:28:27.534900 2298 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.9:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.0.0.9:6443: connect: connection refused Feb 13 20:28:27.534968 kubelet[2298]: E0213 20:28:27.534938 2298 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.0.0.9:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.0.0.9:6443: connect: connection refused Feb 13 20:28:27.539429 kubelet[2298]: I0213 20:28:27.537859 2298 kuberuntime_manager.go:261] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Feb 13 20:28:27.539429 kubelet[2298]: I0213 20:28:27.538234 2298 kubelet.go:815] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Feb 13 20:28:27.539429 kubelet[2298]: W0213 20:28:27.538405 2298 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. 
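The reflector warnings above are client-go informers failing their initial LIST against https://10.0.0.9:6443, where nothing is serving yet; this is the normal chicken-and-egg phase in which the kubelet must first launch the static kube-apiserver pod before its own watches (and the lease renewals retried with a doubling interval below) can succeed. A hedged sketch of the same call path — the endpoint and field selector are from the log, while credentials are omitted, so a real run would also need the TLS bootstrap the log mentions:

```go
// Sketch: issue the same LIST the reflectors above keep retrying. On this
// node it fails with "connection refused" until kube-apiserver is up.
package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/rest"
)

func main() {
	cfg := &rest.Config{
		Host: "https://10.0.0.9:6443", // endpoint from the log
		// A real kubelet client carries bootstrap/client certificates here.
		TLSClientConfig: rest.TLSClientConfig{Insecure: true},
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	_, err = cs.CoreV1().Nodes().List(context.Background(), metav1.ListOptions{
		FieldSelector: "metadata.name=localhost", // same selector as the log line
	})
	fmt.Println("list nodes:", err) // expect: connect: connection refused
}
```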
Feb 13 20:28:27.539429 kubelet[2298]: I0213 20:28:27.539249 2298 server.go:1264] "Started kubelet" Feb 13 20:28:27.541769 kubelet[2298]: I0213 20:28:27.541722 2298 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Feb 13 20:28:27.541903 kubelet[2298]: I0213 20:28:27.541885 2298 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Feb 13 20:28:27.542064 kubelet[2298]: I0213 20:28:27.542024 2298 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Feb 13 20:28:27.542266 kubelet[2298]: I0213 20:28:27.542246 2298 server.go:227] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Feb 13 20:28:27.542995 kubelet[2298]: I0213 20:28:27.542915 2298 desired_state_of_world_populator.go:149] "Desired state populator starts to run" Feb 13 20:28:27.548220 kubelet[2298]: I0213 20:28:27.544748 2298 volume_manager.go:291] "Starting Kubelet Volume Manager" Feb 13 20:28:27.549217 kubelet[2298]: I0213 20:28:27.548912 2298 reconciler.go:26] "Reconciler: start to sync state" Feb 13 20:28:27.549217 kubelet[2298]: I0213 20:28:27.549016 2298 factory.go:221] Registration of the systemd container factory successfully Feb 13 20:28:27.549217 kubelet[2298]: I0213 20:28:27.549098 2298 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Feb 13 20:28:27.549509 kubelet[2298]: W0213 20:28:27.549474 2298 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.9:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.9:6443: connect: connection refused Feb 13 20:28:27.549596 kubelet[2298]: E0213 20:28:27.549584 2298 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://10.0.0.9:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.9:6443: connect: connection refused Feb 13 20:28:27.549668 kubelet[2298]: I0213 20:28:27.549503 2298 server.go:455] "Adding debug handlers to kubelet server" Feb 13 20:28:27.550662 kubelet[2298]: E0213 20:28:27.550612 2298 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.9:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.9:6443: connect: connection refused" interval="200ms" Feb 13 20:28:27.552850 kubelet[2298]: I0213 20:28:27.552794 2298 factory.go:221] Registration of the containerd container factory successfully Feb 13 20:28:27.561338 kubelet[2298]: E0213 20:28:27.560548 2298 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.9:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.9:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.1823de7c884d5b4f default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-02-13 20:28:27.539225423 +0000 UTC m=+0.870387527,LastTimestamp:2025-02-13 20:28:27.539225423 +0000 UTC m=+0.870387527,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 
UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Feb 13 20:28:27.564665 kubelet[2298]: E0213 20:28:27.564031 2298 kubelet.go:1467] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Feb 13 20:28:27.567515 kubelet[2298]: I0213 20:28:27.567354 2298 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Feb 13 20:28:27.568899 kubelet[2298]: I0213 20:28:27.568436 2298 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Feb 13 20:28:27.568899 kubelet[2298]: I0213 20:28:27.568598 2298 status_manager.go:217] "Starting to sync pod status with apiserver" Feb 13 20:28:27.568899 kubelet[2298]: I0213 20:28:27.568628 2298 kubelet.go:2337] "Starting kubelet main sync loop" Feb 13 20:28:27.568899 kubelet[2298]: E0213 20:28:27.568683 2298 kubelet.go:2361] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Feb 13 20:28:27.569059 kubelet[2298]: W0213 20:28:27.568929 2298 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.9:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.9:6443: connect: connection refused Feb 13 20:28:27.569059 kubelet[2298]: E0213 20:28:27.568957 2298 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://10.0.0.9:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.9:6443: connect: connection refused Feb 13 20:28:27.573388 kubelet[2298]: I0213 20:28:27.573370 2298 cpu_manager.go:214] "Starting CPU manager" policy="none" Feb 13 20:28:27.573526 kubelet[2298]: I0213 20:28:27.573515 2298 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Feb 13 20:28:27.573588 kubelet[2298]: I0213 20:28:27.573580 2298 state_mem.go:36] "Initialized new in-memory state store" Feb 13 20:28:27.586345 kubelet[2298]: I0213 20:28:27.586317 2298 policy_none.go:49] "None policy: Start" Feb 13 20:28:27.587239 kubelet[2298]: I0213 20:28:27.587208 2298 memory_manager.go:170] "Starting memorymanager" policy="None" Feb 13 20:28:27.587239 kubelet[2298]: I0213 20:28:27.587235 2298 state_mem.go:35] "Initializing new in-memory state store" Feb 13 20:28:27.592392 kubelet[2298]: I0213 20:28:27.592367 2298 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Feb 13 20:28:27.593034 kubelet[2298]: I0213 20:28:27.592683 2298 container_log_manager.go:186] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Feb 13 20:28:27.593034 kubelet[2298]: I0213 20:28:27.592790 2298 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Feb 13 20:28:27.594132 kubelet[2298]: E0213 20:28:27.594110 2298 eviction_manager.go:282] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Feb 13 20:28:27.644618 kubelet[2298]: I0213 20:28:27.644578 2298 kubelet_node_status.go:73] "Attempting to register node" node="localhost" Feb 13 20:28:27.645134 kubelet[2298]: E0213 20:28:27.645081 2298 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.0.0.9:6443/api/v1/nodes\": dial tcp 10.0.0.9:6443: connect: connection refused" node="localhost" Feb 13 20:28:27.669474 kubelet[2298]: I0213 
20:28:27.669409 2298 topology_manager.go:215] "Topology Admit Handler" podUID="8d610d6c43052dbc8df47eb68906a982" podNamespace="kube-system" podName="kube-scheduler-localhost" Feb 13 20:28:27.670631 kubelet[2298]: I0213 20:28:27.670584 2298 topology_manager.go:215] "Topology Admit Handler" podUID="30b89b75cae84ccf9ce2cd85783b1230" podNamespace="kube-system" podName="kube-apiserver-localhost" Feb 13 20:28:27.671698 kubelet[2298]: I0213 20:28:27.671381 2298 topology_manager.go:215] "Topology Admit Handler" podUID="dd3721fb1a67092819e35b40473f4063" podNamespace="kube-system" podName="kube-controller-manager-localhost" Feb 13 20:28:27.750032 kubelet[2298]: I0213 20:28:27.749932 2298 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/dd3721fb1a67092819e35b40473f4063-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"dd3721fb1a67092819e35b40473f4063\") " pod="kube-system/kube-controller-manager-localhost" Feb 13 20:28:27.750032 kubelet[2298]: I0213 20:28:27.749976 2298 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/dd3721fb1a67092819e35b40473f4063-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"dd3721fb1a67092819e35b40473f4063\") " pod="kube-system/kube-controller-manager-localhost" Feb 13 20:28:27.750032 kubelet[2298]: I0213 20:28:27.750000 2298 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/dd3721fb1a67092819e35b40473f4063-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"dd3721fb1a67092819e35b40473f4063\") " pod="kube-system/kube-controller-manager-localhost" Feb 13 20:28:27.750032 kubelet[2298]: I0213 20:28:27.750020 2298 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/8d610d6c43052dbc8df47eb68906a982-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"8d610d6c43052dbc8df47eb68906a982\") " pod="kube-system/kube-scheduler-localhost" Feb 13 20:28:27.750032 kubelet[2298]: I0213 20:28:27.750035 2298 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/30b89b75cae84ccf9ce2cd85783b1230-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"30b89b75cae84ccf9ce2cd85783b1230\") " pod="kube-system/kube-apiserver-localhost" Feb 13 20:28:27.750471 kubelet[2298]: I0213 20:28:27.750050 2298 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/30b89b75cae84ccf9ce2cd85783b1230-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"30b89b75cae84ccf9ce2cd85783b1230\") " pod="kube-system/kube-apiserver-localhost" Feb 13 20:28:27.750471 kubelet[2298]: I0213 20:28:27.750065 2298 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/dd3721fb1a67092819e35b40473f4063-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"dd3721fb1a67092819e35b40473f4063\") " pod="kube-system/kube-controller-manager-localhost" Feb 13 20:28:27.750471 kubelet[2298]: I0213 20:28:27.750080 2298 reconciler_common.go:247] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/dd3721fb1a67092819e35b40473f4063-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"dd3721fb1a67092819e35b40473f4063\") " pod="kube-system/kube-controller-manager-localhost" Feb 13 20:28:27.750471 kubelet[2298]: I0213 20:28:27.750099 2298 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/30b89b75cae84ccf9ce2cd85783b1230-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"30b89b75cae84ccf9ce2cd85783b1230\") " pod="kube-system/kube-apiserver-localhost" Feb 13 20:28:27.751180 kubelet[2298]: E0213 20:28:27.751123 2298 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.9:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.9:6443: connect: connection refused" interval="400ms" Feb 13 20:28:27.846572 kubelet[2298]: I0213 20:28:27.846545 2298 kubelet_node_status.go:73] "Attempting to register node" node="localhost" Feb 13 20:28:27.846917 kubelet[2298]: E0213 20:28:27.846886 2298 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.0.0.9:6443/api/v1/nodes\": dial tcp 10.0.0.9:6443: connect: connection refused" node="localhost" Feb 13 20:28:27.975670 kubelet[2298]: E0213 20:28:27.975589 2298 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 20:28:27.976026 kubelet[2298]: E0213 20:28:27.975985 2298 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 20:28:27.976367 containerd[1543]: time="2025-02-13T20:28:27.976321863Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:8d610d6c43052dbc8df47eb68906a982,Namespace:kube-system,Attempt:0,}" Feb 13 20:28:27.976727 containerd[1543]: time="2025-02-13T20:28:27.976334059Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:30b89b75cae84ccf9ce2cd85783b1230,Namespace:kube-system,Attempt:0,}" Feb 13 20:28:27.978549 kubelet[2298]: E0213 20:28:27.978522 2298 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 20:28:27.978851 containerd[1543]: time="2025-02-13T20:28:27.978816733Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:dd3721fb1a67092819e35b40473f4063,Namespace:kube-system,Attempt:0,}" Feb 13 20:28:28.152244 kubelet[2298]: E0213 20:28:28.152151 2298 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.9:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.9:6443: connect: connection refused" interval="800ms" Feb 13 20:28:28.248401 kubelet[2298]: I0213 20:28:28.248375 2298 kubelet_node_status.go:73] "Attempting to register node" node="localhost" Feb 13 20:28:28.248707 kubelet[2298]: E0213 20:28:28.248681 2298 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.0.0.9:6443/api/v1/nodes\": dial tcp 10.0.0.9:6443: connect: connection refused" node="localhost" Feb 13 
20:28:28.406985 kubelet[2298]: W0213 20:28:28.406868 2298 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.9:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.9:6443: connect: connection refused Feb 13 20:28:28.406985 kubelet[2298]: E0213 20:28:28.406931 2298 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://10.0.0.9:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.9:6443: connect: connection refused Feb 13 20:28:28.611730 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3802028157.mount: Deactivated successfully. Feb 13 20:28:28.616266 containerd[1543]: time="2025-02-13T20:28:28.616224448Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Feb 13 20:28:28.617064 containerd[1543]: time="2025-02-13T20:28:28.617040994Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=269175" Feb 13 20:28:28.617721 containerd[1543]: time="2025-02-13T20:28:28.617674636Z" level=info msg="ImageCreate event name:\"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Feb 13 20:28:28.618911 containerd[1543]: time="2025-02-13T20:28:28.618880939Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Feb 13 20:28:28.619485 containerd[1543]: time="2025-02-13T20:28:28.619450522Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Feb 13 20:28:28.619605 containerd[1543]: time="2025-02-13T20:28:28.619586319Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Feb 13 20:28:28.620147 containerd[1543]: time="2025-02-13T20:28:28.620126351Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Feb 13 20:28:28.622673 containerd[1543]: time="2025-02-13T20:28:28.622642726Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Feb 13 20:28:28.623935 containerd[1543]: time="2025-02-13T20:28:28.623902453Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 647.49466ms" Feb 13 20:28:28.624759 containerd[1543]: time="2025-02-13T20:28:28.624734353Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 
645.860399ms" Feb 13 20:28:28.627152 containerd[1543]: time="2025-02-13T20:28:28.627124327Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 650.614971ms" Feb 13 20:28:28.755075 containerd[1543]: time="2025-02-13T20:28:28.754941724Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 20:28:28.755184 containerd[1543]: time="2025-02-13T20:28:28.755100314Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 20:28:28.755184 containerd[1543]: time="2025-02-13T20:28:28.755144740Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 20:28:28.755371 containerd[1543]: time="2025-02-13T20:28:28.755313048Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 20:28:28.756080 containerd[1543]: time="2025-02-13T20:28:28.755950329Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 20:28:28.756080 containerd[1543]: time="2025-02-13T20:28:28.756035103Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 20:28:28.756080 containerd[1543]: time="2025-02-13T20:28:28.756048698Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 20:28:28.756194 containerd[1543]: time="2025-02-13T20:28:28.756136791Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 20:28:28.759596 containerd[1543]: time="2025-02-13T20:28:28.756277187Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 20:28:28.759596 containerd[1543]: time="2025-02-13T20:28:28.756388272Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 20:28:28.759596 containerd[1543]: time="2025-02-13T20:28:28.756409146Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 20:28:28.759596 containerd[1543]: time="2025-02-13T20:28:28.756492920Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 20:28:28.778195 kubelet[2298]: W0213 20:28:28.778124 2298 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.9:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.9:6443: connect: connection refused Feb 13 20:28:28.778195 kubelet[2298]: E0213 20:28:28.778197 2298 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://10.0.0.9:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.9:6443: connect: connection refused Feb 13 20:28:28.805984 containerd[1543]: time="2025-02-13T20:28:28.805940770Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:8d610d6c43052dbc8df47eb68906a982,Namespace:kube-system,Attempt:0,} returns sandbox id \"60b35921a93f09f61a75735d5eaf41ce694c7080b652eb7d07effa6bafa56acd\"" Feb 13 20:28:28.818288 containerd[1543]: time="2025-02-13T20:28:28.818246411Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:30b89b75cae84ccf9ce2cd85783b1230,Namespace:kube-system,Attempt:0,} returns sandbox id \"395e3428e0c06080751227dcbe9f474de86e7e898c34a3ca722972d58ad18922\"" Feb 13 20:28:28.819390 containerd[1543]: time="2025-02-13T20:28:28.819341589Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:dd3721fb1a67092819e35b40473f4063,Namespace:kube-system,Attempt:0,} returns sandbox id \"81bfc5ef9305ace119b80236542cd03dfb29ad4298d696f00f954afa8d682830\"" Feb 13 20:28:28.824021 kubelet[2298]: E0213 20:28:28.821750 2298 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 20:28:28.824021 kubelet[2298]: E0213 20:28:28.823557 2298 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 20:28:28.824397 kubelet[2298]: E0213 20:28:28.824206 2298 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 20:28:28.825800 containerd[1543]: time="2025-02-13T20:28:28.825775021Z" level=info msg="CreateContainer within sandbox \"60b35921a93f09f61a75735d5eaf41ce694c7080b652eb7d07effa6bafa56acd\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Feb 13 20:28:28.826045 containerd[1543]: time="2025-02-13T20:28:28.826021105Z" level=info msg="CreateContainer within sandbox \"395e3428e0c06080751227dcbe9f474de86e7e898c34a3ca722972d58ad18922\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Feb 13 20:28:28.827137 containerd[1543]: time="2025-02-13T20:28:28.827110485Z" level=info msg="CreateContainer within sandbox \"81bfc5ef9305ace119b80236542cd03dfb29ad4298d696f00f954afa8d682830\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Feb 13 20:28:28.865783 containerd[1543]: time="2025-02-13T20:28:28.865731314Z" level=info msg="CreateContainer within sandbox \"395e3428e0c06080751227dcbe9f474de86e7e898c34a3ca722972d58ad18922\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"cba33ef083588e487f9973cdbac40e3c1769f698773b2965961905d88cf32771\"" Feb 13 20:28:28.866493 containerd[1543]: 
time="2025-02-13T20:28:28.866471003Z" level=info msg="StartContainer for \"cba33ef083588e487f9973cdbac40e3c1769f698773b2965961905d88cf32771\"" Feb 13 20:28:28.868763 containerd[1543]: time="2025-02-13T20:28:28.868727259Z" level=info msg="CreateContainer within sandbox \"60b35921a93f09f61a75735d5eaf41ce694c7080b652eb7d07effa6bafa56acd\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"892a29dba35847fb13d53f8d100e59fe01881afb2fe10c40a56f90c76ee511b6\"" Feb 13 20:28:28.869585 containerd[1543]: time="2025-02-13T20:28:28.869547043Z" level=info msg="StartContainer for \"892a29dba35847fb13d53f8d100e59fe01881afb2fe10c40a56f90c76ee511b6\"" Feb 13 20:28:28.869763 containerd[1543]: time="2025-02-13T20:28:28.869563438Z" level=info msg="CreateContainer within sandbox \"81bfc5ef9305ace119b80236542cd03dfb29ad4298d696f00f954afa8d682830\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"d7c154fe34854653af0f0f9a29b190194b1ac214db9f6b859514bc9d6d4c3f63\"" Feb 13 20:28:28.870991 containerd[1543]: time="2025-02-13T20:28:28.870015897Z" level=info msg="StartContainer for \"d7c154fe34854653af0f0f9a29b190194b1ac214db9f6b859514bc9d6d4c3f63\"" Feb 13 20:28:28.899021 kubelet[2298]: W0213 20:28:28.898720 2298 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.9:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.0.0.9:6443: connect: connection refused Feb 13 20:28:28.899021 kubelet[2298]: E0213 20:28:28.898778 2298 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.0.0.9:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.0.0.9:6443: connect: connection refused Feb 13 20:28:28.912292 kubelet[2298]: W0213 20:28:28.912219 2298 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.9:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.9:6443: connect: connection refused Feb 13 20:28:28.912292 kubelet[2298]: E0213 20:28:28.912281 2298 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://10.0.0.9:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.9:6443: connect: connection refused Feb 13 20:28:28.953681 kubelet[2298]: E0213 20:28:28.953635 2298 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.9:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.9:6443: connect: connection refused" interval="1.6s" Feb 13 20:28:28.965307 containerd[1543]: time="2025-02-13T20:28:28.961284618Z" level=info msg="StartContainer for \"cba33ef083588e487f9973cdbac40e3c1769f698773b2965961905d88cf32771\" returns successfully" Feb 13 20:28:28.965307 containerd[1543]: time="2025-02-13T20:28:28.961322566Z" level=info msg="StartContainer for \"d7c154fe34854653af0f0f9a29b190194b1ac214db9f6b859514bc9d6d4c3f63\" returns successfully" Feb 13 20:28:28.965307 containerd[1543]: time="2025-02-13T20:28:28.961310809Z" level=info msg="StartContainer for \"892a29dba35847fb13d53f8d100e59fe01881afb2fe10c40a56f90c76ee511b6\" returns successfully" Feb 13 20:28:29.055888 kubelet[2298]: I0213 20:28:29.054780 2298 kubelet_node_status.go:73] "Attempting to register node" node="localhost" Feb 13 20:28:29.055888 kubelet[2298]: 
E0213 20:28:29.055116 2298 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.0.0.9:6443/api/v1/nodes\": dial tcp 10.0.0.9:6443: connect: connection refused" node="localhost" Feb 13 20:28:29.580705 kubelet[2298]: E0213 20:28:29.579159 2298 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 20:28:29.583538 kubelet[2298]: E0213 20:28:29.583477 2298 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 20:28:29.585328 kubelet[2298]: E0213 20:28:29.585308 2298 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 20:28:30.557230 kubelet[2298]: E0213 20:28:30.557187 2298 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"localhost\" not found" node="localhost" Feb 13 20:28:30.586866 kubelet[2298]: E0213 20:28:30.586834 2298 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 20:28:30.656883 kubelet[2298]: I0213 20:28:30.656472 2298 kubelet_node_status.go:73] "Attempting to register node" node="localhost" Feb 13 20:28:30.664761 kubelet[2298]: I0213 20:28:30.664725 2298 kubelet_node_status.go:76] "Successfully registered node" node="localhost" Feb 13 20:28:30.671259 kubelet[2298]: E0213 20:28:30.671201 2298 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" Feb 13 20:28:30.771815 kubelet[2298]: E0213 20:28:30.771775 2298 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" Feb 13 20:28:30.872265 kubelet[2298]: E0213 20:28:30.872145 2298 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" Feb 13 20:28:31.537152 kubelet[2298]: I0213 20:28:31.536944 2298 apiserver.go:52] "Watching apiserver" Feb 13 20:28:31.543233 kubelet[2298]: I0213 20:28:31.543211 2298 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world" Feb 13 20:28:32.134953 systemd[1]: Reloading requested from client PID 2574 ('systemctl') (unit session-5.scope)... Feb 13 20:28:32.134968 systemd[1]: Reloading... Feb 13 20:28:32.186665 zram_generator::config[2614]: No configuration found. Feb 13 20:28:32.272553 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Feb 13 20:28:32.327638 systemd[1]: Reloading finished in 192 ms. Feb 13 20:28:32.352907 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... 
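The "Nameserver limits exceeded" warnings repeating above come from kubelet's DNS configurer: the glibc resolver honors at most three nameserver entries, so kubelet applies only the first three from the node's resolv.conf and drops the rest (here the applied line is 1.1.1.1, 1.0.0.1 and 8.8.8.8). A minimal sketch of a compliant resolver file, assuming those three upstreams are the ones worth keeping (the addresses are simply the ones from the applied line in the log):

    # /etc/resolv.conf -- only the first three nameserver lines are honored
    nameserver 1.1.1.1
    nameserver 1.0.0.1
    nameserver 8.8.8.8

If kubelet is pointed at a different file via the resolvConf field of its configuration, that file is the one to trim; either way the warning stops once no more than three nameservers are listed.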
Feb 13 20:28:32.353066 kubelet[2298]: I0213 20:28:32.352899 2298 dynamic_cafile_content.go:171] "Shutting down controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Feb 13 20:28:32.353066 kubelet[2298]: E0213 20:28:32.352835 2298 event.go:319] "Unable to write event (broadcaster is shut down)" event="&Event{ObjectMeta:{localhost.1823de7c884d5b4f default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-02-13 20:28:27.539225423 +0000 UTC m=+0.870387527,LastTimestamp:2025-02-13 20:28:27.539225423 +0000 UTC m=+0.870387527,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Feb 13 20:28:32.367021 systemd[1]: kubelet.service: Deactivated successfully. Feb 13 20:28:32.367313 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Feb 13 20:28:32.378945 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Feb 13 20:28:32.458758 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Feb 13 20:28:32.462157 (kubelet)[2665]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Feb 13 20:28:32.496324 kubelet[2665]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Feb 13 20:28:32.496324 kubelet[2665]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Feb 13 20:28:32.496324 kubelet[2665]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Feb 13 20:28:32.496699 kubelet[2665]: I0213 20:28:32.496369 2665 server.go:205] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Feb 13 20:28:32.500052 kubelet[2665]: I0213 20:28:32.500019 2665 server.go:484] "Kubelet version" kubeletVersion="v1.30.1" Feb 13 20:28:32.500052 kubelet[2665]: I0213 20:28:32.500046 2665 server.go:486] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Feb 13 20:28:32.500215 kubelet[2665]: I0213 20:28:32.500201 2665 server.go:927] "Client rotation is on, will bootstrap in background" Feb 13 20:28:32.501522 kubelet[2665]: I0213 20:28:32.501499 2665 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Feb 13 20:28:32.502676 kubelet[2665]: I0213 20:28:32.502594 2665 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Feb 13 20:28:32.507231 kubelet[2665]: I0213 20:28:32.507202 2665 server.go:742] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Feb 13 20:28:32.507565 kubelet[2665]: I0213 20:28:32.507532 2665 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Feb 13 20:28:32.507718 kubelet[2665]: I0213 20:28:32.507558 2665 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"cgroupfs","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null} Feb 13 20:28:32.507718 kubelet[2665]: I0213 20:28:32.507718 2665 topology_manager.go:138] "Creating topology manager with none policy" Feb 13 20:28:32.507811 kubelet[2665]: I0213 20:28:32.507726 2665 container_manager_linux.go:301] "Creating device plugin manager" Feb 13 20:28:32.507811 kubelet[2665]: I0213 20:28:32.507757 2665 state_mem.go:36] "Initialized new in-memory state store" Feb 13 20:28:32.507867 kubelet[2665]: I0213 20:28:32.507838 2665 kubelet.go:400] "Attempting to sync node with API server" Feb 13 20:28:32.507867 kubelet[2665]: I0213 20:28:32.507848 2665 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests" Feb 13 20:28:32.507910 kubelet[2665]: I0213 20:28:32.507872 2665 kubelet.go:312] "Adding apiserver pod source" Feb 13 20:28:32.507910 kubelet[2665]: I0213 20:28:32.507888 2665 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Feb 13 20:28:32.508290 kubelet[2665]: I0213 20:28:32.508270 2665 kuberuntime_manager.go:261] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Feb 13 20:28:32.508412 kubelet[2665]: I0213 20:28:32.508399 2665 kubelet.go:815] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Feb 13 20:28:32.509729 kubelet[2665]: I0213 20:28:32.509705 2665 server.go:1264] "Started kubelet" Feb 13 20:28:32.510650 kubelet[2665]: I0213 20:28:32.510010 2665 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Feb 13 20:28:32.510650 kubelet[2665]: I0213 20:28:32.510235 2665 server.go:227] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Feb 13 
20:28:32.511427 kubelet[2665]: I0213 20:28:32.511405 2665 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Feb 13 20:28:32.516443 kubelet[2665]: I0213 20:28:32.515969 2665 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Feb 13 20:28:32.517081 kubelet[2665]: I0213 20:28:32.517055 2665 server.go:455] "Adding debug handlers to kubelet server" Feb 13 20:28:32.518207 kubelet[2665]: I0213 20:28:32.518169 2665 volume_manager.go:291] "Starting Kubelet Volume Manager" Feb 13 20:28:32.521651 kubelet[2665]: I0213 20:28:32.520281 2665 desired_state_of_world_populator.go:149] "Desired state populator starts to run" Feb 13 20:28:32.521651 kubelet[2665]: I0213 20:28:32.520604 2665 reconciler.go:26] "Reconciler: start to sync state" Feb 13 20:28:32.528521 kubelet[2665]: I0213 20:28:32.527018 2665 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Feb 13 20:28:32.529477 kubelet[2665]: I0213 20:28:32.529442 2665 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Feb 13 20:28:32.529517 kubelet[2665]: I0213 20:28:32.529481 2665 status_manager.go:217] "Starting to sync pod status with apiserver" Feb 13 20:28:32.529517 kubelet[2665]: I0213 20:28:32.529497 2665 kubelet.go:2337] "Starting kubelet main sync loop" Feb 13 20:28:32.529998 kubelet[2665]: E0213 20:28:32.529540 2665 kubelet.go:2361] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Feb 13 20:28:32.532487 kubelet[2665]: E0213 20:28:32.532265 2665 kubelet.go:1467] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Feb 13 20:28:32.533663 kubelet[2665]: I0213 20:28:32.533612 2665 factory.go:221] Registration of the systemd container factory successfully Feb 13 20:28:32.533733 kubelet[2665]: I0213 20:28:32.533721 2665 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Feb 13 20:28:32.537132 kubelet[2665]: I0213 20:28:32.536786 2665 factory.go:221] Registration of the containerd container factory successfully Feb 13 20:28:32.577563 kubelet[2665]: I0213 20:28:32.577540 2665 cpu_manager.go:214] "Starting CPU manager" policy="none" Feb 13 20:28:32.577563 kubelet[2665]: I0213 20:28:32.577557 2665 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Feb 13 20:28:32.577563 kubelet[2665]: I0213 20:28:32.577574 2665 state_mem.go:36] "Initialized new in-memory state store" Feb 13 20:28:32.577737 kubelet[2665]: I0213 20:28:32.577716 2665 state_mem.go:88] "Updated default CPUSet" cpuSet="" Feb 13 20:28:32.577737 kubelet[2665]: I0213 20:28:32.577726 2665 state_mem.go:96] "Updated CPUSet assignments" assignments={} Feb 13 20:28:32.577786 kubelet[2665]: I0213 20:28:32.577742 2665 policy_none.go:49] "None policy: Start" Feb 13 20:28:32.578277 kubelet[2665]: I0213 20:28:32.578261 2665 memory_manager.go:170] "Starting memorymanager" policy="None" Feb 13 20:28:32.578318 kubelet[2665]: I0213 20:28:32.578289 2665 state_mem.go:35] "Initializing new in-memory state store" Feb 13 20:28:32.578448 kubelet[2665]: I0213 20:28:32.578421 2665 state_mem.go:75] "Updated machine memory state" Feb 13 20:28:32.579560 kubelet[2665]: I0213 20:28:32.579540 2665 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Feb 13 20:28:32.580233 
kubelet[2665]: I0213 20:28:32.579729 2665 container_log_manager.go:186] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Feb 13 20:28:32.580233 kubelet[2665]: I0213 20:28:32.579819 2665 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Feb 13 20:28:32.622053 kubelet[2665]: I0213 20:28:32.622034 2665 kubelet_node_status.go:73] "Attempting to register node" node="localhost" Feb 13 20:28:32.628920 kubelet[2665]: I0213 20:28:32.628881 2665 kubelet_node_status.go:112] "Node was previously registered" node="localhost" Feb 13 20:28:32.628987 kubelet[2665]: I0213 20:28:32.628961 2665 kubelet_node_status.go:76] "Successfully registered node" node="localhost" Feb 13 20:28:32.630373 kubelet[2665]: I0213 20:28:32.629781 2665 topology_manager.go:215] "Topology Admit Handler" podUID="dd3721fb1a67092819e35b40473f4063" podNamespace="kube-system" podName="kube-controller-manager-localhost" Feb 13 20:28:32.630373 kubelet[2665]: I0213 20:28:32.629878 2665 topology_manager.go:215] "Topology Admit Handler" podUID="8d610d6c43052dbc8df47eb68906a982" podNamespace="kube-system" podName="kube-scheduler-localhost" Feb 13 20:28:32.630373 kubelet[2665]: I0213 20:28:32.629911 2665 topology_manager.go:215] "Topology Admit Handler" podUID="30b89b75cae84ccf9ce2cd85783b1230" podNamespace="kube-system" podName="kube-apiserver-localhost" Feb 13 20:28:32.721550 kubelet[2665]: I0213 20:28:32.721436 2665 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/30b89b75cae84ccf9ce2cd85783b1230-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"30b89b75cae84ccf9ce2cd85783b1230\") " pod="kube-system/kube-apiserver-localhost" Feb 13 20:28:32.721550 kubelet[2665]: I0213 20:28:32.721477 2665 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/30b89b75cae84ccf9ce2cd85783b1230-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"30b89b75cae84ccf9ce2cd85783b1230\") " pod="kube-system/kube-apiserver-localhost" Feb 13 20:28:32.721550 kubelet[2665]: I0213 20:28:32.721508 2665 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/8d610d6c43052dbc8df47eb68906a982-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"8d610d6c43052dbc8df47eb68906a982\") " pod="kube-system/kube-scheduler-localhost" Feb 13 20:28:32.721550 kubelet[2665]: I0213 20:28:32.721526 2665 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/30b89b75cae84ccf9ce2cd85783b1230-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"30b89b75cae84ccf9ce2cd85783b1230\") " pod="kube-system/kube-apiserver-localhost" Feb 13 20:28:32.721550 kubelet[2665]: I0213 20:28:32.721541 2665 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/dd3721fb1a67092819e35b40473f4063-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"dd3721fb1a67092819e35b40473f4063\") " pod="kube-system/kube-controller-manager-localhost" Feb 13 20:28:32.721757 kubelet[2665]: I0213 20:28:32.721557 2665 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: 
\"kubernetes.io/host-path/dd3721fb1a67092819e35b40473f4063-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"dd3721fb1a67092819e35b40473f4063\") " pod="kube-system/kube-controller-manager-localhost" Feb 13 20:28:32.721757 kubelet[2665]: I0213 20:28:32.721579 2665 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/dd3721fb1a67092819e35b40473f4063-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"dd3721fb1a67092819e35b40473f4063\") " pod="kube-system/kube-controller-manager-localhost" Feb 13 20:28:32.721757 kubelet[2665]: I0213 20:28:32.721594 2665 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/dd3721fb1a67092819e35b40473f4063-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"dd3721fb1a67092819e35b40473f4063\") " pod="kube-system/kube-controller-manager-localhost" Feb 13 20:28:32.721757 kubelet[2665]: I0213 20:28:32.721609 2665 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/dd3721fb1a67092819e35b40473f4063-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"dd3721fb1a67092819e35b40473f4063\") " pod="kube-system/kube-controller-manager-localhost" Feb 13 20:28:32.953579 kubelet[2665]: E0213 20:28:32.953495 2665 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 20:28:32.953579 kubelet[2665]: E0213 20:28:32.953533 2665 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 20:28:32.954282 kubelet[2665]: E0213 20:28:32.954239 2665 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 20:28:33.509205 kubelet[2665]: I0213 20:28:33.508128 2665 apiserver.go:52] "Watching apiserver" Feb 13 20:28:33.521424 kubelet[2665]: I0213 20:28:33.521377 2665 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world" Feb 13 20:28:33.552851 kubelet[2665]: E0213 20:28:33.552139 2665 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 20:28:33.553288 kubelet[2665]: E0213 20:28:33.553245 2665 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 20:28:33.557613 kubelet[2665]: E0213 20:28:33.557589 2665 kubelet.go:1928] "Failed creating a mirror pod for" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost" Feb 13 20:28:33.558028 kubelet[2665]: E0213 20:28:33.557998 2665 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 20:28:33.580843 kubelet[2665]: I0213 20:28:33.580758 2665 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=1.580726779 
podStartE2EDuration="1.580726779s" podCreationTimestamp="2025-02-13 20:28:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-02-13 20:28:33.573042935 +0000 UTC m=+1.107916890" watchObservedRunningTime="2025-02-13 20:28:33.580726779 +0000 UTC m=+1.115600695" Feb 13 20:28:33.580969 kubelet[2665]: I0213 20:28:33.580937 2665 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=1.5809307719999999 podStartE2EDuration="1.580930772s" podCreationTimestamp="2025-02-13 20:28:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-02-13 20:28:33.580382071 +0000 UTC m=+1.115255987" watchObservedRunningTime="2025-02-13 20:28:33.580930772 +0000 UTC m=+1.115804688" Feb 13 20:28:33.593179 kubelet[2665]: I0213 20:28:33.593136 2665 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=1.593126254 podStartE2EDuration="1.593126254s" podCreationTimestamp="2025-02-13 20:28:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-02-13 20:28:33.586341777 +0000 UTC m=+1.121215693" watchObservedRunningTime="2025-02-13 20:28:33.593126254 +0000 UTC m=+1.128000130" Feb 13 20:28:33.971603 sudo[1704]: pam_unix(sudo:session): session closed for user root Feb 13 20:28:33.973168 sshd[1697]: pam_unix(sshd:session): session closed for user core Feb 13 20:28:33.976429 systemd[1]: sshd@4-10.0.0.9:22-10.0.0.1:35052.service: Deactivated successfully. Feb 13 20:28:33.978424 systemd[1]: session-5.scope: Deactivated successfully. Feb 13 20:28:33.978648 systemd-logind[1523]: Session 5 logged out. Waiting for processes to exit. Feb 13 20:28:33.981078 systemd-logind[1523]: Removed session 5. Feb 13 20:28:34.553871 kubelet[2665]: E0213 20:28:34.553808 2665 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 20:28:37.906567 kubelet[2665]: E0213 20:28:37.906504 2665 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 20:28:40.842146 kubelet[2665]: E0213 20:28:40.842109 2665 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 20:28:41.566447 kubelet[2665]: E0213 20:28:41.566372 2665 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 20:28:42.465972 kubelet[2665]: E0213 20:28:42.465931 2665 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 20:28:42.568146 kubelet[2665]: E0213 20:28:42.568107 2665 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 20:28:46.085721 update_engine[1530]: I20250213 20:28:46.085655 1530 update_attempter.cc:509] Updating boot flags... 
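The deprecation warnings printed at the 20:28:32 kubelet restart above say that --container-runtime-endpoint and --volume-plugin-dir "should be set via the config file specified by the Kubelet's --config flag". A minimal sketch of the equivalent KubeletConfiguration, assuming the conventional containerd socket path and the Flatcar volume-plugin directory; this node's actual flag values are not shown in the log, so both paths are illustrative:

    # kubelet config file, passed via --config
    apiVersion: kubelet.config.k8s.io/v1beta1
    kind: KubeletConfiguration
    containerRuntimeEndpoint: unix:///run/containerd/containerd.sock
    volumePluginDir: /var/lib/kubelet/volumeplugins

--pod-infra-container-image has no config-file replacement; per its own warning it is simply removed once the image garbage collector reads the sandbox image from CRI.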
Feb 13 20:28:46.102670 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 40 scanned by (udev-worker) (2736) Feb 13 20:28:46.131666 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 40 scanned by (udev-worker) (2735) Feb 13 20:28:46.150731 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 40 scanned by (udev-worker) (2735) Feb 13 20:28:47.914153 kubelet[2665]: E0213 20:28:47.914116 2665 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 20:28:48.285704 kubelet[2665]: I0213 20:28:48.285666 2665 kuberuntime_manager.go:1523] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Feb 13 20:28:48.285994 containerd[1543]: time="2025-02-13T20:28:48.285953436Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Feb 13 20:28:48.286293 kubelet[2665]: I0213 20:28:48.286107 2665 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Feb 13 20:28:48.853209 kubelet[2665]: I0213 20:28:48.853157 2665 topology_manager.go:215] "Topology Admit Handler" podUID="3f8b4fef-0cee-4a5a-b509-119c847b6786" podNamespace="kube-flannel" podName="kube-flannel-ds-5hbww" Feb 13 20:28:48.854318 kubelet[2665]: I0213 20:28:48.854283 2665 topology_manager.go:215] "Topology Admit Handler" podUID="54e4a05a-4aeb-4d1e-bc40-b16fbd5f7190" podNamespace="kube-system" podName="kube-proxy-qt6lv" Feb 13 20:28:49.031270 kubelet[2665]: I0213 20:28:49.031098 2665 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run\" (UniqueName: \"kubernetes.io/host-path/3f8b4fef-0cee-4a5a-b509-119c847b6786-run\") pod \"kube-flannel-ds-5hbww\" (UID: \"3f8b4fef-0cee-4a5a-b509-119c847b6786\") " pod="kube-flannel/kube-flannel-ds-5hbww" Feb 13 20:28:49.031270 kubelet[2665]: I0213 20:28:49.031149 2665 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni\" (UniqueName: \"kubernetes.io/host-path/3f8b4fef-0cee-4a5a-b509-119c847b6786-cni\") pod \"kube-flannel-ds-5hbww\" (UID: \"3f8b4fef-0cee-4a5a-b509-119c847b6786\") " pod="kube-flannel/kube-flannel-ds-5hbww" Feb 13 20:28:49.031270 kubelet[2665]: I0213 20:28:49.031167 2665 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/54e4a05a-4aeb-4d1e-bc40-b16fbd5f7190-kube-proxy\") pod \"kube-proxy-qt6lv\" (UID: \"54e4a05a-4aeb-4d1e-bc40-b16fbd5f7190\") " pod="kube-system/kube-proxy-qt6lv" Feb 13 20:28:49.031270 kubelet[2665]: I0213 20:28:49.031187 2665 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/54e4a05a-4aeb-4d1e-bc40-b16fbd5f7190-xtables-lock\") pod \"kube-proxy-qt6lv\" (UID: \"54e4a05a-4aeb-4d1e-bc40-b16fbd5f7190\") " pod="kube-system/kube-proxy-qt6lv" Feb 13 20:28:49.031270 kubelet[2665]: I0213 20:28:49.031226 2665 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/54e4a05a-4aeb-4d1e-bc40-b16fbd5f7190-lib-modules\") pod \"kube-proxy-qt6lv\" (UID: \"54e4a05a-4aeb-4d1e-bc40-b16fbd5f7190\") " pod="kube-system/kube-proxy-qt6lv" Feb 13 20:28:49.031830 kubelet[2665]: I0213 20:28:49.031245 2665 reconciler_common.go:247] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z5s6f\" (UniqueName: \"kubernetes.io/projected/54e4a05a-4aeb-4d1e-bc40-b16fbd5f7190-kube-api-access-z5s6f\") pod \"kube-proxy-qt6lv\" (UID: \"54e4a05a-4aeb-4d1e-bc40-b16fbd5f7190\") " pod="kube-system/kube-proxy-qt6lv" Feb 13 20:28:49.031830 kubelet[2665]: I0213 20:28:49.031264 2665 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/3f8b4fef-0cee-4a5a-b509-119c847b6786-xtables-lock\") pod \"kube-flannel-ds-5hbww\" (UID: \"3f8b4fef-0cee-4a5a-b509-119c847b6786\") " pod="kube-flannel/kube-flannel-ds-5hbww" Feb 13 20:28:49.031830 kubelet[2665]: I0213 20:28:49.031279 2665 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-plugin\" (UniqueName: \"kubernetes.io/host-path/3f8b4fef-0cee-4a5a-b509-119c847b6786-cni-plugin\") pod \"kube-flannel-ds-5hbww\" (UID: \"3f8b4fef-0cee-4a5a-b509-119c847b6786\") " pod="kube-flannel/kube-flannel-ds-5hbww" Feb 13 20:28:49.031830 kubelet[2665]: I0213 20:28:49.031295 2665 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flannel-cfg\" (UniqueName: \"kubernetes.io/configmap/3f8b4fef-0cee-4a5a-b509-119c847b6786-flannel-cfg\") pod \"kube-flannel-ds-5hbww\" (UID: \"3f8b4fef-0cee-4a5a-b509-119c847b6786\") " pod="kube-flannel/kube-flannel-ds-5hbww" Feb 13 20:28:49.031830 kubelet[2665]: I0213 20:28:49.031311 2665 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xr76x\" (UniqueName: \"kubernetes.io/projected/3f8b4fef-0cee-4a5a-b509-119c847b6786-kube-api-access-xr76x\") pod \"kube-flannel-ds-5hbww\" (UID: \"3f8b4fef-0cee-4a5a-b509-119c847b6786\") " pod="kube-flannel/kube-flannel-ds-5hbww" Feb 13 20:28:49.157782 kubelet[2665]: E0213 20:28:49.157676 2665 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 20:28:49.158915 kubelet[2665]: E0213 20:28:49.158514 2665 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 20:28:49.158984 containerd[1543]: time="2025-02-13T20:28:49.158540429Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-flannel-ds-5hbww,Uid:3f8b4fef-0cee-4a5a-b509-119c847b6786,Namespace:kube-flannel,Attempt:0,}" Feb 13 20:28:49.158984 containerd[1543]: time="2025-02-13T20:28:49.158891583Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-qt6lv,Uid:54e4a05a-4aeb-4d1e-bc40-b16fbd5f7190,Namespace:kube-system,Attempt:0,}" Feb 13 20:28:49.182276 containerd[1543]: time="2025-02-13T20:28:49.182194146Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 20:28:49.182276 containerd[1543]: time="2025-02-13T20:28:49.182255585Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 20:28:49.182474 containerd[1543]: time="2025-02-13T20:28:49.182270905Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 20:28:49.182474 containerd[1543]: time="2025-02-13T20:28:49.182365943Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 20:28:49.186871 containerd[1543]: time="2025-02-13T20:28:49.186802155Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 20:28:49.187382 containerd[1543]: time="2025-02-13T20:28:49.187315667Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 20:28:49.187382 containerd[1543]: time="2025-02-13T20:28:49.187360907Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 20:28:49.187608 containerd[1543]: time="2025-02-13T20:28:49.187550544Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 20:28:49.218357 containerd[1543]: time="2025-02-13T20:28:49.218315312Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-qt6lv,Uid:54e4a05a-4aeb-4d1e-bc40-b16fbd5f7190,Namespace:kube-system,Attempt:0,} returns sandbox id \"c7daf21866d389b56143b9eba7ae062766f831ccafcab6159af6d61b48551b5f\"" Feb 13 20:28:49.221043 kubelet[2665]: E0213 20:28:49.220697 2665 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 20:28:49.226688 containerd[1543]: time="2025-02-13T20:28:49.226508946Z" level=info msg="CreateContainer within sandbox \"c7daf21866d389b56143b9eba7ae062766f831ccafcab6159af6d61b48551b5f\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Feb 13 20:28:49.231572 containerd[1543]: time="2025-02-13T20:28:49.231543989Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-flannel-ds-5hbww,Uid:3f8b4fef-0cee-4a5a-b509-119c847b6786,Namespace:kube-flannel,Attempt:0,} returns sandbox id \"a829157f03cbe7a7573e0c689c9be29857818e1042c4da5685ae7cdbc4ab396f\"" Feb 13 20:28:49.232336 kubelet[2665]: E0213 20:28:49.232312 2665 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 20:28:49.233393 containerd[1543]: time="2025-02-13T20:28:49.233344361Z" level=info msg="PullImage \"docker.io/flannel/flannel-cni-plugin:v1.1.2\"" Feb 13 20:28:49.246189 containerd[1543]: time="2025-02-13T20:28:49.246108885Z" level=info msg="CreateContainer within sandbox \"c7daf21866d389b56143b9eba7ae062766f831ccafcab6159af6d61b48551b5f\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"b2debc117b178609ea7895fad330891dcbee934eb8bb1c30d1309191820b4d11\"" Feb 13 20:28:49.246734 containerd[1543]: time="2025-02-13T20:28:49.246698836Z" level=info msg="StartContainer for \"b2debc117b178609ea7895fad330891dcbee934eb8bb1c30d1309191820b4d11\"" Feb 13 20:28:49.302088 containerd[1543]: time="2025-02-13T20:28:49.301806471Z" level=info msg="StartContainer for \"b2debc117b178609ea7895fad330891dcbee934eb8bb1c30d1309191820b4d11\" returns successfully" Feb 13 20:28:49.581026 kubelet[2665]: E0213 20:28:49.580829 2665 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver 
line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 20:28:50.375044 containerd[1543]: time="2025-02-13T20:28:50.374920153Z" level=error msg="PullImage \"docker.io/flannel/flannel-cni-plugin:v1.1.2\" failed" error="failed to pull and unpack image \"docker.io/flannel/flannel-cni-plugin:v1.1.2\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/flannel/flannel-cni-plugin/manifests/sha256:bf4b62b131666d040f35a327d906ee5a3418280b68a88d9b9c7e828057210443: 429 Too Many Requests - Server message: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit" Feb 13 20:28:50.375044 containerd[1543]: time="2025-02-13T20:28:50.374989832Z" level=info msg="stop pulling image docker.io/flannel/flannel-cni-plugin:v1.1.2: active requests=0, bytes read=11110" Feb 13 20:28:50.375760 kubelet[2665]: E0213 20:28:50.375189 2665 remote_image.go:180] "PullImage from image service failed" err="rpc error: code = Unknown desc = failed to pull and unpack image \"docker.io/flannel/flannel-cni-plugin:v1.1.2\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/flannel/flannel-cni-plugin/manifests/sha256:bf4b62b131666d040f35a327d906ee5a3418280b68a88d9b9c7e828057210443: 429 Too Many Requests - Server message: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit" image="docker.io/flannel/flannel-cni-plugin:v1.1.2" Feb 13 20:28:50.375760 kubelet[2665]: E0213 20:28:50.375281 2665 kuberuntime_image.go:55] "Failed to pull image" err="failed to pull and unpack image \"docker.io/flannel/flannel-cni-plugin:v1.1.2\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/flannel/flannel-cni-plugin/manifests/sha256:bf4b62b131666d040f35a327d906ee5a3418280b68a88d9b9c7e828057210443: 429 Too Many Requests - Server message: toomanyrequests: You have reached your pull rate limit. 
You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit" image="docker.io/flannel/flannel-cni-plugin:v1.1.2" Feb 13 20:28:50.377293 kubelet[2665]: E0213 20:28:50.377238 2665 kuberuntime_manager.go:1256] init container &Container{Name:install-cni-plugin,Image:docker.io/flannel/flannel-cni-plugin:v1.1.2,Command:[cp],Args:[-f /flannel /opt/cni/bin/flannel],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:cni-plugin,ReadOnly:false,MountPath:/opt/cni/bin,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-xr76x,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod kube-flannel-ds-5hbww_kube-flannel(3f8b4fef-0cee-4a5a-b509-119c847b6786): ErrImagePull: failed to pull and unpack image "docker.io/flannel/flannel-cni-plugin:v1.1.2": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/flannel/flannel-cni-plugin/manifests/sha256:bf4b62b131666d040f35a327d906ee5a3418280b68a88d9b9c7e828057210443: 429 Too Many Requests - Server message: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit Feb 13 20:28:50.377408 kubelet[2665]: E0213 20:28:50.377289 2665 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"install-cni-plugin\" with ErrImagePull: \"failed to pull and unpack image \\\"docker.io/flannel/flannel-cni-plugin:v1.1.2\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/flannel/flannel-cni-plugin/manifests/sha256:bf4b62b131666d040f35a327d906ee5a3418280b68a88d9b9c7e828057210443: 429 Too Many Requests - Server message: toomanyrequests: You have reached your pull rate limit. 
You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit\"" pod="kube-flannel/kube-flannel-ds-5hbww" podUID="3f8b4fef-0cee-4a5a-b509-119c847b6786" Feb 13 20:28:50.586475 kubelet[2665]: E0213 20:28:50.586257 2665 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 20:28:50.586935 kubelet[2665]: E0213 20:28:50.586898 2665 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"install-cni-plugin\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/flannel/flannel-cni-plugin:v1.1.2\\\"\"" pod="kube-flannel/kube-flannel-ds-5hbww" podUID="3f8b4fef-0cee-4a5a-b509-119c847b6786" Feb 13 20:28:50.595448 kubelet[2665]: I0213 20:28:50.595384 2665 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-qt6lv" podStartSLOduration=2.595355288 podStartE2EDuration="2.595355288s" podCreationTimestamp="2025-02-13 20:28:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-02-13 20:28:49.591867101 +0000 UTC m=+17.126741017" watchObservedRunningTime="2025-02-13 20:28:50.595355288 +0000 UTC m=+18.130229204" Feb 13 20:29:01.334887 systemd[1]: Started sshd@5-10.0.0.9:22-10.0.0.1:48790.service - OpenSSH per-connection server daemon (10.0.0.1:48790). Feb 13 20:29:01.369968 sshd[2984]: Accepted publickey for core from 10.0.0.1 port 48790 ssh2: RSA SHA256:TdhA+b7AFfsR49yUYzSzEma9Q+UWNVnDR75LIS9Grbw Feb 13 20:29:01.371336 sshd[2984]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 20:29:01.374813 systemd-logind[1523]: New session 6 of user core. Feb 13 20:29:01.389846 systemd[1]: Started session-6.scope - Session 6 of User core. Feb 13 20:29:01.502046 sshd[2984]: pam_unix(sshd:session): session closed for user core Feb 13 20:29:01.505327 systemd[1]: sshd@5-10.0.0.9:22-10.0.0.1:48790.service: Deactivated successfully. Feb 13 20:29:01.507835 systemd[1]: session-6.scope: Deactivated successfully. Feb 13 20:29:01.507888 systemd-logind[1523]: Session 6 logged out. Waiting for processes to exit. Feb 13 20:29:01.509961 systemd-logind[1523]: Removed session 6. Feb 13 20:29:01.530950 kubelet[2665]: E0213 20:29:01.530903 2665 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 20:29:01.534935 containerd[1543]: time="2025-02-13T20:29:01.534894076Z" level=info msg="PullImage \"docker.io/flannel/flannel-cni-plugin:v1.1.2\"" Feb 13 20:29:02.688215 containerd[1543]: time="2025-02-13T20:29:02.688151148Z" level=error msg="PullImage \"docker.io/flannel/flannel-cni-plugin:v1.1.2\" failed" error="failed to pull and unpack image \"docker.io/flannel/flannel-cni-plugin:v1.1.2\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/flannel/flannel-cni-plugin/manifests/sha256:bf4b62b131666d040f35a327d906ee5a3418280b68a88d9b9c7e828057210443: 429 Too Many Requests - Server message: toomanyrequests: You have reached your pull rate limit. 
You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit" Feb 13 20:29:02.688602 containerd[1543]: time="2025-02-13T20:29:02.688225467Z" level=info msg="stop pulling image docker.io/flannel/flannel-cni-plugin:v1.1.2: active requests=0, bytes read=11110" Feb 13 20:29:02.688664 kubelet[2665]: E0213 20:29:02.688319 2665 remote_image.go:180] "PullImage from image service failed" err="rpc error: code = Unknown desc = failed to pull and unpack image \"docker.io/flannel/flannel-cni-plugin:v1.1.2\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/flannel/flannel-cni-plugin/manifests/sha256:bf4b62b131666d040f35a327d906ee5a3418280b68a88d9b9c7e828057210443: 429 Too Many Requests - Server message: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit" image="docker.io/flannel/flannel-cni-plugin:v1.1.2" Feb 13 20:29:02.688664 kubelet[2665]: E0213 20:29:02.688357 2665 kuberuntime_image.go:55] "Failed to pull image" err="failed to pull and unpack image \"docker.io/flannel/flannel-cni-plugin:v1.1.2\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/flannel/flannel-cni-plugin/manifests/sha256:bf4b62b131666d040f35a327d906ee5a3418280b68a88d9b9c7e828057210443: 429 Too Many Requests - Server message: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit" image="docker.io/flannel/flannel-cni-plugin:v1.1.2" Feb 13 20:29:02.688926 kubelet[2665]: E0213 20:29:02.688426 2665 kuberuntime_manager.go:1256] init container &Container{Name:install-cni-plugin,Image:docker.io/flannel/flannel-cni-plugin:v1.1.2,Command:[cp],Args:[-f /flannel /opt/cni/bin/flannel],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:cni-plugin,ReadOnly:false,MountPath:/opt/cni/bin,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-xr76x,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod kube-flannel-ds-5hbww_kube-flannel(3f8b4fef-0cee-4a5a-b509-119c847b6786): ErrImagePull: failed to pull and unpack image "docker.io/flannel/flannel-cni-plugin:v1.1.2": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/flannel/flannel-cni-plugin/manifests/sha256:bf4b62b131666d040f35a327d906ee5a3418280b68a88d9b9c7e828057210443: 429 Too Many Requests - Server message: toomanyrequests: You have reached your pull rate limit. 
You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit Feb 13 20:29:02.688985 kubelet[2665]: E0213 20:29:02.688455 2665 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"install-cni-plugin\" with ErrImagePull: \"failed to pull and unpack image \\\"docker.io/flannel/flannel-cni-plugin:v1.1.2\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/flannel/flannel-cni-plugin/manifests/sha256:bf4b62b131666d040f35a327d906ee5a3418280b68a88d9b9c7e828057210443: 429 Too Many Requests - Server message: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit\"" pod="kube-flannel/kube-flannel-ds-5hbww" podUID="3f8b4fef-0cee-4a5a-b509-119c847b6786" Feb 13 20:29:06.513873 systemd[1]: Started sshd@6-10.0.0.9:22-10.0.0.1:35092.service - OpenSSH per-connection server daemon (10.0.0.1:35092). Feb 13 20:29:06.549743 sshd[3000]: Accepted publickey for core from 10.0.0.1 port 35092 ssh2: RSA SHA256:TdhA+b7AFfsR49yUYzSzEma9Q+UWNVnDR75LIS9Grbw Feb 13 20:29:06.550928 sshd[3000]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 20:29:06.554718 systemd-logind[1523]: New session 7 of user core. Feb 13 20:29:06.566864 systemd[1]: Started session-7.scope - Session 7 of User core. Feb 13 20:29:06.677868 sshd[3000]: pam_unix(sshd:session): session closed for user core Feb 13 20:29:06.680973 systemd[1]: sshd@6-10.0.0.9:22-10.0.0.1:35092.service: Deactivated successfully. Feb 13 20:29:06.683692 systemd[1]: session-7.scope: Deactivated successfully. Feb 13 20:29:06.684448 systemd-logind[1523]: Session 7 logged out. Waiting for processes to exit. Feb 13 20:29:06.685377 systemd-logind[1523]: Removed session 7. Feb 13 20:29:11.686851 systemd[1]: Started sshd@7-10.0.0.9:22-10.0.0.1:35106.service - OpenSSH per-connection server daemon (10.0.0.1:35106). Feb 13 20:29:11.721746 sshd[3016]: Accepted publickey for core from 10.0.0.1 port 35106 ssh2: RSA SHA256:TdhA+b7AFfsR49yUYzSzEma9Q+UWNVnDR75LIS9Grbw Feb 13 20:29:11.722959 sshd[3016]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 20:29:11.726874 systemd-logind[1523]: New session 8 of user core. Feb 13 20:29:11.742850 systemd[1]: Started session-8.scope - Session 8 of User core. Feb 13 20:29:11.849119 sshd[3016]: pam_unix(sshd:session): session closed for user core Feb 13 20:29:11.852174 systemd[1]: sshd@7-10.0.0.9:22-10.0.0.1:35106.service: Deactivated successfully. Feb 13 20:29:11.854153 systemd-logind[1523]: Session 8 logged out. Waiting for processes to exit. Feb 13 20:29:11.854242 systemd[1]: session-8.scope: Deactivated successfully. Feb 13 20:29:11.855412 systemd-logind[1523]: Removed session 8. Feb 13 20:29:16.859872 systemd[1]: Started sshd@8-10.0.0.9:22-10.0.0.1:46136.service - OpenSSH per-connection server daemon (10.0.0.1:46136). Feb 13 20:29:16.894374 sshd[3033]: Accepted publickey for core from 10.0.0.1 port 46136 ssh2: RSA SHA256:TdhA+b7AFfsR49yUYzSzEma9Q+UWNVnDR75LIS9Grbw Feb 13 20:29:16.895781 sshd[3033]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 20:29:16.899541 systemd-logind[1523]: New session 9 of user core. Feb 13 20:29:16.910011 systemd[1]: Started session-9.scope - Session 9 of User core. 
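The 429 responses from registry-1.docker.io above show anonymous pulls of docker.io/flannel/flannel-cni-plugin:v1.1.2 exhausting Docker Hub's rate limit, and every retry fails the same way because it goes back to the same rate-limited endpoint. One common mitigation is to route docker.io through a mirror in containerd's CRI registry configuration; a minimal sketch using the inline mirror form still accepted by the containerd 1.7.x seen in this log, where the mirror URL is a placeholder for whatever pull-through cache is available:

    # /etc/containerd/config.toml (fragment)
    [plugins."io.containerd.grpc.v1.cri".registry.mirrors."docker.io"]
      endpoint = ["https://mirror.example.internal", "https://registry-1.docker.io"]

containerd needs a restart to pick this up; authenticating the pulls, sketched below, is the alternative when no mirror is available.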
Feb 13 20:29:17.012158 sshd[3033]: pam_unix(sshd:session): session closed for user core Feb 13 20:29:17.015206 systemd[1]: sshd@8-10.0.0.9:22-10.0.0.1:46136.service: Deactivated successfully. Feb 13 20:29:17.017209 systemd-logind[1523]: Session 9 logged out. Waiting for processes to exit. Feb 13 20:29:17.017268 systemd[1]: session-9.scope: Deactivated successfully. Feb 13 20:29:17.018495 systemd-logind[1523]: Removed session 9. Feb 13 20:29:17.530054 kubelet[2665]: E0213 20:29:17.530018 2665 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 20:29:17.531022 kubelet[2665]: E0213 20:29:17.530566 2665 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"install-cni-plugin\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/flannel/flannel-cni-plugin:v1.1.2\\\"\"" pod="kube-flannel/kube-flannel-ds-5hbww" podUID="3f8b4fef-0cee-4a5a-b509-119c847b6786" Feb 13 20:29:22.022846 systemd[1]: Started sshd@9-10.0.0.9:22-10.0.0.1:46140.service - OpenSSH per-connection server daemon (10.0.0.1:46140). Feb 13 20:29:22.057278 sshd[3052]: Accepted publickey for core from 10.0.0.1 port 46140 ssh2: RSA SHA256:TdhA+b7AFfsR49yUYzSzEma9Q+UWNVnDR75LIS9Grbw Feb 13 20:29:22.058490 sshd[3052]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 20:29:22.062197 systemd-logind[1523]: New session 10 of user core. Feb 13 20:29:22.074919 systemd[1]: Started session-10.scope - Session 10 of User core. Feb 13 20:29:22.180592 sshd[3052]: pam_unix(sshd:session): session closed for user core Feb 13 20:29:22.183803 systemd[1]: sshd@9-10.0.0.9:22-10.0.0.1:46140.service: Deactivated successfully. Feb 13 20:29:22.185983 systemd[1]: session-10.scope: Deactivated successfully. Feb 13 20:29:22.186087 systemd-logind[1523]: Session 10 logged out. Waiting for processes to exit. Feb 13 20:29:22.187230 systemd-logind[1523]: Removed session 10. Feb 13 20:29:27.192829 systemd[1]: Started sshd@10-10.0.0.9:22-10.0.0.1:49466.service - OpenSSH per-connection server daemon (10.0.0.1:49466). Feb 13 20:29:27.227259 sshd[3069]: Accepted publickey for core from 10.0.0.1 port 49466 ssh2: RSA SHA256:TdhA+b7AFfsR49yUYzSzEma9Q+UWNVnDR75LIS9Grbw Feb 13 20:29:27.228389 sshd[3069]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 20:29:27.231756 systemd-logind[1523]: New session 11 of user core. Feb 13 20:29:27.246854 systemd[1]: Started session-11.scope - Session 11 of User core. Feb 13 20:29:27.354251 sshd[3069]: pam_unix(sshd:session): session closed for user core Feb 13 20:29:27.358977 systemd[1]: sshd@10-10.0.0.9:22-10.0.0.1:49466.service: Deactivated successfully. Feb 13 20:29:27.361117 systemd-logind[1523]: Session 11 logged out. Waiting for processes to exit. Feb 13 20:29:27.361525 systemd[1]: session-11.scope: Deactivated successfully. Feb 13 20:29:27.363339 systemd-logind[1523]: Removed session 11. 
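The "Nameserver limits exceeded" warnings are a separate, lower-grade issue: glibc's resolver honours at most three nameserver entries in resolv.conf (MAXNS = 3), so kubelet keeps the first three (1.1.1.1, 1.0.0.1, 8.8.8.8 here) and logs that the rest were omitted. A small sketch that reproduces the check, assuming the same file kubelet's --resolv-conf points at:

MAXNS = 3  # glibc's compiled-in nameserver limit

def check_resolv_conf(path="/etc/resolv.conf"):
    servers = []
    with open(path) as f:
        for line in f:
            fields = line.split()
            if len(fields) >= 2 and fields[0] == "nameserver":
                servers.append(fields[1])
    kept, omitted = servers[:MAXNS], servers[MAXNS:]
    if omitted:
        print("kept %s, omitted %s (limit is %d)" % (kept, omitted, MAXNS))
    else:
        print("%d nameserver(s), within the limit" % len(servers))

if __name__ == "__main__":
    check_resolv_conf()

Trimming the host's nameserver list to three entries silences the warning; it has no bearing on the image pull failures.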
Feb 13 20:29:28.530539 kubelet[2665]: E0213 20:29:28.530331 2665 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 20:29:28.531904 containerd[1543]: time="2025-02-13T20:29:28.531871929Z" level=info msg="PullImage \"docker.io/flannel/flannel-cni-plugin:v1.1.2\"" Feb 13 20:29:29.640000 containerd[1543]: time="2025-02-13T20:29:29.639941813Z" level=error msg="PullImage \"docker.io/flannel/flannel-cni-plugin:v1.1.2\" failed" error="failed to pull and unpack image \"docker.io/flannel/flannel-cni-plugin:v1.1.2\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/flannel/flannel-cni-plugin/manifests/sha256:bf4b62b131666d040f35a327d906ee5a3418280b68a88d9b9c7e828057210443: 429 Too Many Requests - Server message: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit" Feb 13 20:29:29.640384 containerd[1543]: time="2025-02-13T20:29:29.640022813Z" level=info msg="stop pulling image docker.io/flannel/flannel-cni-plugin:v1.1.2: active requests=0, bytes read=11110" Feb 13 20:29:29.640425 kubelet[2665]: E0213 20:29:29.640126 2665 remote_image.go:180] "PullImage from image service failed" err="rpc error: code = Unknown desc = failed to pull and unpack image \"docker.io/flannel/flannel-cni-plugin:v1.1.2\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/flannel/flannel-cni-plugin/manifests/sha256:bf4b62b131666d040f35a327d906ee5a3418280b68a88d9b9c7e828057210443: 429 Too Many Requests - Server message: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit" image="docker.io/flannel/flannel-cni-plugin:v1.1.2" Feb 13 20:29:29.640425 kubelet[2665]: E0213 20:29:29.640172 2665 kuberuntime_image.go:55] "Failed to pull image" err="failed to pull and unpack image \"docker.io/flannel/flannel-cni-plugin:v1.1.2\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/flannel/flannel-cni-plugin/manifests/sha256:bf4b62b131666d040f35a327d906ee5a3418280b68a88d9b9c7e828057210443: 429 Too Many Requests - Server message: toomanyrequests: You have reached your pull rate limit. 
You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit" image="docker.io/flannel/flannel-cni-plugin:v1.1.2" Feb 13 20:29:29.640746 kubelet[2665]: E0213 20:29:29.640274 2665 kuberuntime_manager.go:1256] init container &Container{Name:install-cni-plugin,Image:docker.io/flannel/flannel-cni-plugin:v1.1.2,Command:[cp],Args:[-f /flannel /opt/cni/bin/flannel],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:cni-plugin,ReadOnly:false,MountPath:/opt/cni/bin,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-xr76x,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod kube-flannel-ds-5hbww_kube-flannel(3f8b4fef-0cee-4a5a-b509-119c847b6786): ErrImagePull: failed to pull and unpack image "docker.io/flannel/flannel-cni-plugin:v1.1.2": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/flannel/flannel-cni-plugin/manifests/sha256:bf4b62b131666d040f35a327d906ee5a3418280b68a88d9b9c7e828057210443: 429 Too Many Requests - Server message: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit Feb 13 20:29:29.640813 kubelet[2665]: E0213 20:29:29.640304 2665 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"install-cni-plugin\" with ErrImagePull: \"failed to pull and unpack image \\\"docker.io/flannel/flannel-cni-plugin:v1.1.2\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/flannel/flannel-cni-plugin/manifests/sha256:bf4b62b131666d040f35a327d906ee5a3418280b68a88d9b9c7e828057210443: 429 Too Many Requests - Server message: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit\"" pod="kube-flannel/kube-flannel-ds-5hbww" podUID="3f8b4fef-0cee-4a5a-b509-119c847b6786" Feb 13 20:29:32.369880 systemd[1]: Started sshd@11-10.0.0.9:22-10.0.0.1:49470.service - OpenSSH per-connection server daemon (10.0.0.1:49470). Feb 13 20:29:32.405714 sshd[3086]: Accepted publickey for core from 10.0.0.1 port 49470 ssh2: RSA SHA256:TdhA+b7AFfsR49yUYzSzEma9Q+UWNVnDR75LIS9Grbw Feb 13 20:29:32.406858 sshd[3086]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 20:29:32.410695 systemd-logind[1523]: New session 12 of user core. Feb 13 20:29:32.416904 systemd[1]: Started session-12.scope - Session 12 of User core. Feb 13 20:29:32.525836 sshd[3086]: pam_unix(sshd:session): session closed for user core Feb 13 20:29:32.528825 systemd[1]: sshd@11-10.0.0.9:22-10.0.0.1:49470.service: Deactivated successfully. Feb 13 20:29:32.530794 systemd-logind[1523]: Session 12 logged out. Waiting for processes to exit. Feb 13 20:29:32.530801 systemd[1]: session-12.scope: Deactivated successfully. 
Feb 13 20:29:32.532682 systemd-logind[1523]: Removed session 12. Feb 13 20:29:37.535857 systemd[1]: Started sshd@12-10.0.0.9:22-10.0.0.1:55810.service - OpenSSH per-connection server daemon (10.0.0.1:55810). Feb 13 20:29:37.570617 sshd[3104]: Accepted publickey for core from 10.0.0.1 port 55810 ssh2: RSA SHA256:TdhA+b7AFfsR49yUYzSzEma9Q+UWNVnDR75LIS9Grbw Feb 13 20:29:37.571771 sshd[3104]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 20:29:37.575087 systemd-logind[1523]: New session 13 of user core. Feb 13 20:29:37.581852 systemd[1]: Started session-13.scope - Session 13 of User core. Feb 13 20:29:37.694348 sshd[3104]: pam_unix(sshd:session): session closed for user core Feb 13 20:29:37.697133 systemd-logind[1523]: Session 13 logged out. Waiting for processes to exit. Feb 13 20:29:37.697252 systemd[1]: sshd@12-10.0.0.9:22-10.0.0.1:55810.service: Deactivated successfully. Feb 13 20:29:37.699614 systemd[1]: session-13.scope: Deactivated successfully. Feb 13 20:29:37.700256 systemd-logind[1523]: Removed session 13. Feb 13 20:29:41.530246 kubelet[2665]: E0213 20:29:41.530168 2665 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 20:29:41.530899 kubelet[2665]: E0213 20:29:41.530746 2665 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"install-cni-plugin\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/flannel/flannel-cni-plugin:v1.1.2\\\"\"" pod="kube-flannel/kube-flannel-ds-5hbww" podUID="3f8b4fef-0cee-4a5a-b509-119c847b6786" Feb 13 20:29:42.709861 systemd[1]: Started sshd@13-10.0.0.9:22-10.0.0.1:42000.service - OpenSSH per-connection server daemon (10.0.0.1:42000). Feb 13 20:29:42.745835 sshd[3121]: Accepted publickey for core from 10.0.0.1 port 42000 ssh2: RSA SHA256:TdhA+b7AFfsR49yUYzSzEma9Q+UWNVnDR75LIS9Grbw Feb 13 20:29:42.747139 sshd[3121]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 20:29:42.751048 systemd-logind[1523]: New session 14 of user core. Feb 13 20:29:42.765870 systemd[1]: Started session-14.scope - Session 14 of User core. Feb 13 20:29:42.873041 sshd[3121]: pam_unix(sshd:session): session closed for user core Feb 13 20:29:42.876065 systemd[1]: sshd@13-10.0.0.9:22-10.0.0.1:42000.service: Deactivated successfully. Feb 13 20:29:42.877987 systemd[1]: session-14.scope: Deactivated successfully. Feb 13 20:29:42.878004 systemd-logind[1523]: Session 14 logged out. Waiting for processes to exit. Feb 13 20:29:42.879348 systemd-logind[1523]: Removed session 14. Feb 13 20:29:47.888845 systemd[1]: Started sshd@14-10.0.0.9:22-10.0.0.1:42012.service - OpenSSH per-connection server daemon (10.0.0.1:42012). Feb 13 20:29:47.923040 sshd[3139]: Accepted publickey for core from 10.0.0.1 port 42012 ssh2: RSA SHA256:TdhA+b7AFfsR49yUYzSzEma9Q+UWNVnDR75LIS9Grbw Feb 13 20:29:47.924235 sshd[3139]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 20:29:47.927547 systemd-logind[1523]: New session 15 of user core. Feb 13 20:29:47.942880 systemd[1]: Started session-15.scope - Session 15 of User core. Feb 13 20:29:48.049609 sshd[3139]: pam_unix(sshd:session): session closed for user core Feb 13 20:29:48.053184 systemd[1]: sshd@14-10.0.0.9:22-10.0.0.1:42012.service: Deactivated successfully. Feb 13 20:29:48.055732 systemd[1]: session-15.scope: Deactivated successfully. 
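Between the kubelet errors, the log is dominated by short-lived SSH sessions from 10.0.0.1: a new connection roughly every five seconds, each session closed well under a second after it opens, which reads like an external probe rather than interactive use. Session lifetimes can be computed directly from the pam_unix pairs; a throwaway parser sketch (field layout inferred from the entries above, one journal entry per input line assumed):

import re
import sys
from datetime import datetime

EVENT = re.compile(
    r"(?P<ts>[A-Z][a-z]{2} +\d+ \d{2}:\d{2}:\d{2}\.\d+).*?"
    r"sshd\[(?P<pid>\d+)\]: pam_unix\(sshd:session\): "
    r"session (?P<kind>opened|closed) for user \w+")

def session_durations(lines):
    opened = {}
    for line in lines:
        m = EVENT.search(line)
        if not m:
            continue
        # Journal timestamps carry no year; strptime's default is fine
        # because only differences are used.
        ts = datetime.strptime(m["ts"], "%b %d %H:%M:%S.%f")
        if m["kind"] == "opened":
            opened[m["pid"]] = ts
        elif m["pid"] in opened:
            yield m["pid"], (ts - opened.pop(m["pid"])).total_seconds()

if __name__ == "__main__":
    for pid, secs in session_durations(sys.stdin):
        print("sshd[%s] session lasted %.3fs" % (pid, secs))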
Feb 13 20:29:48.056507 systemd-logind[1523]: Session 15 logged out. Waiting for processes to exit. Feb 13 20:29:48.057776 systemd-logind[1523]: Removed session 15. Feb 13 20:29:51.531061 kubelet[2665]: E0213 20:29:51.530972 2665 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 20:29:52.530664 kubelet[2665]: E0213 20:29:52.530609 2665 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 20:29:52.531354 kubelet[2665]: E0213 20:29:52.531231 2665 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"install-cni-plugin\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/flannel/flannel-cni-plugin:v1.1.2\\\"\"" pod="kube-flannel/kube-flannel-ds-5hbww" podUID="3f8b4fef-0cee-4a5a-b509-119c847b6786" Feb 13 20:29:53.061844 systemd[1]: Started sshd@15-10.0.0.9:22-10.0.0.1:55290.service - OpenSSH per-connection server daemon (10.0.0.1:55290). Feb 13 20:29:53.096237 sshd[3157]: Accepted publickey for core from 10.0.0.1 port 55290 ssh2: RSA SHA256:TdhA+b7AFfsR49yUYzSzEma9Q+UWNVnDR75LIS9Grbw Feb 13 20:29:53.097403 sshd[3157]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 20:29:53.101152 systemd-logind[1523]: New session 16 of user core. Feb 13 20:29:53.112838 systemd[1]: Started session-16.scope - Session 16 of User core. Feb 13 20:29:53.216786 sshd[3157]: pam_unix(sshd:session): session closed for user core Feb 13 20:29:53.219800 systemd-logind[1523]: Session 16 logged out. Waiting for processes to exit. Feb 13 20:29:53.220556 systemd[1]: sshd@15-10.0.0.9:22-10.0.0.1:55290.service: Deactivated successfully. Feb 13 20:29:53.222409 systemd[1]: session-16.scope: Deactivated successfully. Feb 13 20:29:53.222936 systemd-logind[1523]: Removed session 16. Feb 13 20:29:53.531298 kubelet[2665]: E0213 20:29:53.531240 2665 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 20:29:58.234933 systemd[1]: Started sshd@16-10.0.0.9:22-10.0.0.1:55294.service - OpenSSH per-connection server daemon (10.0.0.1:55294). Feb 13 20:29:58.269493 sshd[3174]: Accepted publickey for core from 10.0.0.1 port 55294 ssh2: RSA SHA256:TdhA+b7AFfsR49yUYzSzEma9Q+UWNVnDR75LIS9Grbw Feb 13 20:29:58.270695 sshd[3174]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 20:29:58.274537 systemd-logind[1523]: New session 17 of user core. Feb 13 20:29:58.284980 systemd[1]: Started session-17.scope - Session 17 of User core. Feb 13 20:29:58.389459 sshd[3174]: pam_unix(sshd:session): session closed for user core Feb 13 20:29:58.392552 systemd[1]: sshd@16-10.0.0.9:22-10.0.0.1:55294.service: Deactivated successfully. Feb 13 20:29:58.395964 systemd[1]: session-17.scope: Deactivated successfully. Feb 13 20:29:58.396923 systemd-logind[1523]: Session 17 logged out. Waiting for processes to exit. Feb 13 20:29:58.397837 systemd-logind[1523]: Removed session 17. 
Feb 13 20:29:58.530818 kubelet[2665]: E0213 20:29:58.530353 2665 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 20:30:03.400855 systemd[1]: Started sshd@17-10.0.0.9:22-10.0.0.1:41062.service - OpenSSH per-connection server daemon (10.0.0.1:41062). Feb 13 20:30:03.435638 sshd[3190]: Accepted publickey for core from 10.0.0.1 port 41062 ssh2: RSA SHA256:TdhA+b7AFfsR49yUYzSzEma9Q+UWNVnDR75LIS9Grbw Feb 13 20:30:03.436785 sshd[3190]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 20:30:03.440177 systemd-logind[1523]: New session 18 of user core. Feb 13 20:30:03.444905 systemd[1]: Started session-18.scope - Session 18 of User core. Feb 13 20:30:03.548854 sshd[3190]: pam_unix(sshd:session): session closed for user core Feb 13 20:30:03.551294 systemd[1]: sshd@17-10.0.0.9:22-10.0.0.1:41062.service: Deactivated successfully. Feb 13 20:30:03.553790 systemd-logind[1523]: Session 18 logged out. Waiting for processes to exit. Feb 13 20:30:03.553929 systemd[1]: session-18.scope: Deactivated successfully. Feb 13 20:30:03.554979 systemd-logind[1523]: Removed session 18. Feb 13 20:30:04.531062 kubelet[2665]: E0213 20:30:04.530865 2665 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 20:30:04.531881 kubelet[2665]: E0213 20:30:04.531383 2665 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"install-cni-plugin\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/flannel/flannel-cni-plugin:v1.1.2\\\"\"" pod="kube-flannel/kube-flannel-ds-5hbww" podUID="3f8b4fef-0cee-4a5a-b509-119c847b6786" Feb 13 20:30:08.531020 kubelet[2665]: E0213 20:30:08.530978 2665 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 20:30:08.559911 systemd[1]: Started sshd@18-10.0.0.9:22-10.0.0.1:41066.service - OpenSSH per-connection server daemon (10.0.0.1:41066). Feb 13 20:30:08.594372 sshd[3206]: Accepted publickey for core from 10.0.0.1 port 41066 ssh2: RSA SHA256:TdhA+b7AFfsR49yUYzSzEma9Q+UWNVnDR75LIS9Grbw Feb 13 20:30:08.595541 sshd[3206]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 20:30:08.599288 systemd-logind[1523]: New session 19 of user core. Feb 13 20:30:08.612922 systemd[1]: Started session-19.scope - Session 19 of User core. Feb 13 20:30:08.717825 sshd[3206]: pam_unix(sshd:session): session closed for user core Feb 13 20:30:08.720393 systemd[1]: sshd@18-10.0.0.9:22-10.0.0.1:41066.service: Deactivated successfully. Feb 13 20:30:08.723040 systemd-logind[1523]: Session 19 logged out. Waiting for processes to exit. Feb 13 20:30:08.723121 systemd[1]: session-19.scope: Deactivated successfully. Feb 13 20:30:08.724177 systemd-logind[1523]: Removed session 19. Feb 13 20:30:13.728854 systemd[1]: Started sshd@19-10.0.0.9:22-10.0.0.1:49202.service - OpenSSH per-connection server daemon (10.0.0.1:49202). 
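The widening gaps between pull attempts (20:29:02, then 20:29:28, with the next tries landing around 20:30:18 and 20:31:48 further down) are kubelet's image pull backoff at work: each ErrImagePull pushes the next attempt out by a roughly doubling delay until a cap is hit, and in between kubelet only reports ImagePullBackOff. A sketch of that schedule, with the commonly cited defaults treated as assumptions (10 s base, factor 2, 5 min cap):

def pull_backoff(base=10.0, factor=2.0, cap=300.0):
    # Successive retry delays: base * factor**n, clamped at cap.
    # The defaults here are assumptions; the real values depend on
    # the kubelet version and configuration.
    delay = base
    while True:
        yield min(delay, cap)
        delay = min(delay * factor, cap)

if __name__ == "__main__":
    delays = pull_backoff()
    for attempt in range(1, 9):
        print("retry %d: wait %.0fs" % (attempt, next(delays)))

The practical consequence: once the cap is reached, the pod retries every few minutes indefinitely, which matches the cadence of the ErrImagePull blocks in this log.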
Feb 13 20:30:13.763604 sshd[3223]: Accepted publickey for core from 10.0.0.1 port 49202 ssh2: RSA SHA256:TdhA+b7AFfsR49yUYzSzEma9Q+UWNVnDR75LIS9Grbw Feb 13 20:30:13.764796 sshd[3223]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 20:30:13.768203 systemd-logind[1523]: New session 20 of user core. Feb 13 20:30:13.777832 systemd[1]: Started session-20.scope - Session 20 of User core. Feb 13 20:30:13.886110 sshd[3223]: pam_unix(sshd:session): session closed for user core Feb 13 20:30:13.889226 systemd[1]: sshd@19-10.0.0.9:22-10.0.0.1:49202.service: Deactivated successfully. Feb 13 20:30:13.891072 systemd-logind[1523]: Session 20 logged out. Waiting for processes to exit. Feb 13 20:30:13.891141 systemd[1]: session-20.scope: Deactivated successfully. Feb 13 20:30:13.892278 systemd-logind[1523]: Removed session 20. Feb 13 20:30:14.074200 update_engine[1530]: I20250213 20:30:14.074064 1530 prefs.cc:52] certificate-report-to-send-update not present in /var/lib/update_engine/prefs Feb 13 20:30:14.074200 update_engine[1530]: I20250213 20:30:14.074126 1530 prefs.cc:52] certificate-report-to-send-download not present in /var/lib/update_engine/prefs Feb 13 20:30:14.074531 update_engine[1530]: I20250213 20:30:14.074357 1530 prefs.cc:52] aleph-version not present in /var/lib/update_engine/prefs Feb 13 20:30:14.074778 update_engine[1530]: I20250213 20:30:14.074736 1530 omaha_request_params.cc:62] Current group set to lts Feb 13 20:30:14.074843 update_engine[1530]: I20250213 20:30:14.074831 1530 update_attempter.cc:499] Already updated boot flags. Skipping. Feb 13 20:30:14.074868 update_engine[1530]: I20250213 20:30:14.074841 1530 update_attempter.cc:643] Scheduling an action processor start. Feb 13 20:30:14.074868 update_engine[1530]: I20250213 20:30:14.074856 1530 action_processor.cc:36] ActionProcessor::StartProcessing: OmahaRequestAction Feb 13 20:30:14.074907 update_engine[1530]: I20250213 20:30:14.074880 1530 prefs.cc:52] previous-version not present in /var/lib/update_engine/prefs Feb 13 20:30:14.074945 update_engine[1530]: I20250213 20:30:14.074927 1530 omaha_request_action.cc:271] Posting an Omaha request to disabled Feb 13 20:30:14.074945 update_engine[1530]: I20250213 20:30:14.074939 1530 omaha_request_action.cc:272] Request: Feb 13 20:30:14.074945 update_engine[1530]: Feb 13 20:30:14.074945 update_engine[1530]: Feb 13 20:30:14.074945 update_engine[1530]: Feb 13 20:30:14.074945 update_engine[1530]: Feb 13 20:30:14.074945 update_engine[1530]: Feb 13 20:30:14.074945 update_engine[1530]: Feb 13 20:30:14.074945 update_engine[1530]: Feb 13 20:30:14.074945 update_engine[1530]: Feb 13 20:30:14.075142 update_engine[1530]: I20250213 20:30:14.074946 1530 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Feb 13 20:30:14.075165 locksmithd[1557]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_CHECKING_FOR_UPDATE" NewVersion=0.0.0 NewSize=0 Feb 13 20:30:14.075980 update_engine[1530]: I20250213 20:30:14.075944 1530 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Feb 13 20:30:14.076202 update_engine[1530]: I20250213 20:30:14.076166 1530 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. 
Feb 13 20:30:14.080041 update_engine[1530]: E20250213 20:30:14.080004 1530 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Feb 13 20:30:14.080093 update_engine[1530]: I20250213 20:30:14.080070 1530 libcurl_http_fetcher.cc:283] No HTTP response, retry 1 Feb 13 20:30:18.530546 kubelet[2665]: E0213 20:30:18.530280 2665 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 20:30:18.531485 containerd[1543]: time="2025-02-13T20:30:18.531431174Z" level=info msg="PullImage \"docker.io/flannel/flannel-cni-plugin:v1.1.2\"" Feb 13 20:30:18.899855 systemd[1]: Started sshd@20-10.0.0.9:22-10.0.0.1:49216.service - OpenSSH per-connection server daemon (10.0.0.1:49216). Feb 13 20:30:18.934211 sshd[3239]: Accepted publickey for core from 10.0.0.1 port 49216 ssh2: RSA SHA256:TdhA+b7AFfsR49yUYzSzEma9Q+UWNVnDR75LIS9Grbw Feb 13 20:30:18.935361 sshd[3239]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 20:30:18.939688 systemd-logind[1523]: New session 21 of user core. Feb 13 20:30:18.950837 systemd[1]: Started session-21.scope - Session 21 of User core. Feb 13 20:30:19.053908 sshd[3239]: pam_unix(sshd:session): session closed for user core Feb 13 20:30:19.057029 systemd[1]: sshd@20-10.0.0.9:22-10.0.0.1:49216.service: Deactivated successfully. Feb 13 20:30:19.058787 systemd[1]: session-21.scope: Deactivated successfully. Feb 13 20:30:19.059134 systemd-logind[1523]: Session 21 logged out. Waiting for processes to exit. Feb 13 20:30:19.060041 systemd-logind[1523]: Removed session 21. Feb 13 20:30:19.650920 containerd[1543]: time="2025-02-13T20:30:19.650872477Z" level=error msg="PullImage \"docker.io/flannel/flannel-cni-plugin:v1.1.2\" failed" error="failed to pull and unpack image \"docker.io/flannel/flannel-cni-plugin:v1.1.2\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/flannel/flannel-cni-plugin/manifests/sha256:bf4b62b131666d040f35a327d906ee5a3418280b68a88d9b9c7e828057210443: 429 Too Many Requests - Server message: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit" Feb 13 20:30:19.651409 containerd[1543]: time="2025-02-13T20:30:19.650968878Z" level=info msg="stop pulling image docker.io/flannel/flannel-cni-plugin:v1.1.2: active requests=0, bytes read=11110" Feb 13 20:30:19.651447 kubelet[2665]: E0213 20:30:19.651031 2665 remote_image.go:180] "PullImage from image service failed" err="rpc error: code = Unknown desc = failed to pull and unpack image \"docker.io/flannel/flannel-cni-plugin:v1.1.2\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/flannel/flannel-cni-plugin/manifests/sha256:bf4b62b131666d040f35a327d906ee5a3418280b68a88d9b9c7e828057210443: 429 Too Many Requests - Server message: toomanyrequests: You have reached your pull rate limit. 
You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit" image="docker.io/flannel/flannel-cni-plugin:v1.1.2" Feb 13 20:30:19.651447 kubelet[2665]: E0213 20:30:19.651072 2665 kuberuntime_image.go:55] "Failed to pull image" err="failed to pull and unpack image \"docker.io/flannel/flannel-cni-plugin:v1.1.2\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/flannel/flannel-cni-plugin/manifests/sha256:bf4b62b131666d040f35a327d906ee5a3418280b68a88d9b9c7e828057210443: 429 Too Many Requests - Server message: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit" image="docker.io/flannel/flannel-cni-plugin:v1.1.2" Feb 13 20:30:19.652116 kubelet[2665]: E0213 20:30:19.652085 2665 kuberuntime_manager.go:1256] init container &Container{Name:install-cni-plugin,Image:docker.io/flannel/flannel-cni-plugin:v1.1.2,Command:[cp],Args:[-f /flannel /opt/cni/bin/flannel],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:cni-plugin,ReadOnly:false,MountPath:/opt/cni/bin,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-xr76x,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod kube-flannel-ds-5hbww_kube-flannel(3f8b4fef-0cee-4a5a-b509-119c847b6786): ErrImagePull: failed to pull and unpack image "docker.io/flannel/flannel-cni-plugin:v1.1.2": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/flannel/flannel-cni-plugin/manifests/sha256:bf4b62b131666d040f35a327d906ee5a3418280b68a88d9b9c7e828057210443: 429 Too Many Requests - Server message: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit Feb 13 20:30:19.652206 kubelet[2665]: E0213 20:30:19.652120 2665 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"install-cni-plugin\" with ErrImagePull: \"failed to pull and unpack image \\\"docker.io/flannel/flannel-cni-plugin:v1.1.2\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/flannel/flannel-cni-plugin/manifests/sha256:bf4b62b131666d040f35a327d906ee5a3418280b68a88d9b9c7e828057210443: 429 Too Many Requests - Server message: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit\"" pod="kube-flannel/kube-flannel-ds-5hbww" podUID="3f8b4fef-0cee-4a5a-b509-119c847b6786" Feb 13 20:30:24.062854 systemd[1]: Started sshd@21-10.0.0.9:22-10.0.0.1:53686.service - OpenSSH per-connection server daemon (10.0.0.1:53686). 
Feb 13 20:30:24.079083 update_engine[1530]: I20250213 20:30:24.078709 1530 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Feb 13 20:30:24.079083 update_engine[1530]: I20250213 20:30:24.078907 1530 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Feb 13 20:30:24.079083 update_engine[1530]: I20250213 20:30:24.079051 1530 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. Feb 13 20:30:24.084201 update_engine[1530]: E20250213 20:30:24.084131 1530 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Feb 13 20:30:24.084201 update_engine[1530]: I20250213 20:30:24.084182 1530 libcurl_http_fetcher.cc:283] No HTTP response, retry 2 Feb 13 20:30:24.097332 sshd[3257]: Accepted publickey for core from 10.0.0.1 port 53686 ssh2: RSA SHA256:TdhA+b7AFfsR49yUYzSzEma9Q+UWNVnDR75LIS9Grbw Feb 13 20:30:24.098461 sshd[3257]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 20:30:24.101965 systemd-logind[1523]: New session 22 of user core. Feb 13 20:30:24.111880 systemd[1]: Started session-22.scope - Session 22 of User core. Feb 13 20:30:24.214360 sshd[3257]: pam_unix(sshd:session): session closed for user core Feb 13 20:30:24.217394 systemd[1]: sshd@21-10.0.0.9:22-10.0.0.1:53686.service: Deactivated successfully. Feb 13 20:30:24.219315 systemd-logind[1523]: Session 22 logged out. Waiting for processes to exit. Feb 13 20:30:24.219738 systemd[1]: session-22.scope: Deactivated successfully. Feb 13 20:30:24.220582 systemd-logind[1523]: Removed session 22. Feb 13 20:30:29.224850 systemd[1]: Started sshd@22-10.0.0.9:22-10.0.0.1:53694.service - OpenSSH per-connection server daemon (10.0.0.1:53694). Feb 13 20:30:29.259440 sshd[3273]: Accepted publickey for core from 10.0.0.1 port 53694 ssh2: RSA SHA256:TdhA+b7AFfsR49yUYzSzEma9Q+UWNVnDR75LIS9Grbw Feb 13 20:30:29.260643 sshd[3273]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 20:30:29.263943 systemd-logind[1523]: New session 23 of user core. Feb 13 20:30:29.281876 systemd[1]: Started session-23.scope - Session 23 of User core. Feb 13 20:30:29.387234 sshd[3273]: pam_unix(sshd:session): session closed for user core Feb 13 20:30:29.390413 systemd[1]: sshd@22-10.0.0.9:22-10.0.0.1:53694.service: Deactivated successfully. Feb 13 20:30:29.392266 systemd-logind[1523]: Session 23 logged out. Waiting for processes to exit. Feb 13 20:30:29.392333 systemd[1]: session-23.scope: Deactivated successfully. Feb 13 20:30:29.393339 systemd-logind[1523]: Removed session 23. 
Feb 13 20:30:32.593547 kubelet[2665]: E0213 20:30:32.593479 2665 kubelet_node_status.go:456] "Node not becoming ready in time after startup" Feb 13 20:30:32.600880 kubelet[2665]: E0213 20:30:32.600840 2665 kubelet.go:2900] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Feb 13 20:30:33.530573 kubelet[2665]: E0213 20:30:33.530519 2665 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 20:30:33.531384 kubelet[2665]: E0213 20:30:33.531239 2665 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"install-cni-plugin\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/flannel/flannel-cni-plugin:v1.1.2\\\"\"" pod="kube-flannel/kube-flannel-ds-5hbww" podUID="3f8b4fef-0cee-4a5a-b509-119c847b6786" Feb 13 20:30:34.078587 update_engine[1530]: I20250213 20:30:34.078477 1530 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Feb 13 20:30:34.079026 update_engine[1530]: I20250213 20:30:34.078838 1530 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Feb 13 20:30:34.079026 update_engine[1530]: I20250213 20:30:34.078994 1530 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. Feb 13 20:30:34.082900 update_engine[1530]: E20250213 20:30:34.082859 1530 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Feb 13 20:30:34.082963 update_engine[1530]: I20250213 20:30:34.082916 1530 libcurl_http_fetcher.cc:283] No HTTP response, retry 3 Feb 13 20:30:34.398857 systemd[1]: Started sshd@23-10.0.0.9:22-10.0.0.1:60806.service - OpenSSH per-connection server daemon (10.0.0.1:60806). Feb 13 20:30:34.433109 sshd[3292]: Accepted publickey for core from 10.0.0.1 port 60806 ssh2: RSA SHA256:TdhA+b7AFfsR49yUYzSzEma9Q+UWNVnDR75LIS9Grbw Feb 13 20:30:34.434348 sshd[3292]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 20:30:34.437589 systemd-logind[1523]: New session 24 of user core. Feb 13 20:30:34.447851 systemd[1]: Started session-24.scope - Session 24 of User core. Feb 13 20:30:34.553487 sshd[3292]: pam_unix(sshd:session): session closed for user core Feb 13 20:30:34.556523 systemd[1]: sshd@23-10.0.0.9:22-10.0.0.1:60806.service: Deactivated successfully. Feb 13 20:30:34.558715 systemd-logind[1523]: Session 24 logged out. Waiting for processes to exit. Feb 13 20:30:34.558716 systemd[1]: session-24.scope: Deactivated successfully. Feb 13 20:30:34.560336 systemd-logind[1523]: Removed session 24. Feb 13 20:30:37.601956 kubelet[2665]: E0213 20:30:37.601899 2665 kubelet.go:2900] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Feb 13 20:30:39.564970 systemd[1]: Started sshd@24-10.0.0.9:22-10.0.0.1:60810.service - OpenSSH per-connection server daemon (10.0.0.1:60810). Feb 13 20:30:39.599313 sshd[3309]: Accepted publickey for core from 10.0.0.1 port 60810 ssh2: RSA SHA256:TdhA+b7AFfsR49yUYzSzEma9Q+UWNVnDR75LIS9Grbw Feb 13 20:30:39.600523 sshd[3309]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 20:30:39.604278 systemd-logind[1523]: New session 25 of user core. Feb 13 20:30:39.614851 systemd[1]: Started session-25.scope - Session 25 of User core. 
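The two kubelet errors opening this stretch tie the whole log together: the node is stuck NotReady because the container runtime network is not ready, the network is not ready because no CNI plugin was ever initialized, and no CNI plugin was initialized because the install-cni-plugin init container, whose entire job per the container spec logged earlier is cp -f /flannel /opt/cni/bin/flannel, has never managed to pull its image. A host-side sketch of the two artifacts kubelet is waiting for (the config directory is the conventional default, an assumption here):

import glob
import os

CNI_BIN = "/opt/cni/bin/flannel"  # destination of the init container's cp, per the log
CNI_CONF_DIR = "/etc/cni/net.d"   # conventional CNI config directory (assumption)

def cni_status():
    have_bin = os.path.isfile(CNI_BIN) and os.access(CNI_BIN, os.X_OK)
    confs = sorted(glob.glob(os.path.join(CNI_CONF_DIR, "*.conf"))
                   + glob.glob(os.path.join(CNI_CONF_DIR, "*.conflist")))
    return have_bin, confs

if __name__ == "__main__":
    have_bin, confs = cni_status()
    print("flannel binary present: %s" % have_bin)
    print("network configs: %s" % (confs or "none"))

On this node both checks would come back empty until the image pull succeeds; everything else in the log is downstream of that.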
Feb 13 20:30:39.718163 sshd[3309]: pam_unix(sshd:session): session closed for user core Feb 13 20:30:39.721419 systemd[1]: sshd@24-10.0.0.9:22-10.0.0.1:60810.service: Deactivated successfully. Feb 13 20:30:39.723579 systemd[1]: session-25.scope: Deactivated successfully. Feb 13 20:30:39.723600 systemd-logind[1523]: Session 25 logged out. Waiting for processes to exit. Feb 13 20:30:39.725749 systemd-logind[1523]: Removed session 25. Feb 13 20:30:42.602812 kubelet[2665]: E0213 20:30:42.602776 2665 kubelet.go:2900] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Feb 13 20:30:44.078527 update_engine[1530]: I20250213 20:30:44.078435 1530 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Feb 13 20:30:44.078987 update_engine[1530]: I20250213 20:30:44.078752 1530 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Feb 13 20:30:44.078987 update_engine[1530]: I20250213 20:30:44.078908 1530 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. Feb 13 20:30:44.083545 update_engine[1530]: E20250213 20:30:44.083498 1530 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Feb 13 20:30:44.083646 update_engine[1530]: I20250213 20:30:44.083549 1530 libcurl_http_fetcher.cc:297] Transfer resulted in an error (0), 0 bytes downloaded Feb 13 20:30:44.083646 update_engine[1530]: I20250213 20:30:44.083560 1530 omaha_request_action.cc:617] Omaha request response: Feb 13 20:30:44.083698 update_engine[1530]: E20250213 20:30:44.083647 1530 omaha_request_action.cc:636] Omaha request network transfer failed. Feb 13 20:30:44.083698 update_engine[1530]: I20250213 20:30:44.083665 1530 action_processor.cc:68] ActionProcessor::ActionComplete: OmahaRequestAction action failed. Aborting processing. Feb 13 20:30:44.083698 update_engine[1530]: I20250213 20:30:44.083670 1530 action_processor.cc:73] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction Feb 13 20:30:44.083698 update_engine[1530]: I20250213 20:30:44.083675 1530 update_attempter.cc:306] Processing Done. Feb 13 20:30:44.083698 update_engine[1530]: E20250213 20:30:44.083688 1530 update_attempter.cc:619] Update failed. Feb 13 20:30:44.083698 update_engine[1530]: I20250213 20:30:44.083693 1530 utils.cc:600] Converting error code 2000 to kActionCodeOmahaErrorInHTTPResponse Feb 13 20:30:44.083698 update_engine[1530]: I20250213 20:30:44.083698 1530 payload_state.cc:97] Updating payload state for error code: 37 (kActionCodeOmahaErrorInHTTPResponse) Feb 13 20:30:44.083834 update_engine[1530]: I20250213 20:30:44.083703 1530 payload_state.cc:103] Ignoring failures until we get a valid Omaha response. 
Feb 13 20:30:44.083834 update_engine[1530]: I20250213 20:30:44.083770 1530 action_processor.cc:36] ActionProcessor::StartProcessing: OmahaRequestAction Feb 13 20:30:44.083834 update_engine[1530]: I20250213 20:30:44.083789 1530 omaha_request_action.cc:271] Posting an Omaha request to disabled Feb 13 20:30:44.083834 update_engine[1530]: I20250213 20:30:44.083794 1530 omaha_request_action.cc:272] Request: Feb 13 20:30:44.083834 update_engine[1530]: Feb 13 20:30:44.083834 update_engine[1530]: Feb 13 20:30:44.083834 update_engine[1530]: Feb 13 20:30:44.083834 update_engine[1530]: Feb 13 20:30:44.083834 update_engine[1530]: Feb 13 20:30:44.083834 update_engine[1530]: Feb 13 20:30:44.083834 update_engine[1530]: I20250213 20:30:44.083801 1530 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Feb 13 20:30:44.084039 update_engine[1530]: I20250213 20:30:44.083941 1530 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Feb 13 20:30:44.084065 update_engine[1530]: I20250213 20:30:44.084053 1530 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. Feb 13 20:30:44.084133 locksmithd[1557]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_REPORTING_ERROR_EVENT" NewVersion=0.0.0 NewSize=0 Feb 13 20:30:44.087976 update_engine[1530]: E20250213 20:30:44.087939 1530 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Feb 13 20:30:44.088027 update_engine[1530]: I20250213 20:30:44.087985 1530 libcurl_http_fetcher.cc:297] Transfer resulted in an error (0), 0 bytes downloaded Feb 13 20:30:44.088027 update_engine[1530]: I20250213 20:30:44.087992 1530 omaha_request_action.cc:617] Omaha request response: Feb 13 20:30:44.088027 update_engine[1530]: I20250213 20:30:44.087998 1530 action_processor.cc:65] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction Feb 13 20:30:44.088027 update_engine[1530]: I20250213 20:30:44.088002 1530 action_processor.cc:73] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction Feb 13 20:30:44.088027 update_engine[1530]: I20250213 20:30:44.088007 1530 update_attempter.cc:306] Processing Done. Feb 13 20:30:44.088027 update_engine[1530]: I20250213 20:30:44.088012 1530 update_attempter.cc:310] Error event sent. Feb 13 20:30:44.088027 update_engine[1530]: I20250213 20:30:44.088019 1530 update_check_scheduler.cc:74] Next update check in 48m38s Feb 13 20:30:44.088265 locksmithd[1557]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_IDLE" NewVersion=0.0.0 NewSize=0 Feb 13 20:30:44.531101 kubelet[2665]: E0213 20:30:44.530721 2665 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 20:30:44.531532 kubelet[2665]: E0213 20:30:44.531485 2665 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"install-cni-plugin\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/flannel/flannel-cni-plugin:v1.1.2\\\"\"" pod="kube-flannel/kube-flannel-ds-5hbww" podUID="3f8b4fef-0cee-4a5a-b509-119c847b6786" Feb 13 20:30:44.728835 systemd[1]: Started sshd@25-10.0.0.9:22-10.0.0.1:47754.service - OpenSSH per-connection server daemon (10.0.0.1:47754). 
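The update_engine failure above is expected rather than a second network problem: the Omaha request is posted to the literal hostname "disabled", so curl's "Could not resolve host: disabled" is the designed outcome of switching Flatcar's update checks off, and after three retries the attempt is written off (error 37, kActionCodeOmahaErrorInHTTPResponse) with the next check parked 48m38s out. On Flatcar that switch conventionally lives in /etc/flatcar/update.conf as SERVER=disabled; a sketch that surfaces it (path and key treated as assumptions):

UPDATE_CONF = "/etc/flatcar/update.conf"  # conventional location (assumption)

def omaha_server(path=UPDATE_CONF):
    try:
        with open(path) as f:
            for line in f:
                line = line.strip()
                if line and not line.startswith("#") and "=" in line:
                    key, _, value = line.partition("=")
                    if key.strip() == "SERVER":
                        return value.strip()
    except FileNotFoundError:
        pass
    return None

if __name__ == "__main__":
    server = omaha_server()
    if server == "disabled":
        print("updates deliberately disabled; the resolve failure is expected")
    else:
        print("Omaha server: %s" % (server or "default (SERVER unset)"))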
Feb 13 20:30:44.763206 sshd[3325]: Accepted publickey for core from 10.0.0.1 port 47754 ssh2: RSA SHA256:TdhA+b7AFfsR49yUYzSzEma9Q+UWNVnDR75LIS9Grbw Feb 13 20:30:44.764450 sshd[3325]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 20:30:44.768285 systemd-logind[1523]: New session 26 of user core. Feb 13 20:30:44.774860 systemd[1]: Started session-26.scope - Session 26 of User core. Feb 13 20:30:44.877591 sshd[3325]: pam_unix(sshd:session): session closed for user core Feb 13 20:30:44.880811 systemd[1]: sshd@25-10.0.0.9:22-10.0.0.1:47754.service: Deactivated successfully. Feb 13 20:30:44.882657 systemd-logind[1523]: Session 26 logged out. Waiting for processes to exit. Feb 13 20:30:44.882730 systemd[1]: session-26.scope: Deactivated successfully. Feb 13 20:30:44.883886 systemd-logind[1523]: Removed session 26. Feb 13 20:30:47.604106 kubelet[2665]: E0213 20:30:47.604055 2665 kubelet.go:2900] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Feb 13 20:30:49.889832 systemd[1]: Started sshd@26-10.0.0.9:22-10.0.0.1:47766.service - OpenSSH per-connection server daemon (10.0.0.1:47766). Feb 13 20:30:49.924055 sshd[3343]: Accepted publickey for core from 10.0.0.1 port 47766 ssh2: RSA SHA256:TdhA+b7AFfsR49yUYzSzEma9Q+UWNVnDR75LIS9Grbw Feb 13 20:30:49.925279 sshd[3343]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 20:30:49.928463 systemd-logind[1523]: New session 27 of user core. Feb 13 20:30:49.938826 systemd[1]: Started session-27.scope - Session 27 of User core. Feb 13 20:30:50.046279 sshd[3343]: pam_unix(sshd:session): session closed for user core Feb 13 20:30:50.050736 systemd[1]: sshd@26-10.0.0.9:22-10.0.0.1:47766.service: Deactivated successfully. Feb 13 20:30:50.052670 systemd-logind[1523]: Session 27 logged out. Waiting for processes to exit. Feb 13 20:30:50.052754 systemd[1]: session-27.scope: Deactivated successfully. Feb 13 20:30:50.054189 systemd-logind[1523]: Removed session 27. Feb 13 20:30:52.605590 kubelet[2665]: E0213 20:30:52.605534 2665 kubelet.go:2900] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Feb 13 20:30:55.052840 systemd[1]: Started sshd@27-10.0.0.9:22-10.0.0.1:57378.service - OpenSSH per-connection server daemon (10.0.0.1:57378). Feb 13 20:30:55.090634 sshd[3359]: Accepted publickey for core from 10.0.0.1 port 57378 ssh2: RSA SHA256:TdhA+b7AFfsR49yUYzSzEma9Q+UWNVnDR75LIS9Grbw Feb 13 20:30:55.091761 sshd[3359]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 20:30:55.096994 systemd-logind[1523]: New session 28 of user core. Feb 13 20:30:55.108963 systemd[1]: Started session-28.scope - Session 28 of User core. Feb 13 20:30:55.212890 sshd[3359]: pam_unix(sshd:session): session closed for user core Feb 13 20:30:55.215958 systemd[1]: sshd@27-10.0.0.9:22-10.0.0.1:57378.service: Deactivated successfully. Feb 13 20:30:55.217851 systemd[1]: session-28.scope: Deactivated successfully. Feb 13 20:30:55.218299 systemd-logind[1523]: Session 28 logged out. Waiting for processes to exit. Feb 13 20:30:55.219359 systemd-logind[1523]: Removed session 28. 
Feb 13 20:30:57.530656 kubelet[2665]: E0213 20:30:57.530407 2665 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 20:30:57.538450 kubelet[2665]: E0213 20:30:57.538410 2665 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"install-cni-plugin\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/flannel/flannel-cni-plugin:v1.1.2\\\"\"" pod="kube-flannel/kube-flannel-ds-5hbww" podUID="3f8b4fef-0cee-4a5a-b509-119c847b6786" Feb 13 20:30:57.606836 kubelet[2665]: E0213 20:30:57.606798 2665 kubelet.go:2900] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Feb 13 20:31:00.232944 systemd[1]: Started sshd@28-10.0.0.9:22-10.0.0.1:57380.service - OpenSSH per-connection server daemon (10.0.0.1:57380). Feb 13 20:31:00.267405 sshd[3375]: Accepted publickey for core from 10.0.0.1 port 57380 ssh2: RSA SHA256:TdhA+b7AFfsR49yUYzSzEma9Q+UWNVnDR75LIS9Grbw Feb 13 20:31:00.268526 sshd[3375]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 20:31:00.272580 systemd-logind[1523]: New session 29 of user core. Feb 13 20:31:00.279860 systemd[1]: Started session-29.scope - Session 29 of User core. Feb 13 20:31:00.383349 sshd[3375]: pam_unix(sshd:session): session closed for user core Feb 13 20:31:00.385969 systemd[1]: sshd@28-10.0.0.9:22-10.0.0.1:57380.service: Deactivated successfully. Feb 13 20:31:00.388539 systemd-logind[1523]: Session 29 logged out. Waiting for processes to exit. Feb 13 20:31:00.388710 systemd[1]: session-29.scope: Deactivated successfully. Feb 13 20:31:00.390402 systemd-logind[1523]: Removed session 29. Feb 13 20:31:02.607486 kubelet[2665]: E0213 20:31:02.607446 2665 kubelet.go:2900] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Feb 13 20:31:05.390861 systemd[1]: Started sshd@29-10.0.0.9:22-10.0.0.1:41344.service - OpenSSH per-connection server daemon (10.0.0.1:41344). Feb 13 20:31:05.425339 sshd[3392]: Accepted publickey for core from 10.0.0.1 port 41344 ssh2: RSA SHA256:TdhA+b7AFfsR49yUYzSzEma9Q+UWNVnDR75LIS9Grbw Feb 13 20:31:05.426492 sshd[3392]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 20:31:05.431212 systemd-logind[1523]: New session 30 of user core. Feb 13 20:31:05.440866 systemd[1]: Started session-30.scope - Session 30 of User core. Feb 13 20:31:05.542745 sshd[3392]: pam_unix(sshd:session): session closed for user core Feb 13 20:31:05.545871 systemd[1]: sshd@29-10.0.0.9:22-10.0.0.1:41344.service: Deactivated successfully. Feb 13 20:31:05.548030 systemd-logind[1523]: Session 30 logged out. Waiting for processes to exit. Feb 13 20:31:05.548038 systemd[1]: session-30.scope: Deactivated successfully. Feb 13 20:31:05.549188 systemd-logind[1523]: Removed session 30. 
Feb 13 20:31:07.608202 kubelet[2665]: E0213 20:31:07.608158 2665 kubelet.go:2900] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Feb 13 20:31:09.531154 kubelet[2665]: E0213 20:31:09.531094 2665 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 20:31:10.553884 systemd[1]: Started sshd@30-10.0.0.9:22-10.0.0.1:41352.service - OpenSSH per-connection server daemon (10.0.0.1:41352). Feb 13 20:31:10.588170 sshd[3411]: Accepted publickey for core from 10.0.0.1 port 41352 ssh2: RSA SHA256:TdhA+b7AFfsR49yUYzSzEma9Q+UWNVnDR75LIS9Grbw Feb 13 20:31:10.589323 sshd[3411]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 20:31:10.593153 systemd-logind[1523]: New session 31 of user core. Feb 13 20:31:10.601901 systemd[1]: Started session-31.scope - Session 31 of User core. Feb 13 20:31:10.707402 sshd[3411]: pam_unix(sshd:session): session closed for user core Feb 13 20:31:10.710381 systemd[1]: sshd@30-10.0.0.9:22-10.0.0.1:41352.service: Deactivated successfully. Feb 13 20:31:10.712211 systemd-logind[1523]: Session 31 logged out. Waiting for processes to exit. Feb 13 20:31:10.712266 systemd[1]: session-31.scope: Deactivated successfully. Feb 13 20:31:10.713268 systemd-logind[1523]: Removed session 31. Feb 13 20:31:12.530189 kubelet[2665]: E0213 20:31:12.530112 2665 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 20:31:12.531108 kubelet[2665]: E0213 20:31:12.530839 2665 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"install-cni-plugin\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/flannel/flannel-cni-plugin:v1.1.2\\\"\"" pod="kube-flannel/kube-flannel-ds-5hbww" podUID="3f8b4fef-0cee-4a5a-b509-119c847b6786" Feb 13 20:31:12.609035 kubelet[2665]: E0213 20:31:12.609000 2665 kubelet.go:2900] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Feb 13 20:31:15.717889 systemd[1]: Started sshd@31-10.0.0.9:22-10.0.0.1:41592.service - OpenSSH per-connection server daemon (10.0.0.1:41592). Feb 13 20:31:15.752319 sshd[3428]: Accepted publickey for core from 10.0.0.1 port 41592 ssh2: RSA SHA256:TdhA+b7AFfsR49yUYzSzEma9Q+UWNVnDR75LIS9Grbw Feb 13 20:31:15.753469 sshd[3428]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 20:31:15.756676 systemd-logind[1523]: New session 32 of user core. Feb 13 20:31:15.763850 systemd[1]: Started session-32.scope - Session 32 of User core. Feb 13 20:31:15.869853 sshd[3428]: pam_unix(sshd:session): session closed for user core Feb 13 20:31:15.872324 systemd[1]: sshd@31-10.0.0.9:22-10.0.0.1:41592.service: Deactivated successfully. Feb 13 20:31:15.875263 systemd-logind[1523]: Session 32 logged out. Waiting for processes to exit. Feb 13 20:31:15.875766 systemd[1]: session-32.scope: Deactivated successfully. Feb 13 20:31:15.877523 systemd-logind[1523]: Removed session 32. 
Feb 13 20:31:17.610685 kubelet[2665]: E0213 20:31:17.610646 2665 kubelet.go:2900] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Feb 13 20:31:20.880933 systemd[1]: Started sshd@32-10.0.0.9:22-10.0.0.1:41606.service - OpenSSH per-connection server daemon (10.0.0.1:41606). Feb 13 20:31:20.915166 sshd[3446]: Accepted publickey for core from 10.0.0.1 port 41606 ssh2: RSA SHA256:TdhA+b7AFfsR49yUYzSzEma9Q+UWNVnDR75LIS9Grbw Feb 13 20:31:20.916282 sshd[3446]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 20:31:20.920373 systemd-logind[1523]: New session 33 of user core. Feb 13 20:31:20.925845 systemd[1]: Started session-33.scope - Session 33 of User core. Feb 13 20:31:21.030386 sshd[3446]: pam_unix(sshd:session): session closed for user core Feb 13 20:31:21.033399 systemd[1]: sshd@32-10.0.0.9:22-10.0.0.1:41606.service: Deactivated successfully. Feb 13 20:31:21.035339 systemd-logind[1523]: Session 33 logged out. Waiting for processes to exit. Feb 13 20:31:21.035401 systemd[1]: session-33.scope: Deactivated successfully. Feb 13 20:31:21.037311 systemd-logind[1523]: Removed session 33. Feb 13 20:31:21.530519 kubelet[2665]: E0213 20:31:21.530481 2665 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 20:31:22.611330 kubelet[2665]: E0213 20:31:22.611296 2665 kubelet.go:2900] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Feb 13 20:31:23.531331 kubelet[2665]: E0213 20:31:23.531290 2665 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 20:31:24.531197 kubelet[2665]: E0213 20:31:24.530949 2665 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 20:31:24.531871 kubelet[2665]: E0213 20:31:24.531777 2665 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"install-cni-plugin\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/flannel/flannel-cni-plugin:v1.1.2\\\"\"" pod="kube-flannel/kube-flannel-ds-5hbww" podUID="3f8b4fef-0cee-4a5a-b509-119c847b6786" Feb 13 20:31:26.038866 systemd[1]: Started sshd@33-10.0.0.9:22-10.0.0.1:58950.service - OpenSSH per-connection server daemon (10.0.0.1:58950). Feb 13 20:31:26.073579 sshd[3462]: Accepted publickey for core from 10.0.0.1 port 58950 ssh2: RSA SHA256:TdhA+b7AFfsR49yUYzSzEma9Q+UWNVnDR75LIS9Grbw Feb 13 20:31:26.074799 sshd[3462]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 20:31:26.078091 systemd-logind[1523]: New session 34 of user core. Feb 13 20:31:26.087856 systemd[1]: Started session-34.scope - Session 34 of User core. Feb 13 20:31:26.193180 sshd[3462]: pam_unix(sshd:session): session closed for user core Feb 13 20:31:26.196588 systemd[1]: sshd@33-10.0.0.9:22-10.0.0.1:58950.service: Deactivated successfully. Feb 13 20:31:26.198553 systemd-logind[1523]: Session 34 logged out. Waiting for processes to exit. Feb 13 20:31:26.198612 systemd[1]: session-34.scope: Deactivated successfully. 
Feb 13 20:31:26.201242 systemd-logind[1523]: Removed session 34. Feb 13 20:31:27.612851 kubelet[2665]: E0213 20:31:27.612813 2665 kubelet.go:2900] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Feb 13 20:31:31.211836 systemd[1]: Started sshd@34-10.0.0.9:22-10.0.0.1:58964.service - OpenSSH per-connection server daemon (10.0.0.1:58964). Feb 13 20:31:31.246223 sshd[3479]: Accepted publickey for core from 10.0.0.1 port 58964 ssh2: RSA SHA256:TdhA+b7AFfsR49yUYzSzEma9Q+UWNVnDR75LIS9Grbw Feb 13 20:31:31.247376 sshd[3479]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 20:31:31.251117 systemd-logind[1523]: New session 35 of user core. Feb 13 20:31:31.261918 systemd[1]: Started session-35.scope - Session 35 of User core. Feb 13 20:31:31.367504 sshd[3479]: pam_unix(sshd:session): session closed for user core Feb 13 20:31:31.370776 systemd[1]: sshd@34-10.0.0.9:22-10.0.0.1:58964.service: Deactivated successfully. Feb 13 20:31:31.372742 systemd[1]: session-35.scope: Deactivated successfully. Feb 13 20:31:31.373034 systemd-logind[1523]: Session 35 logged out. Waiting for processes to exit. Feb 13 20:31:31.374107 systemd-logind[1523]: Removed session 35. Feb 13 20:31:32.613905 kubelet[2665]: E0213 20:31:32.613872 2665 kubelet.go:2900] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Feb 13 20:31:36.376854 systemd[1]: Started sshd@35-10.0.0.9:22-10.0.0.1:55660.service - OpenSSH per-connection server daemon (10.0.0.1:55660). Feb 13 20:31:36.411179 sshd[3497]: Accepted publickey for core from 10.0.0.1 port 55660 ssh2: RSA SHA256:TdhA+b7AFfsR49yUYzSzEma9Q+UWNVnDR75LIS9Grbw Feb 13 20:31:36.412291 sshd[3497]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 20:31:36.415632 systemd-logind[1523]: New session 36 of user core. Feb 13 20:31:36.422850 systemd[1]: Started session-36.scope - Session 36 of User core. Feb 13 20:31:36.526286 sshd[3497]: pam_unix(sshd:session): session closed for user core Feb 13 20:31:36.529155 systemd[1]: sshd@35-10.0.0.9:22-10.0.0.1:55660.service: Deactivated successfully. Feb 13 20:31:36.531052 systemd-logind[1523]: Session 36 logged out. Waiting for processes to exit. Feb 13 20:31:36.531478 kubelet[2665]: E0213 20:31:36.531122 2665 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 20:31:36.531233 systemd[1]: session-36.scope: Deactivated successfully. Feb 13 20:31:36.533124 systemd-logind[1523]: Removed session 36. 
Feb 13 20:31:37.530360 kubelet[2665]: E0213 20:31:37.530318 2665 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 20:31:37.531033 kubelet[2665]: E0213 20:31:37.530984 2665 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"install-cni-plugin\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/flannel/flannel-cni-plugin:v1.1.2\\\"\"" pod="kube-flannel/kube-flannel-ds-5hbww" podUID="3f8b4fef-0cee-4a5a-b509-119c847b6786" Feb 13 20:31:37.615113 kubelet[2665]: E0213 20:31:37.615084 2665 kubelet.go:2900] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Feb 13 20:31:41.541839 systemd[1]: Started sshd@36-10.0.0.9:22-10.0.0.1:55666.service - OpenSSH per-connection server daemon (10.0.0.1:55666). Feb 13 20:31:41.576013 sshd[3513]: Accepted publickey for core from 10.0.0.1 port 55666 ssh2: RSA SHA256:TdhA+b7AFfsR49yUYzSzEma9Q+UWNVnDR75LIS9Grbw Feb 13 20:31:41.577128 sshd[3513]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 20:31:41.580445 systemd-logind[1523]: New session 37 of user core. Feb 13 20:31:41.592920 systemd[1]: Started session-37.scope - Session 37 of User core. Feb 13 20:31:41.699907 sshd[3513]: pam_unix(sshd:session): session closed for user core Feb 13 20:31:41.702437 systemd[1]: sshd@36-10.0.0.9:22-10.0.0.1:55666.service: Deactivated successfully. Feb 13 20:31:41.705029 systemd-logind[1523]: Session 37 logged out. Waiting for processes to exit. Feb 13 20:31:41.705153 systemd[1]: session-37.scope: Deactivated successfully. Feb 13 20:31:41.706305 systemd-logind[1523]: Removed session 37. Feb 13 20:31:42.616275 kubelet[2665]: E0213 20:31:42.616237 2665 kubelet.go:2900] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Feb 13 20:31:46.710837 systemd[1]: Started sshd@37-10.0.0.9:22-10.0.0.1:38740.service - OpenSSH per-connection server daemon (10.0.0.1:38740). Feb 13 20:31:46.745992 sshd[3529]: Accepted publickey for core from 10.0.0.1 port 38740 ssh2: RSA SHA256:TdhA+b7AFfsR49yUYzSzEma9Q+UWNVnDR75LIS9Grbw Feb 13 20:31:46.747148 sshd[3529]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 20:31:46.750441 systemd-logind[1523]: New session 38 of user core. Feb 13 20:31:46.757913 systemd[1]: Started session-38.scope - Session 38 of User core. Feb 13 20:31:46.861939 sshd[3529]: pam_unix(sshd:session): session closed for user core Feb 13 20:31:46.864963 systemd[1]: sshd@37-10.0.0.9:22-10.0.0.1:38740.service: Deactivated successfully. Feb 13 20:31:46.867106 systemd[1]: session-38.scope: Deactivated successfully. Feb 13 20:31:46.867203 systemd-logind[1523]: Session 38 logged out. Waiting for processes to exit. Feb 13 20:31:46.868335 systemd-logind[1523]: Removed session 38. 
Feb 13 20:31:47.617392 kubelet[2665]: E0213 20:31:47.617356 2665 kubelet.go:2900] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Feb 13 20:31:48.531136 kubelet[2665]: E0213 20:31:48.530953 2665 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 20:31:48.532130 containerd[1543]: time="2025-02-13T20:31:48.532011347Z" level=info msg="PullImage \"docker.io/flannel/flannel-cni-plugin:v1.1.2\"" Feb 13 20:31:49.673873 containerd[1543]: time="2025-02-13T20:31:49.673812145Z" level=error msg="PullImage \"docker.io/flannel/flannel-cni-plugin:v1.1.2\" failed" error="failed to pull and unpack image \"docker.io/flannel/flannel-cni-plugin:v1.1.2\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/flannel/flannel-cni-plugin/manifests/sha256:bf4b62b131666d040f35a327d906ee5a3418280b68a88d9b9c7e828057210443: 429 Too Many Requests - Server message: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit" Feb 13 20:31:49.674591 containerd[1543]: time="2025-02-13T20:31:49.673896107Z" level=info msg="stop pulling image docker.io/flannel/flannel-cni-plugin:v1.1.2: active requests=0, bytes read=11110" Feb 13 20:31:49.674645 kubelet[2665]: E0213 20:31:49.674019 2665 remote_image.go:180] "PullImage from image service failed" err="rpc error: code = Unknown desc = failed to pull and unpack image \"docker.io/flannel/flannel-cni-plugin:v1.1.2\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/flannel/flannel-cni-plugin/manifests/sha256:bf4b62b131666d040f35a327d906ee5a3418280b68a88d9b9c7e828057210443: 429 Too Many Requests - Server message: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit" image="docker.io/flannel/flannel-cni-plugin:v1.1.2" Feb 13 20:31:49.674645 kubelet[2665]: E0213 20:31:49.674064 2665 kuberuntime_image.go:55] "Failed to pull image" err="failed to pull and unpack image \"docker.io/flannel/flannel-cni-plugin:v1.1.2\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/flannel/flannel-cni-plugin/manifests/sha256:bf4b62b131666d040f35a327d906ee5a3418280b68a88d9b9c7e828057210443: 429 Too Many Requests - Server message: toomanyrequests: You have reached your pull rate limit. 
You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit" image="docker.io/flannel/flannel-cni-plugin:v1.1.2" Feb 13 20:31:49.674940 kubelet[2665]: E0213 20:31:49.674159 2665 kuberuntime_manager.go:1256] init container &Container{Name:install-cni-plugin,Image:docker.io/flannel/flannel-cni-plugin:v1.1.2,Command:[cp],Args:[-f /flannel /opt/cni/bin/flannel],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:cni-plugin,ReadOnly:false,MountPath:/opt/cni/bin,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-xr76x,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod kube-flannel-ds-5hbww_kube-flannel(3f8b4fef-0cee-4a5a-b509-119c847b6786): ErrImagePull: failed to pull and unpack image "docker.io/flannel/flannel-cni-plugin:v1.1.2": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/flannel/flannel-cni-plugin/manifests/sha256:bf4b62b131666d040f35a327d906ee5a3418280b68a88d9b9c7e828057210443: 429 Too Many Requests - Server message: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit Feb 13 20:31:49.675001 kubelet[2665]: E0213 20:31:49.674188 2665 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"install-cni-plugin\" with ErrImagePull: \"failed to pull and unpack image \\\"docker.io/flannel/flannel-cni-plugin:v1.1.2\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/flannel/flannel-cni-plugin/manifests/sha256:bf4b62b131666d040f35a327d906ee5a3418280b68a88d9b9c7e828057210443: 429 Too Many Requests - Server message: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit\"" pod="kube-flannel/kube-flannel-ds-5hbww" podUID="3f8b4fef-0cee-4a5a-b509-119c847b6786" Feb 13 20:31:51.877834 systemd[1]: Started sshd@38-10.0.0.9:22-10.0.0.1:38744.service - OpenSSH per-connection server daemon (10.0.0.1:38744). Feb 13 20:31:51.912252 sshd[3548]: Accepted publickey for core from 10.0.0.1 port 38744 ssh2: RSA SHA256:TdhA+b7AFfsR49yUYzSzEma9Q+UWNVnDR75LIS9Grbw Feb 13 20:31:51.913340 sshd[3548]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 20:31:51.917238 systemd-logind[1523]: New session 39 of user core. Feb 13 20:31:51.927835 systemd[1]: Started session-39.scope - Session 39 of User core. Feb 13 20:31:52.032922 sshd[3548]: pam_unix(sshd:session): session closed for user core Feb 13 20:31:52.036755 systemd[1]: sshd@38-10.0.0.9:22-10.0.0.1:38744.service: Deactivated successfully. Feb 13 20:31:52.038463 systemd-logind[1523]: Session 39 logged out. Waiting for processes to exit. Feb 13 20:31:52.038534 systemd[1]: session-39.scope: Deactivated successfully. 
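
The 429 responses above are Docker Hub's per-IP limit on anonymous pulls; every retry of the flannel-cni-plugin image draws on the same quota. Docker documents checking the current allowance by requesting an anonymous token for the ratelimitpreview/test repository and reading the RateLimit-* headers off a HEAD request for its manifest (a HEAD does not itself consume a pull). A sketch of that check; it needs the third-party requests package:

#!/usr/bin/env python3
# Query Docker Hub's pull-rate headers for this IP. Read-only check
# against the same registry endpoints visible in the log above.
import requests

TOKEN_URL = ("https://auth.docker.io/token"
             "?service=registry.docker.io"
             "&scope=repository:ratelimitpreview/test:pull")
MANIFEST_URL = ("https://registry-1.docker.io/v2/"
                "ratelimitpreview/test/manifests/latest")

token = requests.get(TOKEN_URL, timeout=10).json()["token"]
resp = requests.head(MANIFEST_URL, timeout=10,
                     headers={"Authorization": f"Bearer {token}"})
# Values look like "100;w=21600": 100 pulls per 21600-second window.
print("RateLimit-Limit:    ", resp.headers.get("ratelimit-limit"))
print("RateLimit-Remaining:", resp.headers.get("ratelimit-remaining"))
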
Feb 13 20:31:52.039786 systemd-logind[1523]: Removed session 39. Feb 13 20:31:52.618850 kubelet[2665]: E0213 20:31:52.618814 2665 kubelet.go:2900] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Feb 13 20:31:57.042353 systemd[1]: Started sshd@39-10.0.0.9:22-10.0.0.1:53450.service - OpenSSH per-connection server daemon (10.0.0.1:53450). Feb 13 20:31:57.076541 sshd[3564]: Accepted publickey for core from 10.0.0.1 port 53450 ssh2: RSA SHA256:TdhA+b7AFfsR49yUYzSzEma9Q+UWNVnDR75LIS9Grbw Feb 13 20:31:57.077772 sshd[3564]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 20:31:57.081259 systemd-logind[1523]: New session 40 of user core. Feb 13 20:31:57.091891 systemd[1]: Started session-40.scope - Session 40 of User core. Feb 13 20:31:57.201534 sshd[3564]: pam_unix(sshd:session): session closed for user core Feb 13 20:31:57.206098 systemd[1]: sshd@39-10.0.0.9:22-10.0.0.1:53450.service: Deactivated successfully. Feb 13 20:31:57.208545 systemd-logind[1523]: Session 40 logged out. Waiting for processes to exit. Feb 13 20:31:57.208786 systemd[1]: session-40.scope: Deactivated successfully. Feb 13 20:31:57.209948 systemd-logind[1523]: Removed session 40. Feb 13 20:31:57.619711 kubelet[2665]: E0213 20:31:57.619665 2665 kubelet.go:2900] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Feb 13 20:32:02.217849 systemd[1]: Started sshd@40-10.0.0.9:22-10.0.0.1:53460.service - OpenSSH per-connection server daemon (10.0.0.1:53460). Feb 13 20:32:02.254950 sshd[3581]: Accepted publickey for core from 10.0.0.1 port 53460 ssh2: RSA SHA256:TdhA+b7AFfsR49yUYzSzEma9Q+UWNVnDR75LIS9Grbw Feb 13 20:32:02.256075 sshd[3581]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 20:32:02.259871 systemd-logind[1523]: New session 41 of user core. Feb 13 20:32:02.276857 systemd[1]: Started session-41.scope - Session 41 of User core. Feb 13 20:32:02.385388 sshd[3581]: pam_unix(sshd:session): session closed for user core Feb 13 20:32:02.393840 systemd[1]: Started sshd@41-10.0.0.9:22-10.0.0.1:53468.service - OpenSSH per-connection server daemon (10.0.0.1:53468). Feb 13 20:32:02.394213 systemd[1]: sshd@40-10.0.0.9:22-10.0.0.1:53460.service: Deactivated successfully. Feb 13 20:32:02.396047 systemd-logind[1523]: Session 41 logged out. Waiting for processes to exit. Feb 13 20:32:02.396592 systemd[1]: session-41.scope: Deactivated successfully. Feb 13 20:32:02.398159 systemd-logind[1523]: Removed session 41. Feb 13 20:32:02.428324 sshd[3595]: Accepted publickey for core from 10.0.0.1 port 53468 ssh2: RSA SHA256:TdhA+b7AFfsR49yUYzSzEma9Q+UWNVnDR75LIS9Grbw Feb 13 20:32:02.429497 sshd[3595]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 20:32:02.433744 systemd-logind[1523]: New session 42 of user core. Feb 13 20:32:02.442851 systemd[1]: Started session-42.scope - Session 42 of User core. Feb 13 20:32:02.583443 sshd[3595]: pam_unix(sshd:session): session closed for user core Feb 13 20:32:02.593874 systemd[1]: Started sshd@42-10.0.0.9:22-10.0.0.1:34516.service - OpenSSH per-connection server daemon (10.0.0.1:34516). Feb 13 20:32:02.594252 systemd[1]: sshd@41-10.0.0.9:22-10.0.0.1:53468.service: Deactivated successfully. Feb 13 20:32:02.597078 systemd[1]: session-42.scope: Deactivated successfully. 
Feb 13 20:32:02.598238 systemd-logind[1523]: Session 42 logged out. Waiting for processes to exit. Feb 13 20:32:02.602375 systemd-logind[1523]: Removed session 42. Feb 13 20:32:02.620546 kubelet[2665]: E0213 20:32:02.620513 2665 kubelet.go:2900] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Feb 13 20:32:02.644948 sshd[3609]: Accepted publickey for core from 10.0.0.1 port 34516 ssh2: RSA SHA256:TdhA+b7AFfsR49yUYzSzEma9Q+UWNVnDR75LIS9Grbw Feb 13 20:32:02.646370 sshd[3609]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 20:32:02.650411 systemd-logind[1523]: New session 43 of user core. Feb 13 20:32:02.663896 systemd[1]: Started session-43.scope - Session 43 of User core. Feb 13 20:32:02.769618 sshd[3609]: pam_unix(sshd:session): session closed for user core Feb 13 20:32:02.772775 systemd[1]: sshd@42-10.0.0.9:22-10.0.0.1:34516.service: Deactivated successfully. Feb 13 20:32:02.774914 systemd-logind[1523]: Session 43 logged out. Waiting for processes to exit. Feb 13 20:32:02.775063 systemd[1]: session-43.scope: Deactivated successfully. Feb 13 20:32:02.776336 systemd-logind[1523]: Removed session 43. Feb 13 20:32:03.530174 kubelet[2665]: E0213 20:32:03.530020 2665 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 20:32:03.530734 kubelet[2665]: E0213 20:32:03.530696 2665 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"install-cni-plugin\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/flannel/flannel-cni-plugin:v1.1.2\\\"\"" pod="kube-flannel/kube-flannel-ds-5hbww" podUID="3f8b4fef-0cee-4a5a-b509-119c847b6786" Feb 13 20:32:07.622433 kubelet[2665]: E0213 20:32:07.622379 2665 kubelet.go:2900] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Feb 13 20:32:07.780959 systemd[1]: Started sshd@43-10.0.0.9:22-10.0.0.1:34524.service - OpenSSH per-connection server daemon (10.0.0.1:34524). Feb 13 20:32:07.815242 sshd[3627]: Accepted publickey for core from 10.0.0.1 port 34524 ssh2: RSA SHA256:TdhA+b7AFfsR49yUYzSzEma9Q+UWNVnDR75LIS9Grbw Feb 13 20:32:07.816433 sshd[3627]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 20:32:07.820220 systemd-logind[1523]: New session 44 of user core. Feb 13 20:32:07.831885 systemd[1]: Started session-44.scope - Session 44 of User core. Feb 13 20:32:07.939827 sshd[3627]: pam_unix(sshd:session): session closed for user core Feb 13 20:32:07.942996 systemd[1]: sshd@43-10.0.0.9:22-10.0.0.1:34524.service: Deactivated successfully. Feb 13 20:32:07.945059 systemd-logind[1523]: Session 44 logged out. Waiting for processes to exit. Feb 13 20:32:07.945139 systemd[1]: session-44.scope: Deactivated successfully. Feb 13 20:32:07.947006 systemd-logind[1523]: Removed session 44. Feb 13 20:32:12.623756 kubelet[2665]: E0213 20:32:12.623711 2665 kubelet.go:2900] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Feb 13 20:32:12.954836 systemd[1]: Started sshd@44-10.0.0.9:22-10.0.0.1:51234.service - OpenSSH per-connection server daemon (10.0.0.1:51234). 
Feb 13 20:32:12.989346 sshd[3642]: Accepted publickey for core from 10.0.0.1 port 51234 ssh2: RSA SHA256:TdhA+b7AFfsR49yUYzSzEma9Q+UWNVnDR75LIS9Grbw Feb 13 20:32:12.990508 sshd[3642]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 20:32:12.993908 systemd-logind[1523]: New session 45 of user core. Feb 13 20:32:13.005946 systemd[1]: Started session-45.scope - Session 45 of User core. Feb 13 20:32:13.111036 sshd[3642]: pam_unix(sshd:session): session closed for user core Feb 13 20:32:13.114379 systemd[1]: sshd@44-10.0.0.9:22-10.0.0.1:51234.service: Deactivated successfully. Feb 13 20:32:13.116542 systemd-logind[1523]: Session 45 logged out. Waiting for processes to exit. Feb 13 20:32:13.117173 systemd[1]: session-45.scope: Deactivated successfully. Feb 13 20:32:13.118227 systemd-logind[1523]: Removed session 45. Feb 13 20:32:17.530439 kubelet[2665]: E0213 20:32:17.530389 2665 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 20:32:17.531051 kubelet[2665]: E0213 20:32:17.531002 2665 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"install-cni-plugin\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/flannel/flannel-cni-plugin:v1.1.2\\\"\"" pod="kube-flannel/kube-flannel-ds-5hbww" podUID="3f8b4fef-0cee-4a5a-b509-119c847b6786" Feb 13 20:32:17.624779 kubelet[2665]: E0213 20:32:17.624736 2665 kubelet.go:2900] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Feb 13 20:32:18.130921 systemd[1]: Started sshd@45-10.0.0.9:22-10.0.0.1:51240.service - OpenSSH per-connection server daemon (10.0.0.1:51240). Feb 13 20:32:18.164973 sshd[3658]: Accepted publickey for core from 10.0.0.1 port 51240 ssh2: RSA SHA256:TdhA+b7AFfsR49yUYzSzEma9Q+UWNVnDR75LIS9Grbw Feb 13 20:32:18.166503 sshd[3658]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 20:32:18.170326 systemd-logind[1523]: New session 46 of user core. Feb 13 20:32:18.179834 systemd[1]: Started session-46.scope - Session 46 of User core. Feb 13 20:32:18.285806 sshd[3658]: pam_unix(sshd:session): session closed for user core Feb 13 20:32:18.289380 systemd[1]: sshd@45-10.0.0.9:22-10.0.0.1:51240.service: Deactivated successfully. Feb 13 20:32:18.291368 systemd[1]: session-46.scope: Deactivated successfully. Feb 13 20:32:18.291389 systemd-logind[1523]: Session 46 logged out. Waiting for processes to exit. Feb 13 20:32:18.292668 systemd-logind[1523]: Removed session 46. Feb 13 20:32:22.626268 kubelet[2665]: E0213 20:32:22.626223 2665 kubelet.go:2900] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Feb 13 20:32:23.298828 systemd[1]: Started sshd@46-10.0.0.9:22-10.0.0.1:43770.service - OpenSSH per-connection server daemon (10.0.0.1:43770). Feb 13 20:32:23.333340 sshd[3675]: Accepted publickey for core from 10.0.0.1 port 43770 ssh2: RSA SHA256:TdhA+b7AFfsR49yUYzSzEma9Q+UWNVnDR75LIS9Grbw Feb 13 20:32:23.334488 sshd[3675]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 20:32:23.338375 systemd-logind[1523]: New session 47 of user core. Feb 13 20:32:23.348841 systemd[1]: Started session-47.scope - Session 47 of User core. 
Feb 13 20:32:23.454766 sshd[3675]: pam_unix(sshd:session): session closed for user core Feb 13 20:32:23.457368 systemd-logind[1523]: Session 47 logged out. Waiting for processes to exit. Feb 13 20:32:23.457579 systemd[1]: sshd@46-10.0.0.9:22-10.0.0.1:43770.service: Deactivated successfully. Feb 13 20:32:23.460061 systemd[1]: session-47.scope: Deactivated successfully. Feb 13 20:32:23.460885 systemd-logind[1523]: Removed session 47. Feb 13 20:32:27.530349 kubelet[2665]: E0213 20:32:27.530320 2665 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 20:32:27.627595 kubelet[2665]: E0213 20:32:27.627554 2665 kubelet.go:2900] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Feb 13 20:32:28.466837 systemd[1]: Started sshd@47-10.0.0.9:22-10.0.0.1:43780.service - OpenSSH per-connection server daemon (10.0.0.1:43780). Feb 13 20:32:28.501135 sshd[3691]: Accepted publickey for core from 10.0.0.1 port 43780 ssh2: RSA SHA256:TdhA+b7AFfsR49yUYzSzEma9Q+UWNVnDR75LIS9Grbw Feb 13 20:32:28.502261 sshd[3691]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 20:32:28.506055 systemd-logind[1523]: New session 48 of user core. Feb 13 20:32:28.517848 systemd[1]: Started session-48.scope - Session 48 of User core. Feb 13 20:32:28.622120 sshd[3691]: pam_unix(sshd:session): session closed for user core Feb 13 20:32:28.625586 systemd[1]: sshd@47-10.0.0.9:22-10.0.0.1:43780.service: Deactivated successfully. Feb 13 20:32:28.627537 systemd-logind[1523]: Session 48 logged out. Waiting for processes to exit. Feb 13 20:32:28.627697 systemd[1]: session-48.scope: Deactivated successfully. Feb 13 20:32:28.628828 systemd-logind[1523]: Removed session 48. Feb 13 20:32:30.531054 kubelet[2665]: E0213 20:32:30.531018 2665 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 20:32:30.531722 kubelet[2665]: E0213 20:32:30.531678 2665 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"install-cni-plugin\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/flannel/flannel-cni-plugin:v1.1.2\\\"\"" pod="kube-flannel/kube-flannel-ds-5hbww" podUID="3f8b4fef-0cee-4a5a-b509-119c847b6786" Feb 13 20:32:32.629090 kubelet[2665]: E0213 20:32:32.629053 2665 kubelet.go:2900] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Feb 13 20:32:33.643861 systemd[1]: Started sshd@48-10.0.0.9:22-10.0.0.1:41058.service - OpenSSH per-connection server daemon (10.0.0.1:41058). Feb 13 20:32:33.680438 sshd[3709]: Accepted publickey for core from 10.0.0.1 port 41058 ssh2: RSA SHA256:TdhA+b7AFfsR49yUYzSzEma9Q+UWNVnDR75LIS9Grbw Feb 13 20:32:33.681604 sshd[3709]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 20:32:33.685661 systemd-logind[1523]: New session 49 of user core. Feb 13 20:32:33.694921 systemd[1]: Started session-49.scope - Session 49 of User core. Feb 13 20:32:33.801247 sshd[3709]: pam_unix(sshd:session): session closed for user core Feb 13 20:32:33.804469 systemd[1]: sshd@48-10.0.0.9:22-10.0.0.1:41058.service: Deactivated successfully. 
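
The "cni plugin not initialized" message recurs every few seconds for the same underlying reason: the runtime reports NetworkReady=false until a CNI network config exists, and flannel's install-cni-plugin init container, which would put the plugin binary in place so that config can ever be written, never starts because its image cannot be pulled. A spot check of the two relevant locations, as a sketch assuming the stock containerd/flannel paths (not confirmed by this log):

#!/usr/bin/env python3
# Check the CNI artifacts that never materialize in this log: a network
# config under /etc/cni/net.d, and the flannel binary that the
# install-cni-plugin init container would copy into /opt/cni/bin.
from pathlib import Path

CONF_DIR = Path("/etc/cni/net.d")            # containerd's default conf_dir
FLANNEL_BIN = Path("/opt/cni/bin/flannel")   # target of 'cp -f /flannel ...'

confs = sorted(p.name for p in CONF_DIR.glob("*.conf*")) if CONF_DIR.is_dir() else []
print("CNI configs:", confs or "none")
print("flannel CNI binary present:", FLANNEL_BIN.exists())

The 'cp -f /flannel /opt/cni/bin/flannel' command is exactly what the init-container spec dumped by kuberuntime_manager.go above declares.
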
Feb 13 20:32:33.806327 systemd-logind[1523]: Session 49 logged out. Waiting for processes to exit. Feb 13 20:32:33.806396 systemd[1]: session-49.scope: Deactivated successfully. Feb 13 20:32:33.807566 systemd-logind[1523]: Removed session 49. Feb 13 20:32:34.530748 kubelet[2665]: E0213 20:32:34.530716 2665 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 20:32:37.629878 kubelet[2665]: E0213 20:32:37.629830 2665 kubelet.go:2900] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Feb 13 20:32:38.820848 systemd[1]: Started sshd@49-10.0.0.9:22-10.0.0.1:41068.service - OpenSSH per-connection server daemon (10.0.0.1:41068). Feb 13 20:32:38.855298 sshd[3724]: Accepted publickey for core from 10.0.0.1 port 41068 ssh2: RSA SHA256:TdhA+b7AFfsR49yUYzSzEma9Q+UWNVnDR75LIS9Grbw Feb 13 20:32:38.856461 sshd[3724]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 20:32:38.859874 systemd-logind[1523]: New session 50 of user core. Feb 13 20:32:38.868828 systemd[1]: Started session-50.scope - Session 50 of User core. Feb 13 20:32:38.975834 sshd[3724]: pam_unix(sshd:session): session closed for user core Feb 13 20:32:38.979007 systemd[1]: sshd@49-10.0.0.9:22-10.0.0.1:41068.service: Deactivated successfully. Feb 13 20:32:38.980944 systemd-logind[1523]: Session 50 logged out. Waiting for processes to exit. Feb 13 20:32:38.981024 systemd[1]: session-50.scope: Deactivated successfully. Feb 13 20:32:38.982347 systemd-logind[1523]: Removed session 50. Feb 13 20:32:41.530050 kubelet[2665]: E0213 20:32:41.530008 2665 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 20:32:41.530794 kubelet[2665]: E0213 20:32:41.530673 2665 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"install-cni-plugin\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/flannel/flannel-cni-plugin:v1.1.2\\\"\"" pod="kube-flannel/kube-flannel-ds-5hbww" podUID="3f8b4fef-0cee-4a5a-b509-119c847b6786" Feb 13 20:32:42.630739 kubelet[2665]: E0213 20:32:42.630691 2665 kubelet.go:2900] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Feb 13 20:32:43.997850 systemd[1]: Started sshd@50-10.0.0.9:22-10.0.0.1:57790.service - OpenSSH per-connection server daemon (10.0.0.1:57790). Feb 13 20:32:44.032463 sshd[3740]: Accepted publickey for core from 10.0.0.1 port 57790 ssh2: RSA SHA256:TdhA+b7AFfsR49yUYzSzEma9Q+UWNVnDR75LIS9Grbw Feb 13 20:32:44.033665 sshd[3740]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 20:32:44.037005 systemd-logind[1523]: New session 51 of user core. Feb 13 20:32:44.046200 systemd[1]: Started session-51.scope - Session 51 of User core. Feb 13 20:32:44.151893 sshd[3740]: pam_unix(sshd:session): session closed for user core Feb 13 20:32:44.155173 systemd[1]: sshd@50-10.0.0.9:22-10.0.0.1:57790.service: Deactivated successfully. Feb 13 20:32:44.155315 systemd-logind[1523]: Session 51 logged out. Waiting for processes to exit. Feb 13 20:32:44.157123 systemd[1]: session-51.scope: Deactivated successfully. 
Feb 13 20:32:44.158290 systemd-logind[1523]: Removed session 51. Feb 13 20:32:47.631955 kubelet[2665]: E0213 20:32:47.631894 2665 kubelet.go:2900] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Feb 13 20:32:49.166856 systemd[1]: Started sshd@51-10.0.0.9:22-10.0.0.1:57796.service - OpenSSH per-connection server daemon (10.0.0.1:57796). Feb 13 20:32:49.201802 sshd[3756]: Accepted publickey for core from 10.0.0.1 port 57796 ssh2: RSA SHA256:TdhA+b7AFfsR49yUYzSzEma9Q+UWNVnDR75LIS9Grbw Feb 13 20:32:49.202964 sshd[3756]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 20:32:49.206369 systemd-logind[1523]: New session 52 of user core. Feb 13 20:32:49.213854 systemd[1]: Started session-52.scope - Session 52 of User core. Feb 13 20:32:49.319121 sshd[3756]: pam_unix(sshd:session): session closed for user core Feb 13 20:32:49.322750 systemd[1]: sshd@51-10.0.0.9:22-10.0.0.1:57796.service: Deactivated successfully. Feb 13 20:32:49.324960 systemd-logind[1523]: Session 52 logged out. Waiting for processes to exit. Feb 13 20:32:49.325002 systemd[1]: session-52.scope: Deactivated successfully. Feb 13 20:32:49.326457 systemd-logind[1523]: Removed session 52. Feb 13 20:32:51.530967 kubelet[2665]: E0213 20:32:51.530932 2665 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 20:32:52.531021 kubelet[2665]: E0213 20:32:52.530978 2665 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 20:32:52.632579 kubelet[2665]: E0213 20:32:52.632504 2665 kubelet.go:2900] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Feb 13 20:32:54.330842 systemd[1]: Started sshd@52-10.0.0.9:22-10.0.0.1:53380.service - OpenSSH per-connection server daemon (10.0.0.1:53380). Feb 13 20:32:54.365244 sshd[3773]: Accepted publickey for core from 10.0.0.1 port 53380 ssh2: RSA SHA256:TdhA+b7AFfsR49yUYzSzEma9Q+UWNVnDR75LIS9Grbw Feb 13 20:32:54.366507 sshd[3773]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 20:32:54.370533 systemd-logind[1523]: New session 53 of user core. Feb 13 20:32:54.379851 systemd[1]: Started session-53.scope - Session 53 of User core. Feb 13 20:32:54.485544 sshd[3773]: pam_unix(sshd:session): session closed for user core Feb 13 20:32:54.488304 systemd[1]: sshd@52-10.0.0.9:22-10.0.0.1:53380.service: Deactivated successfully. Feb 13 20:32:54.491369 systemd-logind[1523]: Session 53 logged out. Waiting for processes to exit. Feb 13 20:32:54.491551 systemd[1]: session-53.scope: Deactivated successfully. Feb 13 20:32:54.493156 systemd-logind[1523]: Removed session 53. 
Feb 13 20:32:56.531141 kubelet[2665]: E0213 20:32:56.531101 2665 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 20:32:56.532322 kubelet[2665]: E0213 20:32:56.532227 2665 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"install-cni-plugin\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/flannel/flannel-cni-plugin:v1.1.2\\\"\"" pod="kube-flannel/kube-flannel-ds-5hbww" podUID="3f8b4fef-0cee-4a5a-b509-119c847b6786" Feb 13 20:32:57.634223 kubelet[2665]: E0213 20:32:57.634166 2665 kubelet.go:2900] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Feb 13 20:32:59.501834 systemd[1]: Started sshd@53-10.0.0.9:22-10.0.0.1:53392.service - OpenSSH per-connection server daemon (10.0.0.1:53392). Feb 13 20:32:59.536146 sshd[3790]: Accepted publickey for core from 10.0.0.1 port 53392 ssh2: RSA SHA256:TdhA+b7AFfsR49yUYzSzEma9Q+UWNVnDR75LIS9Grbw Feb 13 20:32:59.537327 sshd[3790]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 20:32:59.541201 systemd-logind[1523]: New session 54 of user core. Feb 13 20:32:59.552900 systemd[1]: Started session-54.scope - Session 54 of User core. Feb 13 20:32:59.658467 sshd[3790]: pam_unix(sshd:session): session closed for user core Feb 13 20:32:59.661436 systemd[1]: sshd@53-10.0.0.9:22-10.0.0.1:53392.service: Deactivated successfully. Feb 13 20:32:59.663288 systemd-logind[1523]: Session 54 logged out. Waiting for processes to exit. Feb 13 20:32:59.663341 systemd[1]: session-54.scope: Deactivated successfully. Feb 13 20:32:59.664281 systemd-logind[1523]: Removed session 54. Feb 13 20:33:02.635613 kubelet[2665]: E0213 20:33:02.635548 2665 kubelet.go:2900] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Feb 13 20:33:04.677940 systemd[1]: Started sshd@54-10.0.0.9:22-10.0.0.1:53664.service - OpenSSH per-connection server daemon (10.0.0.1:53664). Feb 13 20:33:04.712172 sshd[3805]: Accepted publickey for core from 10.0.0.1 port 53664 ssh2: RSA SHA256:TdhA+b7AFfsR49yUYzSzEma9Q+UWNVnDR75LIS9Grbw Feb 13 20:33:04.713300 sshd[3805]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 20:33:04.717571 systemd-logind[1523]: New session 55 of user core. Feb 13 20:33:04.731857 systemd[1]: Started session-55.scope - Session 55 of User core. Feb 13 20:33:04.837965 sshd[3805]: pam_unix(sshd:session): session closed for user core Feb 13 20:33:04.841124 systemd[1]: sshd@54-10.0.0.9:22-10.0.0.1:53664.service: Deactivated successfully. Feb 13 20:33:04.842992 systemd-logind[1523]: Session 55 logged out. Waiting for processes to exit. Feb 13 20:33:04.843129 systemd[1]: session-55.scope: Deactivated successfully. Feb 13 20:33:04.844040 systemd-logind[1523]: Removed session 55. 
Feb 13 20:33:07.637308 kubelet[2665]: E0213 20:33:07.637257 2665 kubelet.go:2900] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Feb 13 20:33:08.530709 kubelet[2665]: E0213 20:33:08.530665 2665 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 20:33:08.531571 kubelet[2665]: E0213 20:33:08.531537 2665 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"install-cni-plugin\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/flannel/flannel-cni-plugin:v1.1.2\\\"\"" pod="kube-flannel/kube-flannel-ds-5hbww" podUID="3f8b4fef-0cee-4a5a-b509-119c847b6786" Feb 13 20:33:09.847913 systemd[1]: Started sshd@55-10.0.0.9:22-10.0.0.1:53666.service - OpenSSH per-connection server daemon (10.0.0.1:53666). Feb 13 20:33:09.882542 sshd[3820]: Accepted publickey for core from 10.0.0.1 port 53666 ssh2: RSA SHA256:TdhA+b7AFfsR49yUYzSzEma9Q+UWNVnDR75LIS9Grbw Feb 13 20:33:09.883710 sshd[3820]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 20:33:09.887055 systemd-logind[1523]: New session 56 of user core. Feb 13 20:33:09.905855 systemd[1]: Started session-56.scope - Session 56 of User core. Feb 13 20:33:10.012456 sshd[3820]: pam_unix(sshd:session): session closed for user core Feb 13 20:33:10.014771 systemd[1]: sshd@55-10.0.0.9:22-10.0.0.1:53666.service: Deactivated successfully. Feb 13 20:33:10.017274 systemd-logind[1523]: Session 56 logged out. Waiting for processes to exit. Feb 13 20:33:10.017415 systemd[1]: session-56.scope: Deactivated successfully. Feb 13 20:33:10.018246 systemd-logind[1523]: Removed session 56. Feb 13 20:33:12.639032 kubelet[2665]: E0213 20:33:12.638992 2665 kubelet.go:2900] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Feb 13 20:33:15.022955 systemd[1]: Started sshd@56-10.0.0.9:22-10.0.0.1:44108.service - OpenSSH per-connection server daemon (10.0.0.1:44108). Feb 13 20:33:15.057499 sshd[3835]: Accepted publickey for core from 10.0.0.1 port 44108 ssh2: RSA SHA256:TdhA+b7AFfsR49yUYzSzEma9Q+UWNVnDR75LIS9Grbw Feb 13 20:33:15.058847 sshd[3835]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 20:33:15.062242 systemd-logind[1523]: New session 57 of user core. Feb 13 20:33:15.074845 systemd[1]: Started session-57.scope - Session 57 of User core. Feb 13 20:33:15.178400 sshd[3835]: pam_unix(sshd:session): session closed for user core Feb 13 20:33:15.181332 systemd-logind[1523]: Session 57 logged out. Waiting for processes to exit. Feb 13 20:33:15.182208 systemd[1]: sshd@56-10.0.0.9:22-10.0.0.1:44108.service: Deactivated successfully. Feb 13 20:33:15.184565 systemd[1]: session-57.scope: Deactivated successfully. Feb 13 20:33:15.185494 systemd-logind[1523]: Removed session 57. Feb 13 20:33:17.640181 kubelet[2665]: E0213 20:33:17.640118 2665 kubelet.go:2900] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Feb 13 20:33:20.195928 systemd[1]: Started sshd@57-10.0.0.9:22-10.0.0.1:44124.service - OpenSSH per-connection server daemon (10.0.0.1:44124). 
Feb 13 20:33:20.231425 sshd[3852]: Accepted publickey for core from 10.0.0.1 port 44124 ssh2: RSA SHA256:TdhA+b7AFfsR49yUYzSzEma9Q+UWNVnDR75LIS9Grbw Feb 13 20:33:20.232704 sshd[3852]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 20:33:20.236781 systemd-logind[1523]: New session 58 of user core. Feb 13 20:33:20.246860 systemd[1]: Started session-58.scope - Session 58 of User core. Feb 13 20:33:20.353940 sshd[3852]: pam_unix(sshd:session): session closed for user core Feb 13 20:33:20.357139 systemd[1]: sshd@57-10.0.0.9:22-10.0.0.1:44124.service: Deactivated successfully. Feb 13 20:33:20.359404 systemd-logind[1523]: Session 58 logged out. Waiting for processes to exit. Feb 13 20:33:20.359406 systemd[1]: session-58.scope: Deactivated successfully. Feb 13 20:33:20.360607 systemd-logind[1523]: Removed session 58. Feb 13 20:33:22.530525 kubelet[2665]: E0213 20:33:22.530486 2665 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 20:33:22.531766 kubelet[2665]: E0213 20:33:22.531359 2665 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"install-cni-plugin\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/flannel/flannel-cni-plugin:v1.1.2\\\"\"" pod="kube-flannel/kube-flannel-ds-5hbww" podUID="3f8b4fef-0cee-4a5a-b509-119c847b6786" Feb 13 20:33:22.641187 kubelet[2665]: E0213 20:33:22.641144 2665 kubelet.go:2900] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Feb 13 20:33:25.373033 systemd[1]: Started sshd@58-10.0.0.9:22-10.0.0.1:41634.service - OpenSSH per-connection server daemon (10.0.0.1:41634). Feb 13 20:33:25.407088 sshd[3867]: Accepted publickey for core from 10.0.0.1 port 41634 ssh2: RSA SHA256:TdhA+b7AFfsR49yUYzSzEma9Q+UWNVnDR75LIS9Grbw Feb 13 20:33:25.408221 sshd[3867]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 20:33:25.411565 systemd-logind[1523]: New session 59 of user core. Feb 13 20:33:25.417858 systemd[1]: Started session-59.scope - Session 59 of User core. Feb 13 20:33:25.523004 sshd[3867]: pam_unix(sshd:session): session closed for user core Feb 13 20:33:25.525705 systemd[1]: sshd@58-10.0.0.9:22-10.0.0.1:41634.service: Deactivated successfully. Feb 13 20:33:25.528465 systemd[1]: session-59.scope: Deactivated successfully. Feb 13 20:33:25.528660 systemd-logind[1523]: Session 59 logged out. Waiting for processes to exit. Feb 13 20:33:25.529986 systemd-logind[1523]: Removed session 59. Feb 13 20:33:27.642137 kubelet[2665]: E0213 20:33:27.642097 2665 kubelet.go:2900] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Feb 13 20:33:30.533856 systemd[1]: Started sshd@59-10.0.0.9:22-10.0.0.1:41644.service - OpenSSH per-connection server daemon (10.0.0.1:41644). Feb 13 20:33:30.568843 sshd[3882]: Accepted publickey for core from 10.0.0.1 port 41644 ssh2: RSA SHA256:TdhA+b7AFfsR49yUYzSzEma9Q+UWNVnDR75LIS9Grbw Feb 13 20:33:30.569999 sshd[3882]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 20:33:30.573849 systemd-logind[1523]: New session 60 of user core. Feb 13 20:33:30.585886 systemd[1]: Started session-60.scope - Session 60 of User core. 
Feb 13 20:33:30.689979 sshd[3882]: pam_unix(sshd:session): session closed for user core Feb 13 20:33:30.693013 systemd[1]: sshd@59-10.0.0.9:22-10.0.0.1:41644.service: Deactivated successfully. Feb 13 20:33:30.694866 systemd-logind[1523]: Session 60 logged out. Waiting for processes to exit. Feb 13 20:33:30.695265 systemd[1]: session-60.scope: Deactivated successfully. Feb 13 20:33:30.696195 systemd-logind[1523]: Removed session 60. Feb 13 20:33:32.643116 kubelet[2665]: E0213 20:33:32.643081 2665 kubelet.go:2900] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Feb 13 20:33:33.530450 kubelet[2665]: E0213 20:33:33.530403 2665 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 20:33:33.530724 kubelet[2665]: E0213 20:33:33.530662 2665 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 20:33:33.531229 kubelet[2665]: E0213 20:33:33.531158 2665 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"install-cni-plugin\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/flannel/flannel-cni-plugin:v1.1.2\\\"\"" pod="kube-flannel/kube-flannel-ds-5hbww" podUID="3f8b4fef-0cee-4a5a-b509-119c847b6786" Feb 13 20:33:35.712852 systemd[1]: Started sshd@60-10.0.0.9:22-10.0.0.1:45294.service - OpenSSH per-connection server daemon (10.0.0.1:45294). Feb 13 20:33:35.749674 sshd[3901]: Accepted publickey for core from 10.0.0.1 port 45294 ssh2: RSA SHA256:TdhA+b7AFfsR49yUYzSzEma9Q+UWNVnDR75LIS9Grbw Feb 13 20:33:35.750847 sshd[3901]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 20:33:35.756689 systemd-logind[1523]: New session 61 of user core. Feb 13 20:33:35.770904 systemd[1]: Started session-61.scope - Session 61 of User core. Feb 13 20:33:35.875714 sshd[3901]: pam_unix(sshd:session): session closed for user core Feb 13 20:33:35.878461 systemd[1]: sshd@60-10.0.0.9:22-10.0.0.1:45294.service: Deactivated successfully. Feb 13 20:33:35.880912 systemd-logind[1523]: Session 61 logged out. Waiting for processes to exit. Feb 13 20:33:35.880997 systemd[1]: session-61.scope: Deactivated successfully. Feb 13 20:33:35.882371 systemd-logind[1523]: Removed session 61. Feb 13 20:33:37.644761 kubelet[2665]: E0213 20:33:37.644723 2665 kubelet.go:2900] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Feb 13 20:33:40.885843 systemd[1]: Started sshd@61-10.0.0.9:22-10.0.0.1:45300.service - OpenSSH per-connection server daemon (10.0.0.1:45300). Feb 13 20:33:40.920109 sshd[3918]: Accepted publickey for core from 10.0.0.1 port 45300 ssh2: RSA SHA256:TdhA+b7AFfsR49yUYzSzEma9Q+UWNVnDR75LIS9Grbw Feb 13 20:33:40.921305 sshd[3918]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 20:33:40.925217 systemd-logind[1523]: New session 62 of user core. Feb 13 20:33:40.935844 systemd[1]: Started session-62.scope - Session 62 of User core. Feb 13 20:33:41.042476 sshd[3918]: pam_unix(sshd:session): session closed for user core Feb 13 20:33:41.044965 systemd[1]: sshd@61-10.0.0.9:22-10.0.0.1:45300.service: Deactivated successfully. 
Feb 13 20:33:41.047928 systemd[1]: session-62.scope: Deactivated successfully. Feb 13 20:33:41.048390 systemd-logind[1523]: Session 62 logged out. Waiting for processes to exit. Feb 13 20:33:41.049554 systemd-logind[1523]: Removed session 62. Feb 13 20:33:42.645759 kubelet[2665]: E0213 20:33:42.645706 2665 kubelet.go:2900] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Feb 13 20:33:45.531287 kubelet[2665]: E0213 20:33:45.531199 2665 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 20:33:46.052855 systemd[1]: Started sshd@62-10.0.0.9:22-10.0.0.1:51540.service - OpenSSH per-connection server daemon (10.0.0.1:51540). Feb 13 20:33:46.087281 sshd[3934]: Accepted publickey for core from 10.0.0.1 port 51540 ssh2: RSA SHA256:TdhA+b7AFfsR49yUYzSzEma9Q+UWNVnDR75LIS9Grbw Feb 13 20:33:46.088450 sshd[3934]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 20:33:46.092704 systemd-logind[1523]: New session 63 of user core. Feb 13 20:33:46.099944 systemd[1]: Started session-63.scope - Session 63 of User core. Feb 13 20:33:46.205821 sshd[3934]: pam_unix(sshd:session): session closed for user core Feb 13 20:33:46.209226 systemd[1]: sshd@62-10.0.0.9:22-10.0.0.1:51540.service: Deactivated successfully. Feb 13 20:33:46.211425 systemd-logind[1523]: Session 63 logged out. Waiting for processes to exit. Feb 13 20:33:46.211908 systemd[1]: session-63.scope: Deactivated successfully. Feb 13 20:33:46.212919 systemd-logind[1523]: Removed session 63. Feb 13 20:33:47.530472 kubelet[2665]: E0213 20:33:47.530420 2665 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 20:33:47.531113 kubelet[2665]: E0213 20:33:47.531067 2665 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"install-cni-plugin\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/flannel/flannel-cni-plugin:v1.1.2\\\"\"" pod="kube-flannel/kube-flannel-ds-5hbww" podUID="3f8b4fef-0cee-4a5a-b509-119c847b6786" Feb 13 20:33:47.647280 kubelet[2665]: E0213 20:33:47.647220 2665 kubelet.go:2900] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Feb 13 20:33:51.219860 systemd[1]: Started sshd@63-10.0.0.9:22-10.0.0.1:51542.service - OpenSSH per-connection server daemon (10.0.0.1:51542). Feb 13 20:33:51.254361 sshd[3953]: Accepted publickey for core from 10.0.0.1 port 51542 ssh2: RSA SHA256:TdhA+b7AFfsR49yUYzSzEma9Q+UWNVnDR75LIS9Grbw Feb 13 20:33:51.255540 sshd[3953]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 20:33:51.259275 systemd-logind[1523]: New session 64 of user core. Feb 13 20:33:51.269923 systemd[1]: Started session-64.scope - Session 64 of User core. Feb 13 20:33:51.374965 sshd[3953]: pam_unix(sshd:session): session closed for user core Feb 13 20:33:51.377693 systemd[1]: sshd@63-10.0.0.9:22-10.0.0.1:51542.service: Deactivated successfully. Feb 13 20:33:51.380222 systemd-logind[1523]: Session 64 logged out. Waiting for processes to exit. Feb 13 20:33:51.380375 systemd[1]: session-64.scope: Deactivated successfully. 
Feb 13 20:33:51.382040 systemd-logind[1523]: Removed session 64. Feb 13 20:33:52.647976 kubelet[2665]: E0213 20:33:52.647937 2665 kubelet.go:2900] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Feb 13 20:33:55.530590 kubelet[2665]: E0213 20:33:55.530551 2665 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 20:33:56.389851 systemd[1]: Started sshd@64-10.0.0.9:22-10.0.0.1:46746.service - OpenSSH per-connection server daemon (10.0.0.1:46746). Feb 13 20:33:56.424690 sshd[3969]: Accepted publickey for core from 10.0.0.1 port 46746 ssh2: RSA SHA256:TdhA+b7AFfsR49yUYzSzEma9Q+UWNVnDR75LIS9Grbw Feb 13 20:33:56.425906 sshd[3969]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 20:33:56.429705 systemd-logind[1523]: New session 65 of user core. Feb 13 20:33:56.433837 systemd[1]: Started session-65.scope - Session 65 of User core. Feb 13 20:33:56.530673 kubelet[2665]: E0213 20:33:56.530639 2665 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 20:33:56.539034 sshd[3969]: pam_unix(sshd:session): session closed for user core Feb 13 20:33:56.541539 systemd[1]: sshd@64-10.0.0.9:22-10.0.0.1:46746.service: Deactivated successfully. Feb 13 20:33:56.544250 systemd-logind[1523]: Session 65 logged out. Waiting for processes to exit. Feb 13 20:33:56.544460 systemd[1]: session-65.scope: Deactivated successfully. Feb 13 20:33:56.546425 systemd-logind[1523]: Removed session 65. Feb 13 20:33:57.649503 kubelet[2665]: E0213 20:33:57.649465 2665 kubelet.go:2900] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Feb 13 20:34:01.554841 systemd[1]: Started sshd@65-10.0.0.9:22-10.0.0.1:46750.service - OpenSSH per-connection server daemon (10.0.0.1:46750). Feb 13 20:34:01.589319 sshd[3984]: Accepted publickey for core from 10.0.0.1 port 46750 ssh2: RSA SHA256:TdhA+b7AFfsR49yUYzSzEma9Q+UWNVnDR75LIS9Grbw Feb 13 20:34:01.590530 sshd[3984]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 20:34:01.594551 systemd-logind[1523]: New session 66 of user core. Feb 13 20:34:01.604947 systemd[1]: Started session-66.scope - Session 66 of User core. Feb 13 20:34:01.712077 sshd[3984]: pam_unix(sshd:session): session closed for user core Feb 13 20:34:01.715352 systemd[1]: sshd@65-10.0.0.9:22-10.0.0.1:46750.service: Deactivated successfully. Feb 13 20:34:01.717355 systemd[1]: session-66.scope: Deactivated successfully. Feb 13 20:34:01.717358 systemd-logind[1523]: Session 66 logged out. Waiting for processes to exit. Feb 13 20:34:01.718556 systemd-logind[1523]: Removed session 66. 
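
The server message in the pull errors points at the fix: authenticated pulls get a larger quota than anonymous ones. On Kubernetes that usually means a kubernetes.io/dockerconfigjson Secret referenced from the pod spec via imagePullSecrets (here, the kube-flannel-ds DaemonSet). A sketch that only assembles the .dockerconfigjson payload; the credentials are placeholders, and the Secret/DaemonSet wiring is assumed rather than shown in this log:

#!/usr/bin/env python3
# Build the .dockerconfigjson payload for an image-pull Secret.
# USER and TOKEN are placeholders; substitute a real Docker Hub
# username and access token before use.
import base64, json

USER, TOKEN = "example-user", "example-access-token"  # placeholders
auth = base64.b64encode(f"{USER}:{TOKEN}".encode()).decode()
payload = {"auths": {"https://index.docker.io/v1/": {
    "username": USER, "password": TOKEN, "auth": auth}}}
print(json.dumps(payload, indent=2))

An alternative that sidesteps Docker Hub entirely is configuring a registry mirror in containerd and sourcing the image from there.
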
Feb 13 20:34:02.531602 kubelet[2665]: E0213 20:34:02.531563 2665 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 20:34:02.532415 kubelet[2665]: E0213 20:34:02.532164 2665 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"install-cni-plugin\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/flannel/flannel-cni-plugin:v1.1.2\\\"\"" pod="kube-flannel/kube-flannel-ds-5hbww" podUID="3f8b4fef-0cee-4a5a-b509-119c847b6786" Feb 13 20:34:02.650601 kubelet[2665]: E0213 20:34:02.650568 2665 kubelet.go:2900] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Feb 13 20:34:06.723892 systemd[1]: Started sshd@66-10.0.0.9:22-10.0.0.1:55888.service - OpenSSH per-connection server daemon (10.0.0.1:55888). Feb 13 20:34:06.758414 sshd[3999]: Accepted publickey for core from 10.0.0.1 port 55888 ssh2: RSA SHA256:TdhA+b7AFfsR49yUYzSzEma9Q+UWNVnDR75LIS9Grbw Feb 13 20:34:06.759561 sshd[3999]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 20:34:06.762892 systemd-logind[1523]: New session 67 of user core. Feb 13 20:34:06.770864 systemd[1]: Started session-67.scope - Session 67 of User core. Feb 13 20:34:06.875970 sshd[3999]: pam_unix(sshd:session): session closed for user core Feb 13 20:34:06.879435 systemd[1]: sshd@66-10.0.0.9:22-10.0.0.1:55888.service: Deactivated successfully. Feb 13 20:34:06.881115 systemd-logind[1523]: Session 67 logged out. Waiting for processes to exit. Feb 13 20:34:06.881245 systemd[1]: session-67.scope: Deactivated successfully. Feb 13 20:34:06.882374 systemd-logind[1523]: Removed session 67. Feb 13 20:34:07.652206 kubelet[2665]: E0213 20:34:07.652156 2665 kubelet.go:2900] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Feb 13 20:34:11.886916 systemd[1]: Started sshd@67-10.0.0.9:22-10.0.0.1:55898.service - OpenSSH per-connection server daemon (10.0.0.1:55898). Feb 13 20:34:11.925003 sshd[4014]: Accepted publickey for core from 10.0.0.1 port 55898 ssh2: RSA SHA256:TdhA+b7AFfsR49yUYzSzEma9Q+UWNVnDR75LIS9Grbw Feb 13 20:34:11.926130 sshd[4014]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 20:34:11.933226 systemd-logind[1523]: New session 68 of user core. Feb 13 20:34:11.944844 systemd[1]: Started session-68.scope - Session 68 of User core. Feb 13 20:34:12.054868 sshd[4014]: pam_unix(sshd:session): session closed for user core Feb 13 20:34:12.057393 systemd[1]: sshd@67-10.0.0.9:22-10.0.0.1:55898.service: Deactivated successfully. Feb 13 20:34:12.059210 systemd-logind[1523]: Session 68 logged out. Waiting for processes to exit. Feb 13 20:34:12.059849 systemd[1]: session-68.scope: Deactivated successfully. Feb 13 20:34:12.060897 systemd-logind[1523]: Removed session 68. Feb 13 20:34:12.652778 kubelet[2665]: E0213 20:34:12.652735 2665 kubelet.go:2900] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Feb 13 20:34:17.065843 systemd[1]: Started sshd@68-10.0.0.9:22-10.0.0.1:47826.service - OpenSSH per-connection server daemon (10.0.0.1:47826). 
Feb 13 20:34:17.100445 sshd[4030]: Accepted publickey for core from 10.0.0.1 port 47826 ssh2: RSA SHA256:TdhA+b7AFfsR49yUYzSzEma9Q+UWNVnDR75LIS9Grbw Feb 13 20:34:17.101591 sshd[4030]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 20:34:17.105286 systemd-logind[1523]: New session 69 of user core. Feb 13 20:34:17.116847 systemd[1]: Started session-69.scope - Session 69 of User core. Feb 13 20:34:17.220266 sshd[4030]: pam_unix(sshd:session): session closed for user core Feb 13 20:34:17.223824 systemd[1]: sshd@68-10.0.0.9:22-10.0.0.1:47826.service: Deactivated successfully. Feb 13 20:34:17.225703 systemd-logind[1523]: Session 69 logged out. Waiting for processes to exit. Feb 13 20:34:17.225840 systemd[1]: session-69.scope: Deactivated successfully. Feb 13 20:34:17.227039 systemd-logind[1523]: Removed session 69. Feb 13 20:34:17.530283 kubelet[2665]: E0213 20:34:17.530232 2665 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 20:34:17.531095 kubelet[2665]: E0213 20:34:17.531053 2665 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"install-cni-plugin\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/flannel/flannel-cni-plugin:v1.1.2\\\"\"" pod="kube-flannel/kube-flannel-ds-5hbww" podUID="3f8b4fef-0cee-4a5a-b509-119c847b6786" Feb 13 20:34:17.654292 kubelet[2665]: E0213 20:34:17.654208 2665 kubelet.go:2900] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Feb 13 20:34:22.230832 systemd[1]: Started sshd@69-10.0.0.9:22-10.0.0.1:47836.service - OpenSSH per-connection server daemon (10.0.0.1:47836). Feb 13 20:34:22.265315 sshd[4048]: Accepted publickey for core from 10.0.0.1 port 47836 ssh2: RSA SHA256:TdhA+b7AFfsR49yUYzSzEma9Q+UWNVnDR75LIS9Grbw Feb 13 20:34:22.266454 sshd[4048]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 20:34:22.269819 systemd-logind[1523]: New session 70 of user core. Feb 13 20:34:22.285851 systemd[1]: Started session-70.scope - Session 70 of User core. Feb 13 20:34:22.389230 sshd[4048]: pam_unix(sshd:session): session closed for user core Feb 13 20:34:22.392459 systemd[1]: sshd@69-10.0.0.9:22-10.0.0.1:47836.service: Deactivated successfully. Feb 13 20:34:22.394377 systemd-logind[1523]: Session 70 logged out. Waiting for processes to exit. Feb 13 20:34:22.394424 systemd[1]: session-70.scope: Deactivated successfully. Feb 13 20:34:22.396338 systemd-logind[1523]: Removed session 70. Feb 13 20:34:22.655747 kubelet[2665]: E0213 20:34:22.655645 2665 kubelet.go:2900] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Feb 13 20:34:27.399829 systemd[1]: Started sshd@70-10.0.0.9:22-10.0.0.1:58202.service - OpenSSH per-connection server daemon (10.0.0.1:58202). Feb 13 20:34:27.434497 sshd[4065]: Accepted publickey for core from 10.0.0.1 port 58202 ssh2: RSA SHA256:TdhA+b7AFfsR49yUYzSzEma9Q+UWNVnDR75LIS9Grbw Feb 13 20:34:27.435613 sshd[4065]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 20:34:27.438840 systemd-logind[1523]: New session 71 of user core. Feb 13 20:34:27.454845 systemd[1]: Started session-71.scope - Session 71 of User core. 
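
The next PullImage attempt, just below at 20:34:31, comes minutes after the previous failure rather than immediately: kubelet retries failing pulls under exponential backoff (by default 10s initial, doubling to a 300s cap), and the "Back-off pulling image" pod_workers errors above are that backoff being enforced. A sketch of the default schedule; the constants are kubelet defaults, not values read from this host:

#!/usr/bin/env python3
# Default kubelet image-pull backoff: 10s initial, doubled after each
# failure, capped at 300s. Shows why retries end up minutes apart.
delay, elapsed, cap = 10, 0, 300
for attempt in range(1, 9):
    elapsed += delay
    print(f"retry {attempt}: after {delay:>3d}s (cumulative {elapsed:>4d}s)")
    delay = min(delay * 2, cap)
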
Feb 13 20:34:27.558888 sshd[4065]: pam_unix(sshd:session): session closed for user core Feb 13 20:34:27.562124 systemd[1]: sshd@70-10.0.0.9:22-10.0.0.1:58202.service: Deactivated successfully. Feb 13 20:34:27.564121 systemd-logind[1523]: Session 71 logged out. Waiting for processes to exit. Feb 13 20:34:27.564501 systemd[1]: session-71.scope: Deactivated successfully. Feb 13 20:34:27.565435 systemd-logind[1523]: Removed session 71. Feb 13 20:34:27.656711 kubelet[2665]: E0213 20:34:27.656613 2665 kubelet.go:2900] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Feb 13 20:34:31.530193 kubelet[2665]: E0213 20:34:31.530077 2665 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 20:34:31.530971 containerd[1543]: time="2025-02-13T20:34:31.530920237Z" level=info msg="PullImage \"docker.io/flannel/flannel-cni-plugin:v1.1.2\"" Feb 13 20:34:32.571860 systemd[1]: Started sshd@71-10.0.0.9:22-10.0.0.1:33620.service - OpenSSH per-connection server daemon (10.0.0.1:33620). Feb 13 20:34:32.607441 sshd[4083]: Accepted publickey for core from 10.0.0.1 port 33620 ssh2: RSA SHA256:TdhA+b7AFfsR49yUYzSzEma9Q+UWNVnDR75LIS9Grbw Feb 13 20:34:32.608549 sshd[4083]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 20:34:32.611900 systemd-logind[1523]: New session 72 of user core. Feb 13 20:34:32.622873 systemd[1]: Started session-72.scope - Session 72 of User core. Feb 13 20:34:32.646458 containerd[1543]: time="2025-02-13T20:34:32.646372178Z" level=error msg="PullImage \"docker.io/flannel/flannel-cni-plugin:v1.1.2\" failed" error="failed to pull and unpack image \"docker.io/flannel/flannel-cni-plugin:v1.1.2\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/flannel/flannel-cni-plugin/manifests/sha256:bf4b62b131666d040f35a327d906ee5a3418280b68a88d9b9c7e828057210443: 429 Too Many Requests - Server message: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit" Feb 13 20:34:32.646827 containerd[1543]: time="2025-02-13T20:34:32.646461339Z" level=info msg="stop pulling image docker.io/flannel/flannel-cni-plugin:v1.1.2: active requests=0, bytes read=11110" Feb 13 20:34:32.646860 kubelet[2665]: E0213 20:34:32.646538 2665 remote_image.go:180] "PullImage from image service failed" err="rpc error: code = Unknown desc = failed to pull and unpack image \"docker.io/flannel/flannel-cni-plugin:v1.1.2\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/flannel/flannel-cni-plugin/manifests/sha256:bf4b62b131666d040f35a327d906ee5a3418280b68a88d9b9c7e828057210443: 429 Too Many Requests - Server message: toomanyrequests: You have reached your pull rate limit. 
You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit" image="docker.io/flannel/flannel-cni-plugin:v1.1.2" Feb 13 20:34:32.646860 kubelet[2665]: E0213 20:34:32.646578 2665 kuberuntime_image.go:55] "Failed to pull image" err="failed to pull and unpack image \"docker.io/flannel/flannel-cni-plugin:v1.1.2\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/flannel/flannel-cni-plugin/manifests/sha256:bf4b62b131666d040f35a327d906ee5a3418280b68a88d9b9c7e828057210443: 429 Too Many Requests - Server message: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit" image="docker.io/flannel/flannel-cni-plugin:v1.1.2" Feb 13 20:34:32.647133 kubelet[2665]: E0213 20:34:32.646683 2665 kuberuntime_manager.go:1256] init container &Container{Name:install-cni-plugin,Image:docker.io/flannel/flannel-cni-plugin:v1.1.2,Command:[cp],Args:[-f /flannel /opt/cni/bin/flannel],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:cni-plugin,ReadOnly:false,MountPath:/opt/cni/bin,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-xr76x,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod kube-flannel-ds-5hbww_kube-flannel(3f8b4fef-0cee-4a5a-b509-119c847b6786): ErrImagePull: failed to pull and unpack image "docker.io/flannel/flannel-cni-plugin:v1.1.2": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/flannel/flannel-cni-plugin/manifests/sha256:bf4b62b131666d040f35a327d906ee5a3418280b68a88d9b9c7e828057210443: 429 Too Many Requests - Server message: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit Feb 13 20:34:32.647204 kubelet[2665]: E0213 20:34:32.646710 2665 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"install-cni-plugin\" with ErrImagePull: \"failed to pull and unpack image \\\"docker.io/flannel/flannel-cni-plugin:v1.1.2\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/flannel/flannel-cni-plugin/manifests/sha256:bf4b62b131666d040f35a327d906ee5a3418280b68a88d9b9c7e828057210443: 429 Too Many Requests - Server message: toomanyrequests: You have reached your pull rate limit. 
You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit\"" pod="kube-flannel/kube-flannel-ds-5hbww" podUID="3f8b4fef-0cee-4a5a-b509-119c847b6786" Feb 13 20:34:32.657238 kubelet[2665]: E0213 20:34:32.657213 2665 kubelet.go:2900] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Feb 13 20:34:32.726741 sshd[4083]: pam_unix(sshd:session): session closed for user core Feb 13 20:34:32.729821 systemd[1]: sshd@71-10.0.0.9:22-10.0.0.1:33620.service: Deactivated successfully. Feb 13 20:34:32.731794 systemd[1]: session-72.scope: Deactivated successfully. Feb 13 20:34:32.731809 systemd-logind[1523]: Session 72 logged out. Waiting for processes to exit. Feb 13 20:34:32.733225 systemd-logind[1523]: Removed session 72. Feb 13 20:34:37.658865 kubelet[2665]: E0213 20:34:37.658817 2665 kubelet.go:2900] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Feb 13 20:34:37.737835 systemd[1]: Started sshd@72-10.0.0.9:22-10.0.0.1:33634.service - OpenSSH per-connection server daemon (10.0.0.1:33634). Feb 13 20:34:37.772439 sshd[4098]: Accepted publickey for core from 10.0.0.1 port 33634 ssh2: RSA SHA256:TdhA+b7AFfsR49yUYzSzEma9Q+UWNVnDR75LIS9Grbw Feb 13 20:34:37.773584 sshd[4098]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 20:34:37.776952 systemd-logind[1523]: New session 73 of user core. Feb 13 20:34:37.788839 systemd[1]: Started session-73.scope - Session 73 of User core. Feb 13 20:34:37.894620 sshd[4098]: pam_unix(sshd:session): session closed for user core Feb 13 20:34:37.897687 systemd[1]: sshd@72-10.0.0.9:22-10.0.0.1:33634.service: Deactivated successfully. Feb 13 20:34:37.899534 systemd-logind[1523]: Session 73 logged out. Waiting for processes to exit. Feb 13 20:34:37.899608 systemd[1]: session-73.scope: Deactivated successfully. Feb 13 20:34:37.900791 systemd-logind[1523]: Removed session 73. Feb 13 20:34:42.660141 kubelet[2665]: E0213 20:34:42.660087 2665 kubelet.go:2900] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Feb 13 20:34:42.909926 systemd[1]: Started sshd@73-10.0.0.9:22-10.0.0.1:33750.service - OpenSSH per-connection server daemon (10.0.0.1:33750). Feb 13 20:34:42.945040 sshd[4115]: Accepted publickey for core from 10.0.0.1 port 33750 ssh2: RSA SHA256:TdhA+b7AFfsR49yUYzSzEma9Q+UWNVnDR75LIS9Grbw Feb 13 20:34:42.946212 sshd[4115]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 20:34:42.949693 systemd-logind[1523]: New session 74 of user core. Feb 13 20:34:42.955884 systemd[1]: Started session-74.scope - Session 74 of User core. Feb 13 20:34:43.061817 sshd[4115]: pam_unix(sshd:session): session closed for user core Feb 13 20:34:43.065543 systemd[1]: sshd@73-10.0.0.9:22-10.0.0.1:33750.service: Deactivated successfully. Feb 13 20:34:43.067497 systemd-logind[1523]: Session 74 logged out. Waiting for processes to exit. Feb 13 20:34:43.067575 systemd[1]: session-74.scope: Deactivated successfully. Feb 13 20:34:43.068475 systemd-logind[1523]: Removed session 74. 
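The 429 from registry-1.docker.io above is Docker Hub's anonymous pull rate limit; each retry of the flannel-cni-plugin pull consumes another request from the same quota. Docker documents a probe repository whose manifest endpoint returns the current quota in RateLimit-* response headers. A sketch of that probe, assuming the documented auth.docker.io token endpoint and ratelimitpreview/test repository are still valid, and that HEAD requests do not count against the quota (per Docker's documentation at the time):

# ratelimit_probe.py - query Docker Hub's anonymous pull quota.
import json
import urllib.request

TOKEN_URL = ("https://auth.docker.io/token"
             "?service=registry.docker.io"
             "&scope=repository:ratelimitpreview/test:pull")
PROBE_URL = ("https://registry-1.docker.io/v2/"
             "ratelimitpreview/test/manifests/latest")

def docker_hub_rate_limit():
    # Fetch an anonymous token scoped to the rate-limit probe repo.
    with urllib.request.urlopen(TOKEN_URL) as resp:
        token = json.load(resp)["token"]
    # HEAD the manifest and read the rate-limit headers.
    req = urllib.request.Request(PROBE_URL, method="HEAD")
    req.add_header("Authorization", "Bearer " + token)
    with urllib.request.urlopen(req) as resp:
        # Values look like "100;w=21600": pulls per 21600-second window.
        return (resp.headers.get("RateLimit-Limit"),
                resp.headers.get("RateLimit-Remaining"))

if __name__ == "__main__":
    limit, remaining = docker_hub_rate_limit()
    print("limit=%s remaining=%s" % (limit, remaining))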
Feb 13 20:34:44.530137 kubelet[2665]: E0213 20:34:44.530091 2665 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 13 20:34:44.530755 kubelet[2665]: E0213 20:34:44.530717 2665 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"install-cni-plugin\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/flannel/flannel-cni-plugin:v1.1.2\\\"\"" pod="kube-flannel/kube-flannel-ds-5hbww" podUID="3f8b4fef-0cee-4a5a-b509-119c847b6786"
Feb 13 20:34:46.530476 kubelet[2665]: E0213 20:34:46.530435 2665 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 13 20:34:47.661616 kubelet[2665]: E0213 20:34:47.661556 2665 kubelet.go:2900] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Feb 13 20:34:48.077881 systemd[1]: Started sshd@74-10.0.0.9:22-10.0.0.1:33758.service - OpenSSH per-connection server daemon (10.0.0.1:33758).
Feb 13 20:34:48.112200 sshd[4131]: Accepted publickey for core from 10.0.0.1 port 33758 ssh2: RSA SHA256:TdhA+b7AFfsR49yUYzSzEma9Q+UWNVnDR75LIS9Grbw
Feb 13 20:34:48.113416 sshd[4131]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 20:34:48.116987 systemd-logind[1523]: New session 75 of user core.
Feb 13 20:34:48.127865 systemd[1]: Started session-75.scope - Session 75 of User core.
Feb 13 20:34:48.233683 sshd[4131]: pam_unix(sshd:session): session closed for user core
Feb 13 20:34:48.236717 systemd[1]: sshd@74-10.0.0.9:22-10.0.0.1:33758.service: Deactivated successfully.
Feb 13 20:34:48.238568 systemd-logind[1523]: Session 75 logged out. Waiting for processes to exit.
Feb 13 20:34:48.238654 systemd[1]: session-75.scope: Deactivated successfully.
Feb 13 20:34:48.239702 systemd-logind[1523]: Removed session 75.
Feb 13 20:34:52.662775 kubelet[2665]: E0213 20:34:52.662733 2665 kubelet.go:2900] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Feb 13 20:34:53.245854 systemd[1]: Started sshd@75-10.0.0.9:22-10.0.0.1:42156.service - OpenSSH per-connection server daemon (10.0.0.1:42156).
Feb 13 20:34:53.280350 sshd[4150]: Accepted publickey for core from 10.0.0.1 port 42156 ssh2: RSA SHA256:TdhA+b7AFfsR49yUYzSzEma9Q+UWNVnDR75LIS9Grbw
Feb 13 20:34:53.281561 sshd[4150]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 20:34:53.285610 systemd-logind[1523]: New session 76 of user core.
Feb 13 20:34:53.294907 systemd[1]: Started session-76.scope - Session 76 of User core.
Feb 13 20:34:53.404002 sshd[4150]: pam_unix(sshd:session): session closed for user core
Feb 13 20:34:53.407130 systemd[1]: sshd@75-10.0.0.9:22-10.0.0.1:42156.service: Deactivated successfully.
Feb 13 20:34:53.409275 systemd-logind[1523]: Session 76 logged out. Waiting for processes to exit.
Feb 13 20:34:53.409354 systemd[1]: session-76.scope: Deactivated successfully.
Feb 13 20:34:53.410256 systemd-logind[1523]: Removed session 76.
Feb 13 20:34:56.531005 kubelet[2665]: E0213 20:34:56.530962 2665 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 13 20:34:57.530385 kubelet[2665]: E0213 20:34:57.530335 2665 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 13 20:34:57.531181 kubelet[2665]: E0213 20:34:57.530968 2665 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"install-cni-plugin\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/flannel/flannel-cni-plugin:v1.1.2\\\"\"" pod="kube-flannel/kube-flannel-ds-5hbww" podUID="3f8b4fef-0cee-4a5a-b509-119c847b6786"
Feb 13 20:34:57.663759 kubelet[2665]: E0213 20:34:57.663724 2665 kubelet.go:2900] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Feb 13 20:34:58.418895 systemd[1]: Started sshd@76-10.0.0.9:22-10.0.0.1:42162.service - OpenSSH per-connection server daemon (10.0.0.1:42162).
Feb 13 20:34:58.453507 sshd[4165]: Accepted publickey for core from 10.0.0.1 port 42162 ssh2: RSA SHA256:TdhA+b7AFfsR49yUYzSzEma9Q+UWNVnDR75LIS9Grbw
Feb 13 20:34:58.454756 sshd[4165]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 20:34:58.458268 systemd-logind[1523]: New session 77 of user core.
Feb 13 20:34:58.471835 systemd[1]: Started session-77.scope - Session 77 of User core.
Feb 13 20:34:58.580867 sshd[4165]: pam_unix(sshd:session): session closed for user core
Feb 13 20:34:58.583885 systemd[1]: sshd@76-10.0.0.9:22-10.0.0.1:42162.service: Deactivated successfully.
Feb 13 20:34:58.586086 systemd-logind[1523]: Session 77 logged out. Waiting for processes to exit.
Feb 13 20:34:58.586086 systemd[1]: session-77.scope: Deactivated successfully.
Feb 13 20:34:58.587402 systemd-logind[1523]: Removed session 77.
Feb 13 20:35:02.664965 kubelet[2665]: E0213 20:35:02.664929 2665 kubelet.go:2900] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Feb 13 20:35:03.595967 systemd[1]: Started sshd@77-10.0.0.9:22-10.0.0.1:46362.service - OpenSSH per-connection server daemon (10.0.0.1:46362).
Feb 13 20:35:03.630869 sshd[4180]: Accepted publickey for core from 10.0.0.1 port 46362 ssh2: RSA SHA256:TdhA+b7AFfsR49yUYzSzEma9Q+UWNVnDR75LIS9Grbw
Feb 13 20:35:03.632084 sshd[4180]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 20:35:03.635684 systemd-logind[1523]: New session 78 of user core.
Feb 13 20:35:03.641865 systemd[1]: Started session-78.scope - Session 78 of User core.
Feb 13 20:35:03.747465 sshd[4180]: pam_unix(sshd:session): session closed for user core
Feb 13 20:35:03.754955 systemd[1]: Started sshd@78-10.0.0.9:22-10.0.0.1:46374.service - OpenSSH per-connection server daemon (10.0.0.1:46374).
Feb 13 20:35:03.755326 systemd[1]: sshd@77-10.0.0.9:22-10.0.0.1:46362.service: Deactivated successfully.
Feb 13 20:35:03.757805 systemd-logind[1523]: Session 78 logged out. Waiting for processes to exit.
Feb 13 20:35:03.758196 systemd[1]: session-78.scope: Deactivated successfully.
Feb 13 20:35:03.759895 systemd-logind[1523]: Removed session 78.
Feb 13 20:35:03.789516 sshd[4193]: Accepted publickey for core from 10.0.0.1 port 46374 ssh2: RSA SHA256:TdhA+b7AFfsR49yUYzSzEma9Q+UWNVnDR75LIS9Grbw
Feb 13 20:35:03.790837 sshd[4193]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 20:35:03.794689 systemd-logind[1523]: New session 79 of user core.
Feb 13 20:35:03.808915 systemd[1]: Started session-79.scope - Session 79 of User core.
Feb 13 20:35:03.992091 sshd[4193]: pam_unix(sshd:session): session closed for user core
Feb 13 20:35:04.001854 systemd[1]: Started sshd@79-10.0.0.9:22-10.0.0.1:46378.service - OpenSSH per-connection server daemon (10.0.0.1:46378).
Feb 13 20:35:04.002599 systemd[1]: sshd@78-10.0.0.9:22-10.0.0.1:46374.service: Deactivated successfully.
Feb 13 20:35:04.004030 systemd[1]: session-79.scope: Deactivated successfully.
Feb 13 20:35:04.005314 systemd-logind[1523]: Session 79 logged out. Waiting for processes to exit.
Feb 13 20:35:04.006279 systemd-logind[1523]: Removed session 79.
Feb 13 20:35:04.036856 sshd[4207]: Accepted publickey for core from 10.0.0.1 port 46378 ssh2: RSA SHA256:TdhA+b7AFfsR49yUYzSzEma9Q+UWNVnDR75LIS9Grbw
Feb 13 20:35:04.038256 sshd[4207]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 20:35:04.042457 systemd-logind[1523]: New session 80 of user core.
Feb 13 20:35:04.066948 systemd[1]: Started session-80.scope - Session 80 of User core.
Feb 13 20:35:05.149707 sshd[4207]: pam_unix(sshd:session): session closed for user core
Feb 13 20:35:05.156916 systemd[1]: Started sshd@80-10.0.0.9:22-10.0.0.1:46388.service - OpenSSH per-connection server daemon (10.0.0.1:46388).
Feb 13 20:35:05.157290 systemd[1]: sshd@79-10.0.0.9:22-10.0.0.1:46378.service: Deactivated successfully.
Feb 13 20:35:05.161229 systemd[1]: session-80.scope: Deactivated successfully.
Feb 13 20:35:05.161480 systemd-logind[1523]: Session 80 logged out. Waiting for processes to exit.
Feb 13 20:35:05.163864 systemd-logind[1523]: Removed session 80.
Feb 13 20:35:05.203496 sshd[4229]: Accepted publickey for core from 10.0.0.1 port 46388 ssh2: RSA SHA256:TdhA+b7AFfsR49yUYzSzEma9Q+UWNVnDR75LIS9Grbw
Feb 13 20:35:05.204616 sshd[4229]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 20:35:05.208684 systemd-logind[1523]: New session 81 of user core.
Feb 13 20:35:05.217957 systemd[1]: Started session-81.scope - Session 81 of User core.
Feb 13 20:35:05.415828 sshd[4229]: pam_unix(sshd:session): session closed for user core
Feb 13 20:35:05.422851 systemd[1]: Started sshd@81-10.0.0.9:22-10.0.0.1:46398.service - OpenSSH per-connection server daemon (10.0.0.1:46398).
Feb 13 20:35:05.423760 systemd[1]: sshd@80-10.0.0.9:22-10.0.0.1:46388.service: Deactivated successfully.
Feb 13 20:35:05.427770 systemd[1]: session-81.scope: Deactivated successfully.
Feb 13 20:35:05.429441 systemd-logind[1523]: Session 81 logged out. Waiting for processes to exit.
Feb 13 20:35:05.430279 systemd-logind[1523]: Removed session 81.
Feb 13 20:35:05.458906 sshd[4244]: Accepted publickey for core from 10.0.0.1 port 46398 ssh2: RSA SHA256:TdhA+b7AFfsR49yUYzSzEma9Q+UWNVnDR75LIS9Grbw
Feb 13 20:35:05.460030 sshd[4244]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 20:35:05.463685 systemd-logind[1523]: New session 82 of user core.
Feb 13 20:35:05.475860 systemd[1]: Started session-82.scope - Session 82 of User core.
Feb 13 20:35:05.578046 sshd[4244]: pam_unix(sshd:session): session closed for user core
Feb 13 20:35:05.581332 systemd[1]: sshd@81-10.0.0.9:22-10.0.0.1:46398.service: Deactivated successfully.
Feb 13 20:35:05.583438 systemd-logind[1523]: Session 82 logged out. Waiting for processes to exit.
Feb 13 20:35:05.583512 systemd[1]: session-82.scope: Deactivated successfully.
Feb 13 20:35:05.585502 systemd-logind[1523]: Removed session 82.
Feb 13 20:35:07.668248 kubelet[2665]: E0213 20:35:07.668201 2665 kubelet.go:2900] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Feb 13 20:35:10.530284 kubelet[2665]: E0213 20:35:10.530047 2665 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 13 20:35:10.531440 kubelet[2665]: E0213 20:35:10.531398 2665 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"install-cni-plugin\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/flannel/flannel-cni-plugin:v1.1.2\\\"\"" pod="kube-flannel/kube-flannel-ds-5hbww" podUID="3f8b4fef-0cee-4a5a-b509-119c847b6786"
Feb 13 20:35:10.592129 systemd[1]: Started sshd@82-10.0.0.9:22-10.0.0.1:46400.service - OpenSSH per-connection server daemon (10.0.0.1:46400).
Feb 13 20:35:10.626404 sshd[4262]: Accepted publickey for core from 10.0.0.1 port 46400 ssh2: RSA SHA256:TdhA+b7AFfsR49yUYzSzEma9Q+UWNVnDR75LIS9Grbw
Feb 13 20:35:10.627550 sshd[4262]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 20:35:10.631359 systemd-logind[1523]: New session 83 of user core.
Feb 13 20:35:10.638961 systemd[1]: Started session-83.scope - Session 83 of User core.
Feb 13 20:35:10.743449 sshd[4262]: pam_unix(sshd:session): session closed for user core
Feb 13 20:35:10.746336 systemd-logind[1523]: Session 83 logged out. Waiting for processes to exit.
Feb 13 20:35:10.746460 systemd[1]: sshd@82-10.0.0.9:22-10.0.0.1:46400.service: Deactivated successfully.
Feb 13 20:35:10.748799 systemd[1]: session-83.scope: Deactivated successfully.
Feb 13 20:35:10.749450 systemd-logind[1523]: Removed session 83.
Feb 13 20:35:12.669684 kubelet[2665]: E0213 20:35:12.669608 2665 kubelet.go:2900] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Feb 13 20:35:15.758859 systemd[1]: Started sshd@83-10.0.0.9:22-10.0.0.1:50496.service - OpenSSH per-connection server daemon (10.0.0.1:50496).
Feb 13 20:35:15.793480 sshd[4277]: Accepted publickey for core from 10.0.0.1 port 50496 ssh2: RSA SHA256:TdhA+b7AFfsR49yUYzSzEma9Q+UWNVnDR75LIS9Grbw
Feb 13 20:35:15.794584 sshd[4277]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 20:35:15.797948 systemd-logind[1523]: New session 84 of user core.
Feb 13 20:35:15.809837 systemd[1]: Started session-84.scope - Session 84 of User core.
Feb 13 20:35:15.912851 sshd[4277]: pam_unix(sshd:session): session closed for user core
Feb 13 20:35:15.915450 systemd[1]: sshd@83-10.0.0.9:22-10.0.0.1:50496.service: Deactivated successfully.
Feb 13 20:35:15.917843 systemd-logind[1523]: Session 84 logged out. Waiting for processes to exit.
Feb 13 20:35:15.918008 systemd[1]: session-84.scope: Deactivated successfully.
Feb 13 20:35:15.919113 systemd-logind[1523]: Removed session 84.
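Both failures feed the recurring kubelet.go:2900 message: the runtime reports the network as not ready because no CNI plugin is initialized, and the flannel init container that would install one (cp -f /flannel /opt/cni/bin/flannel, per the container spec dumped earlier) never starts. A rough node-local check of the two directories involved; /etc/cni/net.d and /opt/cni/bin are the conventional kubelet/containerd defaults, assumed here rather than read from this node's configuration:

# cni_check.py - report whether CNI configs and plugins are present.
import os

CNI_CONF_DIR = "/etc/cni/net.d"  # network configs the runtime loads
CNI_BIN_DIR = "/opt/cni/bin"     # plugin binaries (flannel, bridge, ...)

def listing(path):
    return sorted(os.listdir(path)) if os.path.isdir(path) else []

def cni_ready():
    confs, bins = listing(CNI_CONF_DIR), listing(CNI_BIN_DIR)
    print("configs in %s: %s" % (CNI_CONF_DIR, confs or "none"))
    print("plugins in %s: %s" % (CNI_BIN_DIR, bins or "none"))
    # Until the init container copies its binary and flannel writes a
    # config, both lists stay empty and the runtime keeps reporting
    # NetworkPluginNotReady.
    return bool(confs) and bool(bins)

if __name__ == "__main__":
    print("network ready:", cni_ready())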
Feb 13 20:35:17.530419 kubelet[2665]: E0213 20:35:17.530355 2665 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 13 20:35:17.670531 kubelet[2665]: E0213 20:35:17.670491 2665 kubelet.go:2900] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Feb 13 20:35:20.927856 systemd[1]: Started sshd@84-10.0.0.9:22-10.0.0.1:50504.service - OpenSSH per-connection server daemon (10.0.0.1:50504).
Feb 13 20:35:20.962410 sshd[4295]: Accepted publickey for core from 10.0.0.1 port 50504 ssh2: RSA SHA256:TdhA+b7AFfsR49yUYzSzEma9Q+UWNVnDR75LIS9Grbw
Feb 13 20:35:20.963587 sshd[4295]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 20:35:20.967174 systemd-logind[1523]: New session 85 of user core.
Feb 13 20:35:20.977910 systemd[1]: Started session-85.scope - Session 85 of User core.
Feb 13 20:35:21.082006 sshd[4295]: pam_unix(sshd:session): session closed for user core
Feb 13 20:35:21.085214 systemd[1]: sshd@84-10.0.0.9:22-10.0.0.1:50504.service: Deactivated successfully.
Feb 13 20:35:21.087152 systemd[1]: session-85.scope: Deactivated successfully.
Feb 13 20:35:21.087171 systemd-logind[1523]: Session 85 logged out. Waiting for processes to exit.
Feb 13 20:35:21.088529 systemd-logind[1523]: Removed session 85.
Feb 13 20:35:21.530935 kubelet[2665]: E0213 20:35:21.530905 2665 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 13 20:35:21.531562 kubelet[2665]: E0213 20:35:21.531509 2665 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"install-cni-plugin\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/flannel/flannel-cni-plugin:v1.1.2\\\"\"" pod="kube-flannel/kube-flannel-ds-5hbww" podUID="3f8b4fef-0cee-4a5a-b509-119c847b6786"
Feb 13 20:35:22.671651 kubelet[2665]: E0213 20:35:22.671601 2665 kubelet.go:2900] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Feb 13 20:35:24.531325 kubelet[2665]: E0213 20:35:24.531233 2665 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 13 20:35:26.093907 systemd[1]: Started sshd@85-10.0.0.9:22-10.0.0.1:51716.service - OpenSSH per-connection server daemon (10.0.0.1:51716).
Feb 13 20:35:26.128429 sshd[4311]: Accepted publickey for core from 10.0.0.1 port 51716 ssh2: RSA SHA256:TdhA+b7AFfsR49yUYzSzEma9Q+UWNVnDR75LIS9Grbw
Feb 13 20:35:26.129612 sshd[4311]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 20:35:26.133335 systemd-logind[1523]: New session 86 of user core.
Feb 13 20:35:26.146913 systemd[1]: Started session-86.scope - Session 86 of User core.
Feb 13 20:35:26.249644 sshd[4311]: pam_unix(sshd:session): session closed for user core
Feb 13 20:35:26.253496 systemd[1]: sshd@85-10.0.0.9:22-10.0.0.1:51716.service: Deactivated successfully.
Feb 13 20:35:26.255408 systemd-logind[1523]: Session 86 logged out. Waiting for processes to exit.
Feb 13 20:35:26.255690 systemd[1]: session-86.scope: Deactivated successfully.
Feb 13 20:35:26.257072 systemd-logind[1523]: Removed session 86.
Feb 13 20:35:27.672566 kubelet[2665]: E0213 20:35:27.672514 2665 kubelet.go:2900] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Feb 13 20:35:31.268845 systemd[1]: Started sshd@86-10.0.0.9:22-10.0.0.1:51718.service - OpenSSH per-connection server daemon (10.0.0.1:51718).
Feb 13 20:35:31.303726 sshd[4327]: Accepted publickey for core from 10.0.0.1 port 51718 ssh2: RSA SHA256:TdhA+b7AFfsR49yUYzSzEma9Q+UWNVnDR75LIS9Grbw
Feb 13 20:35:31.304887 sshd[4327]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 20:35:31.308332 systemd-logind[1523]: New session 87 of user core.
Feb 13 20:35:31.317845 systemd[1]: Started session-87.scope - Session 87 of User core.
Feb 13 20:35:31.422676 sshd[4327]: pam_unix(sshd:session): session closed for user core
Feb 13 20:35:31.426351 systemd[1]: sshd@86-10.0.0.9:22-10.0.0.1:51718.service: Deactivated successfully.
Feb 13 20:35:31.428282 systemd[1]: session-87.scope: Deactivated successfully.
Feb 13 20:35:31.428295 systemd-logind[1523]: Session 87 logged out. Waiting for processes to exit.
Feb 13 20:35:31.429613 systemd-logind[1523]: Removed session 87.
Feb 13 20:35:32.673825 kubelet[2665]: E0213 20:35:32.673789 2665 kubelet.go:2900] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Feb 13 20:35:36.432953 systemd[1]: Started sshd@87-10.0.0.9:22-10.0.0.1:43068.service - OpenSSH per-connection server daemon (10.0.0.1:43068).
Feb 13 20:35:36.467357 sshd[4344]: Accepted publickey for core from 10.0.0.1 port 43068 ssh2: RSA SHA256:TdhA+b7AFfsR49yUYzSzEma9Q+UWNVnDR75LIS9Grbw
Feb 13 20:35:36.468583 sshd[4344]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 20:35:36.472464 systemd-logind[1523]: New session 88 of user core.
Feb 13 20:35:36.486975 systemd[1]: Started session-88.scope - Session 88 of User core.
Feb 13 20:35:36.531046 kubelet[2665]: E0213 20:35:36.531000 2665 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 13 20:35:36.531761 kubelet[2665]: E0213 20:35:36.531539 2665 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"install-cni-plugin\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/flannel/flannel-cni-plugin:v1.1.2\\\"\"" pod="kube-flannel/kube-flannel-ds-5hbww" podUID="3f8b4fef-0cee-4a5a-b509-119c847b6786"
Feb 13 20:35:36.589702 sshd[4344]: pam_unix(sshd:session): session closed for user core
Feb 13 20:35:36.592739 systemd[1]: sshd@87-10.0.0.9:22-10.0.0.1:43068.service: Deactivated successfully.
Feb 13 20:35:36.594593 systemd-logind[1523]: Session 88 logged out. Waiting for processes to exit.
Feb 13 20:35:36.594787 systemd[1]: session-88.scope: Deactivated successfully.
Feb 13 20:35:36.596108 systemd-logind[1523]: Removed session 88.
Feb 13 20:35:37.675365 kubelet[2665]: E0213 20:35:37.675288 2665 kubelet.go:2900] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Feb 13 20:35:41.605941 systemd[1]: Started sshd@88-10.0.0.9:22-10.0.0.1:43074.service - OpenSSH per-connection server daemon (10.0.0.1:43074).
Feb 13 20:35:41.640347 sshd[4360]: Accepted publickey for core from 10.0.0.1 port 43074 ssh2: RSA SHA256:TdhA+b7AFfsR49yUYzSzEma9Q+UWNVnDR75LIS9Grbw
Feb 13 20:35:41.641499 sshd[4360]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 20:35:41.644868 systemd-logind[1523]: New session 89 of user core.
Feb 13 20:35:41.658850 systemd[1]: Started session-89.scope - Session 89 of User core.
Feb 13 20:35:41.760757 sshd[4360]: pam_unix(sshd:session): session closed for user core
Feb 13 20:35:41.763899 systemd[1]: sshd@88-10.0.0.9:22-10.0.0.1:43074.service: Deactivated successfully.
Feb 13 20:35:41.765938 systemd-logind[1523]: Session 89 logged out. Waiting for processes to exit.
Feb 13 20:35:41.765940 systemd[1]: session-89.scope: Deactivated successfully.
Feb 13 20:35:41.767909 systemd-logind[1523]: Removed session 89.
Feb 13 20:35:42.676081 kubelet[2665]: E0213 20:35:42.676042 2665 kubelet.go:2900] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Feb 13 20:35:46.778084 systemd[1]: Started sshd@89-10.0.0.9:22-10.0.0.1:50264.service - OpenSSH per-connection server daemon (10.0.0.1:50264).
Feb 13 20:35:46.812439 sshd[4375]: Accepted publickey for core from 10.0.0.1 port 50264 ssh2: RSA SHA256:TdhA+b7AFfsR49yUYzSzEma9Q+UWNVnDR75LIS9Grbw
Feb 13 20:35:46.813558 sshd[4375]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 20:35:46.817199 systemd-logind[1523]: New session 90 of user core.
Feb 13 20:35:46.822900 systemd[1]: Started session-90.scope - Session 90 of User core.
Feb 13 20:35:46.923973 sshd[4375]: pam_unix(sshd:session): session closed for user core
Feb 13 20:35:46.927086 systemd[1]: sshd@89-10.0.0.9:22-10.0.0.1:50264.service: Deactivated successfully.
Feb 13 20:35:46.929739 systemd[1]: session-90.scope: Deactivated successfully.
Feb 13 20:35:46.929916 systemd-logind[1523]: Session 90 logged out. Waiting for processes to exit.
Feb 13 20:35:46.930937 systemd-logind[1523]: Removed session 90.
Feb 13 20:35:47.677081 kubelet[2665]: E0213 20:35:47.677024 2665 kubelet.go:2900] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Feb 13 20:35:50.531097 kubelet[2665]: E0213 20:35:50.530872 2665 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 13 20:35:50.531763 kubelet[2665]: E0213 20:35:50.531725 2665 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"install-cni-plugin\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/flannel/flannel-cni-plugin:v1.1.2\\\"\"" pod="kube-flannel/kube-flannel-ds-5hbww" podUID="3f8b4fef-0cee-4a5a-b509-119c847b6786"
Feb 13 20:35:51.935849 systemd[1]: Started sshd@90-10.0.0.9:22-10.0.0.1:50272.service - OpenSSH per-connection server daemon (10.0.0.1:50272).
Feb 13 20:35:51.970318 sshd[4393]: Accepted publickey for core from 10.0.0.1 port 50272 ssh2: RSA SHA256:TdhA+b7AFfsR49yUYzSzEma9Q+UWNVnDR75LIS9Grbw
Feb 13 20:35:51.971523 sshd[4393]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 20:35:51.975079 systemd-logind[1523]: New session 91 of user core.
Feb 13 20:35:51.983875 systemd[1]: Started session-91.scope - Session 91 of User core.
Feb 13 20:35:52.088934 sshd[4393]: pam_unix(sshd:session): session closed for user core
Feb 13 20:35:52.091515 systemd[1]: sshd@90-10.0.0.9:22-10.0.0.1:50272.service: Deactivated successfully.
Feb 13 20:35:52.094056 systemd-logind[1523]: Session 91 logged out. Waiting for processes to exit.
Feb 13 20:35:52.094156 systemd[1]: session-91.scope: Deactivated successfully.
Feb 13 20:35:52.095453 systemd-logind[1523]: Removed session 91.
Feb 13 20:35:52.678190 kubelet[2665]: E0213 20:35:52.678149 2665 kubelet.go:2900] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Feb 13 20:35:54.530308 kubelet[2665]: E0213 20:35:54.530274 2665 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 13 20:35:57.098912 systemd[1]: Started sshd@91-10.0.0.9:22-10.0.0.1:35118.service - OpenSSH per-connection server daemon (10.0.0.1:35118).
Feb 13 20:35:57.133137 sshd[4409]: Accepted publickey for core from 10.0.0.1 port 35118 ssh2: RSA SHA256:TdhA+b7AFfsR49yUYzSzEma9Q+UWNVnDR75LIS9Grbw
Feb 13 20:35:57.134307 sshd[4409]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 20:35:57.137952 systemd-logind[1523]: New session 92 of user core.
Feb 13 20:35:57.149925 systemd[1]: Started session-92.scope - Session 92 of User core.
Feb 13 20:35:57.253822 sshd[4409]: pam_unix(sshd:session): session closed for user core
Feb 13 20:35:57.257202 systemd[1]: sshd@91-10.0.0.9:22-10.0.0.1:35118.service: Deactivated successfully.
Feb 13 20:35:57.259262 systemd[1]: session-92.scope: Deactivated successfully.
Feb 13 20:35:57.259311 systemd-logind[1523]: Session 92 logged out. Waiting for processes to exit.
Feb 13 20:35:57.260609 systemd-logind[1523]: Removed session 92.
Feb 13 20:35:57.679906 kubelet[2665]: E0213 20:35:57.679835 2665 kubelet.go:2900] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Feb 13 20:36:02.267843 systemd[1]: Started sshd@92-10.0.0.9:22-10.0.0.1:35130.service - OpenSSH per-connection server daemon (10.0.0.1:35130).
Feb 13 20:36:02.302715 sshd[4425]: Accepted publickey for core from 10.0.0.1 port 35130 ssh2: RSA SHA256:TdhA+b7AFfsR49yUYzSzEma9Q+UWNVnDR75LIS9Grbw
Feb 13 20:36:02.303940 sshd[4425]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 20:36:02.307692 systemd-logind[1523]: New session 93 of user core.
Feb 13 20:36:02.319844 systemd[1]: Started session-93.scope - Session 93 of User core.
Feb 13 20:36:02.423608 sshd[4425]: pam_unix(sshd:session): session closed for user core
Feb 13 20:36:02.426642 systemd[1]: sshd@92-10.0.0.9:22-10.0.0.1:35130.service: Deactivated successfully.
Feb 13 20:36:02.428540 systemd-logind[1523]: Session 93 logged out. Waiting for processes to exit.
Feb 13 20:36:02.428637 systemd[1]: session-93.scope: Deactivated successfully.
Feb 13 20:36:02.429841 systemd-logind[1523]: Removed session 93.
Feb 13 20:36:02.530733 kubelet[2665]: E0213 20:36:02.530609 2665 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 13 20:36:02.531561 kubelet[2665]: E0213 20:36:02.531509 2665 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"install-cni-plugin\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/flannel/flannel-cni-plugin:v1.1.2\\\"\"" pod="kube-flannel/kube-flannel-ds-5hbww" podUID="3f8b4fef-0cee-4a5a-b509-119c847b6786"
Feb 13 20:36:02.680704 kubelet[2665]: E0213 20:36:02.680661 2665 kubelet.go:2900] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Feb 13 20:36:05.530815 kubelet[2665]: E0213 20:36:05.530722 2665 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 13 20:36:07.433909 systemd[1]: Started sshd@93-10.0.0.9:22-10.0.0.1:36180.service - OpenSSH per-connection server daemon (10.0.0.1:36180).
Feb 13 20:36:07.468333 sshd[4442]: Accepted publickey for core from 10.0.0.1 port 36180 ssh2: RSA SHA256:TdhA+b7AFfsR49yUYzSzEma9Q+UWNVnDR75LIS9Grbw
Feb 13 20:36:07.469464 sshd[4442]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 20:36:07.472992 systemd-logind[1523]: New session 94 of user core.
Feb 13 20:36:07.479840 systemd[1]: Started session-94.scope - Session 94 of User core.
Feb 13 20:36:07.581132 sshd[4442]: pam_unix(sshd:session): session closed for user core
Feb 13 20:36:07.584129 systemd[1]: sshd@93-10.0.0.9:22-10.0.0.1:36180.service: Deactivated successfully.
Feb 13 20:36:07.586073 systemd-logind[1523]: Session 94 logged out. Waiting for processes to exit.
Feb 13 20:36:07.586145 systemd[1]: session-94.scope: Deactivated successfully.
Feb 13 20:36:07.587351 systemd-logind[1523]: Removed session 94.
Feb 13 20:36:07.682267 kubelet[2665]: E0213 20:36:07.682232 2665 kubelet.go:2900] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Feb 13 20:36:12.590848 systemd[1]: Started sshd@94-10.0.0.9:22-10.0.0.1:51850.service - OpenSSH per-connection server daemon (10.0.0.1:51850).
Feb 13 20:36:12.625450 sshd[4457]: Accepted publickey for core from 10.0.0.1 port 51850 ssh2: RSA SHA256:TdhA+b7AFfsR49yUYzSzEma9Q+UWNVnDR75LIS9Grbw
Feb 13 20:36:12.626673 sshd[4457]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 20:36:12.631706 systemd-logind[1523]: New session 95 of user core.
Feb 13 20:36:12.638896 systemd[1]: Started session-95.scope - Session 95 of User core.
Feb 13 20:36:12.683219 kubelet[2665]: E0213 20:36:12.683185 2665 kubelet.go:2900] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Feb 13 20:36:12.745763 sshd[4457]: pam_unix(sshd:session): session closed for user core
Feb 13 20:36:12.748795 systemd-logind[1523]: Session 95 logged out. Waiting for processes to exit.
Feb 13 20:36:12.749352 systemd[1]: sshd@94-10.0.0.9:22-10.0.0.1:51850.service: Deactivated successfully.
Feb 13 20:36:12.751829 systemd[1]: session-95.scope: Deactivated successfully.
Feb 13 20:36:12.752572 systemd-logind[1523]: Removed session 95.
Feb 13 20:36:15.530880 kubelet[2665]: E0213 20:36:15.530840 2665 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 13 20:36:15.531486 kubelet[2665]: E0213 20:36:15.531457 2665 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"install-cni-plugin\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/flannel/flannel-cni-plugin:v1.1.2\\\"\"" pod="kube-flannel/kube-flannel-ds-5hbww" podUID="3f8b4fef-0cee-4a5a-b509-119c847b6786"
Feb 13 20:36:17.684871 kubelet[2665]: E0213 20:36:17.684818 2665 kubelet.go:2900] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Feb 13 20:36:17.758942 systemd[1]: Started sshd@95-10.0.0.9:22-10.0.0.1:51864.service - OpenSSH per-connection server daemon (10.0.0.1:51864).
Feb 13 20:36:17.793191 sshd[4475]: Accepted publickey for core from 10.0.0.1 port 51864 ssh2: RSA SHA256:TdhA+b7AFfsR49yUYzSzEma9Q+UWNVnDR75LIS9Grbw
Feb 13 20:36:17.794349 sshd[4475]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 20:36:17.798638 systemd-logind[1523]: New session 96 of user core.
Feb 13 20:36:17.807840 systemd[1]: Started session-96.scope - Session 96 of User core.
Feb 13 20:36:17.909350 sshd[4475]: pam_unix(sshd:session): session closed for user core
Feb 13 20:36:17.912583 systemd[1]: sshd@95-10.0.0.9:22-10.0.0.1:51864.service: Deactivated successfully.
Feb 13 20:36:17.914852 systemd-logind[1523]: Session 96 logged out. Waiting for processes to exit.
Feb 13 20:36:17.914930 systemd[1]: session-96.scope: Deactivated successfully.
Feb 13 20:36:17.916235 systemd-logind[1523]: Removed session 96.
Feb 13 20:36:22.685557 kubelet[2665]: E0213 20:36:22.685514 2665 kubelet.go:2900] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Feb 13 20:36:22.923848 systemd[1]: Started sshd@96-10.0.0.9:22-10.0.0.1:41854.service - OpenSSH per-connection server daemon (10.0.0.1:41854).
Feb 13 20:36:22.958595 sshd[4493]: Accepted publickey for core from 10.0.0.1 port 41854 ssh2: RSA SHA256:TdhA+b7AFfsR49yUYzSzEma9Q+UWNVnDR75LIS9Grbw
Feb 13 20:36:22.959718 sshd[4493]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 20:36:22.963161 systemd-logind[1523]: New session 97 of user core.
Feb 13 20:36:22.971926 systemd[1]: Started session-97.scope - Session 97 of User core.
Feb 13 20:36:23.076865 sshd[4493]: pam_unix(sshd:session): session closed for user core
Feb 13 20:36:23.079571 systemd[1]: sshd@96-10.0.0.9:22-10.0.0.1:41854.service: Deactivated successfully.
Feb 13 20:36:23.082234 systemd[1]: session-97.scope: Deactivated successfully.
Feb 13 20:36:23.082295 systemd-logind[1523]: Session 97 logged out. Waiting for processes to exit.
Feb 13 20:36:23.083389 systemd-logind[1523]: Removed session 97.
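Note the cadence of the pull errors: one real attempt (the ErrImagePull sequence at 20:34:32), then ImagePullBackOff messages spaced further and further apart. That is kubelet's image back-off, an exponential delay between pull attempts. A worked sketch of the schedule, assuming the commonly cited defaults of a 10 s initial delay, a doubling factor, and a 300 s cap; these values are not read from this node's kubelet configuration:

# backoff_schedule.py - illustrate exponential image-pull back-off.
def backoff_schedule(initial=10.0, factor=2.0, cap=300.0, steps=8):
    delays, delay = [], initial
    for _ in range(steps):
        delays.append(delay)
        delay = min(delay * factor, cap)
    return delays

# -> [10.0, 20.0, 40.0, 80.0, 160.0, 300.0, 300.0, 300.0]
print(backoff_schedule())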
Feb 13 20:36:27.688549 kubelet[2665]: E0213 20:36:27.688503 2665 kubelet.go:2900] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Feb 13 20:36:28.086844 systemd[1]: Started sshd@97-10.0.0.9:22-10.0.0.1:41856.service - OpenSSH per-connection server daemon (10.0.0.1:41856). Feb 13 20:36:28.121154 sshd[4509]: Accepted publickey for core from 10.0.0.1 port 41856 ssh2: RSA SHA256:TdhA+b7AFfsR49yUYzSzEma9Q+UWNVnDR75LIS9Grbw Feb 13 20:36:28.122287 sshd[4509]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 20:36:28.125943 systemd-logind[1523]: New session 98 of user core. Feb 13 20:36:28.140915 systemd[1]: Started session-98.scope - Session 98 of User core. Feb 13 20:36:28.246210 sshd[4509]: pam_unix(sshd:session): session closed for user core Feb 13 20:36:28.249344 systemd[1]: sshd@97-10.0.0.9:22-10.0.0.1:41856.service: Deactivated successfully. Feb 13 20:36:28.251292 systemd[1]: session-98.scope: Deactivated successfully. Feb 13 20:36:28.251297 systemd-logind[1523]: Session 98 logged out. Waiting for processes to exit. Feb 13 20:36:28.252310 systemd-logind[1523]: Removed session 98. Feb 13 20:36:30.531285 kubelet[2665]: E0213 20:36:30.531162 2665 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 20:36:30.532094 kubelet[2665]: E0213 20:36:30.531882 2665 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"install-cni-plugin\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/flannel/flannel-cni-plugin:v1.1.2\\\"\"" pod="kube-flannel/kube-flannel-ds-5hbww" podUID="3f8b4fef-0cee-4a5a-b509-119c847b6786" Feb 13 20:36:32.690067 kubelet[2665]: E0213 20:36:32.690034 2665 kubelet.go:2900] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Feb 13 20:36:33.259854 systemd[1]: Started sshd@98-10.0.0.9:22-10.0.0.1:44508.service - OpenSSH per-connection server daemon (10.0.0.1:44508). Feb 13 20:36:33.294233 sshd[4527]: Accepted publickey for core from 10.0.0.1 port 44508 ssh2: RSA SHA256:TdhA+b7AFfsR49yUYzSzEma9Q+UWNVnDR75LIS9Grbw Feb 13 20:36:33.295371 sshd[4527]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 20:36:33.298958 systemd-logind[1523]: New session 99 of user core. Feb 13 20:36:33.305954 systemd[1]: Started session-99.scope - Session 99 of User core. Feb 13 20:36:33.410797 sshd[4527]: pam_unix(sshd:session): session closed for user core Feb 13 20:36:33.414379 systemd[1]: sshd@98-10.0.0.9:22-10.0.0.1:44508.service: Deactivated successfully. Feb 13 20:36:33.416318 systemd-logind[1523]: Session 99 logged out. Waiting for processes to exit. Feb 13 20:36:33.416343 systemd[1]: session-99.scope: Deactivated successfully. Feb 13 20:36:33.417878 systemd-logind[1523]: Removed session 99. 
Feb 13 20:36:34.531004 kubelet[2665]: E0213 20:36:34.530919 2665 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 20:36:37.530508 kubelet[2665]: E0213 20:36:37.530464 2665 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 20:36:37.691576 kubelet[2665]: E0213 20:36:37.691541 2665 kubelet.go:2900] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Feb 13 20:36:38.420890 systemd[1]: Started sshd@99-10.0.0.9:22-10.0.0.1:44512.service - OpenSSH per-connection server daemon (10.0.0.1:44512). Feb 13 20:36:38.455476 sshd[4542]: Accepted publickey for core from 10.0.0.1 port 44512 ssh2: RSA SHA256:TdhA+b7AFfsR49yUYzSzEma9Q+UWNVnDR75LIS9Grbw Feb 13 20:36:38.456610 sshd[4542]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 20:36:38.460442 systemd-logind[1523]: New session 100 of user core. Feb 13 20:36:38.469956 systemd[1]: Started session-100.scope - Session 100 of User core. Feb 13 20:36:38.576342 sshd[4542]: pam_unix(sshd:session): session closed for user core Feb 13 20:36:38.578908 systemd[1]: sshd@99-10.0.0.9:22-10.0.0.1:44512.service: Deactivated successfully. Feb 13 20:36:38.581657 systemd-logind[1523]: Session 100 logged out. Waiting for processes to exit. Feb 13 20:36:38.582305 systemd[1]: session-100.scope: Deactivated successfully. Feb 13 20:36:38.583150 systemd-logind[1523]: Removed session 100. Feb 13 20:36:41.530980 kubelet[2665]: E0213 20:36:41.530941 2665 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 20:36:41.531807 kubelet[2665]: E0213 20:36:41.531613 2665 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"install-cni-plugin\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/flannel/flannel-cni-plugin:v1.1.2\\\"\"" pod="kube-flannel/kube-flannel-ds-5hbww" podUID="3f8b4fef-0cee-4a5a-b509-119c847b6786" Feb 13 20:36:42.693041 kubelet[2665]: E0213 20:36:42.693002 2665 kubelet.go:2900] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Feb 13 20:36:43.593917 systemd[1]: Started sshd@100-10.0.0.9:22-10.0.0.1:42488.service - OpenSSH per-connection server daemon (10.0.0.1:42488). Feb 13 20:36:43.628133 sshd[4557]: Accepted publickey for core from 10.0.0.1 port 42488 ssh2: RSA SHA256:TdhA+b7AFfsR49yUYzSzEma9Q+UWNVnDR75LIS9Grbw Feb 13 20:36:43.629263 sshd[4557]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 20:36:43.633125 systemd-logind[1523]: New session 101 of user core. Feb 13 20:36:43.644921 systemd[1]: Started session-101.scope - Session 101 of User core. Feb 13 20:36:43.749503 sshd[4557]: pam_unix(sshd:session): session closed for user core Feb 13 20:36:43.751908 systemd[1]: sshd@100-10.0.0.9:22-10.0.0.1:42488.service: Deactivated successfully. Feb 13 20:36:43.755110 systemd[1]: session-101.scope: Deactivated successfully. Feb 13 20:36:43.757733 systemd-logind[1523]: Session 101 logged out. Waiting for processes to exit. 
Feb 13 20:36:43.758502 systemd-logind[1523]: Removed session 101. Feb 13 20:36:47.693977 kubelet[2665]: E0213 20:36:47.693912 2665 kubelet.go:2900] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Feb 13 20:36:48.764854 systemd[1]: Started sshd@101-10.0.0.9:22-10.0.0.1:42496.service - OpenSSH per-connection server daemon (10.0.0.1:42496). Feb 13 20:36:48.799137 sshd[4573]: Accepted publickey for core from 10.0.0.1 port 42496 ssh2: RSA SHA256:TdhA+b7AFfsR49yUYzSzEma9Q+UWNVnDR75LIS9Grbw Feb 13 20:36:48.800273 sshd[4573]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 20:36:48.803637 systemd-logind[1523]: New session 102 of user core. Feb 13 20:36:48.813845 systemd[1]: Started session-102.scope - Session 102 of User core. Feb 13 20:36:48.918745 sshd[4573]: pam_unix(sshd:session): session closed for user core Feb 13 20:36:48.922000 systemd[1]: sshd@101-10.0.0.9:22-10.0.0.1:42496.service: Deactivated successfully. Feb 13 20:36:48.924452 systemd-logind[1523]: Session 102 logged out. Waiting for processes to exit. Feb 13 20:36:48.924578 systemd[1]: session-102.scope: Deactivated successfully. Feb 13 20:36:48.926193 systemd-logind[1523]: Removed session 102. Feb 13 20:36:52.695016 kubelet[2665]: E0213 20:36:52.694980 2665 kubelet.go:2900] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Feb 13 20:36:53.933837 systemd[1]: Started sshd@102-10.0.0.9:22-10.0.0.1:57220.service - OpenSSH per-connection server daemon (10.0.0.1:57220). Feb 13 20:36:53.969682 sshd[4590]: Accepted publickey for core from 10.0.0.1 port 57220 ssh2: RSA SHA256:TdhA+b7AFfsR49yUYzSzEma9Q+UWNVnDR75LIS9Grbw Feb 13 20:36:53.970833 sshd[4590]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 20:36:53.974575 systemd-logind[1523]: New session 103 of user core. Feb 13 20:36:53.981834 systemd[1]: Started session-103.scope - Session 103 of User core. Feb 13 20:36:54.088200 sshd[4590]: pam_unix(sshd:session): session closed for user core Feb 13 20:36:54.091331 systemd[1]: sshd@102-10.0.0.9:22-10.0.0.1:57220.service: Deactivated successfully. Feb 13 20:36:54.093277 systemd-logind[1523]: Session 103 logged out. Waiting for processes to exit. Feb 13 20:36:54.093311 systemd[1]: session-103.scope: Deactivated successfully. Feb 13 20:36:54.095248 systemd-logind[1523]: Removed session 103. 
Feb 13 20:36:56.531095 kubelet[2665]: E0213 20:36:56.530751 2665 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 20:36:56.532027 kubelet[2665]: E0213 20:36:56.531682 2665 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"install-cni-plugin\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/flannel/flannel-cni-plugin:v1.1.2\\\"\"" pod="kube-flannel/kube-flannel-ds-5hbww" podUID="3f8b4fef-0cee-4a5a-b509-119c847b6786" Feb 13 20:36:57.695930 kubelet[2665]: E0213 20:36:57.695894 2665 kubelet.go:2900] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Feb 13 20:36:59.100845 systemd[1]: Started sshd@103-10.0.0.9:22-10.0.0.1:57226.service - OpenSSH per-connection server daemon (10.0.0.1:57226). Feb 13 20:36:59.135399 sshd[4605]: Accepted publickey for core from 10.0.0.1 port 57226 ssh2: RSA SHA256:TdhA+b7AFfsR49yUYzSzEma9Q+UWNVnDR75LIS9Grbw Feb 13 20:36:59.136603 sshd[4605]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 20:36:59.140015 systemd-logind[1523]: New session 104 of user core. Feb 13 20:36:59.152843 systemd[1]: Started session-104.scope - Session 104 of User core. Feb 13 20:36:59.264308 sshd[4605]: pam_unix(sshd:session): session closed for user core Feb 13 20:36:59.267282 systemd-logind[1523]: Session 104 logged out. Waiting for processes to exit. Feb 13 20:36:59.267399 systemd[1]: sshd@103-10.0.0.9:22-10.0.0.1:57226.service: Deactivated successfully. Feb 13 20:36:59.269829 systemd[1]: session-104.scope: Deactivated successfully. Feb 13 20:36:59.270506 systemd-logind[1523]: Removed session 104. Feb 13 20:37:02.697426 kubelet[2665]: E0213 20:37:02.697384 2665 kubelet.go:2900] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Feb 13 20:37:04.275851 systemd[1]: Started sshd@104-10.0.0.9:22-10.0.0.1:58572.service - OpenSSH per-connection server daemon (10.0.0.1:58572). Feb 13 20:37:04.311225 sshd[4620]: Accepted publickey for core from 10.0.0.1 port 58572 ssh2: RSA SHA256:TdhA+b7AFfsR49yUYzSzEma9Q+UWNVnDR75LIS9Grbw Feb 13 20:37:04.312407 sshd[4620]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 20:37:04.316680 systemd-logind[1523]: New session 105 of user core. Feb 13 20:37:04.331873 systemd[1]: Started session-105.scope - Session 105 of User core. Feb 13 20:37:04.436023 sshd[4620]: pam_unix(sshd:session): session closed for user core Feb 13 20:37:04.439309 systemd[1]: sshd@104-10.0.0.9:22-10.0.0.1:58572.service: Deactivated successfully. Feb 13 20:37:04.441189 systemd-logind[1523]: Session 105 logged out. Waiting for processes to exit. Feb 13 20:37:04.441245 systemd[1]: session-105.scope: Deactivated successfully. Feb 13 20:37:04.442244 systemd-logind[1523]: Removed session 105. Feb 13 20:37:07.698334 kubelet[2665]: E0213 20:37:07.698286 2665 kubelet.go:2900] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Feb 13 20:37:09.448840 systemd[1]: Started sshd@105-10.0.0.9:22-10.0.0.1:58588.service - OpenSSH per-connection server daemon (10.0.0.1:58588). 
Feb 13 20:37:09.483291 sshd[4636]: Accepted publickey for core from 10.0.0.1 port 58588 ssh2: RSA SHA256:TdhA+b7AFfsR49yUYzSzEma9Q+UWNVnDR75LIS9Grbw Feb 13 20:37:09.484458 sshd[4636]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 20:37:09.487842 systemd-logind[1523]: New session 106 of user core. Feb 13 20:37:09.497833 systemd[1]: Started session-106.scope - Session 106 of User core. Feb 13 20:37:09.602798 sshd[4636]: pam_unix(sshd:session): session closed for user core Feb 13 20:37:09.606703 systemd[1]: sshd@105-10.0.0.9:22-10.0.0.1:58588.service: Deactivated successfully. Feb 13 20:37:09.608536 systemd-logind[1523]: Session 106 logged out. Waiting for processes to exit. Feb 13 20:37:09.608587 systemd[1]: session-106.scope: Deactivated successfully. Feb 13 20:37:09.609972 systemd-logind[1523]: Removed session 106. Feb 13 20:37:11.531014 kubelet[2665]: E0213 20:37:11.530813 2665 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 20:37:11.531413 kubelet[2665]: E0213 20:37:11.531378 2665 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"install-cni-plugin\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/flannel/flannel-cni-plugin:v1.1.2\\\"\"" pod="kube-flannel/kube-flannel-ds-5hbww" podUID="3f8b4fef-0cee-4a5a-b509-119c847b6786" Feb 13 20:37:12.699823 kubelet[2665]: E0213 20:37:12.699773 2665 kubelet.go:2900] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Feb 13 20:37:14.531534 kubelet[2665]: E0213 20:37:14.531200 2665 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 20:37:14.621969 systemd[1]: Started sshd@106-10.0.0.9:22-10.0.0.1:59366.service - OpenSSH per-connection server daemon (10.0.0.1:59366). Feb 13 20:37:14.656361 sshd[4652]: Accepted publickey for core from 10.0.0.1 port 59366 ssh2: RSA SHA256:TdhA+b7AFfsR49yUYzSzEma9Q+UWNVnDR75LIS9Grbw Feb 13 20:37:14.657481 sshd[4652]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 20:37:14.661838 systemd-logind[1523]: New session 107 of user core. Feb 13 20:37:14.677839 systemd[1]: Started session-107.scope - Session 107 of User core. Feb 13 20:37:14.781352 sshd[4652]: pam_unix(sshd:session): session closed for user core Feb 13 20:37:14.784517 systemd[1]: sshd@106-10.0.0.9:22-10.0.0.1:59366.service: Deactivated successfully. Feb 13 20:37:14.786438 systemd-logind[1523]: Session 107 logged out. Waiting for processes to exit. Feb 13 20:37:14.786514 systemd[1]: session-107.scope: Deactivated successfully. Feb 13 20:37:14.787778 systemd-logind[1523]: Removed session 107. Feb 13 20:37:17.701330 kubelet[2665]: E0213 20:37:17.701287 2665 kubelet.go:2900] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Feb 13 20:37:19.787827 systemd[1]: Started sshd@107-10.0.0.9:22-10.0.0.1:59376.service - OpenSSH per-connection server daemon (10.0.0.1:59376). 
Feb 13 20:37:19.822922 sshd[4669]: Accepted publickey for core from 10.0.0.1 port 59376 ssh2: RSA SHA256:TdhA+b7AFfsR49yUYzSzEma9Q+UWNVnDR75LIS9Grbw
Feb 13 20:37:19.824057 sshd[4669]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 20:37:19.827715 systemd-logind[1523]: New session 108 of user core.
Feb 13 20:37:19.840961 systemd[1]: Started session-108.scope - Session 108 of User core.
Feb 13 20:37:19.944468 sshd[4669]: pam_unix(sshd:session): session closed for user core
Feb 13 20:37:19.948045 systemd[1]: sshd@107-10.0.0.9:22-10.0.0.1:59376.service: Deactivated successfully.
Feb 13 20:37:19.949997 systemd-logind[1523]: Session 108 logged out. Waiting for processes to exit.
Feb 13 20:37:19.950053 systemd[1]: session-108.scope: Deactivated successfully.
Feb 13 20:37:19.951485 systemd-logind[1523]: Removed session 108.
Feb 13 20:37:22.531052 kubelet[2665]: E0213 20:37:22.530690 2665 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 13 20:37:22.532313 kubelet[2665]: E0213 20:37:22.532063 2665 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"install-cni-plugin\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/flannel/flannel-cni-plugin:v1.1.2\\\"\"" pod="kube-flannel/kube-flannel-ds-5hbww" podUID="3f8b4fef-0cee-4a5a-b509-119c847b6786"
Feb 13 20:37:22.702725 kubelet[2665]: E0213 20:37:22.702659 2665 kubelet.go:2900] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Feb 13 20:37:24.959847 systemd[1]: Started sshd@108-10.0.0.9:22-10.0.0.1:50458.service - OpenSSH per-connection server daemon (10.0.0.1:50458).
Feb 13 20:37:24.994361 sshd[4684]: Accepted publickey for core from 10.0.0.1 port 50458 ssh2: RSA SHA256:TdhA+b7AFfsR49yUYzSzEma9Q+UWNVnDR75LIS9Grbw
Feb 13 20:37:24.995543 sshd[4684]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 20:37:24.999673 systemd-logind[1523]: New session 109 of user core.
Feb 13 20:37:25.009841 systemd[1]: Started session-109.scope - Session 109 of User core.
Feb 13 20:37:25.116754 sshd[4684]: pam_unix(sshd:session): session closed for user core
Feb 13 20:37:25.119742 systemd[1]: sshd@108-10.0.0.9:22-10.0.0.1:50458.service: Deactivated successfully.
Feb 13 20:37:25.121812 systemd-logind[1523]: Session 109 logged out. Waiting for processes to exit.
Feb 13 20:37:25.121814 systemd[1]: session-109.scope: Deactivated successfully.
Feb 13 20:37:25.122954 systemd-logind[1523]: Removed session 109.
Feb 13 20:37:27.704238 kubelet[2665]: E0213 20:37:27.704190 2665 kubelet.go:2900] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Feb 13 20:37:30.130851 systemd[1]: Started sshd@109-10.0.0.9:22-10.0.0.1:50464.service - OpenSSH per-connection server daemon (10.0.0.1:50464).
Feb 13 20:37:30.165154 sshd[4699]: Accepted publickey for core from 10.0.0.1 port 50464 ssh2: RSA SHA256:TdhA+b7AFfsR49yUYzSzEma9Q+UWNVnDR75LIS9Grbw
Feb 13 20:37:30.166285 sshd[4699]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 20:37:30.169916 systemd-logind[1523]: New session 110 of user core.
Feb 13 20:37:30.179838 systemd[1]: Started session-110.scope - Session 110 of User core.
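
The kubelet.go:2900 condition ("cni plugin not initialized") persists because the flannel install-cni-plugin container never runs, so the CNI network config and plugin binaries it would install never appear and the runtime keeps reporting NetworkReady=false. A quick diagnostic sketch follows, assuming the conventional CNI paths /etc/cni/net.d and /opt/cni/bin; neither path is shown in this log and both can differ on a given host.

package main

// cnicheck: lists the directories the container runtime consults for
// CNI. An empty config dir is the usual reason kubelet keeps logging
// "cni plugin not initialized". Paths here are conventional defaults.
import (
	"fmt"
	"os"
)

func list(dir string) {
	entries, err := os.ReadDir(dir)
	if err != nil {
		fmt.Printf("%s: %v\n", dir, err)
		return
	}
	if len(entries) == 0 {
		fmt.Printf("%s: empty (CNI cannot initialize)\n", dir)
		return
	}
	for _, e := range entries {
		fmt.Printf("%s/%s\n", dir, e.Name())
	}
}

func main() {
	list("/etc/cni/net.d") // network configs, e.g. a flannel conflist
	list("/opt/cni/bin")   // plugin binaries those configs reference
}
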
Feb 13 20:37:30.284650 sshd[4699]: pam_unix(sshd:session): session closed for user core
Feb 13 20:37:30.287969 systemd[1]: sshd@109-10.0.0.9:22-10.0.0.1:50464.service: Deactivated successfully.
Feb 13 20:37:30.289994 systemd-logind[1523]: Session 110 logged out. Waiting for processes to exit.
Feb 13 20:37:30.290540 systemd[1]: session-110.scope: Deactivated successfully.
Feb 13 20:37:30.291333 systemd-logind[1523]: Removed session 110.
Feb 13 20:37:32.532261 kubelet[2665]: E0213 20:37:32.532233 2665 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 13 20:37:32.705556 kubelet[2665]: E0213 20:37:32.705525 2665 kubelet.go:2900] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Feb 13 20:37:35.296853 systemd[1]: Started sshd@110-10.0.0.9:22-10.0.0.1:60318.service - OpenSSH per-connection server daemon (10.0.0.1:60318).
Feb 13 20:37:35.331250 sshd[4716]: Accepted publickey for core from 10.0.0.1 port 60318 ssh2: RSA SHA256:TdhA+b7AFfsR49yUYzSzEma9Q+UWNVnDR75LIS9Grbw
Feb 13 20:37:35.332487 sshd[4716]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 20:37:35.336085 systemd-logind[1523]: New session 111 of user core.
Feb 13 20:37:35.348876 systemd[1]: Started session-111.scope - Session 111 of User core.
Feb 13 20:37:35.454964 sshd[4716]: pam_unix(sshd:session): session closed for user core
Feb 13 20:37:35.458089 systemd[1]: sshd@110-10.0.0.9:22-10.0.0.1:60318.service: Deactivated successfully.
Feb 13 20:37:35.460060 systemd-logind[1523]: Session 111 logged out. Waiting for processes to exit.
Feb 13 20:37:35.460170 systemd[1]: session-111.scope: Deactivated successfully.
Feb 13 20:37:35.461293 systemd-logind[1523]: Removed session 111.
Feb 13 20:37:35.530298 kubelet[2665]: E0213 20:37:35.530261 2665 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 13 20:37:35.530959 kubelet[2665]: E0213 20:37:35.530929 2665 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"install-cni-plugin\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/flannel/flannel-cni-plugin:v1.1.2\\\"\"" pod="kube-flannel/kube-flannel-ds-5hbww" podUID="3f8b4fef-0cee-4a5a-b509-119c847b6786"
Feb 13 20:37:36.530580 kubelet[2665]: E0213 20:37:36.530481 2665 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 13 20:37:37.706322 kubelet[2665]: E0213 20:37:37.706280 2665 kubelet.go:2900] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Feb 13 20:37:40.465873 systemd[1]: Started sshd@111-10.0.0.9:22-10.0.0.1:60326.service - OpenSSH per-connection server daemon (10.0.0.1:60326).
Feb 13 20:37:40.500761 sshd[4732]: Accepted publickey for core from 10.0.0.1 port 60326 ssh2: RSA SHA256:TdhA+b7AFfsR49yUYzSzEma9Q+UWNVnDR75LIS9Grbw
Feb 13 20:37:40.501879 sshd[4732]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 20:37:40.505692 systemd-logind[1523]: New session 112 of user core.
Feb 13 20:37:40.512870 systemd[1]: Started session-112.scope - Session 112 of User core.
Feb 13 20:37:40.616529 sshd[4732]: pam_unix(sshd:session): session closed for user core
Feb 13 20:37:40.619766 systemd[1]: sshd@111-10.0.0.9:22-10.0.0.1:60326.service: Deactivated successfully.
Feb 13 20:37:40.621684 systemd-logind[1523]: Session 112 logged out. Waiting for processes to exit.
Feb 13 20:37:40.621698 systemd[1]: session-112.scope: Deactivated successfully.
Feb 13 20:37:40.622714 systemd-logind[1523]: Removed session 112.
Feb 13 20:37:42.707348 kubelet[2665]: E0213 20:37:42.707302 2665 kubelet.go:2900] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Feb 13 20:37:45.631843 systemd[1]: Started sshd@112-10.0.0.9:22-10.0.0.1:36290.service - OpenSSH per-connection server daemon (10.0.0.1:36290).
Feb 13 20:37:45.666364 sshd[4748]: Accepted publickey for core from 10.0.0.1 port 36290 ssh2: RSA SHA256:TdhA+b7AFfsR49yUYzSzEma9Q+UWNVnDR75LIS9Grbw
Feb 13 20:37:45.667464 sshd[4748]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 20:37:45.670893 systemd-logind[1523]: New session 113 of user core.
Feb 13 20:37:45.680921 systemd[1]: Started session-113.scope - Session 113 of User core.
Feb 13 20:37:45.786418 sshd[4748]: pam_unix(sshd:session): session closed for user core
Feb 13 20:37:45.789445 systemd[1]: sshd@112-10.0.0.9:22-10.0.0.1:36290.service: Deactivated successfully.
Feb 13 20:37:45.791311 systemd[1]: session-113.scope: Deactivated successfully.
Feb 13 20:37:45.791340 systemd-logind[1523]: Session 113 logged out. Waiting for processes to exit.
Feb 13 20:37:45.792781 systemd-logind[1523]: Removed session 113.
Feb 13 20:37:47.708885 kubelet[2665]: E0213 20:37:47.708833 2665 kubelet.go:2900] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Feb 13 20:37:48.530944 kubelet[2665]: E0213 20:37:48.530906 2665 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 13 20:37:48.531530 kubelet[2665]: E0213 20:37:48.531470 2665 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"install-cni-plugin\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/flannel/flannel-cni-plugin:v1.1.2\\\"\"" pod="kube-flannel/kube-flannel-ds-5hbww" podUID="3f8b4fef-0cee-4a5a-b509-119c847b6786"
Feb 13 20:37:50.801850 systemd[1]: Started sshd@113-10.0.0.9:22-10.0.0.1:36300.service - OpenSSH per-connection server daemon (10.0.0.1:36300).
Feb 13 20:37:50.836272 sshd[4765]: Accepted publickey for core from 10.0.0.1 port 36300 ssh2: RSA SHA256:TdhA+b7AFfsR49yUYzSzEma9Q+UWNVnDR75LIS9Grbw
Feb 13 20:37:50.837544 sshd[4765]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 20:37:50.841594 systemd-logind[1523]: New session 114 of user core.
Feb 13 20:37:50.852873 systemd[1]: Started session-114.scope - Session 114 of User core.
Feb 13 20:37:50.959610 sshd[4765]: pam_unix(sshd:session): session closed for user core
Feb 13 20:37:50.962789 systemd[1]: sshd@113-10.0.0.9:22-10.0.0.1:36300.service: Deactivated successfully.
Feb 13 20:37:50.964937 systemd[1]: session-114.scope: Deactivated successfully.
Feb 13 20:37:50.965043 systemd-logind[1523]: Session 114 logged out. Waiting for processes to exit.
Feb 13 20:37:50.966436 systemd-logind[1523]: Removed session 114.
Feb 13 20:37:52.710370 kubelet[2665]: E0213 20:37:52.710326 2665 kubelet.go:2900] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Feb 13 20:37:55.972854 systemd[1]: Started sshd@114-10.0.0.9:22-10.0.0.1:49946.service - OpenSSH per-connection server daemon (10.0.0.1:49946).
Feb 13 20:37:56.008426 sshd[4781]: Accepted publickey for core from 10.0.0.1 port 49946 ssh2: RSA SHA256:TdhA+b7AFfsR49yUYzSzEma9Q+UWNVnDR75LIS9Grbw
Feb 13 20:37:56.009612 sshd[4781]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 20:37:56.013116 systemd-logind[1523]: New session 115 of user core.
Feb 13 20:37:56.020838 systemd[1]: Started session-115.scope - Session 115 of User core.
Feb 13 20:37:56.125077 sshd[4781]: pam_unix(sshd:session): session closed for user core
Feb 13 20:37:56.127865 systemd[1]: sshd@114-10.0.0.9:22-10.0.0.1:49946.service: Deactivated successfully.
Feb 13 20:37:56.130346 systemd-logind[1523]: Session 115 logged out. Waiting for processes to exit.
Feb 13 20:37:56.130850 systemd[1]: session-115.scope: Deactivated successfully.
Feb 13 20:37:56.132019 systemd-logind[1523]: Removed session 115.
Feb 13 20:37:57.711243 kubelet[2665]: E0213 20:37:57.711182 2665 kubelet.go:2900] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Feb 13 20:37:59.530757 kubelet[2665]: E0213 20:37:59.530654 2665 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 13 20:37:59.536311 kubelet[2665]: E0213 20:37:59.531234 2665 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"install-cni-plugin\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/flannel/flannel-cni-plugin:v1.1.2\\\"\"" pod="kube-flannel/kube-flannel-ds-5hbww" podUID="3f8b4fef-0cee-4a5a-b509-119c847b6786"
Feb 13 20:38:01.138832 systemd[1]: Started sshd@115-10.0.0.9:22-10.0.0.1:49960.service - OpenSSH per-connection server daemon (10.0.0.1:49960).
Feb 13 20:38:01.173242 sshd[4797]: Accepted publickey for core from 10.0.0.1 port 49960 ssh2: RSA SHA256:TdhA+b7AFfsR49yUYzSzEma9Q+UWNVnDR75LIS9Grbw
Feb 13 20:38:01.174454 sshd[4797]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 20:38:01.178123 systemd-logind[1523]: New session 116 of user core.
Feb 13 20:38:01.187844 systemd[1]: Started session-116.scope - Session 116 of User core.
Feb 13 20:38:01.292585 sshd[4797]: pam_unix(sshd:session): session closed for user core
Feb 13 20:38:01.296379 systemd[1]: sshd@115-10.0.0.9:22-10.0.0.1:49960.service: Deactivated successfully.
Feb 13 20:38:01.298245 systemd-logind[1523]: Session 116 logged out. Waiting for processes to exit.
Feb 13 20:38:01.298367 systemd[1]: session-116.scope: Deactivated successfully.
Feb 13 20:38:01.299150 systemd-logind[1523]: Removed session 116.
Feb 13 20:38:02.712328 kubelet[2665]: E0213 20:38:02.712291 2665 kubelet.go:2900] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Feb 13 20:38:06.306834 systemd[1]: Started sshd@116-10.0.0.9:22-10.0.0.1:56980.service - OpenSSH per-connection server daemon (10.0.0.1:56980).
Feb 13 20:38:06.341162 sshd[4813]: Accepted publickey for core from 10.0.0.1 port 56980 ssh2: RSA SHA256:TdhA+b7AFfsR49yUYzSzEma9Q+UWNVnDR75LIS9Grbw
Feb 13 20:38:06.342335 sshd[4813]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 20:38:06.345596 systemd-logind[1523]: New session 117 of user core.
Feb 13 20:38:06.355841 systemd[1]: Started session-117.scope - Session 117 of User core.
Feb 13 20:38:06.461976 sshd[4813]: pam_unix(sshd:session): session closed for user core
Feb 13 20:38:06.465225 systemd[1]: sshd@116-10.0.0.9:22-10.0.0.1:56980.service: Deactivated successfully.
Feb 13 20:38:06.467123 systemd-logind[1523]: Session 117 logged out. Waiting for processes to exit.
Feb 13 20:38:06.467178 systemd[1]: session-117.scope: Deactivated successfully.
Feb 13 20:38:06.468469 systemd-logind[1523]: Removed session 117.
Feb 13 20:38:06.530501 kubelet[2665]: E0213 20:38:06.530461 2665 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 13 20:38:07.713745 kubelet[2665]: E0213 20:38:07.713656 2665 kubelet.go:2900] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
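
Taken together, this excerpt is a steady loop: the same three kubelet errors interleaved with short-lived SSH sessions (104 through 117, all from 10.0.0.1, each lasting well under a second, consistent with an automated probe). The hypothetical tally helper below, keyed on the stable substrings of the entries above, can confirm that pattern when fed journal text on stdin (for example, output from journalctl):

package main

// tally: counts the recurring event types in journal text on stdin.
// The match strings are copied from the entries above; the tool itself
// is an illustrative aid, not part of this system.
import (
	"bufio"
	"fmt"
	"os"
	"strings"
)

func main() {
	patterns := map[string]string{
		"Nameserver limits exceeded":          "dns nameserver limit",
		"Back-off pulling image":              "image pull back-off",
		"Container runtime network not ready": "network not ready",
		"Accepted publickey":                  "ssh session opened",
		"session closed for user":             "ssh session closed",
	}
	counts := map[string]int{}
	sc := bufio.NewScanner(os.Stdin)
	sc.Buffer(make([]byte, 0, 1024*1024), 1024*1024) // journal lines can be long
	for sc.Scan() {
		for needle, label := range patterns {
			if strings.Contains(sc.Text(), needle) {
				counts[label]++
			}
		}
	}
	for label, n := range counts {
		fmt.Printf("%-22s %d\n", label, n)
	}
}
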