Feb 13 20:49:59.905086 kernel: Booting Linux on physical CPU 0x0000000000 [0x413fd0c1]
Feb 13 20:49:59.905106 kernel: Linux version 6.6.74-flatcar (build@pony-truck.infra.kinvolk.io) (aarch64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p17) 13.3.1 20240614, GNU ld (Gentoo 2.42 p3) 2.42.0) #1 SMP PREEMPT Thu Feb 13 18:13:29 -00 2025
Feb 13 20:49:59.905116 kernel: KASLR enabled
Feb 13 20:49:59.905121 kernel: efi: EFI v2.7 by EDK II
Feb 13 20:49:59.905127 kernel: efi: SMBIOS 3.0=0xdced0000 MEMATTR=0xdba86018 ACPI 2.0=0xd9710018 RNG=0xd971e498 MEMRESERVE=0xd9b43d18
Feb 13 20:49:59.905133 kernel: random: crng init done
Feb 13 20:49:59.905140 kernel: ACPI: Early table checksum verification disabled
Feb 13 20:49:59.905146 kernel: ACPI: RSDP 0x00000000D9710018 000024 (v02 BOCHS )
Feb 13 20:49:59.905152 kernel: ACPI: XSDT 0x00000000D971FE98 000064 (v01 BOCHS BXPC 00000001 01000013)
Feb 13 20:49:59.905160 kernel: ACPI: FACP 0x00000000D971FA98 000114 (v06 BOCHS BXPC 00000001 BXPC 00000001)
Feb 13 20:49:59.905166 kernel: ACPI: DSDT 0x00000000D9717518 0014A2 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Feb 13 20:49:59.905172 kernel: ACPI: APIC 0x00000000D971FC18 0001A8 (v04 BOCHS BXPC 00000001 BXPC 00000001)
Feb 13 20:49:59.905189 kernel: ACPI: PPTT 0x00000000D971D898 00009C (v02 BOCHS BXPC 00000001 BXPC 00000001)
Feb 13 20:49:59.905196 kernel: ACPI: GTDT 0x00000000D971E818 000060 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Feb 13 20:49:59.905203 kernel: ACPI: MCFG 0x00000000D971E918 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001)
Feb 13 20:49:59.905212 kernel: ACPI: SPCR 0x00000000D971FF98 000050 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Feb 13 20:49:59.905218 kernel: ACPI: DBG2 0x00000000D971E418 000057 (v00 BOCHS BXPC 00000001 BXPC 00000001)
Feb 13 20:49:59.905225 kernel: ACPI: IORT 0x00000000D971E718 000080 (v03 BOCHS BXPC 00000001 BXPC 00000001)
Feb 13 20:49:59.905231 kernel: ACPI: SPCR: console: pl011,mmio,0x9000000,9600
Feb 13 20:49:59.905237 kernel: NUMA: Failed to initialise from firmware
Feb 13 20:49:59.905244 kernel: NUMA: Faking a node at [mem 0x0000000040000000-0x00000000dcffffff]
Feb 13 20:49:59.905250 kernel: NUMA: NODE_DATA [mem 0xdc957800-0xdc95cfff]
Feb 13 20:49:59.905256 kernel: Zone ranges:
Feb 13 20:49:59.905263 kernel: DMA [mem 0x0000000040000000-0x00000000dcffffff]
Feb 13 20:49:59.905269 kernel: DMA32 empty
Feb 13 20:49:59.905276 kernel: Normal empty
Feb 13 20:49:59.905282 kernel: Movable zone start for each node
Feb 13 20:49:59.905289 kernel: Early memory node ranges
Feb 13 20:49:59.905295 kernel: node 0: [mem 0x0000000040000000-0x00000000d976ffff]
Feb 13 20:49:59.905301 kernel: node 0: [mem 0x00000000d9770000-0x00000000d9b3ffff]
Feb 13 20:49:59.905307 kernel: node 0: [mem 0x00000000d9b40000-0x00000000dce1ffff]
Feb 13 20:49:59.905314 kernel: node 0: [mem 0x00000000dce20000-0x00000000dceaffff]
Feb 13 20:49:59.905320 kernel: node 0: [mem 0x00000000dceb0000-0x00000000dcebffff]
Feb 13 20:49:59.905326 kernel: node 0: [mem 0x00000000dcec0000-0x00000000dcfdffff]
Feb 13 20:49:59.905333 kernel: node 0: [mem 0x00000000dcfe0000-0x00000000dcffffff]
Feb 13 20:49:59.905339 kernel: Initmem setup node 0 [mem 0x0000000040000000-0x00000000dcffffff]
Feb 13 20:49:59.905345 kernel: On node 0, zone DMA: 12288 pages in unavailable ranges
Feb 13 20:49:59.905353 kernel: psci: probing for conduit method from ACPI.
Feb 13 20:49:59.905359 kernel: psci: PSCIv1.1 detected in firmware.
Feb 13 20:49:59.905365 kernel: psci: Using standard PSCI v0.2 function IDs
Feb 13 20:49:59.905375 kernel: psci: Trusted OS migration not required
Feb 13 20:49:59.905381 kernel: psci: SMC Calling Convention v1.1
Feb 13 20:49:59.905388 kernel: smccc: KVM: hypervisor services detected (0x00000000 0x00000000 0x00000000 0x00000003)
Feb 13 20:49:59.905396 kernel: percpu: Embedded 31 pages/cpu s86696 r8192 d32088 u126976
Feb 13 20:49:59.905403 kernel: pcpu-alloc: s86696 r8192 d32088 u126976 alloc=31*4096
Feb 13 20:49:59.905410 kernel: pcpu-alloc: [0] 0 [0] 1 [0] 2 [0] 3
Feb 13 20:49:59.905416 kernel: Detected PIPT I-cache on CPU0
Feb 13 20:49:59.905423 kernel: CPU features: detected: GIC system register CPU interface
Feb 13 20:49:59.905430 kernel: CPU features: detected: Hardware dirty bit management
Feb 13 20:49:59.905436 kernel: CPU features: detected: Spectre-v4
Feb 13 20:49:59.905451 kernel: CPU features: detected: Spectre-BHB
Feb 13 20:49:59.905458 kernel: CPU features: kernel page table isolation forced ON by KASLR
Feb 13 20:49:59.905465 kernel: CPU features: detected: Kernel page table isolation (KPTI)
Feb 13 20:49:59.905474 kernel: CPU features: detected: ARM erratum 1418040
Feb 13 20:49:59.905481 kernel: CPU features: detected: SSBS not fully self-synchronizing
Feb 13 20:49:59.905487 kernel: alternatives: applying boot alternatives
Feb 13 20:49:59.905495 kernel: Kernel command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected acpi=force verity.usrhash=c15c751c06cfb933aa98417326b93d899c08a83ce060a940cd01082629c201a7
Feb 13 20:49:59.905502 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Feb 13 20:49:59.905509 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Feb 13 20:49:59.905516 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Feb 13 20:49:59.905523 kernel: Fallback order for Node 0: 0
Feb 13 20:49:59.905529 kernel: Built 1 zonelists, mobility grouping on. Total pages: 633024
Feb 13 20:49:59.905536 kernel: Policy zone: DMA
Feb 13 20:49:59.905543 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Feb 13 20:49:59.905551 kernel: software IO TLB: area num 4.
Feb 13 20:49:59.905557 kernel: software IO TLB: mapped [mem 0x00000000d2e00000-0x00000000d6e00000] (64MB)
Feb 13 20:49:59.905565 kernel: Memory: 2386528K/2572288K available (10240K kernel code, 2186K rwdata, 8096K rodata, 39360K init, 897K bss, 185760K reserved, 0K cma-reserved)
Feb 13 20:49:59.905571 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1
Feb 13 20:49:59.905578 kernel: rcu: Preemptible hierarchical RCU implementation.
Feb 13 20:49:59.905585 kernel: rcu: RCU event tracing is enabled.
Feb 13 20:49:59.905592 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4.
Feb 13 20:49:59.905599 kernel: Trampoline variant of Tasks RCU enabled.
Feb 13 20:49:59.905606 kernel: Tracing variant of Tasks RCU enabled.
Feb 13 20:49:59.905613 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Feb 13 20:49:59.905619 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4
Feb 13 20:49:59.905626 kernel: NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0
Feb 13 20:49:59.905634 kernel: GICv3: 256 SPIs implemented
Feb 13 20:49:59.905641 kernel: GICv3: 0 Extended SPIs implemented
Feb 13 20:49:59.905647 kernel: Root IRQ handler: gic_handle_irq
Feb 13 20:49:59.905654 kernel: GICv3: GICv3 features: 16 PPIs, DirectLPI
Feb 13 20:49:59.905661 kernel: GICv3: CPU0: found redistributor 0 region 0:0x00000000080a0000
Feb 13 20:49:59.905668 kernel: ITS [mem 0x08080000-0x0809ffff]
Feb 13 20:49:59.905675 kernel: ITS@0x0000000008080000: allocated 8192 Devices @400c0000 (indirect, esz 8, psz 64K, shr 1)
Feb 13 20:49:59.905681 kernel: ITS@0x0000000008080000: allocated 8192 Interrupt Collections @400d0000 (flat, esz 8, psz 64K, shr 1)
Feb 13 20:49:59.905688 kernel: GICv3: using LPI property table @0x00000000400f0000
Feb 13 20:49:59.905695 kernel: GICv3: CPU0: using allocated LPI pending table @0x0000000040100000
Feb 13 20:49:59.905702 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Feb 13 20:49:59.905710 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Feb 13 20:49:59.905716 kernel: arch_timer: cp15 timer(s) running at 25.00MHz (virt).
Feb 13 20:49:59.905723 kernel: clocksource: arch_sys_counter: mask: 0xffffffffffffff max_cycles: 0x5c40939b5, max_idle_ns: 440795202646 ns
Feb 13 20:49:59.905730 kernel: sched_clock: 56 bits at 25MHz, resolution 40ns, wraps every 4398046511100ns
Feb 13 20:49:59.905737 kernel: arm-pv: using stolen time PV
Feb 13 20:49:59.905744 kernel: Console: colour dummy device 80x25
Feb 13 20:49:59.905751 kernel: ACPI: Core revision 20230628
Feb 13 20:49:59.905758 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 50.00 BogoMIPS (lpj=25000)
Feb 13 20:49:59.905765 kernel: pid_max: default: 32768 minimum: 301
Feb 13 20:49:59.905772 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity
Feb 13 20:49:59.905780 kernel: landlock: Up and running.
Feb 13 20:49:59.905787 kernel: SELinux: Initializing.
Feb 13 20:49:59.905794 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Feb 13 20:49:59.905801 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Feb 13 20:49:59.905808 kernel: RCU Tasks: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Feb 13 20:49:59.905815 kernel: RCU Tasks Trace: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Feb 13 20:49:59.905822 kernel: rcu: Hierarchical SRCU implementation.
Feb 13 20:49:59.905829 kernel: rcu: Max phase no-delay instances is 400.
Feb 13 20:49:59.905836 kernel: Platform MSI: ITS@0x8080000 domain created
Feb 13 20:49:59.905844 kernel: PCI/MSI: ITS@0x8080000 domain created
Feb 13 20:49:59.905851 kernel: Remapping and enabling EFI services.
Feb 13 20:49:59.905858 kernel: smp: Bringing up secondary CPUs ...
Feb 13 20:49:59.905865 kernel: Detected PIPT I-cache on CPU1
Feb 13 20:49:59.905872 kernel: GICv3: CPU1: found redistributor 1 region 0:0x00000000080c0000
Feb 13 20:49:59.905879 kernel: GICv3: CPU1: using allocated LPI pending table @0x0000000040110000
Feb 13 20:49:59.905885 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Feb 13 20:49:59.905892 kernel: CPU1: Booted secondary processor 0x0000000001 [0x413fd0c1]
Feb 13 20:49:59.905899 kernel: Detected PIPT I-cache on CPU2
Feb 13 20:49:59.905906 kernel: GICv3: CPU2: found redistributor 2 region 0:0x00000000080e0000
Feb 13 20:49:59.905914 kernel: GICv3: CPU2: using allocated LPI pending table @0x0000000040120000
Feb 13 20:49:59.905921 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Feb 13 20:49:59.905932 kernel: CPU2: Booted secondary processor 0x0000000002 [0x413fd0c1]
Feb 13 20:49:59.905941 kernel: Detected PIPT I-cache on CPU3
Feb 13 20:49:59.905948 kernel: GICv3: CPU3: found redistributor 3 region 0:0x0000000008100000
Feb 13 20:49:59.905956 kernel: GICv3: CPU3: using allocated LPI pending table @0x0000000040130000
Feb 13 20:49:59.905963 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Feb 13 20:49:59.905970 kernel: CPU3: Booted secondary processor 0x0000000003 [0x413fd0c1]
Feb 13 20:49:59.905978 kernel: smp: Brought up 1 node, 4 CPUs
Feb 13 20:49:59.905987 kernel: SMP: Total of 4 processors activated.
Feb 13 20:49:59.905994 kernel: CPU features: detected: 32-bit EL0 Support
Feb 13 20:49:59.906002 kernel: CPU features: detected: Data cache clean to the PoU not required for I/D coherence
Feb 13 20:49:59.906009 kernel: CPU features: detected: Common not Private translations
Feb 13 20:49:59.906017 kernel: CPU features: detected: CRC32 instructions
Feb 13 20:49:59.906024 kernel: CPU features: detected: Enhanced Virtualization Traps
Feb 13 20:49:59.906031 kernel: CPU features: detected: RCpc load-acquire (LDAPR)
Feb 13 20:49:59.906039 kernel: CPU features: detected: LSE atomic instructions
Feb 13 20:49:59.906047 kernel: CPU features: detected: Privileged Access Never
Feb 13 20:49:59.906055 kernel: CPU features: detected: RAS Extension Support
Feb 13 20:49:59.906065 kernel: CPU features: detected: Speculative Store Bypassing Safe (SSBS)
Feb 13 20:49:59.906073 kernel: CPU: All CPU(s) started at EL1
Feb 13 20:49:59.906080 kernel: alternatives: applying system-wide alternatives
Feb 13 20:49:59.906087 kernel: devtmpfs: initialized
Feb 13 20:49:59.906094 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Feb 13 20:49:59.906102 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear)
Feb 13 20:49:59.906109 kernel: pinctrl core: initialized pinctrl subsystem
Feb 13 20:49:59.906118 kernel: SMBIOS 3.0.0 present.
Feb 13 20:49:59.906125 kernel: DMI: QEMU KVM Virtual Machine, BIOS edk2-20230524-3.fc38 05/24/2023
Feb 13 20:49:59.906132 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Feb 13 20:49:59.906140 kernel: DMA: preallocated 512 KiB GFP_KERNEL pool for atomic allocations
Feb 13 20:49:59.906148 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations
Feb 13 20:49:59.906155 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations
Feb 13 20:49:59.906162 kernel: audit: initializing netlink subsys (disabled)
Feb 13 20:49:59.906170 kernel: audit: type=2000 audit(0.024:1): state=initialized audit_enabled=0 res=1
Feb 13 20:49:59.906182 kernel: thermal_sys: Registered thermal governor 'step_wise'
Feb 13 20:49:59.906191 kernel: cpuidle: using governor menu
Feb 13 20:49:59.906198 kernel: hw-breakpoint: found 6 breakpoint and 4 watchpoint registers.
Feb 13 20:49:59.906206 kernel: ASID allocator initialised with 32768 entries
Feb 13 20:49:59.906213 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Feb 13 20:49:59.906220 kernel: Serial: AMBA PL011 UART driver
Feb 13 20:49:59.906227 kernel: Modules: 2G module region forced by RANDOMIZE_MODULE_REGION_FULL
Feb 13 20:49:59.906234 kernel: Modules: 0 pages in range for non-PLT usage
Feb 13 20:49:59.906242 kernel: Modules: 509040 pages in range for PLT usage
Feb 13 20:49:59.906249 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Feb 13 20:49:59.906257 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 1.00 GiB page
Feb 13 20:49:59.906265 kernel: HugeTLB: registered 32.0 MiB page size, pre-allocated 0 pages
Feb 13 20:49:59.906272 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 32.0 MiB page
Feb 13 20:49:59.906279 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Feb 13 20:49:59.906286 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 2.00 MiB page
Feb 13 20:49:59.906294 kernel: HugeTLB: registered 64.0 KiB page size, pre-allocated 0 pages
Feb 13 20:49:59.906301 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 64.0 KiB page
Feb 13 20:49:59.906308 kernel: ACPI: Added _OSI(Module Device)
Feb 13 20:49:59.906315 kernel: ACPI: Added _OSI(Processor Device)
Feb 13 20:49:59.906324 kernel: ACPI: Added _OSI(3.0 _SCP Extensions)
Feb 13 20:49:59.906332 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Feb 13 20:49:59.906339 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Feb 13 20:49:59.906347 kernel: ACPI: Interpreter enabled
Feb 13 20:49:59.906356 kernel: ACPI: Using GIC for interrupt routing
Feb 13 20:49:59.906364 kernel: ACPI: MCFG table detected, 1 entries
Feb 13 20:49:59.906371 kernel: ARMH0011:00: ttyAMA0 at MMIO 0x9000000 (irq = 12, base_baud = 0) is a SBSA
Feb 13 20:49:59.906381 kernel: printk: console [ttyAMA0] enabled
Feb 13 20:49:59.906392 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Feb 13 20:49:59.906540 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Feb 13 20:49:59.906613 kernel: acpi PNP0A08:00: _OSC: platform does not support [LTR]
Feb 13 20:49:59.906707 kernel: acpi PNP0A08:00: _OSC: OS now controls [PCIeHotplug PME AER PCIeCapability]
Feb 13 20:49:59.906776 kernel: acpi PNP0A08:00: ECAM area [mem 0x4010000000-0x401fffffff] reserved by PNP0C02:00
Feb 13 20:49:59.906842 kernel: acpi PNP0A08:00: ECAM at [mem 0x4010000000-0x401fffffff] for [bus 00-ff]
Feb 13 20:49:59.906852 kernel: ACPI: Remapped I/O 0x000000003eff0000 to [io 0x0000-0xffff window]
Feb 13 20:49:59.906860 kernel: PCI host bridge to bus 0000:00
Feb 13 20:49:59.906933 kernel: pci_bus 0000:00: root bus resource [mem 0x10000000-0x3efeffff window]
Feb 13 20:49:59.906995 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0xffff window]
Feb 13 20:49:59.907055 kernel: pci_bus 0000:00: root bus resource [mem 0x8000000000-0xffffffffff window]
Feb 13 20:49:59.907117 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Feb 13 20:49:59.907226 kernel: pci 0000:00:00.0: [1b36:0008] type 00 class 0x060000
Feb 13 20:49:59.907306 kernel: pci 0000:00:01.0: [1af4:1005] type 00 class 0x00ff00
Feb 13 20:49:59.907378 kernel: pci 0000:00:01.0: reg 0x10: [io 0x0000-0x001f]
Feb 13 20:49:59.907454 kernel: pci 0000:00:01.0: reg 0x14: [mem 0x10000000-0x10000fff]
Feb 13 20:49:59.907524 kernel: pci 0000:00:01.0: reg 0x20: [mem 0x8000000000-0x8000003fff 64bit pref]
Feb 13 20:49:59.907592 kernel: pci 0000:00:01.0: BAR 4: assigned [mem 0x8000000000-0x8000003fff 64bit pref]
Feb 13 20:49:59.907659 kernel: pci 0000:00:01.0: BAR 1: assigned [mem 0x10000000-0x10000fff]
Feb 13 20:49:59.907739 kernel: pci 0000:00:01.0: BAR 0: assigned [io 0x1000-0x101f]
Feb 13 20:49:59.907802 kernel: pci_bus 0000:00: resource 4 [mem 0x10000000-0x3efeffff window]
Feb 13 20:49:59.907865 kernel: pci_bus 0000:00: resource 5 [io 0x0000-0xffff window]
Feb 13 20:49:59.907925 kernel: pci_bus 0000:00: resource 6 [mem 0x8000000000-0xffffffffff window]
Feb 13 20:49:59.907935 kernel: ACPI: PCI: Interrupt link GSI0 configured for IRQ 35
Feb 13 20:49:59.907942 kernel: ACPI: PCI: Interrupt link GSI1 configured for IRQ 36
Feb 13 20:49:59.907950 kernel: ACPI: PCI: Interrupt link GSI2 configured for IRQ 37
Feb 13 20:49:59.907957 kernel: ACPI: PCI: Interrupt link GSI3 configured for IRQ 38
Feb 13 20:49:59.907965 kernel: iommu: Default domain type: Translated
Feb 13 20:49:59.907973 kernel: iommu: DMA domain TLB invalidation policy: strict mode
Feb 13 20:49:59.907981 kernel: efivars: Registered efivars operations
Feb 13 20:49:59.907991 kernel: vgaarb: loaded
Feb 13 20:49:59.907998 kernel: clocksource: Switched to clocksource arch_sys_counter
Feb 13 20:49:59.908006 kernel: VFS: Disk quotas dquot_6.6.0
Feb 13 20:49:59.908013 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Feb 13 20:49:59.908021 kernel: pnp: PnP ACPI init
Feb 13 20:49:59.908096 kernel: system 00:00: [mem 0x4010000000-0x401fffffff window] could not be reserved
Feb 13 20:49:59.908107 kernel: pnp: PnP ACPI: found 1 devices
Feb 13 20:49:59.908114 kernel: NET: Registered PF_INET protocol family
Feb 13 20:49:59.908124 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Feb 13 20:49:59.908131 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Feb 13 20:49:59.908139 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Feb 13 20:49:59.908147 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Feb 13 20:49:59.908154 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
Feb 13 20:49:59.908162 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Feb 13 20:49:59.908169 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Feb 13 20:49:59.908185 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Feb 13 20:49:59.908193 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Feb 13 20:49:59.908202 kernel: PCI: CLS 0 bytes, default 64
Feb 13 20:49:59.908210 kernel: kvm [1]: HYP mode not available
Feb 13 20:49:59.908217 kernel: Initialise system trusted keyrings
Feb 13 20:49:59.908224 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Feb 13 20:49:59.908232 kernel: Key type asymmetric registered
Feb 13 20:49:59.908240 kernel: Asymmetric key parser 'x509' registered
Feb 13 20:49:59.908247 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250)
Feb 13 20:49:59.908255 kernel: io scheduler mq-deadline registered
Feb 13 20:49:59.908262 kernel: io scheduler kyber registered
Feb 13 20:49:59.908271 kernel: io scheduler bfq registered
Feb 13 20:49:59.908279 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0C:00/input/input0
Feb 13 20:49:59.908286 kernel: ACPI: button: Power Button [PWRB]
Feb 13 20:49:59.908294 kernel: ACPI: \_SB_.PCI0.GSI1: Enabled at IRQ 36
Feb 13 20:49:59.908363 kernel: virtio-pci 0000:00:01.0: enabling device (0005 -> 0007)
Feb 13 20:49:59.908373 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Feb 13 20:49:59.908381 kernel: thunder_xcv, ver 1.0
Feb 13 20:49:59.908388 kernel: thunder_bgx, ver 1.0
Feb 13 20:49:59.908395 kernel: nicpf, ver 1.0
Feb 13 20:49:59.908405 kernel: nicvf, ver 1.0
Feb 13 20:49:59.908483 kernel: rtc-efi rtc-efi.0: registered as rtc0
Feb 13 20:49:59.908547 kernel: rtc-efi rtc-efi.0: setting system clock to 2025-02-13T20:49:59 UTC (1739479799)
Feb 13 20:49:59.908557 kernel: hid: raw HID events driver (C) Jiri Kosina
Feb 13 20:49:59.908565 kernel: hw perfevents: enabled with armv8_pmuv3_0 PMU driver, 7 counters available
Feb 13 20:49:59.908572 kernel: watchdog: Delayed init of the lockup detector failed: -19
Feb 13 20:49:59.908580 kernel: watchdog: Hard watchdog permanently disabled
Feb 13 20:49:59.908587 kernel: NET: Registered PF_INET6 protocol family
Feb 13 20:49:59.908600 kernel: Segment Routing with IPv6
Feb 13 20:49:59.908608 kernel: In-situ OAM (IOAM) with IPv6
Feb 13 20:49:59.908615 kernel: NET: Registered PF_PACKET protocol family
Feb 13 20:49:59.908623 kernel: Key type dns_resolver registered
Feb 13 20:49:59.908630 kernel: registered taskstats version 1
Feb 13 20:49:59.908637 kernel: Loading compiled-in X.509 certificates
Feb 13 20:49:59.908645 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.74-flatcar: 8bd805622262697b24b0fa7c407ae82c4289ceec'
Feb 13 20:49:59.908652 kernel: Key type .fscrypt registered
Feb 13 20:49:59.908660 kernel: Key type fscrypt-provisioning registered
Feb 13 20:49:59.908669 kernel: ima: No TPM chip found, activating TPM-bypass!
Feb 13 20:49:59.908676 kernel: ima: Allocated hash algorithm: sha1
Feb 13 20:49:59.908684 kernel: ima: No architecture policies found
Feb 13 20:49:59.908692 kernel: alg: No test for fips(ansi_cprng) (fips_ansi_cprng)
Feb 13 20:49:59.908699 kernel: clk: Disabling unused clocks
Feb 13 20:49:59.908707 kernel: Freeing unused kernel memory: 39360K
Feb 13 20:49:59.908714 kernel: Run /init as init process
Feb 13 20:49:59.908722 kernel: with arguments:
Feb 13 20:49:59.908729 kernel: /init
Feb 13 20:49:59.908738 kernel: with environment:
Feb 13 20:49:59.908745 kernel: HOME=/
Feb 13 20:49:59.908753 kernel: TERM=linux
Feb 13 20:49:59.908760 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
Feb 13 20:49:59.908769 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Feb 13 20:49:59.908779 systemd[1]: Detected virtualization kvm.
Feb 13 20:49:59.908787 systemd[1]: Detected architecture arm64.
Feb 13 20:49:59.908795 systemd[1]: Running in initrd.
Feb 13 20:49:59.908804 systemd[1]: No hostname configured, using default hostname.
Feb 13 20:49:59.908812 systemd[1]: Hostname set to <localhost>.
Feb 13 20:49:59.908820 systemd[1]: Initializing machine ID from VM UUID.
Feb 13 20:49:59.908828 systemd[1]: Queued start job for default target initrd.target.
Feb 13 20:49:59.908836 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Feb 13 20:49:59.908844 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Feb 13 20:49:59.908853 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Feb 13 20:49:59.908861 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Feb 13 20:49:59.908870 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Feb 13 20:49:59.908878 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Feb 13 20:49:59.908888 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Feb 13 20:49:59.908896 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Feb 13 20:49:59.908904 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Feb 13 20:49:59.908913 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Feb 13 20:49:59.908922 systemd[1]: Reached target paths.target - Path Units.
Feb 13 20:49:59.908930 systemd[1]: Reached target slices.target - Slice Units.
Feb 13 20:49:59.908937 systemd[1]: Reached target swap.target - Swaps.
Feb 13 20:49:59.908945 systemd[1]: Reached target timers.target - Timer Units.
Feb 13 20:49:59.908953 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Feb 13 20:49:59.908961 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Feb 13 20:49:59.908969 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Feb 13 20:49:59.908977 systemd[1]: Listening on systemd-journald.socket - Journal Socket.
Feb 13 20:49:59.908985 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Feb 13 20:49:59.908995 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Feb 13 20:49:59.909003 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Feb 13 20:49:59.909011 systemd[1]: Reached target sockets.target - Socket Units.
Feb 13 20:49:59.909019 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Feb 13 20:49:59.909027 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Feb 13 20:49:59.909035 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Feb 13 20:49:59.909043 systemd[1]: Starting systemd-fsck-usr.service...
Feb 13 20:49:59.909051 systemd[1]: Starting systemd-journald.service - Journal Service...
Feb 13 20:49:59.909059 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Feb 13 20:49:59.909069 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Feb 13 20:49:59.909077 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Feb 13 20:49:59.909088 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Feb 13 20:49:59.909102 systemd[1]: Finished systemd-fsck-usr.service.
Feb 13 20:49:59.909110 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Feb 13 20:49:59.909134 systemd-journald[237]: Collecting audit messages is disabled.
Feb 13 20:49:59.909154 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Feb 13 20:49:59.909162 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Feb 13 20:49:59.909173 systemd-journald[237]: Journal started
Feb 13 20:49:59.909201 systemd-journald[237]: Runtime Journal (/run/log/journal/3426b64876c147b5bea552fbebc6c98f) is 5.9M, max 47.3M, 41.4M free.
Feb 13 20:49:59.896763 systemd-modules-load[238]: Inserted module 'overlay'
Feb 13 20:49:59.914150 systemd[1]: Started systemd-journald.service - Journal Service.
Feb 13 20:49:59.914171 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Feb 13 20:49:59.914199 kernel: Bridge firewalling registered
Feb 13 20:49:59.913000 systemd-modules-load[238]: Inserted module 'br_netfilter'
Feb 13 20:49:59.914083 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Feb 13 20:49:59.916159 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Feb 13 20:49:59.918313 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Feb 13 20:49:59.920088 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Feb 13 20:49:59.924880 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Feb 13 20:49:59.928475 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Feb 13 20:49:59.931510 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Feb 13 20:49:59.933560 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Feb 13 20:49:59.935493 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Feb 13 20:49:59.944293 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Feb 13 20:49:59.946495 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Feb 13 20:49:59.955587 dracut-cmdline[274]: dracut-dracut-053
Feb 13 20:49:59.958018 dracut-cmdline[274]: Using kernel command line parameters: rd.driver.pre=btrfs BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected acpi=force verity.usrhash=c15c751c06cfb933aa98417326b93d899c08a83ce060a940cd01082629c201a7
Feb 13 20:49:59.974792 systemd-resolved[276]: Positive Trust Anchors:
Feb 13 20:49:59.974806 systemd-resolved[276]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Feb 13 20:49:59.974838 systemd-resolved[276]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Feb 13 20:49:59.979430 systemd-resolved[276]: Defaulting to hostname 'linux'.
Feb 13 20:49:59.980471 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Feb 13 20:49:59.983708 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Feb 13 20:50:00.026219 kernel: SCSI subsystem initialized
Feb 13 20:50:00.031196 kernel: Loading iSCSI transport class v2.0-870.
Feb 13 20:50:00.038203 kernel: iscsi: registered transport (tcp)
Feb 13 20:50:00.051195 kernel: iscsi: registered transport (qla4xxx)
Feb 13 20:50:00.051209 kernel: QLogic iSCSI HBA Driver
Feb 13 20:50:00.094408 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Feb 13 20:50:00.104347 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Feb 13 20:50:00.120797 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Feb 13 20:50:00.120857 kernel: device-mapper: uevent: version 1.0.3
Feb 13 20:50:00.120874 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com
Feb 13 20:50:00.169222 kernel: raid6: neonx8 gen() 14110 MB/s
Feb 13 20:50:00.186215 kernel: raid6: neonx4 gen() 15643 MB/s
Feb 13 20:50:00.203210 kernel: raid6: neonx2 gen() 13227 MB/s
Feb 13 20:50:00.220209 kernel: raid6: neonx1 gen() 10482 MB/s
Feb 13 20:50:00.237199 kernel: raid6: int64x8 gen() 6956 MB/s
Feb 13 20:50:00.254201 kernel: raid6: int64x4 gen() 7341 MB/s
Feb 13 20:50:00.271199 kernel: raid6: int64x2 gen() 6123 MB/s
Feb 13 20:50:00.288285 kernel: raid6: int64x1 gen() 5053 MB/s
Feb 13 20:50:00.288303 kernel: raid6: using algorithm neonx4 gen() 15643 MB/s
Feb 13 20:50:00.306261 kernel: raid6: .... xor() 12222 MB/s, rmw enabled
Feb 13 20:50:00.306276 kernel: raid6: using neon recovery algorithm
Feb 13 20:50:00.312249 kernel: xor: measuring software checksum speed
Feb 13 20:50:00.312266 kernel: 8regs : 19731 MB/sec
Feb 13 20:50:00.313535 kernel: 32regs : 19646 MB/sec
Feb 13 20:50:00.313547 kernel: arm64_neon : 26831 MB/sec
Feb 13 20:50:00.313557 kernel: xor: using function: arm64_neon (26831 MB/sec)
Feb 13 20:50:00.373212 kernel: Btrfs loaded, zoned=no, fsverity=no
Feb 13 20:50:00.384767 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Feb 13 20:50:00.397319 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Feb 13 20:50:00.409337 systemd-udevd[459]: Using default interface naming scheme 'v255'.
Feb 13 20:50:00.412548 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Feb 13 20:50:00.422322 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Feb 13 20:50:00.434379 dracut-pre-trigger[464]: rd.md=0: removing MD RAID activation
Feb 13 20:50:00.461908 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Feb 13 20:50:00.472320 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Feb 13 20:50:00.512689 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Feb 13 20:50:00.522389 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Feb 13 20:50:00.532886 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Feb 13 20:50:00.534662 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Feb 13 20:50:00.537086 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Feb 13 20:50:00.539922 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Feb 13 20:50:00.547313 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Feb 13 20:50:00.558712 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Feb 13 20:50:00.562080 kernel: virtio_blk virtio1: 1/0/0 default/read/poll queues
Feb 13 20:50:00.572692 kernel: virtio_blk virtio1: [vda] 19775488 512-byte logical blocks (10.1 GB/9.43 GiB)
Feb 13 20:50:00.572781 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Feb 13 20:50:00.572792 kernel: GPT:9289727 != 19775487
Feb 13 20:50:00.572801 kernel: GPT:Alternate GPT header not at the end of the disk.
Feb 13 20:50:00.572810 kernel: GPT:9289727 != 19775487
Feb 13 20:50:00.572819 kernel: GPT: Use GNU Parted to correct GPT errors.
Feb 13 20:50:00.572833 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Feb 13 20:50:00.565528 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Feb 13 20:50:00.565634 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Feb 13 20:50:00.572682 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Feb 13 20:50:00.573785 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Feb 13 20:50:00.573920 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Feb 13 20:50:00.576384 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Feb 13 20:50:00.590463 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Feb 13 20:50:00.594354 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/vda6 scanned by (udev-worker) (518)
Feb 13 20:50:00.594375 kernel: BTRFS: device fsid 4bb2b262-8ef2-48e3-80f4-24f9d7a85bf6 devid 1 transid 40 /dev/vda3 scanned by (udev-worker) (525)
Feb 13 20:50:00.602370 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Feb 13 20:50:00.607544 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT.
Feb 13 20:50:00.612474 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM.
Feb 13 20:50:00.619107 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132.
Feb 13 20:50:00.620392 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A.
Feb 13 20:50:00.626662 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Feb 13 20:50:00.640345 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Feb 13 20:50:00.642264 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Feb 13 20:50:00.647996 disk-uuid[552]: Primary Header is updated.
Feb 13 20:50:00.647996 disk-uuid[552]: Secondary Entries is updated.
Feb 13 20:50:00.647996 disk-uuid[552]: Secondary Header is updated.
Feb 13 20:50:00.651206 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Feb 13 20:50:00.667320 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Feb 13 20:50:01.662194 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Feb 13 20:50:01.663447 disk-uuid[553]: The operation has completed successfully.
Feb 13 20:50:01.686602 systemd[1]: disk-uuid.service: Deactivated successfully.
Feb 13 20:50:01.686731 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Feb 13 20:50:01.704344 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
Feb 13 20:50:01.707155 sh[575]: Success
Feb 13 20:50:01.718217 kernel: device-mapper: verity: sha256 using implementation "sha256-ce"
Feb 13 20:50:01.758610 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
Feb 13 20:50:01.760447 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
Feb 13 20:50:01.761463 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
Feb 13 20:50:01.773284 kernel: BTRFS info (device dm-0): first mount of filesystem 4bb2b262-8ef2-48e3-80f4-24f9d7a85bf6
Feb 13 20:50:01.773322 kernel: BTRFS info (device dm-0): using crc32c (crc32c-generic) checksum algorithm
Feb 13 20:50:01.773333 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead
Feb 13 20:50:01.773344 kernel: BTRFS info (device dm-0): disabling log replay at mount time
Feb 13 20:50:01.774719 kernel: BTRFS info (device dm-0): using free space tree
Feb 13 20:50:01.778728 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
Feb 13 20:50:01.779800 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Feb 13 20:50:01.780532 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Feb 13 20:50:01.782070 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Feb 13 20:50:01.793007 kernel: BTRFS info (device vda6): first mount of filesystem 896fb6d3-4143-43a6-a44b-ca1ce10817e1
Feb 13 20:50:01.793043 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm
Feb 13 20:50:01.793054 kernel: BTRFS info (device vda6): using free space tree
Feb 13 20:50:01.796194 kernel: BTRFS info (device vda6): auto enabling async discard
Feb 13 20:50:01.805784 systemd[1]: mnt-oem.mount: Deactivated successfully.
Feb 13 20:50:01.808196 kernel: BTRFS info (device vda6): last unmount of filesystem 896fb6d3-4143-43a6-a44b-ca1ce10817e1
Feb 13 20:50:01.814108 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Feb 13 20:50:01.819353 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Feb 13 20:50:01.892752 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Feb 13 20:50:01.905341 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Feb 13 20:50:01.922401 ignition[672]: Ignition 2.19.0
Feb 13 20:50:01.922409 ignition[672]: Stage: fetch-offline
Feb 13 20:50:01.922451 ignition[672]: no configs at "/usr/lib/ignition/base.d"
Feb 13 20:50:01.922460 ignition[672]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Feb 13 20:50:01.922636 ignition[672]: parsed url from cmdline: ""
Feb 13 20:50:01.922639 ignition[672]: no config URL provided
Feb 13 20:50:01.922644 ignition[672]: reading system config file "/usr/lib/ignition/user.ign"
Feb 13 20:50:01.922651 ignition[672]: no config at "/usr/lib/ignition/user.ign"
Feb 13 20:50:01.922673 ignition[672]: op(1): [started] loading QEMU firmware config module
Feb 13 20:50:01.922677 ignition[672]: op(1): executing: "modprobe" "qemu_fw_cfg"
Feb 13 20:50:01.933622 systemd-networkd[768]: lo: Link UP
Feb 13 20:50:01.933632 systemd-networkd[768]: lo: Gained carrier
Feb 13 20:50:01.934315 systemd-networkd[768]: Enumeration completed
Feb 13 20:50:01.939765 ignition[672]: op(1): [finished] loading QEMU firmware config module
Feb 13 20:50:01.934396 systemd[1]: Started systemd-networkd.service - Network Configuration.
Feb 13 20:50:01.935115 systemd-networkd[768]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Feb 13 20:50:01.935119 systemd-networkd[768]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Feb 13 20:50:01.935508 systemd[1]: Reached target network.target - Network.
Feb 13 20:50:01.936153 systemd-networkd[768]: eth0: Link UP
Feb 13 20:50:01.936156 systemd-networkd[768]: eth0: Gained carrier
Feb 13 20:50:01.936163 systemd-networkd[768]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Feb 13 20:50:01.967237 systemd-networkd[768]: eth0: DHCPv4 address 10.0.0.6/16, gateway 10.0.0.1 acquired from 10.0.0.1
Feb 13 20:50:01.968592 ignition[672]: parsing config with SHA512: 0b1f91efcc610af1c5a28c00b937f5364d2fe1cc9e9d6eb7a34f7f943dc1ae09abca69c52c0ba490159b7abbe91fa73b87776f822988d59c7fac9f8f6381af6c
Feb 13 20:50:01.974199 unknown[672]: fetched base config from "system"
Feb 13 20:50:01.974210 unknown[672]: fetched user config from "qemu"
Feb 13 20:50:01.974628 ignition[672]: fetch-offline: fetch-offline passed
Feb 13 20:50:01.976496 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Feb 13 20:50:01.974691 ignition[672]: Ignition finished successfully
Feb 13 20:50:01.978089 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json).
Feb 13 20:50:01.986361 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Feb 13 20:50:01.996542 ignition[775]: Ignition 2.19.0
Feb 13 20:50:01.996552 ignition[775]: Stage: kargs
Feb 13 20:50:01.996712 ignition[775]: no configs at "/usr/lib/ignition/base.d"
Feb 13 20:50:01.996727 ignition[775]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Feb 13 20:50:01.997584 ignition[775]: kargs: kargs passed
Feb 13 20:50:02.000519 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Feb 13 20:50:01.997634 ignition[775]: Ignition finished successfully
Feb 13 20:50:02.016447 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Feb 13 20:50:02.026102 ignition[783]: Ignition 2.19.0
Feb 13 20:50:02.026111 ignition[783]: Stage: disks
Feb 13 20:50:02.026311 ignition[783]: no configs at "/usr/lib/ignition/base.d"
Feb 13 20:50:02.026321 ignition[783]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Feb 13 20:50:02.027133 ignition[783]: disks: disks passed
Feb 13 20:50:02.029580 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Feb 13 20:50:02.027228 ignition[783]: Ignition finished successfully
Feb 13 20:50:02.031091 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Feb 13 20:50:02.032736 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Feb 13 20:50:02.034528 systemd[1]: Reached target local-fs.target - Local File Systems.
Feb 13 20:50:02.036364 systemd[1]: Reached target sysinit.target - System Initialization.
Feb 13 20:50:02.038234 systemd[1]: Reached target basic.target - Basic System.
Feb 13 20:50:02.054342 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Feb 13 20:50:02.064926 systemd-fsck[793]: ROOT: clean, 14/553520 files, 52654/553472 blocks
Feb 13 20:50:02.069147 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Feb 13 20:50:02.076288 systemd[1]: Mounting sysroot.mount - /sysroot...
Feb 13 20:50:02.118194 kernel: EXT4-fs (vda9): mounted filesystem 9957d679-c6c4-49f4-b1b2-c3c1f3ba5699 r/w with ordered data mode. Quota mode: none.
Feb 13 20:50:02.118556 systemd[1]: Mounted sysroot.mount - /sysroot.
Feb 13 20:50:02.119831 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Feb 13 20:50:02.132269 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Feb 13 20:50:02.134032 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Feb 13 20:50:02.135497 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met.
Feb 13 20:50:02.135542 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Feb 13 20:50:02.146237 kernel: BTRFS: device label OEM devid 1 transid 13 /dev/vda6 scanned by mount (801)
Feb 13 20:50:02.146401 kernel: BTRFS info (device vda6): first mount of filesystem 896fb6d3-4143-43a6-a44b-ca1ce10817e1
Feb 13 20:50:02.146417 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm
Feb 13 20:50:02.146433 kernel: BTRFS info (device vda6): using free space tree
Feb 13 20:50:02.146444 kernel: BTRFS info (device vda6): auto enabling async discard
Feb 13 20:50:02.135564 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Feb 13 20:50:02.143094 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Feb 13 20:50:02.147763 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Feb 13 20:50:02.150244 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Feb 13 20:50:02.192244 initrd-setup-root[825]: cut: /sysroot/etc/passwd: No such file or directory
Feb 13 20:50:02.196346 initrd-setup-root[832]: cut: /sysroot/etc/group: No such file or directory
Feb 13 20:50:02.200083 initrd-setup-root[839]: cut: /sysroot/etc/shadow: No such file or directory
Feb 13 20:50:02.203814 initrd-setup-root[846]: cut: /sysroot/etc/gshadow: No such file or directory
Feb 13 20:50:02.268656 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Feb 13 20:50:02.280284 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Feb 13 20:50:02.282656 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Feb 13 20:50:02.288190 kernel: BTRFS info (device vda6): last unmount of filesystem 896fb6d3-4143-43a6-a44b-ca1ce10817e1
Feb 13 20:50:02.303354 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Feb 13 20:50:02.305097 ignition[915]: INFO : Ignition 2.19.0
Feb 13 20:50:02.305097 ignition[915]: INFO : Stage: mount
Feb 13 20:50:02.305097 ignition[915]: INFO : no configs at "/usr/lib/ignition/base.d"
Feb 13 20:50:02.305097 ignition[915]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Feb 13 20:50:02.305097 ignition[915]: INFO : mount: mount passed
Feb 13 20:50:02.305097 ignition[915]: INFO : Ignition finished successfully
Feb 13 20:50:02.305902 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Feb 13 20:50:02.313285 systemd[1]: Starting ignition-files.service - Ignition (files)...
Feb 13 20:50:02.771233 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Feb 13 20:50:02.781342 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Feb 13 20:50:02.787747 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/vda6 scanned by mount (926)
Feb 13 20:50:02.787775 kernel: BTRFS info (device vda6): first mount of filesystem 896fb6d3-4143-43a6-a44b-ca1ce10817e1
Feb 13 20:50:02.787786 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm
Feb 13 20:50:02.789249 kernel: BTRFS info (device vda6): using free space tree
Feb 13 20:50:02.791194 kernel: BTRFS info (device vda6): auto enabling async discard
Feb 13 20:50:02.792488 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Feb 13 20:50:02.807472 ignition[943]: INFO : Ignition 2.19.0
Feb 13 20:50:02.807472 ignition[943]: INFO : Stage: files
Feb 13 20:50:02.808950 ignition[943]: INFO : no configs at "/usr/lib/ignition/base.d"
Feb 13 20:50:02.808950 ignition[943]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Feb 13 20:50:02.808950 ignition[943]: DEBUG : files: compiled without relabeling support, skipping
Feb 13 20:50:02.812258 ignition[943]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Feb 13 20:50:02.812258 ignition[943]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Feb 13 20:50:02.812258 ignition[943]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Feb 13 20:50:02.812258 ignition[943]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Feb 13 20:50:02.812258 ignition[943]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Feb 13 20:50:02.812258 ignition[943]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-arm64.tar.gz"
Feb 13 20:50:02.812258 ignition[943]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.13.2-linux-arm64.tar.gz: attempt #1
Feb 13 20:50:02.811218 unknown[943]: wrote ssh authorized keys file for user: core
Feb 13 20:50:02.859375 ignition[943]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
Feb 13 20:50:03.244202 ignition[943]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-arm64.tar.gz"
Feb 13 20:50:03.244202 ignition[943]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh"
Feb 13 20:50:03.247943 ignition[943]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh"
Feb 13 20:50:03.247943 ignition[943]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml"
Feb 13 20:50:03.247943 ignition[943]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml"
Feb 13 20:50:03.247943 ignition[943]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Feb 13 20:50:03.247943 ignition[943]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Feb 13 20:50:03.247943 ignition[943]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Feb 13 20:50:03.247943 ignition[943]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Feb 13 20:50:03.247943 ignition[943]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf"
Feb 13 20:50:03.247943 ignition[943]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Feb 13 20:50:03.247943 ignition[943]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.30.1-arm64.raw"
Feb 13 20:50:03.247943 ignition[943]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.30.1-arm64.raw"
Feb 13 20:50:03.247943 ignition[943]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.30.1-arm64.raw"
Feb 13 20:50:03.247943 ignition[943]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.30.1-arm64.raw: attempt #1
Feb 13 20:50:03.250335 systemd-networkd[768]: eth0: Gained IPv6LL
Feb 13 20:50:03.563286 ignition[943]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK
Feb 13 20:50:03.779517 ignition[943]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.30.1-arm64.raw"
Feb 13 20:50:03.779517 ignition[943]: INFO : files: op(b): [started] processing unit "prepare-helm.service"
Feb 13 20:50:03.783122 ignition[943]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Feb 13 20:50:03.783122 ignition[943]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Feb 13 20:50:03.783122 ignition[943]: INFO : files: op(b): [finished] processing unit "prepare-helm.service"
Feb 13 20:50:03.783122 ignition[943]: INFO : files: op(d): [started] processing unit "coreos-metadata.service"
Feb 13 20:50:03.783122 ignition[943]: INFO : files: op(d): op(e): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Feb 13 20:50:03.783122 ignition[943]: INFO : files: op(d): op(e): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Feb 13 20:50:03.783122 ignition[943]: INFO : files: op(d): [finished] processing unit "coreos-metadata.service"
Feb 13 20:50:03.783122 ignition[943]: INFO : files: op(f): [started] setting preset to disabled for "coreos-metadata.service"
Feb 13 20:50:03.804760 ignition[943]: INFO : files: op(f): op(10): [started] removing enablement symlink(s) for "coreos-metadata.service"
Feb 13 20:50:03.808604 ignition[943]: INFO : files: op(f): op(10): [finished] removing enablement symlink(s) for "coreos-metadata.service"
Feb 13 20:50:03.811333 ignition[943]: INFO : files: op(f): [finished] setting preset to disabled for "coreos-metadata.service"
Feb 13 20:50:03.811333 ignition[943]: INFO : files: op(11): [started] setting preset to enabled for "prepare-helm.service"
Feb 13 20:50:03.811333 ignition[943]: INFO : files: op(11): [finished] setting preset to enabled for "prepare-helm.service"
Feb 13 20:50:03.811333 ignition[943]: INFO : files: createResultFile: createFiles: op(12): [started] writing file "/sysroot/etc/.ignition-result.json"
Feb 13 20:50:03.811333 ignition[943]: INFO : files: createResultFile: createFiles: op(12): [finished] writing file "/sysroot/etc/.ignition-result.json"
Feb 13 20:50:03.811333 ignition[943]: INFO : files: files passed
Feb 13 20:50:03.811333 ignition[943]: INFO : Ignition finished successfully
Feb 13 20:50:03.811829 systemd[1]: Finished ignition-files.service - Ignition (files).
Feb 13 20:50:03.821554 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Feb 13 20:50:03.824127 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Feb 13 20:50:03.825719 systemd[1]: ignition-quench.service: Deactivated successfully.
Feb 13 20:50:03.827235 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Feb 13 20:50:03.831932 initrd-setup-root-after-ignition[971]: grep: /sysroot/oem/oem-release: No such file or directory
Feb 13 20:50:03.835488 initrd-setup-root-after-ignition[973]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Feb 13 20:50:03.835488 initrd-setup-root-after-ignition[973]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Feb 13 20:50:03.838590 initrd-setup-root-after-ignition[977]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Feb 13 20:50:03.837899 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Feb 13 20:50:03.841497 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Feb 13 20:50:03.855371 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Feb 13 20:50:03.875079 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Feb 13 20:50:03.875209 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Feb 13 20:50:03.877473 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Feb 13 20:50:03.879210 systemd[1]: Reached target initrd.target - Initrd Default Target.
Feb 13 20:50:03.880991 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Feb 13 20:50:03.881711 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Feb 13 20:50:03.897277 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Feb 13 20:50:03.909345 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Feb 13 20:50:03.916786 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Feb 13 20:50:03.918079 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Feb 13 20:50:03.920259 systemd[1]: Stopped target timers.target - Timer Units.
Feb 13 20:50:03.922144 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Feb 13 20:50:03.922278 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Feb 13 20:50:03.924922 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Feb 13 20:50:03.926999 systemd[1]: Stopped target basic.target - Basic System.
Feb 13 20:50:03.928681 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Feb 13 20:50:03.930377 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Feb 13 20:50:03.932347 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Feb 13 20:50:03.934283 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Feb 13 20:50:03.936150 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
Feb 13 20:50:03.938123 systemd[1]: Stopped target sysinit.target - System Initialization.
Feb 13 20:50:03.940083 systemd[1]: Stopped target local-fs.target - Local File Systems.
Feb 13 20:50:03.941827 systemd[1]: Stopped target swap.target - Swaps.
Feb 13 20:50:03.943316 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Feb 13 20:50:03.943460 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
Feb 13 20:50:03.945727 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
Feb 13 20:50:03.947699 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Feb 13 20:50:03.949626 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Feb 13 20:50:03.954227 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Feb 13 20:50:03.955484 systemd[1]: dracut-initqueue.service: Deactivated successfully. Feb 13 20:50:03.955599 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Feb 13 20:50:03.958388 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Feb 13 20:50:03.958517 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Feb 13 20:50:03.960452 systemd[1]: Stopped target paths.target - Path Units. Feb 13 20:50:03.961970 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Feb 13 20:50:03.966237 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Feb 13 20:50:03.967527 systemd[1]: Stopped target slices.target - Slice Units. Feb 13 20:50:03.969658 systemd[1]: Stopped target sockets.target - Socket Units. Feb 13 20:50:03.971208 systemd[1]: iscsid.socket: Deactivated successfully. Feb 13 20:50:03.971306 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Feb 13 20:50:03.972861 systemd[1]: iscsiuio.socket: Deactivated successfully. Feb 13 20:50:03.972944 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Feb 13 20:50:03.974474 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Feb 13 20:50:03.974586 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Feb 13 20:50:03.976351 systemd[1]: ignition-files.service: Deactivated successfully. Feb 13 20:50:03.976471 systemd[1]: Stopped ignition-files.service - Ignition (files). Feb 13 20:50:03.990354 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Feb 13 20:50:03.991256 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Feb 13 20:50:03.991395 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Feb 13 20:50:03.994075 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Feb 13 20:50:03.994967 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Feb 13 20:50:03.995102 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Feb 13 20:50:04.001143 ignition[998]: INFO : Ignition 2.19.0 Feb 13 20:50:04.001143 ignition[998]: INFO : Stage: umount Feb 13 20:50:04.001143 ignition[998]: INFO : no configs at "/usr/lib/ignition/base.d" Feb 13 20:50:04.001143 ignition[998]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Feb 13 20:50:03.997192 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Feb 13 20:50:04.007874 ignition[998]: INFO : umount: umount passed Feb 13 20:50:04.007874 ignition[998]: INFO : Ignition finished successfully Feb 13 20:50:03.997363 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Feb 13 20:50:04.002823 systemd[1]: initrd-cleanup.service: Deactivated successfully. Feb 13 20:50:04.002911 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Feb 13 20:50:04.007086 systemd[1]: ignition-mount.service: Deactivated successfully. Feb 13 20:50:04.007168 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Feb 13 20:50:04.009570 systemd[1]: sysroot-boot.mount: Deactivated successfully. 
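Annotation: the umount-stage banner above ("no configs at /usr/lib/ignition/base.d", "no config dir at /usr/lib/ignition/base.platform.d/qemu") shows Ignition probing for baked-in base configs: a common directory plus one keyed by the detected platform ID (qemu here). A simplified, illustrative sketch of that lookup; the real implementation is Go inside Ignition itself:

    import os

    def base_config_dirs(platform_id: str):
        # Directories Ignition probes for baked-in base configs, per the log:
        # a common dir plus one keyed by the detected platform ("qemu" here).
        return [
            "/usr/lib/ignition/base.d",
            f"/usr/lib/ignition/base.platform.d/{platform_id}",
        ]

    for d in base_config_dirs("qemu"):
        if os.path.isdir(d):
            print(d, "->", sorted(os.listdir(d)))
        else:
            print("no config dir at", d)  # matches the messages in the log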
Feb 13 20:50:04.010345 systemd[1]: Stopped target network.target - Network. Feb 13 20:50:04.011509 systemd[1]: ignition-disks.service: Deactivated successfully. Feb 13 20:50:04.011576 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Feb 13 20:50:04.013391 systemd[1]: ignition-kargs.service: Deactivated successfully. Feb 13 20:50:04.013447 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Feb 13 20:50:04.015139 systemd[1]: ignition-setup.service: Deactivated successfully. Feb 13 20:50:04.015194 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Feb 13 20:50:04.017112 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Feb 13 20:50:04.017160 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Feb 13 20:50:04.019330 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Feb 13 20:50:04.021086 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Feb 13 20:50:04.022935 systemd[1]: sysroot-boot.service: Deactivated successfully. Feb 13 20:50:04.023025 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Feb 13 20:50:04.024833 systemd[1]: initrd-setup-root.service: Deactivated successfully. Feb 13 20:50:04.024921 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Feb 13 20:50:04.026226 systemd-networkd[768]: eth0: DHCPv6 lease lost Feb 13 20:50:04.027649 systemd[1]: systemd-networkd.service: Deactivated successfully. Feb 13 20:50:04.029056 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Feb 13 20:50:04.030440 systemd[1]: systemd-resolved.service: Deactivated successfully. Feb 13 20:50:04.030529 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Feb 13 20:50:04.033449 systemd[1]: systemd-networkd.socket: Deactivated successfully. Feb 13 20:50:04.033495 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Feb 13 20:50:04.043272 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Feb 13 20:50:04.044391 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Feb 13 20:50:04.044462 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Feb 13 20:50:04.046460 systemd[1]: systemd-sysctl.service: Deactivated successfully. Feb 13 20:50:04.046505 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Feb 13 20:50:04.048254 systemd[1]: systemd-modules-load.service: Deactivated successfully. Feb 13 20:50:04.048300 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Feb 13 20:50:04.050209 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Feb 13 20:50:04.050254 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Feb 13 20:50:04.052370 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Feb 13 20:50:04.061943 systemd[1]: network-cleanup.service: Deactivated successfully. Feb 13 20:50:04.062048 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Feb 13 20:50:04.063984 systemd[1]: systemd-udevd.service: Deactivated successfully. Feb 13 20:50:04.064093 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Feb 13 20:50:04.066699 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Feb 13 20:50:04.066747 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. 
Feb 13 20:50:04.068049 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Feb 13 20:50:04.068083 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Feb 13 20:50:04.070161 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Feb 13 20:50:04.070249 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Feb 13 20:50:04.072955 systemd[1]: dracut-cmdline.service: Deactivated successfully. Feb 13 20:50:04.072999 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Feb 13 20:50:04.075973 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Feb 13 20:50:04.076019 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Feb 13 20:50:04.090314 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Feb 13 20:50:04.091342 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Feb 13 20:50:04.091400 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Feb 13 20:50:04.093459 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Feb 13 20:50:04.093503 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Feb 13 20:50:04.097790 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Feb 13 20:50:04.097895 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Feb 13 20:50:04.100717 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Feb 13 20:50:04.102477 systemd[1]: Starting initrd-switch-root.service - Switch Root... Feb 13 20:50:04.111427 systemd[1]: Switching root. Feb 13 20:50:04.139972 systemd-journald[237]: Journal stopped Feb 13 20:50:04.821717 systemd-journald[237]: Received SIGTERM from PID 1 (systemd). Feb 13 20:50:04.821774 kernel: SELinux: policy capability network_peer_controls=1 Feb 13 20:50:04.821790 kernel: SELinux: policy capability open_perms=1 Feb 13 20:50:04.821801 kernel: SELinux: policy capability extended_socket_class=1 Feb 13 20:50:04.821814 kernel: SELinux: policy capability always_check_network=0 Feb 13 20:50:04.821827 kernel: SELinux: policy capability cgroup_seclabel=1 Feb 13 20:50:04.821840 kernel: SELinux: policy capability nnp_nosuid_transition=1 Feb 13 20:50:04.821850 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Feb 13 20:50:04.821860 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Feb 13 20:50:04.821869 kernel: audit: type=1403 audit(1739479804.279:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 Feb 13 20:50:04.821880 systemd[1]: Successfully loaded SELinux policy in 34.029ms. Feb 13 20:50:04.821896 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 11.336ms. Feb 13 20:50:04.821909 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Feb 13 20:50:04.821921 systemd[1]: Detected virtualization kvm. Feb 13 20:50:04.821933 systemd[1]: Detected architecture arm64. Feb 13 20:50:04.821944 systemd[1]: Detected first boot. Feb 13 20:50:04.821956 systemd[1]: Initializing machine ID from VM UUID. Feb 13 20:50:04.821967 zram_generator::config[1044]: No configuration found. Feb 13 20:50:04.821991 systemd[1]: Populated /etc with preset unit settings. 
Feb 13 20:50:04.822002 systemd[1]: initrd-switch-root.service: Deactivated successfully. Feb 13 20:50:04.822013 systemd[1]: Stopped initrd-switch-root.service - Switch Root. Feb 13 20:50:04.822024 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Feb 13 20:50:04.822037 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Feb 13 20:50:04.822049 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Feb 13 20:50:04.822059 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Feb 13 20:50:04.822071 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Feb 13 20:50:04.822082 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Feb 13 20:50:04.822093 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Feb 13 20:50:04.822103 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Feb 13 20:50:04.822114 systemd[1]: Created slice user.slice - User and Session Slice. Feb 13 20:50:04.822125 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Feb 13 20:50:04.822137 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Feb 13 20:50:04.822148 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Feb 13 20:50:04.822160 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Feb 13 20:50:04.822172 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. Feb 13 20:50:04.822288 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Feb 13 20:50:04.822302 systemd[1]: Expecting device dev-ttyAMA0.device - /dev/ttyAMA0... Feb 13 20:50:04.822313 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Feb 13 20:50:04.822323 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. Feb 13 20:50:04.822334 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. Feb 13 20:50:04.822348 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. Feb 13 20:50:04.822359 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Feb 13 20:50:04.822369 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Feb 13 20:50:04.822380 systemd[1]: Reached target remote-fs.target - Remote File Systems. Feb 13 20:50:04.822391 systemd[1]: Reached target slices.target - Slice Units. Feb 13 20:50:04.822409 systemd[1]: Reached target swap.target - Swaps. Feb 13 20:50:04.822420 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Feb 13 20:50:04.822433 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Feb 13 20:50:04.822444 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Feb 13 20:50:04.822454 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Feb 13 20:50:04.822465 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Feb 13 20:50:04.822476 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Feb 13 20:50:04.822487 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... 
Feb 13 20:50:04.822498 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Feb 13 20:50:04.822510 systemd[1]: Mounting media.mount - External Media Directory... Feb 13 20:50:04.822520 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Feb 13 20:50:04.822531 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Feb 13 20:50:04.822543 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... Feb 13 20:50:04.822555 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Feb 13 20:50:04.822566 systemd[1]: Reached target machines.target - Containers. Feb 13 20:50:04.822576 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Feb 13 20:50:04.822587 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Feb 13 20:50:04.822598 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Feb 13 20:50:04.822609 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Feb 13 20:50:04.822620 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Feb 13 20:50:04.822632 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Feb 13 20:50:04.822643 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Feb 13 20:50:04.822653 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Feb 13 20:50:04.822664 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Feb 13 20:50:04.822675 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Feb 13 20:50:04.822685 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Feb 13 20:50:04.822696 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. Feb 13 20:50:04.822706 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Feb 13 20:50:04.822718 kernel: loop: module loaded Feb 13 20:50:04.822728 systemd[1]: Stopped systemd-fsck-usr.service. Feb 13 20:50:04.822740 kernel: fuse: init (API version 7.39) Feb 13 20:50:04.822750 systemd[1]: Starting systemd-journald.service - Journal Service... Feb 13 20:50:04.822760 kernel: ACPI: bus type drm_connector registered Feb 13 20:50:04.822770 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Feb 13 20:50:04.822781 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Feb 13 20:50:04.822791 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Feb 13 20:50:04.822824 systemd-journald[1111]: Collecting audit messages is disabled. Feb 13 20:50:04.822847 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Feb 13 20:50:04.822859 systemd-journald[1111]: Journal started Feb 13 20:50:04.822879 systemd-journald[1111]: Runtime Journal (/run/log/journal/3426b64876c147b5bea552fbebc6c98f) is 5.9M, max 47.3M, 41.4M free. Feb 13 20:50:04.625681 systemd[1]: Queued start job for default target multi-user.target. Feb 13 20:50:04.648072 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6. Feb 13 20:50:04.648443 systemd[1]: systemd-journald.service: Deactivated successfully. 
Feb 13 20:50:04.824451 systemd[1]: verity-setup.service: Deactivated successfully. Feb 13 20:50:04.824490 systemd[1]: Stopped verity-setup.service. Feb 13 20:50:04.828519 systemd[1]: Started systemd-journald.service - Journal Service. Feb 13 20:50:04.829142 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Feb 13 20:50:04.830327 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Feb 13 20:50:04.831559 systemd[1]: Mounted media.mount - External Media Directory. Feb 13 20:50:04.832694 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Feb 13 20:50:04.833868 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Feb 13 20:50:04.835070 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Feb 13 20:50:04.838200 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Feb 13 20:50:04.839648 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Feb 13 20:50:04.841091 systemd[1]: modprobe@configfs.service: Deactivated successfully. Feb 13 20:50:04.841256 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Feb 13 20:50:04.842612 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Feb 13 20:50:04.842744 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Feb 13 20:50:04.844149 systemd[1]: modprobe@drm.service: Deactivated successfully. Feb 13 20:50:04.844315 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Feb 13 20:50:04.845601 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Feb 13 20:50:04.845729 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Feb 13 20:50:04.848629 systemd[1]: modprobe@fuse.service: Deactivated successfully. Feb 13 20:50:04.848777 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Feb 13 20:50:04.850076 systemd[1]: modprobe@loop.service: Deactivated successfully. Feb 13 20:50:04.850264 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Feb 13 20:50:04.851574 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Feb 13 20:50:04.853120 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Feb 13 20:50:04.854741 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Feb 13 20:50:04.867353 systemd[1]: Reached target network-pre.target - Preparation for Network. Feb 13 20:50:04.882336 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Feb 13 20:50:04.884524 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Feb 13 20:50:04.885642 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Feb 13 20:50:04.885682 systemd[1]: Reached target local-fs.target - Local File Systems. Feb 13 20:50:04.887614 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink). Feb 13 20:50:04.889843 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Feb 13 20:50:04.891941 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Feb 13 20:50:04.893037 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Feb 13 20:50:04.894772 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... 
Feb 13 20:50:04.896843 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Feb 13 20:50:04.898053 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Feb 13 20:50:04.899790 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Feb 13 20:50:04.905648 systemd-journald[1111]: Time spent on flushing to /var/log/journal/3426b64876c147b5bea552fbebc6c98f is 14.714ms for 850 entries. Feb 13 20:50:04.905648 systemd-journald[1111]: System Journal (/var/log/journal/3426b64876c147b5bea552fbebc6c98f) is 8.0M, max 195.6M, 187.6M free. Feb 13 20:50:04.926976 systemd-journald[1111]: Received client request to flush runtime journal. Feb 13 20:50:04.906705 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Feb 13 20:50:04.907859 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Feb 13 20:50:04.912454 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Feb 13 20:50:04.915428 systemd[1]: Starting systemd-sysusers.service - Create System Users... Feb 13 20:50:04.917845 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Feb 13 20:50:04.919461 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Feb 13 20:50:04.920786 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Feb 13 20:50:04.922366 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Feb 13 20:50:04.923913 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Feb 13 20:50:04.929667 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Feb 13 20:50:04.931363 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Feb 13 20:50:04.937502 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk... Feb 13 20:50:04.938239 kernel: loop0: detected capacity change from 0 to 114432 Feb 13 20:50:04.940534 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization... Feb 13 20:50:04.950849 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Feb 13 20:50:04.956279 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Feb 13 20:50:04.956896 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Feb 13 20:50:04.959328 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk. Feb 13 20:50:04.964713 udevadm[1167]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation.service, lvm2-activation-early.service not to pull it in. Feb 13 20:50:04.972051 systemd[1]: Finished systemd-sysusers.service - Create System Users. Feb 13 20:50:04.979350 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Feb 13 20:50:04.989733 kernel: loop1: detected capacity change from 0 to 114328 Feb 13 20:50:04.995069 systemd-tmpfiles[1174]: ACLs are not supported, ignoring. Feb 13 20:50:04.995086 systemd-tmpfiles[1174]: ACLs are not supported, ignoring. Feb 13 20:50:04.998966 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. 
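Annotation: the flush accounting above is easy to sanity-check; using the figures journald reports (14.714 ms for 850 entries, runtime journal 5.9M of a 47.3M cap):

    # Figures taken from the journald messages above.
    flush_ms, entries = 14.714, 850
    print(f"{flush_ms / entries * 1000:.1f} us per entry")  # ~17.3 us

    runtime_used_mib, runtime_max_mib = 5.9, 47.3
    print(f"runtime journal at {runtime_used_mib / runtime_max_mib:.1%} of cap")  # ~12.5%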
Feb 13 20:50:05.050742 kernel: loop2: detected capacity change from 0 to 194096 Feb 13 20:50:05.080216 kernel: loop3: detected capacity change from 0 to 114432 Feb 13 20:50:05.084196 kernel: loop4: detected capacity change from 0 to 114328 Feb 13 20:50:05.088207 kernel: loop5: detected capacity change from 0 to 194096 Feb 13 20:50:05.091630 (sd-merge)[1180]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes'. Feb 13 20:50:05.092013 (sd-merge)[1180]: Merged extensions into '/usr'. Feb 13 20:50:05.096425 systemd[1]: Reloading requested from client PID 1155 ('systemd-sysext') (unit systemd-sysext.service)... Feb 13 20:50:05.096439 systemd[1]: Reloading... Feb 13 20:50:05.154259 zram_generator::config[1205]: No configuration found. Feb 13 20:50:05.171458 ldconfig[1150]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Feb 13 20:50:05.247316 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Feb 13 20:50:05.282643 systemd[1]: Reloading finished in 185 ms. Feb 13 20:50:05.311523 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Feb 13 20:50:05.313000 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Feb 13 20:50:05.326328 systemd[1]: Starting ensure-sysext.service... Feb 13 20:50:05.328172 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Feb 13 20:50:05.335906 systemd[1]: Reloading requested from client PID 1240 ('systemctl') (unit ensure-sysext.service)... Feb 13 20:50:05.335921 systemd[1]: Reloading... Feb 13 20:50:05.344617 systemd-tmpfiles[1241]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Feb 13 20:50:05.344873 systemd-tmpfiles[1241]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Feb 13 20:50:05.345534 systemd-tmpfiles[1241]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Feb 13 20:50:05.345756 systemd-tmpfiles[1241]: ACLs are not supported, ignoring. Feb 13 20:50:05.345813 systemd-tmpfiles[1241]: ACLs are not supported, ignoring. Feb 13 20:50:05.348171 systemd-tmpfiles[1241]: Detected autofs mount point /boot during canonicalization of boot. Feb 13 20:50:05.348198 systemd-tmpfiles[1241]: Skipping /boot Feb 13 20:50:05.354929 systemd-tmpfiles[1241]: Detected autofs mount point /boot during canonicalization of boot. Feb 13 20:50:05.354946 systemd-tmpfiles[1241]: Skipping /boot Feb 13 20:50:05.384205 zram_generator::config[1271]: No configuration found. Feb 13 20:50:05.456137 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Feb 13 20:50:05.491692 systemd[1]: Reloading finished in 155 ms. Feb 13 20:50:05.505106 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Feb 13 20:50:05.521710 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Feb 13 20:50:05.529461 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Feb 13 20:50:05.531882 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... 
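Annotation: the (sd-merge) lines above show systemd-sysext at work: each extension image (the kubernetes.raw link written by Ignition, plus the Flatcar-shipped containerd-flatcar and docker-flatcar extensions) is attached as a loop device, which is what the loop0-loop5 capacity changes are, and overlaid onto /usr, followed by the daemon reload. A simplified sketch of the discovery step only; the directory list reflects general systemd-sysext behavior rather than this log, and the real tool also validates each image's extension-release file against /etc/os-release before merging:

    import os

    # Hierarchies systemd-sysext scans for extension images, highest
    # priority first (assumed subset; /usr/lib/extensions etc. also exist).
    SYSEXT_DIRS = ["/etc/extensions", "/run/extensions", "/var/lib/extensions"]

    def discover_extensions():
        found = {}
        for d in SYSEXT_DIRS:
            if not os.path.isdir(d):
                continue
            for name in os.listdir(d):
                if name.endswith(".raw") or os.path.isdir(os.path.join(d, name)):
                    # strip ".raw" to get the extension name; earlier
                    # (higher-priority) hierarchies win via setdefault
                    found.setdefault(name.removesuffix(".raw"),
                                     os.path.join(d, name))
        return found

    print(discover_extensions())
    # e.g. {'kubernetes': '/etc/extensions/kubernetes.raw', ...}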
Feb 13 20:50:05.534306 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Feb 13 20:50:05.539526 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Feb 13 20:50:05.542952 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Feb 13 20:50:05.549201 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Feb 13 20:50:05.556111 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Feb 13 20:50:05.557539 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Feb 13 20:50:05.563505 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Feb 13 20:50:05.565889 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Feb 13 20:50:05.567048 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Feb 13 20:50:05.569725 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Feb 13 20:50:05.572106 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Feb 13 20:50:05.574566 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Feb 13 20:50:05.574705 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Feb 13 20:50:05.576784 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Feb 13 20:50:05.585401 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Feb 13 20:50:05.587381 systemd[1]: modprobe@loop.service: Deactivated successfully. Feb 13 20:50:05.589209 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Feb 13 20:50:05.590012 systemd-udevd[1315]: Using default interface naming scheme 'v255'. Feb 13 20:50:05.593686 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Feb 13 20:50:05.599853 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Feb 13 20:50:05.605550 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Feb 13 20:50:05.608415 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Feb 13 20:50:05.611685 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Feb 13 20:50:05.614734 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Feb 13 20:50:05.616423 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Feb 13 20:50:05.621410 systemd[1]: Starting systemd-update-done.service - Update is Completed... Feb 13 20:50:05.622992 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Feb 13 20:50:05.626845 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Feb 13 20:50:05.630615 systemd[1]: Started systemd-userdbd.service - User Database Manager. Feb 13 20:50:05.632140 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Feb 13 20:50:05.632293 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Feb 13 20:50:05.639274 systemd[1]: modprobe@drm.service: Deactivated successfully. Feb 13 20:50:05.639446 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Feb 13 20:50:05.645812 systemd[1]: Finished ensure-sysext.service. 
Feb 13 20:50:05.652454 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 40 scanned by (udev-worker) (1345) Feb 13 20:50:05.655608 systemd[1]: Condition check resulted in dev-ttyAMA0.device - /dev/ttyAMA0 being skipped. Feb 13 20:50:05.662027 augenrules[1352]: No rules Feb 13 20:50:05.668423 systemd[1]: Starting systemd-networkd.service - Network Configuration... Feb 13 20:50:05.672359 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization... Feb 13 20:50:05.674423 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Feb 13 20:50:05.674854 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Feb 13 20:50:05.677569 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Feb 13 20:50:05.677707 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Feb 13 20:50:05.681303 systemd[1]: Finished systemd-update-done.service - Update is Completed. Feb 13 20:50:05.695801 systemd[1]: modprobe@loop.service: Deactivated successfully. Feb 13 20:50:05.695968 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Feb 13 20:50:05.708072 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Feb 13 20:50:05.708140 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Feb 13 20:50:05.717006 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Feb 13 20:50:05.726386 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Feb 13 20:50:05.752002 systemd-networkd[1372]: lo: Link UP Feb 13 20:50:05.752008 systemd-networkd[1372]: lo: Gained carrier Feb 13 20:50:05.752845 systemd-networkd[1372]: Enumeration completed Feb 13 20:50:05.753886 systemd-networkd[1372]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Feb 13 20:50:05.753895 systemd-networkd[1372]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Feb 13 20:50:05.754579 systemd-networkd[1372]: eth0: Link UP Feb 13 20:50:05.754588 systemd-networkd[1372]: eth0: Gained carrier Feb 13 20:50:05.754602 systemd-networkd[1372]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Feb 13 20:50:05.754910 systemd[1]: Started systemd-networkd.service - Network Configuration. Feb 13 20:50:05.763399 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Feb 13 20:50:05.764860 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization. Feb 13 20:50:05.766134 systemd-resolved[1309]: Positive Trust Anchors:
Feb 13 20:50:05.766149 systemd-resolved[1309]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Feb 13 20:50:05.766191 systemd-resolved[1309]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Feb 13 20:50:05.766452 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Feb 13 20:50:05.769842 systemd[1]: Reached target time-set.target - System Time Set. Feb 13 20:50:05.771250 systemd-networkd[1372]: eth0: DHCPv4 address 10.0.0.6/16, gateway 10.0.0.1 acquired from 10.0.0.1 Feb 13 20:50:05.773893 systemd-timesyncd[1376]: Network configuration changed, trying to establish connection. Feb 13 20:50:05.775285 systemd-timesyncd[1376]: Contacted time server 10.0.0.1:123 (10.0.0.1). Feb 13 20:50:05.775357 systemd-timesyncd[1376]: Initial clock synchronization to Thu 2025-02-13 20:50:05.401323 UTC. Feb 13 20:50:05.778676 systemd-resolved[1309]: Defaulting to hostname 'linux'. Feb 13 20:50:05.790383 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Feb 13 20:50:05.791703 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Feb 13 20:50:05.793231 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization. Feb 13 20:50:05.795268 systemd[1]: Reached target network.target - Network. Feb 13 20:50:05.796283 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Feb 13 20:50:05.798631 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes... Feb 13 20:50:05.814717 lvm[1395]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Feb 13 20:50:05.828508 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Feb 13 20:50:05.849142 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes. Feb 13 20:50:05.851099 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Feb 13 20:50:05.852385 systemd[1]: Reached target sysinit.target - System Initialization. Feb 13 20:50:05.853590 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Feb 13 20:50:05.854904 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Feb 13 20:50:05.856401 systemd[1]: Started logrotate.timer - Daily rotation of log files. Feb 13 20:50:05.857632 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Feb 13 20:50:05.858924 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Feb 13 20:50:05.860432 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Feb 13 20:50:05.860467 systemd[1]: Reached target paths.target - Path Units. Feb 13 20:50:05.861408 systemd[1]: Reached target timers.target - Timer Units. Feb 13 20:50:05.863152 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket.
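Annotation: the positive trust anchor logged above is the root zone's KSK-2017 DS record; the negative anchors are the private-address reverse zones and special-use domains that resolved will never attempt to DNSSEC-validate. The four DS fields decode as follows:

    # Decode the DS record fields from the trust-anchor entry above.
    ds = "20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d"
    key_tag, algorithm, digest_type, digest = ds.split()

    ALG = {8: "RSA/SHA-256"}   # DNSSEC algorithm numbers (RFC 8624 subset)
    DIGEST = {2: "SHA-256"}    # DS digest types (RFC 4509)

    print("key tag:    ", key_tag)               # identifies the root KSK
    print("algorithm:  ", ALG[int(algorithm)])
    print("digest type:", DIGEST[int(digest_type)])
    print("digest:     ", digest, f"({len(digest) * 4} bits)")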
Feb 13 20:50:05.865615 systemd[1]: Starting docker.socket - Docker Socket for the API... Feb 13 20:50:05.875140 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Feb 13 20:50:05.877320 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes... Feb 13 20:50:05.878920 systemd[1]: Listening on docker.socket - Docker Socket for the API. Feb 13 20:50:05.880157 systemd[1]: Reached target sockets.target - Socket Units. Feb 13 20:50:05.881117 systemd[1]: Reached target basic.target - Basic System. Feb 13 20:50:05.882326 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Feb 13 20:50:05.882356 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Feb 13 20:50:05.883213 systemd[1]: Starting containerd.service - containerd container runtime... Feb 13 20:50:05.885163 lvm[1403]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Feb 13 20:50:05.885169 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Feb 13 20:50:05.889058 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Feb 13 20:50:05.891322 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Feb 13 20:50:05.892625 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Feb 13 20:50:05.897413 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Feb 13 20:50:05.899502 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Feb 13 20:50:05.903949 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Feb 13 20:50:05.908979 jq[1406]: false Feb 13 20:50:05.909343 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Feb 13 20:50:05.916654 systemd[1]: Starting systemd-logind.service - User Login Management... Feb 13 20:50:05.922764 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Feb 13 20:50:05.923209 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Feb 13 20:50:05.923430 extend-filesystems[1407]: Found loop3 Feb 13 20:50:05.924725 extend-filesystems[1407]: Found loop4 Feb 13 20:50:05.924725 extend-filesystems[1407]: Found loop5 Feb 13 20:50:05.924725 extend-filesystems[1407]: Found vda Feb 13 20:50:05.924725 extend-filesystems[1407]: Found vda1 Feb 13 20:50:05.924725 extend-filesystems[1407]: Found vda2 Feb 13 20:50:05.924725 extend-filesystems[1407]: Found vda3 Feb 13 20:50:05.924725 extend-filesystems[1407]: Found usr Feb 13 20:50:05.924725 extend-filesystems[1407]: Found vda4 Feb 13 20:50:05.924725 extend-filesystems[1407]: Found vda6 Feb 13 20:50:05.924725 extend-filesystems[1407]: Found vda7 Feb 13 20:50:05.924725 extend-filesystems[1407]: Found vda9 Feb 13 20:50:05.924725 extend-filesystems[1407]: Checking size of /dev/vda9 Feb 13 20:50:05.924334 systemd[1]: Starting update-engine.service - Update Engine... Feb 13 20:50:05.936210 dbus-daemon[1405]: [system] SELinux support is enabled Feb 13 20:50:05.927019 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... 
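Annotation: containerd is being started here, and a few entries further down it dumps its full CRI plugin configuration. The dumped values (overlayfs snapshotter, runc via io.containerd.runc.v2 with SystemdCgroup:true, pause image registry.k8s.io/pause:3.8, CNI under /opt/cni/bin and /etc/cni/net.d) correspond to a config.toml roughly like the sketch below, shown parsed with Python's stdlib tomllib (3.11+). This is an illustrative reconstruction of those settings, not Flatcar's shipped file:

    import tomllib

    # Hedged reconstruction of the containerd config implied by the CRI
    # dump that appears below in the log.
    CONFIG = """
    version = 2

    [plugins."io.containerd.grpc.v1.cri"]
      sandbox_image = "registry.k8s.io/pause:3.8"

    [plugins."io.containerd.grpc.v1.cri".containerd]
      snapshotter = "overlayfs"
      default_runtime_name = "runc"

    [plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc]
      runtime_type = "io.containerd.runc.v2"

    [plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc.options]
      SystemdCgroup = true

    [plugins."io.containerd.grpc.v1.cri".cni]
      bin_dir = "/opt/cni/bin"
      conf_dir = "/etc/cni/net.d"
    """

    cfg = tomllib.loads(CONFIG)
    cri = cfg["plugins"]["io.containerd.grpc.v1.cri"]
    print("systemd cgroups:",
          cri["containerd"]["runtimes"]["runc"]["options"]["SystemdCgroup"])
    print("sandbox image:  ", cri["sandbox_image"])

Note that the dump also reports an empty /etc/cni/net.d, which is why the CRI plugin logs a "failed to load cni during init" error at the end of this section: no CNI config exists yet at this point of first boot.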
Feb 13 20:50:05.961067 extend-filesystems[1407]: Resized partition /dev/vda9 Feb 13 20:50:05.933019 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes. Feb 13 20:50:05.963735 jq[1423]: true Feb 13 20:50:05.963971 extend-filesystems[1430]: resize2fs 1.47.1 (20-May-2024) Feb 13 20:50:05.935463 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Feb 13 20:50:05.935619 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Feb 13 20:50:05.935877 systemd[1]: motdgen.service: Deactivated successfully. Feb 13 20:50:05.936003 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Feb 13 20:50:05.938585 systemd[1]: Started dbus.service - D-Bus System Message Bus. Feb 13 20:50:05.956453 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Feb 13 20:50:05.956636 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Feb 13 20:50:05.971210 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 1864699 blocks Feb 13 20:50:05.984499 jq[1431]: true Feb 13 20:50:05.984996 (ntainerd)[1432]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Feb 13 20:50:05.985434 systemd-logind[1417]: Watching system buttons on /dev/input/event0 (Power Button) Feb 13 20:50:05.989443 systemd-logind[1417]: New seat seat0. Feb 13 20:50:05.990216 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 40 scanned by (udev-worker) (1350) Feb 13 20:50:05.991416 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Feb 13 20:50:05.991484 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Feb 13 20:50:05.998134 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Feb 13 20:50:05.998160 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Feb 13 20:50:06.004337 systemd[1]: Started systemd-logind.service - User Login Management. Feb 13 20:50:06.007334 tar[1427]: linux-arm64/helm Feb 13 20:50:06.019858 update_engine[1421]: I20250213 20:50:06.019659 1421 main.cc:92] Flatcar Update Engine starting Feb 13 20:50:06.022231 systemd[1]: Started update-engine.service - Update Engine. Feb 13 20:50:06.024802 update_engine[1421]: I20250213 20:50:06.024760 1421 update_check_scheduler.cc:74] Next update check in 11m1s Feb 13 20:50:06.028389 systemd[1]: Started locksmithd.service - Cluster reboot manager. Feb 13 20:50:06.033005 kernel: EXT4-fs (vda9): resized filesystem to 1864699 Feb 13 20:50:06.070693 extend-filesystems[1430]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required Feb 13 20:50:06.070693 extend-filesystems[1430]: old_desc_blocks = 1, new_desc_blocks = 1 Feb 13 20:50:06.070693 extend-filesystems[1430]: The filesystem on /dev/vda9 is now 1864699 (4k) blocks long. Feb 13 20:50:06.075283 extend-filesystems[1407]: Resized filesystem in /dev/vda9 Feb 13 20:50:06.071735 systemd[1]: extend-filesystems.service: Deactivated successfully. Feb 13 20:50:06.071927 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. 
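Annotation: the online resize above is easy to sanity-check. At the 4 KiB block size reported by extend-filesystems, the root filesystem grows from roughly 2.1 GiB to 7.1 GiB:

    # Block counts and block size from the EXT4/resize2fs messages above.
    BLOCK = 4096
    old_blocks, new_blocks = 553472, 1864699

    for label, blocks in (("before", old_blocks), ("after", new_blocks)):
        print(f"{label}: {blocks * BLOCK / 2**30:.2f} GiB")
    # before: 2.11 GiB
    # after:  7.11 GiB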
Feb 13 20:50:06.081486 bash[1459]: Updated "/home/core/.ssh/authorized_keys" Feb 13 20:50:06.084523 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Feb 13 20:50:06.092093 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met. Feb 13 20:50:06.105196 locksmithd[1458]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Feb 13 20:50:06.219773 containerd[1432]: time="2025-02-13T20:50:06.219645715Z" level=info msg="starting containerd" revision=174e0d1785eeda18dc2beba45e1d5a188771636b version=v1.7.21 Feb 13 20:50:06.243232 containerd[1432]: time="2025-02-13T20:50:06.243183031Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Feb 13 20:50:06.244532 containerd[1432]: time="2025-02-13T20:50:06.244496822Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.74-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Feb 13 20:50:06.244532 containerd[1432]: time="2025-02-13T20:50:06.244529233Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Feb 13 20:50:06.244620 containerd[1432]: time="2025-02-13T20:50:06.244550510Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Feb 13 20:50:06.244753 containerd[1432]: time="2025-02-13T20:50:06.244700402Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1 Feb 13 20:50:06.244753 containerd[1432]: time="2025-02-13T20:50:06.244724805Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1 Feb 13 20:50:06.244797 containerd[1432]: time="2025-02-13T20:50:06.244775519Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1 Feb 13 20:50:06.244797 containerd[1432]: time="2025-02-13T20:50:06.244787034Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Feb 13 20:50:06.244972 containerd[1432]: time="2025-02-13T20:50:06.244933609Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Feb 13 20:50:06.244972 containerd[1432]: time="2025-02-13T20:50:06.244951492Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Feb 13 20:50:06.245018 containerd[1432]: time="2025-02-13T20:50:06.244964571Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1 Feb 13 20:50:06.245018 containerd[1432]: time="2025-02-13T20:50:06.245002358Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Feb 13 20:50:06.245088 containerd[1432]: time="2025-02-13T20:50:06.245069887Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
Feb 13 20:50:06.245307 containerd[1432]: time="2025-02-13T20:50:06.245287270Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Feb 13 20:50:06.245448 containerd[1432]: time="2025-02-13T20:50:06.245428125Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Feb 13 20:50:06.245448 containerd[1432]: time="2025-02-13T20:50:06.245446504Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Feb 13 20:50:06.245535 containerd[1432]: time="2025-02-13T20:50:06.245520134Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 Feb 13 20:50:06.245574 containerd[1432]: time="2025-02-13T20:50:06.245562497Z" level=info msg="metadata content store policy set" policy=shared Feb 13 20:50:06.248433 containerd[1432]: time="2025-02-13T20:50:06.248407195Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Feb 13 20:50:06.248491 containerd[1432]: time="2025-02-13T20:50:06.248452837Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Feb 13 20:50:06.248491 containerd[1432]: time="2025-02-13T20:50:06.248468395Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1 Feb 13 20:50:06.248541 containerd[1432]: time="2025-02-13T20:50:06.248491540Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1 Feb 13 20:50:06.248541 containerd[1432]: time="2025-02-13T20:50:06.248506220Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Feb 13 20:50:06.248646 containerd[1432]: time="2025-02-13T20:50:06.248621108Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Feb 13 20:50:06.248859 containerd[1432]: time="2025-02-13T20:50:06.248842075Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Feb 13 20:50:06.248953 containerd[1432]: time="2025-02-13T20:50:06.248935419Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2 Feb 13 20:50:06.248976 containerd[1432]: time="2025-02-13T20:50:06.248955361Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1 Feb 13 20:50:06.248976 containerd[1432]: time="2025-02-13T20:50:06.248969660Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1 Feb 13 20:50:06.249018 containerd[1432]: time="2025-02-13T20:50:06.248982281Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Feb 13 20:50:06.249018 containerd[1432]: time="2025-02-13T20:50:06.248996237Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Feb 13 20:50:06.249018 containerd[1432]: time="2025-02-13T20:50:06.249008973Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
Feb 13 20:50:06.249066 containerd[1432]: time="2025-02-13T20:50:06.249027733Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Feb 13 20:50:06.249066 containerd[1432]: time="2025-02-13T20:50:06.249041994Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Feb 13 20:50:06.249066 containerd[1432]: time="2025-02-13T20:50:06.249054386Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Feb 13 20:50:06.249116 containerd[1432]: time="2025-02-13T20:50:06.249066703Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Feb 13 20:50:06.249116 containerd[1432]: time="2025-02-13T20:50:06.249079591Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Feb 13 20:50:06.249116 containerd[1432]: time="2025-02-13T20:50:06.249099304Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Feb 13 20:50:06.249116 containerd[1432]: time="2025-02-13T20:50:06.249112307Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Feb 13 20:50:06.249215 containerd[1432]: time="2025-02-13T20:50:06.249136405Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Feb 13 20:50:06.249215 containerd[1432]: time="2025-02-13T20:50:06.249163592Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Feb 13 20:50:06.249257 containerd[1432]: time="2025-02-13T20:50:06.249212667Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Feb 13 20:50:06.249257 containerd[1432]: time="2025-02-13T20:50:06.249228338Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Feb 13 20:50:06.249257 containerd[1432]: time="2025-02-13T20:50:06.249239816Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Feb 13 20:50:06.249257 containerd[1432]: time="2025-02-13T20:50:06.249251636Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Feb 13 20:50:06.249327 containerd[1432]: time="2025-02-13T20:50:06.249269138Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1 Feb 13 20:50:06.249327 containerd[1432]: time="2025-02-13T20:50:06.249292627Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1 Feb 13 20:50:06.249327 containerd[1432]: time="2025-02-13T20:50:06.249304180Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Feb 13 20:50:06.249327 containerd[1432]: time="2025-02-13T20:50:06.249316191Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1 Feb 13 20:50:06.250008 containerd[1432]: time="2025-02-13T20:50:06.249506806Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Feb 13 20:50:06.250008 containerd[1432]: time="2025-02-13T20:50:06.249543335Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
Feb 13 20:50:06.250008 containerd[1432]: time="2025-02-13T20:50:06.249579369Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1 Feb 13 20:50:06.250008 containerd[1432]: time="2025-02-13T20:50:06.249599006Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Feb 13 20:50:06.250008 containerd[1432]: time="2025-02-13T20:50:06.249615402Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Feb 13 20:50:06.251747 containerd[1432]: time="2025-02-13T20:50:06.250680849Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Feb 13 20:50:06.251747 containerd[1432]: time="2025-02-13T20:50:06.250729999Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1 Feb 13 20:50:06.251747 containerd[1432]: time="2025-02-13T20:50:06.250743955Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Feb 13 20:50:06.251747 containerd[1432]: time="2025-02-13T20:50:06.250809501Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1 Feb 13 20:50:06.251747 containerd[1432]: time="2025-02-13T20:50:06.250825135Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Feb 13 20:50:06.251747 containerd[1432]: time="2025-02-13T20:50:06.250843743Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1 Feb 13 20:50:06.251747 containerd[1432]: time="2025-02-13T20:50:06.250855258Z" level=info msg="NRI interface is disabled by configuration." Feb 13 20:50:06.251747 containerd[1432]: time="2025-02-13T20:50:06.250869443Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1
Feb 13 20:50:06.252139 containerd[1432]: time="2025-02-13T20:50:06.252063656Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Feb 13 20:50:06.252139 containerd[1432]: time="2025-02-13T20:50:06.252139879Z" level=info msg="Connect containerd service" Feb 13 20:50:06.252282 containerd[1432]: time="2025-02-13T20:50:06.252198677Z" level=info msg="using legacy CRI server" Feb 13 20:50:06.252282 containerd[1432]: time="2025-02-13T20:50:06.252212594Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Feb 13 20:50:06.252318 containerd[1432]: time="2025-02-13T20:50:06.252301515Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Feb 13 20:50:06.253419 containerd[1432]: time="2025-02-13T20:50:06.253383967Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Feb 13 20:50:06.253961
containerd[1432]: time="2025-02-13T20:50:06.253821402Z" level=info msg="Start subscribing containerd event" Feb 13 20:50:06.253961 containerd[1432]: time="2025-02-13T20:50:06.253869142Z" level=info msg="Start recovering state" Feb 13 20:50:06.253961 containerd[1432]: time="2025-02-13T20:50:06.253925613Z" level=info msg="Start event monitor" Feb 13 20:50:06.254052 containerd[1432]: time="2025-02-13T20:50:06.253935451Z" level=info msg="Start snapshots syncer" Feb 13 20:50:06.254115 containerd[1432]: time="2025-02-13T20:50:06.254103149Z" level=info msg="Start cni network conf syncer for default" Feb 13 20:50:06.254289 containerd[1432]: time="2025-02-13T20:50:06.254148944Z" level=info msg="Start streaming server" Feb 13 20:50:06.256197 containerd[1432]: time="2025-02-13T20:50:06.254276987Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Feb 13 20:50:06.256197 containerd[1432]: time="2025-02-13T20:50:06.254465695Z" level=info msg=serving... address=/run/containerd/containerd.sock Feb 13 20:50:06.256197 containerd[1432]: time="2025-02-13T20:50:06.254517362Z" level=info msg="containerd successfully booted in 0.037625s" Feb 13 20:50:06.254716 systemd[1]: Started containerd.service - containerd container runtime. Feb 13 20:50:06.372026 tar[1427]: linux-arm64/LICENSE Feb 13 20:50:06.372117 tar[1427]: linux-arm64/README.md Feb 13 20:50:06.384250 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Feb 13 20:50:06.807768 sshd_keygen[1425]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Feb 13 20:50:06.825708 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Feb 13 20:50:06.840447 systemd[1]: Starting issuegen.service - Generate /run/issue... Feb 13 20:50:06.845694 systemd[1]: issuegen.service: Deactivated successfully. Feb 13 20:50:06.845875 systemd[1]: Finished issuegen.service - Generate /run/issue. Feb 13 20:50:06.848482 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Feb 13 20:50:06.861751 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Feb 13 20:50:06.875488 systemd[1]: Started getty@tty1.service - Getty on tty1. Feb 13 20:50:06.877613 systemd[1]: Started serial-getty@ttyAMA0.service - Serial Getty on ttyAMA0. Feb 13 20:50:06.878835 systemd[1]: Reached target getty.target - Login Prompts. Feb 13 20:50:06.962339 systemd-networkd[1372]: eth0: Gained IPv6LL Feb 13 20:50:06.964573 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Feb 13 20:50:06.967273 systemd[1]: Reached target network-online.target - Network is Online. Feb 13 20:50:06.975392 systemd[1]: Starting coreos-metadata.service - QEMU metadata agent... Feb 13 20:50:06.977636 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Feb 13 20:50:06.979714 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Feb 13 20:50:06.993662 systemd[1]: coreos-metadata.service: Deactivated successfully. Feb 13 20:50:06.994807 systemd[1]: Finished coreos-metadata.service - QEMU metadata agent. Feb 13 20:50:06.997518 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Feb 13 20:50:06.997878 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Feb 13 20:50:07.453568 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Feb 13 20:50:07.455211 systemd[1]: Reached target multi-user.target - Multi-User System. 
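The "failed to load cni during init" error above is expected on a first boot: nothing has written a CNI network config under /etc/cni/net.d yet, so the CRI plugin reports the pod network as not ready, and the "cni network conf syncer" started a few records later keeps polling until a config appears. A minimal Go sketch of that presence check (the extensions are the ones CNI config loaders conventionally accept, not containerd's exact loader):

    package main

    import (
        "fmt"
        "os"
        "path/filepath"
    )

    func main() {
        dir := "/etc/cni/net.d" // directory from the log above
        var found []string
        for _, pat := range []string{"*.conf", "*.conflist", "*.json"} {
            matches, err := filepath.Glob(filepath.Join(dir, pat))
            if err != nil {
                fmt.Fprintln(os.Stderr, err)
                os.Exit(1)
            }
            found = append(found, matches...)
        }
        if len(found) == 0 {
            // Mirrors the logged condition: "no network config found in /etc/cni/net.d".
            fmt.Printf("no network config found in %s; pod network stays NotReady\n", dir)
            return
        }
        fmt.Println("CNI configs:", found)
    }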
Feb 13 20:50:07.457792 (kubelet)[1519]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Feb 13 20:50:07.460573 systemd[1]: Startup finished in 547ms (kernel) + 4.573s (initrd) + 3.215s (userspace) = 8.337s. Feb 13 20:50:07.953116 kubelet[1519]: E0213 20:50:07.953064 1519 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Feb 13 20:50:07.955639 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Feb 13 20:50:07.955790 systemd[1]: kubelet.service: Failed with result 'exit-code'. Feb 13 20:50:12.589728 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Feb 13 20:50:12.590784 systemd[1]: Started sshd@0-10.0.0.6:22-10.0.0.1:54390.service - OpenSSH per-connection server daemon (10.0.0.1:54390). Feb 13 20:50:12.639420 sshd[1535]: Accepted publickey for core from 10.0.0.1 port 54390 ssh2: RSA SHA256:cMrn/QNbOj3/muQN24sDQmRTV2Hbz1QD+PbuA/uipHg Feb 13 20:50:12.640900 sshd[1535]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 20:50:12.647992 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Feb 13 20:50:12.657389 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Feb 13 20:50:12.659132 systemd-logind[1417]: New session 1 of user core. Feb 13 20:50:12.666239 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Feb 13 20:50:12.668549 systemd[1]: Starting user@500.service - User Manager for UID 500... Feb 13 20:50:12.674719 (systemd)[1539]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Feb 13 20:50:12.742563 systemd[1539]: Queued start job for default target default.target. Feb 13 20:50:12.749096 systemd[1539]: Created slice app.slice - User Application Slice. Feb 13 20:50:12.749139 systemd[1539]: Reached target paths.target - Paths. Feb 13 20:50:12.749151 systemd[1539]: Reached target timers.target - Timers. Feb 13 20:50:12.750316 systemd[1539]: Starting dbus.socket - D-Bus User Message Bus Socket... Feb 13 20:50:12.759716 systemd[1539]: Listening on dbus.socket - D-Bus User Message Bus Socket. Feb 13 20:50:12.759764 systemd[1539]: Reached target sockets.target - Sockets. Feb 13 20:50:12.759775 systemd[1539]: Reached target basic.target - Basic System. Feb 13 20:50:12.759807 systemd[1539]: Reached target default.target - Main User Target. Feb 13 20:50:12.759830 systemd[1539]: Startup finished in 80ms. Feb 13 20:50:12.760128 systemd[1]: Started user@500.service - User Manager for UID 500. Feb 13 20:50:12.761455 systemd[1]: Started session-1.scope - Session 1 of User core. Feb 13 20:50:12.825336 systemd[1]: Started sshd@1-10.0.0.6:22-10.0.0.1:54204.service - OpenSSH per-connection server daemon (10.0.0.1:54204). Feb 13 20:50:12.855594 sshd[1550]: Accepted publickey for core from 10.0.0.1 port 54204 ssh2: RSA SHA256:cMrn/QNbOj3/muQN24sDQmRTV2Hbz1QD+PbuA/uipHg Feb 13 20:50:12.856661 sshd[1550]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 20:50:12.861539 systemd-logind[1417]: New session 2 of user core. Feb 13 20:50:12.868325 systemd[1]: Started session-2.scope - Session 2 of User core. 
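The kubelet exit above is the normal pre-bootstrap state: the unit starts before /var/lib/kubelet/config.yaml has been written (bootstrap tooling creates it later), fails with status 1, and is left to systemd's restart logic. A Go sketch of the same pre-flight check, using the exact path from the logged error:

    package main

    import (
        "fmt"
        "os"
    )

    func main() {
        const cfg = "/var/lib/kubelet/config.yaml"
        if _, err := os.Stat(cfg); err != nil {
            // Matches the logged failure: open ...: no such file or directory.
            fmt.Printf("kubelet cannot start yet: %v\n", err)
            os.Exit(1)
        }
        fmt.Println("kubelet config present:", cfg)
    }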
Feb 13 20:50:12.917273 sshd[1550]: pam_unix(sshd:session): session closed for user core Feb 13 20:50:12.932472 systemd[1]: sshd@1-10.0.0.6:22-10.0.0.1:54204.service: Deactivated successfully. Feb 13 20:50:12.933991 systemd[1]: session-2.scope: Deactivated successfully. Feb 13 20:50:12.935257 systemd-logind[1417]: Session 2 logged out. Waiting for processes to exit. Feb 13 20:50:12.936408 systemd[1]: Started sshd@2-10.0.0.6:22-10.0.0.1:54208.service - OpenSSH per-connection server daemon (10.0.0.1:54208). Feb 13 20:50:12.937186 systemd-logind[1417]: Removed session 2. Feb 13 20:50:12.967608 sshd[1557]: Accepted publickey for core from 10.0.0.1 port 54208 ssh2: RSA SHA256:cMrn/QNbOj3/muQN24sDQmRTV2Hbz1QD+PbuA/uipHg Feb 13 20:50:12.968748 sshd[1557]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 20:50:12.972212 systemd-logind[1417]: New session 3 of user core. Feb 13 20:50:12.981346 systemd[1]: Started session-3.scope - Session 3 of User core. Feb 13 20:50:13.027834 sshd[1557]: pam_unix(sshd:session): session closed for user core Feb 13 20:50:13.038235 systemd[1]: sshd@2-10.0.0.6:22-10.0.0.1:54208.service: Deactivated successfully. Feb 13 20:50:13.039356 systemd[1]: session-3.scope: Deactivated successfully. Feb 13 20:50:13.040352 systemd-logind[1417]: Session 3 logged out. Waiting for processes to exit. Feb 13 20:50:13.041277 systemd[1]: Started sshd@3-10.0.0.6:22-10.0.0.1:54218.service - OpenSSH per-connection server daemon (10.0.0.1:54218). Feb 13 20:50:13.042011 systemd-logind[1417]: Removed session 3. Feb 13 20:50:13.071846 sshd[1564]: Accepted publickey for core from 10.0.0.1 port 54218 ssh2: RSA SHA256:cMrn/QNbOj3/muQN24sDQmRTV2Hbz1QD+PbuA/uipHg Feb 13 20:50:13.072935 sshd[1564]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 20:50:13.076512 systemd-logind[1417]: New session 4 of user core. Feb 13 20:50:13.089365 systemd[1]: Started session-4.scope - Session 4 of User core. Feb 13 20:50:13.138962 sshd[1564]: pam_unix(sshd:session): session closed for user core Feb 13 20:50:13.150327 systemd[1]: sshd@3-10.0.0.6:22-10.0.0.1:54218.service: Deactivated successfully. Feb 13 20:50:13.151562 systemd[1]: session-4.scope: Deactivated successfully. Feb 13 20:50:13.152044 systemd-logind[1417]: Session 4 logged out. Waiting for processes to exit. Feb 13 20:50:13.162477 systemd[1]: Started sshd@4-10.0.0.6:22-10.0.0.1:54222.service - OpenSSH per-connection server daemon (10.0.0.1:54222). Feb 13 20:50:13.163290 systemd-logind[1417]: Removed session 4. Feb 13 20:50:13.188657 sshd[1571]: Accepted publickey for core from 10.0.0.1 port 54222 ssh2: RSA SHA256:cMrn/QNbOj3/muQN24sDQmRTV2Hbz1QD+PbuA/uipHg Feb 13 20:50:13.189690 sshd[1571]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 20:50:13.193169 systemd-logind[1417]: New session 5 of user core. Feb 13 20:50:13.204356 systemd[1]: Started session-5.scope - Session 5 of User core. Feb 13 20:50:13.263136 sudo[1574]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Feb 13 20:50:13.263637 sudo[1574]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Feb 13 20:50:13.562377 systemd[1]: Starting docker.service - Docker Application Container Engine... 
Feb 13 20:50:13.562512 (dockerd)[1593]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Feb 13 20:50:13.821694 dockerd[1593]: time="2025-02-13T20:50:13.821573446Z" level=info msg="Starting up" Feb 13 20:50:13.960432 dockerd[1593]: time="2025-02-13T20:50:13.960368968Z" level=info msg="Loading containers: start." Feb 13 20:50:14.043197 kernel: Initializing XFRM netlink socket Feb 13 20:50:14.101928 systemd-networkd[1372]: docker0: Link UP Feb 13 20:50:14.119319 dockerd[1593]: time="2025-02-13T20:50:14.119275874Z" level=info msg="Loading containers: done." Feb 13 20:50:14.135045 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck100965000-merged.mount: Deactivated successfully. Feb 13 20:50:14.136173 dockerd[1593]: time="2025-02-13T20:50:14.136118792Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Feb 13 20:50:14.136264 dockerd[1593]: time="2025-02-13T20:50:14.136245091Z" level=info msg="Docker daemon" commit=061aa95809be396a6b5542618d8a34b02a21ff77 containerd-snapshotter=false storage-driver=overlay2 version=26.1.0 Feb 13 20:50:14.136373 dockerd[1593]: time="2025-02-13T20:50:14.136345059Z" level=info msg="Daemon has completed initialization" Feb 13 20:50:14.164313 dockerd[1593]: time="2025-02-13T20:50:14.164141492Z" level=info msg="API listen on /run/docker.sock" Feb 13 20:50:14.164334 systemd[1]: Started docker.service - Docker Application Container Engine. Feb 13 20:50:14.946591 containerd[1432]: time="2025-02-13T20:50:14.946534966Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.30.10\"" Feb 13 20:50:15.841526 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3209619732.mount: Deactivated successfully. Feb 13 20:50:18.022260 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Feb 13 20:50:18.036604 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Feb 13 20:50:18.129160 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
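dockerd brackets its own initialization in the records above: "Starting up" at 13.821 and "API listen on /run/docker.sock" at 14.164, roughly a third of a second covering the XFRM netlink socket, the docker0 link, and the overlay2 opaque-bug check mount. A quick Go delta over the two timestamps copied verbatim from the log:

    package main

    import (
        "fmt"
        "time"
    )

    func main() {
        // "Starting up" and "API listen on /run/docker.sock" timestamps from above.
        start, err := time.Parse(time.RFC3339Nano, "2025-02-13T20:50:13.821573446Z")
        if err != nil {
            panic(err)
        }
        ready, err := time.Parse(time.RFC3339Nano, "2025-02-13T20:50:14.164141492Z")
        if err != nil {
            panic(err)
        }
        fmt.Println("dockerd start-to-API-listen:", ready.Sub(start)) // ~343ms
    }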
Feb 13 20:50:18.132675 (kubelet)[1811]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Feb 13 20:50:18.167969 containerd[1432]: time="2025-02-13T20:50:18.167914105Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.30.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 20:50:18.168297 containerd[1432]: time="2025-02-13T20:50:18.168244049Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.30.10: active requests=0, bytes read=29865209" Feb 13 20:50:18.169752 kubelet[1811]: E0213 20:50:18.169697 1811 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Feb 13 20:50:18.169912 containerd[1432]: time="2025-02-13T20:50:18.169877329Z" level=info msg="ImageCreate event name:\"sha256:deaeae5e8513d8c5921aee5b515f0fc2ac63b71dfe965318f71eb49468e74a4f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 20:50:18.173079 containerd[1432]: time="2025-02-13T20:50:18.173042255Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:63b2b4b4e9b5dcb5b1b6cec9d5f5f538791a40cd8cb273ef530e6d6535aa0b43\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 20:50:18.173314 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Feb 13 20:50:18.174022 systemd[1]: kubelet.service: Failed with result 'exit-code'. Feb 13 20:50:18.174112 containerd[1432]: time="2025-02-13T20:50:18.174083441Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.30.10\" with image id \"sha256:deaeae5e8513d8c5921aee5b515f0fc2ac63b71dfe965318f71eb49468e74a4f\", repo tag \"registry.k8s.io/kube-apiserver:v1.30.10\", repo digest \"registry.k8s.io/kube-apiserver@sha256:63b2b4b4e9b5dcb5b1b6cec9d5f5f538791a40cd8cb273ef530e6d6535aa0b43\", size \"29862007\" in 3.227503939s" Feb 13 20:50:18.174145 containerd[1432]: time="2025-02-13T20:50:18.174117081Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.30.10\" returns image reference \"sha256:deaeae5e8513d8c5921aee5b515f0fc2ac63b71dfe965318f71eb49468e74a4f\"" Feb 13 20:50:18.192186 containerd[1432]: time="2025-02-13T20:50:18.192151905Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.30.10\"" Feb 13 20:50:20.331164 containerd[1432]: time="2025-02-13T20:50:20.331114284Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.30.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 20:50:20.332055 containerd[1432]: time="2025-02-13T20:50:20.331835846Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.30.10: active requests=0, bytes read=26898596" Feb 13 20:50:20.332848 containerd[1432]: time="2025-02-13T20:50:20.332781183Z" level=info msg="ImageCreate event name:\"sha256:e31753dd49b05da8fcb7deb26f2a5942a6747a0e6d4492f3dc8544123b97a3a2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 20:50:20.336383 containerd[1432]: time="2025-02-13T20:50:20.336345311Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:99b3336343ea48be24f1e64774825e9f8d5170bd2ed482ff336548eb824f5f58\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 
20:50:20.339076 containerd[1432]: time="2025-02-13T20:50:20.338579214Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.30.10\" with image id \"sha256:e31753dd49b05da8fcb7deb26f2a5942a6747a0e6d4492f3dc8544123b97a3a2\", repo tag \"registry.k8s.io/kube-controller-manager:v1.30.10\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:99b3336343ea48be24f1e64774825e9f8d5170bd2ed482ff336548eb824f5f58\", size \"28302323\" in 2.146380091s" Feb 13 20:50:20.339076 containerd[1432]: time="2025-02-13T20:50:20.338621150Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.30.10\" returns image reference \"sha256:e31753dd49b05da8fcb7deb26f2a5942a6747a0e6d4492f3dc8544123b97a3a2\"" Feb 13 20:50:20.356977 containerd[1432]: time="2025-02-13T20:50:20.356933180Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.30.10\"" Feb 13 20:50:21.772145 containerd[1432]: time="2025-02-13T20:50:21.772085617Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.30.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 20:50:21.772713 containerd[1432]: time="2025-02-13T20:50:21.772685013Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.30.10: active requests=0, bytes read=16164936" Feb 13 20:50:21.773471 containerd[1432]: time="2025-02-13T20:50:21.773444991Z" level=info msg="ImageCreate event name:\"sha256:ea60c047fad7c01bf50f1f0259a4aeea2cc4401850d5a95802cc1d07d9021eb4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 20:50:21.776333 containerd[1432]: time="2025-02-13T20:50:21.776305120Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:cf7eb256192f1f51093fe278c209a9368f0675eb61ed01b148af47d2f21c002d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 20:50:21.777523 containerd[1432]: time="2025-02-13T20:50:21.777435785Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.30.10\" with image id \"sha256:ea60c047fad7c01bf50f1f0259a4aeea2cc4401850d5a95802cc1d07d9021eb4\", repo tag \"registry.k8s.io/kube-scheduler:v1.30.10\", repo digest \"registry.k8s.io/kube-scheduler@sha256:cf7eb256192f1f51093fe278c209a9368f0675eb61ed01b148af47d2f21c002d\", size \"17568681\" in 1.420461971s" Feb 13 20:50:21.777523 containerd[1432]: time="2025-02-13T20:50:21.777476010Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.30.10\" returns image reference \"sha256:ea60c047fad7c01bf50f1f0259a4aeea2cc4401850d5a95802cc1d07d9021eb4\"" Feb 13 20:50:21.795901 containerd[1432]: time="2025-02-13T20:50:21.795849968Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.30.10\"" Feb 13 20:50:22.761705 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount795834756.mount: Deactivated successfully. 
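containerd reports both the resolved digest size and the wall-clock pull time for each image, so an effective throughput (transfer plus unpack, hence only a rough figure) falls out directly. A back-of-the-envelope Go calculation using the kube-controller-manager numbers logged above:

    package main

    import (
        "fmt"
        "time"
    )

    func main() {
        // Size (bytes) and duration as reported in the "Pulled image" record above.
        const sizeBytes = 28302323.0
        d, err := time.ParseDuration("2.146380091s")
        if err != nil {
            panic(err)
        }
        fmt.Printf("effective pull rate: %.1f MiB/s\n", sizeBytes/d.Seconds()/(1024*1024))
    }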
Feb 13 20:50:22.956624 containerd[1432]: time="2025-02-13T20:50:22.956573975Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.30.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 20:50:22.957684 containerd[1432]: time="2025-02-13T20:50:22.957632343Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.30.10: active requests=0, bytes read=25663372" Feb 13 20:50:22.958723 containerd[1432]: time="2025-02-13T20:50:22.958694649Z" level=info msg="ImageCreate event name:\"sha256:fa8af75a6512774cc93242474a9841ace82a7d0646001149fc65d92a8bb0c00a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 20:50:22.961283 containerd[1432]: time="2025-02-13T20:50:22.961240762Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:d112e804e548fce28d9f1e3282c9ce54e374451e6a2c41b1ca9d7fca5d1fcc48\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 20:50:22.961700 containerd[1432]: time="2025-02-13T20:50:22.961662303Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.30.10\" with image id \"sha256:fa8af75a6512774cc93242474a9841ace82a7d0646001149fc65d92a8bb0c00a\", repo tag \"registry.k8s.io/kube-proxy:v1.30.10\", repo digest \"registry.k8s.io/kube-proxy@sha256:d112e804e548fce28d9f1e3282c9ce54e374451e6a2c41b1ca9d7fca5d1fcc48\", size \"25662389\" in 1.165770671s" Feb 13 20:50:22.961736 containerd[1432]: time="2025-02-13T20:50:22.961697906Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.30.10\" returns image reference \"sha256:fa8af75a6512774cc93242474a9841ace82a7d0646001149fc65d92a8bb0c00a\"" Feb 13 20:50:22.980816 containerd[1432]: time="2025-02-13T20:50:22.980766943Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\"" Feb 13 20:50:23.751460 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2402628197.mount: Deactivated successfully. 
Feb 13 20:50:24.840260 containerd[1432]: time="2025-02-13T20:50:24.840172677Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 20:50:24.840774 containerd[1432]: time="2025-02-13T20:50:24.840728958Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.1: active requests=0, bytes read=16485383" Feb 13 20:50:24.841679 containerd[1432]: time="2025-02-13T20:50:24.841648536Z" level=info msg="ImageCreate event name:\"sha256:2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 20:50:24.844781 containerd[1432]: time="2025-02-13T20:50:24.844744402Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 20:50:24.846326 containerd[1432]: time="2025-02-13T20:50:24.846286221Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.1\" with image id \"sha256:2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.1\", repo digest \"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\", size \"16482581\" in 1.865476026s" Feb 13 20:50:24.846368 containerd[1432]: time="2025-02-13T20:50:24.846327924Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\" returns image reference \"sha256:2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93\"" Feb 13 20:50:24.864525 containerd[1432]: time="2025-02-13T20:50:24.864476051Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\"" Feb 13 20:50:25.335374 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4074092517.mount: Deactivated successfully. 
Feb 13 20:50:25.338745 containerd[1432]: time="2025-02-13T20:50:25.338693929Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 20:50:25.340127 containerd[1432]: time="2025-02-13T20:50:25.340078671Z" level=info msg="stop pulling image registry.k8s.io/pause:3.9: active requests=0, bytes read=268823" Feb 13 20:50:25.341117 containerd[1432]: time="2025-02-13T20:50:25.341048393Z" level=info msg="ImageCreate event name:\"sha256:829e9de338bd5fdd3f16f68f83a9fb288fbc8453e881e5d5cfd0f6f2ff72b43e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 20:50:25.342967 containerd[1432]: time="2025-02-13T20:50:25.342908771Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 20:50:25.343895 containerd[1432]: time="2025-02-13T20:50:25.343848285Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.9\" with image id \"sha256:829e9de338bd5fdd3f16f68f83a9fb288fbc8453e881e5d5cfd0f6f2ff72b43e\", repo tag \"registry.k8s.io/pause:3.9\", repo digest \"registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097\", size \"268051\" in 479.33666ms" Feb 13 20:50:25.343895 containerd[1432]: time="2025-02-13T20:50:25.343881881Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\" returns image reference \"sha256:829e9de338bd5fdd3f16f68f83a9fb288fbc8453e881e5d5cfd0f6f2ff72b43e\"" Feb 13 20:50:25.362021 containerd[1432]: time="2025-02-13T20:50:25.361989219Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.12-0\"" Feb 13 20:50:26.038131 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount641488753.mount: Deactivated successfully. Feb 13 20:50:28.272393 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Feb 13 20:50:28.292130 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Feb 13 20:50:28.392957 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Feb 13 20:50:28.396392 (kubelet)[1970]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Feb 13 20:50:28.434564 kubelet[1970]: E0213 20:50:28.434513 1970 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Feb 13 20:50:28.437670 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Feb 13 20:50:28.437916 systemd[1]: kubelet.service: Failed with result 'exit-code'. 
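This is now the third identical kubelet failure (20:50:07, 20:50:18, 20:50:28), with systemd rescheduling the unit roughly every ten seconds; until the config file appears, the pattern simply repeats. A small Go scan that counts these failure records in an exported journal (the file name is hypothetical):

    package main

    import (
        "bufio"
        "fmt"
        "os"
        "strings"
    )

    func main() {
        f, err := os.Open("journal.txt") // hypothetical export of the log above
        if err != nil {
            panic(err)
        }
        defer f.Close()
        fails := 0
        sc := bufio.NewScanner(f)
        sc.Buffer(make([]byte, 0, 1024*1024), 1024*1024) // journal records can be very long
        for sc.Scan() {
            if strings.Contains(sc.Text(), "kubelet.service: Failed with result 'exit-code'") {
                fails++
            }
        }
        if err := sc.Err(); err != nil {
            panic(err)
        }
        fmt.Printf("kubelet failures seen: %d (repeats ~10s apart suggest a crash loop)\n", fails)
    }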
Feb 13 20:50:29.068458 containerd[1432]: time="2025-02-13T20:50:29.068409909Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.12-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 20:50:29.069482 containerd[1432]: time="2025-02-13T20:50:29.069435283Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.12-0: active requests=0, bytes read=66191474" Feb 13 20:50:29.070642 containerd[1432]: time="2025-02-13T20:50:29.070120275Z" level=info msg="ImageCreate event name:\"sha256:014faa467e29798aeef733fe6d1a3b5e382688217b053ad23410e6cccd5d22fd\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 20:50:29.076415 containerd[1432]: time="2025-02-13T20:50:29.076381559Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:44a8e24dcbba3470ee1fee21d5e88d128c936e9b55d4bc51fbef8086f8ed123b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 20:50:29.077650 containerd[1432]: time="2025-02-13T20:50:29.077609891Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.12-0\" with image id \"sha256:014faa467e29798aeef733fe6d1a3b5e382688217b053ad23410e6cccd5d22fd\", repo tag \"registry.k8s.io/etcd:3.5.12-0\", repo digest \"registry.k8s.io/etcd@sha256:44a8e24dcbba3470ee1fee21d5e88d128c936e9b55d4bc51fbef8086f8ed123b\", size \"66189079\" in 3.715584195s" Feb 13 20:50:29.077693 containerd[1432]: time="2025-02-13T20:50:29.077648208Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.12-0\" returns image reference \"sha256:014faa467e29798aeef733fe6d1a3b5e382688217b053ad23410e6cccd5d22fd\"" Feb 13 20:50:34.234825 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Feb 13 20:50:34.248377 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Feb 13 20:50:34.263716 systemd[1]: Reloading requested from client PID 2063 ('systemctl') (unit session-5.scope)... Feb 13 20:50:34.263734 systemd[1]: Reloading... Feb 13 20:50:34.333390 zram_generator::config[2105]: No configuration found. Feb 13 20:50:34.436514 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Feb 13 20:50:34.488170 systemd[1]: Reloading finished in 224 ms. Feb 13 20:50:34.520578 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Feb 13 20:50:34.521881 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Feb 13 20:50:34.524473 systemd[1]: kubelet.service: Deactivated successfully. Feb 13 20:50:34.524657 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Feb 13 20:50:34.525997 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Feb 13 20:50:34.618158 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Feb 13 20:50:34.622208 (kubelet)[2149]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Feb 13 20:50:34.662069 kubelet[2149]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Feb 13 20:50:34.662069 kubelet[2149]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. 
Feb 13 20:50:34.662069 kubelet[2149]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Feb 13 20:50:34.663028 kubelet[2149]: I0213 20:50:34.662968 2149 server.go:205] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Feb 13 20:50:35.572985 kubelet[2149]: I0213 20:50:35.572920 2149 server.go:484] "Kubelet version" kubeletVersion="v1.30.1" Feb 13 20:50:35.572985 kubelet[2149]: I0213 20:50:35.572951 2149 server.go:486] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Feb 13 20:50:35.573164 kubelet[2149]: I0213 20:50:35.573145 2149 server.go:927] "Client rotation is on, will bootstrap in background" Feb 13 20:50:35.603577 kubelet[2149]: E0213 20:50:35.603535 2149 certificate_manager.go:562] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post "https://10.0.0.6:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 10.0.0.6:6443: connect: connection refused Feb 13 20:50:35.603677 kubelet[2149]: I0213 20:50:35.603608 2149 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Feb 13 20:50:35.613213 kubelet[2149]: I0213 20:50:35.613147 2149 server.go:742] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Feb 13 20:50:35.613535 kubelet[2149]: I0213 20:50:35.613482 2149 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Feb 13 20:50:35.613689 kubelet[2149]: I0213 20:50:35.613521 2149 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null} Feb 13 20:50:35.613767 kubelet[2149]: I0213 20:50:35.613744 2149 topology_manager.go:138] "Creating topology manager with 
none policy" Feb 13 20:50:35.613767 kubelet[2149]: I0213 20:50:35.613756 2149 container_manager_linux.go:301] "Creating device plugin manager" Feb 13 20:50:35.614035 kubelet[2149]: I0213 20:50:35.614006 2149 state_mem.go:36] "Initialized new in-memory state store" Feb 13 20:50:35.614995 kubelet[2149]: I0213 20:50:35.614950 2149 kubelet.go:400] "Attempting to sync node with API server" Feb 13 20:50:35.614995 kubelet[2149]: I0213 20:50:35.614971 2149 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests" Feb 13 20:50:35.615108 kubelet[2149]: I0213 20:50:35.615098 2149 kubelet.go:312] "Adding apiserver pod source" Feb 13 20:50:35.615848 kubelet[2149]: I0213 20:50:35.615203 2149 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Feb 13 20:50:35.616459 kubelet[2149]: W0213 20:50:35.616385 2149 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.6:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.0.0.6:6443: connect: connection refused Feb 13 20:50:35.616459 kubelet[2149]: E0213 20:50:35.616454 2149 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.0.0.6:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.0.0.6:6443: connect: connection refused Feb 13 20:50:35.616684 kubelet[2149]: W0213 20:50:35.616500 2149 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.6:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.6:6443: connect: connection refused Feb 13 20:50:35.616684 kubelet[2149]: E0213 20:50:35.616533 2149 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://10.0.0.6:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.6:6443: connect: connection refused Feb 13 20:50:35.616802 kubelet[2149]: I0213 20:50:35.616703 2149 kuberuntime_manager.go:261] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Feb 13 20:50:35.617058 kubelet[2149]: I0213 20:50:35.617039 2149 kubelet.go:815] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Feb 13 20:50:35.618792 kubelet[2149]: W0213 20:50:35.617142 2149 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. 
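Every "dial tcp 10.0.0.6:6443: connect: connection refused" above is the same bootstrapping gap: the kubelet's informers need the kube-apiserver, but the apiserver is a static pod the kubelet itself has yet to start, so client-go keeps retrying. A Go probe equivalent to what those reflectors are attempting:

    package main

    import (
        "fmt"
        "net"
        "time"
    )

    func main() {
        // Endpoint taken from the reflector errors above.
        conn, err := net.DialTimeout("tcp", "10.0.0.6:6443", 2*time.Second)
        if err != nil {
            fmt.Println("apiserver not reachable yet:", err) // expected until the static pod runs
            return
        }
        conn.Close()
        fmt.Println("apiserver port is open")
    }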
Feb 13 20:50:35.619352 kubelet[2149]: I0213 20:50:35.619326 2149 server.go:1264] "Started kubelet" Feb 13 20:50:35.619594 kubelet[2149]: I0213 20:50:35.619443 2149 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Feb 13 20:50:35.619764 kubelet[2149]: I0213 20:50:35.619720 2149 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Feb 13 20:50:35.619984 kubelet[2149]: I0213 20:50:35.619955 2149 server.go:227] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Feb 13 20:50:35.623209 kubelet[2149]: E0213 20:50:35.622259 2149 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.6:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.6:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.1823dfb1c00b0668 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-02-13 20:50:35.619296872 +0000 UTC m=+0.994126736,LastTimestamp:2025-02-13 20:50:35.619296872 +0000 UTC m=+0.994126736,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Feb 13 20:50:35.623209 kubelet[2149]: I0213 20:50:35.622815 2149 server.go:455] "Adding debug handlers to kubelet server" Feb 13 20:50:35.624018 kubelet[2149]: I0213 20:50:35.623884 2149 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Feb 13 20:50:35.632602 kubelet[2149]: I0213 20:50:35.629885 2149 volume_manager.go:291] "Starting Kubelet Volume Manager" Feb 13 20:50:35.635843 kubelet[2149]: E0213 20:50:35.632842 2149 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.6:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.6:6443: connect: connection refused" interval="200ms" Feb 13 20:50:35.635843 kubelet[2149]: I0213 20:50:35.633310 2149 factory.go:221] Registration of the systemd container factory successfully Feb 13 20:50:35.635843 kubelet[2149]: I0213 20:50:35.633378 2149 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Feb 13 20:50:35.635843 kubelet[2149]: I0213 20:50:35.634863 2149 desired_state_of_world_populator.go:149] "Desired state populator starts to run" Feb 13 20:50:35.636395 kubelet[2149]: I0213 20:50:35.636362 2149 reconciler.go:26] "Reconciler: start to sync state" Feb 13 20:50:35.636550 kubelet[2149]: W0213 20:50:35.636510 2149 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.6:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.6:6443: connect: connection refused Feb 13 20:50:35.636623 kubelet[2149]: E0213 20:50:35.636612 2149 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://10.0.0.6:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.6:6443: connect: connection refused Feb 13 20:50:35.637208 kubelet[2149]: E0213 20:50:35.637057 2149 kubelet.go:1467] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Feb 13 20:50:35.637477 kubelet[2149]: I0213 20:50:35.637450 2149 factory.go:221] Registration of the containerd container factory successfully Feb 13 20:50:35.645220 kubelet[2149]: I0213 20:50:35.645171 2149 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Feb 13 20:50:35.646067 kubelet[2149]: I0213 20:50:35.646038 2149 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Feb 13 20:50:35.646210 kubelet[2149]: I0213 20:50:35.646195 2149 status_manager.go:217] "Starting to sync pod status with apiserver" Feb 13 20:50:35.646238 kubelet[2149]: I0213 20:50:35.646216 2149 kubelet.go:2337] "Starting kubelet main sync loop" Feb 13 20:50:35.646275 kubelet[2149]: E0213 20:50:35.646258 2149 kubelet.go:2361] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Feb 13 20:50:35.649838 kubelet[2149]: W0213 20:50:35.649805 2149 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.6:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.6:6443: connect: connection refused Feb 13 20:50:35.649894 kubelet[2149]: E0213 20:50:35.649844 2149 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://10.0.0.6:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.6:6443: connect: connection refused Feb 13 20:50:35.650674 kubelet[2149]: I0213 20:50:35.650656 2149 cpu_manager.go:214] "Starting CPU manager" policy="none" Feb 13 20:50:35.650674 kubelet[2149]: I0213 20:50:35.650673 2149 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Feb 13 20:50:35.650765 kubelet[2149]: I0213 20:50:35.650692 2149 state_mem.go:36] "Initialized new in-memory state store" Feb 13 20:50:35.709663 kubelet[2149]: I0213 20:50:35.709623 2149 policy_none.go:49] "None policy: Start" Feb 13 20:50:35.710520 kubelet[2149]: I0213 20:50:35.710497 2149 memory_manager.go:170] "Starting memorymanager" policy="None" Feb 13 20:50:35.710584 kubelet[2149]: I0213 20:50:35.710529 2149 state_mem.go:35] "Initializing new in-memory state store" Feb 13 20:50:35.717552 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Feb 13 20:50:35.731065 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Feb 13 20:50:35.731937 kubelet[2149]: I0213 20:50:35.731896 2149 kubelet_node_status.go:73] "Attempting to register node" node="localhost" Feb 13 20:50:35.732381 kubelet[2149]: E0213 20:50:35.732273 2149 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.0.0.6:6443/api/v1/nodes\": dial tcp 10.0.0.6:6443: connect: connection refused" node="localhost" Feb 13 20:50:35.733900 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. 
Feb 13 20:50:35.744857 kubelet[2149]: I0213 20:50:35.744833 2149 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Feb 13 20:50:35.745247 kubelet[2149]: I0213 20:50:35.745010 2149 container_log_manager.go:186] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Feb 13 20:50:35.745247 kubelet[2149]: I0213 20:50:35.745109 2149 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Feb 13 20:50:35.746378 kubelet[2149]: I0213 20:50:35.746333 2149 topology_manager.go:215] "Topology Admit Handler" podUID="8d610d6c43052dbc8df47eb68906a982" podNamespace="kube-system" podName="kube-scheduler-localhost" Feb 13 20:50:35.746548 kubelet[2149]: E0213 20:50:35.746527 2149 eviction_manager.go:282] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Feb 13 20:50:35.747196 kubelet[2149]: I0213 20:50:35.747084 2149 topology_manager.go:215] "Topology Admit Handler" podUID="df643e9b009b520c3223266c4038dc2a" podNamespace="kube-system" podName="kube-apiserver-localhost" Feb 13 20:50:35.747815 kubelet[2149]: I0213 20:50:35.747773 2149 topology_manager.go:215] "Topology Admit Handler" podUID="dd3721fb1a67092819e35b40473f4063" podNamespace="kube-system" podName="kube-controller-manager-localhost" Feb 13 20:50:35.753509 systemd[1]: Created slice kubepods-burstable-pod8d610d6c43052dbc8df47eb68906a982.slice - libcontainer container kubepods-burstable-pod8d610d6c43052dbc8df47eb68906a982.slice. Feb 13 20:50:35.769847 systemd[1]: Created slice kubepods-burstable-poddf643e9b009b520c3223266c4038dc2a.slice - libcontainer container kubepods-burstable-poddf643e9b009b520c3223266c4038dc2a.slice. Feb 13 20:50:35.781632 systemd[1]: Created slice kubepods-burstable-poddd3721fb1a67092819e35b40473f4063.slice - libcontainer container kubepods-burstable-poddd3721fb1a67092819e35b40473f4063.slice. 
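Each admitted static pod gets its own transient cgroup: systemd creates a kubepods-burstable-pod<UID>.slice per pod under the burstable QoS slice. A sketch of the naming rule as inferred from the three slice names logged above, not from kubelet source (the dash escaping is an assumption; these UIDs contain no dashes, so it never fires here):

    package main

    import (
        "fmt"
        "strings"
    )

    // sliceName reproduces the pattern of the slice names systemd logged above.
    func sliceName(qosClass, podUID string) string {
        return fmt.Sprintf("kubepods-%s-pod%s.slice", qosClass, strings.ReplaceAll(podUID, "-", "_"))
    }

    func main() {
        for _, uid := range []string{
            "8d610d6c43052dbc8df47eb68906a982", // kube-scheduler-localhost
            "df643e9b009b520c3223266c4038dc2a", // kube-apiserver-localhost
            "dd3721fb1a67092819e35b40473f4063", // kube-controller-manager-localhost
        } {
            fmt.Println(sliceName("burstable", uid))
        }
    }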
Feb 13 20:50:35.833888 kubelet[2149]: E0213 20:50:35.833774 2149 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.6:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.6:6443: connect: connection refused" interval="400ms" Feb 13 20:50:35.837030 kubelet[2149]: I0213 20:50:35.836995 2149 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/df643e9b009b520c3223266c4038dc2a-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"df643e9b009b520c3223266c4038dc2a\") " pod="kube-system/kube-apiserver-localhost" Feb 13 20:50:35.837098 kubelet[2149]: I0213 20:50:35.837034 2149 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/df643e9b009b520c3223266c4038dc2a-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"df643e9b009b520c3223266c4038dc2a\") " pod="kube-system/kube-apiserver-localhost" Feb 13 20:50:35.837098 kubelet[2149]: I0213 20:50:35.837054 2149 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/df643e9b009b520c3223266c4038dc2a-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"df643e9b009b520c3223266c4038dc2a\") " pod="kube-system/kube-apiserver-localhost" Feb 13 20:50:35.837098 kubelet[2149]: I0213 20:50:35.837076 2149 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/dd3721fb1a67092819e35b40473f4063-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"dd3721fb1a67092819e35b40473f4063\") " pod="kube-system/kube-controller-manager-localhost" Feb 13 20:50:35.837098 kubelet[2149]: I0213 20:50:35.837092 2149 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/dd3721fb1a67092819e35b40473f4063-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"dd3721fb1a67092819e35b40473f4063\") " pod="kube-system/kube-controller-manager-localhost" Feb 13 20:50:35.837226 kubelet[2149]: I0213 20:50:35.837106 2149 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/8d610d6c43052dbc8df47eb68906a982-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"8d610d6c43052dbc8df47eb68906a982\") " pod="kube-system/kube-scheduler-localhost" Feb 13 20:50:35.837226 kubelet[2149]: I0213 20:50:35.837122 2149 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/dd3721fb1a67092819e35b40473f4063-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"dd3721fb1a67092819e35b40473f4063\") " pod="kube-system/kube-controller-manager-localhost" Feb 13 20:50:35.837226 kubelet[2149]: I0213 20:50:35.837136 2149 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/dd3721fb1a67092819e35b40473f4063-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"dd3721fb1a67092819e35b40473f4063\") " pod="kube-system/kube-controller-manager-localhost" Feb 13 20:50:35.837226 kubelet[2149]: I0213 20:50:35.837153 2149 
reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/dd3721fb1a67092819e35b40473f4063-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"dd3721fb1a67092819e35b40473f4063\") " pod="kube-system/kube-controller-manager-localhost" Feb 13 20:50:35.934235 kubelet[2149]: I0213 20:50:35.934210 2149 kubelet_node_status.go:73] "Attempting to register node" node="localhost" Feb 13 20:50:35.934588 kubelet[2149]: E0213 20:50:35.934553 2149 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.0.0.6:6443/api/v1/nodes\": dial tcp 10.0.0.6:6443: connect: connection refused" node="localhost" Feb 13 20:50:36.065357 kubelet[2149]: E0213 20:50:36.065321 2149 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 20:50:36.065928 containerd[1432]: time="2025-02-13T20:50:36.065883712Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:8d610d6c43052dbc8df47eb68906a982,Namespace:kube-system,Attempt:0,}" Feb 13 20:50:36.072185 kubelet[2149]: E0213 20:50:36.072147 2149 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 20:50:36.072520 containerd[1432]: time="2025-02-13T20:50:36.072479375Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:df643e9b009b520c3223266c4038dc2a,Namespace:kube-system,Attempt:0,}" Feb 13 20:50:36.083708 kubelet[2149]: E0213 20:50:36.083674 2149 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 20:50:36.087318 containerd[1432]: time="2025-02-13T20:50:36.087246160Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:dd3721fb1a67092819e35b40473f4063,Namespace:kube-system,Attempt:0,}" Feb 13 20:50:36.234805 kubelet[2149]: E0213 20:50:36.234752 2149 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.6:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.6:6443: connect: connection refused" interval="800ms" Feb 13 20:50:36.336188 kubelet[2149]: I0213 20:50:36.336152 2149 kubelet_node_status.go:73] "Attempting to register node" node="localhost" Feb 13 20:50:36.336501 kubelet[2149]: E0213 20:50:36.336474 2149 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.0.0.6:6443/api/v1/nodes\": dial tcp 10.0.0.6:6443: connect: connection refused" node="localhost" Feb 13 20:50:36.355205 kubelet[2149]: E0213 20:50:36.355029 2149 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.6:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.6:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.1823dfb1c00b0668 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-02-13 20:50:35.619296872 +0000 UTC 
m=+0.994126736,LastTimestamp:2025-02-13 20:50:35.619296872 +0000 UTC m=+0.994126736,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Feb 13 20:50:36.555446 kubelet[2149]: W0213 20:50:36.555371 2149 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.6:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.6:6443: connect: connection refused Feb 13 20:50:36.555446 kubelet[2149]: E0213 20:50:36.555435 2149 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://10.0.0.6:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.6:6443: connect: connection refused Feb 13 20:50:36.643861 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2277196620.mount: Deactivated successfully. Feb 13 20:50:36.648814 containerd[1432]: time="2025-02-13T20:50:36.648758605Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Feb 13 20:50:36.649626 containerd[1432]: time="2025-02-13T20:50:36.649601647Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Feb 13 20:50:36.650296 containerd[1432]: time="2025-02-13T20:50:36.650251294Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Feb 13 20:50:36.651030 containerd[1432]: time="2025-02-13T20:50:36.650983670Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=269175" Feb 13 20:50:36.651676 containerd[1432]: time="2025-02-13T20:50:36.651613653Z" level=info msg="ImageCreate event name:\"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Feb 13 20:50:36.652975 containerd[1432]: time="2025-02-13T20:50:36.652889407Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Feb 13 20:50:36.652975 containerd[1432]: time="2025-02-13T20:50:36.652958908Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Feb 13 20:50:36.655877 containerd[1432]: time="2025-02-13T20:50:36.655833300Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Feb 13 20:50:36.658025 containerd[1432]: time="2025-02-13T20:50:36.657981391Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 592.014189ms" Feb 13 20:50:36.658589 containerd[1432]: time="2025-02-13T20:50:36.658553224Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id 
\"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 571.250113ms" Feb 13 20:50:36.659798 containerd[1432]: time="2025-02-13T20:50:36.659770707Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 587.231183ms" Feb 13 20:50:36.784257 containerd[1432]: time="2025-02-13T20:50:36.784153100Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 20:50:36.784257 containerd[1432]: time="2025-02-13T20:50:36.784266803Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 20:50:36.784582 containerd[1432]: time="2025-02-13T20:50:36.784331708Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 20:50:36.784582 containerd[1432]: time="2025-02-13T20:50:36.784358965Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 20:50:36.784582 containerd[1432]: time="2025-02-13T20:50:36.784470590Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 20:50:36.784732 containerd[1432]: time="2025-02-13T20:50:36.784527182Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 20:50:36.784732 containerd[1432]: time="2025-02-13T20:50:36.784542329Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 20:50:36.784732 containerd[1432]: time="2025-02-13T20:50:36.784601438Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 20:50:36.785196 containerd[1432]: time="2025-02-13T20:50:36.785046140Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 20:50:36.785196 containerd[1432]: time="2025-02-13T20:50:36.785127830Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 20:50:36.785351 containerd[1432]: time="2025-02-13T20:50:36.785173152Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 20:50:36.785608 containerd[1432]: time="2025-02-13T20:50:36.785559223Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 20:50:36.808336 systemd[1]: Started cri-containerd-2d5759da15e6533faea60f52291ca0d1a86cae4d5f540e77f7ea29a423832faf.scope - libcontainer container 2d5759da15e6533faea60f52291ca0d1a86cae4d5f540e77f7ea29a423832faf. 
Feb 13 20:50:36.812657 systemd[1]: Started cri-containerd-5ff61b037f0464c0d84f98caadfeda588c589126b0c108c7287e01fe24c95ffb.scope - libcontainer container 5ff61b037f0464c0d84f98caadfeda588c589126b0c108c7287e01fe24c95ffb. Feb 13 20:50:36.813637 systemd[1]: Started cri-containerd-7f6617a496ed5e892260aee1c57824b244d66b56ec983eafc06ba214357f5b19.scope - libcontainer container 7f6617a496ed5e892260aee1c57824b244d66b56ec983eafc06ba214357f5b19. Feb 13 20:50:36.836292 containerd[1432]: time="2025-02-13T20:50:36.835859866Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:dd3721fb1a67092819e35b40473f4063,Namespace:kube-system,Attempt:0,} returns sandbox id \"2d5759da15e6533faea60f52291ca0d1a86cae4d5f540e77f7ea29a423832faf\"" Feb 13 20:50:36.839231 kubelet[2149]: E0213 20:50:36.839052 2149 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 20:50:36.843903 containerd[1432]: time="2025-02-13T20:50:36.843767412Z" level=info msg="CreateContainer within sandbox \"2d5759da15e6533faea60f52291ca0d1a86cae4d5f540e77f7ea29a423832faf\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Feb 13 20:50:36.847919 containerd[1432]: time="2025-02-13T20:50:36.847870637Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:df643e9b009b520c3223266c4038dc2a,Namespace:kube-system,Attempt:0,} returns sandbox id \"7f6617a496ed5e892260aee1c57824b244d66b56ec983eafc06ba214357f5b19\"" Feb 13 20:50:36.848650 kubelet[2149]: E0213 20:50:36.848620 2149 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 20:50:36.850842 containerd[1432]: time="2025-02-13T20:50:36.850785115Z" level=info msg="CreateContainer within sandbox \"7f6617a496ed5e892260aee1c57824b244d66b56ec983eafc06ba214357f5b19\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Feb 13 20:50:36.851958 containerd[1432]: time="2025-02-13T20:50:36.851923506Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:8d610d6c43052dbc8df47eb68906a982,Namespace:kube-system,Attempt:0,} returns sandbox id \"5ff61b037f0464c0d84f98caadfeda588c589126b0c108c7287e01fe24c95ffb\"" Feb 13 20:50:36.852772 kubelet[2149]: E0213 20:50:36.852752 2149 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 20:50:36.855033 containerd[1432]: time="2025-02-13T20:50:36.854992852Z" level=info msg="CreateContainer within sandbox \"5ff61b037f0464c0d84f98caadfeda588c589126b0c108c7287e01fe24c95ffb\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Feb 13 20:50:36.859706 containerd[1432]: time="2025-02-13T20:50:36.859671427Z" level=info msg="CreateContainer within sandbox \"2d5759da15e6533faea60f52291ca0d1a86cae4d5f540e77f7ea29a423832faf\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"1e58d1f9d560781aca378a8915533228fee2119a0b3b5fce545e26ec66943e44\"" Feb 13 20:50:36.860309 containerd[1432]: time="2025-02-13T20:50:36.860272915Z" level=info msg="StartContainer for \"1e58d1f9d560781aca378a8915533228fee2119a0b3b5fce545e26ec66943e44\"" Feb 13 20:50:36.871157 containerd[1432]: time="2025-02-13T20:50:36.871123235Z" level=info 
msg="CreateContainer within sandbox \"7f6617a496ed5e892260aee1c57824b244d66b56ec983eafc06ba214357f5b19\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"7f37e2ab76f18006907ced80605350d754ae2127d02040703188e71437ac1075\"" Feb 13 20:50:36.871770 containerd[1432]: time="2025-02-13T20:50:36.871746024Z" level=info msg="StartContainer for \"7f37e2ab76f18006907ced80605350d754ae2127d02040703188e71437ac1075\"" Feb 13 20:50:36.875909 containerd[1432]: time="2025-02-13T20:50:36.875853207Z" level=info msg="CreateContainer within sandbox \"5ff61b037f0464c0d84f98caadfeda588c589126b0c108c7287e01fe24c95ffb\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"30ccc470220dd5cf8a5d380ec9c3b3bd4b89d4d283973cac1060287bc99a9275\"" Feb 13 20:50:36.876297 containerd[1432]: time="2025-02-13T20:50:36.876270571Z" level=info msg="StartContainer for \"30ccc470220dd5cf8a5d380ec9c3b3bd4b89d4d283973cac1060287bc99a9275\"" Feb 13 20:50:36.885438 systemd[1]: Started cri-containerd-1e58d1f9d560781aca378a8915533228fee2119a0b3b5fce545e26ec66943e44.scope - libcontainer container 1e58d1f9d560781aca378a8915533228fee2119a0b3b5fce545e26ec66943e44. Feb 13 20:50:36.903416 systemd[1]: Started cri-containerd-30ccc470220dd5cf8a5d380ec9c3b3bd4b89d4d283973cac1060287bc99a9275.scope - libcontainer container 30ccc470220dd5cf8a5d380ec9c3b3bd4b89d4d283973cac1060287bc99a9275. Feb 13 20:50:36.904882 systemd[1]: Started cri-containerd-7f37e2ab76f18006907ced80605350d754ae2127d02040703188e71437ac1075.scope - libcontainer container 7f37e2ab76f18006907ced80605350d754ae2127d02040703188e71437ac1075. Feb 13 20:50:36.955742 kubelet[2149]: W0213 20:50:36.955642 2149 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.6:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.6:6443: connect: connection refused Feb 13 20:50:36.955742 kubelet[2149]: E0213 20:50:36.955717 2149 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://10.0.0.6:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.6:6443: connect: connection refused Feb 13 20:50:36.960916 containerd[1432]: time="2025-02-13T20:50:36.960878158Z" level=info msg="StartContainer for \"1e58d1f9d560781aca378a8915533228fee2119a0b3b5fce545e26ec66943e44\" returns successfully" Feb 13 20:50:36.961061 containerd[1432]: time="2025-02-13T20:50:36.961040979Z" level=info msg="StartContainer for \"30ccc470220dd5cf8a5d380ec9c3b3bd4b89d4d283973cac1060287bc99a9275\" returns successfully" Feb 13 20:50:36.961125 containerd[1432]: time="2025-02-13T20:50:36.961072512Z" level=info msg="StartContainer for \"7f37e2ab76f18006907ced80605350d754ae2127d02040703188e71437ac1075\" returns successfully" Feb 13 20:50:37.035339 kubelet[2149]: E0213 20:50:37.035294 2149 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.6:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.6:6443: connect: connection refused" interval="1.6s" Feb 13 20:50:37.100240 kubelet[2149]: W0213 20:50:37.100125 2149 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.6:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.0.0.6:6443: connect: connection refused Feb 13 20:50:37.100240 kubelet[2149]: E0213 20:50:37.100217 2149 reflector.go:150] 
k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.0.0.6:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.0.0.6:6443: connect: connection refused Feb 13 20:50:37.138968 kubelet[2149]: I0213 20:50:37.138690 2149 kubelet_node_status.go:73] "Attempting to register node" node="localhost" Feb 13 20:50:37.139208 kubelet[2149]: E0213 20:50:37.139150 2149 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.0.0.6:6443/api/v1/nodes\": dial tcp 10.0.0.6:6443: connect: connection refused" node="localhost" Feb 13 20:50:37.659378 kubelet[2149]: E0213 20:50:37.659303 2149 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 20:50:37.662116 kubelet[2149]: E0213 20:50:37.661977 2149 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 20:50:37.663073 kubelet[2149]: E0213 20:50:37.663018 2149 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 20:50:38.665538 kubelet[2149]: E0213 20:50:38.665501 2149 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 20:50:38.743654 kubelet[2149]: I0213 20:50:38.743616 2149 kubelet_node_status.go:73] "Attempting to register node" node="localhost" Feb 13 20:50:38.947214 kubelet[2149]: E0213 20:50:38.947089 2149 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"localhost\" not found" node="localhost" Feb 13 20:50:39.029808 kubelet[2149]: I0213 20:50:39.029770 2149 kubelet_node_status.go:76] "Successfully registered node" node="localhost" Feb 13 20:50:39.040059 kubelet[2149]: E0213 20:50:39.039997 2149 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" Feb 13 20:50:39.141341 kubelet[2149]: E0213 20:50:39.141296 2149 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" Feb 13 20:50:39.242150 kubelet[2149]: E0213 20:50:39.241861 2149 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" Feb 13 20:50:39.617650 kubelet[2149]: I0213 20:50:39.617610 2149 apiserver.go:52] "Watching apiserver" Feb 13 20:50:39.635779 kubelet[2149]: I0213 20:50:39.635733 2149 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world" Feb 13 20:50:39.673463 kubelet[2149]: E0213 20:50:39.673412 2149 kubelet.go:1928] "Failed creating a mirror pod for" err="pods \"kube-apiserver-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-localhost" Feb 13 20:50:39.673874 kubelet[2149]: E0213 20:50:39.673859 2149 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 20:50:41.075481 systemd[1]: Reloading requested from client PID 2424 ('systemctl') (unit session-5.scope)... Feb 13 20:50:41.075502 systemd[1]: Reloading... 
Feb 13 20:50:41.138225 zram_generator::config[2469]: No configuration found. Feb 13 20:50:41.262861 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Feb 13 20:50:41.326484 systemd[1]: Reloading finished in 250 ms. Feb 13 20:50:41.361792 kubelet[2149]: I0213 20:50:41.361660 2149 dynamic_cafile_content.go:171] "Shutting down controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Feb 13 20:50:41.361820 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Feb 13 20:50:41.377507 systemd[1]: kubelet.service: Deactivated successfully. Feb 13 20:50:41.377764 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Feb 13 20:50:41.377805 systemd[1]: kubelet.service: Consumed 1.387s CPU time, 112.9M memory peak, 0B memory swap peak. Feb 13 20:50:41.388534 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Feb 13 20:50:41.474566 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Feb 13 20:50:41.479250 (kubelet)[2505]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Feb 13 20:50:41.520493 kubelet[2505]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Feb 13 20:50:41.520493 kubelet[2505]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Feb 13 20:50:41.520493 kubelet[2505]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Feb 13 20:50:41.520840 kubelet[2505]: I0213 20:50:41.520542 2505 server.go:205] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Feb 13 20:50:41.525892 kubelet[2505]: I0213 20:50:41.525854 2505 server.go:484] "Kubelet version" kubeletVersion="v1.30.1" Feb 13 20:50:41.525892 kubelet[2505]: I0213 20:50:41.525884 2505 server.go:486] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Feb 13 20:50:41.526064 kubelet[2505]: I0213 20:50:41.526049 2505 server.go:927] "Client rotation is on, will bootstrap in background" Feb 13 20:50:41.527469 kubelet[2505]: I0213 20:50:41.527446 2505 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Feb 13 20:50:41.528650 kubelet[2505]: I0213 20:50:41.528573 2505 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Feb 13 20:50:41.533274 kubelet[2505]: I0213 20:50:41.533248 2505 server.go:742] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Feb 13 20:50:41.533475 kubelet[2505]: I0213 20:50:41.533438 2505 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Feb 13 20:50:41.533643 kubelet[2505]: I0213 20:50:41.533471 2505 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null} Feb 13 20:50:41.533643 kubelet[2505]: I0213 20:50:41.533643 2505 topology_manager.go:138] "Creating topology manager with none policy" Feb 13 20:50:41.533745 kubelet[2505]: I0213 20:50:41.533653 2505 container_manager_linux.go:301] "Creating device plugin manager" Feb 13 20:50:41.533745 kubelet[2505]: I0213 20:50:41.533687 2505 state_mem.go:36] "Initialized new in-memory state store" Feb 13 20:50:41.533801 kubelet[2505]: I0213 20:50:41.533791 2505 kubelet.go:400] "Attempting to sync node with API server" Feb 13 20:50:41.533826 kubelet[2505]: I0213 20:50:41.533805 2505 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests" Feb 13 20:50:41.533849 kubelet[2505]: I0213 20:50:41.533831 2505 kubelet.go:312] "Adding apiserver pod source" Feb 13 20:50:41.533868 kubelet[2505]: I0213 20:50:41.533849 2505 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Feb 13 20:50:41.536202 kubelet[2505]: I0213 20:50:41.534677 2505 kuberuntime_manager.go:261] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Feb 13 20:50:41.536202 kubelet[2505]: I0213 20:50:41.534869 2505 kubelet.go:815] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Feb 13 20:50:41.538258 kubelet[2505]: I0213 20:50:41.538234 2505 server.go:1264] "Started kubelet" Feb 13 20:50:41.539053 kubelet[2505]: I0213 20:50:41.539008 2505 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Feb 13 20:50:41.540367 kubelet[2505]: I0213 20:50:41.540318 2505 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Feb 13 20:50:41.540659 kubelet[2505]: I0213 20:50:41.540642 2505 
server.go:227] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Feb 13 20:50:41.542099 kubelet[2505]: I0213 20:50:41.542062 2505 server.go:455] "Adding debug handlers to kubelet server" Feb 13 20:50:41.544008 kubelet[2505]: I0213 20:50:41.543976 2505 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Feb 13 20:50:41.546137 kubelet[2505]: I0213 20:50:41.546115 2505 volume_manager.go:291] "Starting Kubelet Volume Manager" Feb 13 20:50:41.548239 kubelet[2505]: I0213 20:50:41.546258 2505 desired_state_of_world_populator.go:149] "Desired state populator starts to run" Feb 13 20:50:41.548386 kubelet[2505]: I0213 20:50:41.548372 2505 reconciler.go:26] "Reconciler: start to sync state" Feb 13 20:50:41.552917 kubelet[2505]: I0213 20:50:41.552879 2505 factory.go:221] Registration of the systemd container factory successfully Feb 13 20:50:41.552984 kubelet[2505]: I0213 20:50:41.552966 2505 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Feb 13 20:50:41.553766 kubelet[2505]: E0213 20:50:41.553685 2505 kubelet.go:1467] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Feb 13 20:50:41.553860 kubelet[2505]: I0213 20:50:41.553817 2505 factory.go:221] Registration of the containerd container factory successfully Feb 13 20:50:41.561061 kubelet[2505]: I0213 20:50:41.561023 2505 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Feb 13 20:50:41.562296 kubelet[2505]: I0213 20:50:41.562270 2505 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Feb 13 20:50:41.562331 kubelet[2505]: I0213 20:50:41.562308 2505 status_manager.go:217] "Starting to sync pod status with apiserver" Feb 13 20:50:41.562331 kubelet[2505]: I0213 20:50:41.562325 2505 kubelet.go:2337] "Starting kubelet main sync loop" Feb 13 20:50:41.562389 kubelet[2505]: E0213 20:50:41.562367 2505 kubelet.go:2361] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Feb 13 20:50:41.590893 kubelet[2505]: I0213 20:50:41.590789 2505 cpu_manager.go:214] "Starting CPU manager" policy="none" Feb 13 20:50:41.590893 kubelet[2505]: I0213 20:50:41.590810 2505 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Feb 13 20:50:41.590893 kubelet[2505]: I0213 20:50:41.590830 2505 state_mem.go:36] "Initialized new in-memory state store" Feb 13 20:50:41.591017 kubelet[2505]: I0213 20:50:41.590978 2505 state_mem.go:88] "Updated default CPUSet" cpuSet="" Feb 13 20:50:41.591017 kubelet[2505]: I0213 20:50:41.590989 2505 state_mem.go:96] "Updated CPUSet assignments" assignments={} Feb 13 20:50:41.591017 kubelet[2505]: I0213 20:50:41.591005 2505 policy_none.go:49] "None policy: Start" Feb 13 20:50:41.591561 kubelet[2505]: I0213 20:50:41.591541 2505 memory_manager.go:170] "Starting memorymanager" policy="None" Feb 13 20:50:41.591561 kubelet[2505]: I0213 20:50:41.591565 2505 state_mem.go:35] "Initializing new in-memory state store" Feb 13 20:50:41.591699 kubelet[2505]: I0213 20:50:41.591682 2505 state_mem.go:75] "Updated machine memory state" Feb 13 20:50:41.596007 kubelet[2505]: I0213 20:50:41.595966 2505 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Feb 13 20:50:41.596327 
kubelet[2505]: I0213 20:50:41.596292 2505 container_log_manager.go:186] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Feb 13 20:50:41.597200 kubelet[2505]: I0213 20:50:41.597137 2505 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Feb 13 20:50:41.649944 kubelet[2505]: I0213 20:50:41.649913 2505 kubelet_node_status.go:73] "Attempting to register node" node="localhost" Feb 13 20:50:41.655436 kubelet[2505]: I0213 20:50:41.655403 2505 kubelet_node_status.go:112] "Node was previously registered" node="localhost" Feb 13 20:50:41.655532 kubelet[2505]: I0213 20:50:41.655517 2505 kubelet_node_status.go:76] "Successfully registered node" node="localhost" Feb 13 20:50:41.662688 kubelet[2505]: I0213 20:50:41.662645 2505 topology_manager.go:215] "Topology Admit Handler" podUID="dd3721fb1a67092819e35b40473f4063" podNamespace="kube-system" podName="kube-controller-manager-localhost" Feb 13 20:50:41.662807 kubelet[2505]: I0213 20:50:41.662791 2505 topology_manager.go:215] "Topology Admit Handler" podUID="8d610d6c43052dbc8df47eb68906a982" podNamespace="kube-system" podName="kube-scheduler-localhost" Feb 13 20:50:41.662839 kubelet[2505]: I0213 20:50:41.662830 2505 topology_manager.go:215] "Topology Admit Handler" podUID="df643e9b009b520c3223266c4038dc2a" podNamespace="kube-system" podName="kube-apiserver-localhost" Feb 13 20:50:41.850428 kubelet[2505]: I0213 20:50:41.850301 2505 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/dd3721fb1a67092819e35b40473f4063-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"dd3721fb1a67092819e35b40473f4063\") " pod="kube-system/kube-controller-manager-localhost" Feb 13 20:50:41.850428 kubelet[2505]: I0213 20:50:41.850402 2505 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/df643e9b009b520c3223266c4038dc2a-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"df643e9b009b520c3223266c4038dc2a\") " pod="kube-system/kube-apiserver-localhost" Feb 13 20:50:41.850428 kubelet[2505]: I0213 20:50:41.850423 2505 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/df643e9b009b520c3223266c4038dc2a-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"df643e9b009b520c3223266c4038dc2a\") " pod="kube-system/kube-apiserver-localhost" Feb 13 20:50:41.850586 kubelet[2505]: I0213 20:50:41.850441 2505 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/dd3721fb1a67092819e35b40473f4063-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"dd3721fb1a67092819e35b40473f4063\") " pod="kube-system/kube-controller-manager-localhost" Feb 13 20:50:41.850586 kubelet[2505]: I0213 20:50:41.850470 2505 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/dd3721fb1a67092819e35b40473f4063-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"dd3721fb1a67092819e35b40473f4063\") " pod="kube-system/kube-controller-manager-localhost" Feb 13 20:50:41.850586 kubelet[2505]: I0213 20:50:41.850487 2505 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: 
\"kubernetes.io/host-path/dd3721fb1a67092819e35b40473f4063-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"dd3721fb1a67092819e35b40473f4063\") " pod="kube-system/kube-controller-manager-localhost" Feb 13 20:50:41.850586 kubelet[2505]: I0213 20:50:41.850502 2505 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/dd3721fb1a67092819e35b40473f4063-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"dd3721fb1a67092819e35b40473f4063\") " pod="kube-system/kube-controller-manager-localhost" Feb 13 20:50:41.850586 kubelet[2505]: I0213 20:50:41.850528 2505 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/8d610d6c43052dbc8df47eb68906a982-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"8d610d6c43052dbc8df47eb68906a982\") " pod="kube-system/kube-scheduler-localhost" Feb 13 20:50:41.850852 kubelet[2505]: I0213 20:50:41.850570 2505 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/df643e9b009b520c3223266c4038dc2a-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"df643e9b009b520c3223266c4038dc2a\") " pod="kube-system/kube-apiserver-localhost" Feb 13 20:50:41.971293 kubelet[2505]: E0213 20:50:41.971215 2505 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 20:50:41.972017 kubelet[2505]: E0213 20:50:41.971445 2505 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 20:50:41.972017 kubelet[2505]: E0213 20:50:41.971502 2505 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 20:50:42.534274 kubelet[2505]: I0213 20:50:42.534220 2505 apiserver.go:52] "Watching apiserver" Feb 13 20:50:42.548987 kubelet[2505]: I0213 20:50:42.548954 2505 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world" Feb 13 20:50:42.578264 kubelet[2505]: E0213 20:50:42.578225 2505 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 20:50:42.578738 kubelet[2505]: E0213 20:50:42.578387 2505 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 20:50:42.579142 kubelet[2505]: E0213 20:50:42.579122 2505 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 20:50:42.631480 kubelet[2505]: I0213 20:50:42.631406 2505 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=1.631388366 podStartE2EDuration="1.631388366s" podCreationTimestamp="2025-02-13 20:50:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-02-13 20:50:42.624208769 +0000 UTC 
m=+1.141830366" watchObservedRunningTime="2025-02-13 20:50:42.631388366 +0000 UTC m=+1.149009963" Feb 13 20:50:42.631760 kubelet[2505]: I0213 20:50:42.631512 2505 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=1.631505881 podStartE2EDuration="1.631505881s" podCreationTimestamp="2025-02-13 20:50:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-02-13 20:50:42.631497238 +0000 UTC m=+1.149118835" watchObservedRunningTime="2025-02-13 20:50:42.631505881 +0000 UTC m=+1.149127478" Feb 13 20:50:42.641677 kubelet[2505]: I0213 20:50:42.641610 2505 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=1.641579171 podStartE2EDuration="1.641579171s" podCreationTimestamp="2025-02-13 20:50:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-02-13 20:50:42.641331978 +0000 UTC m=+1.158953575" watchObservedRunningTime="2025-02-13 20:50:42.641579171 +0000 UTC m=+1.159200808" Feb 13 20:50:42.874037 sudo[1574]: pam_unix(sudo:session): session closed for user root Feb 13 20:50:42.876752 sshd[1571]: pam_unix(sshd:session): session closed for user core Feb 13 20:50:42.882234 systemd[1]: sshd@4-10.0.0.6:22-10.0.0.1:54222.service: Deactivated successfully. Feb 13 20:50:42.883994 systemd[1]: session-5.scope: Deactivated successfully. Feb 13 20:50:42.884769 systemd[1]: session-5.scope: Consumed 6.694s CPU time, 189.9M memory peak, 0B memory swap peak. Feb 13 20:50:42.885426 systemd-logind[1417]: Session 5 logged out. Waiting for processes to exit. Feb 13 20:50:42.886286 systemd-logind[1417]: Removed session 5. Feb 13 20:50:43.579616 kubelet[2505]: E0213 20:50:43.579274 2505 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 20:50:46.925702 kubelet[2505]: E0213 20:50:46.925384 2505 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 20:50:47.584838 kubelet[2505]: E0213 20:50:47.584804 2505 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 20:50:49.999357 kubelet[2505]: E0213 20:50:49.999322 2505 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 20:50:50.589304 kubelet[2505]: E0213 20:50:50.589273 2505 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 20:50:51.001990 kubelet[2505]: E0213 20:50:51.001875 2505 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 20:50:51.132239 update_engine[1421]: I20250213 20:50:51.131651 1421 update_attempter.cc:509] Updating boot flags... 
Feb 13 20:50:51.156341 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 40 scanned by (udev-worker) (2582) Feb 13 20:50:51.196198 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 40 scanned by (udev-worker) (2580) Feb 13 20:50:51.224351 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 40 scanned by (udev-worker) (2580) Feb 13 20:50:51.592204 kubelet[2505]: E0213 20:50:51.591016 2505 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 20:50:52.593272 kubelet[2505]: E0213 20:50:52.592781 2505 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 20:50:55.157926 kubelet[2505]: I0213 20:50:55.157874 2505 kuberuntime_manager.go:1523] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Feb 13 20:50:55.159532 kubelet[2505]: I0213 20:50:55.159087 2505 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Feb 13 20:50:55.159594 containerd[1432]: time="2025-02-13T20:50:55.158883729Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Feb 13 20:50:55.879222 kubelet[2505]: I0213 20:50:55.877001 2505 topology_manager.go:215] "Topology Admit Handler" podUID="d586322b-cbae-4419-b358-96eba1e571e2" podNamespace="kube-system" podName="kube-proxy-fbjf6" Feb 13 20:50:55.880408 kubelet[2505]: W0213 20:50:55.880064 2505 reflector.go:547] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:localhost" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'localhost' and this object Feb 13 20:50:55.880408 kubelet[2505]: E0213 20:50:55.880109 2505 reflector.go:150] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:localhost" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'localhost' and this object Feb 13 20:50:55.880408 kubelet[2505]: W0213 20:50:55.880161 2505 reflector.go:547] object-"kube-system"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:localhost" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'localhost' and this object Feb 13 20:50:55.880408 kubelet[2505]: E0213 20:50:55.880172 2505 reflector.go:150] object-"kube-system"/"kube-root-ca.crt": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:localhost" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'localhost' and this object Feb 13 20:50:55.880789 kubelet[2505]: I0213 20:50:55.880742 2505 topology_manager.go:215] "Topology Admit Handler" podUID="f7a2b434-ec01-4de2-9f31-4fa14d12ea17" podNamespace="kube-flannel" podName="kube-flannel-ds-vgntp" Feb 13 20:50:55.890814 systemd[1]: Created slice kubepods-besteffort-podd586322b_cbae_4419_b358_96eba1e571e2.slice - libcontainer container kubepods-besteffort-podd586322b_cbae_4419_b358_96eba1e571e2.slice. 
Feb 13 20:50:55.917086 systemd[1]: Created slice kubepods-burstable-podf7a2b434_ec01_4de2_9f31_4fa14d12ea17.slice - libcontainer container kubepods-burstable-podf7a2b434_ec01_4de2_9f31_4fa14d12ea17.slice. Feb 13 20:50:55.939337 kubelet[2505]: I0213 20:50:55.939304 2505 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/d586322b-cbae-4419-b358-96eba1e571e2-lib-modules\") pod \"kube-proxy-fbjf6\" (UID: \"d586322b-cbae-4419-b358-96eba1e571e2\") " pod="kube-system/kube-proxy-fbjf6" Feb 13 20:50:55.939588 kubelet[2505]: I0213 20:50:55.939480 2505 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5pl52\" (UniqueName: \"kubernetes.io/projected/f7a2b434-ec01-4de2-9f31-4fa14d12ea17-kube-api-access-5pl52\") pod \"kube-flannel-ds-vgntp\" (UID: \"f7a2b434-ec01-4de2-9f31-4fa14d12ea17\") " pod="kube-flannel/kube-flannel-ds-vgntp" Feb 13 20:50:55.939588 kubelet[2505]: I0213 20:50:55.939510 2505 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/d586322b-cbae-4419-b358-96eba1e571e2-kube-proxy\") pod \"kube-proxy-fbjf6\" (UID: \"d586322b-cbae-4419-b358-96eba1e571e2\") " pod="kube-system/kube-proxy-fbjf6" Feb 13 20:50:55.939588 kubelet[2505]: I0213 20:50:55.939542 2505 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2p6vs\" (UniqueName: \"kubernetes.io/projected/d586322b-cbae-4419-b358-96eba1e571e2-kube-api-access-2p6vs\") pod \"kube-proxy-fbjf6\" (UID: \"d586322b-cbae-4419-b358-96eba1e571e2\") " pod="kube-system/kube-proxy-fbjf6" Feb 13 20:50:55.939588 kubelet[2505]: I0213 20:50:55.939560 2505 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni\" (UniqueName: \"kubernetes.io/host-path/f7a2b434-ec01-4de2-9f31-4fa14d12ea17-cni\") pod \"kube-flannel-ds-vgntp\" (UID: \"f7a2b434-ec01-4de2-9f31-4fa14d12ea17\") " pod="kube-flannel/kube-flannel-ds-vgntp" Feb 13 20:50:55.939785 kubelet[2505]: I0213 20:50:55.939613 2505 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/f7a2b434-ec01-4de2-9f31-4fa14d12ea17-xtables-lock\") pod \"kube-flannel-ds-vgntp\" (UID: \"f7a2b434-ec01-4de2-9f31-4fa14d12ea17\") " pod="kube-flannel/kube-flannel-ds-vgntp" Feb 13 20:50:55.939785 kubelet[2505]: I0213 20:50:55.939656 2505 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run\" (UniqueName: \"kubernetes.io/host-path/f7a2b434-ec01-4de2-9f31-4fa14d12ea17-run\") pod \"kube-flannel-ds-vgntp\" (UID: \"f7a2b434-ec01-4de2-9f31-4fa14d12ea17\") " pod="kube-flannel/kube-flannel-ds-vgntp" Feb 13 20:50:55.939785 kubelet[2505]: I0213 20:50:55.939680 2505 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flannel-cfg\" (UniqueName: \"kubernetes.io/configmap/f7a2b434-ec01-4de2-9f31-4fa14d12ea17-flannel-cfg\") pod \"kube-flannel-ds-vgntp\" (UID: \"f7a2b434-ec01-4de2-9f31-4fa14d12ea17\") " pod="kube-flannel/kube-flannel-ds-vgntp" Feb 13 20:50:55.939785 kubelet[2505]: I0213 20:50:55.939730 2505 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/d586322b-cbae-4419-b358-96eba1e571e2-xtables-lock\") 
pod \"kube-proxy-fbjf6\" (UID: \"d586322b-cbae-4419-b358-96eba1e571e2\") " pod="kube-system/kube-proxy-fbjf6" Feb 13 20:50:55.939785 kubelet[2505]: I0213 20:50:55.939764 2505 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-plugin\" (UniqueName: \"kubernetes.io/host-path/f7a2b434-ec01-4de2-9f31-4fa14d12ea17-cni-plugin\") pod \"kube-flannel-ds-vgntp\" (UID: \"f7a2b434-ec01-4de2-9f31-4fa14d12ea17\") " pod="kube-flannel/kube-flannel-ds-vgntp" Feb 13 20:50:56.220302 kubelet[2505]: E0213 20:50:56.220000 2505 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 20:50:56.221107 containerd[1432]: time="2025-02-13T20:50:56.221053987Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-flannel-ds-vgntp,Uid:f7a2b434-ec01-4de2-9f31-4fa14d12ea17,Namespace:kube-flannel,Attempt:0,}" Feb 13 20:50:56.242145 containerd[1432]: time="2025-02-13T20:50:56.242050781Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 20:50:56.242145 containerd[1432]: time="2025-02-13T20:50:56.242114150Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 20:50:56.242661 containerd[1432]: time="2025-02-13T20:50:56.242438876Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 20:50:56.244419 containerd[1432]: time="2025-02-13T20:50:56.244356350Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 20:50:56.266393 systemd[1]: Started cri-containerd-6042f373cfb6bfbcd15459d2f6f654bdff8cdf1a5f721e20d9d0f0092b898813.scope - libcontainer container 6042f373cfb6bfbcd15459d2f6f654bdff8cdf1a5f721e20d9d0f0092b898813. Feb 13 20:50:56.305206 containerd[1432]: time="2025-02-13T20:50:56.305111813Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-flannel-ds-vgntp,Uid:f7a2b434-ec01-4de2-9f31-4fa14d12ea17,Namespace:kube-flannel,Attempt:0,} returns sandbox id \"6042f373cfb6bfbcd15459d2f6f654bdff8cdf1a5f721e20d9d0f0092b898813\"" Feb 13 20:50:56.307237 kubelet[2505]: E0213 20:50:56.306265 2505 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 20:50:56.307852 containerd[1432]: time="2025-02-13T20:50:56.307670457Z" level=info msg="PullImage \"docker.io/flannel/flannel-cni-plugin:v1.1.2\"" Feb 13 20:50:57.041801 kubelet[2505]: E0213 20:50:57.041761 2505 configmap.go:199] Couldn't get configMap kube-system/kube-proxy: failed to sync configmap cache: timed out waiting for the condition Feb 13 20:50:57.041943 kubelet[2505]: E0213 20:50:57.041873 2505 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/d586322b-cbae-4419-b358-96eba1e571e2-kube-proxy podName:d586322b-cbae-4419-b358-96eba1e571e2 nodeName:}" failed. No retries permitted until 2025-02-13 20:50:57.541851679 +0000 UTC m=+16.059473276 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "kube-proxy" (UniqueName: "kubernetes.io/configmap/d586322b-cbae-4419-b358-96eba1e571e2-kube-proxy") pod "kube-proxy-fbjf6" (UID: "d586322b-cbae-4419-b358-96eba1e571e2") : failed to sync configmap cache: timed out waiting for the condition Feb 13 20:50:57.049101 kubelet[2505]: E0213 20:50:57.049055 2505 projected.go:294] Couldn't get configMap kube-system/kube-root-ca.crt: failed to sync configmap cache: timed out waiting for the condition Feb 13 20:50:57.049101 kubelet[2505]: E0213 20:50:57.049094 2505 projected.go:200] Error preparing data for projected volume kube-api-access-2p6vs for pod kube-system/kube-proxy-fbjf6: failed to sync configmap cache: timed out waiting for the condition Feb 13 20:50:57.049278 kubelet[2505]: E0213 20:50:57.049156 2505 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/d586322b-cbae-4419-b358-96eba1e571e2-kube-api-access-2p6vs podName:d586322b-cbae-4419-b358-96eba1e571e2 nodeName:}" failed. No retries permitted until 2025-02-13 20:50:57.549139191 +0000 UTC m=+16.066760748 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-2p6vs" (UniqueName: "kubernetes.io/projected/d586322b-cbae-4419-b358-96eba1e571e2-kube-api-access-2p6vs") pod "kube-proxy-fbjf6" (UID: "d586322b-cbae-4419-b358-96eba1e571e2") : failed to sync configmap cache: timed out waiting for the condition Feb 13 20:50:57.649564 containerd[1432]: time="2025-02-13T20:50:57.649479499Z" level=error msg="PullImage \"docker.io/flannel/flannel-cni-plugin:v1.1.2\" failed" error="failed to pull and unpack image \"docker.io/flannel/flannel-cni-plugin:v1.1.2\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/flannel/flannel-cni-plugin/manifests/sha256:773049a6464eca8d26c97fb59410cb8385fe59b2f9a68ce7ae2fbe2d49d63585: 429 Too Many Requests - Server message: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit" Feb 13 20:50:57.649564 containerd[1432]: time="2025-02-13T20:50:57.649555789Z" level=info msg="stop pulling image docker.io/flannel/flannel-cni-plugin:v1.1.2: active requests=0, bytes read=13144" Feb 13 20:50:57.650069 kubelet[2505]: E0213 20:50:57.649726 2505 remote_image.go:180] "PullImage from image service failed" err="rpc error: code = Unknown desc = failed to pull and unpack image \"docker.io/flannel/flannel-cni-plugin:v1.1.2\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/flannel/flannel-cni-plugin/manifests/sha256:773049a6464eca8d26c97fb59410cb8385fe59b2f9a68ce7ae2fbe2d49d63585: 429 Too Many Requests - Server message: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit" image="docker.io/flannel/flannel-cni-plugin:v1.1.2" Feb 13 20:50:57.650069 kubelet[2505]: E0213 20:50:57.649796 2505 kuberuntime_image.go:55] "Failed to pull image" err="failed to pull and unpack image \"docker.io/flannel/flannel-cni-plugin:v1.1.2\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/flannel/flannel-cni-plugin/manifests/sha256:773049a6464eca8d26c97fb59410cb8385fe59b2f9a68ce7ae2fbe2d49d63585: 429 Too Many Requests - Server message: toomanyrequests: You have reached your pull rate limit. 
You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit" image="docker.io/flannel/flannel-cni-plugin:v1.1.2" Feb 13 20:50:57.650374 kubelet[2505]: E0213 20:50:57.649967 2505 kuberuntime_manager.go:1256] init container &Container{Name:install-cni-plugin,Image:docker.io/flannel/flannel-cni-plugin:v1.1.2,Command:[cp],Args:[-f /flannel /opt/cni/bin/flannel],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:cni-plugin,ReadOnly:false,MountPath:/opt/cni/bin,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-5pl52,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod kube-flannel-ds-vgntp_kube-flannel(f7a2b434-ec01-4de2-9f31-4fa14d12ea17): ErrImagePull: failed to pull and unpack image "docker.io/flannel/flannel-cni-plugin:v1.1.2": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/flannel/flannel-cni-plugin/manifests/sha256:773049a6464eca8d26c97fb59410cb8385fe59b2f9a68ce7ae2fbe2d49d63585: 429 Too Many Requests - Server message: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit Feb 13 20:50:57.650432 kubelet[2505]: E0213 20:50:57.650001 2505 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"install-cni-plugin\" with ErrImagePull: \"failed to pull and unpack image \\\"docker.io/flannel/flannel-cni-plugin:v1.1.2\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/flannel/flannel-cni-plugin/manifests/sha256:773049a6464eca8d26c97fb59410cb8385fe59b2f9a68ce7ae2fbe2d49d63585: 429 Too Many Requests - Server message: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit\"" pod="kube-flannel/kube-flannel-ds-vgntp" podUID="f7a2b434-ec01-4de2-9f31-4fa14d12ea17" Feb 13 20:50:57.714251 kubelet[2505]: E0213 20:50:57.713974 2505 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 20:50:57.714650 containerd[1432]: time="2025-02-13T20:50:57.714591801Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-fbjf6,Uid:d586322b-cbae-4419-b358-96eba1e571e2,Namespace:kube-system,Attempt:0,}" Feb 13 20:50:57.734824 containerd[1432]: time="2025-02-13T20:50:57.734598884Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 20:50:57.734824 containerd[1432]: time="2025-02-13T20:50:57.734653051Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 20:50:57.734824 containerd[1432]: time="2025-02-13T20:50:57.734663692Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 20:50:57.734824 containerd[1432]: time="2025-02-13T20:50:57.734745864Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 20:50:57.761406 systemd[1]: Started cri-containerd-be15f79f548f48a5873688d834d68614e8428d2f72abb66b58c28168f3536109.scope - libcontainer container be15f79f548f48a5873688d834d68614e8428d2f72abb66b58c28168f3536109. Feb 13 20:50:57.783586 containerd[1432]: time="2025-02-13T20:50:57.783475296Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-fbjf6,Uid:d586322b-cbae-4419-b358-96eba1e571e2,Namespace:kube-system,Attempt:0,} returns sandbox id \"be15f79f548f48a5873688d834d68614e8428d2f72abb66b58c28168f3536109\"" Feb 13 20:50:57.784499 kubelet[2505]: E0213 20:50:57.784450 2505 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 20:50:57.787206 containerd[1432]: time="2025-02-13T20:50:57.787112071Z" level=info msg="CreateContainer within sandbox \"be15f79f548f48a5873688d834d68614e8428d2f72abb66b58c28168f3536109\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Feb 13 20:50:57.804971 containerd[1432]: time="2025-02-13T20:50:57.804905452Z" level=info msg="CreateContainer within sandbox \"be15f79f548f48a5873688d834d68614e8428d2f72abb66b58c28168f3536109\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"bce000519aabf6b6e2cf5839fa67115c9e25f6481f2e4d3f39fae8219aff317e\"" Feb 13 20:50:57.805582 containerd[1432]: time="2025-02-13T20:50:57.805555021Z" level=info msg="StartContainer for \"bce000519aabf6b6e2cf5839fa67115c9e25f6481f2e4d3f39fae8219aff317e\"" Feb 13 20:50:57.836396 systemd[1]: Started cri-containerd-bce000519aabf6b6e2cf5839fa67115c9e25f6481f2e4d3f39fae8219aff317e.scope - libcontainer container bce000519aabf6b6e2cf5839fa67115c9e25f6481f2e4d3f39fae8219aff317e. 
Feb 13 20:50:57.867486 containerd[1432]: time="2025-02-13T20:50:57.867386596Z" level=info msg="StartContainer for \"bce000519aabf6b6e2cf5839fa67115c9e25f6481f2e4d3f39fae8219aff317e\" returns successfully" Feb 13 20:50:58.602832 kubelet[2505]: E0213 20:50:58.602672 2505 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 20:50:58.602832 kubelet[2505]: E0213 20:50:58.602693 2505 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 20:50:58.603616 kubelet[2505]: E0213 20:50:58.603251 2505 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"install-cni-plugin\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/flannel/flannel-cni-plugin:v1.1.2\\\"\"" pod="kube-flannel/kube-flannel-ds-vgntp" podUID="f7a2b434-ec01-4de2-9f31-4fa14d12ea17" Feb 13 20:50:58.612675 kubelet[2505]: I0213 20:50:58.612606 2505 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-fbjf6" podStartSLOduration=3.612589024 podStartE2EDuration="3.612589024s" podCreationTimestamp="2025-02-13 20:50:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-02-13 20:50:58.612334911 +0000 UTC m=+17.129956508" watchObservedRunningTime="2025-02-13 20:50:58.612589024 +0000 UTC m=+17.130210621" Feb 13 20:51:07.766308 systemd[1]: Started sshd@5-10.0.0.6:22-10.0.0.1:38250.service - OpenSSH per-connection server daemon (10.0.0.1:38250). Feb 13 20:51:07.801574 sshd[2830]: Accepted publickey for core from 10.0.0.1 port 38250 ssh2: RSA SHA256:cMrn/QNbOj3/muQN24sDQmRTV2Hbz1QD+PbuA/uipHg Feb 13 20:51:07.803276 sshd[2830]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 20:51:07.807494 systemd-logind[1417]: New session 6 of user core. Feb 13 20:51:07.813368 systemd[1]: Started session-6.scope - Session 6 of User core. Feb 13 20:51:07.930696 sshd[2830]: pam_unix(sshd:session): session closed for user core Feb 13 20:51:07.934199 systemd[1]: sshd@5-10.0.0.6:22-10.0.0.1:38250.service: Deactivated successfully. Feb 13 20:51:07.935955 systemd[1]: session-6.scope: Deactivated successfully. Feb 13 20:51:07.936569 systemd-logind[1417]: Session 6 logged out. Waiting for processes to exit. Feb 13 20:51:07.937392 systemd-logind[1417]: Removed session 6. Feb 13 20:51:09.563558 kubelet[2505]: E0213 20:51:09.563420 2505 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 20:51:09.565950 containerd[1432]: time="2025-02-13T20:51:09.565702282Z" level=info msg="PullImage \"docker.io/flannel/flannel-cni-plugin:v1.1.2\"" Feb 13 20:51:10.678020 containerd[1432]: time="2025-02-13T20:51:10.677961100Z" level=error msg="PullImage \"docker.io/flannel/flannel-cni-plugin:v1.1.2\" failed" error="failed to pull and unpack image \"docker.io/flannel/flannel-cni-plugin:v1.1.2\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/flannel/flannel-cni-plugin/manifests/sha256:bf4b62b131666d040f35a327d906ee5a3418280b68a88d9b9c7e828057210443: 429 Too Many Requests - Server message: toomanyrequests: You have reached your pull rate limit. 
You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit" Feb 13 20:51:10.678440 containerd[1432]: time="2025-02-13T20:51:10.678070949Z" level=info msg="stop pulling image docker.io/flannel/flannel-cni-plugin:v1.1.2: active requests=0, bytes read=11110" Feb 13 20:51:10.678474 kubelet[2505]: E0213 20:51:10.678229 2505 remote_image.go:180] "PullImage from image service failed" err="rpc error: code = Unknown desc = failed to pull and unpack image \"docker.io/flannel/flannel-cni-plugin:v1.1.2\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/flannel/flannel-cni-plugin/manifests/sha256:bf4b62b131666d040f35a327d906ee5a3418280b68a88d9b9c7e828057210443: 429 Too Many Requests - Server message: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit" image="docker.io/flannel/flannel-cni-plugin:v1.1.2" Feb 13 20:51:10.678474 kubelet[2505]: E0213 20:51:10.678283 2505 kuberuntime_image.go:55] "Failed to pull image" err="failed to pull and unpack image \"docker.io/flannel/flannel-cni-plugin:v1.1.2\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/flannel/flannel-cni-plugin/manifests/sha256:bf4b62b131666d040f35a327d906ee5a3418280b68a88d9b9c7e828057210443: 429 Too Many Requests - Server message: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit" image="docker.io/flannel/flannel-cni-plugin:v1.1.2" Feb 13 20:51:10.678751 kubelet[2505]: E0213 20:51:10.678374 2505 kuberuntime_manager.go:1256] init container &Container{Name:install-cni-plugin,Image:docker.io/flannel/flannel-cni-plugin:v1.1.2,Command:[cp],Args:[-f /flannel /opt/cni/bin/flannel],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:cni-plugin,ReadOnly:false,MountPath:/opt/cni/bin,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-5pl52,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod kube-flannel-ds-vgntp_kube-flannel(f7a2b434-ec01-4de2-9f31-4fa14d12ea17): ErrImagePull: failed to pull and unpack image "docker.io/flannel/flannel-cni-plugin:v1.1.2": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/flannel/flannel-cni-plugin/manifests/sha256:bf4b62b131666d040f35a327d906ee5a3418280b68a88d9b9c7e828057210443: 429 Too Many Requests - Server message: toomanyrequests: You have reached your pull rate limit. 
You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit Feb 13 20:51:10.678807 kubelet[2505]: E0213 20:51:10.678406 2505 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"install-cni-plugin\" with ErrImagePull: \"failed to pull and unpack image \\\"docker.io/flannel/flannel-cni-plugin:v1.1.2\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/flannel/flannel-cni-plugin/manifests/sha256:bf4b62b131666d040f35a327d906ee5a3418280b68a88d9b9c7e828057210443: 429 Too Many Requests - Server message: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit\"" pod="kube-flannel/kube-flannel-ds-vgntp" podUID="f7a2b434-ec01-4de2-9f31-4fa14d12ea17" Feb 13 20:51:12.948342 systemd[1]: Started sshd@6-10.0.0.6:22-10.0.0.1:58752.service - OpenSSH per-connection server daemon (10.0.0.1:58752). Feb 13 20:51:12.981439 sshd[2846]: Accepted publickey for core from 10.0.0.1 port 58752 ssh2: RSA SHA256:cMrn/QNbOj3/muQN24sDQmRTV2Hbz1QD+PbuA/uipHg Feb 13 20:51:12.982625 sshd[2846]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 20:51:12.986825 systemd-logind[1417]: New session 7 of user core. Feb 13 20:51:12.993408 systemd[1]: Started session-7.scope - Session 7 of User core. Feb 13 20:51:13.099600 sshd[2846]: pam_unix(sshd:session): session closed for user core Feb 13 20:51:13.102840 systemd[1]: sshd@6-10.0.0.6:22-10.0.0.1:58752.service: Deactivated successfully. Feb 13 20:51:13.104664 systemd[1]: session-7.scope: Deactivated successfully. Feb 13 20:51:13.105325 systemd-logind[1417]: Session 7 logged out. Waiting for processes to exit. Feb 13 20:51:13.106229 systemd-logind[1417]: Removed session 7. Feb 13 20:51:18.114697 systemd[1]: Started sshd@7-10.0.0.6:22-10.0.0.1:58754.service - OpenSSH per-connection server daemon (10.0.0.1:58754). Feb 13 20:51:18.146150 sshd[2864]: Accepted publickey for core from 10.0.0.1 port 58754 ssh2: RSA SHA256:cMrn/QNbOj3/muQN24sDQmRTV2Hbz1QD+PbuA/uipHg Feb 13 20:51:18.147269 sshd[2864]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 20:51:18.150792 systemd-logind[1417]: New session 8 of user core. Feb 13 20:51:18.159336 systemd[1]: Started session-8.scope - Session 8 of User core. Feb 13 20:51:18.268132 sshd[2864]: pam_unix(sshd:session): session closed for user core Feb 13 20:51:18.271600 systemd[1]: sshd@7-10.0.0.6:22-10.0.0.1:58754.service: Deactivated successfully. Feb 13 20:51:18.273494 systemd[1]: session-8.scope: Deactivated successfully. Feb 13 20:51:18.274111 systemd-logind[1417]: Session 8 logged out. Waiting for processes to exit. Feb 13 20:51:18.274905 systemd-logind[1417]: Removed session 8. Feb 13 20:51:23.278724 systemd[1]: Started sshd@8-10.0.0.6:22-10.0.0.1:35750.service - OpenSSH per-connection server daemon (10.0.0.1:35750). Feb 13 20:51:23.315673 sshd[2879]: Accepted publickey for core from 10.0.0.1 port 35750 ssh2: RSA SHA256:cMrn/QNbOj3/muQN24sDQmRTV2Hbz1QD+PbuA/uipHg Feb 13 20:51:23.316827 sshd[2879]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 20:51:23.320379 systemd-logind[1417]: New session 9 of user core. Feb 13 20:51:23.331313 systemd[1]: Started session-9.scope - Session 9 of User core. 
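The 429 above is Docker Hub's anonymous pull limit: the flannel init container can never fetch docker.io/flannel/flannel-cni-plugin:v1.1.2, which is what drives the ErrImagePull / ImagePullBackOff loop repeated for the rest of this log. A sketch of the usual remediation, authenticating the pull; the namespace and pod name are taken from the log, while the DaemonSet and service-account names are assumptions based on the upstream kube-flannel manifest and should be verified first (mirroring docker.io through a local registry in containerd's registry config is the other common fix):

    # assumption: the DaemonSet is kube-flannel-ds and its service account is "flannel"
    kubectl -n kube-flannel get ds kube-flannel-ds -o jsonpath='{.spec.template.spec.serviceAccountName}'
    kubectl -n kube-flannel create secret docker-registry dockerhub-creds \
      --docker-server=https://index.docker.io/v1/ \
      --docker-username=<user> --docker-password=<access-token>
    kubectl -n kube-flannel patch serviceaccount flannel \
      -p '{"imagePullSecrets":[{"name":"dockerhub-creds"}]}'
    kubectl -n kube-flannel delete pod kube-flannel-ds-vgntp    # recreate to pull with credentials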
Feb 13 20:51:23.433944 sshd[2879]: pam_unix(sshd:session): session closed for user core Feb 13 20:51:23.437646 systemd[1]: sshd@8-10.0.0.6:22-10.0.0.1:35750.service: Deactivated successfully. Feb 13 20:51:23.439569 systemd[1]: session-9.scope: Deactivated successfully. Feb 13 20:51:23.440376 systemd-logind[1417]: Session 9 logged out. Waiting for processes to exit. Feb 13 20:51:23.441134 systemd-logind[1417]: Removed session 9. Feb 13 20:51:24.563544 kubelet[2505]: E0213 20:51:24.563454 2505 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 20:51:24.564559 kubelet[2505]: E0213 20:51:24.564528 2505 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"install-cni-plugin\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/flannel/flannel-cni-plugin:v1.1.2\\\"\"" pod="kube-flannel/kube-flannel-ds-vgntp" podUID="f7a2b434-ec01-4de2-9f31-4fa14d12ea17" Feb 13 20:51:28.450733 systemd[1]: Started sshd@9-10.0.0.6:22-10.0.0.1:35752.service - OpenSSH per-connection server daemon (10.0.0.1:35752). Feb 13 20:51:28.482599 sshd[2897]: Accepted publickey for core from 10.0.0.1 port 35752 ssh2: RSA SHA256:cMrn/QNbOj3/muQN24sDQmRTV2Hbz1QD+PbuA/uipHg Feb 13 20:51:28.483764 sshd[2897]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 20:51:28.487607 systemd-logind[1417]: New session 10 of user core. Feb 13 20:51:28.494368 systemd[1]: Started session-10.scope - Session 10 of User core. Feb 13 20:51:28.598404 sshd[2897]: pam_unix(sshd:session): session closed for user core Feb 13 20:51:28.601802 systemd[1]: sshd@9-10.0.0.6:22-10.0.0.1:35752.service: Deactivated successfully. Feb 13 20:51:28.603564 systemd[1]: session-10.scope: Deactivated successfully. Feb 13 20:51:28.605002 systemd-logind[1417]: Session 10 logged out. Waiting for processes to exit. Feb 13 20:51:28.605924 systemd-logind[1417]: Removed session 10. Feb 13 20:51:33.608959 systemd[1]: Started sshd@10-10.0.0.6:22-10.0.0.1:41536.service - OpenSSH per-connection server daemon (10.0.0.1:41536). Feb 13 20:51:33.640628 sshd[2912]: Accepted publickey for core from 10.0.0.1 port 41536 ssh2: RSA SHA256:cMrn/QNbOj3/muQN24sDQmRTV2Hbz1QD+PbuA/uipHg Feb 13 20:51:33.642462 sshd[2912]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 20:51:33.646419 systemd-logind[1417]: New session 11 of user core. Feb 13 20:51:33.655400 systemd[1]: Started session-11.scope - Session 11 of User core. Feb 13 20:51:33.761653 sshd[2912]: pam_unix(sshd:session): session closed for user core Feb 13 20:51:33.764815 systemd[1]: sshd@10-10.0.0.6:22-10.0.0.1:41536.service: Deactivated successfully. Feb 13 20:51:33.766335 systemd[1]: session-11.scope: Deactivated successfully. Feb 13 20:51:33.767518 systemd-logind[1417]: Session 11 logged out. Waiting for processes to exit. Feb 13 20:51:33.768298 systemd-logind[1417]: Removed session 11. 
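From here on the log alternates between two independent streams: the kubelet/containerd failure loop, and routine SSH housekeeping (a socket-activated sshd@... unit, a session-N.scope, and matching systemd-logind open/close entries per connection). The SSH groups are normal lifecycle, not faults. To read the journal with that noise filtered out, something like this should work, assuming the usual unit names on this host:

    # keep only the units that carry the actual problem
    journalctl -u kubelet -u containerd --since "2025-02-13 20:50" --until "2025-02-13 20:54"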
Feb 13 20:51:36.562846 kubelet[2505]: E0213 20:51:36.562761 2505 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 20:51:36.563725 containerd[1432]: time="2025-02-13T20:51:36.563612212Z" level=info msg="PullImage \"docker.io/flannel/flannel-cni-plugin:v1.1.2\"" Feb 13 20:51:37.671218 containerd[1432]: time="2025-02-13T20:51:37.671155644Z" level=error msg="PullImage \"docker.io/flannel/flannel-cni-plugin:v1.1.2\" failed" error="failed to pull and unpack image \"docker.io/flannel/flannel-cni-plugin:v1.1.2\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/flannel/flannel-cni-plugin/manifests/sha256:bf4b62b131666d040f35a327d906ee5a3418280b68a88d9b9c7e828057210443: 429 Too Many Requests - Server message: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit" Feb 13 20:51:37.671595 containerd[1432]: time="2025-02-13T20:51:37.671213127Z" level=info msg="stop pulling image docker.io/flannel/flannel-cni-plugin:v1.1.2: active requests=0, bytes read=11110" Feb 13 20:51:37.671643 kubelet[2505]: E0213 20:51:37.671354 2505 remote_image.go:180] "PullImage from image service failed" err="rpc error: code = Unknown desc = failed to pull and unpack image \"docker.io/flannel/flannel-cni-plugin:v1.1.2\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/flannel/flannel-cni-plugin/manifests/sha256:bf4b62b131666d040f35a327d906ee5a3418280b68a88d9b9c7e828057210443: 429 Too Many Requests - Server message: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit" image="docker.io/flannel/flannel-cni-plugin:v1.1.2" Feb 13 20:51:37.671643 kubelet[2505]: E0213 20:51:37.671397 2505 kuberuntime_image.go:55] "Failed to pull image" err="failed to pull and unpack image \"docker.io/flannel/flannel-cni-plugin:v1.1.2\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/flannel/flannel-cni-plugin/manifests/sha256:bf4b62b131666d040f35a327d906ee5a3418280b68a88d9b9c7e828057210443: 429 Too Many Requests - Server message: toomanyrequests: You have reached your pull rate limit. 
You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit" image="docker.io/flannel/flannel-cni-plugin:v1.1.2" Feb 13 20:51:37.671908 kubelet[2505]: E0213 20:51:37.671468 2505 kuberuntime_manager.go:1256] init container &Container{Name:install-cni-plugin,Image:docker.io/flannel/flannel-cni-plugin:v1.1.2,Command:[cp],Args:[-f /flannel /opt/cni/bin/flannel],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:cni-plugin,ReadOnly:false,MountPath:/opt/cni/bin,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-5pl52,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod kube-flannel-ds-vgntp_kube-flannel(f7a2b434-ec01-4de2-9f31-4fa14d12ea17): ErrImagePull: failed to pull and unpack image "docker.io/flannel/flannel-cni-plugin:v1.1.2": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/flannel/flannel-cni-plugin/manifests/sha256:bf4b62b131666d040f35a327d906ee5a3418280b68a88d9b9c7e828057210443: 429 Too Many Requests - Server message: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit Feb 13 20:51:37.671986 kubelet[2505]: E0213 20:51:37.671495 2505 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"install-cni-plugin\" with ErrImagePull: \"failed to pull and unpack image \\\"docker.io/flannel/flannel-cni-plugin:v1.1.2\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/flannel/flannel-cni-plugin/manifests/sha256:bf4b62b131666d040f35a327d906ee5a3418280b68a88d9b9c7e828057210443: 429 Too Many Requests - Server message: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit\"" pod="kube-flannel/kube-flannel-ds-vgntp" podUID="f7a2b434-ec01-4de2-9f31-4fa14d12ea17" Feb 13 20:51:38.772742 systemd[1]: Started sshd@11-10.0.0.6:22-10.0.0.1:41544.service - OpenSSH per-connection server daemon (10.0.0.1:41544). Feb 13 20:51:38.804009 sshd[2928]: Accepted publickey for core from 10.0.0.1 port 41544 ssh2: RSA SHA256:cMrn/QNbOj3/muQN24sDQmRTV2Hbz1QD+PbuA/uipHg Feb 13 20:51:38.805155 sshd[2928]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 20:51:38.808505 systemd-logind[1417]: New session 12 of user core. Feb 13 20:51:38.820385 systemd[1]: Started session-12.scope - Session 12 of User core. Feb 13 20:51:38.922323 sshd[2928]: pam_unix(sshd:session): session closed for user core Feb 13 20:51:38.925340 systemd[1]: sshd@11-10.0.0.6:22-10.0.0.1:41544.service: Deactivated successfully. Feb 13 20:51:38.927647 systemd[1]: session-12.scope: Deactivated successfully. Feb 13 20:51:38.928603 systemd-logind[1417]: Session 12 logged out. Waiting for processes to exit. 
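The &Container{...} dump above is the kubelet's serialization of the failing init container spec: install-cni-plugin runs cp -f /flannel /opt/cni/bin/flannel from the flannel-cni-plugin image, with the host's /opt/cni/bin mounted read-write. The same fields can be read back less noisily from the API; a sketch assuming kubectl access to this cluster:

    kubectl -n kube-flannel get pod kube-flannel-ds-vgntp \
      -o jsonpath='{range .spec.initContainers[*]}{.name}{"\t"}{.image}{"\t"}{.command} {.args}{"\n"}{end}'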
Feb 13 20:51:38.929580 systemd-logind[1417]: Removed session 12. Feb 13 20:51:43.936824 systemd[1]: Started sshd@12-10.0.0.6:22-10.0.0.1:58166.service - OpenSSH per-connection server daemon (10.0.0.1:58166). Feb 13 20:51:43.968681 sshd[2945]: Accepted publickey for core from 10.0.0.1 port 58166 ssh2: RSA SHA256:cMrn/QNbOj3/muQN24sDQmRTV2Hbz1QD+PbuA/uipHg Feb 13 20:51:43.969883 sshd[2945]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 20:51:43.973817 systemd-logind[1417]: New session 13 of user core. Feb 13 20:51:43.980323 systemd[1]: Started session-13.scope - Session 13 of User core. Feb 13 20:51:44.088517 sshd[2945]: pam_unix(sshd:session): session closed for user core Feb 13 20:51:44.091615 systemd[1]: sshd@12-10.0.0.6:22-10.0.0.1:58166.service: Deactivated successfully. Feb 13 20:51:44.094379 systemd[1]: session-13.scope: Deactivated successfully. Feb 13 20:51:44.095057 systemd-logind[1417]: Session 13 logged out. Waiting for processes to exit. Feb 13 20:51:44.096011 systemd-logind[1417]: Removed session 13. Feb 13 20:51:48.563515 kubelet[2505]: E0213 20:51:48.563432 2505 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 20:51:48.564584 kubelet[2505]: E0213 20:51:48.563994 2505 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"install-cni-plugin\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/flannel/flannel-cni-plugin:v1.1.2\\\"\"" pod="kube-flannel/kube-flannel-ds-vgntp" podUID="f7a2b434-ec01-4de2-9f31-4fa14d12ea17" Feb 13 20:51:49.102655 systemd[1]: Started sshd@13-10.0.0.6:22-10.0.0.1:58178.service - OpenSSH per-connection server daemon (10.0.0.1:58178). Feb 13 20:51:49.136001 sshd[2961]: Accepted publickey for core from 10.0.0.1 port 58178 ssh2: RSA SHA256:cMrn/QNbOj3/muQN24sDQmRTV2Hbz1QD+PbuA/uipHg Feb 13 20:51:49.137160 sshd[2961]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 20:51:49.140832 systemd-logind[1417]: New session 14 of user core. Feb 13 20:51:49.147379 systemd[1]: Started session-14.scope - Session 14 of User core. Feb 13 20:51:49.251244 sshd[2961]: pam_unix(sshd:session): session closed for user core Feb 13 20:51:49.253853 systemd[1]: sshd@13-10.0.0.6:22-10.0.0.1:58178.service: Deactivated successfully. Feb 13 20:51:49.255472 systemd[1]: session-14.scope: Deactivated successfully. Feb 13 20:51:49.256776 systemd-logind[1417]: Session 14 logged out. Waiting for processes to exit. Feb 13 20:51:49.258244 systemd-logind[1417]: Removed session 14. Feb 13 20:51:54.262098 systemd[1]: Started sshd@14-10.0.0.6:22-10.0.0.1:58484.service - OpenSSH per-connection server daemon (10.0.0.1:58484). Feb 13 20:51:54.294167 sshd[2977]: Accepted publickey for core from 10.0.0.1 port 58484 ssh2: RSA SHA256:cMrn/QNbOj3/muQN24sDQmRTV2Hbz1QD+PbuA/uipHg Feb 13 20:51:54.295308 sshd[2977]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 20:51:54.299145 systemd-logind[1417]: New session 15 of user core. Feb 13 20:51:54.311373 systemd[1]: Started session-15.scope - Session 15 of User core. Feb 13 20:51:54.415507 sshd[2977]: pam_unix(sshd:session): session closed for user core Feb 13 20:51:54.418030 systemd[1]: sshd@14-10.0.0.6:22-10.0.0.1:58484.service: Deactivated successfully. Feb 13 20:51:54.419660 systemd[1]: session-15.scope: Deactivated successfully. 
Feb 13 20:51:54.420990 systemd-logind[1417]: Session 15 logged out. Waiting for processes to exit. Feb 13 20:51:54.422237 systemd-logind[1417]: Removed session 15. Feb 13 20:51:55.563448 kubelet[2505]: E0213 20:51:55.563407 2505 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 20:51:59.425963 systemd[1]: Started sshd@15-10.0.0.6:22-10.0.0.1:58490.service - OpenSSH per-connection server daemon (10.0.0.1:58490). Feb 13 20:51:59.458076 sshd[2995]: Accepted publickey for core from 10.0.0.1 port 58490 ssh2: RSA SHA256:cMrn/QNbOj3/muQN24sDQmRTV2Hbz1QD+PbuA/uipHg Feb 13 20:51:59.459262 sshd[2995]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 20:51:59.463238 systemd-logind[1417]: New session 16 of user core. Feb 13 20:51:59.477325 systemd[1]: Started session-16.scope - Session 16 of User core. Feb 13 20:51:59.583619 sshd[2995]: pam_unix(sshd:session): session closed for user core Feb 13 20:51:59.586701 systemd[1]: sshd@15-10.0.0.6:22-10.0.0.1:58490.service: Deactivated successfully. Feb 13 20:51:59.588293 systemd[1]: session-16.scope: Deactivated successfully. Feb 13 20:51:59.588821 systemd-logind[1417]: Session 16 logged out. Waiting for processes to exit. Feb 13 20:51:59.591688 systemd-logind[1417]: Removed session 16. Feb 13 20:52:00.563607 kubelet[2505]: E0213 20:52:00.563572 2505 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 20:52:00.564492 kubelet[2505]: E0213 20:52:00.564443 2505 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"install-cni-plugin\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/flannel/flannel-cni-plugin:v1.1.2\\\"\"" pod="kube-flannel/kube-flannel-ds-vgntp" podUID="f7a2b434-ec01-4de2-9f31-4fa14d12ea17" Feb 13 20:52:02.563502 kubelet[2505]: E0213 20:52:02.563457 2505 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 20:52:04.593732 systemd[1]: Started sshd@16-10.0.0.6:22-10.0.0.1:44352.service - OpenSSH per-connection server daemon (10.0.0.1:44352). Feb 13 20:52:04.625050 sshd[3011]: Accepted publickey for core from 10.0.0.1 port 44352 ssh2: RSA SHA256:cMrn/QNbOj3/muQN24sDQmRTV2Hbz1QD+PbuA/uipHg Feb 13 20:52:04.626227 sshd[3011]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 20:52:04.629908 systemd-logind[1417]: New session 17 of user core. Feb 13 20:52:04.640370 systemd[1]: Started session-17.scope - Session 17 of User core. Feb 13 20:52:04.745344 sshd[3011]: pam_unix(sshd:session): session closed for user core Feb 13 20:52:04.748750 systemd[1]: sshd@16-10.0.0.6:22-10.0.0.1:44352.service: Deactivated successfully. Feb 13 20:52:04.750281 systemd[1]: session-17.scope: Deactivated successfully. Feb 13 20:52:04.751961 systemd-logind[1417]: Session 17 logged out. Waiting for processes to exit. Feb 13 20:52:04.753291 systemd-logind[1417]: Removed session 17. 
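The recurring dns.go:153 warning is separate from the pull failures: the node's resolv.conf lists more nameservers than the kubelet will propagate to pods, so it keeps the first three (1.1.1.1 1.0.0.1 8.8.8.8, as the message shows) and drops the rest, matching glibc's three-nameserver limit. It is harmless but noisy; trimming the node's resolver config silences it. A sketch assuming a static /etc/resolv.conf (on hosts managed by systemd-resolved, adjust the upstream DNS settings instead):

    # keep at most three nameservers so the kubelet stops truncating the list
    cat >/etc/resolv.conf <<'EOF'
    nameserver 1.1.1.1
    nameserver 1.0.0.1
    nameserver 8.8.8.8
    EOF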
Feb 13 20:52:09.563966 kubelet[2505]: E0213 20:52:09.563927 2505 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 20:52:09.758520 systemd[1]: Started sshd@17-10.0.0.6:22-10.0.0.1:44366.service - OpenSSH per-connection server daemon (10.0.0.1:44366). Feb 13 20:52:09.789920 sshd[3026]: Accepted publickey for core from 10.0.0.1 port 44366 ssh2: RSA SHA256:cMrn/QNbOj3/muQN24sDQmRTV2Hbz1QD+PbuA/uipHg Feb 13 20:52:09.791053 sshd[3026]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 20:52:09.794541 systemd-logind[1417]: New session 18 of user core. Feb 13 20:52:09.804365 systemd[1]: Started session-18.scope - Session 18 of User core. Feb 13 20:52:09.907367 sshd[3026]: pam_unix(sshd:session): session closed for user core Feb 13 20:52:09.910618 systemd[1]: sshd@17-10.0.0.6:22-10.0.0.1:44366.service: Deactivated successfully. Feb 13 20:52:09.913602 systemd[1]: session-18.scope: Deactivated successfully. Feb 13 20:52:09.914210 systemd-logind[1417]: Session 18 logged out. Waiting for processes to exit. Feb 13 20:52:09.914996 systemd-logind[1417]: Removed session 18. Feb 13 20:52:12.563415 kubelet[2505]: E0213 20:52:12.563382 2505 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 20:52:12.564131 kubelet[2505]: E0213 20:52:12.564072 2505 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"install-cni-plugin\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/flannel/flannel-cni-plugin:v1.1.2\\\"\"" pod="kube-flannel/kube-flannel-ds-vgntp" podUID="f7a2b434-ec01-4de2-9f31-4fa14d12ea17" Feb 13 20:52:14.917548 systemd[1]: Started sshd@18-10.0.0.6:22-10.0.0.1:45820.service - OpenSSH per-connection server daemon (10.0.0.1:45820). Feb 13 20:52:14.948981 sshd[3041]: Accepted publickey for core from 10.0.0.1 port 45820 ssh2: RSA SHA256:cMrn/QNbOj3/muQN24sDQmRTV2Hbz1QD+PbuA/uipHg Feb 13 20:52:14.950395 sshd[3041]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 20:52:14.954013 systemd-logind[1417]: New session 19 of user core. Feb 13 20:52:14.964319 systemd[1]: Started session-19.scope - Session 19 of User core. Feb 13 20:52:15.070106 sshd[3041]: pam_unix(sshd:session): session closed for user core Feb 13 20:52:15.073351 systemd[1]: sshd@18-10.0.0.6:22-10.0.0.1:45820.service: Deactivated successfully. Feb 13 20:52:15.074948 systemd[1]: session-19.scope: Deactivated successfully. Feb 13 20:52:15.076337 systemd-logind[1417]: Session 19 logged out. Waiting for processes to exit. Feb 13 20:52:15.077152 systemd-logind[1417]: Removed session 19. Feb 13 20:52:16.562940 kubelet[2505]: E0213 20:52:16.562899 2505 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 20:52:20.080810 systemd[1]: Started sshd@19-10.0.0.6:22-10.0.0.1:45832.service - OpenSSH per-connection server daemon (10.0.0.1:45832). 
Feb 13 20:52:20.113464 sshd[3056]: Accepted publickey for core from 10.0.0.1 port 45832 ssh2: RSA SHA256:cMrn/QNbOj3/muQN24sDQmRTV2Hbz1QD+PbuA/uipHg Feb 13 20:52:20.114625 sshd[3056]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 20:52:20.118626 systemd-logind[1417]: New session 20 of user core. Feb 13 20:52:20.129328 systemd[1]: Started session-20.scope - Session 20 of User core. Feb 13 20:52:20.237073 sshd[3056]: pam_unix(sshd:session): session closed for user core Feb 13 20:52:20.240314 systemd[1]: sshd@19-10.0.0.6:22-10.0.0.1:45832.service: Deactivated successfully. Feb 13 20:52:20.242521 systemd[1]: session-20.scope: Deactivated successfully. Feb 13 20:52:20.243238 systemd-logind[1417]: Session 20 logged out. Waiting for processes to exit. Feb 13 20:52:20.244039 systemd-logind[1417]: Removed session 20. Feb 13 20:52:24.563621 kubelet[2505]: E0213 20:52:24.563503 2505 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 20:52:24.564658 containerd[1432]: time="2025-02-13T20:52:24.564266956Z" level=info msg="PullImage \"docker.io/flannel/flannel-cni-plugin:v1.1.2\"" Feb 13 20:52:25.248742 systemd[1]: Started sshd@20-10.0.0.6:22-10.0.0.1:33014.service - OpenSSH per-connection server daemon (10.0.0.1:33014). Feb 13 20:52:25.280136 sshd[3072]: Accepted publickey for core from 10.0.0.1 port 33014 ssh2: RSA SHA256:cMrn/QNbOj3/muQN24sDQmRTV2Hbz1QD+PbuA/uipHg Feb 13 20:52:25.281278 sshd[3072]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 20:52:25.284762 systemd-logind[1417]: New session 21 of user core. Feb 13 20:52:25.294323 systemd[1]: Started session-21.scope - Session 21 of User core. Feb 13 20:52:25.401901 sshd[3072]: pam_unix(sshd:session): session closed for user core Feb 13 20:52:25.405122 systemd[1]: sshd@20-10.0.0.6:22-10.0.0.1:33014.service: Deactivated successfully. Feb 13 20:52:25.407624 systemd[1]: session-21.scope: Deactivated successfully. Feb 13 20:52:25.408675 systemd-logind[1417]: Session 21 logged out. Waiting for processes to exit. Feb 13 20:52:25.409623 systemd-logind[1417]: Removed session 21. Feb 13 20:52:25.693813 containerd[1432]: time="2025-02-13T20:52:25.693724027Z" level=error msg="PullImage \"docker.io/flannel/flannel-cni-plugin:v1.1.2\" failed" error="failed to pull and unpack image \"docker.io/flannel/flannel-cni-plugin:v1.1.2\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/flannel/flannel-cni-plugin/manifests/sha256:bf4b62b131666d040f35a327d906ee5a3418280b68a88d9b9c7e828057210443: 429 Too Many Requests - Server message: toomanyrequests: You have reached your pull rate limit. 
You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit" Feb 13 20:52:25.693813 containerd[1432]: time="2025-02-13T20:52:25.693777708Z" level=info msg="stop pulling image docker.io/flannel/flannel-cni-plugin:v1.1.2: active requests=0, bytes read=11110" Feb 13 20:52:25.694304 kubelet[2505]: E0213 20:52:25.693944 2505 remote_image.go:180] "PullImage from image service failed" err="rpc error: code = Unknown desc = failed to pull and unpack image \"docker.io/flannel/flannel-cni-plugin:v1.1.2\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/flannel/flannel-cni-plugin/manifests/sha256:bf4b62b131666d040f35a327d906ee5a3418280b68a88d9b9c7e828057210443: 429 Too Many Requests - Server message: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit" image="docker.io/flannel/flannel-cni-plugin:v1.1.2" Feb 13 20:52:25.694304 kubelet[2505]: E0213 20:52:25.693991 2505 kuberuntime_image.go:55] "Failed to pull image" err="failed to pull and unpack image \"docker.io/flannel/flannel-cni-plugin:v1.1.2\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/flannel/flannel-cni-plugin/manifests/sha256:bf4b62b131666d040f35a327d906ee5a3418280b68a88d9b9c7e828057210443: 429 Too Many Requests - Server message: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit" image="docker.io/flannel/flannel-cni-plugin:v1.1.2" Feb 13 20:52:25.694553 kubelet[2505]: E0213 20:52:25.694078 2505 kuberuntime_manager.go:1256] init container &Container{Name:install-cni-plugin,Image:docker.io/flannel/flannel-cni-plugin:v1.1.2,Command:[cp],Args:[-f /flannel /opt/cni/bin/flannel],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:cni-plugin,ReadOnly:false,MountPath:/opt/cni/bin,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-5pl52,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod kube-flannel-ds-vgntp_kube-flannel(f7a2b434-ec01-4de2-9f31-4fa14d12ea17): ErrImagePull: failed to pull and unpack image "docker.io/flannel/flannel-cni-plugin:v1.1.2": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/flannel/flannel-cni-plugin/manifests/sha256:bf4b62b131666d040f35a327d906ee5a3418280b68a88d9b9c7e828057210443: 429 Too Many Requests - Server message: toomanyrequests: You have reached your pull rate limit. 
You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit Feb 13 20:52:25.694606 kubelet[2505]: E0213 20:52:25.694107 2505 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"install-cni-plugin\" with ErrImagePull: \"failed to pull and unpack image \\\"docker.io/flannel/flannel-cni-plugin:v1.1.2\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/flannel/flannel-cni-plugin/manifests/sha256:bf4b62b131666d040f35a327d906ee5a3418280b68a88d9b9c7e828057210443: 429 Too Many Requests - Server message: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit\"" pod="kube-flannel/kube-flannel-ds-vgntp" podUID="f7a2b434-ec01-4de2-9f31-4fa14d12ea17" Feb 13 20:52:30.415737 systemd[1]: Started sshd@21-10.0.0.6:22-10.0.0.1:33020.service - OpenSSH per-connection server daemon (10.0.0.1:33020). Feb 13 20:52:30.447704 sshd[3089]: Accepted publickey for core from 10.0.0.1 port 33020 ssh2: RSA SHA256:cMrn/QNbOj3/muQN24sDQmRTV2Hbz1QD+PbuA/uipHg Feb 13 20:52:30.448887 sshd[3089]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 20:52:30.452792 systemd-logind[1417]: New session 22 of user core. Feb 13 20:52:30.462338 systemd[1]: Started session-22.scope - Session 22 of User core. Feb 13 20:52:30.568601 sshd[3089]: pam_unix(sshd:session): session closed for user core Feb 13 20:52:30.571848 systemd[1]: sshd@21-10.0.0.6:22-10.0.0.1:33020.service: Deactivated successfully. Feb 13 20:52:30.573466 systemd[1]: session-22.scope: Deactivated successfully. Feb 13 20:52:30.574045 systemd-logind[1417]: Session 22 logged out. Waiting for processes to exit. Feb 13 20:52:30.574902 systemd-logind[1417]: Removed session 22. Feb 13 20:52:35.584712 systemd[1]: Started sshd@22-10.0.0.6:22-10.0.0.1:49490.service - OpenSSH per-connection server daemon (10.0.0.1:49490). Feb 13 20:52:35.616432 sshd[3104]: Accepted publickey for core from 10.0.0.1 port 49490 ssh2: RSA SHA256:cMrn/QNbOj3/muQN24sDQmRTV2Hbz1QD+PbuA/uipHg Feb 13 20:52:35.617658 sshd[3104]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 20:52:35.621162 systemd-logind[1417]: New session 23 of user core. Feb 13 20:52:35.627385 systemd[1]: Started session-23.scope - Session 23 of User core. Feb 13 20:52:35.733644 sshd[3104]: pam_unix(sshd:session): session closed for user core Feb 13 20:52:35.737075 systemd[1]: sshd@22-10.0.0.6:22-10.0.0.1:49490.service: Deactivated successfully. Feb 13 20:52:35.739742 systemd[1]: session-23.scope: Deactivated successfully. Feb 13 20:52:35.740621 systemd-logind[1417]: Session 23 logged out. Waiting for processes to exit. Feb 13 20:52:35.741561 systemd-logind[1417]: Removed session 23. 
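Note the cadence of the PullImage attempts: 20:51:09, 20:51:36, 20:52:24 above, and 20:53:55 further down. That spacing is the kubelet's per-image exponential backoff; assuming the commonly cited defaults of a 10 s base, doubling on each failure, capped at 5 minutes (an assumption about this build, not confirmed by the log), the schedule looks roughly like this sketch:

    # rough model of the kubelet image-pull backoff (assumed: 10s base, x2, 300s cap)
    delay=10; t=0
    for attempt in 1 2 3 4 5 6 7; do
      echo "attempt $attempt at t+${t}s"
      t=$((t + delay))
      delay=$((delay * 2))
      [ "$delay" -gt 300 ] && delay=300
    done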
Feb 13 20:52:38.563612 kubelet[2505]: E0213 20:52:38.563560 2505 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 20:52:38.564540 kubelet[2505]: E0213 20:52:38.564291 2505 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"install-cni-plugin\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/flannel/flannel-cni-plugin:v1.1.2\\\"\"" pod="kube-flannel/kube-flannel-ds-vgntp" podUID="f7a2b434-ec01-4de2-9f31-4fa14d12ea17" Feb 13 20:52:40.746828 systemd[1]: Started sshd@23-10.0.0.6:22-10.0.0.1:49498.service - OpenSSH per-connection server daemon (10.0.0.1:49498). Feb 13 20:52:40.795261 sshd[3120]: Accepted publickey for core from 10.0.0.1 port 49498 ssh2: RSA SHA256:cMrn/QNbOj3/muQN24sDQmRTV2Hbz1QD+PbuA/uipHg Feb 13 20:52:40.796460 sshd[3120]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 20:52:40.800098 systemd-logind[1417]: New session 24 of user core. Feb 13 20:52:40.808383 systemd[1]: Started session-24.scope - Session 24 of User core. Feb 13 20:52:40.915411 sshd[3120]: pam_unix(sshd:session): session closed for user core Feb 13 20:52:40.918619 systemd[1]: sshd@23-10.0.0.6:22-10.0.0.1:49498.service: Deactivated successfully. Feb 13 20:52:40.920967 systemd[1]: session-24.scope: Deactivated successfully. Feb 13 20:52:40.922014 systemd-logind[1417]: Session 24 logged out. Waiting for processes to exit. Feb 13 20:52:40.922891 systemd-logind[1417]: Removed session 24. Feb 13 20:52:41.580194 kubelet[2505]: E0213 20:52:41.580148 2505 kubelet_node_status.go:456] "Node not becoming ready in time after startup" Feb 13 20:52:41.619539 kubelet[2505]: E0213 20:52:41.619501 2505 kubelet.go:2900] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Feb 13 20:52:45.929843 systemd[1]: Started sshd@24-10.0.0.6:22-10.0.0.1:44608.service - OpenSSH per-connection server daemon (10.0.0.1:44608). Feb 13 20:52:45.963049 sshd[3137]: Accepted publickey for core from 10.0.0.1 port 44608 ssh2: RSA SHA256:cMrn/QNbOj3/muQN24sDQmRTV2Hbz1QD+PbuA/uipHg Feb 13 20:52:45.964291 sshd[3137]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 20:52:45.968148 systemd-logind[1417]: New session 25 of user core. Feb 13 20:52:45.982325 systemd[1]: Started session-25.scope - Session 25 of User core. Feb 13 20:52:46.086860 sshd[3137]: pam_unix(sshd:session): session closed for user core Feb 13 20:52:46.090476 systemd[1]: sshd@24-10.0.0.6:22-10.0.0.1:44608.service: Deactivated successfully. Feb 13 20:52:46.093027 systemd[1]: session-25.scope: Deactivated successfully. Feb 13 20:52:46.093639 systemd-logind[1417]: Session 25 logged out. Waiting for processes to exit. Feb 13 20:52:46.094659 systemd-logind[1417]: Removed session 25. 
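The two kubelet entries above (kubelet_node_status and "Container runtime network not ready") close the causal chain: because install-cni-plugin never succeeds, no flannel binary lands in /opt/cni/bin and no conflist is written to /etc/cni/net.d, so the runtime reports NetworkPluginNotReady and the node never becomes Ready. A quick confirmation sketch; the paths are the conventional CNI locations, assumed rather than shown in this log:

    ls /opt/cni/bin/        # no flannel binary until the init container succeeds
    ls /etc/cni/net.d/      # empty, hence "cni plugin not initialized"
    kubectl get nodes       # the node stays NotReady
    kubectl -n kube-flannel get pods    # kube-flannel-ds-vgntp stuck in Init:ImagePullBackOff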
Feb 13 20:52:46.620548 kubelet[2505]: E0213 20:52:46.620505 2505 kubelet.go:2900] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Feb 13 20:52:49.563783 kubelet[2505]: E0213 20:52:49.563745 2505 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 20:52:49.564920 kubelet[2505]: E0213 20:52:49.564383 2505 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"install-cni-plugin\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/flannel/flannel-cni-plugin:v1.1.2\\\"\"" pod="kube-flannel/kube-flannel-ds-vgntp" podUID="f7a2b434-ec01-4de2-9f31-4fa14d12ea17" Feb 13 20:52:51.096734 systemd[1]: Started sshd@25-10.0.0.6:22-10.0.0.1:44624.service - OpenSSH per-connection server daemon (10.0.0.1:44624). Feb 13 20:52:51.128285 sshd[3152]: Accepted publickey for core from 10.0.0.1 port 44624 ssh2: RSA SHA256:cMrn/QNbOj3/muQN24sDQmRTV2Hbz1QD+PbuA/uipHg Feb 13 20:52:51.129453 sshd[3152]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 20:52:51.133243 systemd-logind[1417]: New session 26 of user core. Feb 13 20:52:51.140315 systemd[1]: Started session-26.scope - Session 26 of User core. Feb 13 20:52:51.244875 sshd[3152]: pam_unix(sshd:session): session closed for user core Feb 13 20:52:51.247897 systemd[1]: sshd@25-10.0.0.6:22-10.0.0.1:44624.service: Deactivated successfully. Feb 13 20:52:51.249582 systemd[1]: session-26.scope: Deactivated successfully. Feb 13 20:52:51.250142 systemd-logind[1417]: Session 26 logged out. Waiting for processes to exit. Feb 13 20:52:51.250946 systemd-logind[1417]: Removed session 26. Feb 13 20:52:51.621523 kubelet[2505]: E0213 20:52:51.621479 2505 kubelet.go:2900] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Feb 13 20:52:56.255605 systemd[1]: Started sshd@26-10.0.0.6:22-10.0.0.1:35834.service - OpenSSH per-connection server daemon (10.0.0.1:35834). Feb 13 20:52:56.287140 sshd[3168]: Accepted publickey for core from 10.0.0.1 port 35834 ssh2: RSA SHA256:cMrn/QNbOj3/muQN24sDQmRTV2Hbz1QD+PbuA/uipHg Feb 13 20:52:56.288298 sshd[3168]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 20:52:56.291535 systemd-logind[1417]: New session 27 of user core. Feb 13 20:52:56.302333 systemd[1]: Started session-27.scope - Session 27 of User core. Feb 13 20:52:56.405785 sshd[3168]: pam_unix(sshd:session): session closed for user core Feb 13 20:52:56.409033 systemd[1]: sshd@26-10.0.0.6:22-10.0.0.1:35834.service: Deactivated successfully. Feb 13 20:52:56.410771 systemd[1]: session-27.scope: Deactivated successfully. Feb 13 20:52:56.411374 systemd-logind[1417]: Session 27 logged out. Waiting for processes to exit. Feb 13 20:52:56.412157 systemd-logind[1417]: Removed session 27. Feb 13 20:52:56.622535 kubelet[2505]: E0213 20:52:56.622492 2505 kubelet.go:2900] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Feb 13 20:53:01.416705 systemd[1]: Started sshd@27-10.0.0.6:22-10.0.0.1:35836.service - OpenSSH per-connection server daemon (10.0.0.1:35836). 
Feb 13 20:53:01.450012 sshd[3186]: Accepted publickey for core from 10.0.0.1 port 35836 ssh2: RSA SHA256:cMrn/QNbOj3/muQN24sDQmRTV2Hbz1QD+PbuA/uipHg Feb 13 20:53:01.451155 sshd[3186]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 20:53:01.454804 systemd-logind[1417]: New session 28 of user core. Feb 13 20:53:01.467308 systemd[1]: Started session-28.scope - Session 28 of User core. Feb 13 20:53:01.573758 sshd[3186]: pam_unix(sshd:session): session closed for user core Feb 13 20:53:01.576930 systemd[1]: sshd@27-10.0.0.6:22-10.0.0.1:35836.service: Deactivated successfully. Feb 13 20:53:01.578672 systemd[1]: session-28.scope: Deactivated successfully. Feb 13 20:53:01.579257 systemd-logind[1417]: Session 28 logged out. Waiting for processes to exit. Feb 13 20:53:01.579995 systemd-logind[1417]: Removed session 28. Feb 13 20:53:01.623373 kubelet[2505]: E0213 20:53:01.623341 2505 kubelet.go:2900] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Feb 13 20:53:02.563019 kubelet[2505]: E0213 20:53:02.562976 2505 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 20:53:02.563722 kubelet[2505]: E0213 20:53:02.563691 2505 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"install-cni-plugin\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/flannel/flannel-cni-plugin:v1.1.2\\\"\"" pod="kube-flannel/kube-flannel-ds-vgntp" podUID="f7a2b434-ec01-4de2-9f31-4fa14d12ea17" Feb 13 20:53:05.563678 kubelet[2505]: E0213 20:53:05.563539 2505 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 20:53:06.584740 systemd[1]: Started sshd@28-10.0.0.6:22-10.0.0.1:41806.service - OpenSSH per-connection server daemon (10.0.0.1:41806). Feb 13 20:53:06.616601 sshd[3201]: Accepted publickey for core from 10.0.0.1 port 41806 ssh2: RSA SHA256:cMrn/QNbOj3/muQN24sDQmRTV2Hbz1QD+PbuA/uipHg Feb 13 20:53:06.617731 sshd[3201]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 20:53:06.621698 systemd-logind[1417]: New session 29 of user core. Feb 13 20:53:06.624995 kubelet[2505]: E0213 20:53:06.624955 2505 kubelet.go:2900] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Feb 13 20:53:06.630317 systemd[1]: Started session-29.scope - Session 29 of User core. Feb 13 20:53:06.738573 sshd[3201]: pam_unix(sshd:session): session closed for user core Feb 13 20:53:06.741239 systemd[1]: sshd@28-10.0.0.6:22-10.0.0.1:41806.service: Deactivated successfully. Feb 13 20:53:06.742755 systemd[1]: session-29.scope: Deactivated successfully. Feb 13 20:53:06.743924 systemd-logind[1417]: Session 29 logged out. Waiting for processes to exit. Feb 13 20:53:06.744893 systemd-logind[1417]: Removed session 29. 
Feb 13 20:53:11.626107 kubelet[2505]: E0213 20:53:11.626060 2505 kubelet.go:2900] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Feb 13 20:53:11.751957 systemd[1]: Started sshd@29-10.0.0.6:22-10.0.0.1:41814.service - OpenSSH per-connection server daemon (10.0.0.1:41814). Feb 13 20:53:11.785651 sshd[3218]: Accepted publickey for core from 10.0.0.1 port 41814 ssh2: RSA SHA256:cMrn/QNbOj3/muQN24sDQmRTV2Hbz1QD+PbuA/uipHg Feb 13 20:53:11.787025 sshd[3218]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 20:53:11.792021 systemd-logind[1417]: New session 30 of user core. Feb 13 20:53:11.802321 systemd[1]: Started session-30.scope - Session 30 of User core. Feb 13 20:53:11.910044 sshd[3218]: pam_unix(sshd:session): session closed for user core Feb 13 20:53:11.913224 systemd[1]: sshd@29-10.0.0.6:22-10.0.0.1:41814.service: Deactivated successfully. Feb 13 20:53:11.914902 systemd[1]: session-30.scope: Deactivated successfully. Feb 13 20:53:11.916157 systemd-logind[1417]: Session 30 logged out. Waiting for processes to exit. Feb 13 20:53:11.917070 systemd-logind[1417]: Removed session 30. Feb 13 20:53:16.563893 kubelet[2505]: E0213 20:53:16.563729 2505 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 20:53:16.564765 kubelet[2505]: E0213 20:53:16.564527 2505 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"install-cni-plugin\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/flannel/flannel-cni-plugin:v1.1.2\\\"\"" pod="kube-flannel/kube-flannel-ds-vgntp" podUID="f7a2b434-ec01-4de2-9f31-4fa14d12ea17" Feb 13 20:53:16.627502 kubelet[2505]: E0213 20:53:16.627474 2505 kubelet.go:2900] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Feb 13 20:53:16.921525 systemd[1]: Started sshd@30-10.0.0.6:22-10.0.0.1:35884.service - OpenSSH per-connection server daemon (10.0.0.1:35884). Feb 13 20:53:16.953296 sshd[3235]: Accepted publickey for core from 10.0.0.1 port 35884 ssh2: RSA SHA256:cMrn/QNbOj3/muQN24sDQmRTV2Hbz1QD+PbuA/uipHg Feb 13 20:53:16.954399 sshd[3235]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 20:53:16.957873 systemd-logind[1417]: New session 31 of user core. Feb 13 20:53:16.966391 systemd[1]: Started session-31.scope - Session 31 of User core. Feb 13 20:53:17.072991 sshd[3235]: pam_unix(sshd:session): session closed for user core Feb 13 20:53:17.076083 systemd[1]: sshd@30-10.0.0.6:22-10.0.0.1:35884.service: Deactivated successfully. Feb 13 20:53:17.078347 systemd[1]: session-31.scope: Deactivated successfully. Feb 13 20:53:17.078894 systemd-logind[1417]: Session 31 logged out. Waiting for processes to exit. Feb 13 20:53:17.079720 systemd-logind[1417]: Removed session 31. Feb 13 20:53:21.628780 kubelet[2505]: E0213 20:53:21.628740 2505 kubelet.go:2900] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Feb 13 20:53:22.083607 systemd[1]: Started sshd@31-10.0.0.6:22-10.0.0.1:35894.service - OpenSSH per-connection server daemon (10.0.0.1:35894). 
Feb 13 20:53:22.114781 sshd[3250]: Accepted publickey for core from 10.0.0.1 port 35894 ssh2: RSA SHA256:cMrn/QNbOj3/muQN24sDQmRTV2Hbz1QD+PbuA/uipHg Feb 13 20:53:22.115987 sshd[3250]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 20:53:22.120054 systemd-logind[1417]: New session 32 of user core. Feb 13 20:53:22.128398 systemd[1]: Started session-32.scope - Session 32 of User core. Feb 13 20:53:22.234339 sshd[3250]: pam_unix(sshd:session): session closed for user core Feb 13 20:53:22.237572 systemd[1]: sshd@31-10.0.0.6:22-10.0.0.1:35894.service: Deactivated successfully. Feb 13 20:53:22.239675 systemd[1]: session-32.scope: Deactivated successfully. Feb 13 20:53:22.240357 systemd-logind[1417]: Session 32 logged out. Waiting for processes to exit. Feb 13 20:53:22.241082 systemd-logind[1417]: Removed session 32. Feb 13 20:53:26.629798 kubelet[2505]: E0213 20:53:26.629754 2505 kubelet.go:2900] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Feb 13 20:53:27.244783 systemd[1]: Started sshd@32-10.0.0.6:22-10.0.0.1:60990.service - OpenSSH per-connection server daemon (10.0.0.1:60990). Feb 13 20:53:27.277132 sshd[3266]: Accepted publickey for core from 10.0.0.1 port 60990 ssh2: RSA SHA256:cMrn/QNbOj3/muQN24sDQmRTV2Hbz1QD+PbuA/uipHg Feb 13 20:53:27.278315 sshd[3266]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 20:53:27.281713 systemd-logind[1417]: New session 33 of user core. Feb 13 20:53:27.298324 systemd[1]: Started session-33.scope - Session 33 of User core. Feb 13 20:53:27.404001 sshd[3266]: pam_unix(sshd:session): session closed for user core Feb 13 20:53:27.407117 systemd[1]: sshd@32-10.0.0.6:22-10.0.0.1:60990.service: Deactivated successfully. Feb 13 20:53:27.408762 systemd[1]: session-33.scope: Deactivated successfully. Feb 13 20:53:27.409344 systemd-logind[1417]: Session 33 logged out. Waiting for processes to exit. Feb 13 20:53:27.410125 systemd-logind[1417]: Removed session 33. Feb 13 20:53:27.562882 kubelet[2505]: E0213 20:53:27.562856 2505 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 20:53:27.563552 kubelet[2505]: E0213 20:53:27.563527 2505 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"install-cni-plugin\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/flannel/flannel-cni-plugin:v1.1.2\\\"\"" pod="kube-flannel/kube-flannel-ds-vgntp" podUID="f7a2b434-ec01-4de2-9f31-4fa14d12ea17" Feb 13 20:53:28.563647 kubelet[2505]: E0213 20:53:28.563601 2505 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 20:53:31.630970 kubelet[2505]: E0213 20:53:31.630864 2505 kubelet.go:2900] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Feb 13 20:53:32.427444 systemd[1]: Started sshd@33-10.0.0.6:22-10.0.0.1:60996.service - OpenSSH per-connection server daemon (10.0.0.1:60996). 
Feb 13 20:53:32.454889 sshd[3284]: Accepted publickey for core from 10.0.0.1 port 60996 ssh2: RSA SHA256:cMrn/QNbOj3/muQN24sDQmRTV2Hbz1QD+PbuA/uipHg Feb 13 20:53:32.456069 sshd[3284]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 20:53:32.459564 systemd-logind[1417]: New session 34 of user core. Feb 13 20:53:32.472314 systemd[1]: Started session-34.scope - Session 34 of User core. Feb 13 20:53:32.578320 sshd[3284]: pam_unix(sshd:session): session closed for user core Feb 13 20:53:32.581333 systemd[1]: sshd@33-10.0.0.6:22-10.0.0.1:60996.service: Deactivated successfully. Feb 13 20:53:32.583859 systemd[1]: session-34.scope: Deactivated successfully. Feb 13 20:53:32.584819 systemd-logind[1417]: Session 34 logged out. Waiting for processes to exit. Feb 13 20:53:32.585707 systemd-logind[1417]: Removed session 34. Feb 13 20:53:36.563553 kubelet[2505]: E0213 20:53:36.563468 2505 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 20:53:36.563553 kubelet[2505]: E0213 20:53:36.563483 2505 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 20:53:36.632260 kubelet[2505]: E0213 20:53:36.632233 2505 kubelet.go:2900] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Feb 13 20:53:37.588660 systemd[1]: Started sshd@34-10.0.0.6:22-10.0.0.1:57608.service - OpenSSH per-connection server daemon (10.0.0.1:57608). Feb 13 20:53:37.620167 sshd[3300]: Accepted publickey for core from 10.0.0.1 port 57608 ssh2: RSA SHA256:cMrn/QNbOj3/muQN24sDQmRTV2Hbz1QD+PbuA/uipHg Feb 13 20:53:37.621301 sshd[3300]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 20:53:37.624549 systemd-logind[1417]: New session 35 of user core. Feb 13 20:53:37.634378 systemd[1]: Started session-35.scope - Session 35 of User core. Feb 13 20:53:37.739200 sshd[3300]: pam_unix(sshd:session): session closed for user core Feb 13 20:53:37.742425 systemd[1]: sshd@34-10.0.0.6:22-10.0.0.1:57608.service: Deactivated successfully. Feb 13 20:53:37.745168 systemd[1]: session-35.scope: Deactivated successfully. Feb 13 20:53:37.746381 systemd-logind[1417]: Session 35 logged out. Waiting for processes to exit. Feb 13 20:53:37.747308 systemd-logind[1417]: Removed session 35. 
Feb 13 20:53:41.563628 kubelet[2505]: E0213 20:53:41.563592 2505 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 20:53:41.564445 kubelet[2505]: E0213 20:53:41.564168 2505 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"install-cni-plugin\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/flannel/flannel-cni-plugin:v1.1.2\\\"\"" pod="kube-flannel/kube-flannel-ds-vgntp" podUID="f7a2b434-ec01-4de2-9f31-4fa14d12ea17" Feb 13 20:53:41.633528 kubelet[2505]: E0213 20:53:41.633495 2505 kubelet.go:2900] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Feb 13 20:53:42.750848 systemd[1]: Started sshd@35-10.0.0.6:22-10.0.0.1:51834.service - OpenSSH per-connection server daemon (10.0.0.1:51834). Feb 13 20:53:42.782147 sshd[3317]: Accepted publickey for core from 10.0.0.1 port 51834 ssh2: RSA SHA256:cMrn/QNbOj3/muQN24sDQmRTV2Hbz1QD+PbuA/uipHg Feb 13 20:53:42.783312 sshd[3317]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 20:53:42.787152 systemd-logind[1417]: New session 36 of user core. Feb 13 20:53:42.807330 systemd[1]: Started session-36.scope - Session 36 of User core. Feb 13 20:53:42.912964 sshd[3317]: pam_unix(sshd:session): session closed for user core Feb 13 20:53:42.915992 systemd[1]: sshd@35-10.0.0.6:22-10.0.0.1:51834.service: Deactivated successfully. Feb 13 20:53:42.917564 systemd[1]: session-36.scope: Deactivated successfully. Feb 13 20:53:42.919393 systemd-logind[1417]: Session 36 logged out. Waiting for processes to exit. Feb 13 20:53:42.920287 systemd-logind[1417]: Removed session 36. Feb 13 20:53:46.634265 kubelet[2505]: E0213 20:53:46.634224 2505 kubelet.go:2900] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Feb 13 20:53:47.923698 systemd[1]: Started sshd@36-10.0.0.6:22-10.0.0.1:51848.service - OpenSSH per-connection server daemon (10.0.0.1:51848). Feb 13 20:53:47.955420 sshd[3333]: Accepted publickey for core from 10.0.0.1 port 51848 ssh2: RSA SHA256:cMrn/QNbOj3/muQN24sDQmRTV2Hbz1QD+PbuA/uipHg Feb 13 20:53:47.956580 sshd[3333]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 20:53:47.960019 systemd-logind[1417]: New session 37 of user core. Feb 13 20:53:47.968335 systemd[1]: Started session-37.scope - Session 37 of User core. Feb 13 20:53:48.072663 sshd[3333]: pam_unix(sshd:session): session closed for user core Feb 13 20:53:48.075009 systemd[1]: session-37.scope: Deactivated successfully. Feb 13 20:53:48.075612 systemd[1]: sshd@36-10.0.0.6:22-10.0.0.1:51848.service: Deactivated successfully. Feb 13 20:53:48.078027 systemd-logind[1417]: Session 37 logged out. Waiting for processes to exit. Feb 13 20:53:48.078928 systemd-logind[1417]: Removed session 37. Feb 13 20:53:51.635763 kubelet[2505]: E0213 20:53:51.635728 2505 kubelet.go:2900] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Feb 13 20:53:53.083613 systemd[1]: Started sshd@37-10.0.0.6:22-10.0.0.1:42956.service - OpenSSH per-connection server daemon (10.0.0.1:42956). 
Feb 13 20:53:53.115851 sshd[3348]: Accepted publickey for core from 10.0.0.1 port 42956 ssh2: RSA SHA256:cMrn/QNbOj3/muQN24sDQmRTV2Hbz1QD+PbuA/uipHg Feb 13 20:53:53.116991 sshd[3348]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 20:53:53.120739 systemd-logind[1417]: New session 38 of user core. Feb 13 20:53:53.132304 systemd[1]: Started session-38.scope - Session 38 of User core. Feb 13 20:53:53.236382 sshd[3348]: pam_unix(sshd:session): session closed for user core Feb 13 20:53:53.239109 systemd-logind[1417]: Session 38 logged out. Waiting for processes to exit. Feb 13 20:53:53.239378 systemd[1]: sshd@37-10.0.0.6:22-10.0.0.1:42956.service: Deactivated successfully. Feb 13 20:53:53.240997 systemd[1]: session-38.scope: Deactivated successfully. Feb 13 20:53:53.242415 systemd-logind[1417]: Removed session 38. Feb 13 20:53:55.563590 kubelet[2505]: E0213 20:53:55.563543 2505 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 20:53:55.564572 containerd[1432]: time="2025-02-13T20:53:55.564524393Z" level=info msg="PullImage \"docker.io/flannel/flannel-cni-plugin:v1.1.2\"" Feb 13 20:53:56.637753 kubelet[2505]: E0213 20:53:56.637688 2505 kubelet.go:2900] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Feb 13 20:53:56.675276 containerd[1432]: time="2025-02-13T20:53:56.675224435Z" level=error msg="PullImage \"docker.io/flannel/flannel-cni-plugin:v1.1.2\" failed" error="failed to pull and unpack image \"docker.io/flannel/flannel-cni-plugin:v1.1.2\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/flannel/flannel-cni-plugin/manifests/sha256:bf4b62b131666d040f35a327d906ee5a3418280b68a88d9b9c7e828057210443: 429 Too Many Requests - Server message: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit" Feb 13 20:53:56.675535 containerd[1432]: time="2025-02-13T20:53:56.675302510Z" level=info msg="stop pulling image docker.io/flannel/flannel-cni-plugin:v1.1.2: active requests=0, bytes read=11110" Feb 13 20:53:56.675575 kubelet[2505]: E0213 20:53:56.675426 2505 remote_image.go:180] "PullImage from image service failed" err="rpc error: code = Unknown desc = failed to pull and unpack image \"docker.io/flannel/flannel-cni-plugin:v1.1.2\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/flannel/flannel-cni-plugin/manifests/sha256:bf4b62b131666d040f35a327d906ee5a3418280b68a88d9b9c7e828057210443: 429 Too Many Requests - Server message: toomanyrequests: You have reached your pull rate limit. 
You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit" image="docker.io/flannel/flannel-cni-plugin:v1.1.2" Feb 13 20:53:56.675575 kubelet[2505]: E0213 20:53:56.675463 2505 kuberuntime_image.go:55] "Failed to pull image" err="failed to pull and unpack image \"docker.io/flannel/flannel-cni-plugin:v1.1.2\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/flannel/flannel-cni-plugin/manifests/sha256:bf4b62b131666d040f35a327d906ee5a3418280b68a88d9b9c7e828057210443: 429 Too Many Requests - Server message: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit" image="docker.io/flannel/flannel-cni-plugin:v1.1.2" Feb 13 20:53:56.675680 kubelet[2505]: E0213 20:53:56.675564 2505 kuberuntime_manager.go:1256] init container &Container{Name:install-cni-plugin,Image:docker.io/flannel/flannel-cni-plugin:v1.1.2,Command:[cp],Args:[-f /flannel /opt/cni/bin/flannel],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:cni-plugin,ReadOnly:false,MountPath:/opt/cni/bin,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-5pl52,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod kube-flannel-ds-vgntp_kube-flannel(f7a2b434-ec01-4de2-9f31-4fa14d12ea17): ErrImagePull: failed to pull and unpack image "docker.io/flannel/flannel-cni-plugin:v1.1.2": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/flannel/flannel-cni-plugin/manifests/sha256:bf4b62b131666d040f35a327d906ee5a3418280b68a88d9b9c7e828057210443: 429 Too Many Requests - Server message: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit Feb 13 20:53:56.675731 kubelet[2505]: E0213 20:53:56.675628 2505 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"install-cni-plugin\" with ErrImagePull: \"failed to pull and unpack image \\\"docker.io/flannel/flannel-cni-plugin:v1.1.2\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/flannel/flannel-cni-plugin/manifests/sha256:bf4b62b131666d040f35a327d906ee5a3418280b68a88d9b9c7e828057210443: 429 Too Many Requests - Server message: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit\"" pod="kube-flannel/kube-flannel-ds-vgntp" podUID="f7a2b434-ec01-4de2-9f31-4fa14d12ea17" Feb 13 20:53:58.246645 systemd[1]: Started sshd@38-10.0.0.6:22-10.0.0.1:42964.service - OpenSSH per-connection server daemon (10.0.0.1:42964). 
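The kuberuntime_manager.go:1256 entry above dumps the failing init container as a serialized Go struct. Rendered back into manifest YAML for readability (a reconstruction from the dump only; the referenced volumes are defined elsewhere in the DaemonSet spec):

  initContainers:
  - name: install-cni-plugin
    image: docker.io/flannel/flannel-cni-plugin:v1.1.2
    imagePullPolicy: IfNotPresent
    # Copies the static flannel CNI binary onto the host path that
    # the container runtime searches for CNI plugins.
    command: ["cp"]
    args: ["-f", "/flannel", "/opt/cni/bin/flannel"]
    volumeMounts:
    - name: cni-plugin                # hostPath for /opt/cni/bin
      mountPath: /opt/cni/bin
    - name: kube-api-access-5pl52     # projected service-account token
      mountPath: /var/run/secrets/kubernetes.io/serviceaccount
      readOnly: true

Because the policy is IfNotPresent, a single successful pull ends the back-off loop permanently: the image stays in the local store afterwards.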
Feb 13 20:53:58.278232 sshd[3365]: Accepted publickey for core from 10.0.0.1 port 42964 ssh2: RSA SHA256:cMrn/QNbOj3/muQN24sDQmRTV2Hbz1QD+PbuA/uipHg Feb 13 20:53:58.279357 sshd[3365]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 20:53:58.282837 systemd-logind[1417]: New session 39 of user core. Feb 13 20:53:58.289306 systemd[1]: Started session-39.scope - Session 39 of User core. Feb 13 20:53:58.392420 sshd[3365]: pam_unix(sshd:session): session closed for user core Feb 13 20:53:58.395584 systemd[1]: sshd@38-10.0.0.6:22-10.0.0.1:42964.service: Deactivated successfully. Feb 13 20:53:58.397427 systemd[1]: session-39.scope: Deactivated successfully. Feb 13 20:53:58.398093 systemd-logind[1417]: Session 39 logged out. Waiting for processes to exit. Feb 13 20:53:58.399054 systemd-logind[1417]: Removed session 39. Feb 13 20:54:01.638934 kubelet[2505]: E0213 20:54:01.638891 2505 kubelet.go:2900] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Feb 13 20:54:03.402765 systemd[1]: Started sshd@39-10.0.0.6:22-10.0.0.1:37076.service - OpenSSH per-connection server daemon (10.0.0.1:37076). Feb 13 20:54:03.434151 sshd[3380]: Accepted publickey for core from 10.0.0.1 port 37076 ssh2: RSA SHA256:cMrn/QNbOj3/muQN24sDQmRTV2Hbz1QD+PbuA/uipHg Feb 13 20:54:03.435280 sshd[3380]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 20:54:03.438585 systemd-logind[1417]: New session 40 of user core. Feb 13 20:54:03.450387 systemd[1]: Started session-40.scope - Session 40 of User core. Feb 13 20:54:03.553230 sshd[3380]: pam_unix(sshd:session): session closed for user core Feb 13 20:54:03.556615 systemd[1]: sshd@39-10.0.0.6:22-10.0.0.1:37076.service: Deactivated successfully. Feb 13 20:54:03.558840 systemd[1]: session-40.scope: Deactivated successfully. Feb 13 20:54:03.559556 systemd-logind[1417]: Session 40 logged out. Waiting for processes to exit. Feb 13 20:54:03.560415 systemd-logind[1417]: Removed session 40. Feb 13 20:54:06.640048 kubelet[2505]: E0213 20:54:06.640011 2505 kubelet.go:2900] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Feb 13 20:54:07.563603 kubelet[2505]: E0213 20:54:07.563557 2505 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 20:54:08.563656 systemd[1]: Started sshd@40-10.0.0.6:22-10.0.0.1:37082.service - OpenSSH per-connection server daemon (10.0.0.1:37082). Feb 13 20:54:08.595695 sshd[3396]: Accepted publickey for core from 10.0.0.1 port 37082 ssh2: RSA SHA256:cMrn/QNbOj3/muQN24sDQmRTV2Hbz1QD+PbuA/uipHg Feb 13 20:54:08.596853 sshd[3396]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 20:54:08.600829 systemd-logind[1417]: New session 41 of user core. Feb 13 20:54:08.609314 systemd[1]: Started session-41.scope - Session 41 of User core. Feb 13 20:54:08.713919 sshd[3396]: pam_unix(sshd:session): session closed for user core Feb 13 20:54:08.723550 systemd[1]: sshd@40-10.0.0.6:22-10.0.0.1:37082.service: Deactivated successfully. Feb 13 20:54:08.725109 systemd[1]: session-41.scope: Deactivated successfully. Feb 13 20:54:08.727428 systemd-logind[1417]: Session 41 logged out. Waiting for processes to exit. 
Feb 13 20:54:08.734653 systemd[1]: Started sshd@41-10.0.0.6:22-10.0.0.1:37090.service - OpenSSH per-connection server daemon (10.0.0.1:37090). Feb 13 20:54:08.735716 systemd-logind[1417]: Removed session 41. Feb 13 20:54:08.762083 sshd[3411]: Accepted publickey for core from 10.0.0.1 port 37090 ssh2: RSA SHA256:cMrn/QNbOj3/muQN24sDQmRTV2Hbz1QD+PbuA/uipHg Feb 13 20:54:08.763235 sshd[3411]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 20:54:08.767248 systemd-logind[1417]: New session 42 of user core. Feb 13 20:54:08.778323 systemd[1]: Started session-42.scope - Session 42 of User core. Feb 13 20:54:08.916460 sshd[3411]: pam_unix(sshd:session): session closed for user core Feb 13 20:54:08.926028 systemd[1]: sshd@41-10.0.0.6:22-10.0.0.1:37090.service: Deactivated successfully. Feb 13 20:54:08.928335 systemd[1]: session-42.scope: Deactivated successfully. Feb 13 20:54:08.931470 systemd-logind[1417]: Session 42 logged out. Waiting for processes to exit. Feb 13 20:54:08.940490 systemd[1]: Started sshd@42-10.0.0.6:22-10.0.0.1:37094.service - OpenSSH per-connection server daemon (10.0.0.1:37094). Feb 13 20:54:08.941384 systemd-logind[1417]: Removed session 42. Feb 13 20:54:08.969601 sshd[3423]: Accepted publickey for core from 10.0.0.1 port 37094 ssh2: RSA SHA256:cMrn/QNbOj3/muQN24sDQmRTV2Hbz1QD+PbuA/uipHg Feb 13 20:54:08.970827 sshd[3423]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 20:54:08.974992 systemd-logind[1417]: New session 43 of user core. Feb 13 20:54:08.983336 systemd[1]: Started session-43.scope - Session 43 of User core. Feb 13 20:54:09.090201 sshd[3423]: pam_unix(sshd:session): session closed for user core Feb 13 20:54:09.093440 systemd[1]: sshd@42-10.0.0.6:22-10.0.0.1:37094.service: Deactivated successfully. Feb 13 20:54:09.095212 systemd[1]: session-43.scope: Deactivated successfully. Feb 13 20:54:09.095756 systemd-logind[1417]: Session 43 logged out. Waiting for processes to exit. Feb 13 20:54:09.096697 systemd-logind[1417]: Removed session 43. Feb 13 20:54:09.563116 kubelet[2505]: E0213 20:54:09.562958 2505 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 20:54:09.564374 kubelet[2505]: E0213 20:54:09.564270 2505 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"install-cni-plugin\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/flannel/flannel-cni-plugin:v1.1.2\\\"\"" pod="kube-flannel/kube-flannel-ds-vgntp" podUID="f7a2b434-ec01-4de2-9f31-4fa14d12ea17" Feb 13 20:54:11.640903 kubelet[2505]: E0213 20:54:11.640841 2505 kubelet.go:2900] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Feb 13 20:54:14.109782 systemd[1]: Started sshd@43-10.0.0.6:22-10.0.0.1:44364.service - OpenSSH per-connection server daemon (10.0.0.1:44364). Feb 13 20:54:14.141421 sshd[3437]: Accepted publickey for core from 10.0.0.1 port 44364 ssh2: RSA SHA256:cMrn/QNbOj3/muQN24sDQmRTV2Hbz1QD+PbuA/uipHg Feb 13 20:54:14.142575 sshd[3437]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 20:54:14.145917 systemd-logind[1417]: New session 44 of user core. Feb 13 20:54:14.157318 systemd[1]: Started session-44.scope - Session 44 of User core. 
Feb 13 20:54:14.262567 sshd[3437]: pam_unix(sshd:session): session closed for user core Feb 13 20:54:14.265708 systemd[1]: sshd@43-10.0.0.6:22-10.0.0.1:44364.service: Deactivated successfully. Feb 13 20:54:14.267471 systemd[1]: session-44.scope: Deactivated successfully. Feb 13 20:54:14.268859 systemd-logind[1417]: Session 44 logged out. Waiting for processes to exit. Feb 13 20:54:14.270389 systemd-logind[1417]: Removed session 44. Feb 13 20:54:16.642569 kubelet[2505]: E0213 20:54:16.642518 2505 kubelet.go:2900] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Feb 13 20:54:19.276948 systemd[1]: Started sshd@44-10.0.0.6:22-10.0.0.1:44376.service - OpenSSH per-connection server daemon (10.0.0.1:44376). Feb 13 20:54:19.308568 sshd[3452]: Accepted publickey for core from 10.0.0.1 port 44376 ssh2: RSA SHA256:cMrn/QNbOj3/muQN24sDQmRTV2Hbz1QD+PbuA/uipHg Feb 13 20:54:19.309804 sshd[3452]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 20:54:19.313960 systemd-logind[1417]: New session 45 of user core. Feb 13 20:54:19.323340 systemd[1]: Started session-45.scope - Session 45 of User core. Feb 13 20:54:19.429147 sshd[3452]: pam_unix(sshd:session): session closed for user core Feb 13 20:54:19.432220 systemd[1]: sshd@44-10.0.0.6:22-10.0.0.1:44376.service: Deactivated successfully. Feb 13 20:54:19.434572 systemd[1]: session-45.scope: Deactivated successfully. Feb 13 20:54:19.435327 systemd-logind[1417]: Session 45 logged out. Waiting for processes to exit. Feb 13 20:54:19.436170 systemd-logind[1417]: Removed session 45. Feb 13 20:54:20.563810 kubelet[2505]: E0213 20:54:20.563612 2505 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 20:54:20.564938 kubelet[2505]: E0213 20:54:20.564900 2505 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"install-cni-plugin\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/flannel/flannel-cni-plugin:v1.1.2\\\"\"" pod="kube-flannel/kube-flannel-ds-vgntp" podUID="f7a2b434-ec01-4de2-9f31-4fa14d12ea17" Feb 13 20:54:21.643169 kubelet[2505]: E0213 20:54:21.643125 2505 kubelet.go:2900] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Feb 13 20:54:24.441638 systemd[1]: Started sshd@45-10.0.0.6:22-10.0.0.1:41234.service - OpenSSH per-connection server daemon (10.0.0.1:41234). Feb 13 20:54:24.472939 sshd[3466]: Accepted publickey for core from 10.0.0.1 port 41234 ssh2: RSA SHA256:cMrn/QNbOj3/muQN24sDQmRTV2Hbz1QD+PbuA/uipHg Feb 13 20:54:24.474095 sshd[3466]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 20:54:24.477394 systemd-logind[1417]: New session 46 of user core. Feb 13 20:54:24.488387 systemd[1]: Started session-46.scope - Session 46 of User core. Feb 13 20:54:24.591533 sshd[3466]: pam_unix(sshd:session): session closed for user core Feb 13 20:54:24.595267 systemd[1]: sshd@45-10.0.0.6:22-10.0.0.1:41234.service: Deactivated successfully. Feb 13 20:54:24.597481 systemd[1]: session-46.scope: Deactivated successfully. Feb 13 20:54:24.598040 systemd-logind[1417]: Session 46 logged out. Waiting for processes to exit. Feb 13 20:54:24.598806 systemd-logind[1417]: Removed session 46. 
Feb 13 20:54:26.644310 kubelet[2505]: E0213 20:54:26.644265 2505 kubelet.go:2900] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Feb 13 20:54:29.602597 systemd[1]: Started sshd@46-10.0.0.6:22-10.0.0.1:41238.service - OpenSSH per-connection server daemon (10.0.0.1:41238). Feb 13 20:54:29.634390 sshd[3482]: Accepted publickey for core from 10.0.0.1 port 41238 ssh2: RSA SHA256:cMrn/QNbOj3/muQN24sDQmRTV2Hbz1QD+PbuA/uipHg Feb 13 20:54:29.635518 sshd[3482]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 20:54:29.639107 systemd-logind[1417]: New session 47 of user core. Feb 13 20:54:29.653371 systemd[1]: Started session-47.scope - Session 47 of User core. Feb 13 20:54:29.757911 sshd[3482]: pam_unix(sshd:session): session closed for user core Feb 13 20:54:29.760967 systemd[1]: sshd@46-10.0.0.6:22-10.0.0.1:41238.service: Deactivated successfully. Feb 13 20:54:29.762610 systemd[1]: session-47.scope: Deactivated successfully. Feb 13 20:54:29.763192 systemd-logind[1417]: Session 47 logged out. Waiting for processes to exit. Feb 13 20:54:29.763944 systemd-logind[1417]: Removed session 47. Feb 13 20:54:31.645486 kubelet[2505]: E0213 20:54:31.645414 2505 kubelet.go:2900] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Feb 13 20:54:33.563244 kubelet[2505]: E0213 20:54:33.563127 2505 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 20:54:33.563882 kubelet[2505]: E0213 20:54:33.563836 2505 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"install-cni-plugin\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/flannel/flannel-cni-plugin:v1.1.2\\\"\"" pod="kube-flannel/kube-flannel-ds-vgntp" podUID="f7a2b434-ec01-4de2-9f31-4fa14d12ea17" Feb 13 20:54:34.768557 systemd[1]: Started sshd@47-10.0.0.6:22-10.0.0.1:38668.service - OpenSSH per-connection server daemon (10.0.0.1:38668). Feb 13 20:54:34.799795 sshd[3496]: Accepted publickey for core from 10.0.0.1 port 38668 ssh2: RSA SHA256:cMrn/QNbOj3/muQN24sDQmRTV2Hbz1QD+PbuA/uipHg Feb 13 20:54:34.800886 sshd[3496]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 20:54:34.804789 systemd-logind[1417]: New session 48 of user core. Feb 13 20:54:34.814305 systemd[1]: Started session-48.scope - Session 48 of User core. Feb 13 20:54:34.918945 sshd[3496]: pam_unix(sshd:session): session closed for user core Feb 13 20:54:34.921970 systemd[1]: sshd@47-10.0.0.6:22-10.0.0.1:38668.service: Deactivated successfully. Feb 13 20:54:34.923650 systemd[1]: session-48.scope: Deactivated successfully. Feb 13 20:54:34.924301 systemd-logind[1417]: Session 48 logged out. Waiting for processes to exit. Feb 13 20:54:34.925362 systemd-logind[1417]: Removed session 48. Feb 13 20:54:36.647236 kubelet[2505]: E0213 20:54:36.647158 2505 kubelet.go:2900] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Feb 13 20:54:39.929610 systemd[1]: Started sshd@48-10.0.0.6:22-10.0.0.1:38670.service - OpenSSH per-connection server daemon (10.0.0.1:38670). 
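The "Container runtime network not ready" lines recurring every five seconds share a root cause with the pull failure: the flannel init steps install both the CNI binary (the cp shown above) and a network config under /etc/cni/net.d, and until they complete, containerd reports NetworkReady=false and workload pods that need the cluster network cannot start. For reference, the config the stock kube-flannel manifest eventually writes looks roughly like this (a sketch from the upstream manifest; the exact conflist depends on the flannel version deployed):

  /etc/cni/net.d/10-flannel.conflist:
  {
    "name": "cbr0",
    "cniVersion": "0.3.1",
    "plugins": [
      { "type": "flannel",
        "delegate": { "hairpinMode": true, "isDefaultGateway": true } },
      { "type": "portmap",
        "capabilities": { "portMappings": true } }
    ]
  }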
Feb 13 20:54:39.961024 sshd[3511]: Accepted publickey for core from 10.0.0.1 port 38670 ssh2: RSA SHA256:cMrn/QNbOj3/muQN24sDQmRTV2Hbz1QD+PbuA/uipHg Feb 13 20:54:39.962147 sshd[3511]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 20:54:39.965526 systemd-logind[1417]: New session 49 of user core. Feb 13 20:54:39.981381 systemd[1]: Started session-49.scope - Session 49 of User core. Feb 13 20:54:40.088281 sshd[3511]: pam_unix(sshd:session): session closed for user core Feb 13 20:54:40.091558 systemd[1]: sshd@48-10.0.0.6:22-10.0.0.1:38670.service: Deactivated successfully. Feb 13 20:54:40.094246 systemd[1]: session-49.scope: Deactivated successfully. Feb 13 20:54:40.094942 systemd-logind[1417]: Session 49 logged out. Waiting for processes to exit. Feb 13 20:54:40.095776 systemd-logind[1417]: Removed session 49. Feb 13 20:54:41.648390 kubelet[2505]: E0213 20:54:41.648351 2505 kubelet.go:2900] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Feb 13 20:54:44.563494 kubelet[2505]: E0213 20:54:44.563463 2505 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 20:54:45.098670 systemd[1]: Started sshd@49-10.0.0.6:22-10.0.0.1:49626.service - OpenSSH per-connection server daemon (10.0.0.1:49626). Feb 13 20:54:45.131885 sshd[3528]: Accepted publickey for core from 10.0.0.1 port 49626 ssh2: RSA SHA256:cMrn/QNbOj3/muQN24sDQmRTV2Hbz1QD+PbuA/uipHg Feb 13 20:54:45.133041 sshd[3528]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 20:54:45.138291 systemd-logind[1417]: New session 50 of user core. Feb 13 20:54:45.145376 systemd[1]: Started session-50.scope - Session 50 of User core. Feb 13 20:54:45.249387 sshd[3528]: pam_unix(sshd:session): session closed for user core Feb 13 20:54:45.252579 systemd[1]: sshd@49-10.0.0.6:22-10.0.0.1:49626.service: Deactivated successfully. Feb 13 20:54:45.254295 systemd[1]: session-50.scope: Deactivated successfully. Feb 13 20:54:45.255034 systemd-logind[1417]: Session 50 logged out. Waiting for processes to exit. Feb 13 20:54:45.256001 systemd-logind[1417]: Removed session 50. 
Feb 13 20:54:45.563474 kubelet[2505]: E0213 20:54:45.563387 2505 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 20:54:46.649909 kubelet[2505]: E0213 20:54:46.649873 2505 kubelet.go:2900] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Feb 13 20:54:48.563520 kubelet[2505]: E0213 20:54:48.563490 2505 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 20:54:48.564233 kubelet[2505]: E0213 20:54:48.564161 2505 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"install-cni-plugin\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/flannel/flannel-cni-plugin:v1.1.2\\\"\"" pod="kube-flannel/kube-flannel-ds-vgntp" podUID="f7a2b434-ec01-4de2-9f31-4fa14d12ea17" Feb 13 20:54:50.263685 systemd[1]: Started sshd@50-10.0.0.6:22-10.0.0.1:49634.service - OpenSSH per-connection server daemon (10.0.0.1:49634). Feb 13 20:54:50.295801 sshd[3542]: Accepted publickey for core from 10.0.0.1 port 49634 ssh2: RSA SHA256:cMrn/QNbOj3/muQN24sDQmRTV2Hbz1QD+PbuA/uipHg Feb 13 20:54:50.296917 sshd[3542]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 20:54:50.300839 systemd-logind[1417]: New session 51 of user core. Feb 13 20:54:50.319400 systemd[1]: Started session-51.scope - Session 51 of User core. Feb 13 20:54:50.422770 sshd[3542]: pam_unix(sshd:session): session closed for user core Feb 13 20:54:50.426212 systemd[1]: sshd@50-10.0.0.6:22-10.0.0.1:49634.service: Deactivated successfully. Feb 13 20:54:50.428287 systemd[1]: session-51.scope: Deactivated successfully. Feb 13 20:54:50.428990 systemd-logind[1417]: Session 51 logged out. Waiting for processes to exit. Feb 13 20:54:50.429807 systemd-logind[1417]: Removed session 51. Feb 13 20:54:51.650849 kubelet[2505]: E0213 20:54:51.650791 2505 kubelet.go:2900] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Feb 13 20:54:55.433682 systemd[1]: Started sshd@51-10.0.0.6:22-10.0.0.1:59120.service - OpenSSH per-connection server daemon (10.0.0.1:59120). Feb 13 20:54:55.466081 sshd[3556]: Accepted publickey for core from 10.0.0.1 port 59120 ssh2: RSA SHA256:cMrn/QNbOj3/muQN24sDQmRTV2Hbz1QD+PbuA/uipHg Feb 13 20:54:55.467403 sshd[3556]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 20:54:55.471257 systemd-logind[1417]: New session 52 of user core. Feb 13 20:54:55.482337 systemd[1]: Started session-52.scope - Session 52 of User core. Feb 13 20:54:55.563339 kubelet[2505]: E0213 20:54:55.563311 2505 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 20:54:55.587624 sshd[3556]: pam_unix(sshd:session): session closed for user core Feb 13 20:54:55.590937 systemd[1]: sshd@51-10.0.0.6:22-10.0.0.1:59120.service: Deactivated successfully. Feb 13 20:54:55.592567 systemd[1]: session-52.scope: Deactivated successfully. Feb 13 20:54:55.594119 systemd-logind[1417]: Session 52 logged out. Waiting for processes to exit. 
Feb 13 20:54:55.595688 systemd-logind[1417]: Removed session 52. Feb 13 20:54:56.651988 kubelet[2505]: E0213 20:54:56.651949 2505 kubelet.go:2900] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Feb 13 20:55:00.562817 kubelet[2505]: E0213 20:55:00.562778 2505 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 20:55:00.563442 kubelet[2505]: E0213 20:55:00.563393 2505 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"install-cni-plugin\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/flannel/flannel-cni-plugin:v1.1.2\\\"\"" pod="kube-flannel/kube-flannel-ds-vgntp" podUID="f7a2b434-ec01-4de2-9f31-4fa14d12ea17" Feb 13 20:55:00.598161 systemd[1]: Started sshd@52-10.0.0.6:22-10.0.0.1:59124.service - OpenSSH per-connection server daemon (10.0.0.1:59124). Feb 13 20:55:00.629710 sshd[3572]: Accepted publickey for core from 10.0.0.1 port 59124 ssh2: RSA SHA256:cMrn/QNbOj3/muQN24sDQmRTV2Hbz1QD+PbuA/uipHg Feb 13 20:55:00.630921 sshd[3572]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 20:55:00.634799 systemd-logind[1417]: New session 53 of user core. Feb 13 20:55:00.643301 systemd[1]: Started session-53.scope - Session 53 of User core. Feb 13 20:55:00.748490 sshd[3572]: pam_unix(sshd:session): session closed for user core Feb 13 20:55:00.751622 systemd[1]: sshd@52-10.0.0.6:22-10.0.0.1:59124.service: Deactivated successfully. Feb 13 20:55:00.753231 systemd[1]: session-53.scope: Deactivated successfully. Feb 13 20:55:00.754279 systemd-logind[1417]: Session 53 logged out. Waiting for processes to exit. Feb 13 20:55:00.755198 systemd-logind[1417]: Removed session 53. Feb 13 20:55:01.652840 kubelet[2505]: E0213 20:55:01.652794 2505 kubelet.go:2900] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Feb 13 20:55:05.758596 systemd[1]: Started sshd@53-10.0.0.6:22-10.0.0.1:46078.service - OpenSSH per-connection server daemon (10.0.0.1:46078). Feb 13 20:55:05.790080 sshd[3587]: Accepted publickey for core from 10.0.0.1 port 46078 ssh2: RSA SHA256:cMrn/QNbOj3/muQN24sDQmRTV2Hbz1QD+PbuA/uipHg Feb 13 20:55:05.791301 sshd[3587]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 20:55:05.794488 systemd-logind[1417]: New session 54 of user core. Feb 13 20:55:05.804304 systemd[1]: Started session-54.scope - Session 54 of User core. Feb 13 20:55:05.907386 sshd[3587]: pam_unix(sshd:session): session closed for user core Feb 13 20:55:05.910797 systemd[1]: sshd@53-10.0.0.6:22-10.0.0.1:46078.service: Deactivated successfully. Feb 13 20:55:05.912371 systemd[1]: session-54.scope: Deactivated successfully. Feb 13 20:55:05.912998 systemd-logind[1417]: Session 54 logged out. Waiting for processes to exit. Feb 13 20:55:05.914095 systemd-logind[1417]: Removed session 54. 
Feb 13 20:55:06.654441 kubelet[2505]: E0213 20:55:06.654396 2505 kubelet.go:2900] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Feb 13 20:55:10.917571 systemd[1]: Started sshd@54-10.0.0.6:22-10.0.0.1:46088.service - OpenSSH per-connection server daemon (10.0.0.1:46088). Feb 13 20:55:10.949059 sshd[3601]: Accepted publickey for core from 10.0.0.1 port 46088 ssh2: RSA SHA256:cMrn/QNbOj3/muQN24sDQmRTV2Hbz1QD+PbuA/uipHg Feb 13 20:55:10.950219 sshd[3601]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 20:55:10.953486 systemd-logind[1417]: New session 55 of user core. Feb 13 20:55:10.963388 systemd[1]: Started session-55.scope - Session 55 of User core. Feb 13 20:55:11.066997 sshd[3601]: pam_unix(sshd:session): session closed for user core Feb 13 20:55:11.070285 systemd[1]: sshd@54-10.0.0.6:22-10.0.0.1:46088.service: Deactivated successfully. Feb 13 20:55:11.071833 systemd[1]: session-55.scope: Deactivated successfully. Feb 13 20:55:11.073035 systemd-logind[1417]: Session 55 logged out. Waiting for processes to exit. Feb 13 20:55:11.073942 systemd-logind[1417]: Removed session 55. Feb 13 20:55:11.655557 kubelet[2505]: E0213 20:55:11.655519 2505 kubelet.go:2900] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Feb 13 20:55:14.562927 kubelet[2505]: E0213 20:55:14.562887 2505 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 20:55:14.563737 kubelet[2505]: E0213 20:55:14.563524 2505 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"install-cni-plugin\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/flannel/flannel-cni-plugin:v1.1.2\\\"\"" pod="kube-flannel/kube-flannel-ds-vgntp" podUID="f7a2b434-ec01-4de2-9f31-4fa14d12ea17" Feb 13 20:55:15.563514 kubelet[2505]: E0213 20:55:15.563480 2505 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 20:55:16.078728 systemd[1]: Started sshd@55-10.0.0.6:22-10.0.0.1:44824.service - OpenSSH per-connection server daemon (10.0.0.1:44824). Feb 13 20:55:16.109775 sshd[3615]: Accepted publickey for core from 10.0.0.1 port 44824 ssh2: RSA SHA256:cMrn/QNbOj3/muQN24sDQmRTV2Hbz1QD+PbuA/uipHg Feb 13 20:55:16.110978 sshd[3615]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 20:55:16.116091 systemd-logind[1417]: New session 56 of user core. Feb 13 20:55:16.126315 systemd[1]: Started session-56.scope - Session 56 of User core. Feb 13 20:55:16.234932 sshd[3615]: pam_unix(sshd:session): session closed for user core Feb 13 20:55:16.238512 systemd[1]: sshd@55-10.0.0.6:22-10.0.0.1:44824.service: Deactivated successfully. Feb 13 20:55:16.240147 systemd[1]: session-56.scope: Deactivated successfully. Feb 13 20:55:16.240810 systemd-logind[1417]: Session 56 logged out. Waiting for processes to exit. Feb 13 20:55:16.241657 systemd-logind[1417]: Removed session 56. 
Feb 13 20:55:16.656502 kubelet[2505]: E0213 20:55:16.656464 2505 kubelet.go:2900] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Feb 13 20:55:21.245584 systemd[1]: Started sshd@56-10.0.0.6:22-10.0.0.1:44838.service - OpenSSH per-connection server daemon (10.0.0.1:44838). Feb 13 20:55:21.277257 sshd[3629]: Accepted publickey for core from 10.0.0.1 port 44838 ssh2: RSA SHA256:cMrn/QNbOj3/muQN24sDQmRTV2Hbz1QD+PbuA/uipHg Feb 13 20:55:21.278389 sshd[3629]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 20:55:21.281838 systemd-logind[1417]: New session 57 of user core. Feb 13 20:55:21.291313 systemd[1]: Started session-57.scope - Session 57 of User core. Feb 13 20:55:21.397055 sshd[3629]: pam_unix(sshd:session): session closed for user core Feb 13 20:55:21.400260 systemd[1]: sshd@56-10.0.0.6:22-10.0.0.1:44838.service: Deactivated successfully. Feb 13 20:55:21.402440 systemd[1]: session-57.scope: Deactivated successfully. Feb 13 20:55:21.403082 systemd-logind[1417]: Session 57 logged out. Waiting for processes to exit. Feb 13 20:55:21.403829 systemd-logind[1417]: Removed session 57. Feb 13 20:55:21.657435 kubelet[2505]: E0213 20:55:21.657392 2505 kubelet.go:2900] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Feb 13 20:55:25.563943 kubelet[2505]: E0213 20:55:25.563797 2505 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 20:55:25.564868 kubelet[2505]: E0213 20:55:25.564669 2505 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"install-cni-plugin\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/flannel/flannel-cni-plugin:v1.1.2\\\"\"" pod="kube-flannel/kube-flannel-ds-vgntp" podUID="f7a2b434-ec01-4de2-9f31-4fa14d12ea17" Feb 13 20:55:26.411660 systemd[1]: Started sshd@57-10.0.0.6:22-10.0.0.1:46322.service - OpenSSH per-connection server daemon (10.0.0.1:46322). Feb 13 20:55:26.443483 sshd[3644]: Accepted publickey for core from 10.0.0.1 port 46322 ssh2: RSA SHA256:cMrn/QNbOj3/muQN24sDQmRTV2Hbz1QD+PbuA/uipHg Feb 13 20:55:26.444677 sshd[3644]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 20:55:26.448401 systemd-logind[1417]: New session 58 of user core. Feb 13 20:55:26.460305 systemd[1]: Started session-58.scope - Session 58 of User core. Feb 13 20:55:26.565797 sshd[3644]: pam_unix(sshd:session): session closed for user core Feb 13 20:55:26.568829 systemd[1]: sshd@57-10.0.0.6:22-10.0.0.1:46322.service: Deactivated successfully. Feb 13 20:55:26.571982 systemd[1]: session-58.scope: Deactivated successfully. Feb 13 20:55:26.572726 systemd-logind[1417]: Session 58 logged out. Waiting for processes to exit. Feb 13 20:55:26.573719 systemd-logind[1417]: Removed session 58. Feb 13 20:55:26.658150 kubelet[2505]: E0213 20:55:26.658110 2505 kubelet.go:2900] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Feb 13 20:55:31.577552 systemd[1]: Started sshd@58-10.0.0.6:22-10.0.0.1:46332.service - OpenSSH per-connection server daemon (10.0.0.1:46332). 
Feb 13 20:55:31.608888 sshd[3660]: Accepted publickey for core from 10.0.0.1 port 46332 ssh2: RSA SHA256:cMrn/QNbOj3/muQN24sDQmRTV2Hbz1QD+PbuA/uipHg Feb 13 20:55:31.610030 sshd[3660]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 20:55:31.613846 systemd-logind[1417]: New session 59 of user core. Feb 13 20:55:31.624311 systemd[1]: Started session-59.scope - Session 59 of User core. Feb 13 20:55:31.658688 kubelet[2505]: E0213 20:55:31.658651 2505 kubelet.go:2900] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Feb 13 20:55:31.729577 sshd[3660]: pam_unix(sshd:session): session closed for user core Feb 13 20:55:31.732671 systemd[1]: sshd@58-10.0.0.6:22-10.0.0.1:46332.service: Deactivated successfully. Feb 13 20:55:31.734272 systemd[1]: session-59.scope: Deactivated successfully. Feb 13 20:55:31.734828 systemd-logind[1417]: Session 59 logged out. Waiting for processes to exit. Feb 13 20:55:31.735608 systemd-logind[1417]: Removed session 59. Feb 13 20:55:36.659641 kubelet[2505]: E0213 20:55:36.659604 2505 kubelet.go:2900] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Feb 13 20:55:36.740551 systemd[1]: Started sshd@59-10.0.0.6:22-10.0.0.1:47756.service - OpenSSH per-connection server daemon (10.0.0.1:47756). Feb 13 20:55:36.771895 sshd[3674]: Accepted publickey for core from 10.0.0.1 port 47756 ssh2: RSA SHA256:cMrn/QNbOj3/muQN24sDQmRTV2Hbz1QD+PbuA/uipHg Feb 13 20:55:36.773042 sshd[3674]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 20:55:36.776483 systemd-logind[1417]: New session 60 of user core. Feb 13 20:55:36.784309 systemd[1]: Started session-60.scope - Session 60 of User core. Feb 13 20:55:36.891389 sshd[3674]: pam_unix(sshd:session): session closed for user core Feb 13 20:55:36.894416 systemd[1]: sshd@59-10.0.0.6:22-10.0.0.1:47756.service: Deactivated successfully. Feb 13 20:55:36.896985 systemd[1]: session-60.scope: Deactivated successfully. Feb 13 20:55:36.897843 systemd-logind[1417]: Session 60 logged out. Waiting for processes to exit. Feb 13 20:55:36.898672 systemd-logind[1417]: Removed session 60. Feb 13 20:55:39.564015 kubelet[2505]: E0213 20:55:39.563586 2505 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 20:55:39.564861 kubelet[2505]: E0213 20:55:39.564657 2505 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"install-cni-plugin\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/flannel/flannel-cni-plugin:v1.1.2\\\"\"" pod="kube-flannel/kube-flannel-ds-vgntp" podUID="f7a2b434-ec01-4de2-9f31-4fa14d12ea17" Feb 13 20:55:41.660733 kubelet[2505]: E0213 20:55:41.660688 2505 kubelet.go:2900] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Feb 13 20:55:41.904363 systemd[1]: Started sshd@60-10.0.0.6:22-10.0.0.1:47758.service - OpenSSH per-connection server daemon (10.0.0.1:47758). 
Feb 13 20:55:41.935846 sshd[3691]: Accepted publickey for core from 10.0.0.1 port 47758 ssh2: RSA SHA256:cMrn/QNbOj3/muQN24sDQmRTV2Hbz1QD+PbuA/uipHg Feb 13 20:55:41.936951 sshd[3691]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 20:55:41.940186 systemd-logind[1417]: New session 61 of user core. Feb 13 20:55:41.946367 systemd[1]: Started session-61.scope - Session 61 of User core. Feb 13 20:55:42.052100 sshd[3691]: pam_unix(sshd:session): session closed for user core Feb 13 20:55:42.055847 systemd[1]: sshd@60-10.0.0.6:22-10.0.0.1:47758.service: Deactivated successfully. Feb 13 20:55:42.057911 systemd[1]: session-61.scope: Deactivated successfully. Feb 13 20:55:42.058891 systemd-logind[1417]: Session 61 logged out. Waiting for processes to exit. Feb 13 20:55:42.059903 systemd-logind[1417]: Removed session 61. Feb 13 20:55:46.661495 kubelet[2505]: E0213 20:55:46.661452 2505 kubelet.go:2900] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Feb 13 20:55:47.066684 systemd[1]: Started sshd@61-10.0.0.6:22-10.0.0.1:36162.service - OpenSSH per-connection server daemon (10.0.0.1:36162). Feb 13 20:55:47.098067 sshd[3705]: Accepted publickey for core from 10.0.0.1 port 36162 ssh2: RSA SHA256:cMrn/QNbOj3/muQN24sDQmRTV2Hbz1QD+PbuA/uipHg Feb 13 20:55:47.099276 sshd[3705]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 20:55:47.105994 systemd-logind[1417]: New session 62 of user core. Feb 13 20:55:47.115378 systemd[1]: Started session-62.scope - Session 62 of User core. Feb 13 20:55:47.224532 sshd[3705]: pam_unix(sshd:session): session closed for user core Feb 13 20:55:47.228285 systemd[1]: sshd@61-10.0.0.6:22-10.0.0.1:36162.service: Deactivated successfully. Feb 13 20:55:47.230506 systemd[1]: session-62.scope: Deactivated successfully. Feb 13 20:55:47.231125 systemd-logind[1417]: Session 62 logged out. Waiting for processes to exit. Feb 13 20:55:47.231919 systemd-logind[1417]: Removed session 62. Feb 13 20:55:50.563704 kubelet[2505]: E0213 20:55:50.563670 2505 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 20:55:50.564400 kubelet[2505]: E0213 20:55:50.563738 2505 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 20:55:50.564400 kubelet[2505]: E0213 20:55:50.564260 2505 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"install-cni-plugin\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/flannel/flannel-cni-plugin:v1.1.2\\\"\"" pod="kube-flannel/kube-flannel-ds-vgntp" podUID="f7a2b434-ec01-4de2-9f31-4fa14d12ea17" Feb 13 20:55:51.662941 kubelet[2505]: E0213 20:55:51.662891 2505 kubelet.go:2900] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Feb 13 20:55:52.235593 systemd[1]: Started sshd@62-10.0.0.6:22-10.0.0.1:36164.service - OpenSSH per-connection server daemon (10.0.0.1:36164). 
Feb 13 20:55:52.266844 sshd[3722]: Accepted publickey for core from 10.0.0.1 port 36164 ssh2: RSA SHA256:cMrn/QNbOj3/muQN24sDQmRTV2Hbz1QD+PbuA/uipHg Feb 13 20:55:52.268026 sshd[3722]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 20:55:52.272082 systemd-logind[1417]: New session 63 of user core. Feb 13 20:55:52.278308 systemd[1]: Started session-63.scope - Session 63 of User core. Feb 13 20:55:52.384418 sshd[3722]: pam_unix(sshd:session): session closed for user core Feb 13 20:55:52.387568 systemd[1]: sshd@62-10.0.0.6:22-10.0.0.1:36164.service: Deactivated successfully. Feb 13 20:55:52.390014 systemd[1]: session-63.scope: Deactivated successfully. Feb 13 20:55:52.390679 systemd-logind[1417]: Session 63 logged out. Waiting for processes to exit. Feb 13 20:55:52.391572 systemd-logind[1417]: Removed session 63. Feb 13 20:55:54.563814 kubelet[2505]: E0213 20:55:54.563773 2505 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 20:55:56.664357 kubelet[2505]: E0213 20:55:56.664310 2505 kubelet.go:2900] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Feb 13 20:55:57.398707 systemd[1]: Started sshd@63-10.0.0.6:22-10.0.0.1:49792.service - OpenSSH per-connection server daemon (10.0.0.1:49792). Feb 13 20:55:57.430035 sshd[3737]: Accepted publickey for core from 10.0.0.1 port 49792 ssh2: RSA SHA256:cMrn/QNbOj3/muQN24sDQmRTV2Hbz1QD+PbuA/uipHg Feb 13 20:55:57.431210 sshd[3737]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 20:55:57.434738 systemd-logind[1417]: New session 64 of user core. Feb 13 20:55:57.445308 systemd[1]: Started session-64.scope - Session 64 of User core. Feb 13 20:55:57.551773 sshd[3737]: pam_unix(sshd:session): session closed for user core Feb 13 20:55:57.554972 systemd[1]: sshd@63-10.0.0.6:22-10.0.0.1:49792.service: Deactivated successfully. Feb 13 20:55:57.556610 systemd[1]: session-64.scope: Deactivated successfully. Feb 13 20:55:57.557324 systemd-logind[1417]: Session 64 logged out. Waiting for processes to exit. Feb 13 20:55:57.558219 systemd-logind[1417]: Removed session 64. Feb 13 20:56:01.665636 kubelet[2505]: E0213 20:56:01.665592 2505 kubelet.go:2900] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Feb 13 20:56:02.570620 systemd[1]: Started sshd@64-10.0.0.6:22-10.0.0.1:43692.service - OpenSSH per-connection server daemon (10.0.0.1:43692). Feb 13 20:56:02.601972 sshd[3754]: Accepted publickey for core from 10.0.0.1 port 43692 ssh2: RSA SHA256:cMrn/QNbOj3/muQN24sDQmRTV2Hbz1QD+PbuA/uipHg Feb 13 20:56:02.603125 sshd[3754]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 20:56:02.607302 systemd-logind[1417]: New session 65 of user core. Feb 13 20:56:02.615413 systemd[1]: Started session-65.scope - Session 65 of User core. Feb 13 20:56:02.721694 sshd[3754]: pam_unix(sshd:session): session closed for user core Feb 13 20:56:02.724824 systemd[1]: sshd@64-10.0.0.6:22-10.0.0.1:43692.service: Deactivated successfully. Feb 13 20:56:02.726517 systemd[1]: session-65.scope: Deactivated successfully. Feb 13 20:56:02.727605 systemd-logind[1417]: Session 65 logged out. Waiting for processes to exit. 
Feb 13 20:56:02.728930 systemd-logind[1417]: Removed session 65. Feb 13 20:56:05.563257 kubelet[2505]: E0213 20:56:05.563057 2505 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 20:56:05.563879 kubelet[2505]: E0213 20:56:05.563840 2505 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"install-cni-plugin\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/flannel/flannel-cni-plugin:v1.1.2\\\"\"" pod="kube-flannel/kube-flannel-ds-vgntp" podUID="f7a2b434-ec01-4de2-9f31-4fa14d12ea17" Feb 13 20:56:06.666364 kubelet[2505]: E0213 20:56:06.666324 2505 kubelet.go:2900] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Feb 13 20:56:07.732843 systemd[1]: Started sshd@65-10.0.0.6:22-10.0.0.1:43696.service - OpenSSH per-connection server daemon (10.0.0.1:43696). Feb 13 20:56:07.764149 sshd[3769]: Accepted publickey for core from 10.0.0.1 port 43696 ssh2: RSA SHA256:cMrn/QNbOj3/muQN24sDQmRTV2Hbz1QD+PbuA/uipHg Feb 13 20:56:07.765337 sshd[3769]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 20:56:07.769365 systemd-logind[1417]: New session 66 of user core. Feb 13 20:56:07.786325 systemd[1]: Started session-66.scope - Session 66 of User core. Feb 13 20:56:07.893105 sshd[3769]: pam_unix(sshd:session): session closed for user core Feb 13 20:56:07.896718 systemd[1]: sshd@65-10.0.0.6:22-10.0.0.1:43696.service: Deactivated successfully. Feb 13 20:56:07.898375 systemd[1]: session-66.scope: Deactivated successfully. Feb 13 20:56:07.898925 systemd-logind[1417]: Session 66 logged out. Waiting for processes to exit. Feb 13 20:56:07.899699 systemd-logind[1417]: Removed session 66. Feb 13 20:56:11.667014 kubelet[2505]: E0213 20:56:11.666975 2505 kubelet.go:2900] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Feb 13 20:56:12.903515 systemd[1]: Started sshd@66-10.0.0.6:22-10.0.0.1:52284.service - OpenSSH per-connection server daemon (10.0.0.1:52284). Feb 13 20:56:12.934810 sshd[3784]: Accepted publickey for core from 10.0.0.1 port 52284 ssh2: RSA SHA256:cMrn/QNbOj3/muQN24sDQmRTV2Hbz1QD+PbuA/uipHg Feb 13 20:56:12.935934 sshd[3784]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 20:56:12.939438 systemd-logind[1417]: New session 67 of user core. Feb 13 20:56:12.960310 systemd[1]: Started session-67.scope - Session 67 of User core. Feb 13 20:56:13.065625 sshd[3784]: pam_unix(sshd:session): session closed for user core Feb 13 20:56:13.068723 systemd[1]: sshd@66-10.0.0.6:22-10.0.0.1:52284.service: Deactivated successfully. Feb 13 20:56:13.071299 systemd[1]: session-67.scope: Deactivated successfully. Feb 13 20:56:13.072326 systemd-logind[1417]: Session 67 logged out. Waiting for processes to exit. Feb 13 20:56:13.073571 systemd-logind[1417]: Removed session 67. 
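Since every retry so far has hit the same 429, it is worth confirming the node is back under Docker Hub's anonymous limit before the next attempt. Docker documents a probe against a dedicated test repository; a sketch needing only curl and jq (per Docker's docs a HEAD request should not itself consume a pull, but treat the endpoint and header names as the docs describe them, not as guarantees):

  TOKEN=$(curl -fsSL "https://auth.docker.io/token?service=registry.docker.io&scope=repository:ratelimitpreview/test:pull" | jq -r .token)
  curl -fsSI -H "Authorization: Bearer $TOKEN" \
      "https://registry-1.docker.io/v2/ratelimitpreview/test/manifests/latest" \
      | grep -i '^ratelimit'
  # Expected output resembles:
  #   ratelimit-limit: 100;w=21600      (100 pulls per 6 h, anonymous)
  #   ratelimit-remaining: 0;w=21600    (quota exhausted -> more 429s)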
Feb 13 20:56:16.668008 kubelet[2505]: E0213 20:56:16.667960 2505 kubelet.go:2900] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Feb 13 20:56:18.076837 systemd[1]: Started sshd@67-10.0.0.6:22-10.0.0.1:52296.service - OpenSSH per-connection server daemon (10.0.0.1:52296). Feb 13 20:56:18.108423 sshd[3799]: Accepted publickey for core from 10.0.0.1 port 52296 ssh2: RSA SHA256:cMrn/QNbOj3/muQN24sDQmRTV2Hbz1QD+PbuA/uipHg Feb 13 20:56:18.109573 sshd[3799]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 20:56:18.112914 systemd-logind[1417]: New session 68 of user core. Feb 13 20:56:18.119328 systemd[1]: Started session-68.scope - Session 68 of User core. Feb 13 20:56:18.223933 sshd[3799]: pam_unix(sshd:session): session closed for user core Feb 13 20:56:18.227053 systemd[1]: sshd@67-10.0.0.6:22-10.0.0.1:52296.service: Deactivated successfully. Feb 13 20:56:18.228799 systemd[1]: session-68.scope: Deactivated successfully. Feb 13 20:56:18.229432 systemd-logind[1417]: Session 68 logged out. Waiting for processes to exit. Feb 13 20:56:18.230205 systemd-logind[1417]: Removed session 68. Feb 13 20:56:19.563707 kubelet[2505]: E0213 20:56:19.563545 2505 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 20:56:19.564254 kubelet[2505]: E0213 20:56:19.564224 2505 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"install-cni-plugin\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/flannel/flannel-cni-plugin:v1.1.2\\\"\"" pod="kube-flannel/kube-flannel-ds-vgntp" podUID="f7a2b434-ec01-4de2-9f31-4fa14d12ea17" Feb 13 20:56:21.669440 kubelet[2505]: E0213 20:56:21.669391 2505 kubelet.go:2900] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Feb 13 20:56:22.563482 kubelet[2505]: E0213 20:56:22.563395 2505 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 20:56:23.234752 systemd[1]: Started sshd@68-10.0.0.6:22-10.0.0.1:55098.service - OpenSSH per-connection server daemon (10.0.0.1:55098). Feb 13 20:56:23.266604 sshd[3813]: Accepted publickey for core from 10.0.0.1 port 55098 ssh2: RSA SHA256:cMrn/QNbOj3/muQN24sDQmRTV2Hbz1QD+PbuA/uipHg Feb 13 20:56:23.267712 sshd[3813]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 20:56:23.271098 systemd-logind[1417]: New session 69 of user core. Feb 13 20:56:23.278318 systemd[1]: Started session-69.scope - Session 69 of User core. Feb 13 20:56:23.385259 sshd[3813]: pam_unix(sshd:session): session closed for user core Feb 13 20:56:23.388364 systemd[1]: sshd@68-10.0.0.6:22-10.0.0.1:55098.service: Deactivated successfully. Feb 13 20:56:23.390042 systemd[1]: session-69.scope: Deactivated successfully. Feb 13 20:56:23.390654 systemd-logind[1417]: Session 69 logged out. Waiting for processes to exit. Feb 13 20:56:23.391447 systemd-logind[1417]: Removed session 69. 
Feb 13 20:56:26.671033 kubelet[2505]: E0213 20:56:26.670990 2505 kubelet.go:2900] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Feb 13 20:56:28.395775 systemd[1]: Started sshd@69-10.0.0.6:22-10.0.0.1:55114.service - OpenSSH per-connection server daemon (10.0.0.1:55114). Feb 13 20:56:28.427391 sshd[3830]: Accepted publickey for core from 10.0.0.1 port 55114 ssh2: RSA SHA256:cMrn/QNbOj3/muQN24sDQmRTV2Hbz1QD+PbuA/uipHg Feb 13 20:56:28.428580 sshd[3830]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 20:56:28.432717 systemd-logind[1417]: New session 70 of user core. Feb 13 20:56:28.450329 systemd[1]: Started session-70.scope - Session 70 of User core. Feb 13 20:56:28.556224 sshd[3830]: pam_unix(sshd:session): session closed for user core Feb 13 20:56:28.559475 systemd[1]: sshd@69-10.0.0.6:22-10.0.0.1:55114.service: Deactivated successfully. Feb 13 20:56:28.561254 systemd[1]: session-70.scope: Deactivated successfully. Feb 13 20:56:28.561778 systemd-logind[1417]: Session 70 logged out. Waiting for processes to exit. Feb 13 20:56:28.562658 systemd-logind[1417]: Removed session 70. Feb 13 20:56:28.563750 kubelet[2505]: E0213 20:56:28.563726 2505 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 20:56:31.564617 kubelet[2505]: E0213 20:56:31.564582 2505 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 20:56:31.565429 kubelet[2505]: E0213 20:56:31.565363 2505 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"install-cni-plugin\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/flannel/flannel-cni-plugin:v1.1.2\\\"\"" pod="kube-flannel/kube-flannel-ds-vgntp" podUID="f7a2b434-ec01-4de2-9f31-4fa14d12ea17" Feb 13 20:56:31.672271 kubelet[2505]: E0213 20:56:31.672233 2505 kubelet.go:2900] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Feb 13 20:56:33.573228 systemd[1]: Started sshd@70-10.0.0.6:22-10.0.0.1:46086.service - OpenSSH per-connection server daemon (10.0.0.1:46086). Feb 13 20:56:33.604979 sshd[3844]: Accepted publickey for core from 10.0.0.1 port 46086 ssh2: RSA SHA256:cMrn/QNbOj3/muQN24sDQmRTV2Hbz1QD+PbuA/uipHg Feb 13 20:56:33.606115 sshd[3844]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 20:56:33.609637 systemd-logind[1417]: New session 71 of user core. Feb 13 20:56:33.623324 systemd[1]: Started session-71.scope - Session 71 of User core. Feb 13 20:56:33.729690 sshd[3844]: pam_unix(sshd:session): session closed for user core Feb 13 20:56:33.732823 systemd[1]: sshd@70-10.0.0.6:22-10.0.0.1:46086.service: Deactivated successfully. Feb 13 20:56:33.734591 systemd[1]: session-71.scope: Deactivated successfully. Feb 13 20:56:33.736531 systemd-logind[1417]: Session 71 logged out. Waiting for processes to exit. Feb 13 20:56:33.737441 systemd-logind[1417]: Removed session 71. 
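One mitigation that keeps credentials out of the cluster is pulling docker.io through a mirror at the containerd level. containerd 1.5+ resolves per-registry host files; a sketch assuming the CRI registry config_path is set to /etc/containerd/certs.d, and with mirror.gcr.io used purely as an example mirror (whether it carries the flannel images must be verified):

  # /etc/containerd/certs.d/docker.io/hosts.toml
  server = "https://registry-1.docker.io"

  [host."https://mirror.gcr.io"]
    capabilities = ["pull", "resolve"]

  # Requires this opt-in in containerd's config.toml, then a restart:
  #   [plugins."io.containerd.grpc.v1.cri".registry]
  #     config_path = "/etc/containerd/certs.d"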
Feb 13 20:56:36.672986 kubelet[2505]: E0213 20:56:36.672935 2505 kubelet.go:2900] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Feb 13 20:56:38.740698 systemd[1]: Started sshd@71-10.0.0.6:22-10.0.0.1:46102.service - OpenSSH per-connection server daemon (10.0.0.1:46102). Feb 13 20:56:38.772477 sshd[3858]: Accepted publickey for core from 10.0.0.1 port 46102 ssh2: RSA SHA256:cMrn/QNbOj3/muQN24sDQmRTV2Hbz1QD+PbuA/uipHg Feb 13 20:56:38.773773 sshd[3858]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 20:56:38.777583 systemd-logind[1417]: New session 72 of user core. Feb 13 20:56:38.794313 systemd[1]: Started session-72.scope - Session 72 of User core. Feb 13 20:56:38.900942 sshd[3858]: pam_unix(sshd:session): session closed for user core Feb 13 20:56:38.904085 systemd[1]: sshd@71-10.0.0.6:22-10.0.0.1:46102.service: Deactivated successfully. Feb 13 20:56:38.906340 systemd[1]: session-72.scope: Deactivated successfully. Feb 13 20:56:38.907431 systemd-logind[1417]: Session 72 logged out. Waiting for processes to exit. Feb 13 20:56:38.908810 systemd-logind[1417]: Removed session 72. Feb 13 20:56:41.674348 kubelet[2505]: E0213 20:56:41.674297 2505 kubelet.go:2900] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Feb 13 20:56:43.915381 systemd[1]: Started sshd@72-10.0.0.6:22-10.0.0.1:42228.service - OpenSSH per-connection server daemon (10.0.0.1:42228). Feb 13 20:56:43.946532 sshd[3875]: Accepted publickey for core from 10.0.0.1 port 42228 ssh2: RSA SHA256:cMrn/QNbOj3/muQN24sDQmRTV2Hbz1QD+PbuA/uipHg Feb 13 20:56:43.947769 sshd[3875]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 20:56:43.951568 systemd-logind[1417]: New session 73 of user core. Feb 13 20:56:43.960292 systemd[1]: Started session-73.scope - Session 73 of User core. Feb 13 20:56:44.067009 sshd[3875]: pam_unix(sshd:session): session closed for user core Feb 13 20:56:44.070199 systemd[1]: sshd@72-10.0.0.6:22-10.0.0.1:42228.service: Deactivated successfully. Feb 13 20:56:44.071816 systemd[1]: session-73.scope: Deactivated successfully. Feb 13 20:56:44.072427 systemd-logind[1417]: Session 73 logged out. Waiting for processes to exit. Feb 13 20:56:44.073163 systemd-logind[1417]: Removed session 73. Feb 13 20:56:44.564059 kubelet[2505]: E0213 20:56:44.563828 2505 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 20:56:44.564978 containerd[1432]: time="2025-02-13T20:56:44.564924067Z" level=info msg="PullImage \"docker.io/flannel/flannel-cni-plugin:v1.1.2\"" Feb 13 20:56:45.678983 containerd[1432]: time="2025-02-13T20:56:45.678928506Z" level=error msg="PullImage \"docker.io/flannel/flannel-cni-plugin:v1.1.2\" failed" error="failed to pull and unpack image \"docker.io/flannel/flannel-cni-plugin:v1.1.2\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/flannel/flannel-cni-plugin/manifests/sha256:bf4b62b131666d040f35a327d906ee5a3418280b68a88d9b9c7e828057210443: 429 Too Many Requests - Server message: toomanyrequests: You have reached your pull rate limit. 
You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit" Feb 13 20:56:45.679462 containerd[1432]: time="2025-02-13T20:56:45.679010187Z" level=info msg="stop pulling image docker.io/flannel/flannel-cni-plugin:v1.1.2: active requests=0, bytes read=11110" Feb 13 20:56:45.679515 kubelet[2505]: E0213 20:56:45.679166 2505 remote_image.go:180] "PullImage from image service failed" err="rpc error: code = Unknown desc = failed to pull and unpack image \"docker.io/flannel/flannel-cni-plugin:v1.1.2\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/flannel/flannel-cni-plugin/manifests/sha256:bf4b62b131666d040f35a327d906ee5a3418280b68a88d9b9c7e828057210443: 429 Too Many Requests - Server message: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit" image="docker.io/flannel/flannel-cni-plugin:v1.1.2" Feb 13 20:56:45.679515 kubelet[2505]: E0213 20:56:45.679221 2505 kuberuntime_image.go:55] "Failed to pull image" err="failed to pull and unpack image \"docker.io/flannel/flannel-cni-plugin:v1.1.2\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/flannel/flannel-cni-plugin/manifests/sha256:bf4b62b131666d040f35a327d906ee5a3418280b68a88d9b9c7e828057210443: 429 Too Many Requests - Server message: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit" image="docker.io/flannel/flannel-cni-plugin:v1.1.2" Feb 13 20:56:45.679786 kubelet[2505]: E0213 20:56:45.679314 2505 kuberuntime_manager.go:1256] init container &Container{Name:install-cni-plugin,Image:docker.io/flannel/flannel-cni-plugin:v1.1.2,Command:[cp],Args:[-f /flannel /opt/cni/bin/flannel],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:cni-plugin,ReadOnly:false,MountPath:/opt/cni/bin,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-5pl52,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod kube-flannel-ds-vgntp_kube-flannel(f7a2b434-ec01-4de2-9f31-4fa14d12ea17): ErrImagePull: failed to pull and unpack image "docker.io/flannel/flannel-cni-plugin:v1.1.2": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/flannel/flannel-cni-plugin/manifests/sha256:bf4b62b131666d040f35a327d906ee5a3418280b68a88d9b9c7e828057210443: 429 Too Many Requests - Server message: toomanyrequests: You have reached your pull rate limit. 
You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit Feb 13 20:56:45.679846 kubelet[2505]: E0213 20:56:45.679343 2505 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"install-cni-plugin\" with ErrImagePull: \"failed to pull and unpack image \\\"docker.io/flannel/flannel-cni-plugin:v1.1.2\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/flannel/flannel-cni-plugin/manifests/sha256:bf4b62b131666d040f35a327d906ee5a3418280b68a88d9b9c7e828057210443: 429 Too Many Requests - Server message: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit\"" pod="kube-flannel/kube-flannel-ds-vgntp" podUID="f7a2b434-ec01-4de2-9f31-4fa14d12ea17" Feb 13 20:56:46.675973 kubelet[2505]: E0213 20:56:46.675936 2505 kubelet.go:2900] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Feb 13 20:56:49.082414 systemd[1]: Started sshd@73-10.0.0.6:22-10.0.0.1:42244.service - OpenSSH per-connection server daemon (10.0.0.1:42244). Feb 13 20:56:49.113683 sshd[3890]: Accepted publickey for core from 10.0.0.1 port 42244 ssh2: RSA SHA256:cMrn/QNbOj3/muQN24sDQmRTV2Hbz1QD+PbuA/uipHg Feb 13 20:56:49.114796 sshd[3890]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 20:56:49.118312 systemd-logind[1417]: New session 74 of user core. Feb 13 20:56:49.129358 systemd[1]: Started session-74.scope - Session 74 of User core. Feb 13 20:56:49.235946 sshd[3890]: pam_unix(sshd:session): session closed for user core Feb 13 20:56:49.239008 systemd[1]: sshd@73-10.0.0.6:22-10.0.0.1:42244.service: Deactivated successfully. Feb 13 20:56:49.241693 systemd[1]: session-74.scope: Deactivated successfully. Feb 13 20:56:49.242375 systemd-logind[1417]: Session 74 logged out. Waiting for processes to exit. Feb 13 20:56:49.243166 systemd-logind[1417]: Removed session 74. Feb 13 20:56:51.677627 kubelet[2505]: E0213 20:56:51.677578 2505 kubelet.go:2900] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Feb 13 20:56:53.563852 kubelet[2505]: E0213 20:56:53.563786 2505 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 20:56:54.248082 systemd[1]: Started sshd@74-10.0.0.6:22-10.0.0.1:36250.service - OpenSSH per-connection server daemon (10.0.0.1:36250). Feb 13 20:56:54.279559 sshd[3906]: Accepted publickey for core from 10.0.0.1 port 36250 ssh2: RSA SHA256:cMrn/QNbOj3/muQN24sDQmRTV2Hbz1QD+PbuA/uipHg Feb 13 20:56:54.280760 sshd[3906]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 20:56:54.284289 systemd-logind[1417]: New session 75 of user core. Feb 13 20:56:54.291314 systemd[1]: Started session-75.scope - Session 75 of User core. Feb 13 20:56:54.396106 sshd[3906]: pam_unix(sshd:session): session closed for user core Feb 13 20:56:54.399255 systemd[1]: sshd@74-10.0.0.6:22-10.0.0.1:36250.service: Deactivated successfully. Feb 13 20:56:54.400842 systemd[1]: session-75.scope: Deactivated successfully. Feb 13 20:56:54.402304 systemd-logind[1417]: Session 75 logged out. Waiting for processes to exit. 
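The failed PullImage above is the root cause of everything else in this section: registry-1.docker.io answered the manifest request with HTTP 429 (toomanyrequests), meaning the node has exhausted Docker Hub's anonymous pull allowance, so docker.io/flannel/flannel-cni-plugin:v1.1.2 never arrives and the install-cni-plugin init container can never start. Docker Hub reports the remaining allowance in ratelimit-* headers on a documented probe repository (ratelimitpreview/test); a minimal sketch of that probe, assuming the third-party requests package is available:

import requests  # pip install requests

# Anonymous pull token for the probe repo Docker documents for quota checks.
TOKEN_URL = ("https://auth.docker.io/token"
             "?service=registry.docker.io"
             "&scope=repository:ratelimitpreview/test:pull")
MANIFEST_URL = ("https://registry-1.docker.io/v2/"
                "ratelimitpreview/test/manifests/latest")

def pull_quota():
    token = requests.get(TOKEN_URL, timeout=10).json()["token"]
    resp = requests.head(MANIFEST_URL, timeout=10,
                         headers={"Authorization": f"Bearer {token}"})
    # Values look like "100;w=21600": pulls allowed per 21600s (6h) window.
    return (resp.headers.get("ratelimit-limit"),
            resp.headers.get("ratelimit-remaining"))

if __name__ == "__main__":
    limit, remaining = pull_quota()
    print(f"limit={limit} remaining={remaining}")

Once ratelimit-remaining hits zero, every further anonymous manifest fetch gets the same toomanyrequests answer until the window rolls over; authenticating the node's pulls or mirroring the image into a private registry is the usual way out, which is what the server message is hinting at.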
Feb 13 20:56:54.403760 systemd-logind[1417]: Removed session 75. Feb 13 20:56:56.679370 kubelet[2505]: E0213 20:56:56.679323 2505 kubelet.go:2900] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Feb 13 20:56:59.408013 systemd[1]: Started sshd@75-10.0.0.6:22-10.0.0.1:36260.service - OpenSSH per-connection server daemon (10.0.0.1:36260). Feb 13 20:56:59.439780 sshd[3923]: Accepted publickey for core from 10.0.0.1 port 36260 ssh2: RSA SHA256:cMrn/QNbOj3/muQN24sDQmRTV2Hbz1QD+PbuA/uipHg Feb 13 20:56:59.441068 sshd[3923]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 20:56:59.445445 systemd-logind[1417]: New session 76 of user core. Feb 13 20:56:59.449320 systemd[1]: Started session-76.scope - Session 76 of User core. Feb 13 20:56:59.555056 sshd[3923]: pam_unix(sshd:session): session closed for user core Feb 13 20:56:59.558472 systemd[1]: sshd@75-10.0.0.6:22-10.0.0.1:36260.service: Deactivated successfully. Feb 13 20:56:59.560657 systemd[1]: session-76.scope: Deactivated successfully. Feb 13 20:56:59.561260 systemd-logind[1417]: Session 76 logged out. Waiting for processes to exit. Feb 13 20:56:59.562013 systemd-logind[1417]: Removed session 76. Feb 13 20:56:59.563110 kubelet[2505]: E0213 20:56:59.562987 2505 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 20:56:59.564458 kubelet[2505]: E0213 20:56:59.564409 2505 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"install-cni-plugin\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/flannel/flannel-cni-plugin:v1.1.2\\\"\"" pod="kube-flannel/kube-flannel-ds-vgntp" podUID="f7a2b434-ec01-4de2-9f31-4fa14d12ea17" Feb 13 20:57:01.680585 kubelet[2505]: E0213 20:57:01.680533 2505 kubelet.go:2900] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Feb 13 20:57:04.569673 systemd[1]: Started sshd@76-10.0.0.6:22-10.0.0.1:49354.service - OpenSSH per-connection server daemon (10.0.0.1:49354). Feb 13 20:57:04.601082 sshd[3938]: Accepted publickey for core from 10.0.0.1 port 49354 ssh2: RSA SHA256:cMrn/QNbOj3/muQN24sDQmRTV2Hbz1QD+PbuA/uipHg Feb 13 20:57:04.602290 sshd[3938]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 20:57:04.605475 systemd-logind[1417]: New session 77 of user core. Feb 13 20:57:04.619313 systemd[1]: Started session-77.scope - Session 77 of User core. Feb 13 20:57:04.723389 sshd[3938]: pam_unix(sshd:session): session closed for user core Feb 13 20:57:04.726451 systemd[1]: sshd@76-10.0.0.6:22-10.0.0.1:49354.service: Deactivated successfully. Feb 13 20:57:04.728232 systemd[1]: session-77.scope: Deactivated successfully. Feb 13 20:57:04.728971 systemd-logind[1417]: Session 77 logged out. Waiting for processes to exit. Feb 13 20:57:04.729992 systemd-logind[1417]: Removed session 77. 
Feb 13 20:57:06.682131 kubelet[2505]: E0213 20:57:06.682087 2505 kubelet.go:2900] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Feb 13 20:57:09.734609 systemd[1]: Started sshd@77-10.0.0.6:22-10.0.0.1:49366.service - OpenSSH per-connection server daemon (10.0.0.1:49366). Feb 13 20:57:09.766374 sshd[3952]: Accepted publickey for core from 10.0.0.1 port 49366 ssh2: RSA SHA256:cMrn/QNbOj3/muQN24sDQmRTV2Hbz1QD+PbuA/uipHg Feb 13 20:57:09.767534 sshd[3952]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 20:57:09.771259 systemd-logind[1417]: New session 78 of user core. Feb 13 20:57:09.786311 systemd[1]: Started session-78.scope - Session 78 of User core. Feb 13 20:57:09.892889 sshd[3952]: pam_unix(sshd:session): session closed for user core Feb 13 20:57:09.902684 systemd[1]: sshd@77-10.0.0.6:22-10.0.0.1:49366.service: Deactivated successfully. Feb 13 20:57:09.904052 systemd[1]: session-78.scope: Deactivated successfully. Feb 13 20:57:09.905336 systemd-logind[1417]: Session 78 logged out. Waiting for processes to exit. Feb 13 20:57:09.915405 systemd[1]: Started sshd@78-10.0.0.6:22-10.0.0.1:49370.service - OpenSSH per-connection server daemon (10.0.0.1:49370). Feb 13 20:57:09.916138 systemd-logind[1417]: Removed session 78. Feb 13 20:57:09.943101 sshd[3966]: Accepted publickey for core from 10.0.0.1 port 49370 ssh2: RSA SHA256:cMrn/QNbOj3/muQN24sDQmRTV2Hbz1QD+PbuA/uipHg Feb 13 20:57:09.944249 sshd[3966]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 20:57:09.947518 systemd-logind[1417]: New session 79 of user core. Feb 13 20:57:09.957292 systemd[1]: Started session-79.scope - Session 79 of User core. Feb 13 20:57:10.179224 sshd[3966]: pam_unix(sshd:session): session closed for user core Feb 13 20:57:10.186629 systemd[1]: sshd@78-10.0.0.6:22-10.0.0.1:49370.service: Deactivated successfully. Feb 13 20:57:10.188819 systemd[1]: session-79.scope: Deactivated successfully. Feb 13 20:57:10.189837 systemd-logind[1417]: Session 79 logged out. Waiting for processes to exit. Feb 13 20:57:10.197835 systemd[1]: Started sshd@79-10.0.0.6:22-10.0.0.1:49380.service - OpenSSH per-connection server daemon (10.0.0.1:49380). Feb 13 20:57:10.198748 systemd-logind[1417]: Removed session 79. Feb 13 20:57:10.225291 sshd[3978]: Accepted publickey for core from 10.0.0.1 port 49380 ssh2: RSA SHA256:cMrn/QNbOj3/muQN24sDQmRTV2Hbz1QD+PbuA/uipHg Feb 13 20:57:10.226430 sshd[3978]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 20:57:10.229863 systemd-logind[1417]: New session 80 of user core. Feb 13 20:57:10.248344 systemd[1]: Started session-80.scope - Session 80 of User core. Feb 13 20:57:10.563902 kubelet[2505]: E0213 20:57:10.563860 2505 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 20:57:11.344644 sshd[3978]: pam_unix(sshd:session): session closed for user core Feb 13 20:57:11.353044 systemd[1]: sshd@79-10.0.0.6:22-10.0.0.1:49380.service: Deactivated successfully. Feb 13 20:57:11.356132 systemd[1]: session-80.scope: Deactivated successfully. Feb 13 20:57:11.358777 systemd-logind[1417]: Session 80 logged out. Waiting for processes to exit. 
Feb 13 20:57:11.363476 systemd[1]: Started sshd@80-10.0.0.6:22-10.0.0.1:49396.service - OpenSSH per-connection server daemon (10.0.0.1:49396). Feb 13 20:57:11.364601 systemd-logind[1417]: Removed session 80. Feb 13 20:57:11.397893 sshd[4000]: Accepted publickey for core from 10.0.0.1 port 49396 ssh2: RSA SHA256:cMrn/QNbOj3/muQN24sDQmRTV2Hbz1QD+PbuA/uipHg Feb 13 20:57:11.399348 sshd[4000]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 20:57:11.403470 systemd-logind[1417]: New session 81 of user core. Feb 13 20:57:11.413369 systemd[1]: Started session-81.scope - Session 81 of User core. Feb 13 20:57:11.620241 sshd[4000]: pam_unix(sshd:session): session closed for user core Feb 13 20:57:11.632729 systemd[1]: sshd@80-10.0.0.6:22-10.0.0.1:49396.service: Deactivated successfully. Feb 13 20:57:11.634284 systemd[1]: session-81.scope: Deactivated successfully. Feb 13 20:57:11.635894 systemd-logind[1417]: Session 81 logged out. Waiting for processes to exit. Feb 13 20:57:11.637048 systemd[1]: Started sshd@81-10.0.0.6:22-10.0.0.1:49402.service - OpenSSH per-connection server daemon (10.0.0.1:49402). Feb 13 20:57:11.638557 systemd-logind[1417]: Removed session 81. Feb 13 20:57:11.669537 sshd[4014]: Accepted publickey for core from 10.0.0.1 port 49402 ssh2: RSA SHA256:cMrn/QNbOj3/muQN24sDQmRTV2Hbz1QD+PbuA/uipHg Feb 13 20:57:11.670856 sshd[4014]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 20:57:11.674600 systemd-logind[1417]: New session 82 of user core. Feb 13 20:57:11.683234 kubelet[2505]: E0213 20:57:11.683196 2505 kubelet.go:2900] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Feb 13 20:57:11.685334 systemd[1]: Started session-82.scope - Session 82 of User core. Feb 13 20:57:11.792659 sshd[4014]: pam_unix(sshd:session): session closed for user core Feb 13 20:57:11.796074 systemd[1]: sshd@81-10.0.0.6:22-10.0.0.1:49402.service: Deactivated successfully. Feb 13 20:57:11.797728 systemd[1]: session-82.scope: Deactivated successfully. Feb 13 20:57:11.799250 systemd-logind[1417]: Session 82 logged out. Waiting for processes to exit. Feb 13 20:57:11.800152 systemd-logind[1417]: Removed session 82. Feb 13 20:57:14.563168 kubelet[2505]: E0213 20:57:14.563058 2505 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 20:57:14.564428 kubelet[2505]: E0213 20:57:14.564163 2505 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"install-cni-plugin\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/flannel/flannel-cni-plugin:v1.1.2\\\"\"" pod="kube-flannel/kube-flannel-ds-vgntp" podUID="f7a2b434-ec01-4de2-9f31-4fa14d12ea17" Feb 13 20:57:16.684429 kubelet[2505]: E0213 20:57:16.684383 2505 kubelet.go:2900] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Feb 13 20:57:16.803647 systemd[1]: Started sshd@82-10.0.0.6:22-10.0.0.1:38704.service - OpenSSH per-connection server daemon (10.0.0.1:38704). 
Feb 13 20:57:16.836052 sshd[4028]: Accepted publickey for core from 10.0.0.1 port 38704 ssh2: RSA SHA256:cMrn/QNbOj3/muQN24sDQmRTV2Hbz1QD+PbuA/uipHg Feb 13 20:57:16.837586 sshd[4028]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 20:57:16.841094 systemd-logind[1417]: New session 83 of user core. Feb 13 20:57:16.853425 systemd[1]: Started session-83.scope - Session 83 of User core. Feb 13 20:57:16.959078 sshd[4028]: pam_unix(sshd:session): session closed for user core Feb 13 20:57:16.962528 systemd[1]: sshd@82-10.0.0.6:22-10.0.0.1:38704.service: Deactivated successfully. Feb 13 20:57:16.965253 systemd[1]: session-83.scope: Deactivated successfully. Feb 13 20:57:16.966221 systemd-logind[1417]: Session 83 logged out. Waiting for processes to exit. Feb 13 20:57:16.967138 systemd-logind[1417]: Removed session 83. Feb 13 20:57:21.685086 kubelet[2505]: E0213 20:57:21.685050 2505 kubelet.go:2900] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Feb 13 20:57:21.969785 systemd[1]: Started sshd@83-10.0.0.6:22-10.0.0.1:38716.service - OpenSSH per-connection server daemon (10.0.0.1:38716). Feb 13 20:57:22.003060 sshd[4042]: Accepted publickey for core from 10.0.0.1 port 38716 ssh2: RSA SHA256:cMrn/QNbOj3/muQN24sDQmRTV2Hbz1QD+PbuA/uipHg Feb 13 20:57:22.004336 sshd[4042]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 20:57:22.007727 systemd-logind[1417]: New session 84 of user core. Feb 13 20:57:22.019309 systemd[1]: Started session-84.scope - Session 84 of User core. Feb 13 20:57:22.123223 sshd[4042]: pam_unix(sshd:session): session closed for user core Feb 13 20:57:22.125812 systemd[1]: sshd@83-10.0.0.6:22-10.0.0.1:38716.service: Deactivated successfully. Feb 13 20:57:22.127489 systemd[1]: session-84.scope: Deactivated successfully. Feb 13 20:57:22.128781 systemd-logind[1417]: Session 84 logged out. Waiting for processes to exit. Feb 13 20:57:22.129792 systemd-logind[1417]: Removed session 84. Feb 13 20:57:26.686190 kubelet[2505]: E0213 20:57:26.686102 2505 kubelet.go:2900] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Feb 13 20:57:27.133362 systemd[1]: Started sshd@84-10.0.0.6:22-10.0.0.1:52052.service - OpenSSH per-connection server daemon (10.0.0.1:52052). Feb 13 20:57:27.164795 sshd[4056]: Accepted publickey for core from 10.0.0.1 port 52052 ssh2: RSA SHA256:cMrn/QNbOj3/muQN24sDQmRTV2Hbz1QD+PbuA/uipHg Feb 13 20:57:27.166036 sshd[4056]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 20:57:27.169381 systemd-logind[1417]: New session 85 of user core. Feb 13 20:57:27.175299 systemd[1]: Started session-85.scope - Session 85 of User core. Feb 13 20:57:27.276533 sshd[4056]: pam_unix(sshd:session): session closed for user core Feb 13 20:57:27.279259 systemd[1]: sshd@84-10.0.0.6:22-10.0.0.1:52052.service: Deactivated successfully. Feb 13 20:57:27.281239 systemd[1]: session-85.scope: Deactivated successfully. Feb 13 20:57:27.282657 systemd-logind[1417]: Session 85 logged out. Waiting for processes to exit. Feb 13 20:57:27.283731 systemd-logind[1417]: Removed session 85. 
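The kubelet.go:2900 "cni plugin not initialized" entry repeating roughly every five seconds above and below is a downstream symptom of the same stuck pull: the runtime keeps reporting NetworkReady=false because the init container that should copy the flannel binary into place (cp -f /flannel /opt/cni/bin/flannel, per the container spec logged earlier) has never run, so no usable CNI config/binary pair exists on the node. A minimal sketch of what "ready" would require, assuming containerd's default /etc/cni/net.d and /opt/cni/bin directories:

import os

CNI_CONF_DIR = "/etc/cni/net.d"  # assumed containerd default
CNI_BIN_DIR = "/opt/cni/bin"     # assumed containerd default

def cni_status():
    """Report the two things the runtime needs before NetworkReady flips true."""
    confs = sorted(os.listdir(CNI_CONF_DIR)) if os.path.isdir(CNI_CONF_DIR) else []
    flannel = os.path.join(CNI_BIN_DIR, "flannel")  # path from the init container spec
    return {
        "configs": confs,  # empty list -> "cni plugin not initialized"
        "flannel_installed": os.path.isfile(flannel),
    }

if __name__ == "__main__":
    print(cni_status())

On this node both checks would come back empty/False until the flannel pod gets past its init container, which is why the message never stops.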
Feb 13 20:57:28.562915 kubelet[2505]: E0213 20:57:28.562877 2505 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 20:57:28.563672 kubelet[2505]: E0213 20:57:28.563469 2505 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"install-cni-plugin\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/flannel/flannel-cni-plugin:v1.1.2\\\"\"" pod="kube-flannel/kube-flannel-ds-vgntp" podUID="f7a2b434-ec01-4de2-9f31-4fa14d12ea17" Feb 13 20:57:31.563808 kubelet[2505]: E0213 20:57:31.563770 2505 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 20:57:31.687041 kubelet[2505]: E0213 20:57:31.686994 2505 kubelet.go:2900] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Feb 13 20:57:32.294765 systemd[1]: Started sshd@85-10.0.0.6:22-10.0.0.1:52066.service - OpenSSH per-connection server daemon (10.0.0.1:52066). Feb 13 20:57:32.326770 sshd[4073]: Accepted publickey for core from 10.0.0.1 port 52066 ssh2: RSA SHA256:cMrn/QNbOj3/muQN24sDQmRTV2Hbz1QD+PbuA/uipHg Feb 13 20:57:32.327918 sshd[4073]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 20:57:32.331099 systemd-logind[1417]: New session 86 of user core. Feb 13 20:57:32.341307 systemd[1]: Started session-86.scope - Session 86 of User core. Feb 13 20:57:32.444712 sshd[4073]: pam_unix(sshd:session): session closed for user core Feb 13 20:57:32.448091 systemd[1]: sshd@85-10.0.0.6:22-10.0.0.1:52066.service: Deactivated successfully. Feb 13 20:57:32.449789 systemd[1]: session-86.scope: Deactivated successfully. Feb 13 20:57:32.450366 systemd-logind[1417]: Session 86 logged out. Waiting for processes to exit. Feb 13 20:57:32.451125 systemd-logind[1417]: Removed session 86. Feb 13 20:57:36.687892 kubelet[2505]: E0213 20:57:36.687806 2505 kubelet.go:2900] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Feb 13 20:57:37.455636 systemd[1]: Started sshd@86-10.0.0.6:22-10.0.0.1:59102.service - OpenSSH per-connection server daemon (10.0.0.1:59102). Feb 13 20:57:37.487017 sshd[4093]: Accepted publickey for core from 10.0.0.1 port 59102 ssh2: RSA SHA256:cMrn/QNbOj3/muQN24sDQmRTV2Hbz1QD+PbuA/uipHg Feb 13 20:57:37.488259 sshd[4093]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 20:57:37.491762 systemd-logind[1417]: New session 87 of user core. Feb 13 20:57:37.503323 systemd[1]: Started session-87.scope - Session 87 of User core. Feb 13 20:57:37.606713 sshd[4093]: pam_unix(sshd:session): session closed for user core Feb 13 20:57:37.609891 systemd[1]: sshd@86-10.0.0.6:22-10.0.0.1:59102.service: Deactivated successfully. Feb 13 20:57:37.611560 systemd[1]: session-87.scope: Deactivated successfully. Feb 13 20:57:37.612242 systemd-logind[1417]: Session 87 logged out. Waiting for processes to exit. Feb 13 20:57:37.613300 systemd-logind[1417]: Removed session 87. 
Feb 13 20:57:41.562939 kubelet[2505]: E0213 20:57:41.562896 2505 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 20:57:41.563564 kubelet[2505]: E0213 20:57:41.563527 2505 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"install-cni-plugin\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/flannel/flannel-cni-plugin:v1.1.2\\\"\"" pod="kube-flannel/kube-flannel-ds-vgntp" podUID="f7a2b434-ec01-4de2-9f31-4fa14d12ea17" Feb 13 20:57:41.688954 kubelet[2505]: E0213 20:57:41.688924 2505 kubelet.go:2900] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Feb 13 20:57:42.621819 systemd[1]: Started sshd@87-10.0.0.6:22-10.0.0.1:45094.service - OpenSSH per-connection server daemon (10.0.0.1:45094). Feb 13 20:57:42.653313 sshd[4109]: Accepted publickey for core from 10.0.0.1 port 45094 ssh2: RSA SHA256:cMrn/QNbOj3/muQN24sDQmRTV2Hbz1QD+PbuA/uipHg Feb 13 20:57:42.654515 sshd[4109]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 20:57:42.657932 systemd-logind[1417]: New session 88 of user core. Feb 13 20:57:42.669415 systemd[1]: Started session-88.scope - Session 88 of User core. Feb 13 20:57:42.773036 sshd[4109]: pam_unix(sshd:session): session closed for user core Feb 13 20:57:42.776352 systemd[1]: sshd@87-10.0.0.6:22-10.0.0.1:45094.service: Deactivated successfully. Feb 13 20:57:42.778746 systemd[1]: session-88.scope: Deactivated successfully. Feb 13 20:57:42.779440 systemd-logind[1417]: Session 88 logged out. Waiting for processes to exit. Feb 13 20:57:42.780168 systemd-logind[1417]: Removed session 88. Feb 13 20:57:46.689832 kubelet[2505]: E0213 20:57:46.689787 2505 kubelet.go:2900] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Feb 13 20:57:47.783642 systemd[1]: Started sshd@88-10.0.0.6:22-10.0.0.1:45100.service - OpenSSH per-connection server daemon (10.0.0.1:45100). Feb 13 20:57:47.815137 sshd[4124]: Accepted publickey for core from 10.0.0.1 port 45100 ssh2: RSA SHA256:cMrn/QNbOj3/muQN24sDQmRTV2Hbz1QD+PbuA/uipHg Feb 13 20:57:47.816357 sshd[4124]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 20:57:47.819959 systemd-logind[1417]: New session 89 of user core. Feb 13 20:57:47.828388 systemd[1]: Started session-89.scope - Session 89 of User core. Feb 13 20:57:47.931935 sshd[4124]: pam_unix(sshd:session): session closed for user core Feb 13 20:57:47.935085 systemd[1]: sshd@88-10.0.0.6:22-10.0.0.1:45100.service: Deactivated successfully. Feb 13 20:57:47.936568 systemd[1]: session-89.scope: Deactivated successfully. Feb 13 20:57:47.937249 systemd-logind[1417]: Session 89 logged out. Waiting for processes to exit. Feb 13 20:57:47.938111 systemd-logind[1417]: Removed session 89. 
Feb 13 20:57:48.563703 kubelet[2505]: E0213 20:57:48.563658 2505 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 20:57:51.690617 kubelet[2505]: E0213 20:57:51.690561 2505 kubelet.go:2900] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Feb 13 20:57:52.942575 systemd[1]: Started sshd@89-10.0.0.6:22-10.0.0.1:58718.service - OpenSSH per-connection server daemon (10.0.0.1:58718). Feb 13 20:57:52.974029 sshd[4139]: Accepted publickey for core from 10.0.0.1 port 58718 ssh2: RSA SHA256:cMrn/QNbOj3/muQN24sDQmRTV2Hbz1QD+PbuA/uipHg Feb 13 20:57:52.975153 sshd[4139]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 20:57:52.978875 systemd-logind[1417]: New session 90 of user core. Feb 13 20:57:52.989372 systemd[1]: Started session-90.scope - Session 90 of User core. Feb 13 20:57:53.091376 sshd[4139]: pam_unix(sshd:session): session closed for user core Feb 13 20:57:53.094228 systemd-logind[1417]: Session 90 logged out. Waiting for processes to exit. Feb 13 20:57:53.094561 systemd[1]: sshd@89-10.0.0.6:22-10.0.0.1:58718.service: Deactivated successfully. Feb 13 20:57:53.096221 systemd[1]: session-90.scope: Deactivated successfully. Feb 13 20:57:53.096868 systemd-logind[1417]: Removed session 90. Feb 13 20:57:54.563535 kubelet[2505]: E0213 20:57:54.563429 2505 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 20:57:54.564209 kubelet[2505]: E0213 20:57:54.564002 2505 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"install-cni-plugin\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/flannel/flannel-cni-plugin:v1.1.2\\\"\"" pod="kube-flannel/kube-flannel-ds-vgntp" podUID="f7a2b434-ec01-4de2-9f31-4fa14d12ea17" Feb 13 20:57:56.692081 kubelet[2505]: E0213 20:57:56.692041 2505 kubelet.go:2900] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Feb 13 20:57:57.563450 kubelet[2505]: E0213 20:57:57.563417 2505 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 20:57:58.105610 systemd[1]: Started sshd@90-10.0.0.6:22-10.0.0.1:58728.service - OpenSSH per-connection server daemon (10.0.0.1:58728). Feb 13 20:57:58.137542 sshd[4155]: Accepted publickey for core from 10.0.0.1 port 58728 ssh2: RSA SHA256:cMrn/QNbOj3/muQN24sDQmRTV2Hbz1QD+PbuA/uipHg Feb 13 20:57:58.138690 sshd[4155]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 20:57:58.142597 systemd-logind[1417]: New session 91 of user core. Feb 13 20:57:58.154383 systemd[1]: Started session-91.scope - Session 91 of User core. Feb 13 20:57:58.259056 sshd[4155]: pam_unix(sshd:session): session closed for user core Feb 13 20:57:58.262356 systemd[1]: sshd@90-10.0.0.6:22-10.0.0.1:58728.service: Deactivated successfully. Feb 13 20:57:58.264944 systemd[1]: session-91.scope: Deactivated successfully. Feb 13 20:57:58.265705 systemd-logind[1417]: Session 91 logged out. Waiting for processes to exit. 
Feb 13 20:57:58.267268 systemd-logind[1417]: Removed session 91. Feb 13 20:58:01.693467 kubelet[2505]: E0213 20:58:01.693423 2505 kubelet.go:2900] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Feb 13 20:58:03.270027 systemd[1]: Started sshd@91-10.0.0.6:22-10.0.0.1:60488.service - OpenSSH per-connection server daemon (10.0.0.1:60488). Feb 13 20:58:03.301471 sshd[4169]: Accepted publickey for core from 10.0.0.1 port 60488 ssh2: RSA SHA256:cMrn/QNbOj3/muQN24sDQmRTV2Hbz1QD+PbuA/uipHg Feb 13 20:58:03.302682 sshd[4169]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 20:58:03.306024 systemd-logind[1417]: New session 92 of user core. Feb 13 20:58:03.316340 systemd[1]: Started session-92.scope - Session 92 of User core. Feb 13 20:58:03.419971 sshd[4169]: pam_unix(sshd:session): session closed for user core Feb 13 20:58:03.423327 systemd[1]: sshd@91-10.0.0.6:22-10.0.0.1:60488.service: Deactivated successfully. Feb 13 20:58:03.425408 systemd[1]: session-92.scope: Deactivated successfully. Feb 13 20:58:03.426049 systemd-logind[1417]: Session 92 logged out. Waiting for processes to exit. Feb 13 20:58:03.426948 systemd-logind[1417]: Removed session 92. Feb 13 20:58:06.694658 kubelet[2505]: E0213 20:58:06.694601 2505 kubelet.go:2900] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Feb 13 20:58:08.439722 systemd[1]: Started sshd@92-10.0.0.6:22-10.0.0.1:60504.service - OpenSSH per-connection server daemon (10.0.0.1:60504). Feb 13 20:58:08.471386 sshd[4183]: Accepted publickey for core from 10.0.0.1 port 60504 ssh2: RSA SHA256:cMrn/QNbOj3/muQN24sDQmRTV2Hbz1QD+PbuA/uipHg Feb 13 20:58:08.472528 sshd[4183]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 20:58:08.476585 systemd-logind[1417]: New session 93 of user core. Feb 13 20:58:08.487366 systemd[1]: Started session-93.scope - Session 93 of User core. Feb 13 20:58:08.563082 kubelet[2505]: E0213 20:58:08.563052 2505 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 20:58:08.563925 kubelet[2505]: E0213 20:58:08.563890 2505 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"install-cni-plugin\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/flannel/flannel-cni-plugin:v1.1.2\\\"\"" pod="kube-flannel/kube-flannel-ds-vgntp" podUID="f7a2b434-ec01-4de2-9f31-4fa14d12ea17" Feb 13 20:58:08.592553 sshd[4183]: pam_unix(sshd:session): session closed for user core Feb 13 20:58:08.595798 systemd[1]: sshd@92-10.0.0.6:22-10.0.0.1:60504.service: Deactivated successfully. Feb 13 20:58:08.597527 systemd[1]: session-93.scope: Deactivated successfully. Feb 13 20:58:08.598920 systemd-logind[1417]: Session 93 logged out. Waiting for processes to exit. Feb 13 20:58:08.600512 systemd-logind[1417]: Removed session 93. 
Feb 13 20:58:11.696127 kubelet[2505]: E0213 20:58:11.696094 2505 kubelet.go:2900] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Feb 13 20:58:13.603045 systemd[1]: Started sshd@93-10.0.0.6:22-10.0.0.1:41776.service - OpenSSH per-connection server daemon (10.0.0.1:41776). Feb 13 20:58:13.634329 sshd[4198]: Accepted publickey for core from 10.0.0.1 port 41776 ssh2: RSA SHA256:cMrn/QNbOj3/muQN24sDQmRTV2Hbz1QD+PbuA/uipHg Feb 13 20:58:13.635474 sshd[4198]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 20:58:13.639491 systemd-logind[1417]: New session 94 of user core. Feb 13 20:58:13.651317 systemd[1]: Started session-94.scope - Session 94 of User core. Feb 13 20:58:13.754829 sshd[4198]: pam_unix(sshd:session): session closed for user core Feb 13 20:58:13.757313 systemd[1]: sshd@93-10.0.0.6:22-10.0.0.1:41776.service: Deactivated successfully. Feb 13 20:58:13.759574 systemd[1]: session-94.scope: Deactivated successfully. Feb 13 20:58:13.760824 systemd-logind[1417]: Session 94 logged out. Waiting for processes to exit. Feb 13 20:58:13.761793 systemd-logind[1417]: Removed session 94. Feb 13 20:58:16.697049 kubelet[2505]: E0213 20:58:16.696993 2505 kubelet.go:2900] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Feb 13 20:58:17.564008 kubelet[2505]: E0213 20:58:17.563976 2505 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 20:58:18.774686 systemd[1]: Started sshd@94-10.0.0.6:22-10.0.0.1:41782.service - OpenSSH per-connection server daemon (10.0.0.1:41782). Feb 13 20:58:18.806613 sshd[4214]: Accepted publickey for core from 10.0.0.1 port 41782 ssh2: RSA SHA256:cMrn/QNbOj3/muQN24sDQmRTV2Hbz1QD+PbuA/uipHg Feb 13 20:58:18.807764 sshd[4214]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 20:58:18.812124 systemd-logind[1417]: New session 95 of user core. Feb 13 20:58:18.819309 systemd[1]: Started session-95.scope - Session 95 of User core. Feb 13 20:58:18.923218 sshd[4214]: pam_unix(sshd:session): session closed for user core Feb 13 20:58:18.926298 systemd[1]: sshd@94-10.0.0.6:22-10.0.0.1:41782.service: Deactivated successfully. Feb 13 20:58:18.927974 systemd[1]: session-95.scope: Deactivated successfully. Feb 13 20:58:18.928620 systemd-logind[1417]: Session 95 logged out. Waiting for processes to exit. Feb 13 20:58:18.929445 systemd-logind[1417]: Removed session 95. 
Feb 13 20:58:19.563910 kubelet[2505]: E0213 20:58:19.563870 2505 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 20:58:19.564593 kubelet[2505]: E0213 20:58:19.564549 2505 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"install-cni-plugin\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/flannel/flannel-cni-plugin:v1.1.2\\\"\"" pod="kube-flannel/kube-flannel-ds-vgntp" podUID="f7a2b434-ec01-4de2-9f31-4fa14d12ea17" Feb 13 20:58:21.698027 kubelet[2505]: E0213 20:58:21.697989 2505 kubelet.go:2900] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Feb 13 20:58:23.933846 systemd[1]: Started sshd@95-10.0.0.6:22-10.0.0.1:55410.service - OpenSSH per-connection server daemon (10.0.0.1:55410). Feb 13 20:58:23.965187 sshd[4229]: Accepted publickey for core from 10.0.0.1 port 55410 ssh2: RSA SHA256:cMrn/QNbOj3/muQN24sDQmRTV2Hbz1QD+PbuA/uipHg Feb 13 20:58:23.966460 sshd[4229]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 20:58:23.969843 systemd-logind[1417]: New session 96 of user core. Feb 13 20:58:23.983394 systemd[1]: Started session-96.scope - Session 96 of User core. Feb 13 20:58:24.088258 sshd[4229]: pam_unix(sshd:session): session closed for user core Feb 13 20:58:24.091369 systemd[1]: sshd@95-10.0.0.6:22-10.0.0.1:55410.service: Deactivated successfully. Feb 13 20:58:24.093031 systemd[1]: session-96.scope: Deactivated successfully. Feb 13 20:58:24.093824 systemd-logind[1417]: Session 96 logged out. Waiting for processes to exit. Feb 13 20:58:24.094728 systemd-logind[1417]: Removed session 96. Feb 13 20:58:26.699547 kubelet[2505]: E0213 20:58:26.699513 2505 kubelet.go:2900] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Feb 13 20:58:29.103718 systemd[1]: Started sshd@96-10.0.0.6:22-10.0.0.1:55422.service - OpenSSH per-connection server daemon (10.0.0.1:55422). Feb 13 20:58:29.135386 sshd[4245]: Accepted publickey for core from 10.0.0.1 port 55422 ssh2: RSA SHA256:cMrn/QNbOj3/muQN24sDQmRTV2Hbz1QD+PbuA/uipHg Feb 13 20:58:29.136567 sshd[4245]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 20:58:29.140271 systemd-logind[1417]: New session 97 of user core. Feb 13 20:58:29.157358 systemd[1]: Started session-97.scope - Session 97 of User core. Feb 13 20:58:29.261066 sshd[4245]: pam_unix(sshd:session): session closed for user core Feb 13 20:58:29.264407 systemd[1]: sshd@96-10.0.0.6:22-10.0.0.1:55422.service: Deactivated successfully. Feb 13 20:58:29.267149 systemd[1]: session-97.scope: Deactivated successfully. Feb 13 20:58:29.268257 systemd-logind[1417]: Session 97 logged out. Waiting for processes to exit. Feb 13 20:58:29.269142 systemd-logind[1417]: Removed session 97. 
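Note that ImagePullBackOff here is kubelet's throttle, not a fresh registry failure each time: after a pull fails, kubelet waits an exponentially growing delay before trying again, by default doubling from a 10-second base to a 300-second ceiling (kubelet's stock backoff parameters, assumed here rather than read from this log). A sketch of the resulting schedule:

BASE_SECONDS = 10  # assumed kubelet default
CAP_SECONDS = 300  # assumed kubelet default (5 min)

def backoff_schedule(attempts):
    """Delay, in seconds, before each successive pull retry."""
    return [min(CAP_SECONDS, BASE_SECONDS * 2 ** n) for n in range(attempts)]

if __name__ == "__main__":
    # 10, 20, 40, 80, 160, 300, 300, 300: after about six failures the pod
    # waits the full five minutes between attempts, indefinitely.
    print(backoff_schedule(8))

That ceiling is why the log settles into long stretches of pure "Back-off pulling image" noise with only occasional real pull attempts.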
Feb 13 20:58:31.700724 kubelet[2505]: E0213 20:58:31.700664 2505 kubelet.go:2900] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Feb 13 20:58:33.564091 kubelet[2505]: E0213 20:58:33.563663 2505 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 20:58:33.564551 kubelet[2505]: E0213 20:58:33.564507 2505 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"install-cni-plugin\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/flannel/flannel-cni-plugin:v1.1.2\\\"\"" pod="kube-flannel/kube-flannel-ds-vgntp" podUID="f7a2b434-ec01-4de2-9f31-4fa14d12ea17" Feb 13 20:58:34.271909 systemd[1]: Started sshd@97-10.0.0.6:22-10.0.0.1:48020.service - OpenSSH per-connection server daemon (10.0.0.1:48020). Feb 13 20:58:34.303636 sshd[4259]: Accepted publickey for core from 10.0.0.1 port 48020 ssh2: RSA SHA256:cMrn/QNbOj3/muQN24sDQmRTV2Hbz1QD+PbuA/uipHg Feb 13 20:58:34.304785 sshd[4259]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 20:58:34.308269 systemd-logind[1417]: New session 98 of user core. Feb 13 20:58:34.316308 systemd[1]: Started session-98.scope - Session 98 of User core. Feb 13 20:58:34.419861 sshd[4259]: pam_unix(sshd:session): session closed for user core Feb 13 20:58:34.423631 systemd[1]: sshd@97-10.0.0.6:22-10.0.0.1:48020.service: Deactivated successfully. Feb 13 20:58:34.426672 systemd[1]: session-98.scope: Deactivated successfully. Feb 13 20:58:34.427236 systemd-logind[1417]: Session 98 logged out. Waiting for processes to exit. Feb 13 20:58:34.428007 systemd-logind[1417]: Removed session 98. Feb 13 20:58:36.701768 kubelet[2505]: E0213 20:58:36.701723 2505 kubelet.go:2900] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Feb 13 20:58:39.429777 systemd[1]: Started sshd@98-10.0.0.6:22-10.0.0.1:48032.service - OpenSSH per-connection server daemon (10.0.0.1:48032). Feb 13 20:58:39.461659 sshd[4273]: Accepted publickey for core from 10.0.0.1 port 48032 ssh2: RSA SHA256:cMrn/QNbOj3/muQN24sDQmRTV2Hbz1QD+PbuA/uipHg Feb 13 20:58:39.462790 sshd[4273]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 20:58:39.466076 systemd-logind[1417]: New session 99 of user core. Feb 13 20:58:39.481310 systemd[1]: Started session-99.scope - Session 99 of User core. Feb 13 20:58:39.584559 sshd[4273]: pam_unix(sshd:session): session closed for user core Feb 13 20:58:39.587839 systemd[1]: sshd@98-10.0.0.6:22-10.0.0.1:48032.service: Deactivated successfully. Feb 13 20:58:39.589435 systemd[1]: session-99.scope: Deactivated successfully. Feb 13 20:58:39.590872 systemd-logind[1417]: Session 99 logged out. Waiting for processes to exit. Feb 13 20:58:39.591759 systemd-logind[1417]: Removed session 99. 
Feb 13 20:58:41.702540 kubelet[2505]: E0213 20:58:41.702486 2505 kubelet.go:2900] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Feb 13 20:58:44.563983 kubelet[2505]: E0213 20:58:44.563783 2505 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 20:58:44.564545 kubelet[2505]: E0213 20:58:44.564135 2505 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 20:58:44.564545 kubelet[2505]: E0213 20:58:44.564390 2505 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"install-cni-plugin\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/flannel/flannel-cni-plugin:v1.1.2\\\"\"" pod="kube-flannel/kube-flannel-ds-vgntp" podUID="f7a2b434-ec01-4de2-9f31-4fa14d12ea17" Feb 13 20:58:44.594745 systemd[1]: Started sshd@99-10.0.0.6:22-10.0.0.1:53472.service - OpenSSH per-connection server daemon (10.0.0.1:53472). Feb 13 20:58:44.626479 sshd[4289]: Accepted publickey for core from 10.0.0.1 port 53472 ssh2: RSA SHA256:cMrn/QNbOj3/muQN24sDQmRTV2Hbz1QD+PbuA/uipHg Feb 13 20:58:44.627584 sshd[4289]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 20:58:44.631014 systemd-logind[1417]: New session 100 of user core. Feb 13 20:58:44.642416 systemd[1]: Started session-100.scope - Session 100 of User core. Feb 13 20:58:44.747520 sshd[4289]: pam_unix(sshd:session): session closed for user core Feb 13 20:58:44.750554 systemd[1]: sshd@99-10.0.0.6:22-10.0.0.1:53472.service: Deactivated successfully. Feb 13 20:58:44.752709 systemd[1]: session-100.scope: Deactivated successfully. Feb 13 20:58:44.753403 systemd-logind[1417]: Session 100 logged out. Waiting for processes to exit. Feb 13 20:58:44.754232 systemd-logind[1417]: Removed session 100. Feb 13 20:58:46.703405 kubelet[2505]: E0213 20:58:46.703359 2505 kubelet.go:2900] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Feb 13 20:58:49.757712 systemd[1]: Started sshd@100-10.0.0.6:22-10.0.0.1:53488.service - OpenSSH per-connection server daemon (10.0.0.1:53488). Feb 13 20:58:49.789434 sshd[4303]: Accepted publickey for core from 10.0.0.1 port 53488 ssh2: RSA SHA256:cMrn/QNbOj3/muQN24sDQmRTV2Hbz1QD+PbuA/uipHg Feb 13 20:58:49.790541 sshd[4303]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 20:58:49.793844 systemd-logind[1417]: New session 101 of user core. Feb 13 20:58:49.804322 systemd[1]: Started session-101.scope - Session 101 of User core. Feb 13 20:58:49.908587 sshd[4303]: pam_unix(sshd:session): session closed for user core Feb 13 20:58:49.911774 systemd[1]: sshd@100-10.0.0.6:22-10.0.0.1:53488.service: Deactivated successfully. Feb 13 20:58:49.913372 systemd[1]: session-101.scope: Deactivated successfully. Feb 13 20:58:49.913909 systemd-logind[1417]: Session 101 logged out. Waiting for processes to exit. Feb 13 20:58:49.914715 systemd-logind[1417]: Removed session 101. 
Feb 13 20:58:51.704352 kubelet[2505]: E0213 20:58:51.704311 2505 kubelet.go:2900] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Feb 13 20:58:54.918626 systemd[1]: Started sshd@101-10.0.0.6:22-10.0.0.1:56108.service - OpenSSH per-connection server daemon (10.0.0.1:56108). Feb 13 20:58:54.950138 sshd[4317]: Accepted publickey for core from 10.0.0.1 port 56108 ssh2: RSA SHA256:cMrn/QNbOj3/muQN24sDQmRTV2Hbz1QD+PbuA/uipHg Feb 13 20:58:54.951291 sshd[4317]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 20:58:54.954681 systemd-logind[1417]: New session 102 of user core. Feb 13 20:58:54.965388 systemd[1]: Started session-102.scope - Session 102 of User core. Feb 13 20:58:55.068396 sshd[4317]: pam_unix(sshd:session): session closed for user core Feb 13 20:58:55.072041 systemd[1]: sshd@101-10.0.0.6:22-10.0.0.1:56108.service: Deactivated successfully. Feb 13 20:58:55.073586 systemd[1]: session-102.scope: Deactivated successfully. Feb 13 20:58:55.074841 systemd-logind[1417]: Session 102 logged out. Waiting for processes to exit. Feb 13 20:58:55.075627 systemd-logind[1417]: Removed session 102. Feb 13 20:58:56.705100 kubelet[2505]: E0213 20:58:56.705061 2505 kubelet.go:2900] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Feb 13 20:58:57.564068 kubelet[2505]: E0213 20:58:57.563844 2505 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 20:58:57.564486 kubelet[2505]: E0213 20:58:57.564452 2505 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"install-cni-plugin\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/flannel/flannel-cni-plugin:v1.1.2\\\"\"" pod="kube-flannel/kube-flannel-ds-vgntp" podUID="f7a2b434-ec01-4de2-9f31-4fa14d12ea17" Feb 13 20:59:00.079216 systemd[1]: Started sshd@102-10.0.0.6:22-10.0.0.1:56112.service - OpenSSH per-connection server daemon (10.0.0.1:56112). Feb 13 20:59:00.110877 sshd[4335]: Accepted publickey for core from 10.0.0.1 port 56112 ssh2: RSA SHA256:cMrn/QNbOj3/muQN24sDQmRTV2Hbz1QD+PbuA/uipHg Feb 13 20:59:00.111970 sshd[4335]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 20:59:00.115370 systemd-logind[1417]: New session 103 of user core. Feb 13 20:59:00.123332 systemd[1]: Started session-103.scope - Session 103 of User core. Feb 13 20:59:00.230156 sshd[4335]: pam_unix(sshd:session): session closed for user core Feb 13 20:59:00.233348 systemd[1]: sshd@102-10.0.0.6:22-10.0.0.1:56112.service: Deactivated successfully. Feb 13 20:59:00.235961 systemd[1]: session-103.scope: Deactivated successfully. Feb 13 20:59:00.236759 systemd-logind[1417]: Session 103 logged out. Waiting for processes to exit. Feb 13 20:59:00.237772 systemd-logind[1417]: Removed session 103. Feb 13 20:59:01.706470 kubelet[2505]: E0213 20:59:01.706405 2505 kubelet.go:2900] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Feb 13 20:59:05.243726 systemd[1]: Started sshd@103-10.0.0.6:22-10.0.0.1:56656.service - OpenSSH per-connection server daemon (10.0.0.1:56656). 
Feb 13 20:59:05.274909 sshd[4349]: Accepted publickey for core from 10.0.0.1 port 56656 ssh2: RSA SHA256:cMrn/QNbOj3/muQN24sDQmRTV2Hbz1QD+PbuA/uipHg Feb 13 20:59:05.276069 sshd[4349]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 20:59:05.280159 systemd-logind[1417]: New session 104 of user core. Feb 13 20:59:05.290390 systemd[1]: Started session-104.scope - Session 104 of User core. Feb 13 20:59:05.394345 sshd[4349]: pam_unix(sshd:session): session closed for user core Feb 13 20:59:05.397479 systemd[1]: sshd@103-10.0.0.6:22-10.0.0.1:56656.service: Deactivated successfully. Feb 13 20:59:05.399259 systemd[1]: session-104.scope: Deactivated successfully. Feb 13 20:59:05.400648 systemd-logind[1417]: Session 104 logged out. Waiting for processes to exit. Feb 13 20:59:05.401604 systemd-logind[1417]: Removed session 104. Feb 13 20:59:06.707590 kubelet[2505]: E0213 20:59:06.707551 2505 kubelet.go:2900] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Feb 13 20:59:10.404906 systemd[1]: Started sshd@104-10.0.0.6:22-10.0.0.1:56666.service - OpenSSH per-connection server daemon (10.0.0.1:56666). Feb 13 20:59:10.436786 sshd[4363]: Accepted publickey for core from 10.0.0.1 port 56666 ssh2: RSA SHA256:cMrn/QNbOj3/muQN24sDQmRTV2Hbz1QD+PbuA/uipHg Feb 13 20:59:10.437942 sshd[4363]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 20:59:10.441359 systemd-logind[1417]: New session 105 of user core. Feb 13 20:59:10.447300 systemd[1]: Started session-105.scope - Session 105 of User core. Feb 13 20:59:10.550570 sshd[4363]: pam_unix(sshd:session): session closed for user core Feb 13 20:59:10.554263 systemd[1]: sshd@104-10.0.0.6:22-10.0.0.1:56666.service: Deactivated successfully. Feb 13 20:59:10.556701 systemd[1]: session-105.scope: Deactivated successfully. Feb 13 20:59:10.557358 systemd-logind[1417]: Session 105 logged out. Waiting for processes to exit. Feb 13 20:59:10.558141 systemd-logind[1417]: Removed session 105. Feb 13 20:59:11.563403 kubelet[2505]: E0213 20:59:11.563371 2505 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 20:59:11.564507 kubelet[2505]: E0213 20:59:11.564416 2505 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"install-cni-plugin\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/flannel/flannel-cni-plugin:v1.1.2\\\"\"" pod="kube-flannel/kube-flannel-ds-vgntp" podUID="f7a2b434-ec01-4de2-9f31-4fa14d12ea17" Feb 13 20:59:11.708354 kubelet[2505]: E0213 20:59:11.708302 2505 kubelet.go:2900] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Feb 13 20:59:15.560620 systemd[1]: Started sshd@105-10.0.0.6:22-10.0.0.1:37776.service - OpenSSH per-connection server daemon (10.0.0.1:37776). Feb 13 20:59:15.592598 sshd[4377]: Accepted publickey for core from 10.0.0.1 port 37776 ssh2: RSA SHA256:cMrn/QNbOj3/muQN24sDQmRTV2Hbz1QD+PbuA/uipHg Feb 13 20:59:15.593759 sshd[4377]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 20:59:15.596983 systemd-logind[1417]: New session 106 of user core. Feb 13 20:59:15.608316 systemd[1]: Started session-106.scope - Session 106 of User core. 
Feb 13 20:59:15.711239 sshd[4377]: pam_unix(sshd:session): session closed for user core Feb 13 20:59:15.714512 systemd[1]: sshd@105-10.0.0.6:22-10.0.0.1:37776.service: Deactivated successfully. Feb 13 20:59:15.716089 systemd[1]: session-106.scope: Deactivated successfully. Feb 13 20:59:15.716633 systemd-logind[1417]: Session 106 logged out. Waiting for processes to exit. Feb 13 20:59:15.717366 systemd-logind[1417]: Removed session 106. Feb 13 20:59:16.563157 kubelet[2505]: E0213 20:59:16.563121 2505 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 20:59:16.709291 kubelet[2505]: E0213 20:59:16.709253 2505 kubelet.go:2900] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Feb 13 20:59:20.563810 kubelet[2505]: E0213 20:59:20.563754 2505 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 20:59:20.721643 systemd[1]: Started sshd@106-10.0.0.6:22-10.0.0.1:37782.service - OpenSSH per-connection server daemon (10.0.0.1:37782). Feb 13 20:59:20.753815 sshd[4391]: Accepted publickey for core from 10.0.0.1 port 37782 ssh2: RSA SHA256:cMrn/QNbOj3/muQN24sDQmRTV2Hbz1QD+PbuA/uipHg Feb 13 20:59:20.755055 sshd[4391]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 20:59:20.758390 systemd-logind[1417]: New session 107 of user core. Feb 13 20:59:20.764298 systemd[1]: Started session-107.scope - Session 107 of User core. Feb 13 20:59:20.866778 sshd[4391]: pam_unix(sshd:session): session closed for user core Feb 13 20:59:20.870218 systemd[1]: sshd@106-10.0.0.6:22-10.0.0.1:37782.service: Deactivated successfully. Feb 13 20:59:20.872715 systemd[1]: session-107.scope: Deactivated successfully. Feb 13 20:59:20.874653 systemd-logind[1417]: Session 107 logged out. Waiting for processes to exit. Feb 13 20:59:20.875418 systemd-logind[1417]: Removed session 107. Feb 13 20:59:21.710776 kubelet[2505]: E0213 20:59:21.710736 2505 kubelet.go:2900] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Feb 13 20:59:23.563256 kubelet[2505]: E0213 20:59:23.563211 2505 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 20:59:23.563992 kubelet[2505]: E0213 20:59:23.563810 2505 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"install-cni-plugin\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/flannel/flannel-cni-plugin:v1.1.2\\\"\"" pod="kube-flannel/kube-flannel-ds-vgntp" podUID="f7a2b434-ec01-4de2-9f31-4fa14d12ea17" Feb 13 20:59:25.877618 systemd[1]: Started sshd@107-10.0.0.6:22-10.0.0.1:52630.service - OpenSSH per-connection server daemon (10.0.0.1:52630). Feb 13 20:59:25.909826 sshd[4405]: Accepted publickey for core from 10.0.0.1 port 52630 ssh2: RSA SHA256:cMrn/QNbOj3/muQN24sDQmRTV2Hbz1QD+PbuA/uipHg Feb 13 20:59:25.911035 sshd[4405]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 20:59:25.915242 systemd-logind[1417]: New session 108 of user core. 
Feb 13 20:59:25.925318 systemd[1]: Started session-108.scope - Session 108 of User core.
Feb 13 20:59:26.027490 sshd[4405]: pam_unix(sshd:session): session closed for user core
Feb 13 20:59:26.031106 systemd[1]: sshd@107-10.0.0.6:22-10.0.0.1:52630.service: Deactivated successfully.
Feb 13 20:59:26.032663 systemd[1]: session-108.scope: Deactivated successfully.
Feb 13 20:59:26.033203 systemd-logind[1417]: Session 108 logged out. Waiting for processes to exit.
Feb 13 20:59:26.034284 systemd-logind[1417]: Removed session 108.
Feb 13 20:59:26.711896 kubelet[2505]: E0213 20:59:26.711853 2505 kubelet.go:2900] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Feb 13 20:59:31.037551 systemd[1]: Started sshd@108-10.0.0.6:22-10.0.0.1:52632.service - OpenSSH per-connection server daemon (10.0.0.1:52632).
Feb 13 20:59:31.069222 sshd[4422]: Accepted publickey for core from 10.0.0.1 port 52632 ssh2: RSA SHA256:cMrn/QNbOj3/muQN24sDQmRTV2Hbz1QD+PbuA/uipHg
Feb 13 20:59:31.070388 sshd[4422]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 20:59:31.074460 systemd-logind[1417]: New session 109 of user core.
Feb 13 20:59:31.085312 systemd[1]: Started session-109.scope - Session 109 of User core.
Feb 13 20:59:31.187528 sshd[4422]: pam_unix(sshd:session): session closed for user core
Feb 13 20:59:31.190595 systemd[1]: sshd@108-10.0.0.6:22-10.0.0.1:52632.service: Deactivated successfully.
Feb 13 20:59:31.193565 systemd[1]: session-109.scope: Deactivated successfully.
Feb 13 20:59:31.194134 systemd-logind[1417]: Session 109 logged out. Waiting for processes to exit.
Feb 13 20:59:31.194899 systemd-logind[1417]: Removed session 109.
Feb 13 20:59:31.713123 kubelet[2505]: E0213 20:59:31.713076 2505 kubelet.go:2900] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Feb 13 20:59:35.563613 kubelet[2505]: E0213 20:59:35.563568 2505 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 13 20:59:35.564580 kubelet[2505]: E0213 20:59:35.564477 2505 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"install-cni-plugin\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/flannel/flannel-cni-plugin:v1.1.2\\\"\"" pod="kube-flannel/kube-flannel-ds-vgntp" podUID="f7a2b434-ec01-4de2-9f31-4fa14d12ea17"
Feb 13 20:59:36.198661 systemd[1]: Started sshd@109-10.0.0.6:22-10.0.0.1:57116.service - OpenSSH per-connection server daemon (10.0.0.1:57116).
Feb 13 20:59:36.230418 sshd[4436]: Accepted publickey for core from 10.0.0.1 port 57116 ssh2: RSA SHA256:cMrn/QNbOj3/muQN24sDQmRTV2Hbz1QD+PbuA/uipHg
Feb 13 20:59:36.231659 sshd[4436]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 20:59:36.234939 systemd-logind[1417]: New session 110 of user core.
Feb 13 20:59:36.241307 systemd[1]: Started session-110.scope - Session 110 of User core.
Feb 13 20:59:36.344446 sshd[4436]: pam_unix(sshd:session): session closed for user core
Feb 13 20:59:36.347549 systemd[1]: sshd@109-10.0.0.6:22-10.0.0.1:57116.service: Deactivated successfully.
Feb 13 20:59:36.350410 systemd[1]: session-110.scope: Deactivated successfully.
Feb 13 20:59:36.351150 systemd-logind[1417]: Session 110 logged out. Waiting for processes to exit.
Feb 13 20:59:36.352247 systemd-logind[1417]: Removed session 110.
Feb 13 20:59:36.714230 kubelet[2505]: E0213 20:59:36.714161 2505 kubelet.go:2900] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Feb 13 20:59:41.358850 systemd[1]: Started sshd@110-10.0.0.6:22-10.0.0.1:57122.service - OpenSSH per-connection server daemon (10.0.0.1:57122).
Feb 13 20:59:41.390430 sshd[4451]: Accepted publickey for core from 10.0.0.1 port 57122 ssh2: RSA SHA256:cMrn/QNbOj3/muQN24sDQmRTV2Hbz1QD+PbuA/uipHg
Feb 13 20:59:41.391617 sshd[4451]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 20:59:41.395248 systemd-logind[1417]: New session 111 of user core.
Feb 13 20:59:41.401331 systemd[1]: Started session-111.scope - Session 111 of User core.
Feb 13 20:59:41.506348 sshd[4451]: pam_unix(sshd:session): session closed for user core
Feb 13 20:59:41.509454 systemd[1]: sshd@110-10.0.0.6:22-10.0.0.1:57122.service: Deactivated successfully.
Feb 13 20:59:41.511056 systemd[1]: session-111.scope: Deactivated successfully.
Feb 13 20:59:41.511615 systemd-logind[1417]: Session 111 logged out. Waiting for processes to exit.
Feb 13 20:59:41.512473 systemd-logind[1417]: Removed session 111.
Feb 13 20:59:41.715503 kubelet[2505]: E0213 20:59:41.715407 2505 kubelet.go:2900] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Feb 13 20:59:46.516609 systemd[1]: Started sshd@111-10.0.0.6:22-10.0.0.1:52478.service - OpenSSH per-connection server daemon (10.0.0.1:52478).
Feb 13 20:59:46.547959 sshd[4468]: Accepted publickey for core from 10.0.0.1 port 52478 ssh2: RSA SHA256:cMrn/QNbOj3/muQN24sDQmRTV2Hbz1QD+PbuA/uipHg
Feb 13 20:59:46.549154 sshd[4468]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 20:59:46.552619 systemd-logind[1417]: New session 112 of user core.
Feb 13 20:59:46.563339 kubelet[2505]: E0213 20:59:46.563293 2505 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 13 20:59:46.568309 systemd[1]: Started session-112.scope - Session 112 of User core.
Feb 13 20:59:46.672801 sshd[4468]: pam_unix(sshd:session): session closed for user core
Feb 13 20:59:46.675958 systemd[1]: sshd@111-10.0.0.6:22-10.0.0.1:52478.service: Deactivated successfully.
Feb 13 20:59:46.678698 systemd[1]: session-112.scope: Deactivated successfully.
Feb 13 20:59:46.679505 systemd-logind[1417]: Session 112 logged out. Waiting for processes to exit.
Feb 13 20:59:46.680420 systemd-logind[1417]: Removed session 112.
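The dns.go:153 errors above recur because the node's resolv.conf lists more nameservers than the resolver limit kubelet honors (three), so only 1.1.1.1, 1.0.0.1 and 8.8.8.8 are applied and the rest are dropped. A minimal sketch of that check, assuming the conventional /etc/resolv.conf path and the common default limit of 3:

```python
# Sketch of the condition behind the dns.go "Nameserver limits exceeded"
# entries: more "nameserver" lines in resolv.conf than the limit of 3.
# Path and limit are assumptions (common glibc/kubelet defaults).
MAX_NAMESERVERS = 3

def check_resolv_conf(path="/etc/resolv.conf"):
    nameservers = []
    with open(path) as f:
        for line in f:
            fields = line.split()
            if len(fields) >= 2 and fields[0] == "nameserver":
                nameservers.append(fields[1])
    if len(nameservers) > MAX_NAMESERVERS:
        applied = " ".join(nameservers[:MAX_NAMESERVERS])
        print(f"Nameserver limits exceeded: {len(nameservers)} configured, "
              f"the applied nameserver line is: {applied}")
    return nameservers

if __name__ == "__main__":
    check_resolv_conf()
```

Trimming the node's resolv.conf to three nameservers would silence this particular error; the message is a warning about dropped resolvers, not a hard failure.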
Feb 13 20:59:46.716718 kubelet[2505]: E0213 20:59:46.716678 2505 kubelet.go:2900] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Feb 13 20:59:47.563883 kubelet[2505]: E0213 20:59:47.563741 2505 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 13 20:59:48.563062 kubelet[2505]: E0213 20:59:48.562953 2505 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 13 20:59:48.563780 kubelet[2505]: E0213 20:59:48.563748 2505 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"install-cni-plugin\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/flannel/flannel-cni-plugin:v1.1.2\\\"\"" pod="kube-flannel/kube-flannel-ds-vgntp" podUID="f7a2b434-ec01-4de2-9f31-4fa14d12ea17"
Feb 13 20:59:51.686706 systemd[1]: Started sshd@112-10.0.0.6:22-10.0.0.1:52490.service - OpenSSH per-connection server daemon (10.0.0.1:52490).
Feb 13 20:59:51.717689 kubelet[2505]: E0213 20:59:51.717656 2505 kubelet.go:2900] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Feb 13 20:59:51.718866 sshd[4482]: Accepted publickey for core from 10.0.0.1 port 52490 ssh2: RSA SHA256:cMrn/QNbOj3/muQN24sDQmRTV2Hbz1QD+PbuA/uipHg
Feb 13 20:59:51.719970 sshd[4482]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 20:59:51.723205 systemd-logind[1417]: New session 113 of user core.
Feb 13 20:59:51.733310 systemd[1]: Started session-113.scope - Session 113 of User core.
Feb 13 20:59:51.840781 sshd[4482]: pam_unix(sshd:session): session closed for user core
Feb 13 20:59:51.844470 systemd[1]: sshd@112-10.0.0.6:22-10.0.0.1:52490.service: Deactivated successfully.
Feb 13 20:59:51.846748 systemd[1]: session-113.scope: Deactivated successfully.
Feb 13 20:59:51.847545 systemd-logind[1417]: Session 113 logged out. Waiting for processes to exit.
Feb 13 20:59:51.848421 systemd-logind[1417]: Removed session 113.
Feb 13 20:59:56.719351 kubelet[2505]: E0213 20:59:56.719304 2505 kubelet.go:2900] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Feb 13 20:59:56.850775 systemd[1]: Started sshd@113-10.0.0.6:22-10.0.0.1:35830.service - OpenSSH per-connection server daemon (10.0.0.1:35830).
Feb 13 20:59:56.882341 sshd[4496]: Accepted publickey for core from 10.0.0.1 port 35830 ssh2: RSA SHA256:cMrn/QNbOj3/muQN24sDQmRTV2Hbz1QD+PbuA/uipHg
Feb 13 20:59:56.883477 sshd[4496]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 20:59:56.887405 systemd-logind[1417]: New session 114 of user core.
Feb 13 20:59:56.904312 systemd[1]: Started session-114.scope - Session 114 of User core.
Feb 13 20:59:57.007113 sshd[4496]: pam_unix(sshd:session): session closed for user core
Feb 13 20:59:57.010514 systemd[1]: sshd@113-10.0.0.6:22-10.0.0.1:35830.service: Deactivated successfully.
Feb 13 20:59:57.012171 systemd[1]: session-114.scope: Deactivated successfully.
Feb 13 20:59:57.012803 systemd-logind[1417]: Session 114 logged out. Waiting for processes to exit.
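The recurring kubelet.go:2900 "Container runtime network not ready" entries persist because no CNI configuration has been installed on this node yet; kubelet keeps pod networking in NetworkPluginNotReady until one appears. A minimal sketch of that readiness condition, assuming the conventional default directories /etc/cni/net.d and /opt/cni/bin (an actual kubelet may be configured with different paths):

```python
# Sketch of the check behind "cni plugin not initialized": kubelet considers
# pod networking ready once a CNI network config (and plugin binaries) exist.
# Both directories below are assumed defaults, not read from kubelet config.
import os

CNI_CONF_DIR = "/etc/cni/net.d"  # assumed default config dir
CNI_BIN_DIR = "/opt/cni/bin"     # assumed default plugin binary dir

def network_ready():
    confs = []
    if os.path.isdir(CNI_CONF_DIR):
        confs = [f for f in os.listdir(CNI_CONF_DIR)
                 if f.endswith((".conf", ".conflist", ".json"))]
    plugins = os.listdir(CNI_BIN_DIR) if os.path.isdir(CNI_BIN_DIR) else []
    if not confs:
        print(f"cni plugin not initialized: no network config in {CNI_CONF_DIR}")
    if not plugins:
        print(f"no CNI plugin binaries in {CNI_BIN_DIR}")
    return bool(confs and plugins)

if __name__ == "__main__":
    print("network ready:", network_ready())
```

On this node the config never arrives because the flannel pod that would install it is itself failing, as the pod_workers.go entries show.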
Feb 13 20:59:57.013523 systemd-logind[1417]: Removed session 114.
Feb 13 21:00:00.562882 kubelet[2505]: E0213 21:00:00.562837 2505 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 13 21:00:00.563495 kubelet[2505]: E0213 21:00:00.563450 2505 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"install-cni-plugin\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/flannel/flannel-cni-plugin:v1.1.2\\\"\"" pod="kube-flannel/kube-flannel-ds-vgntp" podUID="f7a2b434-ec01-4de2-9f31-4fa14d12ea17"
Feb 13 21:00:01.720663 kubelet[2505]: E0213 21:00:01.720629 2505 kubelet.go:2900] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Feb 13 21:00:02.017738 systemd[1]: Started sshd@114-10.0.0.6:22-10.0.0.1:35832.service - OpenSSH per-connection server daemon (10.0.0.1:35832).
Feb 13 21:00:02.049751 sshd[4512]: Accepted publickey for core from 10.0.0.1 port 35832 ssh2: RSA SHA256:cMrn/QNbOj3/muQN24sDQmRTV2Hbz1QD+PbuA/uipHg
Feb 13 21:00:02.050899 sshd[4512]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 21:00:02.055253 systemd-logind[1417]: New session 115 of user core.
Feb 13 21:00:02.060312 systemd[1]: Started session-115.scope - Session 115 of User core.
Feb 13 21:00:02.165324 sshd[4512]: pam_unix(sshd:session): session closed for user core
Feb 13 21:00:02.168733 systemd[1]: sshd@114-10.0.0.6:22-10.0.0.1:35832.service: Deactivated successfully.
Feb 13 21:00:02.170359 systemd[1]: session-115.scope: Deactivated successfully.
Feb 13 21:00:02.170868 systemd-logind[1417]: Session 115 logged out. Waiting for processes to exit.
Feb 13 21:00:02.171701 systemd-logind[1417]: Removed session 115.
Feb 13 21:00:06.721939 kubelet[2505]: E0213 21:00:06.721887 2505 kubelet.go:2900] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Feb 13 21:00:07.175681 systemd[1]: Started sshd@115-10.0.0.6:22-10.0.0.1:42690.service - OpenSSH per-connection server daemon (10.0.0.1:42690).
Feb 13 21:00:07.207199 sshd[4527]: Accepted publickey for core from 10.0.0.1 port 42690 ssh2: RSA SHA256:cMrn/QNbOj3/muQN24sDQmRTV2Hbz1QD+PbuA/uipHg
Feb 13 21:00:07.208402 sshd[4527]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 21:00:07.211667 systemd-logind[1417]: New session 116 of user core.
Feb 13 21:00:07.227314 systemd[1]: Started session-116.scope - Session 116 of User core.
Feb 13 21:00:07.331788 sshd[4527]: pam_unix(sshd:session): session closed for user core
Feb 13 21:00:07.334828 systemd[1]: sshd@115-10.0.0.6:22-10.0.0.1:42690.service: Deactivated successfully.
Feb 13 21:00:07.337139 systemd[1]: session-116.scope: Deactivated successfully.
Feb 13 21:00:07.337730 systemd-logind[1417]: Session 116 logged out. Waiting for processes to exit.
Feb 13 21:00:07.338543 systemd-logind[1417]: Removed session 116.
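The pod_workers.go:1298 entries show why the CNI config never arrives: the flannel DaemonSet's install-cni-plugin container cannot pull docker.io/flannel/flannel-cni-plugin:v1.1.2 and sits in ImagePullBackOff. Kubelet retries failed pulls with exponential back-off; the sketch below models that schedule under assumed defaults (10 s initial delay, doubling to a 5-minute cap; the exact values are version-dependent, not taken from this log):

```python
# Sketch of ImagePullBackOff retry timing. INITIAL_BACKOFF_S and
# MAX_BACKOFF_S are assumed kubelet defaults, not values from this journal.
INITIAL_BACKOFF_S = 10
MAX_BACKOFF_S = 300

def backoff_schedule(attempts):
    delay = INITIAL_BACKOFF_S
    schedule = []
    for _ in range(attempts):
        schedule.append(delay)
        delay = min(delay * 2, MAX_BACKOFF_S)
    return schedule

# e.g. backoff_schedule(7) -> [10, 20, 40, 80, 160, 300, 300]
```

The "Error syncing pod, skipping" lines themselves come from the pod sync loop, so they repeat more often than the pulls; each one just reports that the container is still waiting out its back-off.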
Feb 13 21:00:11.723467 kubelet[2505]: E0213 21:00:11.723426 2505 kubelet.go:2900] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Feb 13 21:00:12.344043 systemd[1]: Started sshd@116-10.0.0.6:22-10.0.0.1:42706.service - OpenSSH per-connection server daemon (10.0.0.1:42706).
Feb 13 21:00:12.377116 sshd[4541]: Accepted publickey for core from 10.0.0.1 port 42706 ssh2: RSA SHA256:cMrn/QNbOj3/muQN24sDQmRTV2Hbz1QD+PbuA/uipHg
Feb 13 21:00:12.378334 sshd[4541]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 21:00:12.382467 systemd-logind[1417]: New session 117 of user core.
Feb 13 21:00:12.394324 systemd[1]: Started session-117.scope - Session 117 of User core.
Feb 13 21:00:12.499249 sshd[4541]: pam_unix(sshd:session): session closed for user core
Feb 13 21:00:12.502418 systemd[1]: sshd@116-10.0.0.6:22-10.0.0.1:42706.service: Deactivated successfully.
Feb 13 21:00:12.504215 systemd[1]: session-117.scope: Deactivated successfully.
Feb 13 21:00:12.505429 systemd-logind[1417]: Session 117 logged out. Waiting for processes to exit.
Feb 13 21:00:12.506230 systemd-logind[1417]: Removed session 117.
Feb 13 21:00:13.563576 kubelet[2505]: E0213 21:00:13.563338 2505 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 13 21:00:13.564082 kubelet[2505]: E0213 21:00:13.563938 2505 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"install-cni-plugin\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/flannel/flannel-cni-plugin:v1.1.2\\\"\"" pod="kube-flannel/kube-flannel-ds-vgntp" podUID="f7a2b434-ec01-4de2-9f31-4fa14d12ea17"
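Sessions 106 through 117 above each live for roughly a tenth of a second, opening and closing about every five seconds from 10.0.0.1 with the same key: the signature of scripted automation (a health probe or similar) rather than interactive use. A minimal sketch that pairs the systemd-logind "New session" / "Removed session" entries to compute per-session durations, assuming the exact timestamp layout seen in this journal (the year is not in the log line, so it is passed in explicitly):

```python
# Sketch: pair "New session N" / "Removed session N" journal entries to
# measure SSH session lifetimes in this log format.
import re
from datetime import datetime

ENTRY = re.compile(r"^(\w{3} +\d+ \d{2}:\d{2}:\d{2}\.\d+) .*?"
                   r"(New|Removed) session (\d+)")

def session_durations(lines, year=2025):
    opened, durations = {}, {}
    for line in lines:
        m = ENTRY.match(line)
        if not m:
            continue
        ts = datetime.strptime(f"{year} {m.group(1)}",
                               "%Y %b %d %H:%M:%S.%f")
        sid = m.group(3)
        if m.group(2) == "New":
            opened[sid] = ts
        elif sid in opened:
            durations[sid] = (ts - opened.pop(sid)).total_seconds()
    return durations

# e.g. session 107 above: opened 20:59:20.758390, removed 20:59:20.875418,
# giving a duration of about 0.117 seconds.
```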