Feb 13 20:38:56.887509 kernel: Booting Linux on physical CPU 0x0000000000 [0x413fd0c1]
Feb 13 20:38:56.887530 kernel: Linux version 6.6.74-flatcar (build@pony-truck.infra.kinvolk.io) (aarch64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p17) 13.3.1 20240614, GNU ld (Gentoo 2.42 p3) 2.42.0) #1 SMP PREEMPT Thu Feb 13 18:13:29 -00 2025
Feb 13 20:38:56.887540 kernel: KASLR enabled
Feb 13 20:38:56.887546 kernel: efi: EFI v2.7 by EDK II
Feb 13 20:38:56.887552 kernel: efi: SMBIOS 3.0=0xdced0000 MEMATTR=0xdba86018 ACPI 2.0=0xd9710018 RNG=0xd971e498 MEMRESERVE=0xd9b43d18
Feb 13 20:38:56.887558 kernel: random: crng init done
Feb 13 20:38:56.887565 kernel: ACPI: Early table checksum verification disabled
Feb 13 20:38:56.887572 kernel: ACPI: RSDP 0x00000000D9710018 000024 (v02 BOCHS )
Feb 13 20:38:56.887578 kernel: ACPI: XSDT 0x00000000D971FE98 000064 (v01 BOCHS BXPC 00000001 01000013)
Feb 13 20:38:56.887586 kernel: ACPI: FACP 0x00000000D971FA98 000114 (v06 BOCHS BXPC 00000001 BXPC 00000001)
Feb 13 20:38:56.887593 kernel: ACPI: DSDT 0x00000000D9717518 0014A2 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Feb 13 20:38:56.887599 kernel: ACPI: APIC 0x00000000D971FC18 0001A8 (v04 BOCHS BXPC 00000001 BXPC 00000001)
Feb 13 20:38:56.887605 kernel: ACPI: PPTT 0x00000000D971D898 00009C (v02 BOCHS BXPC 00000001 BXPC 00000001)
Feb 13 20:38:56.887611 kernel: ACPI: GTDT 0x00000000D971E818 000060 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Feb 13 20:38:56.887619 kernel: ACPI: MCFG 0x00000000D971E918 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001)
Feb 13 20:38:56.887627 kernel: ACPI: SPCR 0x00000000D971FF98 000050 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Feb 13 20:38:56.887634 kernel: ACPI: DBG2 0x00000000D971E418 000057 (v00 BOCHS BXPC 00000001 BXPC 00000001)
Feb 13 20:38:56.887641 kernel: ACPI: IORT 0x00000000D971E718 000080 (v03 BOCHS BXPC 00000001 BXPC 00000001)
Feb 13 20:38:56.887647 kernel: ACPI: SPCR: console: pl011,mmio,0x9000000,9600
Feb 13 20:38:56.887654 kernel: NUMA: Failed to initialise from firmware
Feb 13 20:38:56.887660 kernel: NUMA: Faking a node at [mem 0x0000000040000000-0x00000000dcffffff]
Feb 13 20:38:56.887667 kernel: NUMA: NODE_DATA [mem 0xdc959800-0xdc95efff]
Feb 13 20:38:56.887673 kernel: Zone ranges:
Feb 13 20:38:56.887680 kernel: DMA [mem 0x0000000040000000-0x00000000dcffffff]
Feb 13 20:38:56.887686 kernel: DMA32 empty
Feb 13 20:38:56.887694 kernel: Normal empty
Feb 13 20:38:56.887701 kernel: Movable zone start for each node
Feb 13 20:38:56.887708 kernel: Early memory node ranges
Feb 13 20:38:56.887714 kernel: node 0: [mem 0x0000000040000000-0x00000000d976ffff]
Feb 13 20:38:56.887721 kernel: node 0: [mem 0x00000000d9770000-0x00000000d9b3ffff]
Feb 13 20:38:56.887734 kernel: node 0: [mem 0x00000000d9b40000-0x00000000dce1ffff]
Feb 13 20:38:56.887741 kernel: node 0: [mem 0x00000000dce20000-0x00000000dceaffff]
Feb 13 20:38:56.887748 kernel: node 0: [mem 0x00000000dceb0000-0x00000000dcebffff]
Feb 13 20:38:56.887754 kernel: node 0: [mem 0x00000000dcec0000-0x00000000dcfdffff]
Feb 13 20:38:56.887761 kernel: node 0: [mem 0x00000000dcfe0000-0x00000000dcffffff]
Feb 13 20:38:56.887767 kernel: Initmem setup node 0 [mem 0x0000000040000000-0x00000000dcffffff]
Feb 13 20:38:56.887774 kernel: On node 0, zone DMA: 12288 pages in unavailable ranges
Feb 13 20:38:56.887782 kernel: psci: probing for conduit method from ACPI.
Feb 13 20:38:56.887789 kernel: psci: PSCIv1.1 detected in firmware.
Feb 13 20:38:56.887796 kernel: psci: Using standard PSCI v0.2 function IDs
Feb 13 20:38:56.887805 kernel: psci: Trusted OS migration not required
Feb 13 20:38:56.887812 kernel: psci: SMC Calling Convention v1.1
Feb 13 20:38:56.887819 kernel: smccc: KVM: hypervisor services detected (0x00000000 0x00000000 0x00000000 0x00000003)
Feb 13 20:38:56.887827 kernel: percpu: Embedded 31 pages/cpu s86696 r8192 d32088 u126976
Feb 13 20:38:56.887855 kernel: pcpu-alloc: s86696 r8192 d32088 u126976 alloc=31*4096
Feb 13 20:38:56.887863 kernel: pcpu-alloc: [0] 0 [0] 1 [0] 2 [0] 3
Feb 13 20:38:56.887870 kernel: Detected PIPT I-cache on CPU0
Feb 13 20:38:56.887877 kernel: CPU features: detected: GIC system register CPU interface
Feb 13 20:38:56.887884 kernel: CPU features: detected: Hardware dirty bit management
Feb 13 20:38:56.887891 kernel: CPU features: detected: Spectre-v4
Feb 13 20:38:56.887909 kernel: CPU features: detected: Spectre-BHB
Feb 13 20:38:56.887917 kernel: CPU features: kernel page table isolation forced ON by KASLR
Feb 13 20:38:56.887924 kernel: CPU features: detected: Kernel page table isolation (KPTI)
Feb 13 20:38:56.887933 kernel: CPU features: detected: ARM erratum 1418040
Feb 13 20:38:56.887940 kernel: CPU features: detected: SSBS not fully self-synchronizing
Feb 13 20:38:56.887947 kernel: alternatives: applying boot alternatives
Feb 13 20:38:56.887955 kernel: Kernel command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected acpi=force verity.usrhash=c15c751c06cfb933aa98417326b93d899c08a83ce060a940cd01082629c201a7
Feb 13 20:38:56.887963 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Feb 13 20:38:56.887970 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Feb 13 20:38:56.887977 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Feb 13 20:38:56.887984 kernel: Fallback order for Node 0: 0
Feb 13 20:38:56.887991 kernel: Built 1 zonelists, mobility grouping on. Total pages: 633024
Feb 13 20:38:56.887998 kernel: Policy zone: DMA
Feb 13 20:38:56.888005 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Feb 13 20:38:56.888013 kernel: software IO TLB: area num 4.
Feb 13 20:38:56.888020 kernel: software IO TLB: mapped [mem 0x00000000d2e00000-0x00000000d6e00000] (64MB)
Feb 13 20:38:56.888028 kernel: Memory: 2386536K/2572288K available (10240K kernel code, 2186K rwdata, 8096K rodata, 39360K init, 897K bss, 185752K reserved, 0K cma-reserved)
Feb 13 20:38:56.888035 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1
Feb 13 20:38:56.888042 kernel: rcu: Preemptible hierarchical RCU implementation.
Feb 13 20:38:56.888049 kernel: rcu: RCU event tracing is enabled.
Feb 13 20:38:56.888056 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4.
Feb 13 20:38:56.888063 kernel: Trampoline variant of Tasks RCU enabled.
Feb 13 20:38:56.888070 kernel: Tracing variant of Tasks RCU enabled.
Feb 13 20:38:56.888077 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Feb 13 20:38:56.888084 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4
Feb 13 20:38:56.888091 kernel: NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0
Feb 13 20:38:56.888099 kernel: GICv3: 256 SPIs implemented
Feb 13 20:38:56.888106 kernel: GICv3: 0 Extended SPIs implemented
Feb 13 20:38:56.888113 kernel: Root IRQ handler: gic_handle_irq
Feb 13 20:38:56.888120 kernel: GICv3: GICv3 features: 16 PPIs, DirectLPI
Feb 13 20:38:56.888127 kernel: GICv3: CPU0: found redistributor 0 region 0:0x00000000080a0000
Feb 13 20:38:56.888135 kernel: ITS [mem 0x08080000-0x0809ffff]
Feb 13 20:38:56.888144 kernel: ITS@0x0000000008080000: allocated 8192 Devices @400c0000 (indirect, esz 8, psz 64K, shr 1)
Feb 13 20:38:56.888152 kernel: ITS@0x0000000008080000: allocated 8192 Interrupt Collections @400d0000 (flat, esz 8, psz 64K, shr 1)
Feb 13 20:38:56.888158 kernel: GICv3: using LPI property table @0x00000000400f0000
Feb 13 20:38:56.888165 kernel: GICv3: CPU0: using allocated LPI pending table @0x0000000040100000
Feb 13 20:38:56.888172 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Feb 13 20:38:56.888181 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Feb 13 20:38:56.888188 kernel: arch_timer: cp15 timer(s) running at 25.00MHz (virt).
Feb 13 20:38:56.888195 kernel: clocksource: arch_sys_counter: mask: 0xffffffffffffff max_cycles: 0x5c40939b5, max_idle_ns: 440795202646 ns
Feb 13 20:38:56.888202 kernel: sched_clock: 56 bits at 25MHz, resolution 40ns, wraps every 4398046511100ns
Feb 13 20:38:56.888209 kernel: arm-pv: using stolen time PV
Feb 13 20:38:56.888217 kernel: Console: colour dummy device 80x25
Feb 13 20:38:56.888224 kernel: ACPI: Core revision 20230628
Feb 13 20:38:56.888231 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 50.00 BogoMIPS (lpj=25000)
Feb 13 20:38:56.888239 kernel: pid_max: default: 32768 minimum: 301
Feb 13 20:38:56.888246 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity
Feb 13 20:38:56.888255 kernel: landlock: Up and running.
Feb 13 20:38:56.888262 kernel: SELinux: Initializing.
Feb 13 20:38:56.888269 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Feb 13 20:38:56.888277 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Feb 13 20:38:56.888284 kernel: RCU Tasks: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Feb 13 20:38:56.888291 kernel: RCU Tasks Trace: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Feb 13 20:38:56.888299 kernel: rcu: Hierarchical SRCU implementation.
Feb 13 20:38:56.888306 kernel: rcu: Max phase no-delay instances is 400.
Feb 13 20:38:56.888313 kernel: Platform MSI: ITS@0x8080000 domain created
Feb 13 20:38:56.888322 kernel: PCI/MSI: ITS@0x8080000 domain created
Feb 13 20:38:56.888329 kernel: Remapping and enabling EFI services.
Feb 13 20:38:56.888336 kernel: smp: Bringing up secondary CPUs ...
Feb 13 20:38:56.888343 kernel: Detected PIPT I-cache on CPU1
Feb 13 20:38:56.888350 kernel: GICv3: CPU1: found redistributor 1 region 0:0x00000000080c0000
Feb 13 20:38:56.888357 kernel: GICv3: CPU1: using allocated LPI pending table @0x0000000040110000
Feb 13 20:38:56.888364 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Feb 13 20:38:56.888371 kernel: CPU1: Booted secondary processor 0x0000000001 [0x413fd0c1]
Feb 13 20:38:56.888379 kernel: Detected PIPT I-cache on CPU2
Feb 13 20:38:56.888386 kernel: GICv3: CPU2: found redistributor 2 region 0:0x00000000080e0000
Feb 13 20:38:56.888394 kernel: GICv3: CPU2: using allocated LPI pending table @0x0000000040120000
Feb 13 20:38:56.888402 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Feb 13 20:38:56.888413 kernel: CPU2: Booted secondary processor 0x0000000002 [0x413fd0c1]
Feb 13 20:38:56.888422 kernel: Detected PIPT I-cache on CPU3
Feb 13 20:38:56.888430 kernel: GICv3: CPU3: found redistributor 3 region 0:0x0000000008100000
Feb 13 20:38:56.888437 kernel: GICv3: CPU3: using allocated LPI pending table @0x0000000040130000
Feb 13 20:38:56.888445 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Feb 13 20:38:56.888452 kernel: CPU3: Booted secondary processor 0x0000000003 [0x413fd0c1]
Feb 13 20:38:56.888460 kernel: smp: Brought up 1 node, 4 CPUs
Feb 13 20:38:56.888468 kernel: SMP: Total of 4 processors activated.
Feb 13 20:38:56.888476 kernel: CPU features: detected: 32-bit EL0 Support
Feb 13 20:38:56.888483 kernel: CPU features: detected: Data cache clean to the PoU not required for I/D coherence
Feb 13 20:38:56.888491 kernel: CPU features: detected: Common not Private translations
Feb 13 20:38:56.888498 kernel: CPU features: detected: CRC32 instructions
Feb 13 20:38:56.888506 kernel: CPU features: detected: Enhanced Virtualization Traps
Feb 13 20:38:56.888513 kernel: CPU features: detected: RCpc load-acquire (LDAPR)
Feb 13 20:38:56.888520 kernel: CPU features: detected: LSE atomic instructions
Feb 13 20:38:56.888529 kernel: CPU features: detected: Privileged Access Never
Feb 13 20:38:56.888537 kernel: CPU features: detected: RAS Extension Support
Feb 13 20:38:56.888544 kernel: CPU features: detected: Speculative Store Bypassing Safe (SSBS)
Feb 13 20:38:56.888552 kernel: CPU: All CPU(s) started at EL1
Feb 13 20:38:56.888559 kernel: alternatives: applying system-wide alternatives
Feb 13 20:38:56.888566 kernel: devtmpfs: initialized
Feb 13 20:38:56.888574 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Feb 13 20:38:56.888582 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear)
Feb 13 20:38:56.888589 kernel: pinctrl core: initialized pinctrl subsystem
Feb 13 20:38:56.888598 kernel: SMBIOS 3.0.0 present.
Feb 13 20:38:56.888605 kernel: DMI: QEMU KVM Virtual Machine, BIOS edk2-20230524-3.fc38 05/24/2023
Feb 13 20:38:56.888613 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Feb 13 20:38:56.888621 kernel: DMA: preallocated 512 KiB GFP_KERNEL pool for atomic allocations
Feb 13 20:38:56.888628 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations
Feb 13 20:38:56.888636 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations
Feb 13 20:38:56.888644 kernel: audit: initializing netlink subsys (disabled)
Feb 13 20:38:56.888651 kernel: thermal_sys: Registered thermal governor 'step_wise'
Feb 13 20:38:56.888659 kernel: audit: type=2000 audit(0.024:1): state=initialized audit_enabled=0 res=1
Feb 13 20:38:56.888669 kernel: cpuidle: using governor menu
Feb 13 20:38:56.888679 kernel: hw-breakpoint: found 6 breakpoint and 4 watchpoint registers.
Feb 13 20:38:56.888688 kernel: ASID allocator initialised with 32768 entries
Feb 13 20:38:56.888695 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Feb 13 20:38:56.888703 kernel: Serial: AMBA PL011 UART driver
Feb 13 20:38:56.888710 kernel: Modules: 2G module region forced by RANDOMIZE_MODULE_REGION_FULL
Feb 13 20:38:56.888717 kernel: Modules: 0 pages in range for non-PLT usage
Feb 13 20:38:56.888729 kernel: Modules: 509040 pages in range for PLT usage
Feb 13 20:38:56.888738 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Feb 13 20:38:56.888748 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 1.00 GiB page
Feb 13 20:38:56.888756 kernel: HugeTLB: registered 32.0 MiB page size, pre-allocated 0 pages
Feb 13 20:38:56.888764 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 32.0 MiB page
Feb 13 20:38:56.888771 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Feb 13 20:38:56.888779 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 2.00 MiB page
Feb 13 20:38:56.888786 kernel: HugeTLB: registered 64.0 KiB page size, pre-allocated 0 pages
Feb 13 20:38:56.888794 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 64.0 KiB page
Feb 13 20:38:56.888801 kernel: ACPI: Added _OSI(Module Device)
Feb 13 20:38:56.888808 kernel: ACPI: Added _OSI(Processor Device)
Feb 13 20:38:56.888817 kernel: ACPI: Added _OSI(3.0 _SCP Extensions)
Feb 13 20:38:56.888825 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Feb 13 20:38:56.888832 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Feb 13 20:38:56.888840 kernel: ACPI: Interpreter enabled
Feb 13 20:38:56.888847 kernel: ACPI: Using GIC for interrupt routing
Feb 13 20:38:56.888854 kernel: ACPI: MCFG table detected, 1 entries
Feb 13 20:38:56.888862 kernel: ARMH0011:00: ttyAMA0 at MMIO 0x9000000 (irq = 12, base_baud = 0) is a SBSA
Feb 13 20:38:56.888869 kernel: printk: console [ttyAMA0] enabled
Feb 13 20:38:56.888877 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Feb 13 20:38:56.889773 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Feb 13 20:38:56.889863 kernel: acpi PNP0A08:00: _OSC: platform does not support [LTR]
Feb 13 20:38:56.889969 kernel: acpi PNP0A08:00: _OSC: OS now controls [PCIeHotplug PME AER PCIeCapability]
Feb 13 20:38:56.890050 kernel: acpi PNP0A08:00: ECAM area [mem 0x4010000000-0x401fffffff] reserved by PNP0C02:00
Feb 13 20:38:56.890124 kernel: acpi PNP0A08:00: ECAM at [mem 0x4010000000-0x401fffffff] for [bus 00-ff]
Feb 13 20:38:56.890135 kernel: ACPI: Remapped I/O 0x000000003eff0000 to [io 0x0000-0xffff window]
Feb 13 20:38:56.890143 kernel: PCI host bridge to bus 0000:00
Feb 13 20:38:56.890232 kernel: pci_bus 0000:00: root bus resource [mem 0x10000000-0x3efeffff window]
Feb 13 20:38:56.890297 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0xffff window]
Feb 13 20:38:56.890364 kernel: pci_bus 0000:00: root bus resource [mem 0x8000000000-0xffffffffff window]
Feb 13 20:38:56.890430 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Feb 13 20:38:56.890523 kernel: pci 0000:00:00.0: [1b36:0008] type 00 class 0x060000
Feb 13 20:38:56.890610 kernel: pci 0000:00:01.0: [1af4:1005] type 00 class 0x00ff00
Feb 13 20:38:56.890691 kernel: pci 0000:00:01.0: reg 0x10: [io 0x0000-0x001f]
Feb 13 20:38:56.890772 kernel: pci 0000:00:01.0: reg 0x14: [mem 0x10000000-0x10000fff]
Feb 13 20:38:56.890848 kernel: pci 0000:00:01.0: reg 0x20: [mem 0x8000000000-0x8000003fff 64bit pref]
Feb 13 20:38:56.890987 kernel: pci 0000:00:01.0: BAR 4: assigned [mem 0x8000000000-0x8000003fff 64bit pref]
Feb 13 20:38:56.891062 kernel: pci 0000:00:01.0: BAR 1: assigned [mem 0x10000000-0x10000fff]
Feb 13 20:38:56.891132 kernel: pci 0000:00:01.0: BAR 0: assigned [io 0x1000-0x101f]
Feb 13 20:38:56.891195 kernel: pci_bus 0000:00: resource 4 [mem 0x10000000-0x3efeffff window]
Feb 13 20:38:56.891257 kernel: pci_bus 0000:00: resource 5 [io 0x0000-0xffff window]
Feb 13 20:38:56.891325 kernel: pci_bus 0000:00: resource 6 [mem 0x8000000000-0xffffffffff window]
Feb 13 20:38:56.891335 kernel: ACPI: PCI: Interrupt link GSI0 configured for IRQ 35
Feb 13 20:38:56.891343 kernel: ACPI: PCI: Interrupt link GSI1 configured for IRQ 36
Feb 13 20:38:56.891351 kernel: ACPI: PCI: Interrupt link GSI2 configured for IRQ 37
Feb 13 20:38:56.891358 kernel: ACPI: PCI: Interrupt link GSI3 configured for IRQ 38
Feb 13 20:38:56.891366 kernel: iommu: Default domain type: Translated
Feb 13 20:38:56.891374 kernel: iommu: DMA domain TLB invalidation policy: strict mode
Feb 13 20:38:56.891381 kernel: efivars: Registered efivars operations
Feb 13 20:38:56.891390 kernel: vgaarb: loaded
Feb 13 20:38:56.891398 kernel: clocksource: Switched to clocksource arch_sys_counter
Feb 13 20:38:56.891406 kernel: VFS: Disk quotas dquot_6.6.0
Feb 13 20:38:56.891414 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Feb 13 20:38:56.891421 kernel: pnp: PnP ACPI init
Feb 13 20:38:56.891495 kernel: system 00:00: [mem 0x4010000000-0x401fffffff window] could not be reserved
Feb 13 20:38:56.891507 kernel: pnp: PnP ACPI: found 1 devices
Feb 13 20:38:56.891514 kernel: NET: Registered PF_INET protocol family
Feb 13 20:38:56.891524 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Feb 13 20:38:56.891532 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Feb 13 20:38:56.891540 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Feb 13 20:38:56.891547 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Feb 13 20:38:56.891555 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
Feb 13 20:38:56.891562 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Feb 13 20:38:56.891570 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Feb 13 20:38:56.891578 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Feb 13 20:38:56.891585 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Feb 13 20:38:56.891595 kernel: PCI: CLS 0 bytes, default 64
Feb 13 20:38:56.891602 kernel: kvm [1]: HYP mode not available
Feb 13 20:38:56.891610 kernel: Initialise system trusted keyrings
Feb 13 20:38:56.891617 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Feb 13 20:38:56.891624 kernel: Key type asymmetric registered
Feb 13 20:38:56.891632 kernel: Asymmetric key parser 'x509' registered
Feb 13 20:38:56.891640 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250)
Feb 13 20:38:56.891648 kernel: io scheduler mq-deadline registered
Feb 13 20:38:56.891655 kernel: io scheduler kyber registered
Feb 13 20:38:56.891664 kernel: io scheduler bfq registered
Feb 13 20:38:56.891672 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0C:00/input/input0
Feb 13 20:38:56.891679 kernel: ACPI: button: Power Button [PWRB]
Feb 13 20:38:56.891687 kernel: ACPI: \_SB_.PCI0.GSI1: Enabled at IRQ 36
Feb 13 20:38:56.891765 kernel: virtio-pci 0000:00:01.0: enabling device (0005 -> 0007)
Feb 13 20:38:56.891776 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Feb 13 20:38:56.891784 kernel: thunder_xcv, ver 1.0
Feb 13 20:38:56.891792 kernel: thunder_bgx, ver 1.0
Feb 13 20:38:56.891799 kernel: nicpf, ver 1.0
Feb 13 20:38:56.891809 kernel: nicvf, ver 1.0
Feb 13 20:38:56.891892 kernel: rtc-efi rtc-efi.0: registered as rtc0
Feb 13 20:38:56.891979 kernel: rtc-efi rtc-efi.0: setting system clock to 2025-02-13T20:38:56 UTC (1739479136)
Feb 13 20:38:56.891990 kernel: hid: raw HID events driver (C) Jiri Kosina
Feb 13 20:38:56.891998 kernel: hw perfevents: enabled with armv8_pmuv3_0 PMU driver, 7 counters available
Feb 13 20:38:56.892006 kernel: watchdog: Delayed init of the lockup detector failed: -19
Feb 13 20:38:56.892014 kernel: watchdog: Hard watchdog permanently disabled
Feb 13 20:38:56.892021 kernel: NET: Registered PF_INET6 protocol family
Feb 13 20:38:56.892033 kernel: Segment Routing with IPv6
Feb 13 20:38:56.892040 kernel: In-situ OAM (IOAM) with IPv6
Feb 13 20:38:56.892048 kernel: NET: Registered PF_PACKET protocol family
Feb 13 20:38:56.892055 kernel: Key type dns_resolver registered
Feb 13 20:38:56.892063 kernel: registered taskstats version 1
Feb 13 20:38:56.892070 kernel: Loading compiled-in X.509 certificates
Feb 13 20:38:56.892078 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.74-flatcar: 8bd805622262697b24b0fa7c407ae82c4289ceec'
Feb 13 20:38:56.892085 kernel: Key type .fscrypt registered
Feb 13 20:38:56.892092 kernel: Key type fscrypt-provisioning registered
Feb 13 20:38:56.892103 kernel: ima: No TPM chip found, activating TPM-bypass!
Feb 13 20:38:56.892111 kernel: ima: Allocated hash algorithm: sha1
Feb 13 20:38:56.892119 kernel: ima: No architecture policies found
Feb 13 20:38:56.892126 kernel: alg: No test for fips(ansi_cprng) (fips_ansi_cprng)
Feb 13 20:38:56.892134 kernel: clk: Disabling unused clocks
Feb 13 20:38:56.892141 kernel: Freeing unused kernel memory: 39360K
Feb 13 20:38:56.892148 kernel: Run /init as init process
Feb 13 20:38:56.892156 kernel: with arguments:
Feb 13 20:38:56.892163 kernel: /init
Feb 13 20:38:56.892172 kernel: with environment:
Feb 13 20:38:56.892179 kernel: HOME=/
Feb 13 20:38:56.892187 kernel: TERM=linux
Feb 13 20:38:56.892194 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
Feb 13 20:38:56.892204 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Feb 13 20:38:56.892213 systemd[1]: Detected virtualization kvm.
Feb 13 20:38:56.892222 systemd[1]: Detected architecture arm64.
Feb 13 20:38:56.892229 systemd[1]: Running in initrd.
Feb 13 20:38:56.892239 systemd[1]: No hostname configured, using default hostname.
Feb 13 20:38:56.892246 systemd[1]: Hostname set to .
Feb 13 20:38:56.892255 systemd[1]: Initializing machine ID from VM UUID.
Feb 13 20:38:56.892263 systemd[1]: Queued start job for default target initrd.target.
Feb 13 20:38:56.892271 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Feb 13 20:38:56.892279 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Feb 13 20:38:56.892288 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Feb 13 20:38:56.892296 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Feb 13 20:38:56.892306 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Feb 13 20:38:56.892314 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Feb 13 20:38:56.892324 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Feb 13 20:38:56.892332 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Feb 13 20:38:56.892340 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Feb 13 20:38:56.892348 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Feb 13 20:38:56.892358 systemd[1]: Reached target paths.target - Path Units.
Feb 13 20:38:56.892366 systemd[1]: Reached target slices.target - Slice Units.
Feb 13 20:38:56.892374 systemd[1]: Reached target swap.target - Swaps.
Feb 13 20:38:56.892382 systemd[1]: Reached target timers.target - Timer Units.
Feb 13 20:38:56.892391 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Feb 13 20:38:56.892399 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Feb 13 20:38:56.892407 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Feb 13 20:38:56.892415 systemd[1]: Listening on systemd-journald.socket - Journal Socket.
Feb 13 20:38:56.892424 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Feb 13 20:38:56.892433 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Feb 13 20:38:56.892441 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Feb 13 20:38:56.892449 systemd[1]: Reached target sockets.target - Socket Units.
Feb 13 20:38:56.892458 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Feb 13 20:38:56.892466 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Feb 13 20:38:56.892474 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Feb 13 20:38:56.892482 systemd[1]: Starting systemd-fsck-usr.service...
Feb 13 20:38:56.892490 systemd[1]: Starting systemd-journald.service - Journal Service...
Feb 13 20:38:56.892498 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Feb 13 20:38:56.892508 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Feb 13 20:38:56.892516 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Feb 13 20:38:56.892524 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Feb 13 20:38:56.892532 systemd[1]: Finished systemd-fsck-usr.service.
Feb 13 20:38:56.892541 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Feb 13 20:38:56.892551 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Feb 13 20:38:56.892559 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Feb 13 20:38:56.892568 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Feb 13 20:38:56.892576 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Feb 13 20:38:56.892601 systemd-journald[238]: Collecting audit messages is disabled.
Feb 13 20:38:56.892622 systemd-journald[238]: Journal started
Feb 13 20:38:56.892641 systemd-journald[238]: Runtime Journal (/run/log/journal/9e024ddba2ba4ef5bb14956ec632af76) is 5.9M, max 47.3M, 41.4M free.
Feb 13 20:38:56.867509 systemd-modules-load[239]: Inserted module 'overlay'
Feb 13 20:38:56.895450 systemd[1]: Started systemd-journald.service - Journal Service.
Feb 13 20:38:56.895488 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Feb 13 20:38:56.898944 kernel: Bridge firewalling registered
Feb 13 20:38:56.898889 systemd-modules-load[239]: Inserted module 'br_netfilter'
Feb 13 20:38:56.900064 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Feb 13 20:38:56.902952 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Feb 13 20:38:56.906053 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Feb 13 20:38:56.908081 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Feb 13 20:38:56.911518 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Feb 13 20:38:56.912839 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Feb 13 20:38:56.914053 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Feb 13 20:38:56.923771 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Feb 13 20:38:56.927994 dracut-cmdline[270]: dracut-dracut-053
Feb 13 20:38:56.935687 dracut-cmdline[270]: Using kernel command line parameters: rd.driver.pre=btrfs BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected acpi=force verity.usrhash=c15c751c06cfb933aa98417326b93d899c08a83ce060a940cd01082629c201a7
Feb 13 20:38:56.935080 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Feb 13 20:38:56.969635 systemd-resolved[283]: Positive Trust Anchors:
Feb 13 20:38:56.969652 systemd-resolved[283]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Feb 13 20:38:56.969685 systemd-resolved[283]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Feb 13 20:38:56.984368 systemd-resolved[283]: Defaulting to hostname 'linux'.
Feb 13 20:38:56.985714 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Feb 13 20:38:56.986652 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Feb 13 20:38:57.036929 kernel: SCSI subsystem initialized
Feb 13 20:38:57.042916 kernel: Loading iSCSI transport class v2.0-870.
Feb 13 20:38:57.050922 kernel: iscsi: registered transport (tcp)
Feb 13 20:38:57.067936 kernel: iscsi: registered transport (qla4xxx)
Feb 13 20:38:57.067977 kernel: QLogic iSCSI HBA Driver
Feb 13 20:38:57.109522 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Feb 13 20:38:57.120054 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Feb 13 20:38:57.134926 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Feb 13 20:38:57.134979 kernel: device-mapper: uevent: version 1.0.3
Feb 13 20:38:57.134999 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com
Feb 13 20:38:57.180924 kernel: raid6: neonx8 gen() 15745 MB/s
Feb 13 20:38:57.197911 kernel: raid6: neonx4 gen() 15648 MB/s
Feb 13 20:38:57.214910 kernel: raid6: neonx2 gen() 13233 MB/s
Feb 13 20:38:57.231911 kernel: raid6: neonx1 gen() 10454 MB/s
Feb 13 20:38:57.248908 kernel: raid6: int64x8 gen() 6943 MB/s
Feb 13 20:38:57.265909 kernel: raid6: int64x4 gen() 7340 MB/s
Feb 13 20:38:57.282911 kernel: raid6: int64x2 gen() 6105 MB/s
Feb 13 20:38:57.299908 kernel: raid6: int64x1 gen() 5040 MB/s
Feb 13 20:38:57.299923 kernel: raid6: using algorithm neonx8 gen() 15745 MB/s
Feb 13 20:38:57.316912 kernel: raid6: .... xor() 11912 MB/s, rmw enabled
Feb 13 20:38:57.316927 kernel: raid6: using neon recovery algorithm
Feb 13 20:38:57.321981 kernel: xor: measuring software checksum speed
Feb 13 20:38:57.321997 kernel: 8regs : 19816 MB/sec
Feb 13 20:38:57.323051 kernel: 32regs : 19664 MB/sec
Feb 13 20:38:57.323064 kernel: arm64_neon : 27061 MB/sec
Feb 13 20:38:57.323073 kernel: xor: using function: arm64_neon (27061 MB/sec)
Feb 13 20:38:57.375942 kernel: Btrfs loaded, zoned=no, fsverity=no
Feb 13 20:38:57.386963 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Feb 13 20:38:57.399045 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Feb 13 20:38:57.409909 systemd-udevd[461]: Using default interface naming scheme 'v255'.
Feb 13 20:38:57.413088 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Feb 13 20:38:57.415444 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Feb 13 20:38:57.430280 dracut-pre-trigger[466]: rd.md=0: removing MD RAID activation
Feb 13 20:38:57.456961 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Feb 13 20:38:57.475022 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Feb 13 20:38:57.514106 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Feb 13 20:38:57.520107 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Feb 13 20:38:57.531088 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Feb 13 20:38:57.532367 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Feb 13 20:38:57.533737 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Feb 13 20:38:57.535566 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Feb 13 20:38:57.546021 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Feb 13 20:38:57.556507 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Feb 13 20:38:57.560617 kernel: virtio_blk virtio1: 1/0/0 default/read/poll queues
Feb 13 20:38:57.576400 kernel: virtio_blk virtio1: [vda] 19775488 512-byte logical blocks (10.1 GB/9.43 GiB)
Feb 13 20:38:57.576508 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Feb 13 20:38:57.576520 kernel: GPT:9289727 != 19775487
Feb 13 20:38:57.576529 kernel: GPT:Alternate GPT header not at the end of the disk.
Feb 13 20:38:57.576538 kernel: GPT:9289727 != 19775487
Feb 13 20:38:57.576550 kernel: GPT: Use GNU Parted to correct GPT errors.
Feb 13 20:38:57.576560 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Feb 13 20:38:57.565666 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Feb 13 20:38:57.565780 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Feb 13 20:38:57.572311 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Feb 13 20:38:57.575647 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Feb 13 20:38:57.575781 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Feb 13 20:38:57.576649 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Feb 13 20:38:57.590927 kernel: BTRFS: device fsid 4bb2b262-8ef2-48e3-80f4-24f9d7a85bf6 devid 1 transid 40 /dev/vda3 scanned by (udev-worker) (522)
Feb 13 20:38:57.592939 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/vda6 scanned by (udev-worker) (521)
Feb 13 20:38:57.591193 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Feb 13 20:38:57.603250 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Feb 13 20:38:57.608678 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT.
Feb 13 20:38:57.613296 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM.
Feb 13 20:38:57.620085 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132.
Feb 13 20:38:57.620948 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A.
Feb 13 20:38:57.626726 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Feb 13 20:38:57.645044 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Feb 13 20:38:57.647063 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Feb 13 20:38:57.650750 disk-uuid[551]: Primary Header is updated.
Feb 13 20:38:57.650750 disk-uuid[551]: Secondary Entries is updated.
Feb 13 20:38:57.650750 disk-uuid[551]: Secondary Header is updated.
Feb 13 20:38:57.655926 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Feb 13 20:38:57.669926 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Feb 13 20:38:57.670101 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Feb 13 20:38:58.668630 disk-uuid[553]: The operation has completed successfully.
Feb 13 20:38:58.669494 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Feb 13 20:38:58.688862 systemd[1]: disk-uuid.service: Deactivated successfully.
Feb 13 20:38:58.688982 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Feb 13 20:38:58.712041 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
Feb 13 20:38:58.714736 sh[574]: Success
Feb 13 20:38:58.723923 kernel: device-mapper: verity: sha256 using implementation "sha256-ce"
Feb 13 20:38:58.750482 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
Feb 13 20:38:58.761031 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
Feb 13 20:38:58.762628 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
Feb 13 20:38:58.772352 kernel: BTRFS info (device dm-0): first mount of filesystem 4bb2b262-8ef2-48e3-80f4-24f9d7a85bf6
Feb 13 20:38:58.772390 kernel: BTRFS info (device dm-0): using crc32c (crc32c-generic) checksum algorithm
Feb 13 20:38:58.772401 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead
Feb 13 20:38:58.774164 kernel: BTRFS info (device dm-0): disabling log replay at mount time
Feb 13 20:38:58.774180 kernel: BTRFS info (device dm-0): using free space tree
Feb 13 20:38:58.777526 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
Feb 13 20:38:58.778622 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Feb 13 20:38:58.779319 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Feb 13 20:38:58.781816 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Feb 13 20:38:58.790443 kernel: BTRFS info (device vda6): first mount of filesystem 896fb6d3-4143-43a6-a44b-ca1ce10817e1
Feb 13 20:38:58.790486 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm
Feb 13 20:38:58.791126 kernel: BTRFS info (device vda6): using free space tree
Feb 13 20:38:58.792919 kernel: BTRFS info (device vda6): auto enabling async discard
Feb 13 20:38:58.802430 systemd[1]: mnt-oem.mount: Deactivated successfully.
Feb 13 20:38:58.803492 kernel: BTRFS info (device vda6): last unmount of filesystem 896fb6d3-4143-43a6-a44b-ca1ce10817e1
Feb 13 20:38:58.808545 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Feb 13 20:38:58.815070 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Feb 13 20:38:58.874805 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Feb 13 20:38:58.887022 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Feb 13 20:38:58.907052 ignition[670]: Ignition 2.19.0
Feb 13 20:38:58.907065 ignition[670]: Stage: fetch-offline
Feb 13 20:38:58.907104 ignition[670]: no configs at "/usr/lib/ignition/base.d"
Feb 13 20:38:58.907112 ignition[670]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Feb 13 20:38:58.907280 ignition[670]: parsed url from cmdline: ""
Feb 13 20:38:58.907286 ignition[670]: no config URL provided
Feb 13 20:38:58.907290 ignition[670]: reading system config file "/usr/lib/ignition/user.ign"
Feb 13 20:38:58.907297 ignition[670]: no config at "/usr/lib/ignition/user.ign"
Feb 13 20:38:58.907318 ignition[670]: op(1): [started] loading QEMU firmware config module
Feb 13 20:38:58.907322 ignition[670]: op(1): executing: "modprobe" "qemu_fw_cfg"
Feb 13 20:38:58.913221 systemd-networkd[767]: lo: Link UP
Feb 13 20:38:58.913231 systemd-networkd[767]: lo: Gained carrier
Feb 13 20:38:58.913938 systemd-networkd[767]: Enumeration completed
Feb 13 20:38:58.914157 systemd[1]: Started systemd-networkd.service - Network Configuration.
Feb 13 20:38:58.915311 ignition[670]: op(1): [finished] loading QEMU firmware config module
Feb 13 20:38:58.914422 systemd-networkd[767]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Feb 13 20:38:58.914425 systemd-networkd[767]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Feb 13 20:38:58.915286 systemd-networkd[767]: eth0: Link UP
Feb 13 20:38:58.915289 systemd-networkd[767]: eth0: Gained carrier
Feb 13 20:38:58.915295 systemd-networkd[767]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Feb 13 20:38:58.916077 systemd[1]: Reached target network.target - Network.
Feb 13 20:38:58.931948 systemd-networkd[767]: eth0: DHCPv4 address 10.0.0.8/16, gateway 10.0.0.1 acquired from 10.0.0.1
Feb 13 20:38:58.942421 ignition[670]: parsing config with SHA512: 146430996612b40a5f9335ec5d39dbafee55a8bbccfe215a4db88e18ffad18ec8ed2b354bc4029af357c15e1c6dba5ddaa914614d633ac63a5f936456b86659a
Feb 13 20:38:58.946292 unknown[670]: fetched base config from "system"
Feb 13 20:38:58.946302 unknown[670]: fetched user config from "qemu"
Feb 13 20:38:58.946720 ignition[670]: fetch-offline: fetch-offline passed
Feb 13 20:38:58.946785 ignition[670]: Ignition finished successfully
Feb 13 20:38:58.948854 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Feb 13 20:38:58.951003 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json).
Feb 13 20:38:58.962157 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Feb 13 20:38:58.972712 ignition[773]: Ignition 2.19.0
Feb 13 20:38:58.972736 ignition[773]: Stage: kargs
Feb 13 20:38:58.972930 ignition[773]: no configs at "/usr/lib/ignition/base.d"
Feb 13 20:38:58.972940 ignition[773]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Feb 13 20:38:58.973803 ignition[773]: kargs: kargs passed
Feb 13 20:38:58.973849 ignition[773]: Ignition finished successfully
Feb 13 20:38:58.976781 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Feb 13 20:38:58.991039 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Feb 13 20:38:59.000655 ignition[781]: Ignition 2.19.0
Feb 13 20:38:59.000667 ignition[781]: Stage: disks
Feb 13 20:38:59.000852 ignition[781]: no configs at "/usr/lib/ignition/base.d"
Feb 13 20:38:59.000863 ignition[781]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Feb 13 20:38:59.001750 ignition[781]: disks: disks passed
Feb 13 20:38:59.003157 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Feb 13 20:38:59.001799 ignition[781]: Ignition finished successfully
Feb 13 20:38:59.004174 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Feb 13 20:38:59.005126 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Feb 13 20:38:59.006569 systemd[1]: Reached target local-fs.target - Local File Systems.
Feb 13 20:38:59.007729 systemd[1]: Reached target sysinit.target - System Initialization.
Feb 13 20:38:59.009106 systemd[1]: Reached target basic.target - Basic System.
Feb 13 20:38:59.022046 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Feb 13 20:38:59.032633 systemd-fsck[791]: ROOT: clean, 14/553520 files, 52654/553472 blocks
Feb 13 20:38:59.036360 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Feb 13 20:38:59.047045 systemd[1]: Mounting sysroot.mount - /sysroot...
Feb 13 20:38:59.095923 kernel: EXT4-fs (vda9): mounted filesystem 9957d679-c6c4-49f4-b1b2-c3c1f3ba5699 r/w with ordered data mode. Quota mode: none.
Feb 13 20:38:59.096097 systemd[1]: Mounted sysroot.mount - /sysroot.
Feb 13 20:38:59.097206 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Feb 13 20:38:59.107998 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Feb 13 20:38:59.109608 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Feb 13 20:38:59.110684 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met.
Feb 13 20:38:59.110745 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Feb 13 20:38:59.118407 kernel: BTRFS: device label OEM devid 1 transid 13 /dev/vda6 scanned by mount (799)
Feb 13 20:38:59.118431 kernel: BTRFS info (device vda6): first mount of filesystem 896fb6d3-4143-43a6-a44b-ca1ce10817e1
Feb 13 20:38:59.118448 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm
Feb 13 20:38:59.118459 kernel: BTRFS info (device vda6): using free space tree
Feb 13 20:38:59.110768 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Feb 13 20:38:59.117866 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Feb 13 20:38:59.122157 kernel: BTRFS info (device vda6): auto enabling async discard
Feb 13 20:38:59.120049 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Feb 13 20:38:59.124926 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Feb 13 20:38:59.188813 initrd-setup-root[823]: cut: /sysroot/etc/passwd: No such file or directory
Feb 13 20:38:59.193365 initrd-setup-root[830]: cut: /sysroot/etc/group: No such file or directory
Feb 13 20:38:59.197582 initrd-setup-root[837]: cut: /sysroot/etc/shadow: No such file or directory
Feb 13 20:38:59.201181 initrd-setup-root[844]: cut: /sysroot/etc/gshadow: No such file or directory
Feb 13 20:38:59.300020 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Feb 13 20:38:59.305007 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Feb 13 20:38:59.306467 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Feb 13 20:38:59.311977 kernel: BTRFS info (device vda6): last unmount of filesystem 896fb6d3-4143-43a6-a44b-ca1ce10817e1
Feb 13 20:38:59.327942 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Feb 13 20:38:59.331402 systemd-resolved[283]: Detected conflict on linux IN A 10.0.0.8
Feb 13 20:38:59.331421 systemd-resolved[283]: Hostname conflict, changing published hostname from 'linux' to 'linux11'.
Feb 13 20:38:59.333436 ignition[912]: INFO : Ignition 2.19.0
Feb 13 20:38:59.333436 ignition[912]: INFO : Stage: mount
Feb 13 20:38:59.333436 ignition[912]: INFO : no configs at "/usr/lib/ignition/base.d"
Feb 13 20:38:59.333436 ignition[912]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Feb 13 20:38:59.336217 ignition[912]: INFO : mount: mount passed
Feb 13 20:38:59.336217 ignition[912]: INFO : Ignition finished successfully
Feb 13 20:38:59.336182 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Feb 13 20:38:59.342981 systemd[1]: Starting ignition-files.service - Ignition (files)...
Feb 13 20:38:59.771955 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Feb 13 20:38:59.780067 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Feb 13 20:38:59.784922 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/vda6 scanned by mount (925)
Feb 13 20:38:59.784976 kernel: BTRFS info (device vda6): first mount of filesystem 896fb6d3-4143-43a6-a44b-ca1ce10817e1
Feb 13 20:38:59.786430 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm
Feb 13 20:38:59.786449 kernel: BTRFS info (device vda6): using free space tree
Feb 13 20:38:59.788912 kernel: BTRFS info (device vda6): auto enabling async discard
Feb 13 20:38:59.789778 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Feb 13 20:38:59.804522 ignition[942]: INFO : Ignition 2.19.0
Feb 13 20:38:59.804522 ignition[942]: INFO : Stage: files
Feb 13 20:38:59.805688 ignition[942]: INFO : no configs at "/usr/lib/ignition/base.d"
Feb 13 20:38:59.805688 ignition[942]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Feb 13 20:38:59.805688 ignition[942]: DEBUG : files: compiled without relabeling support, skipping
Feb 13 20:38:59.808118 ignition[942]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Feb 13 20:38:59.808118 ignition[942]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Feb 13 20:38:59.810629 ignition[942]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Feb 13 20:38:59.811625 ignition[942]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Feb 13 20:38:59.811625 ignition[942]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Feb 13 20:38:59.811098 unknown[942]: wrote ssh authorized keys file for user: core
Feb 13 20:38:59.814437 ignition[942]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.0-linux-arm64.tar.gz"
Feb 13 20:38:59.814437 ignition[942]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.0-linux-arm64.tar.gz: attempt #1
Feb 13 20:38:59.916795 ignition[942]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
Feb 13 20:39:00.082259 ignition[942]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.0-linux-arm64.tar.gz"
Feb 13 20:39:00.082259 ignition[942]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh"
Feb 13 20:39:00.084984 ignition[942]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh"
Feb 13 20:39:00.084984 ignition[942]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml"
Feb 13 20:39:00.084984 ignition[942]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml"
Feb 13 20:39:00.084984 ignition[942]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Feb 13 20:39:00.084984 ignition[942]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Feb 13 20:39:00.084984 ignition[942]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Feb 13 20:39:00.092423 ignition[942]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Feb 13 20:39:00.092423 ignition[942]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf"
Feb 13 20:39:00.092423 ignition[942]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Feb 13 20:39:00.092423 ignition[942]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.0-arm64.raw"
Feb 13 20:39:00.092423 ignition[942]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.0-arm64.raw"
Feb 13 20:39:00.092423 ignition[942]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.0-arm64.raw"
Feb 13 20:39:00.092423 ignition[942]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.32.0-arm64.raw: attempt #1
Feb 13 20:39:00.398152 ignition[942]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK
Feb 13 20:39:00.635585 ignition[942]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.0-arm64.raw"
Feb 13 20:39:00.635585 ignition[942]: INFO : files: op(b): [started] processing unit "prepare-helm.service"
Feb 13 20:39:00.638188 ignition[942]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Feb 13 20:39:00.638188 ignition[942]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Feb 13 20:39:00.638188 ignition[942]: INFO : files: op(b): [finished] processing unit "prepare-helm.service"
Feb 13 20:39:00.638188 ignition[942]: INFO : files: op(d): [started] processing unit "coreos-metadata.service"
Feb 13 20:39:00.638188 ignition[942]: INFO : files: op(d): op(e): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Feb 13 20:39:00.638188 ignition[942]: INFO : files: op(d): op(e): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Feb 13 20:39:00.638188 ignition[942]: INFO : files: op(d): [finished] processing unit "coreos-metadata.service"
Feb 13 20:39:00.638188 ignition[942]: INFO : files: op(f): [started] setting preset to disabled for "coreos-metadata.service"
Feb 13 20:39:00.674454 ignition[942]: INFO : files: op(f): op(10): [started] removing enablement symlink(s) for "coreos-metadata.service"
Feb 13 20:39:00.679983 ignition[942]: INFO : files: op(f): op(10): [finished] removing enablement symlink(s) for "coreos-metadata.service"
Feb 13 20:39:00.681968 ignition[942]: INFO : files: op(f): [finished] setting preset to disabled for "coreos-metadata.service"
Feb 13 20:39:00.681968 ignition[942]: INFO : files: op(11): [started] setting preset to enabled for "prepare-helm.service"
Feb 13 20:39:00.681968 ignition[942]: INFO : files: op(11): [finished] setting preset to enabled for "prepare-helm.service"
Feb 13 20:39:00.681968 ignition[942]: INFO : files: createResultFile: createFiles: op(12): [started] writing file "/sysroot/etc/.ignition-result.json"
Feb 13 20:39:00.681968 ignition[942]: INFO : files: createResultFile: createFiles: op(12): [finished] writing file "/sysroot/etc/.ignition-result.json"
Feb 13 20:39:00.681968 ignition[942]: INFO : files: files passed
Feb 13 20:39:00.681968 ignition[942]: INFO : Ignition finished successfully
Feb 13 20:39:00.684935 systemd[1]: Finished ignition-files.service - Ignition (files).
Feb 13 20:39:00.696141 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Feb 13 20:39:00.698795 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Feb 13 20:39:00.699918 systemd[1]: ignition-quench.service: Deactivated successfully.
Feb 13 20:39:00.699994 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Feb 13 20:39:00.705880 initrd-setup-root-after-ignition[970]: grep: /sysroot/oem/oem-release: No such file or directory
Feb 13 20:39:00.709177 initrd-setup-root-after-ignition[972]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Feb 13 20:39:00.709177 initrd-setup-root-after-ignition[972]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Feb 13 20:39:00.712118 initrd-setup-root-after-ignition[976]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Feb 13 20:39:00.711630 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Feb 13 20:39:00.713431 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Feb 13 20:39:00.727103 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Feb 13 20:39:00.753183 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Feb 13 20:39:00.753300 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Feb 13 20:39:00.755148 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Feb 13 20:39:00.756645 systemd[1]: Reached target initrd.target - Initrd Default Target.
Feb 13 20:39:00.758189 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Feb 13 20:39:00.759022 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Feb 13 20:39:00.775090 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Feb 13 20:39:00.792115 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Feb 13 20:39:00.800238 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Feb 13 20:39:00.801168 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Feb 13 20:39:00.802839 systemd[1]: Stopped target timers.target - Timer Units.
Feb 13 20:39:00.804396 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Feb 13 20:39:00.804518 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Feb 13 20:39:00.806675 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Feb 13 20:39:00.808389 systemd[1]: Stopped target basic.target - Basic System.
Feb 13 20:39:00.809743 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Feb 13 20:39:00.811170 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Feb 13 20:39:00.812756 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Feb 13 20:39:00.814445 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Feb 13 20:39:00.815993 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
Feb 13 20:39:00.817634 systemd[1]: Stopped target sysinit.target - System Initialization.
Feb 13 20:39:00.819311 systemd[1]: Stopped target local-fs.target - Local File Systems.
Feb 13 20:39:00.820738 systemd[1]: Stopped target swap.target - Swaps.
Feb 13 20:39:00.822027 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Feb 13 20:39:00.822151 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
Feb 13 20:39:00.824177 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
Feb 13 20:39:00.825764 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Feb 13 20:39:00.827365 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Feb 13 20:39:00.830982 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Feb 13 20:39:00.831919 systemd[1]: dracut-initqueue.service: Deactivated successfully. Feb 13 20:39:00.832048 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Feb 13 20:39:00.834558 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Feb 13 20:39:00.834677 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Feb 13 20:39:00.836328 systemd[1]: Stopped target paths.target - Path Units. Feb 13 20:39:00.837620 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Feb 13 20:39:00.842972 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Feb 13 20:39:00.844269 systemd[1]: Stopped target slices.target - Slice Units. Feb 13 20:39:00.846287 systemd[1]: Stopped target sockets.target - Socket Units. Feb 13 20:39:00.847815 systemd[1]: iscsid.socket: Deactivated successfully. Feb 13 20:39:00.847924 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Feb 13 20:39:00.849424 systemd[1]: iscsiuio.socket: Deactivated successfully. Feb 13 20:39:00.849512 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Feb 13 20:39:00.850983 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Feb 13 20:39:00.851106 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Feb 13 20:39:00.852842 systemd[1]: ignition-files.service: Deactivated successfully. Feb 13 20:39:00.852964 systemd[1]: Stopped ignition-files.service - Ignition (files). Feb 13 20:39:00.869100 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Feb 13 20:39:00.870021 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Feb 13 20:39:00.870173 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Feb 13 20:39:00.872784 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Feb 13 20:39:00.874639 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Feb 13 20:39:00.874786 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Feb 13 20:39:00.876589 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Feb 13 20:39:00.880123 ignition[996]: INFO : Ignition 2.19.0 Feb 13 20:39:00.880123 ignition[996]: INFO : Stage: umount Feb 13 20:39:00.876701 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Feb 13 20:39:00.882792 ignition[996]: INFO : no configs at "/usr/lib/ignition/base.d" Feb 13 20:39:00.882792 ignition[996]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Feb 13 20:39:00.882123 systemd[1]: initrd-cleanup.service: Deactivated successfully. Feb 13 20:39:00.888850 ignition[996]: INFO : umount: umount passed Feb 13 20:39:00.888850 ignition[996]: INFO : Ignition finished successfully Feb 13 20:39:00.882212 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Feb 13 20:39:00.884184 systemd[1]: ignition-mount.service: Deactivated successfully. Feb 13 20:39:00.884274 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Feb 13 20:39:00.886367 systemd[1]: Stopped target network.target - Network. 
Feb 13 20:39:00.887838 systemd[1]: ignition-disks.service: Deactivated successfully. Feb 13 20:39:00.887918 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Feb 13 20:39:00.889965 systemd[1]: ignition-kargs.service: Deactivated successfully. Feb 13 20:39:00.890013 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Feb 13 20:39:00.891844 systemd[1]: ignition-setup.service: Deactivated successfully. Feb 13 20:39:00.891888 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Feb 13 20:39:00.893675 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Feb 13 20:39:00.893743 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Feb 13 20:39:00.896115 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Feb 13 20:39:00.899996 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Feb 13 20:39:00.902550 systemd[1]: sysroot-boot.mount: Deactivated successfully. Feb 13 20:39:00.910932 systemd-networkd[767]: eth0: DHCPv6 lease lost Feb 13 20:39:00.912749 systemd[1]: systemd-networkd.service: Deactivated successfully. Feb 13 20:39:00.912862 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Feb 13 20:39:00.913993 systemd[1]: systemd-networkd.socket: Deactivated successfully. Feb 13 20:39:00.914025 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Feb 13 20:39:00.926143 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Feb 13 20:39:00.926820 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Feb 13 20:39:00.926882 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Feb 13 20:39:00.928464 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Feb 13 20:39:00.933803 systemd[1]: systemd-resolved.service: Deactivated successfully. Feb 13 20:39:00.933921 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Feb 13 20:39:00.937388 systemd[1]: systemd-sysctl.service: Deactivated successfully. Feb 13 20:39:00.937462 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Feb 13 20:39:00.939330 systemd[1]: systemd-modules-load.service: Deactivated successfully. Feb 13 20:39:00.939380 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Feb 13 20:39:00.940973 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Feb 13 20:39:00.941015 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Feb 13 20:39:00.943326 systemd[1]: systemd-udevd.service: Deactivated successfully. Feb 13 20:39:00.943455 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Feb 13 20:39:00.945212 systemd[1]: network-cleanup.service: Deactivated successfully. Feb 13 20:39:00.945293 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Feb 13 20:39:00.947627 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Feb 13 20:39:00.947706 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Feb 13 20:39:00.949118 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Feb 13 20:39:00.949153 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Feb 13 20:39:00.950757 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Feb 13 20:39:00.950802 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. 
Feb 13 20:39:00.953515 systemd[1]: dracut-cmdline.service: Deactivated successfully. Feb 13 20:39:00.953555 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Feb 13 20:39:00.956122 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Feb 13 20:39:00.956165 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Feb 13 20:39:00.974077 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Feb 13 20:39:00.974985 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Feb 13 20:39:00.975045 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Feb 13 20:39:00.977013 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Feb 13 20:39:00.977054 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Feb 13 20:39:00.979166 systemd[1]: sysroot-boot.service: Deactivated successfully. Feb 13 20:39:00.979251 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Feb 13 20:39:00.980918 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Feb 13 20:39:00.980996 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Feb 13 20:39:00.983463 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Feb 13 20:39:00.984386 systemd[1]: initrd-setup-root.service: Deactivated successfully. Feb 13 20:39:00.984438 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Feb 13 20:39:00.986726 systemd[1]: Starting initrd-switch-root.service - Switch Root... Feb 13 20:39:00.997123 systemd[1]: Switching root. Feb 13 20:39:01.017932 systemd-journald[238]: Journal stopped Feb 13 20:39:01.709794 systemd-journald[238]: Received SIGTERM from PID 1 (systemd). Feb 13 20:39:01.709853 kernel: SELinux: policy capability network_peer_controls=1 Feb 13 20:39:01.709865 kernel: SELinux: policy capability open_perms=1 Feb 13 20:39:01.709875 kernel: SELinux: policy capability extended_socket_class=1 Feb 13 20:39:01.709884 kernel: SELinux: policy capability always_check_network=0 Feb 13 20:39:01.709894 kernel: SELinux: policy capability cgroup_seclabel=1 Feb 13 20:39:01.709936 kernel: SELinux: policy capability nnp_nosuid_transition=1 Feb 13 20:39:01.709947 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Feb 13 20:39:01.709957 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Feb 13 20:39:01.709969 kernel: audit: type=1403 audit(1739479141.168:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 Feb 13 20:39:01.709981 systemd[1]: Successfully loaded SELinux policy in 31.821ms. Feb 13 20:39:01.710002 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 9.755ms. Feb 13 20:39:01.710014 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Feb 13 20:39:01.710025 systemd[1]: Detected virtualization kvm. Feb 13 20:39:01.710035 systemd[1]: Detected architecture arm64. Feb 13 20:39:01.710045 systemd[1]: Detected first boot. Feb 13 20:39:01.710060 systemd[1]: Initializing machine ID from VM UUID. Feb 13 20:39:01.710071 zram_generator::config[1041]: No configuration found. Feb 13 20:39:01.710085 systemd[1]: Populated /etc with preset unit settings. 
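The "Switching root" line is the initrd handing control to the real root filesystem: PID 1 terminates the initrd's journald (hence "Journal stopped" / SIGTERM), switches root, and the next messages come from the freshly loaded SELinux policy in the real system. The handoff performed by initrd-switch-root.service is roughly equivalent to (sketch, not quoted from the log):

    # approximate equivalent of initrd-switch-root.service's ExecStart
    systemctl --no-block switch-root /sysroot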
Feb 13 20:39:01.710096 systemd[1]: initrd-switch-root.service: Deactivated successfully. Feb 13 20:39:01.710106 systemd[1]: Stopped initrd-switch-root.service - Switch Root. Feb 13 20:39:01.710117 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Feb 13 20:39:01.710129 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Feb 13 20:39:01.710139 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Feb 13 20:39:01.710149 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Feb 13 20:39:01.710160 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Feb 13 20:39:01.710178 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Feb 13 20:39:01.710189 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Feb 13 20:39:01.710200 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Feb 13 20:39:01.710210 systemd[1]: Created slice user.slice - User and Session Slice. Feb 13 20:39:01.710224 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Feb 13 20:39:01.710234 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Feb 13 20:39:01.710245 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Feb 13 20:39:01.710257 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Feb 13 20:39:01.710268 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. Feb 13 20:39:01.710280 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Feb 13 20:39:01.710291 systemd[1]: Expecting device dev-ttyAMA0.device - /dev/ttyAMA0... Feb 13 20:39:01.710301 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Feb 13 20:39:01.710312 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. Feb 13 20:39:01.710323 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. Feb 13 20:39:01.710334 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. Feb 13 20:39:01.710344 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Feb 13 20:39:01.710357 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Feb 13 20:39:01.710369 systemd[1]: Reached target remote-fs.target - Remote File Systems. Feb 13 20:39:01.710379 systemd[1]: Reached target slices.target - Slice Units. Feb 13 20:39:01.710390 systemd[1]: Reached target swap.target - Swaps. Feb 13 20:39:01.710401 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Feb 13 20:39:01.710412 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Feb 13 20:39:01.710423 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Feb 13 20:39:01.710434 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Feb 13 20:39:01.710445 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Feb 13 20:39:01.710459 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Feb 13 20:39:01.710485 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... 
Feb 13 20:39:01.710497 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Feb 13 20:39:01.710508 systemd[1]: Mounting media.mount - External Media Directory... Feb 13 20:39:01.710519 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Feb 13 20:39:01.710529 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Feb 13 20:39:01.710540 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... Feb 13 20:39:01.710551 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Feb 13 20:39:01.710563 systemd[1]: Reached target machines.target - Containers. Feb 13 20:39:01.710576 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Feb 13 20:39:01.710588 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Feb 13 20:39:01.710600 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Feb 13 20:39:01.710612 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Feb 13 20:39:01.710622 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Feb 13 20:39:01.710633 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Feb 13 20:39:01.710645 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Feb 13 20:39:01.710656 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Feb 13 20:39:01.710667 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Feb 13 20:39:01.710687 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Feb 13 20:39:01.710699 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Feb 13 20:39:01.710710 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. Feb 13 20:39:01.710722 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Feb 13 20:39:01.710732 systemd[1]: Stopped systemd-fsck-usr.service. Feb 13 20:39:01.710742 kernel: loop: module loaded Feb 13 20:39:01.710752 systemd[1]: Starting systemd-journald.service - Journal Service... Feb 13 20:39:01.710762 kernel: fuse: init (API version 7.39) Feb 13 20:39:01.710772 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Feb 13 20:39:01.710785 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Feb 13 20:39:01.710797 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Feb 13 20:39:01.710808 kernel: ACPI: bus type drm_connector registered Feb 13 20:39:01.710817 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Feb 13 20:39:01.710828 systemd[1]: verity-setup.service: Deactivated successfully. Feb 13 20:39:01.710838 systemd[1]: Stopped verity-setup.service. Feb 13 20:39:01.710849 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Feb 13 20:39:01.710876 systemd-journald[1102]: Collecting audit messages is disabled. Feb 13 20:39:01.710906 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. 
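The modprobe@configfs, modprobe@dm_mod, modprobe@drm, etc. jobs above are instances of systemd's modprobe@.service template, which maps a kernel module name onto a unit instance via the %I specifier. An abridged sketch of the template and its use (the shipped unit may differ in detail):

    # /usr/lib/systemd/system/modprobe@.service (abridged sketch)
    [Service]
    Type=oneshot
    RemainAfterExit=yes
    ExecStart=-/sbin/modprobe -abq %I

    # instantiating the template loads the named module:
    systemctl start modprobe@loop.service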
Feb 13 20:39:01.710920 systemd-journald[1102]: Journal started Feb 13 20:39:01.710943 systemd-journald[1102]: Runtime Journal (/run/log/journal/9e024ddba2ba4ef5bb14956ec632af76) is 5.9M, max 47.3M, 41.4M free. Feb 13 20:39:01.544133 systemd[1]: Queued start job for default target multi-user.target. Feb 13 20:39:01.558807 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6. Feb 13 20:39:01.559169 systemd[1]: systemd-journald.service: Deactivated successfully. Feb 13 20:39:01.712922 systemd[1]: Started systemd-journald.service - Journal Service. Feb 13 20:39:01.713178 systemd[1]: Mounted media.mount - External Media Directory. Feb 13 20:39:01.714072 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Feb 13 20:39:01.715018 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Feb 13 20:39:01.715892 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Feb 13 20:39:01.717928 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Feb 13 20:39:01.719032 systemd[1]: modprobe@configfs.service: Deactivated successfully. Feb 13 20:39:01.719158 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Feb 13 20:39:01.720295 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Feb 13 20:39:01.720419 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Feb 13 20:39:01.721760 systemd[1]: modprobe@drm.service: Deactivated successfully. Feb 13 20:39:01.722811 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Feb 13 20:39:01.724535 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Feb 13 20:39:01.724672 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Feb 13 20:39:01.726221 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Feb 13 20:39:01.727544 systemd[1]: modprobe@fuse.service: Deactivated successfully. Feb 13 20:39:01.727694 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Feb 13 20:39:01.729021 systemd[1]: modprobe@loop.service: Deactivated successfully. Feb 13 20:39:01.729156 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Feb 13 20:39:01.730502 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Feb 13 20:39:01.731840 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Feb 13 20:39:01.733432 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Feb 13 20:39:01.744797 systemd[1]: Reached target network-pre.target - Preparation for Network. Feb 13 20:39:01.753032 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Feb 13 20:39:01.755117 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Feb 13 20:39:01.756205 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Feb 13 20:39:01.756256 systemd[1]: Reached target local-fs.target - Local File Systems. Feb 13 20:39:01.758201 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink). Feb 13 20:39:01.760372 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Feb 13 20:39:01.762445 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... 
Feb 13 20:39:01.763614 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Feb 13 20:39:01.770053 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Feb 13 20:39:01.772290 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Feb 13 20:39:01.773459 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Feb 13 20:39:01.774407 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Feb 13 20:39:01.775538 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Feb 13 20:39:01.779180 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Feb 13 20:39:01.780537 systemd-journald[1102]: Time spent on flushing to /var/log/journal/9e024ddba2ba4ef5bb14956ec632af76 is 14.381ms for 854 entries. Feb 13 20:39:01.780537 systemd-journald[1102]: System Journal (/var/log/journal/9e024ddba2ba4ef5bb14956ec632af76) is 8.0M, max 195.6M, 187.6M free. Feb 13 20:39:01.803842 systemd-journald[1102]: Received client request to flush runtime journal. Feb 13 20:39:01.803877 kernel: loop0: detected capacity change from 0 to 114328 Feb 13 20:39:01.784076 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Feb 13 20:39:01.787642 systemd[1]: Starting systemd-sysusers.service - Create System Users... Feb 13 20:39:01.792951 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Feb 13 20:39:01.794103 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Feb 13 20:39:01.795150 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Feb 13 20:39:01.796282 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Feb 13 20:39:01.803241 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization... Feb 13 20:39:01.804490 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Feb 13 20:39:01.806017 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Feb 13 20:39:01.809542 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Feb 13 20:39:01.813196 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk... Feb 13 20:39:01.819313 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Feb 13 20:39:01.825313 udevadm[1160]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation.service, lvm2-activation-early.service not to pull it in. Feb 13 20:39:01.829078 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Feb 13 20:39:01.848225 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Feb 13 20:39:01.849952 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk. Feb 13 20:39:01.855516 systemd[1]: Finished systemd-sysusers.service - Create System Users. Feb 13 20:39:01.858915 kernel: loop1: detected capacity change from 0 to 114432 Feb 13 20:39:01.864114 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Feb 13 20:39:01.881027 systemd-tmpfiles[1172]: ACLs are not supported, ignoring. 
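systemd-journal-flush.service is what moved logging from the runtime journal in /run (5.9M, per the journald line above) to the persistent one under /var/log/journal (8.0M, max 195.6M). The same flush and a usage check can be requested manually:

    journalctl --flush        # flush /run/log/journal to /var/log/journal
    journalctl --disk-usage   # confirm persistent journal usage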
Feb 13 20:39:01.881047 systemd-tmpfiles[1172]: ACLs are not supported, ignoring. Feb 13 20:39:01.885275 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Feb 13 20:39:01.896920 kernel: loop2: detected capacity change from 0 to 201592 Feb 13 20:39:01.948935 kernel: loop3: detected capacity change from 0 to 114328 Feb 13 20:39:01.953922 kernel: loop4: detected capacity change from 0 to 114432 Feb 13 20:39:01.959925 kernel: loop5: detected capacity change from 0 to 201592 Feb 13 20:39:01.968967 (sd-merge)[1178]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes'. Feb 13 20:39:01.969396 (sd-merge)[1178]: Merged extensions into '/usr'. Feb 13 20:39:01.973414 systemd[1]: Reloading requested from client PID 1153 ('systemd-sysext') (unit systemd-sysext.service)... Feb 13 20:39:01.973480 systemd[1]: Reloading... Feb 13 20:39:02.039957 zram_generator::config[1204]: No configuration found. Feb 13 20:39:02.135392 ldconfig[1147]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Feb 13 20:39:02.153118 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Feb 13 20:39:02.190537 systemd[1]: Reloading finished in 216 ms. Feb 13 20:39:02.222001 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Feb 13 20:39:02.223197 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Feb 13 20:39:02.239167 systemd[1]: Starting ensure-sysext.service... Feb 13 20:39:02.241405 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Feb 13 20:39:02.260401 systemd[1]: Reloading requested from client PID 1238 ('systemctl') (unit ensure-sysext.service)... Feb 13 20:39:02.260418 systemd[1]: Reloading... Feb 13 20:39:02.265869 systemd-tmpfiles[1239]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Feb 13 20:39:02.266163 systemd-tmpfiles[1239]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Feb 13 20:39:02.266819 systemd-tmpfiles[1239]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Feb 13 20:39:02.267058 systemd-tmpfiles[1239]: ACLs are not supported, ignoring. Feb 13 20:39:02.267115 systemd-tmpfiles[1239]: ACLs are not supported, ignoring. Feb 13 20:39:02.269335 systemd-tmpfiles[1239]: Detected autofs mount point /boot during canonicalization of boot. Feb 13 20:39:02.269349 systemd-tmpfiles[1239]: Skipping /boot Feb 13 20:39:02.276568 systemd-tmpfiles[1239]: Detected autofs mount point /boot during canonicalization of boot. Feb 13 20:39:02.276587 systemd-tmpfiles[1239]: Skipping /boot Feb 13 20:39:02.304923 zram_generator::config[1267]: No configuration found. Feb 13 20:39:02.387181 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Feb 13 20:39:02.424219 systemd[1]: Reloading finished in 163 ms. Feb 13 20:39:02.438114 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Feb 13 20:39:02.439521 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. 
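The loop0 through loop5 capacity changes and the sd-merge lines record systemd-sysext overlaying the containerd-flatcar, docker-flatcar, and kubernetes extension images onto /usr and /opt; the "Reloading" that follows is systemd picking up the unit files shipped inside them. The merge state can be inspected and re-applied with standard systemd-sysext commands (the loop device names above are specific to this boot):

    systemd-sysext status    # list merged extension images and their mount points
    systemd-sysext refresh   # unmerge and re-merge after adding/removing a .raw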
Feb 13 20:39:02.455332 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Feb 13 20:39:02.457702 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Feb 13 20:39:02.462088 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Feb 13 20:39:02.464577 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Feb 13 20:39:02.469009 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Feb 13 20:39:02.473093 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Feb 13 20:39:02.483941 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Feb 13 20:39:02.487448 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Feb 13 20:39:02.497175 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Feb 13 20:39:02.500248 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Feb 13 20:39:02.502754 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Feb 13 20:39:02.503620 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Feb 13 20:39:02.506189 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Feb 13 20:39:02.510534 systemd-udevd[1308]: Using default interface naming scheme 'v255'. Feb 13 20:39:02.512100 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Feb 13 20:39:02.513649 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Feb 13 20:39:02.513783 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Feb 13 20:39:02.515302 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Feb 13 20:39:02.515423 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Feb 13 20:39:02.516726 systemd[1]: modprobe@loop.service: Deactivated successfully. Feb 13 20:39:02.516878 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Feb 13 20:39:02.522942 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Feb 13 20:39:02.535229 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Feb 13 20:39:02.537374 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Feb 13 20:39:02.539924 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Feb 13 20:39:02.544149 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Feb 13 20:39:02.545133 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Feb 13 20:39:02.547161 systemd[1]: Starting systemd-update-done.service - Update is Completed... Feb 13 20:39:02.550582 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Feb 13 20:39:02.552533 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Feb 13 20:39:02.552745 augenrules[1334]: No rules Feb 13 20:39:02.556931 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Feb 13 20:39:02.559380 systemd[1]: modprobe@drm.service: Deactivated successfully. 
Feb 13 20:39:02.559511 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Feb 13 20:39:02.560678 systemd[1]: modprobe@loop.service: Deactivated successfully. Feb 13 20:39:02.560806 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Feb 13 20:39:02.568129 systemd[1]: Started systemd-userdbd.service - User Database Manager. Feb 13 20:39:02.573952 systemd[1]: Finished ensure-sysext.service. Feb 13 20:39:02.575156 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Feb 13 20:39:02.575965 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Feb 13 20:39:02.589921 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 40 scanned by (udev-worker) (1354) Feb 13 20:39:02.590422 systemd[1]: Condition check resulted in dev-ttyAMA0.device - /dev/ttyAMA0 being skipped. Feb 13 20:39:02.602942 systemd[1]: Starting systemd-networkd.service - Network Configuration... Feb 13 20:39:02.603752 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Feb 13 20:39:02.609578 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization... Feb 13 20:39:02.610754 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Feb 13 20:39:02.615864 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Feb 13 20:39:02.617947 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Feb 13 20:39:02.626975 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Feb 13 20:39:02.630787 systemd[1]: Finished systemd-update-done.service - Update is Completed. Feb 13 20:39:02.700293 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Feb 13 20:39:02.700565 systemd-resolved[1307]: Positive Trust Anchors: Feb 13 20:39:02.700576 systemd-resolved[1307]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Feb 13 20:39:02.700610 systemd-resolved[1307]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Feb 13 20:39:02.702765 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Feb 13 20:39:02.712138 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Feb 13 20:39:02.713442 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization. Feb 13 20:39:02.714996 systemd[1]: Reached target time-set.target - System Time Set. Feb 13 20:39:02.716024 systemd-resolved[1307]: Defaulting to hostname 'linux'. Feb 13 20:39:02.717948 systemd[1]: Started systemd-resolved.service - Network Name Resolution. 
Feb 13 20:39:02.718509 systemd-networkd[1373]: lo: Link UP Feb 13 20:39:02.718517 systemd-networkd[1373]: lo: Gained carrier Feb 13 20:39:02.719310 systemd-networkd[1373]: Enumeration completed Feb 13 20:39:02.719891 systemd-networkd[1373]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Feb 13 20:39:02.719907 systemd-networkd[1373]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Feb 13 20:39:02.722108 systemd[1]: Started systemd-networkd.service - Network Configuration. Feb 13 20:39:02.723049 systemd[1]: Reached target network.target - Network. Feb 13 20:39:02.723717 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Feb 13 20:39:02.723796 systemd-networkd[1373]: eth0: Link UP Feb 13 20:39:02.723801 systemd-networkd[1373]: eth0: Gained carrier Feb 13 20:39:02.723816 systemd-networkd[1373]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Feb 13 20:39:02.727694 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Feb 13 20:39:02.730470 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization. Feb 13 20:39:02.734463 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes... Feb 13 20:39:02.739504 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Feb 13 20:39:02.751078 lvm[1393]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Feb 13 20:39:02.752049 systemd-networkd[1373]: eth0: DHCPv4 address 10.0.0.8/16, gateway 10.0.0.1 acquired from 10.0.0.1 Feb 13 20:39:02.752560 systemd-timesyncd[1377]: Network configuration changed, trying to establish connection. Feb 13 20:39:02.753309 systemd-timesyncd[1377]: Contacted time server 10.0.0.1:123 (10.0.0.1). Feb 13 20:39:02.753362 systemd-timesyncd[1377]: Initial clock synchronization to Thu 2025-02-13 20:39:03.063077 UTC. Feb 13 20:39:02.765920 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Feb 13 20:39:02.784466 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes. Feb 13 20:39:02.785708 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Feb 13 20:39:02.786588 systemd[1]: Reached target sysinit.target - System Initialization. Feb 13 20:39:02.787660 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Feb 13 20:39:02.788622 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Feb 13 20:39:02.789785 systemd[1]: Started logrotate.timer - Daily rotation of log files. Feb 13 20:39:02.790735 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Feb 13 20:39:02.791723 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Feb 13 20:39:02.792733 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Feb 13 20:39:02.792767 systemd[1]: Reached target paths.target - Path Units. Feb 13 20:39:02.793424 systemd[1]: Reached target timers.target - Timer Units. Feb 13 20:39:02.795375 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. 
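As logged above, eth0 matched Flatcar's catch-all /usr/lib/systemd/network/zz-default.network, acquired 10.0.0.8/16 over DHCPv4 from 10.0.0.1, and systemd-timesyncd then synchronized against the NTP server advertised in the lease. The shipped default network unit is approximately (sketch; the exact file contents are not in the log):

    [Match]
    Name=*

    [Network]
    DHCP=yes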
Feb 13 20:39:02.797817 systemd[1]: Starting docker.socket - Docker Socket for the API... Feb 13 20:39:02.805986 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Feb 13 20:39:02.808083 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes... Feb 13 20:39:02.809561 systemd[1]: Listening on docker.socket - Docker Socket for the API. Feb 13 20:39:02.810530 systemd[1]: Reached target sockets.target - Socket Units. Feb 13 20:39:02.811380 systemd[1]: Reached target basic.target - Basic System. Feb 13 20:39:02.812155 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Feb 13 20:39:02.812203 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Feb 13 20:39:02.813224 systemd[1]: Starting containerd.service - containerd container runtime... Feb 13 20:39:02.815194 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Feb 13 20:39:02.818011 lvm[1401]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Feb 13 20:39:02.819069 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Feb 13 20:39:02.822122 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Feb 13 20:39:02.823075 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Feb 13 20:39:02.825098 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Feb 13 20:39:02.827338 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Feb 13 20:39:02.835198 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Feb 13 20:39:02.838073 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Feb 13 20:39:02.857419 jq[1404]: false Feb 13 20:39:02.868850 systemd[1]: Starting systemd-logind.service - User Login Management... Feb 13 20:39:02.871649 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Feb 13 20:39:02.872575 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Feb 13 20:39:02.875120 systemd[1]: Starting update-engine.service - Update Engine... Feb 13 20:39:02.878173 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Feb 13 20:39:02.879847 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes. 
Feb 13 20:39:02.892879 extend-filesystems[1405]: Found loop3 Feb 13 20:39:02.892879 extend-filesystems[1405]: Found loop4 Feb 13 20:39:02.892879 extend-filesystems[1405]: Found loop5 Feb 13 20:39:02.892879 extend-filesystems[1405]: Found vda Feb 13 20:39:02.892879 extend-filesystems[1405]: Found vda1 Feb 13 20:39:02.892879 extend-filesystems[1405]: Found vda2 Feb 13 20:39:02.892879 extend-filesystems[1405]: Found vda3 Feb 13 20:39:02.892879 extend-filesystems[1405]: Found usr Feb 13 20:39:02.892879 extend-filesystems[1405]: Found vda4 Feb 13 20:39:02.892879 extend-filesystems[1405]: Found vda6 Feb 13 20:39:02.892879 extend-filesystems[1405]: Found vda7 Feb 13 20:39:02.892879 extend-filesystems[1405]: Found vda9 Feb 13 20:39:02.892879 extend-filesystems[1405]: Checking size of /dev/vda9 Feb 13 20:39:02.893598 dbus-daemon[1403]: [system] SELinux support is enabled Feb 13 20:39:02.895388 systemd[1]: Started dbus.service - D-Bus System Message Bus. Feb 13 20:39:02.900971 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Feb 13 20:39:02.929207 jq[1421]: true Feb 13 20:39:02.901155 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Feb 13 20:39:02.901409 systemd[1]: motdgen.service: Deactivated successfully. Feb 13 20:39:02.901543 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Feb 13 20:39:02.905204 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Feb 13 20:39:02.905345 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Feb 13 20:39:02.916609 (ntainerd)[1426]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Feb 13 20:39:02.921427 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Feb 13 20:39:02.921469 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Feb 13 20:39:02.926088 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Feb 13 20:39:02.926107 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Feb 13 20:39:02.937222 jq[1433]: true Feb 13 20:39:02.949252 extend-filesystems[1405]: Resized partition /dev/vda9 Feb 13 20:39:02.950785 tar[1425]: linux-arm64/LICENSE Feb 13 20:39:02.950785 tar[1425]: linux-arm64/helm Feb 13 20:39:02.957004 extend-filesystems[1441]: resize2fs 1.47.1 (20-May-2024) Feb 13 20:39:02.961921 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 40 scanned by (udev-worker) (1348) Feb 13 20:39:02.972355 systemd-logind[1417]: Watching system buttons on /dev/input/event0 (Power Button) Feb 13 20:39:02.973070 systemd-logind[1417]: New seat seat0. Feb 13 20:39:02.983835 systemd[1]: Started systemd-logind.service - User Login Management. 
Feb 13 20:39:02.987856 update_engine[1420]: I20250213 20:39:02.987004 1420 main.cc:92] Flatcar Update Engine starting Feb 13 20:39:02.990945 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 1864699 blocks Feb 13 20:39:03.009876 update_engine[1420]: I20250213 20:39:03.009811 1420 update_check_scheduler.cc:74] Next update check in 11m11s Feb 13 20:39:03.010376 systemd[1]: Started update-engine.service - Update Engine. Feb 13 20:39:03.018283 systemd[1]: Started locksmithd.service - Cluster reboot manager. Feb 13 20:39:03.035170 kernel: EXT4-fs (vda9): resized filesystem to 1864699 Feb 13 20:39:03.075956 extend-filesystems[1441]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required Feb 13 20:39:03.075956 extend-filesystems[1441]: old_desc_blocks = 1, new_desc_blocks = 1 Feb 13 20:39:03.075956 extend-filesystems[1441]: The filesystem on /dev/vda9 is now 1864699 (4k) blocks long. Feb 13 20:39:03.083168 extend-filesystems[1405]: Resized filesystem in /dev/vda9 Feb 13 20:39:03.081567 systemd[1]: extend-filesystems.service: Deactivated successfully. Feb 13 20:39:03.083147 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Feb 13 20:39:03.086801 bash[1457]: Updated "/home/core/.ssh/authorized_keys" Feb 13 20:39:03.087784 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Feb 13 20:39:03.090694 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met. Feb 13 20:39:03.098921 locksmithd[1456]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Feb 13 20:39:03.240086 containerd[1426]: time="2025-02-13T20:39:03.239932491Z" level=info msg="starting containerd" revision=174e0d1785eeda18dc2beba45e1d5a188771636b version=v1.7.21 Feb 13 20:39:03.272522 containerd[1426]: time="2025-02-13T20:39:03.272464170Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Feb 13 20:39:03.275258 containerd[1426]: time="2025-02-13T20:39:03.274051766Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.74-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Feb 13 20:39:03.275258 containerd[1426]: time="2025-02-13T20:39:03.274088330Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Feb 13 20:39:03.275258 containerd[1426]: time="2025-02-13T20:39:03.274118287Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Feb 13 20:39:03.275258 containerd[1426]: time="2025-02-13T20:39:03.274304884Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1 Feb 13 20:39:03.275258 containerd[1426]: time="2025-02-13T20:39:03.274325493Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1 Feb 13 20:39:03.275258 containerd[1426]: time="2025-02-13T20:39:03.274381958Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1 Feb 13 20:39:03.275258 containerd[1426]: time="2025-02-13T20:39:03.274395212Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." 
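Per the extend-filesystems output above, the service enumerated the block devices, found the ext4 root on /dev/vda9, and grew the mounted filesystem online from 553472 to 1864699 4k blocks. Done by hand, the equivalent steps would be (illustrative commands, not taken from the log):

    lsblk -o NAME,SIZE,FSTYPE,MOUNTPOINT /dev/vda   # inspect the partition layout
    resize2fs /dev/vda9                             # online-grow the mounted ext4 to fill vda9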
type=io.containerd.snapshotter.v1 Feb 13 20:39:03.275258 containerd[1426]: time="2025-02-13T20:39:03.274590826Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Feb 13 20:39:03.275258 containerd[1426]: time="2025-02-13T20:39:03.274616296Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Feb 13 20:39:03.275258 containerd[1426]: time="2025-02-13T20:39:03.274630423Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1 Feb 13 20:39:03.275258 containerd[1426]: time="2025-02-13T20:39:03.274640436Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Feb 13 20:39:03.275521 containerd[1426]: time="2025-02-13T20:39:03.274721789Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Feb 13 20:39:03.275521 containerd[1426]: time="2025-02-13T20:39:03.274926295Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Feb 13 20:39:03.275521 containerd[1426]: time="2025-02-13T20:39:03.275050610Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Feb 13 20:39:03.275521 containerd[1426]: time="2025-02-13T20:39:03.275065859Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Feb 13 20:39:03.275521 containerd[1426]: time="2025-02-13T20:39:03.275153112Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 Feb 13 20:39:03.275521 containerd[1426]: time="2025-02-13T20:39:03.275193124Z" level=info msg="metadata content store policy set" policy=shared Feb 13 20:39:03.279521 containerd[1426]: time="2025-02-13T20:39:03.279490603Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Feb 13 20:39:03.279669 containerd[1426]: time="2025-02-13T20:39:03.279653600Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Feb 13 20:39:03.279735 containerd[1426]: time="2025-02-13T20:39:03.279721658Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1 Feb 13 20:39:03.279797 containerd[1426]: time="2025-02-13T20:39:03.279785021Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1 Feb 13 20:39:03.279853 containerd[1426]: time="2025-02-13T20:39:03.279841320Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Feb 13 20:39:03.280105 containerd[1426]: time="2025-02-13T20:39:03.280082472Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Feb 13 20:39:03.280473 containerd[1426]: time="2025-02-13T20:39:03.280453132Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." 
type=io.containerd.runtime.v2 Feb 13 20:39:03.280666 containerd[1426]: time="2025-02-13T20:39:03.280645464Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2 Feb 13 20:39:03.280732 containerd[1426]: time="2025-02-13T20:39:03.280719089Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1 Feb 13 20:39:03.280786 containerd[1426]: time="2025-02-13T20:39:03.280773602Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1 Feb 13 20:39:03.280845 containerd[1426]: time="2025-02-13T20:39:03.280831563Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Feb 13 20:39:03.280940 containerd[1426]: time="2025-02-13T20:39:03.280904814Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Feb 13 20:39:03.281001 containerd[1426]: time="2025-02-13T20:39:03.280986541Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Feb 13 20:39:03.281067 containerd[1426]: time="2025-02-13T20:39:03.281054017Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Feb 13 20:39:03.281132 containerd[1426]: time="2025-02-13T20:39:03.281107865Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Feb 13 20:39:03.281184 containerd[1426]: time="2025-02-13T20:39:03.281172516Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Feb 13 20:39:03.281255 containerd[1426]: time="2025-02-13T20:39:03.281242402Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Feb 13 20:39:03.281324 containerd[1426]: time="2025-02-13T20:39:03.281311332Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Feb 13 20:39:03.281392 containerd[1426]: time="2025-02-13T20:39:03.281379015Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Feb 13 20:39:03.281445 containerd[1426]: time="2025-02-13T20:39:03.281433570Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Feb 13 20:39:03.281506 containerd[1426]: time="2025-02-13T20:39:03.281492694Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Feb 13 20:39:03.281565 containerd[1426]: time="2025-02-13T20:39:03.281552733Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Feb 13 20:39:03.281617 containerd[1426]: time="2025-02-13T20:39:03.281605500Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Feb 13 20:39:03.281673 containerd[1426]: time="2025-02-13T20:39:03.281660553Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Feb 13 20:39:03.281726 containerd[1426]: time="2025-02-13T20:39:03.281713985Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Feb 13 20:39:03.281781 containerd[1426]: time="2025-02-13T20:39:03.281769163Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." 
type=io.containerd.grpc.v1 Feb 13 20:39:03.281835 containerd[1426]: time="2025-02-13T20:39:03.281823135Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1 Feb 13 20:39:03.281906 containerd[1426]: time="2025-02-13T20:39:03.281892564Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1 Feb 13 20:39:03.281982 containerd[1426]: time="2025-02-13T20:39:03.281967810Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Feb 13 20:39:03.282040 containerd[1426]: time="2025-02-13T20:39:03.282026810Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1 Feb 13 20:39:03.282113 containerd[1426]: time="2025-02-13T20:39:03.282098066Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Feb 13 20:39:03.282195 containerd[1426]: time="2025-02-13T20:39:03.282180459Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1 Feb 13 20:39:03.282269 containerd[1426]: time="2025-02-13T20:39:03.282254915Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1 Feb 13 20:39:03.282342 containerd[1426]: time="2025-02-13T20:39:03.282327917Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Feb 13 20:39:03.282394 containerd[1426]: time="2025-02-13T20:39:03.282382346Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Feb 13 20:39:03.282563 containerd[1426]: time="2025-02-13T20:39:03.282549166Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Feb 13 20:39:03.282811 containerd[1426]: time="2025-02-13T20:39:03.282791731Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1 Feb 13 20:39:03.282871 containerd[1426]: time="2025-02-13T20:39:03.282857586Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Feb 13 20:39:03.282941 containerd[1426]: time="2025-02-13T20:39:03.282924896Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1 Feb 13 20:39:03.283003 containerd[1426]: time="2025-02-13T20:39:03.282990419Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Feb 13 20:39:03.283084 containerd[1426]: time="2025-02-13T20:39:03.283068490Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1 Feb 13 20:39:03.283146 containerd[1426]: time="2025-02-13T20:39:03.283134471Z" level=info msg="NRI interface is disabled by configuration." Feb 13 20:39:03.283214 containerd[1426]: time="2025-02-13T20:39:03.283200285Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." 
type=io.containerd.grpc.v1 Feb 13 20:39:03.283859 containerd[1426]: time="2025-02-13T20:39:03.283788289Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Feb 13 20:39:03.284056 containerd[1426]: time="2025-02-13T20:39:03.284037876Z" level=info msg="Connect containerd service" Feb 13 20:39:03.284165 containerd[1426]: time="2025-02-13T20:39:03.284148313Z" level=info msg="using legacy CRI server" Feb 13 20:39:03.284217 containerd[1426]: time="2025-02-13T20:39:03.284203865Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Feb 13 20:39:03.284390 containerd[1426]: time="2025-02-13T20:39:03.284369189Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Feb 13 20:39:03.285402 containerd[1426]: time="2025-02-13T20:39:03.285372146Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Feb 13 20:39:03.285861 
containerd[1426]: time="2025-02-13T20:39:03.285826902Z" level=info msg="Start subscribing containerd event" Feb 13 20:39:03.286692 containerd[1426]: time="2025-02-13T20:39:03.286672596Z" level=info msg="Start recovering state" Feb 13 20:39:03.287484 containerd[1426]: time="2025-02-13T20:39:03.286829735Z" level=info msg="Start event monitor" Feb 13 20:39:03.287484 containerd[1426]: time="2025-02-13T20:39:03.286849429Z" level=info msg="Start snapshots syncer" Feb 13 20:39:03.287484 containerd[1426]: time="2025-02-13T20:39:03.286859692Z" level=info msg="Start cni network conf syncer for default" Feb 13 20:39:03.287484 containerd[1426]: time="2025-02-13T20:39:03.286868874Z" level=info msg="Start streaming server" Feb 13 20:39:03.287484 containerd[1426]: time="2025-02-13T20:39:03.286623277Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Feb 13 20:39:03.287484 containerd[1426]: time="2025-02-13T20:39:03.287196074Z" level=info msg=serving... address=/run/containerd/containerd.sock Feb 13 20:39:03.287484 containerd[1426]: time="2025-02-13T20:39:03.287257276Z" level=info msg="containerd successfully booted in 0.048890s" Feb 13 20:39:03.287378 systemd[1]: Started containerd.service - containerd container runtime. Feb 13 20:39:03.438296 tar[1425]: linux-arm64/README.md Feb 13 20:39:03.449542 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Feb 13 20:39:04.029570 sshd_keygen[1419]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Feb 13 20:39:04.049102 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Feb 13 20:39:04.061164 systemd[1]: Starting issuegen.service - Generate /run/issue... Feb 13 20:39:04.066802 systemd[1]: issuegen.service: Deactivated successfully. Feb 13 20:39:04.067029 systemd[1]: Finished issuegen.service - Generate /run/issue. Feb 13 20:39:04.069512 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Feb 13 20:39:04.081337 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Feb 13 20:39:04.084121 systemd[1]: Started getty@tty1.service - Getty on tty1. Feb 13 20:39:04.085975 systemd[1]: Started serial-getty@ttyAMA0.service - Serial Getty on ttyAMA0. Feb 13 20:39:04.086973 systemd[1]: Reached target getty.target - Login Prompts. Feb 13 20:39:04.207776 systemd-networkd[1373]: eth0: Gained IPv6LL Feb 13 20:39:04.210356 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Feb 13 20:39:04.211898 systemd[1]: Reached target network-online.target - Network is Online. Feb 13 20:39:04.221253 systemd[1]: Starting coreos-metadata.service - QEMU metadata agent... Feb 13 20:39:04.223304 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Feb 13 20:39:04.225065 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Feb 13 20:39:04.239621 systemd[1]: coreos-metadata.service: Deactivated successfully. Feb 13 20:39:04.239828 systemd[1]: Finished coreos-metadata.service - QEMU metadata agent. Feb 13 20:39:04.241222 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Feb 13 20:39:04.247498 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Feb 13 20:39:04.771212 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Feb 13 20:39:04.772527 systemd[1]: Reached target multi-user.target - Multi-User System. 
Feb 13 20:39:04.776047 (kubelet)[1516]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Feb 13 20:39:04.777434 systemd[1]: Startup finished in 561ms (kernel) + 4.462s (initrd) + 3.643s (userspace) = 8.667s. Feb 13 20:39:05.209889 kubelet[1516]: E0213 20:39:05.209733 1516 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Feb 13 20:39:05.212100 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Feb 13 20:39:05.212257 systemd[1]: kubelet.service: Failed with result 'exit-code'. Feb 13 20:39:09.459569 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Feb 13 20:39:09.460667 systemd[1]: Started sshd@0-10.0.0.8:22-10.0.0.1:58836.service - OpenSSH per-connection server daemon (10.0.0.1:58836). Feb 13 20:39:09.520085 sshd[1530]: Accepted publickey for core from 10.0.0.1 port 58836 ssh2: RSA SHA256:ijJ+MGEzuaViQlbMobaPQLogVrdZNUxIS0COzLzMHAQ Feb 13 20:39:09.523539 sshd[1530]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 20:39:09.536966 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Feb 13 20:39:09.547309 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Feb 13 20:39:09.548978 systemd-logind[1417]: New session 1 of user core. Feb 13 20:39:09.556067 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Feb 13 20:39:09.559223 systemd[1]: Starting user@500.service - User Manager for UID 500... Feb 13 20:39:09.565261 (systemd)[1534]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Feb 13 20:39:09.637567 systemd[1534]: Queued start job for default target default.target. Feb 13 20:39:09.648291 systemd[1534]: Created slice app.slice - User Application Slice. Feb 13 20:39:09.648335 systemd[1534]: Reached target paths.target - Paths. Feb 13 20:39:09.648347 systemd[1534]: Reached target timers.target - Timers. Feb 13 20:39:09.649551 systemd[1534]: Starting dbus.socket - D-Bus User Message Bus Socket... Feb 13 20:39:09.659330 systemd[1534]: Listening on dbus.socket - D-Bus User Message Bus Socket. Feb 13 20:39:09.659391 systemd[1534]: Reached target sockets.target - Sockets. Feb 13 20:39:09.659402 systemd[1534]: Reached target basic.target - Basic System. Feb 13 20:39:09.659441 systemd[1534]: Reached target default.target - Main User Target. Feb 13 20:39:09.659466 systemd[1534]: Startup finished in 89ms. Feb 13 20:39:09.659711 systemd[1]: Started user@500.service - User Manager for UID 500. Feb 13 20:39:09.661061 systemd[1]: Started session-1.scope - Session 1 of User core. Feb 13 20:39:09.726238 systemd[1]: Started sshd@1-10.0.0.8:22-10.0.0.1:58852.service - OpenSSH per-connection server daemon (10.0.0.1:58852). Feb 13 20:39:09.758024 sshd[1545]: Accepted publickey for core from 10.0.0.1 port 58852 ssh2: RSA SHA256:ijJ+MGEzuaViQlbMobaPQLogVrdZNUxIS0COzLzMHAQ Feb 13 20:39:09.759354 sshd[1545]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 20:39:09.763378 systemd-logind[1417]: New session 2 of user core. Feb 13 20:39:09.772056 systemd[1]: Started session-2.scope - Session 2 of User core. 
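
The kubelet failure at 20:39:05 above is the normal first-boot state of a kubeadm-style node: /var/lib/kubelet/config.yaml is only written once kubeadm init or kubeadm join runs, so every start before that exits with status 1 and systemd keeps rescheduling it. For reference, a minimal KubeletConfiguration of the shape kubeadm later generates might look like this (sketch only; the values are illustrative, though the systemd cgroup driver, the static-pod path, and the client CA path all match settings visible elsewhere in this log):

    # /var/lib/kubelet/config.yaml -- sketch only, normally written by kubeadm
    apiVersion: kubelet.config.k8s.io/v1beta1
    kind: KubeletConfiguration
    cgroupDriver: systemd
    staticPodPath: /etc/kubernetes/manifests
    authentication:
      anonymous:
        enabled: false
      x509:
        clientCAFile: /etc/kubernetes/pki/ca.crt
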
Feb 13 20:39:09.824305 sshd[1545]: pam_unix(sshd:session): session closed for user core Feb 13 20:39:09.833456 systemd[1]: sshd@1-10.0.0.8:22-10.0.0.1:58852.service: Deactivated successfully. Feb 13 20:39:09.835051 systemd[1]: session-2.scope: Deactivated successfully. Feb 13 20:39:09.837875 systemd-logind[1417]: Session 2 logged out. Waiting for processes to exit. Feb 13 20:39:09.844269 systemd[1]: Started sshd@2-10.0.0.8:22-10.0.0.1:58858.service - OpenSSH per-connection server daemon (10.0.0.1:58858). Feb 13 20:39:09.845071 systemd-logind[1417]: Removed session 2. Feb 13 20:39:09.873104 sshd[1552]: Accepted publickey for core from 10.0.0.1 port 58858 ssh2: RSA SHA256:ijJ+MGEzuaViQlbMobaPQLogVrdZNUxIS0COzLzMHAQ Feb 13 20:39:09.874698 sshd[1552]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 20:39:09.879043 systemd-logind[1417]: New session 3 of user core. Feb 13 20:39:09.888847 systemd[1]: Started session-3.scope - Session 3 of User core. Feb 13 20:39:09.937647 sshd[1552]: pam_unix(sshd:session): session closed for user core Feb 13 20:39:09.953359 systemd[1]: sshd@2-10.0.0.8:22-10.0.0.1:58858.service: Deactivated successfully. Feb 13 20:39:09.956966 systemd[1]: session-3.scope: Deactivated successfully. Feb 13 20:39:09.958215 systemd-logind[1417]: Session 3 logged out. Waiting for processes to exit. Feb 13 20:39:09.966147 systemd[1]: Started sshd@3-10.0.0.8:22-10.0.0.1:58870.service - OpenSSH per-connection server daemon (10.0.0.1:58870). Feb 13 20:39:09.966926 systemd-logind[1417]: Removed session 3. Feb 13 20:39:09.995621 sshd[1559]: Accepted publickey for core from 10.0.0.1 port 58870 ssh2: RSA SHA256:ijJ+MGEzuaViQlbMobaPQLogVrdZNUxIS0COzLzMHAQ Feb 13 20:39:09.996787 sshd[1559]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 20:39:10.000978 systemd-logind[1417]: New session 4 of user core. Feb 13 20:39:10.013059 systemd[1]: Started session-4.scope - Session 4 of User core. Feb 13 20:39:10.065211 sshd[1559]: pam_unix(sshd:session): session closed for user core Feb 13 20:39:10.074289 systemd[1]: sshd@3-10.0.0.8:22-10.0.0.1:58870.service: Deactivated successfully. Feb 13 20:39:10.075707 systemd[1]: session-4.scope: Deactivated successfully. Feb 13 20:39:10.077058 systemd-logind[1417]: Session 4 logged out. Waiting for processes to exit. Feb 13 20:39:10.078434 systemd[1]: Started sshd@4-10.0.0.8:22-10.0.0.1:58880.service - OpenSSH per-connection server daemon (10.0.0.1:58880). Feb 13 20:39:10.079669 systemd-logind[1417]: Removed session 4. Feb 13 20:39:10.110029 sshd[1566]: Accepted publickey for core from 10.0.0.1 port 58880 ssh2: RSA SHA256:ijJ+MGEzuaViQlbMobaPQLogVrdZNUxIS0COzLzMHAQ Feb 13 20:39:10.111300 sshd[1566]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 20:39:10.115917 systemd-logind[1417]: New session 5 of user core. Feb 13 20:39:10.128071 systemd[1]: Started session-5.scope - Session 5 of User core. Feb 13 20:39:10.191794 sudo[1569]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Feb 13 20:39:10.192144 sudo[1569]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Feb 13 20:39:10.543196 systemd[1]: Starting docker.service - Docker Application Container Engine... 
Feb 13 20:39:10.543292 (dockerd)[1587]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Feb 13 20:39:10.870258 dockerd[1587]: time="2025-02-13T20:39:10.870131692Z" level=info msg="Starting up" Feb 13 20:39:11.095628 dockerd[1587]: time="2025-02-13T20:39:11.095579748Z" level=info msg="Loading containers: start." Feb 13 20:39:11.248973 kernel: Initializing XFRM netlink socket Feb 13 20:39:11.313428 systemd-networkd[1373]: docker0: Link UP Feb 13 20:39:11.331251 dockerd[1587]: time="2025-02-13T20:39:11.331201644Z" level=info msg="Loading containers: done." Feb 13 20:39:11.349587 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck449041921-merged.mount: Deactivated successfully. Feb 13 20:39:11.351158 dockerd[1587]: time="2025-02-13T20:39:11.351111855Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Feb 13 20:39:11.351259 dockerd[1587]: time="2025-02-13T20:39:11.351238235Z" level=info msg="Docker daemon" commit=061aa95809be396a6b5542618d8a34b02a21ff77 containerd-snapshotter=false storage-driver=overlay2 version=26.1.0 Feb 13 20:39:11.351374 dockerd[1587]: time="2025-02-13T20:39:11.351355495Z" level=info msg="Daemon has completed initialization" Feb 13 20:39:11.379549 dockerd[1587]: time="2025-02-13T20:39:11.379408204Z" level=info msg="API listen on /run/docker.sock" Feb 13 20:39:11.379799 systemd[1]: Started docker.service - Docker Application Container Engine. Feb 13 20:39:11.883012 containerd[1426]: time="2025-02-13T20:39:11.882971951Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.2\"" Feb 13 20:39:12.685930 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount799962016.mount: Deactivated successfully. 
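
During the docker startup above, dockerd picked the overlay2 storage driver automatically (version 26.1.0, commit 061aa958, per the "Docker daemon" entry), and the "Not using native diff" warning is informational: with CONFIG_OVERLAY_FS_REDIRECT_DIR enabled in the kernel, docker falls back to its naive diff path, which only affects image-build performance. Pinning the driver explicitly would be a one-key daemon.json, sketched here purely for illustration since this host simply relies on the default:

    {
      "storage-driver": "overlay2"
    }
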
Feb 13 20:39:14.165444 containerd[1426]: time="2025-02-13T20:39:14.165390695Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.32.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 20:39:14.165948 containerd[1426]: time="2025-02-13T20:39:14.165910811Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.32.2: active requests=0, bytes read=26218238" Feb 13 20:39:14.166698 containerd[1426]: time="2025-02-13T20:39:14.166667174Z" level=info msg="ImageCreate event name:\"sha256:6417e1437b6d9a789e1ca789695a574e1df00a632bdbfbcae9695c9a7d500e32\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 20:39:14.170971 containerd[1426]: time="2025-02-13T20:39:14.170893239Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:c47449f3e751588ea0cb74e325e0f83db335a415f4f4c7fb147375dd6c84757f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 20:39:14.171661 containerd[1426]: time="2025-02-13T20:39:14.171619416Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.32.2\" with image id \"sha256:6417e1437b6d9a789e1ca789695a574e1df00a632bdbfbcae9695c9a7d500e32\", repo tag \"registry.k8s.io/kube-apiserver:v1.32.2\", repo digest \"registry.k8s.io/kube-apiserver@sha256:c47449f3e751588ea0cb74e325e0f83db335a415f4f4c7fb147375dd6c84757f\", size \"26215036\" in 2.288602881s" Feb 13 20:39:14.171721 containerd[1426]: time="2025-02-13T20:39:14.171661024Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.2\" returns image reference \"sha256:6417e1437b6d9a789e1ca789695a574e1df00a632bdbfbcae9695c9a7d500e32\"" Feb 13 20:39:14.172458 containerd[1426]: time="2025-02-13T20:39:14.172408913Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.2\"" Feb 13 20:39:15.463686 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Feb 13 20:39:15.473099 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Feb 13 20:39:15.571555 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Feb 13 20:39:15.575423 (kubelet)[1796]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Feb 13 20:39:15.622331 kubelet[1796]: E0213 20:39:15.622276 1796 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Feb 13 20:39:15.624861 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Feb 13 20:39:15.625016 systemd[1]: kubelet.service: Failed with result 'exit-code'. 
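
Every image pull in this log follows the same CRI sequence: a PullImage request, ImageCreate events for the tag, the image config blob (the sha256:... ID), and the repo digest, then a "Pulled image ... in <duration>" summary; the apiserver image above came in at roughly 26 MB in about 2.3 s. The same operations can be driven by hand over the socket logged earlier, assuming the crictl client is available (nothing in this log shows it installed):

    crictl --runtime-endpoint unix:///run/containerd/containerd.sock \
        pull registry.k8s.io/kube-apiserver:v1.32.2
    crictl --runtime-endpoint unix:///run/containerd/containerd.sock images
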
Feb 13 20:39:15.883671 containerd[1426]: time="2025-02-13T20:39:15.883481721Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.32.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 20:39:15.884287 containerd[1426]: time="2025-02-13T20:39:15.884254100Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.32.2: active requests=0, bytes read=22528147" Feb 13 20:39:15.885378 containerd[1426]: time="2025-02-13T20:39:15.885328537Z" level=info msg="ImageCreate event name:\"sha256:3c9285acfd2ff7915bb451cc40ac060366ac519f3fef00c455f5aca0e0346c4d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 20:39:15.888724 containerd[1426]: time="2025-02-13T20:39:15.888664398Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:399aa50f4d1361c59dc458e634506d02de32613d03a9a614a21058741162ef90\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 20:39:15.889760 containerd[1426]: time="2025-02-13T20:39:15.889719123Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.32.2\" with image id \"sha256:3c9285acfd2ff7915bb451cc40ac060366ac519f3fef00c455f5aca0e0346c4d\", repo tag \"registry.k8s.io/kube-controller-manager:v1.32.2\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:399aa50f4d1361c59dc458e634506d02de32613d03a9a614a21058741162ef90\", size \"23968941\" in 1.717277935s" Feb 13 20:39:15.889760 containerd[1426]: time="2025-02-13T20:39:15.889757298Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.2\" returns image reference \"sha256:3c9285acfd2ff7915bb451cc40ac060366ac519f3fef00c455f5aca0e0346c4d\"" Feb 13 20:39:15.890432 containerd[1426]: time="2025-02-13T20:39:15.890270430Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.2\"" Feb 13 20:39:17.497230 containerd[1426]: time="2025-02-13T20:39:17.497163351Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.32.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 20:39:17.498334 containerd[1426]: time="2025-02-13T20:39:17.497939037Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.32.2: active requests=0, bytes read=17480802" Feb 13 20:39:17.499236 containerd[1426]: time="2025-02-13T20:39:17.499199722Z" level=info msg="ImageCreate event name:\"sha256:82dfa03f692fb5d84f66c17d6ee9126b081182152b25d28ea456d89b7d5d8911\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 20:39:17.502224 containerd[1426]: time="2025-02-13T20:39:17.502166255Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:45710d74cfd5aa10a001d0cf81747b77c28617444ffee0503d12f1dcd7450f76\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 20:39:17.503397 containerd[1426]: time="2025-02-13T20:39:17.503363402Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.32.2\" with image id \"sha256:82dfa03f692fb5d84f66c17d6ee9126b081182152b25d28ea456d89b7d5d8911\", repo tag \"registry.k8s.io/kube-scheduler:v1.32.2\", repo digest \"registry.k8s.io/kube-scheduler@sha256:45710d74cfd5aa10a001d0cf81747b77c28617444ffee0503d12f1dcd7450f76\", size \"18921614\" in 1.613054656s" Feb 13 20:39:17.503473 containerd[1426]: time="2025-02-13T20:39:17.503397203Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.2\" returns image reference \"sha256:82dfa03f692fb5d84f66c17d6ee9126b081182152b25d28ea456d89b7d5d8911\"" Feb 13 20:39:17.504059 
containerd[1426]: time="2025-02-13T20:39:17.503842970Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.2\"" Feb 13 20:39:18.489964 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4120502033.mount: Deactivated successfully. Feb 13 20:39:19.211967 containerd[1426]: time="2025-02-13T20:39:19.211471175Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.32.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 20:39:19.212616 containerd[1426]: time="2025-02-13T20:39:19.212577370Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.32.2: active requests=0, bytes read=27363384" Feb 13 20:39:19.213602 containerd[1426]: time="2025-02-13T20:39:19.213558114Z" level=info msg="ImageCreate event name:\"sha256:e5aac5df76d9b8dc899ab8c4db25a7648e7fb25cafe7a155066247883c78f062\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 20:39:19.215347 containerd[1426]: time="2025-02-13T20:39:19.215309969Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:83c025f0faa6799fab6645102a98138e39a9a7db2be3bc792c79d72659b1805d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 20:39:19.216111 containerd[1426]: time="2025-02-13T20:39:19.216083048Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.32.2\" with image id \"sha256:e5aac5df76d9b8dc899ab8c4db25a7648e7fb25cafe7a155066247883c78f062\", repo tag \"registry.k8s.io/kube-proxy:v1.32.2\", repo digest \"registry.k8s.io/kube-proxy@sha256:83c025f0faa6799fab6645102a98138e39a9a7db2be3bc792c79d72659b1805d\", size \"27362401\" in 1.712206371s" Feb 13 20:39:19.216183 containerd[1426]: time="2025-02-13T20:39:19.216116279Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.2\" returns image reference \"sha256:e5aac5df76d9b8dc899ab8c4db25a7648e7fb25cafe7a155066247883c78f062\"" Feb 13 20:39:19.216780 containerd[1426]: time="2025-02-13T20:39:19.216636849Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\"" Feb 13 20:39:19.953340 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3072022359.mount: Deactivated successfully. 
Feb 13 20:39:21.126157 containerd[1426]: time="2025-02-13T20:39:21.126100860Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 20:39:21.127669 containerd[1426]: time="2025-02-13T20:39:21.127635574Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.3: active requests=0, bytes read=16951624" Feb 13 20:39:21.128690 containerd[1426]: time="2025-02-13T20:39:21.128637510Z" level=info msg="ImageCreate event name:\"sha256:2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 20:39:21.131818 containerd[1426]: time="2025-02-13T20:39:21.131783446Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 20:39:21.134169 containerd[1426]: time="2025-02-13T20:39:21.134084353Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.3\" with image id \"sha256:2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.3\", repo digest \"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\", size \"16948420\" in 1.917413642s" Feb 13 20:39:21.134169 containerd[1426]: time="2025-02-13T20:39:21.134119115Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\" returns image reference \"sha256:2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4\"" Feb 13 20:39:21.134609 containerd[1426]: time="2025-02-13T20:39:21.134571132Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" Feb 13 20:39:21.663336 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4217493740.mount: Deactivated successfully. 
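
A version skew worth noting at this point: the kubelet is pre-pulling pause:3.10, its default infra image at this kubelet version, while the CRI config dumped earlier pins SandboxImage registry.k8s.io/pause:3.8, which containerd uses when it actually creates pod sandboxes; that is why both pause tags appear in this log. On the containerd side the setting lives in the CRI stanza of its config file (v2 schema; excerpt sketched from the logged value, not read from this host's file):

    # /etc/containerd/config.toml -- relevant lines only, sketch
    [plugins."io.containerd.grpc.v1.cri"]
      sandbox_image = "registry.k8s.io/pause:3.8"
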
Feb 13 20:39:21.668225 containerd[1426]: time="2025-02-13T20:39:21.668040013Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 20:39:21.668839 containerd[1426]: time="2025-02-13T20:39:21.668776944Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=268705" Feb 13 20:39:21.669955 containerd[1426]: time="2025-02-13T20:39:21.669668655Z" level=info msg="ImageCreate event name:\"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 20:39:21.671510 containerd[1426]: time="2025-02-13T20:39:21.671459383Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 20:39:21.672431 containerd[1426]: time="2025-02-13T20:39:21.672373532Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"267933\" in 537.774101ms" Feb 13 20:39:21.672431 containerd[1426]: time="2025-02-13T20:39:21.672404881Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\"" Feb 13 20:39:21.673032 containerd[1426]: time="2025-02-13T20:39:21.672823462Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\"" Feb 13 20:39:22.491201 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1735117323.mount: Deactivated successfully. Feb 13 20:39:25.637611 containerd[1426]: time="2025-02-13T20:39:25.637528121Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.16-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 20:39:25.649783 containerd[1426]: time="2025-02-13T20:39:25.649735242Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.16-0: active requests=0, bytes read=67812431" Feb 13 20:39:25.665639 containerd[1426]: time="2025-02-13T20:39:25.665592158Z" level=info msg="ImageCreate event name:\"sha256:7fc9d4aa817aa6a3e549f3cd49d1f7b496407be979fc36dd5f356d59ce8c3a82\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 20:39:25.681080 containerd[1426]: time="2025-02-13T20:39:25.681014544Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 20:39:25.682312 containerd[1426]: time="2025-02-13T20:39:25.682278973Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.16-0\" with image id \"sha256:7fc9d4aa817aa6a3e549f3cd49d1f7b496407be979fc36dd5f356d59ce8c3a82\", repo tag \"registry.k8s.io/etcd:3.5.16-0\", repo digest \"registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5\", size \"67941650\" in 4.009426182s" Feb 13 20:39:25.682378 containerd[1426]: time="2025-02-13T20:39:25.682313163Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\" returns image reference \"sha256:7fc9d4aa817aa6a3e549f3cd49d1f7b496407be979fc36dd5f356d59ce8c3a82\"" Feb 13 20:39:25.849922 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. 
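
The climbing restart counter is purely the unit's own restart policy at work, not anything cluster-side. The kubeadm-packaged kubelet.service uses settings like the following (paraphrased excerpt; the unit shipped on this Flatcar image may differ in detail), and the ten-second gap between the failure at 20:39:15 and this restart at 20:39:25 is consistent with RestartSec=10:

    [Service]
    ExecStart=/usr/bin/kubelet
    Restart=always
    StartLimitInterval=0
    RestartSec=10
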
Feb 13 20:39:25.861152 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Feb 13 20:39:25.961341 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Feb 13 20:39:25.965042 (kubelet)[1964]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Feb 13 20:39:26.003460 kubelet[1964]: E0213 20:39:26.003358 1964 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Feb 13 20:39:26.005995 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Feb 13 20:39:26.006140 systemd[1]: kubelet.service: Failed with result 'exit-code'. Feb 13 20:39:29.829999 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Feb 13 20:39:29.840196 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Feb 13 20:39:29.859174 systemd[1]: Reloading requested from client PID 1980 ('systemctl') (unit session-5.scope)... Feb 13 20:39:29.859188 systemd[1]: Reloading... Feb 13 20:39:29.932927 zram_generator::config[2022]: No configuration found. Feb 13 20:39:30.099939 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Feb 13 20:39:30.152785 systemd[1]: Reloading finished in 293 ms. Feb 13 20:39:30.189605 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Feb 13 20:39:30.192243 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Feb 13 20:39:30.193207 systemd[1]: kubelet.service: Deactivated successfully. Feb 13 20:39:30.193398 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Feb 13 20:39:30.194714 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Feb 13 20:39:30.295089 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Feb 13 20:39:30.298025 (kubelet)[2066]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Feb 13 20:39:30.333561 kubelet[2066]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Feb 13 20:39:30.333561 kubelet[2066]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Feb 13 20:39:30.333561 kubelet[2066]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
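
Two of the three deprecated flags warned about above have direct equivalents in the config file the warning points at; --pod-infra-container-image is the odd one out, being removed outright in 1.35 once sandbox-image information comes from the CRI. Expressed as KubeletConfiguration fields, the other two would read as follows (sketch; both paths are taken from elsewhere in this log, not invented):

    containerRuntimeEndpoint: "unix:///run/containerd/containerd.sock"
    volumePluginDir: "/opt/libexec/kubernetes/kubelet-plugins/volume/exec/"
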
Feb 13 20:39:30.333862 kubelet[2066]: I0213 20:39:30.333614 2066 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Feb 13 20:39:31.777926 kubelet[2066]: I0213 20:39:31.777256 2066 server.go:520] "Kubelet version" kubeletVersion="v1.32.0" Feb 13 20:39:31.777926 kubelet[2066]: I0213 20:39:31.777295 2066 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Feb 13 20:39:31.777926 kubelet[2066]: I0213 20:39:31.777707 2066 server.go:954] "Client rotation is on, will bootstrap in background" Feb 13 20:39:31.812942 kubelet[2066]: I0213 20:39:31.812891 2066 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Feb 13 20:39:31.813376 kubelet[2066]: E0213 20:39:31.813343 2066 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://10.0.0.8:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.8:6443: connect: connection refused" logger="UnhandledError" Feb 13 20:39:31.819588 kubelet[2066]: E0213 20:39:31.819562 2066 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Feb 13 20:39:31.819646 kubelet[2066]: I0213 20:39:31.819591 2066 server.go:1421] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Feb 13 20:39:31.822567 kubelet[2066]: I0213 20:39:31.822548 2066 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Feb 13 20:39:31.822813 kubelet[2066]: I0213 20:39:31.822785 2066 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Feb 13 20:39:31.822991 kubelet[2066]: I0213 20:39:31.822815 2066 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Feb 13 20:39:31.823074 kubelet[2066]: I0213 20:39:31.823063 2066 topology_manager.go:138] "Creating topology manager with none policy" Feb 13 20:39:31.823074 kubelet[2066]: I0213 20:39:31.823073 2066 container_manager_linux.go:304] "Creating device plugin manager" Feb 13 20:39:31.823291 kubelet[2066]: I0213 20:39:31.823273 2066 state_mem.go:36] "Initialized new in-memory state store" Feb 13 20:39:31.825699 kubelet[2066]: I0213 20:39:31.825665 2066 kubelet.go:446] "Attempting to sync node with API server" Feb 13 20:39:31.825699 kubelet[2066]: I0213 20:39:31.825697 2066 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests" Feb 13 20:39:31.825936 kubelet[2066]: I0213 20:39:31.825795 2066 kubelet.go:352] "Adding apiserver pod source" Feb 13 20:39:31.825936 kubelet[2066]: I0213 20:39:31.825814 2066 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Feb 13 20:39:31.827723 kubelet[2066]: W0213 20:39:31.827638 2066 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.8:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.8:6443: connect: connection refused Feb 13 20:39:31.827723 kubelet[2066]: E0213 20:39:31.827706 2066 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.0.0.8:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.8:6443: connect: connection refused" logger="UnhandledError" Feb 13 20:39:31.828002 kubelet[2066]: W0213 20:39:31.827965 2066 reflector.go:569] 
k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.8:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.0.0.8:6443: connect: connection refused Feb 13 20:39:31.828038 kubelet[2066]: E0213 20:39:31.828007 2066 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.0.0.8:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.8:6443: connect: connection refused" logger="UnhandledError" Feb 13 20:39:31.829448 kubelet[2066]: I0213 20:39:31.829430 2066 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Feb 13 20:39:31.830280 kubelet[2066]: I0213 20:39:31.830249 2066 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Feb 13 20:39:31.830405 kubelet[2066]: W0213 20:39:31.830393 2066 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Feb 13 20:39:31.832175 kubelet[2066]: I0213 20:39:31.831805 2066 watchdog_linux.go:99] "Systemd watchdog is not enabled" Feb 13 20:39:31.832175 kubelet[2066]: I0213 20:39:31.831845 2066 server.go:1287] "Started kubelet" Feb 13 20:39:31.832431 kubelet[2066]: I0213 20:39:31.832391 2066 server.go:169] "Starting to listen" address="0.0.0.0" port=10250 Feb 13 20:39:31.834185 kubelet[2066]: I0213 20:39:31.834163 2066 server.go:490] "Adding debug handlers to kubelet server" Feb 13 20:39:31.840011 kubelet[2066]: E0213 20:39:31.839763 2066 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.8:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.8:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.1823df17333a0821 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-02-13 20:39:31.831826465 +0000 UTC m=+1.530711029,LastTimestamp:2025-02-13 20:39:31.831826465 +0000 UTC m=+1.530711029,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Feb 13 20:39:31.840350 kubelet[2066]: I0213 20:39:31.840320 2066 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Feb 13 20:39:31.840824 kubelet[2066]: I0213 20:39:31.840466 2066 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Feb 13 20:39:31.841093 kubelet[2066]: I0213 20:39:31.841074 2066 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Feb 13 20:39:31.842288 kubelet[2066]: I0213 20:39:31.841206 2066 volume_manager.go:297] "Starting Kubelet Volume Manager" Feb 13 20:39:31.842593 kubelet[2066]: I0213 20:39:31.841217 2066 desired_state_of_world_populator.go:149] "Desired state populator starts to run" Feb 13 20:39:31.842593 kubelet[2066]: I0213 20:39:31.841229 2066 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Feb 13 20:39:31.842724 kubelet[2066]: I0213 20:39:31.842682 2066 
reconciler.go:26] "Reconciler: start to sync state" Feb 13 20:39:31.842724 kubelet[2066]: E0213 20:39:31.841582 2066 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"localhost\" not found" Feb 13 20:39:31.842724 kubelet[2066]: I0213 20:39:31.841459 2066 factory.go:221] Registration of the systemd container factory successfully Feb 13 20:39:31.842798 kubelet[2066]: I0213 20:39:31.842778 2066 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Feb 13 20:39:31.843376 kubelet[2066]: W0213 20:39:31.843339 2066 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.8:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.8:6443: connect: connection refused Feb 13 20:39:31.843860 kubelet[2066]: E0213 20:39:31.843605 2066 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.0.0.8:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.8:6443: connect: connection refused" logger="UnhandledError" Feb 13 20:39:31.843860 kubelet[2066]: E0213 20:39:31.843528 2066 kubelet.go:1561] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Feb 13 20:39:31.843860 kubelet[2066]: E0213 20:39:31.843461 2066 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.8:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.8:6443: connect: connection refused" interval="200ms" Feb 13 20:39:31.844035 kubelet[2066]: I0213 20:39:31.844011 2066 factory.go:221] Registration of the containerd container factory successfully Feb 13 20:39:31.854892 kubelet[2066]: I0213 20:39:31.854871 2066 cpu_manager.go:221] "Starting CPU manager" policy="none" Feb 13 20:39:31.854892 kubelet[2066]: I0213 20:39:31.854890 2066 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Feb 13 20:39:31.855000 kubelet[2066]: I0213 20:39:31.854919 2066 state_mem.go:36] "Initialized new in-memory state store" Feb 13 20:39:31.857420 kubelet[2066]: I0213 20:39:31.857393 2066 policy_none.go:49] "None policy: Start" Feb 13 20:39:31.857420 kubelet[2066]: I0213 20:39:31.857423 2066 memory_manager.go:186] "Starting memorymanager" policy="None" Feb 13 20:39:31.857492 kubelet[2066]: I0213 20:39:31.857436 2066 state_mem.go:35] "Initializing new in-memory state store" Feb 13 20:39:31.860893 kubelet[2066]: I0213 20:39:31.860822 2066 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Feb 13 20:39:31.861856 kubelet[2066]: I0213 20:39:31.861815 2066 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Feb 13 20:39:31.861856 kubelet[2066]: I0213 20:39:31.861845 2066 status_manager.go:227] "Starting to sync pod status with apiserver" Feb 13 20:39:31.861983 kubelet[2066]: I0213 20:39:31.861863 2066 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." 
Feb 13 20:39:31.861983 kubelet[2066]: I0213 20:39:31.861869 2066 kubelet.go:2388] "Starting kubelet main sync loop" Feb 13 20:39:31.861983 kubelet[2066]: E0213 20:39:31.861924 2066 kubelet.go:2412] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Feb 13 20:39:31.863672 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Feb 13 20:39:31.865414 kubelet[2066]: W0213 20:39:31.865364 2066 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.8:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.8:6443: connect: connection refused Feb 13 20:39:31.865476 kubelet[2066]: E0213 20:39:31.865426 2066 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.0.0.8:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.8:6443: connect: connection refused" logger="UnhandledError" Feb 13 20:39:31.883632 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Feb 13 20:39:31.886512 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. Feb 13 20:39:31.901669 kubelet[2066]: I0213 20:39:31.901639 2066 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Feb 13 20:39:31.901854 kubelet[2066]: I0213 20:39:31.901821 2066 eviction_manager.go:189] "Eviction manager: starting control loop" Feb 13 20:39:31.901892 kubelet[2066]: I0213 20:39:31.901840 2066 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Feb 13 20:39:31.902310 kubelet[2066]: I0213 20:39:31.902090 2066 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Feb 13 20:39:31.903323 kubelet[2066]: E0213 20:39:31.903293 2066 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Feb 13 20:39:31.903395 kubelet[2066]: E0213 20:39:31.903333 2066 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Feb 13 20:39:31.970167 systemd[1]: Created slice kubepods-burstable-pod925ebb4e0c5e60214067abda11d8133c.slice - libcontainer container kubepods-burstable-pod925ebb4e0c5e60214067abda11d8133c.slice. Feb 13 20:39:31.980774 kubelet[2066]: E0213 20:39:31.980737 2066 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Feb 13 20:39:31.982262 systemd[1]: Created slice kubepods-burstable-podc72911152bbceda2f57fd8d59261e015.slice - libcontainer container kubepods-burstable-podc72911152bbceda2f57fd8d59261e015.slice. Feb 13 20:39:31.995007 kubelet[2066]: E0213 20:39:31.994967 2066 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Feb 13 20:39:31.997811 systemd[1]: Created slice kubepods-burstable-pod95ef9ac46cd4dbaadc63cb713310ae59.slice - libcontainer container kubepods-burstable-pod95ef9ac46cd4dbaadc63cb713310ae59.slice. 
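
The three kubepods-burstable-pod*.slice units correspond to the kube-apiserver, kube-controller-manager, and kube-scheduler static pods read from /etc/kubernetes/manifests (the "Adding static pod path" entry above); the kubelet derives each pod UID from a hash of its manifest, which is why the same UIDs reappear below in the host-path volume names. Structurally each manifest is an ordinary Pod object, along these lines (heavily trimmed sketch; the real kubeadm-generated file carries many more flags, mounts, and probes):

    # /etc/kubernetes/manifests/kube-apiserver.yaml -- trimmed sketch
    apiVersion: v1
    kind: Pod
    metadata:
      name: kube-apiserver
      namespace: kube-system
    spec:
      hostNetwork: true
      priorityClassName: system-node-critical
      containers:
      - name: kube-apiserver
        image: registry.k8s.io/kube-apiserver:v1.32.2
        volumeMounts:
        - name: k8s-certs
          mountPath: /etc/kubernetes/pki
          readOnly: true
      volumes:
      - name: k8s-certs
        hostPath:
          path: /etc/kubernetes/pki
          type: DirectoryOrCreate
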
Feb 13 20:39:31.999350 kubelet[2066]: E0213 20:39:31.999320 2066 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Feb 13 20:39:32.004327 kubelet[2066]: I0213 20:39:32.004295 2066 kubelet_node_status.go:76] "Attempting to register node" node="localhost" Feb 13 20:39:32.004855 kubelet[2066]: E0213 20:39:32.004830 2066 kubelet_node_status.go:108] "Unable to register node with API server" err="Post \"https://10.0.0.8:6443/api/v1/nodes\": dial tcp 10.0.0.8:6443: connect: connection refused" node="localhost" Feb 13 20:39:32.043490 kubelet[2066]: I0213 20:39:32.043328 2066 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/925ebb4e0c5e60214067abda11d8133c-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"925ebb4e0c5e60214067abda11d8133c\") " pod="kube-system/kube-apiserver-localhost" Feb 13 20:39:32.043490 kubelet[2066]: I0213 20:39:32.043368 2066 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/c72911152bbceda2f57fd8d59261e015-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"c72911152bbceda2f57fd8d59261e015\") " pod="kube-system/kube-controller-manager-localhost" Feb 13 20:39:32.043490 kubelet[2066]: I0213 20:39:32.043393 2066 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/c72911152bbceda2f57fd8d59261e015-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"c72911152bbceda2f57fd8d59261e015\") " pod="kube-system/kube-controller-manager-localhost" Feb 13 20:39:32.043490 kubelet[2066]: I0213 20:39:32.043429 2066 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/c72911152bbceda2f57fd8d59261e015-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"c72911152bbceda2f57fd8d59261e015\") " pod="kube-system/kube-controller-manager-localhost" Feb 13 20:39:32.043490 kubelet[2066]: I0213 20:39:32.043461 2066 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/925ebb4e0c5e60214067abda11d8133c-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"925ebb4e0c5e60214067abda11d8133c\") " pod="kube-system/kube-apiserver-localhost" Feb 13 20:39:32.043709 kubelet[2066]: I0213 20:39:32.043489 2066 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/c72911152bbceda2f57fd8d59261e015-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"c72911152bbceda2f57fd8d59261e015\") " pod="kube-system/kube-controller-manager-localhost" Feb 13 20:39:32.043709 kubelet[2066]: I0213 20:39:32.043509 2066 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/c72911152bbceda2f57fd8d59261e015-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"c72911152bbceda2f57fd8d59261e015\") " pod="kube-system/kube-controller-manager-localhost" Feb 13 20:39:32.043709 kubelet[2066]: I0213 20:39:32.043526 2066 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/95ef9ac46cd4dbaadc63cb713310ae59-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"95ef9ac46cd4dbaadc63cb713310ae59\") " pod="kube-system/kube-scheduler-localhost" Feb 13 20:39:32.043709 kubelet[2066]: I0213 20:39:32.043553 2066 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/925ebb4e0c5e60214067abda11d8133c-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"925ebb4e0c5e60214067abda11d8133c\") " pod="kube-system/kube-apiserver-localhost" Feb 13 20:39:32.044476 kubelet[2066]: E0213 20:39:32.044432 2066 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.8:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.8:6443: connect: connection refused" interval="400ms" Feb 13 20:39:32.206141 kubelet[2066]: I0213 20:39:32.206112 2066 kubelet_node_status.go:76] "Attempting to register node" node="localhost" Feb 13 20:39:32.206480 kubelet[2066]: E0213 20:39:32.206448 2066 kubelet_node_status.go:108] "Unable to register node with API server" err="Post \"https://10.0.0.8:6443/api/v1/nodes\": dial tcp 10.0.0.8:6443: connect: connection refused" node="localhost" Feb 13 20:39:32.284000 kubelet[2066]: E0213 20:39:32.283969 2066 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 20:39:32.284573 containerd[1426]: time="2025-02-13T20:39:32.284527924Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:925ebb4e0c5e60214067abda11d8133c,Namespace:kube-system,Attempt:0,}" Feb 13 20:39:32.295681 kubelet[2066]: E0213 20:39:32.295609 2066 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 20:39:32.296311 containerd[1426]: time="2025-02-13T20:39:32.296111772Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:c72911152bbceda2f57fd8d59261e015,Namespace:kube-system,Attempt:0,}" Feb 13 20:39:32.299639 kubelet[2066]: E0213 20:39:32.299609 2066 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 20:39:32.299958 containerd[1426]: time="2025-02-13T20:39:32.299933410Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:95ef9ac46cd4dbaadc63cb713310ae59,Namespace:kube-system,Attempt:0,}" Feb 13 20:39:32.446001 kubelet[2066]: E0213 20:39:32.445949 2066 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.8:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.8:6443: connect: connection refused" interval="800ms" Feb 13 20:39:32.607695 kubelet[2066]: I0213 20:39:32.607591 2066 kubelet_node_status.go:76] "Attempting to register node" node="localhost" Feb 13 20:39:32.607951 kubelet[2066]: E0213 20:39:32.607923 2066 kubelet_node_status.go:108] "Unable to register node with API server" err="Post \"https://10.0.0.8:6443/api/v1/nodes\": dial tcp 10.0.0.8:6443: connect: connection refused" node="localhost" Feb 13 20:39:32.648984 
kubelet[2066]: W0213 20:39:32.648867 2066 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.8:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.8:6443: connect: connection refused Feb 13 20:39:32.648984 kubelet[2066]: E0213 20:39:32.648952 2066 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.0.0.8:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.8:6443: connect: connection refused" logger="UnhandledError" Feb 13 20:39:32.822036 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3826618883.mount: Deactivated successfully. Feb 13 20:39:32.826386 containerd[1426]: time="2025-02-13T20:39:32.826344432Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Feb 13 20:39:32.827038 containerd[1426]: time="2025-02-13T20:39:32.826975140Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=269175" Feb 13 20:39:32.827915 containerd[1426]: time="2025-02-13T20:39:32.827651645Z" level=info msg="ImageCreate event name:\"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Feb 13 20:39:32.828759 containerd[1426]: time="2025-02-13T20:39:32.828627431Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Feb 13 20:39:32.828954 containerd[1426]: time="2025-02-13T20:39:32.828927592Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Feb 13 20:39:32.829958 containerd[1426]: time="2025-02-13T20:39:32.829861665Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Feb 13 20:39:32.830192 containerd[1426]: time="2025-02-13T20:39:32.830168392Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Feb 13 20:39:32.832969 containerd[1426]: time="2025-02-13T20:39:32.832917645Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Feb 13 20:39:32.833761 containerd[1426]: time="2025-02-13T20:39:32.833726056Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 549.112062ms" Feb 13 20:39:32.835189 containerd[1426]: time="2025-02-13T20:39:32.835159010Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest 
\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 538.98775ms" Feb 13 20:39:32.836949 containerd[1426]: time="2025-02-13T20:39:32.836884079Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 536.899708ms" Feb 13 20:39:32.846613 kubelet[2066]: W0213 20:39:32.846584 2066 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.8:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.8:6443: connect: connection refused Feb 13 20:39:32.847119 kubelet[2066]: E0213 20:39:32.846628 2066 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.0.0.8:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.8:6443: connect: connection refused" logger="UnhandledError" Feb 13 20:39:32.854104 kubelet[2066]: W0213 20:39:32.854021 2066 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.8:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.0.0.8:6443: connect: connection refused Feb 13 20:39:32.854104 kubelet[2066]: E0213 20:39:32.854079 2066 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.0.0.8:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.8:6443: connect: connection refused" logger="UnhandledError" Feb 13 20:39:32.932497 kubelet[2066]: W0213 20:39:32.927336 2066 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.8:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.8:6443: connect: connection refused Feb 13 20:39:32.932497 kubelet[2066]: E0213 20:39:32.927376 2066 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.0.0.8:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.8:6443: connect: connection refused" logger="UnhandledError" Feb 13 20:39:32.998266 containerd[1426]: time="2025-02-13T20:39:32.998132848Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 20:39:32.998266 containerd[1426]: time="2025-02-13T20:39:32.998189974Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 20:39:32.998266 containerd[1426]: time="2025-02-13T20:39:32.998211832Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 20:39:32.998424 containerd[1426]: time="2025-02-13T20:39:32.998276604Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 20:39:32.998424 containerd[1426]: time="2025-02-13T20:39:32.998314114Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 20:39:32.998424 containerd[1426]: time="2025-02-13T20:39:32.998296940Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 20:39:32.998424 containerd[1426]: time="2025-02-13T20:39:32.998324362Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 20:39:32.998525 containerd[1426]: time="2025-02-13T20:39:32.998411592Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 20:39:32.999958 containerd[1426]: time="2025-02-13T20:39:32.999850271Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 20:39:33.001560 containerd[1426]: time="2025-02-13T20:39:32.999937301Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 20:39:33.001560 containerd[1426]: time="2025-02-13T20:39:33.000389665Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 20:39:33.001560 containerd[1426]: time="2025-02-13T20:39:33.000655239Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 20:39:33.019073 systemd[1]: Started cri-containerd-2b6ec82d3a9e07752120734154bc613f70d47f936735ca5d760f2c2b7bc2e9bb.scope - libcontainer container 2b6ec82d3a9e07752120734154bc613f70d47f936735ca5d760f2c2b7bc2e9bb. Feb 13 20:39:33.020130 systemd[1]: Started cri-containerd-ff5682f1ec44fba6ed07ce086f3797921c8efa26069f39d4d8cffdca6f8f07f5.scope - libcontainer container ff5682f1ec44fba6ed07ce086f3797921c8efa26069f39d4d8cffdca6f8f07f5. Feb 13 20:39:33.023413 systemd[1]: Started cri-containerd-7c9243967a26b1251e745b9e379ddbcd0e4e62dc824d4a6746075cfd0f2ca7b8.scope - libcontainer container 7c9243967a26b1251e745b9e379ddbcd0e4e62dc824d4a6746075cfd0f2ca7b8. 
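The reflector errors above are all connection-level: the kubelet cannot reach the API server at 10.0.0.8:6443 because the kube-apiserver static pod is only now being sandboxed. A minimal, hypothetical diagnostic (not part of this node's tooling) that retries the same list call until the socket accepts connections might look like the sketch below; the address comes from the log, and TLS verification is skipped purely for brevity.

```go
// probe.go: a hypothetical probe that repeats the list request the kubelet
// reflectors above keep failing on, until the apiserver socket comes up.
package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

func main() {
	url := "https://10.0.0.8:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500"
	client := &http.Client{
		Timeout: 5 * time.Second,
		Transport: &http.Transport{
			TLSClientConfig: &tls.Config{InsecureSkipVerify: true}, // diagnostic only
		},
	}
	for {
		resp, err := client.Get(url)
		if err != nil {
			// Matches the "dial tcp 10.0.0.8:6443: connect: connection refused"
			// entries in the journal while the apiserver container starts.
			fmt.Println("not ready:", err)
			time.Sleep(2 * time.Second)
			continue
		}
		resp.Body.Close()
		// Even a 401/403 here would prove the socket is up; the reflector
		// failures above never get past the TCP dial.
		fmt.Println("apiserver answered with HTTP", resp.StatusCode)
		return
	}
}
```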
Feb 13 20:39:33.054635 containerd[1426]: time="2025-02-13T20:39:33.054598277Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:925ebb4e0c5e60214067abda11d8133c,Namespace:kube-system,Attempt:0,} returns sandbox id \"2b6ec82d3a9e07752120734154bc613f70d47f936735ca5d760f2c2b7bc2e9bb\"" Feb 13 20:39:33.055829 containerd[1426]: time="2025-02-13T20:39:33.055416694Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:c72911152bbceda2f57fd8d59261e015,Namespace:kube-system,Attempt:0,} returns sandbox id \"ff5682f1ec44fba6ed07ce086f3797921c8efa26069f39d4d8cffdca6f8f07f5\"" Feb 13 20:39:33.055885 kubelet[2066]: E0213 20:39:33.055619 2066 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 20:39:33.058921 kubelet[2066]: E0213 20:39:33.056750 2066 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 20:39:33.062380 containerd[1426]: time="2025-02-13T20:39:33.060828908Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:95ef9ac46cd4dbaadc63cb713310ae59,Namespace:kube-system,Attempt:0,} returns sandbox id \"7c9243967a26b1251e745b9e379ddbcd0e4e62dc824d4a6746075cfd0f2ca7b8\"" Feb 13 20:39:33.062436 kubelet[2066]: E0213 20:39:33.061366 2066 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 20:39:33.063059 containerd[1426]: time="2025-02-13T20:39:33.063032941Z" level=info msg="CreateContainer within sandbox \"2b6ec82d3a9e07752120734154bc613f70d47f936735ca5d760f2c2b7bc2e9bb\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Feb 13 20:39:33.063248 containerd[1426]: time="2025-02-13T20:39:33.063190972Z" level=info msg="CreateContainer within sandbox \"7c9243967a26b1251e745b9e379ddbcd0e4e62dc824d4a6746075cfd0f2ca7b8\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Feb 13 20:39:33.064085 containerd[1426]: time="2025-02-13T20:39:33.064061025Z" level=info msg="CreateContainer within sandbox \"ff5682f1ec44fba6ed07ce086f3797921c8efa26069f39d4d8cffdca6f8f07f5\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Feb 13 20:39:33.077026 containerd[1426]: time="2025-02-13T20:39:33.076985893Z" level=info msg="CreateContainer within sandbox \"2b6ec82d3a9e07752120734154bc613f70d47f936735ca5d760f2c2b7bc2e9bb\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"29f86879ca0593ca851bf7e1c65bfbd41fd4faf3f4ea8024054d10fc59a56d21\"" Feb 13 20:39:33.077519 containerd[1426]: time="2025-02-13T20:39:33.077475238Z" level=info msg="StartContainer for \"29f86879ca0593ca851bf7e1c65bfbd41fd4faf3f4ea8024054d10fc59a56d21\"" Feb 13 20:39:33.078985 containerd[1426]: time="2025-02-13T20:39:33.078917815Z" level=info msg="CreateContainer within sandbox \"7c9243967a26b1251e745b9e379ddbcd0e4e62dc824d4a6746075cfd0f2ca7b8\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"374018a8f4a5aba52d44e8d811dd476a7aab3cd94e5ce59bcec6caaf9ce9e064\"" Feb 13 20:39:33.079338 containerd[1426]: time="2025-02-13T20:39:33.079310932Z" level=info msg="StartContainer for \"374018a8f4a5aba52d44e8d811dd476a7aab3cd94e5ce59bcec6caaf9ce9e064\"" Feb 13 20:39:33.082717 
containerd[1426]: time="2025-02-13T20:39:33.082625027Z" level=info msg="CreateContainer within sandbox \"ff5682f1ec44fba6ed07ce086f3797921c8efa26069f39d4d8cffdca6f8f07f5\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"6c2b8544befdaed52fa0254c924442e6a7e50b58fb7ffed7001a7baef15d978e\"" Feb 13 20:39:33.083119 containerd[1426]: time="2025-02-13T20:39:33.083082950Z" level=info msg="StartContainer for \"6c2b8544befdaed52fa0254c924442e6a7e50b58fb7ffed7001a7baef15d978e\"" Feb 13 20:39:33.106065 systemd[1]: Started cri-containerd-29f86879ca0593ca851bf7e1c65bfbd41fd4faf3f4ea8024054d10fc59a56d21.scope - libcontainer container 29f86879ca0593ca851bf7e1c65bfbd41fd4faf3f4ea8024054d10fc59a56d21. Feb 13 20:39:33.109455 systemd[1]: Started cri-containerd-374018a8f4a5aba52d44e8d811dd476a7aab3cd94e5ce59bcec6caaf9ce9e064.scope - libcontainer container 374018a8f4a5aba52d44e8d811dd476a7aab3cd94e5ce59bcec6caaf9ce9e064. Feb 13 20:39:33.110295 systemd[1]: Started cri-containerd-6c2b8544befdaed52fa0254c924442e6a7e50b58fb7ffed7001a7baef15d978e.scope - libcontainer container 6c2b8544befdaed52fa0254c924442e6a7e50b58fb7ffed7001a7baef15d978e. Feb 13 20:39:33.147740 containerd[1426]: time="2025-02-13T20:39:33.144392874Z" level=info msg="StartContainer for \"29f86879ca0593ca851bf7e1c65bfbd41fd4faf3f4ea8024054d10fc59a56d21\" returns successfully" Feb 13 20:39:33.161122 containerd[1426]: time="2025-02-13T20:39:33.160077406Z" level=info msg="StartContainer for \"6c2b8544befdaed52fa0254c924442e6a7e50b58fb7ffed7001a7baef15d978e\" returns successfully" Feb 13 20:39:33.161122 containerd[1426]: time="2025-02-13T20:39:33.160158503Z" level=info msg="StartContainer for \"374018a8f4a5aba52d44e8d811dd476a7aab3cd94e5ce59bcec6caaf9ce9e064\" returns successfully" Feb 13 20:39:33.249096 kubelet[2066]: E0213 20:39:33.248959 2066 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.8:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.8:6443: connect: connection refused" interval="1.6s" Feb 13 20:39:33.410066 kubelet[2066]: I0213 20:39:33.410012 2066 kubelet_node_status.go:76] "Attempting to register node" node="localhost" Feb 13 20:39:33.410793 kubelet[2066]: E0213 20:39:33.410727 2066 kubelet_node_status.go:108] "Unable to register node with API server" err="Post \"https://10.0.0.8:6443/api/v1/nodes\": dial tcp 10.0.0.8:6443: connect: connection refused" node="localhost" Feb 13 20:39:33.873929 kubelet[2066]: E0213 20:39:33.873680 2066 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Feb 13 20:39:33.873929 kubelet[2066]: E0213 20:39:33.873791 2066 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 20:39:33.875823 kubelet[2066]: E0213 20:39:33.875539 2066 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Feb 13 20:39:33.875823 kubelet[2066]: E0213 20:39:33.875635 2066 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 20:39:33.878236 kubelet[2066]: E0213 20:39:33.878143 2066 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the 
cluster" err="node \"localhost\" not found" node="localhost" Feb 13 20:39:33.878346 kubelet[2066]: E0213 20:39:33.878302 2066 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 20:39:34.880840 kubelet[2066]: E0213 20:39:34.880388 2066 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Feb 13 20:39:34.880840 kubelet[2066]: E0213 20:39:34.880504 2066 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 20:39:34.880840 kubelet[2066]: E0213 20:39:34.880693 2066 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Feb 13 20:39:34.880840 kubelet[2066]: E0213 20:39:34.880768 2066 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 20:39:34.970013 kubelet[2066]: E0213 20:39:34.969980 2066 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"localhost\" not found" node="localhost" Feb 13 20:39:35.012570 kubelet[2066]: I0213 20:39:35.012518 2066 kubelet_node_status.go:76] "Attempting to register node" node="localhost" Feb 13 20:39:35.021352 kubelet[2066]: E0213 20:39:35.021329 2066 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Feb 13 20:39:35.021482 kubelet[2066]: E0213 20:39:35.021472 2066 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 20:39:35.021572 kubelet[2066]: I0213 20:39:35.021474 2066 kubelet_node_status.go:79] "Successfully registered node" node="localhost" Feb 13 20:39:35.021572 kubelet[2066]: E0213 20:39:35.021522 2066 kubelet_node_status.go:549] "Error updating node status, will retry" err="error getting node \"localhost\": node \"localhost\" not found" Feb 13 20:39:35.024658 kubelet[2066]: E0213 20:39:35.024638 2066 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"localhost\" not found" Feb 13 20:39:35.098982 kubelet[2066]: E0213 20:39:35.098868 2066 event.go:359] "Server rejected event (will not retry!)" err="namespaces \"default\" not found" event="&Event{ObjectMeta:{localhost.1823df17333a0821 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-02-13 20:39:31.831826465 +0000 UTC m=+1.530711029,LastTimestamp:2025-02-13 20:39:31.831826465 +0000 UTC m=+1.530711029,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Feb 13 20:39:35.145031 kubelet[2066]: I0213 20:39:35.143623 2066 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Feb 13 20:39:35.151151 kubelet[2066]: E0213 20:39:35.151124 2066 kubelet.go:3202] "Failed 
creating a mirror pod" err="pods \"kube-scheduler-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-localhost" Feb 13 20:39:35.151151 kubelet[2066]: I0213 20:39:35.151150 2066 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Feb 13 20:39:35.152627 kubelet[2066]: E0213 20:39:35.152003 2066 event.go:359] "Server rejected event (will not retry!)" err="namespaces \"default\" not found" event="&Event{ObjectMeta:{localhost.1823df1733ec7963 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:InvalidDiskCapacity,Message:invalid capacity 0 on image filesystem,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-02-13 20:39:31.843520867 +0000 UTC m=+1.542405391,LastTimestamp:2025-02-13 20:39:31.843520867 +0000 UTC m=+1.542405391,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Feb 13 20:39:35.153186 kubelet[2066]: E0213 20:39:35.153085 2066 kubelet.go:3202] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-localhost" Feb 13 20:39:35.153186 kubelet[2066]: I0213 20:39:35.153106 2066 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" Feb 13 20:39:35.154798 kubelet[2066]: E0213 20:39:35.154773 2066 kubelet.go:3202] "Failed creating a mirror pod" err="pods \"kube-controller-manager-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-localhost" Feb 13 20:39:35.205912 kubelet[2066]: E0213 20:39:35.205800 2066 event.go:359] "Server rejected event (will not retry!)" err="namespaces \"default\" not found" event="&Event{ObjectMeta:{localhost.1823df17348ead88 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node localhost status is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-02-13 20:39:31.854151048 +0000 UTC m=+1.553035572,LastTimestamp:2025-02-13 20:39:31.854151048 +0000 UTC m=+1.553035572,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Feb 13 20:39:35.829096 kubelet[2066]: I0213 20:39:35.829012 2066 apiserver.go:52] "Watching apiserver" Feb 13 20:39:35.842749 kubelet[2066]: I0213 20:39:35.842712 2066 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world" Feb 13 20:39:35.881731 kubelet[2066]: I0213 20:39:35.881703 2066 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Feb 13 20:39:35.886681 kubelet[2066]: E0213 20:39:35.886653 2066 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 20:39:36.777531 systemd[1]: Reloading requested from client PID 2342 ('systemctl') (unit session-5.scope)... Feb 13 20:39:36.777546 systemd[1]: Reloading... 
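The "no PriorityClass with name system-node-critical was found" failures above are transient: the API server creates the built-in system priority classes (and the default namespace whose absence causes the rejected events) during its own bootstrap, so mirror-pod creation succeeds on a later sync. A speculative client-go sketch that polls for the class, assuming an admin kubeconfig at /etc/kubernetes/admin.conf, could look like this:

```go
// wait_priorityclass.go: a sketch (not from this node's tooling) that waits
// for the built-in "system-node-critical" PriorityClass whose absence causes
// the "Failed creating a mirror pod" entries above.
package main

import (
	"context"
	"fmt"
	"time"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Assumed kubeconfig path; adjust for the node in question.
	cfg, err := clientcmd.BuildConfigFromFlags("", "/etc/kubernetes/admin.conf")
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)
	for {
		pc, err := cs.SchedulingV1().PriorityClasses().Get(
			context.TODO(), "system-node-critical", metav1.GetOptions{})
		if err == nil {
			fmt.Printf("found %s (value %d); mirror pods can be created now\n", pc.Name, pc.Value)
			return
		}
		fmt.Println("still missing:", err)
		time.Sleep(2 * time.Second)
	}
}
```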
Feb 13 20:39:36.835966 zram_generator::config[2381]: No configuration found.
Feb 13 20:39:36.883366 kubelet[2066]: E0213 20:39:36.883332 2066 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 13 20:39:36.957458 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Feb 13 20:39:37.022298 systemd[1]: Reloading finished in 244 ms.
Feb 13 20:39:37.053160 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
Feb 13 20:39:37.065874 systemd[1]: kubelet.service: Deactivated successfully.
Feb 13 20:39:37.066165 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Feb 13 20:39:37.066222 systemd[1]: kubelet.service: Consumed 1.974s CPU time, 125.3M memory peak, 0B memory swap peak.
Feb 13 20:39:37.075225 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Feb 13 20:39:37.168977 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Feb 13 20:39:37.174118 (kubelet)[2423]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS
Feb 13 20:39:37.211931 kubelet[2423]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Feb 13 20:39:37.211931 kubelet[2423]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI.
Feb 13 20:39:37.211931 kubelet[2423]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Feb 13 20:39:37.211931 kubelet[2423]: I0213 20:39:37.210617 2423 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Feb 13 20:39:37.219257 kubelet[2423]: I0213 20:39:37.219215 2423 server.go:520] "Kubelet version" kubeletVersion="v1.32.0"
Feb 13 20:39:37.219257 kubelet[2423]: I0213 20:39:37.219244 2423 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Feb 13 20:39:37.219564 kubelet[2423]: I0213 20:39:37.219535 2423 server.go:954] "Client rotation is on, will bootstrap in background"
Feb 13 20:39:37.221975 kubelet[2423]: I0213 20:39:37.221945 2423 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem".
Feb 13 20:39:37.225079 kubelet[2423]: I0213 20:39:37.225038 2423 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Feb 13 20:39:37.231837 kubelet[2423]: E0213 20:39:37.231806 2423 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService"
Feb 13 20:39:37.231837 kubelet[2423]: I0213 20:39:37.231835 2423 server.go:1421] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config."
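The deprecation warnings above ask for --container-runtime-endpoint and --volume-plugin-dir to move into the KubeletConfiguration file; --pod-infra-container-image has no config-file counterpart and, per the entry above, will instead come from the CRI after its removal in 1.35. A hedged sketch of generating such a config with the k8s.io/kubelet v1beta1 types follows; the endpoint and plugin-dir values are illustrative, not read from this node.

```go
// kubelet_config.go: a sketch of the config-file equivalent of the two
// deprecated flags, using the published KubeletConfiguration v1beta1 types.
package main

import (
	"fmt"

	kubeletv1beta1 "k8s.io/kubelet/config/v1beta1"
	"sigs.k8s.io/yaml"
)

func main() {
	cfg := kubeletv1beta1.KubeletConfiguration{
		// Assumed values for illustration; use the node's real paths.
		ContainerRuntimeEndpoint: "unix:///run/containerd/containerd.sock",
		VolumePluginDir:          "/var/lib/kubelet/volumeplugins",
	}
	cfg.APIVersion = "kubelet.config.k8s.io/v1beta1"
	cfg.Kind = "KubeletConfiguration"

	out, err := yaml.Marshal(&cfg)
	if err != nil {
		panic(err)
	}
	// Write the output to the file passed via the kubelet's --config flag.
	fmt.Print(string(out))
}
```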
Feb 13 20:39:37.234564 kubelet[2423]: I0213 20:39:37.234532 2423 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Feb 13 20:39:37.234805 kubelet[2423]: I0213 20:39:37.234775 2423 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Feb 13 20:39:37.234978 kubelet[2423]: I0213 20:39:37.234806 2423 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Feb 13 20:39:37.235067 kubelet[2423]: I0213 20:39:37.234989 2423 topology_manager.go:138] "Creating topology manager with none policy" Feb 13 20:39:37.235067 kubelet[2423]: I0213 20:39:37.235000 2423 container_manager_linux.go:304] "Creating device plugin manager" Feb 13 20:39:37.235067 kubelet[2423]: I0213 20:39:37.235044 2423 state_mem.go:36] "Initialized new in-memory state store" Feb 13 20:39:37.235191 kubelet[2423]: I0213 20:39:37.235178 2423 kubelet.go:446] "Attempting to sync node with API server" Feb 13 20:39:37.235191 kubelet[2423]: I0213 20:39:37.235191 2423 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests" Feb 13 20:39:37.235244 kubelet[2423]: I0213 20:39:37.235208 2423 kubelet.go:352] "Adding apiserver pod source" Feb 13 20:39:37.235244 kubelet[2423]: I0213 20:39:37.235217 2423 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Feb 13 20:39:37.243612 kubelet[2423]: I0213 20:39:37.243590 2423 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Feb 13 20:39:37.244156 kubelet[2423]: I0213 20:39:37.244141 2423 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Feb 13 20:39:37.244606 kubelet[2423]: I0213 20:39:37.244590 2423 watchdog_linux.go:99] "Systemd watchdog is not enabled" Feb 13 20:39:37.244640 kubelet[2423]: I0213 20:39:37.244620 2423 server.go:1287] "Started kubelet" Feb 13 20:39:37.244967 kubelet[2423]: I0213 20:39:37.244893 2423 
ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Feb 13 20:39:37.246728 kubelet[2423]: I0213 20:39:37.245105 2423 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Feb 13 20:39:37.246728 kubelet[2423]: I0213 20:39:37.245115 2423 server.go:169] "Starting to listen" address="0.0.0.0" port=10250 Feb 13 20:39:37.246728 kubelet[2423]: I0213 20:39:37.245801 2423 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Feb 13 20:39:37.246728 kubelet[2423]: I0213 20:39:37.246000 2423 server.go:490] "Adding debug handlers to kubelet server" Feb 13 20:39:37.246728 kubelet[2423]: I0213 20:39:37.246128 2423 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Feb 13 20:39:37.247365 kubelet[2423]: E0213 20:39:37.247332 2423 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"localhost\" not found" Feb 13 20:39:37.247365 kubelet[2423]: I0213 20:39:37.247363 2423 volume_manager.go:297] "Starting Kubelet Volume Manager" Feb 13 20:39:37.247542 kubelet[2423]: I0213 20:39:37.247520 2423 desired_state_of_world_populator.go:149] "Desired state populator starts to run" Feb 13 20:39:37.249020 kubelet[2423]: I0213 20:39:37.248996 2423 factory.go:221] Registration of the systemd container factory successfully Feb 13 20:39:37.249177 kubelet[2423]: I0213 20:39:37.249157 2423 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Feb 13 20:39:37.249944 kubelet[2423]: I0213 20:39:37.249913 2423 reconciler.go:26] "Reconciler: start to sync state" Feb 13 20:39:37.252845 kubelet[2423]: E0213 20:39:37.252820 2423 kubelet.go:1561] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Feb 13 20:39:37.253080 kubelet[2423]: I0213 20:39:37.253060 2423 factory.go:221] Registration of the containerd container factory successfully Feb 13 20:39:37.262263 kubelet[2423]: I0213 20:39:37.262165 2423 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Feb 13 20:39:37.264999 kubelet[2423]: I0213 20:39:37.264966 2423 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Feb 13 20:39:37.264999 kubelet[2423]: I0213 20:39:37.264992 2423 status_manager.go:227] "Starting to sync pod status with apiserver" Feb 13 20:39:37.265101 kubelet[2423]: I0213 20:39:37.265015 2423 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." 
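With the server started above (authenticated HTTPS API on 0.0.0.0:10250, podresources on a unix socket), a quick liveness check goes against the kubelet's plain-HTTP health endpoint, which defaults to 127.0.0.1:10248 (healthzBindAddress/healthzPort in KubeletConfiguration); that default port is an assumption here, as this log does not show it. A minimal probe:

```go
// healthz.go: a small, illustrative readiness poll against the kubelet that
// just restarted above. Port 10250 seen in the log is the authenticated API;
// the unauthenticated healthz endpoint defaults to localhost:10248.
package main

import (
	"fmt"
	"io"
	"net/http"
	"time"
)

func main() {
	for {
		resp, err := http.Get("http://127.0.0.1:10248/healthz")
		if err != nil {
			fmt.Println("kubelet not answering yet:", err)
			time.Sleep(time.Second)
			continue
		}
		body, _ := io.ReadAll(resp.Body)
		resp.Body.Close()
		fmt.Printf("healthz: %d %s\n", resp.StatusCode, string(body)) // expect: 200 ok
		return
	}
}
```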
Feb 13 20:39:37.265101 kubelet[2423]: I0213 20:39:37.265022 2423 kubelet.go:2388] "Starting kubelet main sync loop" Feb 13 20:39:37.265101 kubelet[2423]: E0213 20:39:37.265063 2423 kubelet.go:2412] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Feb 13 20:39:37.308993 kubelet[2423]: I0213 20:39:37.308160 2423 cpu_manager.go:221] "Starting CPU manager" policy="none" Feb 13 20:39:37.308993 kubelet[2423]: I0213 20:39:37.308178 2423 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Feb 13 20:39:37.308993 kubelet[2423]: I0213 20:39:37.308196 2423 state_mem.go:36] "Initialized new in-memory state store" Feb 13 20:39:37.308993 kubelet[2423]: I0213 20:39:37.308349 2423 state_mem.go:88] "Updated default CPUSet" cpuSet="" Feb 13 20:39:37.308993 kubelet[2423]: I0213 20:39:37.308360 2423 state_mem.go:96] "Updated CPUSet assignments" assignments={} Feb 13 20:39:37.308993 kubelet[2423]: I0213 20:39:37.308378 2423 policy_none.go:49] "None policy: Start" Feb 13 20:39:37.308993 kubelet[2423]: I0213 20:39:37.308385 2423 memory_manager.go:186] "Starting memorymanager" policy="None" Feb 13 20:39:37.308993 kubelet[2423]: I0213 20:39:37.308393 2423 state_mem.go:35] "Initializing new in-memory state store" Feb 13 20:39:37.308993 kubelet[2423]: I0213 20:39:37.308483 2423 state_mem.go:75] "Updated machine memory state" Feb 13 20:39:37.312406 kubelet[2423]: I0213 20:39:37.312365 2423 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Feb 13 20:39:37.312659 kubelet[2423]: I0213 20:39:37.312530 2423 eviction_manager.go:189] "Eviction manager: starting control loop" Feb 13 20:39:37.312659 kubelet[2423]: I0213 20:39:37.312568 2423 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Feb 13 20:39:37.312948 kubelet[2423]: I0213 20:39:37.312752 2423 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Feb 13 20:39:37.314149 kubelet[2423]: E0213 20:39:37.314035 2423 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." 
err="no imagefs label for configured runtime" Feb 13 20:39:37.366423 kubelet[2423]: I0213 20:39:37.366375 2423 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Feb 13 20:39:37.366423 kubelet[2423]: I0213 20:39:37.366407 2423 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" Feb 13 20:39:37.366548 kubelet[2423]: I0213 20:39:37.366409 2423 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Feb 13 20:39:37.371791 kubelet[2423]: E0213 20:39:37.371712 2423 kubelet.go:3202] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost" Feb 13 20:39:37.416103 kubelet[2423]: I0213 20:39:37.416061 2423 kubelet_node_status.go:76] "Attempting to register node" node="localhost" Feb 13 20:39:37.421359 kubelet[2423]: I0213 20:39:37.421327 2423 kubelet_node_status.go:125] "Node was previously registered" node="localhost" Feb 13 20:39:37.421443 kubelet[2423]: I0213 20:39:37.421394 2423 kubelet_node_status.go:79] "Successfully registered node" node="localhost" Feb 13 20:39:37.451587 kubelet[2423]: I0213 20:39:37.451560 2423 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/925ebb4e0c5e60214067abda11d8133c-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"925ebb4e0c5e60214067abda11d8133c\") " pod="kube-system/kube-apiserver-localhost" Feb 13 20:39:37.451587 kubelet[2423]: I0213 20:39:37.451602 2423 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/925ebb4e0c5e60214067abda11d8133c-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"925ebb4e0c5e60214067abda11d8133c\") " pod="kube-system/kube-apiserver-localhost" Feb 13 20:39:37.451587 kubelet[2423]: I0213 20:39:37.451623 2423 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/c72911152bbceda2f57fd8d59261e015-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"c72911152bbceda2f57fd8d59261e015\") " pod="kube-system/kube-controller-manager-localhost" Feb 13 20:39:37.451587 kubelet[2423]: I0213 20:39:37.451640 2423 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/95ef9ac46cd4dbaadc63cb713310ae59-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"95ef9ac46cd4dbaadc63cb713310ae59\") " pod="kube-system/kube-scheduler-localhost" Feb 13 20:39:37.451819 kubelet[2423]: I0213 20:39:37.451660 2423 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/925ebb4e0c5e60214067abda11d8133c-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"925ebb4e0c5e60214067abda11d8133c\") " pod="kube-system/kube-apiserver-localhost" Feb 13 20:39:37.451819 kubelet[2423]: I0213 20:39:37.451675 2423 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/c72911152bbceda2f57fd8d59261e015-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"c72911152bbceda2f57fd8d59261e015\") " 
pod="kube-system/kube-controller-manager-localhost" Feb 13 20:39:37.451819 kubelet[2423]: I0213 20:39:37.451750 2423 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/c72911152bbceda2f57fd8d59261e015-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"c72911152bbceda2f57fd8d59261e015\") " pod="kube-system/kube-controller-manager-localhost" Feb 13 20:39:37.451819 kubelet[2423]: I0213 20:39:37.451765 2423 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/c72911152bbceda2f57fd8d59261e015-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"c72911152bbceda2f57fd8d59261e015\") " pod="kube-system/kube-controller-manager-localhost" Feb 13 20:39:37.451819 kubelet[2423]: I0213 20:39:37.451780 2423 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/c72911152bbceda2f57fd8d59261e015-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"c72911152bbceda2f57fd8d59261e015\") " pod="kube-system/kube-controller-manager-localhost" Feb 13 20:39:37.672122 kubelet[2423]: E0213 20:39:37.671885 2423 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 20:39:37.672122 kubelet[2423]: E0213 20:39:37.671950 2423 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 20:39:37.673149 kubelet[2423]: E0213 20:39:37.673124 2423 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 20:39:38.237228 kubelet[2423]: I0213 20:39:38.237181 2423 apiserver.go:52] "Watching apiserver" Feb 13 20:39:38.247652 kubelet[2423]: I0213 20:39:38.247597 2423 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world" Feb 13 20:39:38.286489 kubelet[2423]: I0213 20:39:38.286464 2423 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Feb 13 20:39:38.286680 kubelet[2423]: E0213 20:39:38.286652 2423 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 20:39:38.286757 kubelet[2423]: I0213 20:39:38.286735 2423 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Feb 13 20:39:38.292068 kubelet[2423]: E0213 20:39:38.292040 2423 kubelet.go:3202] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost" Feb 13 20:39:38.292189 kubelet[2423]: E0213 20:39:38.292173 2423 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 20:39:38.296284 kubelet[2423]: E0213 20:39:38.295533 2423 kubelet.go:3202] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" already exists" pod="kube-system/kube-scheduler-localhost" Feb 13 20:39:38.296284 kubelet[2423]: E0213 20:39:38.295684 2423 dns.go:153] "Nameserver limits 
exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 20:39:38.307518 kubelet[2423]: I0213 20:39:38.307463 2423 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=1.3074493600000001 podStartE2EDuration="1.30744936s" podCreationTimestamp="2025-02-13 20:39:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-02-13 20:39:38.30741638 +0000 UTC m=+1.129340089" watchObservedRunningTime="2025-02-13 20:39:38.30744936 +0000 UTC m=+1.129373069" Feb 13 20:39:38.323122 kubelet[2423]: I0213 20:39:38.322891 2423 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=1.322873092 podStartE2EDuration="1.322873092s" podCreationTimestamp="2025-02-13 20:39:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-02-13 20:39:38.314482994 +0000 UTC m=+1.136406703" watchObservedRunningTime="2025-02-13 20:39:38.322873092 +0000 UTC m=+1.144796801" Feb 13 20:39:38.333440 kubelet[2423]: I0213 20:39:38.332776 2423 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=3.332758939 podStartE2EDuration="3.332758939s" podCreationTimestamp="2025-02-13 20:39:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-02-13 20:39:38.324183849 +0000 UTC m=+1.146107558" watchObservedRunningTime="2025-02-13 20:39:38.332758939 +0000 UTC m=+1.154682648" Feb 13 20:39:38.635651 sudo[1569]: pam_unix(sudo:session): session closed for user root Feb 13 20:39:38.639685 sshd[1566]: pam_unix(sshd:session): session closed for user core Feb 13 20:39:38.642461 systemd[1]: sshd@4-10.0.0.8:22-10.0.0.1:58880.service: Deactivated successfully. Feb 13 20:39:38.644105 systemd[1]: session-5.scope: Deactivated successfully. Feb 13 20:39:38.645979 systemd[1]: session-5.scope: Consumed 5.607s CPU time, 155.1M memory peak, 0B memory swap peak. Feb 13 20:39:38.647101 systemd-logind[1417]: Session 5 logged out. Waiting for processes to exit. Feb 13 20:39:38.648015 systemd-logind[1417]: Removed session 5. Feb 13 20:39:39.288503 kubelet[2423]: E0213 20:39:39.288407 2423 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 20:39:39.288503 kubelet[2423]: E0213 20:39:39.288494 2423 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 20:39:40.290398 kubelet[2423]: E0213 20:39:40.290356 2423 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 20:39:41.829444 kubelet[2423]: I0213 20:39:41.829397 2423 kuberuntime_manager.go:1702] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Feb 13 20:39:41.830443 containerd[1426]: time="2025-02-13T20:39:41.830400303Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." 
Feb 13 20:39:41.832517 kubelet[2423]: I0213 20:39:41.831608 2423 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Feb 13 20:39:42.853533 systemd[1]: Created slice kubepods-besteffort-pod65924020_b751_4b02_9625_2af959801a6e.slice - libcontainer container kubepods-besteffort-pod65924020_b751_4b02_9625_2af959801a6e.slice. Feb 13 20:39:42.871121 systemd[1]: Created slice kubepods-burstable-pode24b7648_6eb5_485b_97d3_9e8d6d3764bf.slice - libcontainer container kubepods-burstable-pode24b7648_6eb5_485b_97d3_9e8d6d3764bf.slice. Feb 13 20:39:42.885262 kubelet[2423]: I0213 20:39:42.885215 2423 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/65924020-b751-4b02-9625-2af959801a6e-kube-proxy\") pod \"kube-proxy-922kz\" (UID: \"65924020-b751-4b02-9625-2af959801a6e\") " pod="kube-system/kube-proxy-922kz" Feb 13 20:39:42.885262 kubelet[2423]: I0213 20:39:42.885254 2423 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run\" (UniqueName: \"kubernetes.io/host-path/e24b7648-6eb5-485b-97d3-9e8d6d3764bf-run\") pod \"kube-flannel-ds-pdsbm\" (UID: \"e24b7648-6eb5-485b-97d3-9e8d6d3764bf\") " pod="kube-flannel/kube-flannel-ds-pdsbm" Feb 13 20:39:42.885586 kubelet[2423]: I0213 20:39:42.885275 2423 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni\" (UniqueName: \"kubernetes.io/host-path/e24b7648-6eb5-485b-97d3-9e8d6d3764bf-cni\") pod \"kube-flannel-ds-pdsbm\" (UID: \"e24b7648-6eb5-485b-97d3-9e8d6d3764bf\") " pod="kube-flannel/kube-flannel-ds-pdsbm" Feb 13 20:39:42.885586 kubelet[2423]: I0213 20:39:42.885296 2423 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t6m58\" (UniqueName: \"kubernetes.io/projected/65924020-b751-4b02-9625-2af959801a6e-kube-api-access-t6m58\") pod \"kube-proxy-922kz\" (UID: \"65924020-b751-4b02-9625-2af959801a6e\") " pod="kube-system/kube-proxy-922kz" Feb 13 20:39:42.885586 kubelet[2423]: I0213 20:39:42.885314 2423 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/65924020-b751-4b02-9625-2af959801a6e-xtables-lock\") pod \"kube-proxy-922kz\" (UID: \"65924020-b751-4b02-9625-2af959801a6e\") " pod="kube-system/kube-proxy-922kz" Feb 13 20:39:42.885586 kubelet[2423]: I0213 20:39:42.885329 2423 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/e24b7648-6eb5-485b-97d3-9e8d6d3764bf-xtables-lock\") pod \"kube-flannel-ds-pdsbm\" (UID: \"e24b7648-6eb5-485b-97d3-9e8d6d3764bf\") " pod="kube-flannel/kube-flannel-ds-pdsbm" Feb 13 20:39:42.885586 kubelet[2423]: I0213 20:39:42.885346 2423 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bk9w5\" (UniqueName: \"kubernetes.io/projected/e24b7648-6eb5-485b-97d3-9e8d6d3764bf-kube-api-access-bk9w5\") pod \"kube-flannel-ds-pdsbm\" (UID: \"e24b7648-6eb5-485b-97d3-9e8d6d3764bf\") " pod="kube-flannel/kube-flannel-ds-pdsbm" Feb 13 20:39:42.885699 kubelet[2423]: I0213 20:39:42.885362 2423 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/65924020-b751-4b02-9625-2af959801a6e-lib-modules\") pod \"kube-proxy-922kz\" 
(UID: \"65924020-b751-4b02-9625-2af959801a6e\") " pod="kube-system/kube-proxy-922kz" Feb 13 20:39:42.885699 kubelet[2423]: I0213 20:39:42.885377 2423 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-plugin\" (UniqueName: \"kubernetes.io/host-path/e24b7648-6eb5-485b-97d3-9e8d6d3764bf-cni-plugin\") pod \"kube-flannel-ds-pdsbm\" (UID: \"e24b7648-6eb5-485b-97d3-9e8d6d3764bf\") " pod="kube-flannel/kube-flannel-ds-pdsbm" Feb 13 20:39:42.885699 kubelet[2423]: I0213 20:39:42.885391 2423 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flannel-cfg\" (UniqueName: \"kubernetes.io/configmap/e24b7648-6eb5-485b-97d3-9e8d6d3764bf-flannel-cfg\") pod \"kube-flannel-ds-pdsbm\" (UID: \"e24b7648-6eb5-485b-97d3-9e8d6d3764bf\") " pod="kube-flannel/kube-flannel-ds-pdsbm" Feb 13 20:39:43.164538 kubelet[2423]: E0213 20:39:43.164351 2423 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 20:39:43.165108 containerd[1426]: time="2025-02-13T20:39:43.164812217Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-922kz,Uid:65924020-b751-4b02-9625-2af959801a6e,Namespace:kube-system,Attempt:0,}" Feb 13 20:39:43.174318 kubelet[2423]: E0213 20:39:43.174282 2423 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 20:39:43.174870 containerd[1426]: time="2025-02-13T20:39:43.174686821Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-flannel-ds-pdsbm,Uid:e24b7648-6eb5-485b-97d3-9e8d6d3764bf,Namespace:kube-flannel,Attempt:0,}" Feb 13 20:39:43.191409 containerd[1426]: time="2025-02-13T20:39:43.191259281Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 20:39:43.191409 containerd[1426]: time="2025-02-13T20:39:43.191339798Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 20:39:43.191409 containerd[1426]: time="2025-02-13T20:39:43.191379176Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 20:39:43.191604 containerd[1426]: time="2025-02-13T20:39:43.191468738Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 20:39:43.197859 containerd[1426]: time="2025-02-13T20:39:43.197756684Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 20:39:43.198060 containerd[1426]: time="2025-02-13T20:39:43.197955696Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 20:39:43.198253 containerd[1426]: time="2025-02-13T20:39:43.198208733Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 20:39:43.198586 containerd[1426]: time="2025-02-13T20:39:43.198518076Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 20:39:43.218063 systemd[1]: Started cri-containerd-fbb57884afba0a6758584b44ea3efb3943a06042eaa45e98b3cf5549989d6dd0.scope - libcontainer container fbb57884afba0a6758584b44ea3efb3943a06042eaa45e98b3cf5549989d6dd0. Feb 13 20:39:43.221486 systemd[1]: Started cri-containerd-c13b68c0aab13b40fd875770c2a10c200980fc7cbac220f7042c8e2cd39244d0.scope - libcontainer container c13b68c0aab13b40fd875770c2a10c200980fc7cbac220f7042c8e2cd39244d0. Feb 13 20:39:43.246164 containerd[1426]: time="2025-02-13T20:39:43.245986896Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-922kz,Uid:65924020-b751-4b02-9625-2af959801a6e,Namespace:kube-system,Attempt:0,} returns sandbox id \"fbb57884afba0a6758584b44ea3efb3943a06042eaa45e98b3cf5549989d6dd0\"" Feb 13 20:39:43.247321 kubelet[2423]: E0213 20:39:43.247107 2423 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 20:39:43.251083 containerd[1426]: time="2025-02-13T20:39:43.251041752Z" level=info msg="CreateContainer within sandbox \"fbb57884afba0a6758584b44ea3efb3943a06042eaa45e98b3cf5549989d6dd0\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Feb 13 20:39:43.254630 containerd[1426]: time="2025-02-13T20:39:43.254603118Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-flannel-ds-pdsbm,Uid:e24b7648-6eb5-485b-97d3-9e8d6d3764bf,Namespace:kube-flannel,Attempt:0,} returns sandbox id \"c13b68c0aab13b40fd875770c2a10c200980fc7cbac220f7042c8e2cd39244d0\"" Feb 13 20:39:43.255332 kubelet[2423]: E0213 20:39:43.255298 2423 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 20:39:43.258018 containerd[1426]: time="2025-02-13T20:39:43.257889277Z" level=info msg="PullImage \"docker.io/flannel/flannel-cni-plugin:v1.1.2\"" Feb 13 20:39:43.265445 containerd[1426]: time="2025-02-13T20:39:43.265409913Z" level=info msg="CreateContainer within sandbox \"fbb57884afba0a6758584b44ea3efb3943a06042eaa45e98b3cf5549989d6dd0\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"d5ba373d3a06dd3e7f41bef8ccb2ea3b66444c082d3e15aeec19ebcea7600660\"" Feb 13 20:39:43.266141 containerd[1426]: time="2025-02-13T20:39:43.266104954Z" level=info msg="StartContainer for \"d5ba373d3a06dd3e7f41bef8ccb2ea3b66444c082d3e15aeec19ebcea7600660\"" Feb 13 20:39:43.303052 systemd[1]: Started cri-containerd-d5ba373d3a06dd3e7f41bef8ccb2ea3b66444c082d3e15aeec19ebcea7600660.scope - libcontainer container d5ba373d3a06dd3e7f41bef8ccb2ea3b66444c082d3e15aeec19ebcea7600660. 
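The sandbox and container IDs in the surrounding entries are all visible over the same CRI endpoint the kubelet is using. A sketch, under assumptions, of listing them directly with the published CRI client (roughly what `crictl ps` does); the containerd socket path is the usual default and is assumed, not taken from this log:

```go
// cri_list.go: lists the containers whose StartContainer results appear in
// the entries around here (kube-proxy, then flannel's init containers),
// via the CRI gRPC API.
package main

import (
	"context"
	"fmt"
	"time"

	"google.golang.org/grpc"
	"google.golang.org/grpc/credentials/insecure"
	runtimev1 "k8s.io/cri-api/pkg/apis/runtime/v1"
)

func main() {
	conn, err := grpc.Dial("unix:///run/containerd/containerd.sock", // assumed socket path
		grpc.WithTransportCredentials(insecure.NewCredentials()))
	if err != nil {
		panic(err)
	}
	defer conn.Close()

	client := runtimev1.NewRuntimeServiceClient(conn)
	ctx, cancel := context.WithTimeout(context.Background(), 5*time.Second)
	defer cancel()

	resp, err := client.ListContainers(ctx, &runtimev1.ListContainersRequest{})
	if err != nil {
		panic(err)
	}
	for _, c := range resp.Containers {
		// e.g. "d5ba373d3a06 kube-proxy CONTAINER_RUNNING"
		fmt.Printf("%.12s %s %s\n", c.Id, c.Metadata.Name, c.State)
	}
}
```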
Feb 13 20:39:43.328490 containerd[1426]: time="2025-02-13T20:39:43.328447689Z" level=info msg="StartContainer for \"d5ba373d3a06dd3e7f41bef8ccb2ea3b66444c082d3e15aeec19ebcea7600660\" returns successfully" Feb 13 20:39:43.844817 kubelet[2423]: E0213 20:39:43.844783 2423 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 20:39:44.297973 kubelet[2423]: E0213 20:39:44.297936 2423 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 20:39:44.299493 kubelet[2423]: E0213 20:39:44.298106 2423 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 20:39:44.316347 kubelet[2423]: I0213 20:39:44.315822 2423 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-922kz" podStartSLOduration=2.315805123 podStartE2EDuration="2.315805123s" podCreationTimestamp="2025-02-13 20:39:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-02-13 20:39:44.308170857 +0000 UTC m=+7.130094566" watchObservedRunningTime="2025-02-13 20:39:44.315805123 +0000 UTC m=+7.137728832" Feb 13 20:39:45.062172 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2522943474.mount: Deactivated successfully. Feb 13 20:39:45.094887 containerd[1426]: time="2025-02-13T20:39:45.094836688Z" level=info msg="ImageCreate event name:\"docker.io/flannel/flannel-cni-plugin:v1.1.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 20:39:45.095932 containerd[1426]: time="2025-02-13T20:39:45.095771917Z" level=info msg="stop pulling image docker.io/flannel/flannel-cni-plugin:v1.1.2: active requests=0, bytes read=3673532" Feb 13 20:39:45.097850 containerd[1426]: time="2025-02-13T20:39:45.096780216Z" level=info msg="ImageCreate event name:\"sha256:b45062ceea496fc421523388cb91166abc7715a15c2e2cbab4e6f8c9d5dc0ab8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 20:39:45.098816 containerd[1426]: time="2025-02-13T20:39:45.098779368Z" level=info msg="ImageCreate event name:\"docker.io/flannel/flannel-cni-plugin@sha256:bf4b62b131666d040f35a327d906ee5a3418280b68a88d9b9c7e828057210443\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 20:39:45.099782 containerd[1426]: time="2025-02-13T20:39:45.099747250Z" level=info msg="Pulled image \"docker.io/flannel/flannel-cni-plugin:v1.1.2\" with image id \"sha256:b45062ceea496fc421523388cb91166abc7715a15c2e2cbab4e6f8c9d5dc0ab8\", repo tag \"docker.io/flannel/flannel-cni-plugin:v1.1.2\", repo digest \"docker.io/flannel/flannel-cni-plugin@sha256:bf4b62b131666d040f35a327d906ee5a3418280b68a88d9b9c7e828057210443\", size \"3662650\" in 1.841436059s" Feb 13 20:39:45.099782 containerd[1426]: time="2025-02-13T20:39:45.099781705Z" level=info msg="PullImage \"docker.io/flannel/flannel-cni-plugin:v1.1.2\" returns image reference \"sha256:b45062ceea496fc421523388cb91166abc7715a15c2e2cbab4e6f8c9d5dc0ab8\"" Feb 13 20:39:45.102546 containerd[1426]: time="2025-02-13T20:39:45.102492752Z" level=info msg="CreateContainer within sandbox \"c13b68c0aab13b40fd875770c2a10c200980fc7cbac220f7042c8e2cd39244d0\" for container &ContainerMetadata{Name:install-cni-plugin,Attempt:0,}"
Feb 13 20:39:45.111271 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2598605850.mount: Deactivated successfully. Feb 13 20:39:45.112108 containerd[1426]: time="2025-02-13T20:39:45.111954488Z" level=info msg="CreateContainer within sandbox \"c13b68c0aab13b40fd875770c2a10c200980fc7cbac220f7042c8e2cd39244d0\" for &ContainerMetadata{Name:install-cni-plugin,Attempt:0,} returns container id \"639d8ea9042fdd5aeb58f29189c7390a438bd3efdf308e365aced9cf28d018fb\"" Feb 13 20:39:45.112471 containerd[1426]: time="2025-02-13T20:39:45.112412718Z" level=info msg="StartContainer for \"639d8ea9042fdd5aeb58f29189c7390a438bd3efdf308e365aced9cf28d018fb\"" Feb 13 20:39:45.147045 systemd[1]: Started cri-containerd-639d8ea9042fdd5aeb58f29189c7390a438bd3efdf308e365aced9cf28d018fb.scope - libcontainer container 639d8ea9042fdd5aeb58f29189c7390a438bd3efdf308e365aced9cf28d018fb. Feb 13 20:39:45.172112 containerd[1426]: time="2025-02-13T20:39:45.172066571Z" level=info msg="StartContainer for \"639d8ea9042fdd5aeb58f29189c7390a438bd3efdf308e365aced9cf28d018fb\" returns successfully" Feb 13 20:39:45.174891 systemd[1]: cri-containerd-639d8ea9042fdd5aeb58f29189c7390a438bd3efdf308e365aced9cf28d018fb.scope: Deactivated successfully. Feb 13 20:39:45.209621 containerd[1426]: time="2025-02-13T20:39:45.209557885Z" level=info msg="shim disconnected" id=639d8ea9042fdd5aeb58f29189c7390a438bd3efdf308e365aced9cf28d018fb namespace=k8s.io Feb 13 20:39:45.209621 containerd[1426]: time="2025-02-13T20:39:45.209615589Z" level=warning msg="cleaning up after shim disconnected" id=639d8ea9042fdd5aeb58f29189c7390a438bd3efdf308e365aced9cf28d018fb namespace=k8s.io Feb 13 20:39:45.209621 containerd[1426]: time="2025-02-13T20:39:45.209625233Z" level=info msg="cleaning up dead shim" namespace=k8s.io Feb 13 20:39:45.301304 kubelet[2423]: E0213 20:39:45.301270 2423 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 20:39:45.302427 kubelet[2423]: E0213 20:39:45.301748 2423 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 20:39:45.302427 kubelet[2423]: E0213 20:39:45.302129 2423 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 20:39:45.303320 containerd[1426]: time="2025-02-13T20:39:45.303133127Z" level=info msg="PullImage \"docker.io/flannel/flannel:v0.22.0\"" Feb 13 20:39:46.012965 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-639d8ea9042fdd5aeb58f29189c7390a438bd3efdf308e365aced9cf28d018fb-rootfs.mount: Deactivated successfully. Feb 13 20:39:46.438089 containerd[1426]: time="2025-02-13T20:39:46.438036726Z" level=error msg="PullImage \"docker.io/flannel/flannel:v0.22.0\" failed" error="failed to pull and unpack image \"docker.io/flannel/flannel:v0.22.0\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/flannel/flannel/manifests/sha256:5f83f1243057458e27249157394e3859cf31cc075354af150d497f2ebc8b54db: 429 Too Many Requests - Server message: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit"
Feb 13 20:39:46.438503 containerd[1426]: time="2025-02-13T20:39:46.438138046Z" level=info msg="stop pulling image docker.io/flannel/flannel:v0.22.0: active requests=0, bytes read=11054" Feb 13 20:39:46.438548 kubelet[2423]: E0213 20:39:46.438234 2423 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = failed to pull and unpack image \"docker.io/flannel/flannel:v0.22.0\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/flannel/flannel/manifests/sha256:5f83f1243057458e27249157394e3859cf31cc075354af150d497f2ebc8b54db: 429 Too Many Requests - Server message: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit" image="docker.io/flannel/flannel:v0.22.0" Feb 13 20:39:46.438548 kubelet[2423]: E0213 20:39:46.438281 2423 kuberuntime_image.go:55] "Failed to pull image" err="failed to pull and unpack image \"docker.io/flannel/flannel:v0.22.0\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/flannel/flannel/manifests/sha256:5f83f1243057458e27249157394e3859cf31cc075354af150d497f2ebc8b54db: 429 Too Many Requests - Server message: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit" image="docker.io/flannel/flannel:v0.22.0" Feb 13 20:39:46.440297 kubelet[2423]: E0213 20:39:46.440244 2423 kuberuntime_manager.go:1341] "Unhandled Error" err="init container &Container{Name:install-cni,Image:docker.io/flannel/flannel:v0.22.0,Command:[cp],Args:[-f /etc/kube-flannel/cni-conf.json /etc/cni/net.d/10-flannel.conflist],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:cni,ReadOnly:false,MountPath:/etc/cni/net.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:flannel-cfg,ReadOnly:false,MountPath:/etc/kube-flannel/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-bk9w5,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod kube-flannel-ds-pdsbm_kube-flannel(e24b7648-6eb5-485b-97d3-9e8d6d3764bf): ErrImagePull: failed to pull and unpack image \"docker.io/flannel/flannel:v0.22.0\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/flannel/flannel/manifests/sha256:5f83f1243057458e27249157394e3859cf31cc075354af150d497f2ebc8b54db: 429 Too Many Requests - Server message: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit" logger="UnhandledError"
Feb 13 20:39:46.441478 kubelet[2423]: E0213 20:39:46.441434 2423 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"install-cni\" with ErrImagePull: \"failed to pull and unpack image \\\"docker.io/flannel/flannel:v0.22.0\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/flannel/flannel/manifests/sha256:5f83f1243057458e27249157394e3859cf31cc075354af150d497f2ebc8b54db: 429 Too Many Requests - Server message: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit\"" pod="kube-flannel/kube-flannel-ds-pdsbm" podUID="e24b7648-6eb5-485b-97d3-9e8d6d3764bf" Feb 13 20:39:46.483813 kubelet[2423]: E0213 20:39:46.483709 2423 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 20:39:47.304444 kubelet[2423]: E0213 20:39:47.304218 2423 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 20:39:47.304928 kubelet[2423]: E0213 20:39:47.304884 2423 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 20:39:47.305891 kubelet[2423]: E0213 20:39:47.305515 2423 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"install-cni\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/flannel/flannel:v0.22.0\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/flannel/flannel:v0.22.0\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/flannel/flannel/manifests/sha256:5f83f1243057458e27249157394e3859cf31cc075354af150d497f2ebc8b54db: 429 Too Many Requests - Server message: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit\"" pod="kube-flannel/kube-flannel-ds-pdsbm" podUID="e24b7648-6eb5-485b-97d3-9e8d6d3764bf" Feb 13 20:39:47.541477 kubelet[2423]: E0213 20:39:47.541434 2423 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 20:39:48.113124 update_engine[1420]: I20250213 20:39:48.113039 1420 update_attempter.cc:509] Updating boot flags...
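[Annotation, not log output] Every flannel pull in this log dies the same way: Docker Hub answers the manifest request with 429 toomanyrequests, i.e. the node's anonymous per-IP pull quota is exhausted, and kubelet retries with growing backoff (pull attempts at 20:39:45, 20:40:00, 20:40:25, 20:41:15). Docker publishes a rate-limit preview endpoint for inspecting that quota; the Python sketch below is an illustration added alongside the log, not output from this host, and the endpoint and header names follow Docker's public documentation at the time of writing.

import json
import urllib.request

# Anonymous token scoped to Docker's rate-limit preview repository. Checking
# this repo reports the same per-IP quota the anonymous pulls above count against.
TOKEN_URL = ("https://auth.docker.io/token"
             "?service=registry.docker.io"
             "&scope=repository:ratelimitpreview/test:pull")
CHECK_URL = "https://registry-1.docker.io/v2/ratelimitpreview/test/manifests/latest"

def remaining_pulls() -> None:
    with urllib.request.urlopen(TOKEN_URL) as resp:
        token = json.load(resp)["token"]
    # A HEAD request reads the quota headers without consuming a pull.
    req = urllib.request.Request(CHECK_URL, method="HEAD")
    req.add_header("Authorization", "Bearer " + token)
    with urllib.request.urlopen(req) as resp:
        # Values look like "100;w=21600": 100 pulls per 21600-second window.
        print("ratelimit-limit:    ", resp.headers.get("ratelimit-limit"))
        print("ratelimit-remaining:", resp.headers.get("ratelimit-remaining"))

if __name__ == "__main__":
    remaining_pulls()

The usual remedies are authenticating the node's pulls (registry credentials in the container runtime, or an imagePullSecret on the DaemonSet) or fronting Docker Hub with a mirror; without one of those, the ErrImagePull/ImagePullBackOff cycle below simply repeats.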
Feb 13 20:39:48.133931 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 40 scanned by (udev-worker) (2813) Feb 13 20:39:48.180338 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 40 scanned by (udev-worker) (2811) Feb 13 20:39:48.306580 kubelet[2423]: E0213 20:39:48.306537 2423 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 20:40:00.266379 kubelet[2423]: E0213 20:40:00.266332 2423 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 20:40:00.267809 containerd[1426]: time="2025-02-13T20:40:00.267755254Z" level=info msg="PullImage \"docker.io/flannel/flannel:v0.22.0\"" Feb 13 20:40:01.397399 containerd[1426]: time="2025-02-13T20:40:01.397334893Z" level=error msg="PullImage \"docker.io/flannel/flannel:v0.22.0\" failed" error="failed to pull and unpack image \"docker.io/flannel/flannel:v0.22.0\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/flannel/flannel/manifests/sha256:5f83f1243057458e27249157394e3859cf31cc075354af150d497f2ebc8b54db: 429 Too Many Requests - Server message: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit" Feb 13 20:40:01.397846 containerd[1426]: time="2025-02-13T20:40:01.397413909Z" level=info msg="stop pulling image docker.io/flannel/flannel:v0.22.0: active requests=0, bytes read=11052" Feb 13 20:40:01.397881 kubelet[2423]: E0213 20:40:01.397560 2423 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = failed to pull and unpack image \"docker.io/flannel/flannel:v0.22.0\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/flannel/flannel/manifests/sha256:5f83f1243057458e27249157394e3859cf31cc075354af150d497f2ebc8b54db: 429 Too Many Requests - Server message: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit" image="docker.io/flannel/flannel:v0.22.0" Feb 13 20:40:01.397881 kubelet[2423]: E0213 20:40:01.397611 2423 kuberuntime_image.go:55] "Failed to pull image" err="failed to pull and unpack image \"docker.io/flannel/flannel:v0.22.0\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/flannel/flannel/manifests/sha256:5f83f1243057458e27249157394e3859cf31cc075354af150d497f2ebc8b54db: 429 Too Many Requests - Server message: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit" image="docker.io/flannel/flannel:v0.22.0"
Feb 13 20:40:01.398959 kubelet[2423]: E0213 20:40:01.397697 2423 kuberuntime_manager.go:1341] "Unhandled Error" err="init container &Container{Name:install-cni,Image:docker.io/flannel/flannel:v0.22.0,Command:[cp],Args:[-f /etc/kube-flannel/cni-conf.json /etc/cni/net.d/10-flannel.conflist],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:cni,ReadOnly:false,MountPath:/etc/cni/net.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:flannel-cfg,ReadOnly:false,MountPath:/etc/kube-flannel/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-bk9w5,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod kube-flannel-ds-pdsbm_kube-flannel(e24b7648-6eb5-485b-97d3-9e8d6d3764bf): ErrImagePull: failed to pull and unpack image \"docker.io/flannel/flannel:v0.22.0\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/flannel/flannel/manifests/sha256:5f83f1243057458e27249157394e3859cf31cc075354af150d497f2ebc8b54db: 429 Too Many Requests - Server message: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit" logger="UnhandledError" Feb 13 20:40:01.399105 kubelet[2423]: E0213 20:40:01.399061 2423 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"install-cni\" with ErrImagePull: \"failed to pull and unpack image \\\"docker.io/flannel/flannel:v0.22.0\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/flannel/flannel/manifests/sha256:5f83f1243057458e27249157394e3859cf31cc075354af150d497f2ebc8b54db: 429 Too Many Requests - Server message: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit\"" pod="kube-flannel/kube-flannel-ds-pdsbm" podUID="e24b7648-6eb5-485b-97d3-9e8d6d3764bf" Feb 13 20:40:04.567238 systemd[1]: Started sshd@5-10.0.0.8:22-10.0.0.1:40694.service - OpenSSH per-connection server daemon (10.0.0.1:40694). Feb 13 20:40:04.602686 sshd[2822]: Accepted publickey for core from 10.0.0.1 port 40694 ssh2: RSA SHA256:ijJ+MGEzuaViQlbMobaPQLogVrdZNUxIS0COzLzMHAQ Feb 13 20:40:04.604036 sshd[2822]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 20:40:04.608598 systemd-logind[1417]: New session 6 of user core. Feb 13 20:40:04.623080 systemd[1]: Started session-6.scope - Session 6 of User core. Feb 13 20:40:04.736082 sshd[2822]: pam_unix(sshd:session): session closed for user core Feb 13 20:40:04.739410 systemd[1]: sshd@5-10.0.0.8:22-10.0.0.1:40694.service: Deactivated successfully.
Feb 13 20:40:04.741524 systemd[1]: session-6.scope: Deactivated successfully. Feb 13 20:40:04.742166 systemd-logind[1417]: Session 6 logged out. Waiting for processes to exit. Feb 13 20:40:04.743119 systemd-logind[1417]: Removed session 6. Feb 13 20:40:09.747560 systemd[1]: Started sshd@6-10.0.0.8:22-10.0.0.1:40706.service - OpenSSH per-connection server daemon (10.0.0.1:40706). Feb 13 20:40:09.779565 sshd[2838]: Accepted publickey for core from 10.0.0.1 port 40706 ssh2: RSA SHA256:ijJ+MGEzuaViQlbMobaPQLogVrdZNUxIS0COzLzMHAQ Feb 13 20:40:09.780827 sshd[2838]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 20:40:09.784885 systemd-logind[1417]: New session 7 of user core. Feb 13 20:40:09.793041 systemd[1]: Started session-7.scope - Session 7 of User core. Feb 13 20:40:09.898070 sshd[2838]: pam_unix(sshd:session): session closed for user core Feb 13 20:40:09.901265 systemd[1]: sshd@6-10.0.0.8:22-10.0.0.1:40706.service: Deactivated successfully. Feb 13 20:40:09.902812 systemd[1]: session-7.scope: Deactivated successfully. Feb 13 20:40:09.903420 systemd-logind[1417]: Session 7 logged out. Waiting for processes to exit. Feb 13 20:40:09.904155 systemd-logind[1417]: Removed session 7. Feb 13 20:40:14.265540 kubelet[2423]: E0213 20:40:14.265469 2423 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 20:40:14.266573 kubelet[2423]: E0213 20:40:14.266350 2423 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"install-cni\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/flannel/flannel:v0.22.0\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/flannel/flannel:v0.22.0\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/flannel/flannel/manifests/sha256:5f83f1243057458e27249157394e3859cf31cc075354af150d497f2ebc8b54db: 429 Too Many Requests - Server message: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit\"" pod="kube-flannel/kube-flannel-ds-pdsbm" podUID="e24b7648-6eb5-485b-97d3-9e8d6d3764bf" Feb 13 20:40:14.909538 systemd[1]: Started sshd@7-10.0.0.8:22-10.0.0.1:55100.service - OpenSSH per-connection server daemon (10.0.0.1:55100). Feb 13 20:40:14.941168 sshd[2856]: Accepted publickey for core from 10.0.0.1 port 55100 ssh2: RSA SHA256:ijJ+MGEzuaViQlbMobaPQLogVrdZNUxIS0COzLzMHAQ Feb 13 20:40:14.942327 sshd[2856]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 20:40:14.945824 systemd-logind[1417]: New session 8 of user core. Feb 13 20:40:14.956094 systemd[1]: Started session-8.scope - Session 8 of User core. Feb 13 20:40:15.066282 sshd[2856]: pam_unix(sshd:session): session closed for user core Feb 13 20:40:15.069213 systemd[1]: sshd@7-10.0.0.8:22-10.0.0.1:55100.service: Deactivated successfully. Feb 13 20:40:15.072331 systemd[1]: session-8.scope: Deactivated successfully. Feb 13 20:40:15.073015 systemd-logind[1417]: Session 8 logged out. Waiting for processes to exit. Feb 13 20:40:15.074142 systemd-logind[1417]: Removed session 8. Feb 13 20:40:20.076418 systemd[1]: Started sshd@8-10.0.0.8:22-10.0.0.1:55104.service - OpenSSH per-connection server daemon (10.0.0.1:55104). 
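[Annotation, not log output] The recurring dns.go:153 "Nameserver limits exceeded" error is kubelet warning that the host's /etc/resolv.conf lists more nameservers than the resolver limit of three, so it applies only the first three (1.1.1.1 1.0.0.1 8.8.8.8) and drops the rest. Below is a minimal sketch of that check, for illustration only; the fourth nameserver in the example is hypothetical, since the dropped entries are not shown in the log.

MAX_NAMESERVERS = 3  # glibc honours at most 3 entries (MAXNS); kubelet applies the same cap

def applied_nameservers(resolv_conf: str) -> list[str]:
    # Collect "nameserver <addr>" entries in file order.
    servers = []
    for line in resolv_conf.splitlines():
        fields = line.split()
        if len(fields) >= 2 and fields[0] == "nameserver":
            servers.append(fields[1])
    if len(servers) > MAX_NAMESERVERS:
        kept = servers[:MAX_NAMESERVERS]
        print("Nameserver limits exceeded; applied nameserver line is:",
              " ".join(kept))
        return kept
    return servers

# A four-entry resolv.conf (last entry hypothetical) reproduces the warning:
applied_nameservers("nameserver 1.1.1.1\n"
                    "nameserver 1.0.0.1\n"
                    "nameserver 8.8.8.8\n"
                    "nameserver 9.9.9.9\n")

Trimming the host resolv.conf to three nameservers or fewer silences the warning; resolution still works with the first three either way, which is why the error repeats throughout the log without otherwise affecting the node.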
Feb 13 20:40:20.107957 sshd[2871]: Accepted publickey for core from 10.0.0.1 port 55104 ssh2: RSA SHA256:ijJ+MGEzuaViQlbMobaPQLogVrdZNUxIS0COzLzMHAQ Feb 13 20:40:20.109283 sshd[2871]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 20:40:20.113312 systemd-logind[1417]: New session 9 of user core. Feb 13 20:40:20.122049 systemd[1]: Started session-9.scope - Session 9 of User core. Feb 13 20:40:20.225997 sshd[2871]: pam_unix(sshd:session): session closed for user core Feb 13 20:40:20.229466 systemd[1]: sshd@8-10.0.0.8:22-10.0.0.1:55104.service: Deactivated successfully. Feb 13 20:40:20.231313 systemd[1]: session-9.scope: Deactivated successfully. Feb 13 20:40:20.232048 systemd-logind[1417]: Session 9 logged out. Waiting for processes to exit. Feb 13 20:40:20.232793 systemd-logind[1417]: Removed session 9. Feb 13 20:40:25.236338 systemd[1]: Started sshd@9-10.0.0.8:22-10.0.0.1:59406.service - OpenSSH per-connection server daemon (10.0.0.1:59406). Feb 13 20:40:25.266129 kubelet[2423]: E0213 20:40:25.266087 2423 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 20:40:25.268181 containerd[1426]: time="2025-02-13T20:40:25.268034453Z" level=info msg="PullImage \"docker.io/flannel/flannel:v0.22.0\"" Feb 13 20:40:25.269591 sshd[2886]: Accepted publickey for core from 10.0.0.1 port 59406 ssh2: RSA SHA256:ijJ+MGEzuaViQlbMobaPQLogVrdZNUxIS0COzLzMHAQ Feb 13 20:40:25.270790 sshd[2886]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 20:40:25.275950 systemd-logind[1417]: New session 10 of user core. Feb 13 20:40:25.286056 systemd[1]: Started session-10.scope - Session 10 of User core. Feb 13 20:40:25.393940 sshd[2886]: pam_unix(sshd:session): session closed for user core Feb 13 20:40:25.397362 systemd[1]: sshd@9-10.0.0.8:22-10.0.0.1:59406.service: Deactivated successfully. Feb 13 20:40:25.399037 systemd[1]: session-10.scope: Deactivated successfully. Feb 13 20:40:25.399674 systemd-logind[1417]: Session 10 logged out. Waiting for processes to exit. Feb 13 20:40:25.400648 systemd-logind[1417]: Removed session 10. Feb 13 20:40:26.370547 containerd[1426]: time="2025-02-13T20:40:26.370491178Z" level=error msg="PullImage \"docker.io/flannel/flannel:v0.22.0\" failed" error="failed to pull and unpack image \"docker.io/flannel/flannel:v0.22.0\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/flannel/flannel/manifests/sha256:5f83f1243057458e27249157394e3859cf31cc075354af150d497f2ebc8b54db: 429 Too Many Requests - Server message: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit"
Feb 13 20:40:26.371053 containerd[1426]: time="2025-02-13T20:40:26.370575187Z" level=info msg="stop pulling image docker.io/flannel/flannel:v0.22.0: active requests=0, bytes read=11053" Feb 13 20:40:26.371094 kubelet[2423]: E0213 20:40:26.370657 2423 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = failed to pull and unpack image \"docker.io/flannel/flannel:v0.22.0\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/flannel/flannel/manifests/sha256:5f83f1243057458e27249157394e3859cf31cc075354af150d497f2ebc8b54db: 429 Too Many Requests - Server message: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit" image="docker.io/flannel/flannel:v0.22.0" Feb 13 20:40:26.371094 kubelet[2423]: E0213 20:40:26.370695 2423 kuberuntime_image.go:55] "Failed to pull image" err="failed to pull and unpack image \"docker.io/flannel/flannel:v0.22.0\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/flannel/flannel/manifests/sha256:5f83f1243057458e27249157394e3859cf31cc075354af150d497f2ebc8b54db: 429 Too Many Requests - Server message: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit" image="docker.io/flannel/flannel:v0.22.0" Feb 13 20:40:26.371369 kubelet[2423]: E0213 20:40:26.370779 2423 kuberuntime_manager.go:1341] "Unhandled Error" err="init container &Container{Name:install-cni,Image:docker.io/flannel/flannel:v0.22.0,Command:[cp],Args:[-f /etc/kube-flannel/cni-conf.json /etc/cni/net.d/10-flannel.conflist],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:cni,ReadOnly:false,MountPath:/etc/cni/net.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:flannel-cfg,ReadOnly:false,MountPath:/etc/kube-flannel/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-bk9w5,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod kube-flannel-ds-pdsbm_kube-flannel(e24b7648-6eb5-485b-97d3-9e8d6d3764bf): ErrImagePull: failed to pull and unpack image \"docker.io/flannel/flannel:v0.22.0\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/flannel/flannel/manifests/sha256:5f83f1243057458e27249157394e3859cf31cc075354af150d497f2ebc8b54db: 429 Too Many Requests - Server message: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit" logger="UnhandledError"
Feb 13 20:40:26.372810 kubelet[2423]: E0213 20:40:26.372762 2423 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"install-cni\" with ErrImagePull: \"failed to pull and unpack image \\\"docker.io/flannel/flannel:v0.22.0\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/flannel/flannel/manifests/sha256:5f83f1243057458e27249157394e3859cf31cc075354af150d497f2ebc8b54db: 429 Too Many Requests - Server message: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit\"" pod="kube-flannel/kube-flannel-ds-pdsbm" podUID="e24b7648-6eb5-485b-97d3-9e8d6d3764bf" Feb 13 20:40:30.405488 systemd[1]: Started sshd@10-10.0.0.8:22-10.0.0.1:59410.service - OpenSSH per-connection server daemon (10.0.0.1:59410). Feb 13 20:40:30.436694 sshd[2901]: Accepted publickey for core from 10.0.0.1 port 59410 ssh2: RSA SHA256:ijJ+MGEzuaViQlbMobaPQLogVrdZNUxIS0COzLzMHAQ Feb 13 20:40:30.437835 sshd[2901]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 20:40:30.441639 systemd-logind[1417]: New session 11 of user core. Feb 13 20:40:30.451027 systemd[1]: Started session-11.scope - Session 11 of User core. Feb 13 20:40:30.554094 sshd[2901]: pam_unix(sshd:session): session closed for user core Feb 13 20:40:30.557213 systemd[1]: sshd@10-10.0.0.8:22-10.0.0.1:59410.service: Deactivated successfully. Feb 13 20:40:30.558878 systemd[1]: session-11.scope: Deactivated successfully. Feb 13 20:40:30.559481 systemd-logind[1417]: Session 11 logged out. Waiting for processes to exit. Feb 13 20:40:30.560275 systemd-logind[1417]: Removed session 11. Feb 13 20:40:35.568428 systemd[1]: Started sshd@11-10.0.0.8:22-10.0.0.1:48986.service - OpenSSH per-connection server daemon (10.0.0.1:48986). Feb 13 20:40:35.600319 sshd[2916]: Accepted publickey for core from 10.0.0.1 port 48986 ssh2: RSA SHA256:ijJ+MGEzuaViQlbMobaPQLogVrdZNUxIS0COzLzMHAQ Feb 13 20:40:35.601433 sshd[2916]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 20:40:35.604956 systemd-logind[1417]: New session 12 of user core. Feb 13 20:40:35.616108 systemd[1]: Started session-12.scope - Session 12 of User core. Feb 13 20:40:35.720530 sshd[2916]: pam_unix(sshd:session): session closed for user core Feb 13 20:40:35.724775 systemd[1]: sshd@11-10.0.0.8:22-10.0.0.1:48986.service: Deactivated successfully. Feb 13 20:40:35.727114 systemd[1]: session-12.scope: Deactivated successfully. Feb 13 20:40:35.728110 systemd-logind[1417]: Session 12 logged out. Waiting for processes to exit. Feb 13 20:40:35.729010 systemd-logind[1417]: Removed session 12.
Feb 13 20:40:37.266604 kubelet[2423]: E0213 20:40:37.266544 2423 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 20:40:37.267817 kubelet[2423]: E0213 20:40:37.267770 2423 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"install-cni\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/flannel/flannel:v0.22.0\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/flannel/flannel:v0.22.0\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/flannel/flannel/manifests/sha256:5f83f1243057458e27249157394e3859cf31cc075354af150d497f2ebc8b54db: 429 Too Many Requests - Server message: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit\"" pod="kube-flannel/kube-flannel-ds-pdsbm" podUID="e24b7648-6eb5-485b-97d3-9e8d6d3764bf" Feb 13 20:40:40.733534 systemd[1]: Started sshd@12-10.0.0.8:22-10.0.0.1:48998.service - OpenSSH per-connection server daemon (10.0.0.1:48998). Feb 13 20:40:40.765718 sshd[2933]: Accepted publickey for core from 10.0.0.1 port 48998 ssh2: RSA SHA256:ijJ+MGEzuaViQlbMobaPQLogVrdZNUxIS0COzLzMHAQ Feb 13 20:40:40.766874 sshd[2933]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 20:40:40.770398 systemd-logind[1417]: New session 13 of user core. Feb 13 20:40:40.779062 systemd[1]: Started session-13.scope - Session 13 of User core. Feb 13 20:40:40.884453 sshd[2933]: pam_unix(sshd:session): session closed for user core Feb 13 20:40:40.887519 systemd[1]: sshd@12-10.0.0.8:22-10.0.0.1:48998.service: Deactivated successfully. Feb 13 20:40:40.889925 systemd[1]: session-13.scope: Deactivated successfully. Feb 13 20:40:40.890497 systemd-logind[1417]: Session 13 logged out. Waiting for processes to exit. Feb 13 20:40:40.891336 systemd-logind[1417]: Removed session 13. Feb 13 20:40:45.895364 systemd[1]: Started sshd@13-10.0.0.8:22-10.0.0.1:48204.service - OpenSSH per-connection server daemon (10.0.0.1:48204). Feb 13 20:40:45.927193 sshd[2950]: Accepted publickey for core from 10.0.0.1 port 48204 ssh2: RSA SHA256:ijJ+MGEzuaViQlbMobaPQLogVrdZNUxIS0COzLzMHAQ Feb 13 20:40:45.928469 sshd[2950]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 20:40:45.931935 systemd-logind[1417]: New session 14 of user core. Feb 13 20:40:45.938096 systemd[1]: Started session-14.scope - Session 14 of User core. Feb 13 20:40:46.040861 sshd[2950]: pam_unix(sshd:session): session closed for user core Feb 13 20:40:46.044036 systemd[1]: sshd@13-10.0.0.8:22-10.0.0.1:48204.service: Deactivated successfully. Feb 13 20:40:46.046272 systemd[1]: session-14.scope: Deactivated successfully. Feb 13 20:40:46.047020 systemd-logind[1417]: Session 14 logged out. Waiting for processes to exit. Feb 13 20:40:46.048113 systemd-logind[1417]: Removed session 14. 
Feb 13 20:40:50.267088 kubelet[2423]: E0213 20:40:50.266670 2423 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 20:40:50.267554 kubelet[2423]: E0213 20:40:50.267431 2423 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"install-cni\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/flannel/flannel:v0.22.0\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/flannel/flannel:v0.22.0\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/flannel/flannel/manifests/sha256:5f83f1243057458e27249157394e3859cf31cc075354af150d497f2ebc8b54db: 429 Too Many Requests - Server message: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit\"" pod="kube-flannel/kube-flannel-ds-pdsbm" podUID="e24b7648-6eb5-485b-97d3-9e8d6d3764bf" Feb 13 20:40:51.051491 systemd[1]: Started sshd@14-10.0.0.8:22-10.0.0.1:48216.service - OpenSSH per-connection server daemon (10.0.0.1:48216). Feb 13 20:40:51.083486 sshd[2965]: Accepted publickey for core from 10.0.0.1 port 48216 ssh2: RSA SHA256:ijJ+MGEzuaViQlbMobaPQLogVrdZNUxIS0COzLzMHAQ Feb 13 20:40:51.084766 sshd[2965]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 20:40:51.089753 systemd-logind[1417]: New session 15 of user core. Feb 13 20:40:51.104089 systemd[1]: Started session-15.scope - Session 15 of User core. Feb 13 20:40:51.215008 sshd[2965]: pam_unix(sshd:session): session closed for user core Feb 13 20:40:51.217728 systemd[1]: sshd@14-10.0.0.8:22-10.0.0.1:48216.service: Deactivated successfully. Feb 13 20:40:51.219299 systemd[1]: session-15.scope: Deactivated successfully. Feb 13 20:40:51.220559 systemd-logind[1417]: Session 15 logged out. Waiting for processes to exit. Feb 13 20:40:51.221448 systemd-logind[1417]: Removed session 15. Feb 13 20:40:56.229418 systemd[1]: Started sshd@15-10.0.0.8:22-10.0.0.1:42334.service - OpenSSH per-connection server daemon (10.0.0.1:42334). Feb 13 20:40:56.262278 sshd[2980]: Accepted publickey for core from 10.0.0.1 port 42334 ssh2: RSA SHA256:ijJ+MGEzuaViQlbMobaPQLogVrdZNUxIS0COzLzMHAQ Feb 13 20:40:56.263480 sshd[2980]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 20:40:56.267708 systemd-logind[1417]: New session 16 of user core. Feb 13 20:40:56.276117 systemd[1]: Started session-16.scope - Session 16 of User core. Feb 13 20:40:56.387160 sshd[2980]: pam_unix(sshd:session): session closed for user core Feb 13 20:40:56.391030 systemd[1]: sshd@15-10.0.0.8:22-10.0.0.1:42334.service: Deactivated successfully. Feb 13 20:40:56.392739 systemd[1]: session-16.scope: Deactivated successfully. Feb 13 20:40:56.394341 systemd-logind[1417]: Session 16 logged out. Waiting for processes to exit. Feb 13 20:40:56.395117 systemd-logind[1417]: Removed session 16. 
Feb 13 20:40:58.265851 kubelet[2423]: E0213 20:40:58.265819 2423 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 20:41:00.265579 kubelet[2423]: E0213 20:41:00.265548 2423 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 20:41:01.397121 systemd[1]: Started sshd@16-10.0.0.8:22-10.0.0.1:42348.service - OpenSSH per-connection server daemon (10.0.0.1:42348). Feb 13 20:41:01.428604 sshd[2995]: Accepted publickey for core from 10.0.0.1 port 42348 ssh2: RSA SHA256:ijJ+MGEzuaViQlbMobaPQLogVrdZNUxIS0COzLzMHAQ Feb 13 20:41:01.429848 sshd[2995]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 20:41:01.433345 systemd-logind[1417]: New session 17 of user core. Feb 13 20:41:01.448119 systemd[1]: Started session-17.scope - Session 17 of User core. Feb 13 20:41:01.554027 sshd[2995]: pam_unix(sshd:session): session closed for user core Feb 13 20:41:01.557137 systemd[1]: sshd@16-10.0.0.8:22-10.0.0.1:42348.service: Deactivated successfully. Feb 13 20:41:01.559293 systemd[1]: session-17.scope: Deactivated successfully. Feb 13 20:41:01.559963 systemd-logind[1417]: Session 17 logged out. Waiting for processes to exit. Feb 13 20:41:01.560742 systemd-logind[1417]: Removed session 17. Feb 13 20:41:02.266115 kubelet[2423]: E0213 20:41:02.266074 2423 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 20:41:02.267353 kubelet[2423]: E0213 20:41:02.267083 2423 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"install-cni\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/flannel/flannel:v0.22.0\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/flannel/flannel:v0.22.0\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/flannel/flannel/manifests/sha256:5f83f1243057458e27249157394e3859cf31cc075354af150d497f2ebc8b54db: 429 Too Many Requests - Server message: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit\"" pod="kube-flannel/kube-flannel-ds-pdsbm" podUID="e24b7648-6eb5-485b-97d3-9e8d6d3764bf" Feb 13 20:41:05.267589 kubelet[2423]: E0213 20:41:05.267504 2423 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 20:41:06.567403 systemd[1]: Started sshd@17-10.0.0.8:22-10.0.0.1:33084.service - OpenSSH per-connection server daemon (10.0.0.1:33084). Feb 13 20:41:06.599178 sshd[3011]: Accepted publickey for core from 10.0.0.1 port 33084 ssh2: RSA SHA256:ijJ+MGEzuaViQlbMobaPQLogVrdZNUxIS0COzLzMHAQ Feb 13 20:41:06.600347 sshd[3011]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 20:41:06.604257 systemd-logind[1417]: New session 18 of user core. Feb 13 20:41:06.611056 systemd[1]: Started session-18.scope - Session 18 of User core. Feb 13 20:41:06.715267 sshd[3011]: pam_unix(sshd:session): session closed for user core Feb 13 20:41:06.718463 systemd[1]: sshd@17-10.0.0.8:22-10.0.0.1:33084.service: Deactivated successfully. 
Feb 13 20:41:06.720159 systemd[1]: session-18.scope: Deactivated successfully. Feb 13 20:41:06.720713 systemd-logind[1417]: Session 18 logged out. Waiting for processes to exit. Feb 13 20:41:06.721590 systemd-logind[1417]: Removed session 18. Feb 13 20:41:08.265444 kubelet[2423]: E0213 20:41:08.265414 2423 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 20:41:11.728924 systemd[1]: Started sshd@18-10.0.0.8:22-10.0.0.1:33088.service - OpenSSH per-connection server daemon (10.0.0.1:33088). Feb 13 20:41:11.761646 sshd[3027]: Accepted publickey for core from 10.0.0.1 port 33088 ssh2: RSA SHA256:ijJ+MGEzuaViQlbMobaPQLogVrdZNUxIS0COzLzMHAQ Feb 13 20:41:11.762974 sshd[3027]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 20:41:11.766604 systemd-logind[1417]: New session 19 of user core. Feb 13 20:41:11.777038 systemd[1]: Started session-19.scope - Session 19 of User core. Feb 13 20:41:11.881327 sshd[3027]: pam_unix(sshd:session): session closed for user core Feb 13 20:41:11.885144 systemd[1]: sshd@18-10.0.0.8:22-10.0.0.1:33088.service: Deactivated successfully. Feb 13 20:41:11.886726 systemd[1]: session-19.scope: Deactivated successfully. Feb 13 20:41:11.888227 systemd-logind[1417]: Session 19 logged out. Waiting for processes to exit. Feb 13 20:41:11.889583 systemd-logind[1417]: Removed session 19. Feb 13 20:41:15.266687 kubelet[2423]: E0213 20:41:15.266447 2423 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 20:41:15.268168 containerd[1426]: time="2025-02-13T20:41:15.268030061Z" level=info msg="PullImage \"docker.io/flannel/flannel:v0.22.0\"" Feb 13 20:41:16.385421 containerd[1426]: time="2025-02-13T20:41:16.385354708Z" level=error msg="PullImage \"docker.io/flannel/flannel:v0.22.0\" failed" error="failed to pull and unpack image \"docker.io/flannel/flannel:v0.22.0\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/flannel/flannel/manifests/sha256:5f83f1243057458e27249157394e3859cf31cc075354af150d497f2ebc8b54db: 429 Too Many Requests - Server message: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit" Feb 13 20:41:16.385789 containerd[1426]: time="2025-02-13T20:41:16.385429469Z" level=info msg="stop pulling image docker.io/flannel/flannel:v0.22.0: active requests=0, bytes read=11054" Feb 13 20:41:16.385819 kubelet[2423]: E0213 20:41:16.385574 2423 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = failed to pull and unpack image \"docker.io/flannel/flannel:v0.22.0\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/flannel/flannel/manifests/sha256:5f83f1243057458e27249157394e3859cf31cc075354af150d497f2ebc8b54db: 429 Too Many Requests - Server message: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit" image="docker.io/flannel/flannel:v0.22.0"
Feb 13 20:41:16.385819 kubelet[2423]: E0213 20:41:16.385623 2423 kuberuntime_image.go:55] "Failed to pull image" err="failed to pull and unpack image \"docker.io/flannel/flannel:v0.22.0\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/flannel/flannel/manifests/sha256:5f83f1243057458e27249157394e3859cf31cc075354af150d497f2ebc8b54db: 429 Too Many Requests - Server message: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit" image="docker.io/flannel/flannel:v0.22.0" Feb 13 20:41:16.386117 kubelet[2423]: E0213 20:41:16.385713 2423 kuberuntime_manager.go:1341] "Unhandled Error" err="init container &Container{Name:install-cni,Image:docker.io/flannel/flannel:v0.22.0,Command:[cp],Args:[-f /etc/kube-flannel/cni-conf.json /etc/cni/net.d/10-flannel.conflist],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:cni,ReadOnly:false,MountPath:/etc/cni/net.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:flannel-cfg,ReadOnly:false,MountPath:/etc/kube-flannel/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-bk9w5,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod kube-flannel-ds-pdsbm_kube-flannel(e24b7648-6eb5-485b-97d3-9e8d6d3764bf): ErrImagePull: failed to pull and unpack image \"docker.io/flannel/flannel:v0.22.0\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/flannel/flannel/manifests/sha256:5f83f1243057458e27249157394e3859cf31cc075354af150d497f2ebc8b54db: 429 Too Many Requests - Server message: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit" logger="UnhandledError" Feb 13 20:41:16.386928 kubelet[2423]: E0213 20:41:16.386879 2423 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"install-cni\" with ErrImagePull: \"failed to pull and unpack image \\\"docker.io/flannel/flannel:v0.22.0\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/flannel/flannel/manifests/sha256:5f83f1243057458e27249157394e3859cf31cc075354af150d497f2ebc8b54db: 429 Too Many Requests - Server message: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit\"" pod="kube-flannel/kube-flannel-ds-pdsbm" podUID="e24b7648-6eb5-485b-97d3-9e8d6d3764bf" Feb 13 20:41:16.891927 systemd[1]: Started sshd@19-10.0.0.8:22-10.0.0.1:39274.service - OpenSSH per-connection server daemon (10.0.0.1:39274).
Feb 13 20:41:16.923752 sshd[3045]: Accepted publickey for core from 10.0.0.1 port 39274 ssh2: RSA SHA256:ijJ+MGEzuaViQlbMobaPQLogVrdZNUxIS0COzLzMHAQ Feb 13 20:41:16.927093 sshd[3045]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 20:41:16.930821 systemd-logind[1417]: New session 20 of user core. Feb 13 20:41:16.946043 systemd[1]: Started session-20.scope - Session 20 of User core. Feb 13 20:41:17.052123 sshd[3045]: pam_unix(sshd:session): session closed for user core Feb 13 20:41:17.055226 systemd[1]: sshd@19-10.0.0.8:22-10.0.0.1:39274.service: Deactivated successfully. Feb 13 20:41:17.057024 systemd[1]: session-20.scope: Deactivated successfully. Feb 13 20:41:17.057610 systemd-logind[1417]: Session 20 logged out. Waiting for processes to exit. Feb 13 20:41:17.058393 systemd-logind[1417]: Removed session 20. Feb 13 20:41:22.062483 systemd[1]: Started sshd@20-10.0.0.8:22-10.0.0.1:39288.service - OpenSSH per-connection server daemon (10.0.0.1:39288). Feb 13 20:41:22.095100 sshd[3061]: Accepted publickey for core from 10.0.0.1 port 39288 ssh2: RSA SHA256:ijJ+MGEzuaViQlbMobaPQLogVrdZNUxIS0COzLzMHAQ Feb 13 20:41:22.096330 sshd[3061]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 20:41:22.099770 systemd-logind[1417]: New session 21 of user core. Feb 13 20:41:22.111047 systemd[1]: Started session-21.scope - Session 21 of User core. Feb 13 20:41:22.218606 sshd[3061]: pam_unix(sshd:session): session closed for user core Feb 13 20:41:22.222079 systemd[1]: sshd@20-10.0.0.8:22-10.0.0.1:39288.service: Deactivated successfully. Feb 13 20:41:22.224051 systemd[1]: session-21.scope: Deactivated successfully. Feb 13 20:41:22.224783 systemd-logind[1417]: Session 21 logged out. Waiting for processes to exit. Feb 13 20:41:22.225660 systemd-logind[1417]: Removed session 21. Feb 13 20:41:27.229423 systemd[1]: Started sshd@21-10.0.0.8:22-10.0.0.1:39352.service - OpenSSH per-connection server daemon (10.0.0.1:39352). Feb 13 20:41:27.262373 sshd[3076]: Accepted publickey for core from 10.0.0.1 port 39352 ssh2: RSA SHA256:ijJ+MGEzuaViQlbMobaPQLogVrdZNUxIS0COzLzMHAQ Feb 13 20:41:27.263560 sshd[3076]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 20:41:27.267631 systemd-logind[1417]: New session 22 of user core. Feb 13 20:41:27.277018 systemd[1]: Started session-22.scope - Session 22 of User core. Feb 13 20:41:27.391128 sshd[3076]: pam_unix(sshd:session): session closed for user core Feb 13 20:41:27.394314 systemd[1]: sshd@21-10.0.0.8:22-10.0.0.1:39352.service: Deactivated successfully. Feb 13 20:41:27.397321 systemd[1]: session-22.scope: Deactivated successfully. Feb 13 20:41:27.397940 systemd-logind[1417]: Session 22 logged out. Waiting for processes to exit. Feb 13 20:41:27.398832 systemd-logind[1417]: Removed session 22. 
Feb 13 20:41:29.266655 kubelet[2423]: E0213 20:41:29.266384 2423 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 20:41:29.267556 kubelet[2423]: E0213 20:41:29.267449 2423 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"install-cni\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/flannel/flannel:v0.22.0\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/flannel/flannel:v0.22.0\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/flannel/flannel/manifests/sha256:5f83f1243057458e27249157394e3859cf31cc075354af150d497f2ebc8b54db: 429 Too Many Requests - Server message: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit\"" pod="kube-flannel/kube-flannel-ds-pdsbm" podUID="e24b7648-6eb5-485b-97d3-9e8d6d3764bf" Feb 13 20:41:32.401501 systemd[1]: Started sshd@22-10.0.0.8:22-10.0.0.1:39360.service - OpenSSH per-connection server daemon (10.0.0.1:39360). Feb 13 20:41:32.433792 sshd[3092]: Accepted publickey for core from 10.0.0.1 port 39360 ssh2: RSA SHA256:ijJ+MGEzuaViQlbMobaPQLogVrdZNUxIS0COzLzMHAQ Feb 13 20:41:32.435058 sshd[3092]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 20:41:32.438997 systemd-logind[1417]: New session 23 of user core. Feb 13 20:41:32.450036 systemd[1]: Started session-23.scope - Session 23 of User core. Feb 13 20:41:32.555009 sshd[3092]: pam_unix(sshd:session): session closed for user core Feb 13 20:41:32.558527 systemd[1]: sshd@22-10.0.0.8:22-10.0.0.1:39360.service: Deactivated successfully. Feb 13 20:41:32.561142 systemd[1]: session-23.scope: Deactivated successfully. Feb 13 20:41:32.562020 systemd-logind[1417]: Session 23 logged out. Waiting for processes to exit. Feb 13 20:41:32.562857 systemd-logind[1417]: Removed session 23. Feb 13 20:41:37.302556 kubelet[2423]: E0213 20:41:37.302504 2423 kubelet_node_status.go:461] "Node not becoming ready in time after startup" Feb 13 20:41:37.339716 kubelet[2423]: E0213 20:41:37.339681 2423 kubelet.go:3008] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Feb 13 20:41:37.566430 systemd[1]: Started sshd@23-10.0.0.8:22-10.0.0.1:40812.service - OpenSSH per-connection server daemon (10.0.0.1:40812). Feb 13 20:41:37.598205 sshd[3109]: Accepted publickey for core from 10.0.0.1 port 40812 ssh2: RSA SHA256:ijJ+MGEzuaViQlbMobaPQLogVrdZNUxIS0COzLzMHAQ Feb 13 20:41:37.599412 sshd[3109]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 20:41:37.603364 systemd-logind[1417]: New session 24 of user core. Feb 13 20:41:37.617047 systemd[1]: Started session-24.scope - Session 24 of User core. Feb 13 20:41:37.723330 sshd[3109]: pam_unix(sshd:session): session closed for user core Feb 13 20:41:37.726701 systemd[1]: sshd@23-10.0.0.8:22-10.0.0.1:40812.service: Deactivated successfully. Feb 13 20:41:37.728360 systemd[1]: session-24.scope: Deactivated successfully. Feb 13 20:41:37.730666 systemd-logind[1417]: Session 24 logged out. Waiting for processes to exit. Feb 13 20:41:37.731561 systemd-logind[1417]: Removed session 24. 
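[Annotation, not log output] At 20:41:37 the consequence of the failed pulls surfaces: "Node not becoming ready" with NetworkPluginNotReady, because the install-cni init container has never run and /etc/cni/net.d is still empty. Per the container spec logged above, all install-cni would do is cp -f /etc/kube-flannel/cni-conf.json /etc/cni/net.d/10-flannel.conflist. The sketch below shows the equivalent step; the conflist contents are an assumption modelled on the stock kube-flannel manifest, not read from this cluster.

import json
from pathlib import Path

CNI_CONF_DIR = Path("/etc/cni/net.d")

# Assumed flannel CNI config, modelled on the upstream kube-flannel manifest;
# the cluster's real /etc/kube-flannel/cni-conf.json may differ.
FLANNEL_CONFLIST = {
    "name": "cbr0",
    "cniVersion": "0.3.1",
    "plugins": [
        {"type": "flannel",
         "delegate": {"hairpinMode": True, "isDefaultGateway": True}},
        {"type": "portmap", "capabilities": {"portMappings": True}},
    ],
}

def install_cni_config() -> None:
    # Equivalent of the init container's cp: drop the conflist where the
    # kubelet's CNI plugin looks for network configs (requires root).
    CNI_CONF_DIR.mkdir(parents=True, exist_ok=True)
    target = CNI_CONF_DIR / "10-flannel.conflist"
    target.write_text(json.dumps(FLANNEL_CONFLIST, indent=2) + "\n")
    print("wrote", target)

if __name__ == "__main__":
    install_cni_config()

Once a valid conflist exists there, the "cni plugin not initialized" condition should clear on kubelet's next sync; until the image pull succeeds (or the file is placed by hand), the node stays NotReady, as the rest of this log shows.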
Feb 13 20:41:42.340600 kubelet[2423]: E0213 20:41:42.340565 2423 kubelet.go:3008] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Feb 13 20:41:42.734374 systemd[1]: Started sshd@24-10.0.0.8:22-10.0.0.1:32964.service - OpenSSH per-connection server daemon (10.0.0.1:32964). Feb 13 20:41:42.766170 sshd[3124]: Accepted publickey for core from 10.0.0.1 port 32964 ssh2: RSA SHA256:ijJ+MGEzuaViQlbMobaPQLogVrdZNUxIS0COzLzMHAQ Feb 13 20:41:42.767379 sshd[3124]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 20:41:42.771094 systemd-logind[1417]: New session 25 of user core. Feb 13 20:41:42.787049 systemd[1]: Started session-25.scope - Session 25 of User core. Feb 13 20:41:42.890797 sshd[3124]: pam_unix(sshd:session): session closed for user core Feb 13 20:41:42.894069 systemd[1]: sshd@24-10.0.0.8:22-10.0.0.1:32964.service: Deactivated successfully. Feb 13 20:41:42.896782 systemd[1]: session-25.scope: Deactivated successfully. Feb 13 20:41:42.898005 systemd-logind[1417]: Session 25 logged out. Waiting for processes to exit. Feb 13 20:41:42.899234 systemd-logind[1417]: Removed session 25. Feb 13 20:41:44.266078 kubelet[2423]: E0213 20:41:44.266026 2423 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 20:41:44.266708 kubelet[2423]: E0213 20:41:44.266654 2423 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"install-cni\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/flannel/flannel:v0.22.0\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/flannel/flannel:v0.22.0\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/flannel/flannel/manifests/sha256:5f83f1243057458e27249157394e3859cf31cc075354af150d497f2ebc8b54db: 429 Too Many Requests - Server message: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit\"" pod="kube-flannel/kube-flannel-ds-pdsbm" podUID="e24b7648-6eb5-485b-97d3-9e8d6d3764bf" Feb 13 20:41:47.341682 kubelet[2423]: E0213 20:41:47.341644 2423 kubelet.go:3008] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Feb 13 20:41:47.901855 systemd[1]: Started sshd@25-10.0.0.8:22-10.0.0.1:32966.service - OpenSSH per-connection server daemon (10.0.0.1:32966). Feb 13 20:41:47.934015 sshd[3142]: Accepted publickey for core from 10.0.0.1 port 32966 ssh2: RSA SHA256:ijJ+MGEzuaViQlbMobaPQLogVrdZNUxIS0COzLzMHAQ Feb 13 20:41:47.935237 sshd[3142]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 20:41:47.938747 systemd-logind[1417]: New session 26 of user core. Feb 13 20:41:47.949045 systemd[1]: Started session-26.scope - Session 26 of User core. Feb 13 20:41:48.055132 sshd[3142]: pam_unix(sshd:session): session closed for user core Feb 13 20:41:48.058398 systemd[1]: sshd@25-10.0.0.8:22-10.0.0.1:32966.service: Deactivated successfully. Feb 13 20:41:48.061138 systemd[1]: session-26.scope: Deactivated successfully. Feb 13 20:41:48.061990 systemd-logind[1417]: Session 26 logged out. Waiting for processes to exit. 
Feb 13 20:41:48.062847 systemd-logind[1417]: Removed session 26. Feb 13 20:41:52.342706 kubelet[2423]: E0213 20:41:52.342664 2423 kubelet.go:3008] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Feb 13 20:41:53.065393 systemd[1]: Started sshd@26-10.0.0.8:22-10.0.0.1:35868.service - OpenSSH per-connection server daemon (10.0.0.1:35868). Feb 13 20:41:53.097323 sshd[3157]: Accepted publickey for core from 10.0.0.1 port 35868 ssh2: RSA SHA256:ijJ+MGEzuaViQlbMobaPQLogVrdZNUxIS0COzLzMHAQ Feb 13 20:41:53.098492 sshd[3157]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 20:41:53.102627 systemd-logind[1417]: New session 27 of user core. Feb 13 20:41:53.114082 systemd[1]: Started session-27.scope - Session 27 of User core. Feb 13 20:41:53.225106 sshd[3157]: pam_unix(sshd:session): session closed for user core Feb 13 20:41:53.228311 systemd-logind[1417]: Session 27 logged out. Waiting for processes to exit. Feb 13 20:41:53.229094 systemd[1]: sshd@26-10.0.0.8:22-10.0.0.1:35868.service: Deactivated successfully. Feb 13 20:41:53.230863 systemd[1]: session-27.scope: Deactivated successfully. Feb 13 20:41:53.231867 systemd-logind[1417]: Removed session 27. Feb 13 20:41:57.266289 kubelet[2423]: E0213 20:41:57.265971 2423 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 20:41:57.267223 kubelet[2423]: E0213 20:41:57.267145 2423 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"install-cni\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/flannel/flannel:v0.22.0\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/flannel/flannel:v0.22.0\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/flannel/flannel/manifests/sha256:5f83f1243057458e27249157394e3859cf31cc075354af150d497f2ebc8b54db: 429 Too Many Requests - Server message: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit\"" pod="kube-flannel/kube-flannel-ds-pdsbm" podUID="e24b7648-6eb5-485b-97d3-9e8d6d3764bf" Feb 13 20:41:57.343543 kubelet[2423]: E0213 20:41:57.343515 2423 kubelet.go:3008] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Feb 13 20:41:58.235637 systemd[1]: Started sshd@27-10.0.0.8:22-10.0.0.1:35870.service - OpenSSH per-connection server daemon (10.0.0.1:35870). Feb 13 20:41:58.267792 sshd[3173]: Accepted publickey for core from 10.0.0.1 port 35870 ssh2: RSA SHA256:ijJ+MGEzuaViQlbMobaPQLogVrdZNUxIS0COzLzMHAQ Feb 13 20:41:58.269020 sshd[3173]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 20:41:58.273084 systemd-logind[1417]: New session 28 of user core. Feb 13 20:41:58.282045 systemd[1]: Started session-28.scope - Session 28 of User core. Feb 13 20:41:58.388957 sshd[3173]: pam_unix(sshd:session): session closed for user core Feb 13 20:41:58.392856 systemd[1]: sshd@27-10.0.0.8:22-10.0.0.1:35870.service: Deactivated successfully. Feb 13 20:41:58.394658 systemd[1]: session-28.scope: Deactivated successfully. Feb 13 20:41:58.396391 systemd-logind[1417]: Session 28 logged out. Waiting for processes to exit.
Feb 13 20:41:58.397274 systemd-logind[1417]: Removed session 28. Feb 13 20:42:02.344434 kubelet[2423]: E0213 20:42:02.344345 2423 kubelet.go:3008] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Feb 13 20:42:03.403450 systemd[1]: Started sshd@28-10.0.0.8:22-10.0.0.1:53560.service - OpenSSH per-connection server daemon (10.0.0.1:53560). Feb 13 20:42:03.435792 sshd[3189]: Accepted publickey for core from 10.0.0.1 port 53560 ssh2: RSA SHA256:ijJ+MGEzuaViQlbMobaPQLogVrdZNUxIS0COzLzMHAQ Feb 13 20:42:03.436997 sshd[3189]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 20:42:03.441055 systemd-logind[1417]: New session 29 of user core. Feb 13 20:42:03.456041 systemd[1]: Started session-29.scope - Session 29 of User core. Feb 13 20:42:03.563343 sshd[3189]: pam_unix(sshd:session): session closed for user core Feb 13 20:42:03.567258 systemd[1]: sshd@28-10.0.0.8:22-10.0.0.1:53560.service: Deactivated successfully. Feb 13 20:42:03.569051 systemd[1]: session-29.scope: Deactivated successfully. Feb 13 20:42:03.569582 systemd-logind[1417]: Session 29 logged out. Waiting for processes to exit. Feb 13 20:42:03.570500 systemd-logind[1417]: Removed session 29. Feb 13 20:42:07.344908 kubelet[2423]: E0213 20:42:07.344864 2423 kubelet.go:3008] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Feb 13 20:42:08.265645 kubelet[2423]: E0213 20:42:08.265603 2423 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 20:42:08.266249 kubelet[2423]: E0213 20:42:08.266207 2423 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"install-cni\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/flannel/flannel:v0.22.0\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/flannel/flannel:v0.22.0\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/flannel/flannel/manifests/sha256:5f83f1243057458e27249157394e3859cf31cc075354af150d497f2ebc8b54db: 429 Too Many Requests - Server message: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit\"" pod="kube-flannel/kube-flannel-ds-pdsbm" podUID="e24b7648-6eb5-485b-97d3-9e8d6d3764bf" Feb 13 20:42:08.574636 systemd[1]: Started sshd@29-10.0.0.8:22-10.0.0.1:53574.service - OpenSSH per-connection server daemon (10.0.0.1:53574). Feb 13 20:42:08.606654 sshd[3204]: Accepted publickey for core from 10.0.0.1 port 53574 ssh2: RSA SHA256:ijJ+MGEzuaViQlbMobaPQLogVrdZNUxIS0COzLzMHAQ Feb 13 20:42:08.607841 sshd[3204]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 20:42:08.611928 systemd-logind[1417]: New session 30 of user core. Feb 13 20:42:08.621056 systemd[1]: Started session-30.scope - Session 30 of User core. Feb 13 20:42:08.726843 sshd[3204]: pam_unix(sshd:session): session closed for user core Feb 13 20:42:08.730274 systemd[1]: sshd@29-10.0.0.8:22-10.0.0.1:53574.service: Deactivated successfully. Feb 13 20:42:08.732590 systemd[1]: session-30.scope: Deactivated successfully.
Feb 13 20:42:08.733631 systemd-logind[1417]: Session 30 logged out. Waiting for processes to exit. Feb 13 20:42:08.735502 systemd-logind[1417]: Removed session 30. Feb 13 20:42:12.346465 kubelet[2423]: E0213 20:42:12.346410 2423 kubelet.go:3008] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Feb 13 20:42:13.737556 systemd[1]: Started sshd@30-10.0.0.8:22-10.0.0.1:50936.service - OpenSSH per-connection server daemon (10.0.0.1:50936). Feb 13 20:42:13.771082 sshd[3225]: Accepted publickey for core from 10.0.0.1 port 50936 ssh2: RSA SHA256:ijJ+MGEzuaViQlbMobaPQLogVrdZNUxIS0COzLzMHAQ Feb 13 20:42:13.772302 sshd[3225]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 20:42:13.776438 systemd-logind[1417]: New session 31 of user core. Feb 13 20:42:13.789076 systemd[1]: Started session-31.scope - Session 31 of User core. Feb 13 20:42:13.892143 sshd[3225]: pam_unix(sshd:session): session closed for user core Feb 13 20:42:13.895397 systemd[1]: sshd@30-10.0.0.8:22-10.0.0.1:50936.service: Deactivated successfully. Feb 13 20:42:13.897841 systemd[1]: session-31.scope: Deactivated successfully. Feb 13 20:42:13.898671 systemd-logind[1417]: Session 31 logged out. Waiting for processes to exit. Feb 13 20:42:13.899521 systemd-logind[1417]: Removed session 31. Feb 13 20:42:16.266155 kubelet[2423]: E0213 20:42:16.266082 2423 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 20:42:17.347620 kubelet[2423]: E0213 20:42:17.347586 2423 kubelet.go:3008] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Feb 13 20:42:18.902588 systemd[1]: Started sshd@31-10.0.0.8:22-10.0.0.1:50938.service - OpenSSH per-connection server daemon (10.0.0.1:50938). Feb 13 20:42:18.934693 sshd[3240]: Accepted publickey for core from 10.0.0.1 port 50938 ssh2: RSA SHA256:ijJ+MGEzuaViQlbMobaPQLogVrdZNUxIS0COzLzMHAQ Feb 13 20:42:18.936047 sshd[3240]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 20:42:18.940351 systemd-logind[1417]: New session 32 of user core. Feb 13 20:42:18.950127 systemd[1]: Started session-32.scope - Session 32 of User core. Feb 13 20:42:19.057239 sshd[3240]: pam_unix(sshd:session): session closed for user core Feb 13 20:42:19.060542 systemd[1]: sshd@31-10.0.0.8:22-10.0.0.1:50938.service: Deactivated successfully. Feb 13 20:42:19.062266 systemd[1]: session-32.scope: Deactivated successfully. Feb 13 20:42:19.062905 systemd-logind[1417]: Session 32 logged out. Waiting for processes to exit. Feb 13 20:42:19.063686 systemd-logind[1417]: Removed session 32. 
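The repeated ImagePullBackOff entries above all trace back to one condition: registry-1.docker.io answers the manifest request for docker.io/flannel/flannel:v0.22.0 with 429 toomanyrequests, Docker Hub's anonymous pull rate limit, so the install-cni init container never starts. Docker documents a way to inspect the current quota without spending a pull: fetch an anonymous token scoped to the ratelimitpreview/test probe image, then HEAD its manifest and read the ratelimit-limit / ratelimit-remaining response headers. A minimal Go sketch of that documented check (error handling kept short):

package main

import (
	"encoding/json"
	"fmt"
	"log"
	"net/http"
)

func main() {
	// 1. Anonymous bearer token scoped to Docker's rate-limit probe image.
	resp, err := http.Get("https://auth.docker.io/token?service=registry.docker.io&scope=repository:ratelimitpreview/test:pull")
	if err != nil {
		log.Fatal(err)
	}
	defer resp.Body.Close()
	var tok struct {
		Token string `json:"token"`
	}
	if err := json.NewDecoder(resp.Body).Decode(&tok); err != nil {
		log.Fatal(err)
	}

	// 2. HEAD the manifest; unlike a GET, this does not consume a pull.
	req, _ := http.NewRequest(http.MethodHead,
		"https://registry-1.docker.io/v2/ratelimitpreview/test/manifests/latest", nil)
	req.Header.Set("Authorization", "Bearer "+tok.Token)
	head, err := http.DefaultClient.Do(req)
	if err != nil {
		log.Fatal(err)
	}
	defer head.Body.Close()

	// e.g. "100;w=21600" = 100 pulls per 6-hour window for anonymous clients.
	fmt.Println("ratelimit-limit:    ", head.Header.Get("ratelimit-limit"))
	fmt.Println("ratelimit-remaining:", head.Header.Get("ratelimit-remaining"))
}

The usual ways out are authenticating the pulls (an imagePullSecret on the flannel DaemonSet, or node-level registry credentials) or pointing containerd at a registry mirror; this node keeps retrying anonymously and keeps hitting the same limit.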
Feb 13 20:42:21.266251 kubelet[2423]: E0213 20:42:21.266213 2423 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 20:42:22.266196 kubelet[2423]: E0213 20:42:22.265994 2423 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 20:42:22.266654 kubelet[2423]: E0213 20:42:22.266604 2423 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"install-cni\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/flannel/flannel:v0.22.0\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/flannel/flannel:v0.22.0\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/flannel/flannel/manifests/sha256:5f83f1243057458e27249157394e3859cf31cc075354af150d497f2ebc8b54db: 429 Too Many Requests - Server message: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit\"" pod="kube-flannel/kube-flannel-ds-pdsbm" podUID="e24b7648-6eb5-485b-97d3-9e8d6d3764bf" Feb 13 20:42:22.348741 kubelet[2423]: E0213 20:42:22.348706 2423 kubelet.go:3008] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Feb 13 20:42:24.067427 systemd[1]: Started sshd@32-10.0.0.8:22-10.0.0.1:51298.service - OpenSSH per-connection server daemon (10.0.0.1:51298). Feb 13 20:42:24.099325 sshd[3255]: Accepted publickey for core from 10.0.0.1 port 51298 ssh2: RSA SHA256:ijJ+MGEzuaViQlbMobaPQLogVrdZNUxIS0COzLzMHAQ Feb 13 20:42:24.100532 sshd[3255]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 20:42:24.104056 systemd-logind[1417]: New session 33 of user core. Feb 13 20:42:24.111046 systemd[1]: Started session-33.scope - Session 33 of User core. Feb 13 20:42:24.219880 sshd[3255]: pam_unix(sshd:session): session closed for user core Feb 13 20:42:24.223190 systemd[1]: sshd@32-10.0.0.8:22-10.0.0.1:51298.service: Deactivated successfully. Feb 13 20:42:24.225001 systemd[1]: session-33.scope: Deactivated successfully. Feb 13 20:42:24.225639 systemd-logind[1417]: Session 33 logged out. Waiting for processes to exit. Feb 13 20:42:24.226571 systemd-logind[1417]: Removed session 33. Feb 13 20:42:27.350044 kubelet[2423]: E0213 20:42:27.349988 2423 kubelet.go:3008] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Feb 13 20:42:29.239596 systemd[1]: Started sshd@33-10.0.0.8:22-10.0.0.1:51312.service - OpenSSH per-connection server daemon (10.0.0.1:51312). Feb 13 20:42:29.271068 sshd[3271]: Accepted publickey for core from 10.0.0.1 port 51312 ssh2: RSA SHA256:ijJ+MGEzuaViQlbMobaPQLogVrdZNUxIS0COzLzMHAQ Feb 13 20:42:29.272355 sshd[3271]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 20:42:29.276231 systemd-logind[1417]: New session 34 of user core. Feb 13 20:42:29.283066 systemd[1]: Started session-34.scope - Session 34 of User core. 
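The kubelet.go:3008 "Container runtime network not ready" entry, repeating every five seconds here, will continue until a CNI config shows up under /etc/cni/net.d, and that config is exactly what the failing init container is supposed to install: per the container spec dumped further down in this log, install-cni just runs cp -f /etc/kube-flannel/cni-conf.json /etc/cni/net.d/10-flannel.conflist. A Go sketch of that same copy, to make the dependency concrete (paths taken straight from the spec; in the stock kube-flannel manifest the source file comes from a ConfigMap mount):

package main

import (
	"io"
	"log"
	"os"
)

// Equivalent of the failing init container's `cp -f` step: until
// something writes a conflist into /etc/cni/net.d, kubelet keeps
// reporting "cni plugin not initialized" and the node cannot run
// pods on the cluster network.
func main() {
	src := "/etc/kube-flannel/cni-conf.json"    // flannel-cfg mount
	dst := "/etc/cni/net.d/10-flannel.conflist" // watched by kubelet/containerd

	in, err := os.Open(src)
	if err != nil {
		log.Fatal(err)
	}
	defer in.Close()

	out, err := os.Create(dst) // -f semantics: create or truncate
	if err != nil {
		log.Fatal(err)
	}
	defer out.Close()

	if _, err := io.Copy(out, in); err != nil {
		log.Fatal(err)
	}
}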
Feb 13 20:42:29.388141 sshd[3271]: pam_unix(sshd:session): session closed for user core Feb 13 20:42:29.391197 systemd[1]: sshd@33-10.0.0.8:22-10.0.0.1:51312.service: Deactivated successfully. Feb 13 20:42:29.392847 systemd[1]: session-34.scope: Deactivated successfully. Feb 13 20:42:29.393439 systemd-logind[1417]: Session 34 logged out. Waiting for processes to exit. Feb 13 20:42:29.394206 systemd-logind[1417]: Removed session 34. Feb 13 20:42:30.266295 kubelet[2423]: E0213 20:42:30.266218 2423 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 20:42:32.265832 kubelet[2423]: E0213 20:42:32.265795 2423 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 20:42:32.351364 kubelet[2423]: E0213 20:42:32.351339 2423 kubelet.go:3008] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Feb 13 20:42:34.399436 systemd[1]: Started sshd@34-10.0.0.8:22-10.0.0.1:33164.service - OpenSSH per-connection server daemon (10.0.0.1:33164). Feb 13 20:42:34.431043 sshd[3286]: Accepted publickey for core from 10.0.0.1 port 33164 ssh2: RSA SHA256:ijJ+MGEzuaViQlbMobaPQLogVrdZNUxIS0COzLzMHAQ Feb 13 20:42:34.432236 sshd[3286]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 20:42:34.435947 systemd-logind[1417]: New session 35 of user core. Feb 13 20:42:34.444064 systemd[1]: Started session-35.scope - Session 35 of User core. Feb 13 20:42:34.547774 sshd[3286]: pam_unix(sshd:session): session closed for user core Feb 13 20:42:34.550863 systemd[1]: sshd@34-10.0.0.8:22-10.0.0.1:33164.service: Deactivated successfully. Feb 13 20:42:34.553386 systemd[1]: session-35.scope: Deactivated successfully. Feb 13 20:42:34.554849 systemd-logind[1417]: Session 35 logged out. Waiting for processes to exit. Feb 13 20:42:34.555670 systemd-logind[1417]: Removed session 35. Feb 13 20:42:37.266506 kubelet[2423]: E0213 20:42:37.266477 2423 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 20:42:37.267545 containerd[1426]: time="2025-02-13T20:42:37.267459963Z" level=info msg="PullImage \"docker.io/flannel/flannel:v0.22.0\"" Feb 13 20:42:37.352161 kubelet[2423]: E0213 20:42:37.352129 2423 kubelet.go:3008] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Feb 13 20:42:38.381593 containerd[1426]: time="2025-02-13T20:42:38.381532679Z" level=error msg="PullImage \"docker.io/flannel/flannel:v0.22.0\" failed" error="failed to pull and unpack image \"docker.io/flannel/flannel:v0.22.0\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/flannel/flannel/manifests/sha256:5f83f1243057458e27249157394e3859cf31cc075354af150d497f2ebc8b54db: 429 Too Many Requests - Server message: toomanyrequests: You have reached your pull rate limit. 
You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit" Feb 13 20:42:38.381995 containerd[1426]: time="2025-02-13T20:42:38.381612004Z" level=info msg="stop pulling image docker.io/flannel/flannel:v0.22.0: active requests=0, bytes read=11054" Feb 13 20:42:38.382025 kubelet[2423]: E0213 20:42:38.381736 2423 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = failed to pull and unpack image \"docker.io/flannel/flannel:v0.22.0\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/flannel/flannel/manifests/sha256:5f83f1243057458e27249157394e3859cf31cc075354af150d497f2ebc8b54db: 429 Too Many Requests - Server message: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit" image="docker.io/flannel/flannel:v0.22.0" Feb 13 20:42:38.382025 kubelet[2423]: E0213 20:42:38.381791 2423 kuberuntime_image.go:55] "Failed to pull image" err="failed to pull and unpack image \"docker.io/flannel/flannel:v0.22.0\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/flannel/flannel/manifests/sha256:5f83f1243057458e27249157394e3859cf31cc075354af150d497f2ebc8b54db: 429 Too Many Requests - Server message: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit" image="docker.io/flannel/flannel:v0.22.0" Feb 13 20:42:38.382262 kubelet[2423]: E0213 20:42:38.381886 2423 kuberuntime_manager.go:1341] "Unhandled Error" err="init container &Container{Name:install-cni,Image:docker.io/flannel/flannel:v0.22.0,Command:[cp],Args:[-f /etc/kube-flannel/cni-conf.json /etc/cni/net.d/10-flannel.conflist],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:cni,ReadOnly:false,MountPath:/etc/cni/net.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:flannel-cfg,ReadOnly:false,MountPath:/etc/kube-flannel/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-bk9w5,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod kube-flannel-ds-pdsbm_kube-flannel(e24b7648-6eb5-485b-97d3-9e8d6d3764bf): ErrImagePull: failed to pull and unpack image \"docker.io/flannel/flannel:v0.22.0\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/flannel/flannel/manifests/sha256:5f83f1243057458e27249157394e3859cf31cc075354af150d497f2ebc8b54db: 429 Too Many Requests - Server message: toomanyrequests: You have reached your pull rate limit. 
You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit" logger="UnhandledError" Feb 13 20:42:38.383276 kubelet[2423]: E0213 20:42:38.383238 2423 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"install-cni\" with ErrImagePull: \"failed to pull and unpack image \\\"docker.io/flannel/flannel:v0.22.0\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/flannel/flannel/manifests/sha256:5f83f1243057458e27249157394e3859cf31cc075354af150d497f2ebc8b54db: 429 Too Many Requests - Server message: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit\"" pod="kube-flannel/kube-flannel-ds-pdsbm" podUID="e24b7648-6eb5-485b-97d3-9e8d6d3764bf" Feb 13 20:42:39.558168 systemd[1]: Started sshd@35-10.0.0.8:22-10.0.0.1:33170.service - OpenSSH per-connection server daemon (10.0.0.1:33170). Feb 13 20:42:39.589677 sshd[3303]: Accepted publickey for core from 10.0.0.1 port 33170 ssh2: RSA SHA256:ijJ+MGEzuaViQlbMobaPQLogVrdZNUxIS0COzLzMHAQ Feb 13 20:42:39.590876 sshd[3303]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 20:42:39.594395 systemd-logind[1417]: New session 36 of user core. Feb 13 20:42:39.600038 systemd[1]: Started session-36.scope - Session 36 of User core. Feb 13 20:42:39.702842 sshd[3303]: pam_unix(sshd:session): session closed for user core Feb 13 20:42:39.705841 systemd[1]: sshd@35-10.0.0.8:22-10.0.0.1:33170.service: Deactivated successfully. Feb 13 20:42:39.708177 systemd[1]: session-36.scope: Deactivated successfully. Feb 13 20:42:39.709058 systemd-logind[1417]: Session 36 logged out. Waiting for processes to exit. Feb 13 20:42:39.709830 systemd-logind[1417]: Removed session 36. Feb 13 20:42:42.353704 kubelet[2423]: E0213 20:42:42.353660 2423 kubelet.go:3008] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Feb 13 20:42:44.713449 systemd[1]: Started sshd@36-10.0.0.8:22-10.0.0.1:40804.service - OpenSSH per-connection server daemon (10.0.0.1:40804). Feb 13 20:42:44.745868 sshd[3321]: Accepted publickey for core from 10.0.0.1 port 40804 ssh2: RSA SHA256:ijJ+MGEzuaViQlbMobaPQLogVrdZNUxIS0COzLzMHAQ Feb 13 20:42:44.747131 sshd[3321]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 20:42:44.751074 systemd-logind[1417]: New session 37 of user core. Feb 13 20:42:44.762111 systemd[1]: Started session-37.scope - Session 37 of User core. Feb 13 20:42:44.867051 sshd[3321]: pam_unix(sshd:session): session closed for user core Feb 13 20:42:44.869546 systemd[1]: sshd@36-10.0.0.8:22-10.0.0.1:40804.service: Deactivated successfully. Feb 13 20:42:44.871923 systemd-logind[1417]: Session 37 logged out. Waiting for processes to exit. Feb 13 20:42:44.872467 systemd[1]: session-37.scope: Deactivated successfully. Feb 13 20:42:44.873623 systemd-logind[1417]: Removed session 37. Feb 13 20:42:47.354512 kubelet[2423]: E0213 20:42:47.354452 2423 kubelet.go:3008] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Feb 13 20:42:49.877411 systemd[1]: Started sshd@37-10.0.0.8:22-10.0.0.1:40810.service - OpenSSH per-connection server daemon (10.0.0.1:40810). 
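The UnhandledError entry above prints the failing init container as a raw Go struct, which is hard to scan. Stripped of zero-valued and defaulted fields and written back as the k8s.io/api/core/v1 literal it came from, the same spec is just:

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

// The install-cni init container of kube-flannel-ds-pdsbm, reconstructed
// from the struct dump in the log (defaulted fields omitted).
func main() {
	installCNI := corev1.Container{
		Name:    "install-cni",
		Image:   "docker.io/flannel/flannel:v0.22.0",
		Command: []string{"cp"},
		Args:    []string{"-f", "/etc/kube-flannel/cni-conf.json", "/etc/cni/net.d/10-flannel.conflist"},
		VolumeMounts: []corev1.VolumeMount{
			{Name: "cni", MountPath: "/etc/cni/net.d"},
			{Name: "flannel-cfg", MountPath: "/etc/kube-flannel/"},
			{Name: "kube-api-access-bk9w5", ReadOnly: true, MountPath: "/var/run/secrets/kubernetes.io/serviceaccount"},
		},
		ImagePullPolicy: corev1.PullIfNotPresent,
	}
	fmt.Printf("%+v\n", installCNI)
}

ImagePullPolicy IfNotPresent on a fixed tag means a single successful pull would end the loop for good (side-loading the image into containerd, e.g. with ctr -n k8s.io images import, would also satisfy IfNotPresent); until then every retry has to go back to Docker Hub.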
Feb 13 20:42:49.909308 sshd[3336]: Accepted publickey for core from 10.0.0.1 port 40810 ssh2: RSA SHA256:ijJ+MGEzuaViQlbMobaPQLogVrdZNUxIS0COzLzMHAQ Feb 13 20:42:49.910476 sshd[3336]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 20:42:49.913954 systemd-logind[1417]: New session 38 of user core. Feb 13 20:42:49.929061 systemd[1]: Started session-38.scope - Session 38 of User core. Feb 13 20:42:50.034767 sshd[3336]: pam_unix(sshd:session): session closed for user core Feb 13 20:42:50.037865 systemd[1]: sshd@37-10.0.0.8:22-10.0.0.1:40810.service: Deactivated successfully. Feb 13 20:42:50.039476 systemd[1]: session-38.scope: Deactivated successfully. Feb 13 20:42:50.040751 systemd-logind[1417]: Session 38 logged out. Waiting for processes to exit. Feb 13 20:42:50.041697 systemd-logind[1417]: Removed session 38. Feb 13 20:42:52.355840 kubelet[2423]: E0213 20:42:52.355795 2423 kubelet.go:3008] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Feb 13 20:42:54.265849 kubelet[2423]: E0213 20:42:54.265673 2423 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 20:42:54.266354 kubelet[2423]: E0213 20:42:54.266304 2423 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"install-cni\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/flannel/flannel:v0.22.0\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/flannel/flannel:v0.22.0\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/flannel/flannel/manifests/sha256:5f83f1243057458e27249157394e3859cf31cc075354af150d497f2ebc8b54db: 429 Too Many Requests - Server message: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit\"" pod="kube-flannel/kube-flannel-ds-pdsbm" podUID="e24b7648-6eb5-485b-97d3-9e8d6d3764bf" Feb 13 20:42:55.045544 systemd[1]: Started sshd@38-10.0.0.8:22-10.0.0.1:36982.service - OpenSSH per-connection server daemon (10.0.0.1:36982). Feb 13 20:42:55.076953 sshd[3352]: Accepted publickey for core from 10.0.0.1 port 36982 ssh2: RSA SHA256:ijJ+MGEzuaViQlbMobaPQLogVrdZNUxIS0COzLzMHAQ Feb 13 20:42:55.078220 sshd[3352]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 20:42:55.082152 systemd-logind[1417]: New session 39 of user core. Feb 13 20:42:55.088051 systemd[1]: Started session-39.scope - Session 39 of User core. Feb 13 20:42:55.194124 sshd[3352]: pam_unix(sshd:session): session closed for user core Feb 13 20:42:55.196626 systemd[1]: sshd@38-10.0.0.8:22-10.0.0.1:36982.service: Deactivated successfully. Feb 13 20:42:55.198292 systemd[1]: session-39.scope: Deactivated successfully. Feb 13 20:42:55.199678 systemd-logind[1417]: Session 39 logged out. Waiting for processes to exit. Feb 13 20:42:55.200488 systemd-logind[1417]: Removed session 39. 
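The dns.go:153 warnings interleaved through this log are kubelet trimming the host resolver list: the glibc resolver only honours the first three nameserver entries (MAXNS), so kubelet applies at most three — here 1.1.1.1, 1.0.0.1 and 8.8.8.8 — and notes that the rest of /etc/resolv.conf was dropped. A sketch of the same check (path and limit are the conventional defaults; the real logic lives in kubelet's pkg/kubelet/network/dns):

package main

import (
	"bufio"
	"fmt"
	"log"
	"os"
	"strings"
)

const maxNameservers = 3 // glibc resolver limit (MAXNS)

func main() {
	f, err := os.Open("/etc/resolv.conf")
	if err != nil {
		log.Fatal(err)
	}
	defer f.Close()

	var servers []string
	sc := bufio.NewScanner(f)
	for sc.Scan() {
		fields := strings.Fields(sc.Text())
		if len(fields) >= 2 && fields[0] == "nameserver" {
			servers = append(servers, fields[1])
		}
	}
	if err := sc.Err(); err != nil {
		log.Fatal(err)
	}

	if len(servers) > maxNameservers {
		// kubelet logs this condition as "Nameserver limits exceeded".
		fmt.Printf("applying %v, omitting %v\n",
			servers[:maxNameservers], servers[maxNameservers:])
	}
}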
Feb 13 20:42:57.356670 kubelet[2423]: E0213 20:42:57.356628 2423 kubelet.go:3008] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Feb 13 20:43:00.210528 systemd[1]: Started sshd@39-10.0.0.8:22-10.0.0.1:36988.service - OpenSSH per-connection server daemon (10.0.0.1:36988). Feb 13 20:43:00.242274 sshd[3367]: Accepted publickey for core from 10.0.0.1 port 36988 ssh2: RSA SHA256:ijJ+MGEzuaViQlbMobaPQLogVrdZNUxIS0COzLzMHAQ Feb 13 20:43:00.243424 sshd[3367]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 20:43:00.246835 systemd-logind[1417]: New session 40 of user core. Feb 13 20:43:00.258062 systemd[1]: Started session-40.scope - Session 40 of User core. Feb 13 20:43:00.363167 sshd[3367]: pam_unix(sshd:session): session closed for user core Feb 13 20:43:00.366303 systemd[1]: sshd@39-10.0.0.8:22-10.0.0.1:36988.service: Deactivated successfully. Feb 13 20:43:00.369441 systemd[1]: session-40.scope: Deactivated successfully. Feb 13 20:43:00.370051 systemd-logind[1417]: Session 40 logged out. Waiting for processes to exit. Feb 13 20:43:00.370795 systemd-logind[1417]: Removed session 40. Feb 13 20:43:02.357507 kubelet[2423]: E0213 20:43:02.357458 2423 kubelet.go:3008] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Feb 13 20:43:05.377441 systemd[1]: Started sshd@40-10.0.0.8:22-10.0.0.1:50334.service - OpenSSH per-connection server daemon (10.0.0.1:50334). Feb 13 20:43:05.410435 sshd[3383]: Accepted publickey for core from 10.0.0.1 port 50334 ssh2: RSA SHA256:ijJ+MGEzuaViQlbMobaPQLogVrdZNUxIS0COzLzMHAQ Feb 13 20:43:05.411631 sshd[3383]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 20:43:05.415315 systemd-logind[1417]: New session 41 of user core. Feb 13 20:43:05.424043 systemd[1]: Started session-41.scope - Session 41 of User core. Feb 13 20:43:05.531130 sshd[3383]: pam_unix(sshd:session): session closed for user core Feb 13 20:43:05.541504 systemd[1]: sshd@40-10.0.0.8:22-10.0.0.1:50334.service: Deactivated successfully. Feb 13 20:43:05.543002 systemd[1]: session-41.scope: Deactivated successfully. Feb 13 20:43:05.545067 systemd-logind[1417]: Session 41 logged out. Waiting for processes to exit. Feb 13 20:43:05.547096 systemd[1]: Started sshd@41-10.0.0.8:22-10.0.0.1:50338.service - OpenSSH per-connection server daemon (10.0.0.1:50338). Feb 13 20:43:05.550507 systemd-logind[1417]: Removed session 41. Feb 13 20:43:05.579651 sshd[3399]: Accepted publickey for core from 10.0.0.1 port 50338 ssh2: RSA SHA256:ijJ+MGEzuaViQlbMobaPQLogVrdZNUxIS0COzLzMHAQ Feb 13 20:43:05.581407 sshd[3399]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 20:43:05.585471 systemd-logind[1417]: New session 42 of user core. Feb 13 20:43:05.591046 systemd[1]: Started session-42.scope - Session 42 of User core. Feb 13 20:43:05.734213 sshd[3399]: pam_unix(sshd:session): session closed for user core Feb 13 20:43:05.746276 systemd[1]: sshd@41-10.0.0.8:22-10.0.0.1:50338.service: Deactivated successfully. Feb 13 20:43:05.749817 systemd[1]: session-42.scope: Deactivated successfully. Feb 13 20:43:05.751449 systemd-logind[1417]: Session 42 logged out. Waiting for processes to exit. 
Feb 13 20:43:05.759588 systemd[1]: Started sshd@42-10.0.0.8:22-10.0.0.1:50340.service - OpenSSH per-connection server daemon (10.0.0.1:50340). Feb 13 20:43:05.760666 systemd-logind[1417]: Removed session 42. Feb 13 20:43:05.789073 sshd[3413]: Accepted publickey for core from 10.0.0.1 port 50340 ssh2: RSA SHA256:ijJ+MGEzuaViQlbMobaPQLogVrdZNUxIS0COzLzMHAQ Feb 13 20:43:05.790356 sshd[3413]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 20:43:05.794778 systemd-logind[1417]: New session 43 of user core. Feb 13 20:43:05.809083 systemd[1]: Started session-43.scope - Session 43 of User core. Feb 13 20:43:05.917715 sshd[3413]: pam_unix(sshd:session): session closed for user core Feb 13 20:43:05.920933 systemd[1]: sshd@42-10.0.0.8:22-10.0.0.1:50340.service: Deactivated successfully. Feb 13 20:43:05.922965 systemd[1]: session-43.scope: Deactivated successfully. Feb 13 20:43:05.923556 systemd-logind[1417]: Session 43 logged out. Waiting for processes to exit. Feb 13 20:43:05.924407 systemd-logind[1417]: Removed session 43. Feb 13 20:43:06.265912 kubelet[2423]: E0213 20:43:06.265815 2423 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 20:43:06.266684 kubelet[2423]: E0213 20:43:06.266645 2423 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"install-cni\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/flannel/flannel:v0.22.0\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/flannel/flannel:v0.22.0\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/flannel/flannel/manifests/sha256:5f83f1243057458e27249157394e3859cf31cc075354af150d497f2ebc8b54db: 429 Too Many Requests - Server message: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit\"" pod="kube-flannel/kube-flannel-ds-pdsbm" podUID="e24b7648-6eb5-485b-97d3-9e8d6d3764bf" Feb 13 20:43:07.358112 kubelet[2423]: E0213 20:43:07.358070 2423 kubelet.go:3008] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Feb 13 20:43:10.948159 systemd[1]: Started sshd@43-10.0.0.8:22-10.0.0.1:50350.service - OpenSSH per-connection server daemon (10.0.0.1:50350). Feb 13 20:43:10.977962 sshd[3427]: Accepted publickey for core from 10.0.0.1 port 50350 ssh2: RSA SHA256:ijJ+MGEzuaViQlbMobaPQLogVrdZNUxIS0COzLzMHAQ Feb 13 20:43:10.979268 sshd[3427]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 20:43:10.985019 systemd-logind[1417]: New session 44 of user core. Feb 13 20:43:10.997057 systemd[1]: Started session-44.scope - Session 44 of User core. Feb 13 20:43:11.106475 sshd[3427]: pam_unix(sshd:session): session closed for user core Feb 13 20:43:11.109621 systemd[1]: sshd@43-10.0.0.8:22-10.0.0.1:50350.service: Deactivated successfully. Feb 13 20:43:11.111187 systemd[1]: session-44.scope: Deactivated successfully. Feb 13 20:43:11.111785 systemd-logind[1417]: Session 44 logged out. Waiting for processes to exit. Feb 13 20:43:11.112740 systemd-logind[1417]: Removed session 44. 
Feb 13 20:43:12.359044 kubelet[2423]: E0213 20:43:12.359003 2423 kubelet.go:3008] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Feb 13 20:43:16.117372 systemd[1]: Started sshd@44-10.0.0.8:22-10.0.0.1:38212.service - OpenSSH per-connection server daemon (10.0.0.1:38212). Feb 13 20:43:16.149308 sshd[3444]: Accepted publickey for core from 10.0.0.1 port 38212 ssh2: RSA SHA256:ijJ+MGEzuaViQlbMobaPQLogVrdZNUxIS0COzLzMHAQ Feb 13 20:43:16.150541 sshd[3444]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 20:43:16.154708 systemd-logind[1417]: New session 45 of user core. Feb 13 20:43:16.168117 systemd[1]: Started session-45.scope - Session 45 of User core. Feb 13 20:43:16.275251 sshd[3444]: pam_unix(sshd:session): session closed for user core Feb 13 20:43:16.278811 systemd[1]: sshd@44-10.0.0.8:22-10.0.0.1:38212.service: Deactivated successfully. Feb 13 20:43:16.280482 systemd[1]: session-45.scope: Deactivated successfully. Feb 13 20:43:16.281130 systemd-logind[1417]: Session 45 logged out. Waiting for processes to exit. Feb 13 20:43:16.281894 systemd-logind[1417]: Removed session 45. Feb 13 20:43:17.360064 kubelet[2423]: E0213 20:43:17.360028 2423 kubelet.go:3008] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Feb 13 20:43:21.266468 kubelet[2423]: E0213 20:43:21.266425 2423 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 20:43:21.267218 kubelet[2423]: E0213 20:43:21.267175 2423 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"install-cni\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/flannel/flannel:v0.22.0\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/flannel/flannel:v0.22.0\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/flannel/flannel/manifests/sha256:5f83f1243057458e27249157394e3859cf31cc075354af150d497f2ebc8b54db: 429 Too Many Requests - Server message: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit\"" pod="kube-flannel/kube-flannel-ds-pdsbm" podUID="e24b7648-6eb5-485b-97d3-9e8d6d3764bf" Feb 13 20:43:21.285509 systemd[1]: Started sshd@45-10.0.0.8:22-10.0.0.1:38218.service - OpenSSH per-connection server daemon (10.0.0.1:38218). Feb 13 20:43:21.319556 sshd[3459]: Accepted publickey for core from 10.0.0.1 port 38218 ssh2: RSA SHA256:ijJ+MGEzuaViQlbMobaPQLogVrdZNUxIS0COzLzMHAQ Feb 13 20:43:21.320746 sshd[3459]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 20:43:21.324229 systemd-logind[1417]: New session 46 of user core. Feb 13 20:43:21.334019 systemd[1]: Started session-46.scope - Session 46 of User core. Feb 13 20:43:21.441793 sshd[3459]: pam_unix(sshd:session): session closed for user core Feb 13 20:43:21.445118 systemd[1]: sshd@45-10.0.0.8:22-10.0.0.1:38218.service: Deactivated successfully. Feb 13 20:43:21.446713 systemd[1]: session-46.scope: Deactivated successfully. Feb 13 20:43:21.447959 systemd-logind[1417]: Session 46 logged out. Waiting for processes to exit. 
Feb 13 20:43:21.448745 systemd-logind[1417]: Removed session 46. Feb 13 20:43:22.360868 kubelet[2423]: E0213 20:43:22.360830 2423 kubelet.go:3008] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Feb 13 20:43:26.452433 systemd[1]: Started sshd@46-10.0.0.8:22-10.0.0.1:53816.service - OpenSSH per-connection server daemon (10.0.0.1:53816). Feb 13 20:43:26.483935 sshd[3474]: Accepted publickey for core from 10.0.0.1 port 53816 ssh2: RSA SHA256:ijJ+MGEzuaViQlbMobaPQLogVrdZNUxIS0COzLzMHAQ Feb 13 20:43:26.485436 sshd[3474]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 20:43:26.488919 systemd-logind[1417]: New session 47 of user core. Feb 13 20:43:26.504078 systemd[1]: Started session-47.scope - Session 47 of User core. Feb 13 20:43:26.610299 sshd[3474]: pam_unix(sshd:session): session closed for user core Feb 13 20:43:26.613308 systemd[1]: sshd@46-10.0.0.8:22-10.0.0.1:53816.service: Deactivated successfully. Feb 13 20:43:26.615501 systemd[1]: session-47.scope: Deactivated successfully. Feb 13 20:43:26.616226 systemd-logind[1417]: Session 47 logged out. Waiting for processes to exit. Feb 13 20:43:26.617054 systemd-logind[1417]: Removed session 47. Feb 13 20:43:27.362362 kubelet[2423]: E0213 20:43:27.362314 2423 kubelet.go:3008] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Feb 13 20:43:28.266282 kubelet[2423]: E0213 20:43:28.266231 2423 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 20:43:31.266406 kubelet[2423]: E0213 20:43:31.266357 2423 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 20:43:31.621502 systemd[1]: Started sshd@47-10.0.0.8:22-10.0.0.1:53822.service - OpenSSH per-connection server daemon (10.0.0.1:53822). Feb 13 20:43:31.653286 sshd[3489]: Accepted publickey for core from 10.0.0.1 port 53822 ssh2: RSA SHA256:ijJ+MGEzuaViQlbMobaPQLogVrdZNUxIS0COzLzMHAQ Feb 13 20:43:31.654413 sshd[3489]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 20:43:31.658296 systemd-logind[1417]: New session 48 of user core. Feb 13 20:43:31.674151 systemd[1]: Started session-48.scope - Session 48 of User core. Feb 13 20:43:31.781102 sshd[3489]: pam_unix(sshd:session): session closed for user core Feb 13 20:43:31.783610 systemd[1]: sshd@47-10.0.0.8:22-10.0.0.1:53822.service: Deactivated successfully. Feb 13 20:43:31.785676 systemd[1]: session-48.scope: Deactivated successfully. Feb 13 20:43:31.787233 systemd-logind[1417]: Session 48 logged out. Waiting for processes to exit. Feb 13 20:43:31.788176 systemd-logind[1417]: Removed session 48. 
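Two different cadences are visible in the flannel errors. The pod_workers.go "Error syncing pod" lines fire whenever the pod is re-synced while the pull is still inside its back-off window, which is why they recur every ten to fifteen seconds; actual pull attempts are far rarer — only one containerd PullImage appears in this stretch (20:42:37) — because kubelet backs image pulls off exponentially, doubling from 10s up to a 300s cap (the defaults behind the "Back-off pulling image" message). A toy reproduction of that schedule:

package main

import (
	"fmt"
	"time"
)

func main() {
	const max = 300 * time.Second // kubelet's MaxContainerBackOff
	delay := 10 * time.Second     // initial back-off period

	// Attempts land at t+10s, 30s, 70s, 150s, then settle into
	// 300s (five-minute) spacing once the cap is reached.
	elapsed := time.Duration(0)
	for i := 1; i <= 8; i++ {
		elapsed += delay
		fmt.Printf("pull attempt %d at t+%v\n", i, elapsed)
		delay *= 2
		if delay > max {
			delay = max
		}
	}
}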
Feb 13 20:43:32.266483 kubelet[2423]: E0213 20:43:32.266451 2423 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 20:43:32.267589 kubelet[2423]: E0213 20:43:32.267185 2423 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"install-cni\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/flannel/flannel:v0.22.0\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/flannel/flannel:v0.22.0\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/flannel/flannel/manifests/sha256:5f83f1243057458e27249157394e3859cf31cc075354af150d497f2ebc8b54db: 429 Too Many Requests - Server message: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit\"" pod="kube-flannel/kube-flannel-ds-pdsbm" podUID="e24b7648-6eb5-485b-97d3-9e8d6d3764bf" Feb 13 20:43:32.363401 kubelet[2423]: E0213 20:43:32.363356 2423 kubelet.go:3008] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Feb 13 20:43:36.791459 systemd[1]: Started sshd@48-10.0.0.8:22-10.0.0.1:47078.service - OpenSSH per-connection server daemon (10.0.0.1:47078). Feb 13 20:43:36.822745 sshd[3503]: Accepted publickey for core from 10.0.0.1 port 47078 ssh2: RSA SHA256:ijJ+MGEzuaViQlbMobaPQLogVrdZNUxIS0COzLzMHAQ Feb 13 20:43:36.823860 sshd[3503]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 20:43:36.827212 systemd-logind[1417]: New session 49 of user core. Feb 13 20:43:36.839047 systemd[1]: Started session-49.scope - Session 49 of User core. Feb 13 20:43:36.946004 sshd[3503]: pam_unix(sshd:session): session closed for user core Feb 13 20:43:36.949149 systemd[1]: sshd@48-10.0.0.8:22-10.0.0.1:47078.service: Deactivated successfully. Feb 13 20:43:36.950991 systemd[1]: session-49.scope: Deactivated successfully. Feb 13 20:43:36.951557 systemd-logind[1417]: Session 49 logged out. Waiting for processes to exit. Feb 13 20:43:36.952321 systemd-logind[1417]: Removed session 49. Feb 13 20:43:37.364065 kubelet[2423]: E0213 20:43:37.364029 2423 kubelet.go:3008] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Feb 13 20:43:41.957650 systemd[1]: Started sshd@49-10.0.0.8:22-10.0.0.1:47080.service - OpenSSH per-connection server daemon (10.0.0.1:47080). Feb 13 20:43:41.989348 sshd[3520]: Accepted publickey for core from 10.0.0.1 port 47080 ssh2: RSA SHA256:ijJ+MGEzuaViQlbMobaPQLogVrdZNUxIS0COzLzMHAQ Feb 13 20:43:41.990546 sshd[3520]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 20:43:41.994197 systemd-logind[1417]: New session 50 of user core. Feb 13 20:43:42.006112 systemd[1]: Started session-50.scope - Session 50 of User core. Feb 13 20:43:42.113046 sshd[3520]: pam_unix(sshd:session): session closed for user core Feb 13 20:43:42.116165 systemd[1]: sshd@49-10.0.0.8:22-10.0.0.1:47080.service: Deactivated successfully. Feb 13 20:43:42.118243 systemd[1]: session-50.scope: Deactivated successfully. Feb 13 20:43:42.118961 systemd-logind[1417]: Session 50 logged out. Waiting for processes to exit. 
Feb 13 20:43:42.119765 systemd-logind[1417]: Removed session 50. Feb 13 20:43:42.365673 kubelet[2423]: E0213 20:43:42.365626 2423 kubelet.go:3008] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Feb 13 20:43:45.266196 kubelet[2423]: E0213 20:43:45.266157 2423 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 20:43:47.123345 systemd[1]: Started sshd@50-10.0.0.8:22-10.0.0.1:48402.service - OpenSSH per-connection server daemon (10.0.0.1:48402). Feb 13 20:43:47.155486 sshd[3536]: Accepted publickey for core from 10.0.0.1 port 48402 ssh2: RSA SHA256:ijJ+MGEzuaViQlbMobaPQLogVrdZNUxIS0COzLzMHAQ Feb 13 20:43:47.156730 sshd[3536]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 20:43:47.160183 systemd-logind[1417]: New session 51 of user core. Feb 13 20:43:47.173056 systemd[1]: Started session-51.scope - Session 51 of User core. Feb 13 20:43:47.267417 kubelet[2423]: E0213 20:43:47.267140 2423 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 20:43:47.267731 kubelet[2423]: E0213 20:43:47.267676 2423 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"install-cni\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/flannel/flannel:v0.22.0\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/flannel/flannel:v0.22.0\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/flannel/flannel/manifests/sha256:5f83f1243057458e27249157394e3859cf31cc075354af150d497f2ebc8b54db: 429 Too Many Requests - Server message: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit\"" pod="kube-flannel/kube-flannel-ds-pdsbm" podUID="e24b7648-6eb5-485b-97d3-9e8d6d3764bf" Feb 13 20:43:47.280277 sshd[3536]: pam_unix(sshd:session): session closed for user core Feb 13 20:43:47.283304 systemd[1]: sshd@50-10.0.0.8:22-10.0.0.1:48402.service: Deactivated successfully. Feb 13 20:43:47.285543 systemd[1]: session-51.scope: Deactivated successfully. Feb 13 20:43:47.286211 systemd-logind[1417]: Session 51 logged out. Waiting for processes to exit. Feb 13 20:43:47.286956 systemd-logind[1417]: Removed session 51. Feb 13 20:43:47.366334 kubelet[2423]: E0213 20:43:47.366305 2423 kubelet.go:3008] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Feb 13 20:43:52.295416 systemd[1]: Started sshd@51-10.0.0.8:22-10.0.0.1:48410.service - OpenSSH per-connection server daemon (10.0.0.1:48410). Feb 13 20:43:52.327699 sshd[3551]: Accepted publickey for core from 10.0.0.1 port 48410 ssh2: RSA SHA256:ijJ+MGEzuaViQlbMobaPQLogVrdZNUxIS0COzLzMHAQ Feb 13 20:43:52.328872 sshd[3551]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 20:43:52.332700 systemd-logind[1417]: New session 52 of user core. Feb 13 20:43:52.339040 systemd[1]: Started session-52.scope - Session 52 of User core. 
Feb 13 20:43:52.367423 kubelet[2423]: E0213 20:43:52.367383 2423 kubelet.go:3008] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Feb 13 20:43:52.448283 sshd[3551]: pam_unix(sshd:session): session closed for user core Feb 13 20:43:52.451353 systemd[1]: sshd@51-10.0.0.8:22-10.0.0.1:48410.service: Deactivated successfully. Feb 13 20:43:52.453793 systemd[1]: session-52.scope: Deactivated successfully. Feb 13 20:43:52.454512 systemd-logind[1417]: Session 52 logged out. Waiting for processes to exit. Feb 13 20:43:52.455414 systemd-logind[1417]: Removed session 52. Feb 13 20:43:57.265842 kubelet[2423]: E0213 20:43:57.265756 2423 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 20:43:57.368294 kubelet[2423]: E0213 20:43:57.368227 2423 kubelet.go:3008] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Feb 13 20:43:57.461367 systemd[1]: Started sshd@52-10.0.0.8:22-10.0.0.1:33122.service - OpenSSH per-connection server daemon (10.0.0.1:33122). Feb 13 20:43:57.493057 sshd[3565]: Accepted publickey for core from 10.0.0.1 port 33122 ssh2: RSA SHA256:ijJ+MGEzuaViQlbMobaPQLogVrdZNUxIS0COzLzMHAQ Feb 13 20:43:57.494213 sshd[3565]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 20:43:57.498123 systemd-logind[1417]: New session 53 of user core. Feb 13 20:43:57.508039 systemd[1]: Started session-53.scope - Session 53 of User core. Feb 13 20:43:57.619742 sshd[3565]: pam_unix(sshd:session): session closed for user core Feb 13 20:43:57.622176 systemd[1]: sshd@52-10.0.0.8:22-10.0.0.1:33122.service: Deactivated successfully. Feb 13 20:43:57.623867 systemd[1]: session-53.scope: Deactivated successfully. Feb 13 20:43:57.625322 systemd-logind[1417]: Session 53 logged out. Waiting for processes to exit. Feb 13 20:43:57.626623 systemd-logind[1417]: Removed session 53. Feb 13 20:43:59.266685 kubelet[2423]: E0213 20:43:59.266432 2423 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 20:43:59.267572 kubelet[2423]: E0213 20:43:59.267269 2423 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"install-cni\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/flannel/flannel:v0.22.0\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/flannel/flannel:v0.22.0\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/flannel/flannel/manifests/sha256:5f83f1243057458e27249157394e3859cf31cc075354af150d497f2ebc8b54db: 429 Too Many Requests - Server message: toomanyrequests: You have reached your pull rate limit. 
You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit\"" pod="kube-flannel/kube-flannel-ds-pdsbm" podUID="e24b7648-6eb5-485b-97d3-9e8d6d3764bf" Feb 13 20:44:02.369443 kubelet[2423]: E0213 20:44:02.369400 2423 kubelet.go:3008] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Feb 13 20:44:02.630489 systemd[1]: Started sshd@53-10.0.0.8:22-10.0.0.1:41922.service - OpenSSH per-connection server daemon (10.0.0.1:41922). Feb 13 20:44:02.662435 sshd[3580]: Accepted publickey for core from 10.0.0.1 port 41922 ssh2: RSA SHA256:ijJ+MGEzuaViQlbMobaPQLogVrdZNUxIS0COzLzMHAQ Feb 13 20:44:02.663560 sshd[3580]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 20:44:02.667002 systemd-logind[1417]: New session 54 of user core. Feb 13 20:44:02.673042 systemd[1]: Started session-54.scope - Session 54 of User core. Feb 13 20:44:02.779814 sshd[3580]: pam_unix(sshd:session): session closed for user core Feb 13 20:44:02.783047 systemd[1]: sshd@53-10.0.0.8:22-10.0.0.1:41922.service: Deactivated successfully. Feb 13 20:44:02.785089 systemd[1]: session-54.scope: Deactivated successfully. Feb 13 20:44:02.785680 systemd-logind[1417]: Session 54 logged out. Waiting for processes to exit. Feb 13 20:44:02.786568 systemd-logind[1417]: Removed session 54. Feb 13 20:44:07.370823 kubelet[2423]: E0213 20:44:07.370769 2423 kubelet.go:3008] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Feb 13 20:44:07.791403 systemd[1]: Started sshd@54-10.0.0.8:22-10.0.0.1:41930.service - OpenSSH per-connection server daemon (10.0.0.1:41930). Feb 13 20:44:07.823464 sshd[3595]: Accepted publickey for core from 10.0.0.1 port 41930 ssh2: RSA SHA256:ijJ+MGEzuaViQlbMobaPQLogVrdZNUxIS0COzLzMHAQ Feb 13 20:44:07.824665 sshd[3595]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 20:44:07.828085 systemd-logind[1417]: New session 55 of user core. Feb 13 20:44:07.838024 systemd[1]: Started session-55.scope - Session 55 of User core. Feb 13 20:44:07.945422 sshd[3595]: pam_unix(sshd:session): session closed for user core Feb 13 20:44:07.948697 systemd[1]: sshd@54-10.0.0.8:22-10.0.0.1:41930.service: Deactivated successfully. Feb 13 20:44:07.950265 systemd[1]: session-55.scope: Deactivated successfully. Feb 13 20:44:07.951673 systemd-logind[1417]: Session 55 logged out. Waiting for processes to exit. Feb 13 20:44:07.952589 systemd-logind[1417]: Removed session 55. Feb 13 20:44:12.372454 kubelet[2423]: E0213 20:44:12.372416 2423 kubelet.go:3008] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Feb 13 20:44:12.956439 systemd[1]: Started sshd@55-10.0.0.8:22-10.0.0.1:52860.service - OpenSSH per-connection server daemon (10.0.0.1:52860). Feb 13 20:44:12.988734 sshd[3609]: Accepted publickey for core from 10.0.0.1 port 52860 ssh2: RSA SHA256:ijJ+MGEzuaViQlbMobaPQLogVrdZNUxIS0COzLzMHAQ Feb 13 20:44:12.990131 sshd[3609]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 20:44:12.993552 systemd-logind[1417]: New session 56 of user core. Feb 13 20:44:13.005101 systemd[1]: Started session-56.scope - Session 56 of User core. 
Feb 13 20:44:13.109016 sshd[3609]: pam_unix(sshd:session): session closed for user core Feb 13 20:44:13.112445 systemd[1]: sshd@55-10.0.0.8:22-10.0.0.1:52860.service: Deactivated successfully. Feb 13 20:44:13.114033 systemd[1]: session-56.scope: Deactivated successfully. Feb 13 20:44:13.115281 systemd-logind[1417]: Session 56 logged out. Waiting for processes to exit. Feb 13 20:44:13.116151 systemd-logind[1417]: Removed session 56. Feb 13 20:44:13.266468 kubelet[2423]: E0213 20:44:13.266156 2423 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 20:44:13.267453 kubelet[2423]: E0213 20:44:13.267397 2423 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"install-cni\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/flannel/flannel:v0.22.0\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/flannel/flannel:v0.22.0\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/flannel/flannel/manifests/sha256:5f83f1243057458e27249157394e3859cf31cc075354af150d497f2ebc8b54db: 429 Too Many Requests - Server message: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit\"" pod="kube-flannel/kube-flannel-ds-pdsbm" podUID="e24b7648-6eb5-485b-97d3-9e8d6d3764bf" Feb 13 20:44:17.373459 kubelet[2423]: E0213 20:44:17.373411 2423 kubelet.go:3008] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Feb 13 20:44:18.119491 systemd[1]: Started sshd@56-10.0.0.8:22-10.0.0.1:52872.service - OpenSSH per-connection server daemon (10.0.0.1:52872). Feb 13 20:44:18.151674 sshd[3625]: Accepted publickey for core from 10.0.0.1 port 52872 ssh2: RSA SHA256:ijJ+MGEzuaViQlbMobaPQLogVrdZNUxIS0COzLzMHAQ Feb 13 20:44:18.152813 sshd[3625]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 20:44:18.156463 systemd-logind[1417]: New session 57 of user core. Feb 13 20:44:18.168051 systemd[1]: Started session-57.scope - Session 57 of User core. Feb 13 20:44:18.276037 sshd[3625]: pam_unix(sshd:session): session closed for user core Feb 13 20:44:18.279306 systemd[1]: sshd@56-10.0.0.8:22-10.0.0.1:52872.service: Deactivated successfully. Feb 13 20:44:18.281580 systemd[1]: session-57.scope: Deactivated successfully. Feb 13 20:44:18.282652 systemd-logind[1417]: Session 57 logged out. Waiting for processes to exit. Feb 13 20:44:18.284051 systemd-logind[1417]: Removed session 57. Feb 13 20:44:22.374319 kubelet[2423]: E0213 20:44:22.374285 2423 kubelet.go:3008] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Feb 13 20:44:23.286723 systemd[1]: Started sshd@57-10.0.0.8:22-10.0.0.1:56156.service - OpenSSH per-connection server daemon (10.0.0.1:56156). Feb 13 20:44:23.319541 sshd[3640]: Accepted publickey for core from 10.0.0.1 port 56156 ssh2: RSA SHA256:ijJ+MGEzuaViQlbMobaPQLogVrdZNUxIS0COzLzMHAQ Feb 13 20:44:23.320817 sshd[3640]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 20:44:23.324524 systemd-logind[1417]: New session 58 of user core. 
Feb 13 20:44:23.334027 systemd[1]: Started session-58.scope - Session 58 of User core. Feb 13 20:44:23.440055 sshd[3640]: pam_unix(sshd:session): session closed for user core Feb 13 20:44:23.443253 systemd[1]: sshd@57-10.0.0.8:22-10.0.0.1:56156.service: Deactivated successfully. Feb 13 20:44:23.444999 systemd[1]: session-58.scope: Deactivated successfully. Feb 13 20:44:23.445571 systemd-logind[1417]: Session 58 logged out. Waiting for processes to exit. Feb 13 20:44:23.446349 systemd-logind[1417]: Removed session 58. Feb 13 20:44:27.375220 kubelet[2423]: E0213 20:44:27.375172 2423 kubelet.go:3008] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Feb 13 20:44:28.266375 kubelet[2423]: E0213 20:44:28.266186 2423 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 20:44:28.266882 kubelet[2423]: E0213 20:44:28.266841 2423 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"install-cni\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/flannel/flannel:v0.22.0\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/flannel/flannel:v0.22.0\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/flannel/flannel/manifests/sha256:5f83f1243057458e27249157394e3859cf31cc075354af150d497f2ebc8b54db: 429 Too Many Requests - Server message: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit\"" pod="kube-flannel/kube-flannel-ds-pdsbm" podUID="e24b7648-6eb5-485b-97d3-9e8d6d3764bf" Feb 13 20:44:28.450399 systemd[1]: Started sshd@58-10.0.0.8:22-10.0.0.1:56172.service - OpenSSH per-connection server daemon (10.0.0.1:56172). Feb 13 20:44:28.481996 sshd[3655]: Accepted publickey for core from 10.0.0.1 port 56172 ssh2: RSA SHA256:ijJ+MGEzuaViQlbMobaPQLogVrdZNUxIS0COzLzMHAQ Feb 13 20:44:28.483258 sshd[3655]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 20:44:28.486792 systemd-logind[1417]: New session 59 of user core. Feb 13 20:44:28.493036 systemd[1]: Started session-59.scope - Session 59 of User core. Feb 13 20:44:28.597437 sshd[3655]: pam_unix(sshd:session): session closed for user core Feb 13 20:44:28.600538 systemd[1]: sshd@58-10.0.0.8:22-10.0.0.1:56172.service: Deactivated successfully. Feb 13 20:44:28.602786 systemd[1]: session-59.scope: Deactivated successfully. Feb 13 20:44:28.603639 systemd-logind[1417]: Session 59 logged out. Waiting for processes to exit. Feb 13 20:44:28.604439 systemd-logind[1417]: Removed session 59. Feb 13 20:44:32.376153 kubelet[2423]: E0213 20:44:32.376106 2423 kubelet.go:3008] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Feb 13 20:44:33.608455 systemd[1]: Started sshd@59-10.0.0.8:22-10.0.0.1:49528.service - OpenSSH per-connection server daemon (10.0.0.1:49528). 
Feb 13 20:44:33.640095 sshd[3669]: Accepted publickey for core from 10.0.0.1 port 49528 ssh2: RSA SHA256:ijJ+MGEzuaViQlbMobaPQLogVrdZNUxIS0COzLzMHAQ Feb 13 20:44:33.641279 sshd[3669]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 20:44:33.644598 systemd-logind[1417]: New session 60 of user core. Feb 13 20:44:33.659034 systemd[1]: Started session-60.scope - Session 60 of User core. Feb 13 20:44:33.767880 sshd[3669]: pam_unix(sshd:session): session closed for user core Feb 13 20:44:33.771287 systemd[1]: sshd@59-10.0.0.8:22-10.0.0.1:49528.service: Deactivated successfully. Feb 13 20:44:33.772942 systemd[1]: session-60.scope: Deactivated successfully. Feb 13 20:44:33.773472 systemd-logind[1417]: Session 60 logged out. Waiting for processes to exit. Feb 13 20:44:33.774277 systemd-logind[1417]: Removed session 60. Feb 13 20:44:37.377555 kubelet[2423]: E0213 20:44:37.377523 2423 kubelet.go:3008] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Feb 13 20:44:38.778721 systemd[1]: Started sshd@60-10.0.0.8:22-10.0.0.1:49530.service - OpenSSH per-connection server daemon (10.0.0.1:49530). Feb 13 20:44:38.810741 sshd[3686]: Accepted publickey for core from 10.0.0.1 port 49530 ssh2: RSA SHA256:ijJ+MGEzuaViQlbMobaPQLogVrdZNUxIS0COzLzMHAQ Feb 13 20:44:38.811963 sshd[3686]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 20:44:38.815534 systemd-logind[1417]: New session 61 of user core. Feb 13 20:44:38.825040 systemd[1]: Started session-61.scope - Session 61 of User core. Feb 13 20:44:38.932119 sshd[3686]: pam_unix(sshd:session): session closed for user core Feb 13 20:44:38.935373 systemd[1]: sshd@60-10.0.0.8:22-10.0.0.1:49530.service: Deactivated successfully. Feb 13 20:44:38.937068 systemd[1]: session-61.scope: Deactivated successfully. Feb 13 20:44:38.937622 systemd-logind[1417]: Session 61 logged out. Waiting for processes to exit. Feb 13 20:44:38.938393 systemd-logind[1417]: Removed session 61. Feb 13 20:44:40.266239 kubelet[2423]: E0213 20:44:40.266176 2423 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 20:44:40.267030 kubelet[2423]: E0213 20:44:40.266755 2423 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"install-cni\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/flannel/flannel:v0.22.0\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/flannel/flannel:v0.22.0\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/flannel/flannel/manifests/sha256:5f83f1243057458e27249157394e3859cf31cc075354af150d497f2ebc8b54db: 429 Too Many Requests - Server message: toomanyrequests: You have reached your pull rate limit. 
You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit\"" pod="kube-flannel/kube-flannel-ds-pdsbm" podUID="e24b7648-6eb5-485b-97d3-9e8d6d3764bf" Feb 13 20:44:42.378735 kubelet[2423]: E0213 20:44:42.378698 2423 kubelet.go:3008] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Feb 13 20:44:43.946587 systemd[1]: Started sshd@61-10.0.0.8:22-10.0.0.1:59150.service - OpenSSH per-connection server daemon (10.0.0.1:59150). Feb 13 20:44:43.982401 sshd[3703]: Accepted publickey for core from 10.0.0.1 port 59150 ssh2: RSA SHA256:ijJ+MGEzuaViQlbMobaPQLogVrdZNUxIS0COzLzMHAQ Feb 13 20:44:43.983576 sshd[3703]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 20:44:43.987719 systemd-logind[1417]: New session 62 of user core. Feb 13 20:44:43.997080 systemd[1]: Started session-62.scope - Session 62 of User core. Feb 13 20:44:44.102470 sshd[3703]: pam_unix(sshd:session): session closed for user core Feb 13 20:44:44.105707 systemd[1]: sshd@61-10.0.0.8:22-10.0.0.1:59150.service: Deactivated successfully. Feb 13 20:44:44.107994 systemd[1]: session-62.scope: Deactivated successfully. Feb 13 20:44:44.109562 systemd-logind[1417]: Session 62 logged out. Waiting for processes to exit. Feb 13 20:44:44.110824 systemd-logind[1417]: Removed session 62. Feb 13 20:44:47.266402 kubelet[2423]: E0213 20:44:47.266362 2423 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 20:44:47.380157 kubelet[2423]: E0213 20:44:47.380125 2423 kubelet.go:3008] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Feb 13 20:44:48.265668 kubelet[2423]: E0213 20:44:48.265632 2423 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 20:44:49.113859 systemd[1]: Started sshd@62-10.0.0.8:22-10.0.0.1:59154.service - OpenSSH per-connection server daemon (10.0.0.1:59154). Feb 13 20:44:49.146048 sshd[3718]: Accepted publickey for core from 10.0.0.1 port 59154 ssh2: RSA SHA256:ijJ+MGEzuaViQlbMobaPQLogVrdZNUxIS0COzLzMHAQ Feb 13 20:44:49.147257 sshd[3718]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 20:44:49.151311 systemd-logind[1417]: New session 63 of user core. Feb 13 20:44:49.161109 systemd[1]: Started session-63.scope - Session 63 of User core. Feb 13 20:44:49.267576 sshd[3718]: pam_unix(sshd:session): session closed for user core Feb 13 20:44:49.270980 systemd[1]: sshd@62-10.0.0.8:22-10.0.0.1:59154.service: Deactivated successfully. Feb 13 20:44:49.272568 systemd[1]: session-63.scope: Deactivated successfully. Feb 13 20:44:49.274118 systemd-logind[1417]: Session 63 logged out. Waiting for processes to exit. Feb 13 20:44:49.275065 systemd-logind[1417]: Removed session 63. 
Feb 13 20:44:52.381541 kubelet[2423]: E0213 20:44:52.381506 2423 kubelet.go:3008] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Feb 13 20:44:53.266206 kubelet[2423]: E0213 20:44:53.266169 2423 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 20:44:53.267930 kubelet[2423]: E0213 20:44:53.267819 2423 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"install-cni\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/flannel/flannel:v0.22.0\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/flannel/flannel:v0.22.0\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/flannel/flannel/manifests/sha256:5f83f1243057458e27249157394e3859cf31cc075354af150d497f2ebc8b54db: 429 Too Many Requests - Server message: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit\"" pod="kube-flannel/kube-flannel-ds-pdsbm" podUID="e24b7648-6eb5-485b-97d3-9e8d6d3764bf" Feb 13 20:44:54.278482 systemd[1]: Started sshd@63-10.0.0.8:22-10.0.0.1:33986.service - OpenSSH per-connection server daemon (10.0.0.1:33986). Feb 13 20:44:54.310652 sshd[3735]: Accepted publickey for core from 10.0.0.1 port 33986 ssh2: RSA SHA256:ijJ+MGEzuaViQlbMobaPQLogVrdZNUxIS0COzLzMHAQ Feb 13 20:44:54.311965 sshd[3735]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 20:44:54.315856 systemd-logind[1417]: New session 64 of user core. Feb 13 20:44:54.326106 systemd[1]: Started session-64.scope - Session 64 of User core. Feb 13 20:44:54.437245 sshd[3735]: pam_unix(sshd:session): session closed for user core Feb 13 20:44:54.440475 systemd[1]: sshd@63-10.0.0.8:22-10.0.0.1:33986.service: Deactivated successfully. Feb 13 20:44:54.442255 systemd[1]: session-64.scope: Deactivated successfully. Feb 13 20:44:54.442943 systemd-logind[1417]: Session 64 logged out. Waiting for processes to exit. Feb 13 20:44:54.444241 systemd-logind[1417]: Removed session 64. Feb 13 20:44:55.266237 kubelet[2423]: E0213 20:44:55.266135 2423 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 20:44:57.382438 kubelet[2423]: E0213 20:44:57.382395 2423 kubelet.go:3008] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Feb 13 20:44:59.448473 systemd[1]: Started sshd@64-10.0.0.8:22-10.0.0.1:34002.service - OpenSSH per-connection server daemon (10.0.0.1:34002). Feb 13 20:44:59.480121 sshd[3750]: Accepted publickey for core from 10.0.0.1 port 34002 ssh2: RSA SHA256:ijJ+MGEzuaViQlbMobaPQLogVrdZNUxIS0COzLzMHAQ Feb 13 20:44:59.481261 sshd[3750]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 20:44:59.484674 systemd-logind[1417]: New session 65 of user core. Feb 13 20:44:59.493033 systemd[1]: Started session-65.scope - Session 65 of User core. 
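
The recurring dns.go:153 warning is kubelet noticing that the node's /etc/resolv.conf lists more nameservers than the classic resolver limit of three, so it applies only the first three (1.1.1.1 1.0.0.1 8.8.8.8 here) and silently drops the rest. A small sketch that reports which entries are being omitted:

```go
// resolvcheck.go: report nameserver entries beyond the three-server
// limit that kubelet (and glibc) will silently ignore.
package main

import (
	"bufio"
	"fmt"
	"os"
	"strings"
)

const maxNameservers = 3 // resolv.conf limit enforced by glibc and kubelet

func main() {
	f, err := os.Open("/etc/resolv.conf")
	if err != nil {
		panic(err)
	}
	defer f.Close()

	var servers []string
	sc := bufio.NewScanner(f)
	for sc.Scan() {
		fields := strings.Fields(sc.Text())
		if len(fields) >= 2 && fields[0] == "nameserver" {
			servers = append(servers, fields[1])
		}
	}
	if err := sc.Err(); err != nil {
		panic(err)
	}

	if len(servers) <= maxNameservers {
		fmt.Printf("OK: %d nameservers: %v\n", len(servers), servers)
		return
	}
	fmt.Printf("WARNING: %d nameservers; applied: %v, omitted: %v\n",
		len(servers), servers[:maxNameservers], servers[maxNameservers:])
}
```
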
Feb 13 20:44:59.600099 sshd[3750]: pam_unix(sshd:session): session closed for user core Feb 13 20:44:59.603646 systemd[1]: sshd@64-10.0.0.8:22-10.0.0.1:34002.service: Deactivated successfully. Feb 13 20:44:59.605375 systemd[1]: session-65.scope: Deactivated successfully. Feb 13 20:44:59.605952 systemd-logind[1417]: Session 65 logged out. Waiting for processes to exit. Feb 13 20:44:59.606884 systemd-logind[1417]: Removed session 65. Feb 13 20:45:02.383996 kubelet[2423]: E0213 20:45:02.383888 2423 kubelet.go:3008] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Feb 13 20:45:04.610420 systemd[1]: Started sshd@65-10.0.0.8:22-10.0.0.1:58820.service - OpenSSH per-connection server daemon (10.0.0.1:58820). Feb 13 20:45:04.642389 sshd[3765]: Accepted publickey for core from 10.0.0.1 port 58820 ssh2: RSA SHA256:ijJ+MGEzuaViQlbMobaPQLogVrdZNUxIS0COzLzMHAQ Feb 13 20:45:04.643568 sshd[3765]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 20:45:04.647148 systemd-logind[1417]: New session 66 of user core. Feb 13 20:45:04.657080 systemd[1]: Started session-66.scope - Session 66 of User core. Feb 13 20:45:04.763959 sshd[3765]: pam_unix(sshd:session): session closed for user core Feb 13 20:45:04.767391 systemd[1]: sshd@65-10.0.0.8:22-10.0.0.1:58820.service: Deactivated successfully. Feb 13 20:45:04.769664 systemd[1]: session-66.scope: Deactivated successfully. Feb 13 20:45:04.770748 systemd-logind[1417]: Session 66 logged out. Waiting for processes to exit. Feb 13 20:45:04.771939 systemd-logind[1417]: Removed session 66. Feb 13 20:45:06.265693 kubelet[2423]: E0213 20:45:06.265664 2423 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 20:45:06.266208 kubelet[2423]: E0213 20:45:06.265820 2423 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 20:45:06.266346 kubelet[2423]: E0213 20:45:06.266300 2423 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"install-cni\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/flannel/flannel:v0.22.0\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/flannel/flannel:v0.22.0\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/flannel/flannel/manifests/sha256:5f83f1243057458e27249157394e3859cf31cc075354af150d497f2ebc8b54db: 429 Too Many Requests - Server message: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit\"" pod="kube-flannel/kube-flannel-ds-pdsbm" podUID="e24b7648-6eb5-485b-97d3-9e8d6d3764bf" Feb 13 20:45:07.385236 kubelet[2423]: E0213 20:45:07.385202 2423 kubelet.go:3008] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Feb 13 20:45:09.774721 systemd[1]: Started sshd@66-10.0.0.8:22-10.0.0.1:58822.service - OpenSSH per-connection server daemon (10.0.0.1:58822). 
Feb 13 20:45:09.807257 sshd[3781]: Accepted publickey for core from 10.0.0.1 port 58822 ssh2: RSA SHA256:ijJ+MGEzuaViQlbMobaPQLogVrdZNUxIS0COzLzMHAQ Feb 13 20:45:09.808465 sshd[3781]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 20:45:09.812657 systemd-logind[1417]: New session 67 of user core. Feb 13 20:45:09.821069 systemd[1]: Started session-67.scope - Session 67 of User core. Feb 13 20:45:09.929426 sshd[3781]: pam_unix(sshd:session): session closed for user core Feb 13 20:45:09.932683 systemd[1]: sshd@66-10.0.0.8:22-10.0.0.1:58822.service: Deactivated successfully. Feb 13 20:45:09.934521 systemd[1]: session-67.scope: Deactivated successfully. Feb 13 20:45:09.935227 systemd-logind[1417]: Session 67 logged out. Waiting for processes to exit. Feb 13 20:45:09.936170 systemd-logind[1417]: Removed session 67. Feb 13 20:45:12.386670 kubelet[2423]: E0213 20:45:12.386614 2423 kubelet.go:3008] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Feb 13 20:45:14.940507 systemd[1]: Started sshd@67-10.0.0.8:22-10.0.0.1:34838.service - OpenSSH per-connection server daemon (10.0.0.1:34838). Feb 13 20:45:14.972663 sshd[3798]: Accepted publickey for core from 10.0.0.1 port 34838 ssh2: RSA SHA256:ijJ+MGEzuaViQlbMobaPQLogVrdZNUxIS0COzLzMHAQ Feb 13 20:45:14.973932 sshd[3798]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 20:45:14.978132 systemd-logind[1417]: New session 68 of user core. Feb 13 20:45:14.995089 systemd[1]: Started session-68.scope - Session 68 of User core. Feb 13 20:45:15.106448 sshd[3798]: pam_unix(sshd:session): session closed for user core Feb 13 20:45:15.109736 systemd[1]: sshd@67-10.0.0.8:22-10.0.0.1:34838.service: Deactivated successfully. Feb 13 20:45:15.111353 systemd[1]: session-68.scope: Deactivated successfully. Feb 13 20:45:15.111969 systemd-logind[1417]: Session 68 logged out. Waiting for processes to exit. Feb 13 20:45:15.112754 systemd-logind[1417]: Removed session 68. Feb 13 20:45:17.387356 kubelet[2423]: E0213 20:45:17.387306 2423 kubelet.go:3008] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Feb 13 20:45:20.117483 systemd[1]: Started sshd@68-10.0.0.8:22-10.0.0.1:34854.service - OpenSSH per-connection server daemon (10.0.0.1:34854). Feb 13 20:45:20.151210 sshd[3813]: Accepted publickey for core from 10.0.0.1 port 34854 ssh2: RSA SHA256:ijJ+MGEzuaViQlbMobaPQLogVrdZNUxIS0COzLzMHAQ Feb 13 20:45:20.152434 sshd[3813]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 20:45:20.155950 systemd-logind[1417]: New session 69 of user core. Feb 13 20:45:20.162115 systemd[1]: Started session-69.scope - Session 69 of User core. Feb 13 20:45:20.266081 kubelet[2423]: E0213 20:45:20.266047 2423 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 20:45:20.267462 containerd[1426]: time="2025-02-13T20:45:20.267431747Z" level=info msg="PullImage \"docker.io/flannel/flannel:v0.22.0\"" Feb 13 20:45:20.269667 sshd[3813]: pam_unix(sshd:session): session closed for user core Feb 13 20:45:20.273386 systemd[1]: sshd@68-10.0.0.8:22-10.0.0.1:34854.service: Deactivated successfully. 
Feb 13 20:45:20.275677 systemd[1]: session-69.scope: Deactivated successfully. Feb 13 20:45:20.276880 systemd-logind[1417]: Session 69 logged out. Waiting for processes to exit. Feb 13 20:45:20.278104 systemd-logind[1417]: Removed session 69. Feb 13 20:45:21.635410 containerd[1426]: time="2025-02-13T20:45:21.635325117Z" level=error msg="PullImage \"docker.io/flannel/flannel:v0.22.0\" failed" error="failed to pull and unpack image \"docker.io/flannel/flannel:v0.22.0\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/flannel/flannel/manifests/sha256:de969d65cb6e570ce2ff6f069132707b6da0e6d2adb43116d43743e2c31fd773: 429 Too Many Requests - Server message: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit" Feb 13 20:45:21.635410 containerd[1426]: time="2025-02-13T20:45:21.635352919Z" level=info msg="stop pulling image docker.io/flannel/flannel:v0.22.0: active requests=0, bytes read=13094" Feb 13 20:45:21.635828 kubelet[2423]: E0213 20:45:21.635532 2423 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = failed to pull and unpack image \"docker.io/flannel/flannel:v0.22.0\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/flannel/flannel/manifests/sha256:de969d65cb6e570ce2ff6f069132707b6da0e6d2adb43116d43743e2c31fd773: 429 Too Many Requests - Server message: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit" image="docker.io/flannel/flannel:v0.22.0" Feb 13 20:45:21.635828 kubelet[2423]: E0213 20:45:21.635576 2423 kuberuntime_image.go:55] "Failed to pull image" err="failed to pull and unpack image \"docker.io/flannel/flannel:v0.22.0\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/flannel/flannel/manifests/sha256:de969d65cb6e570ce2ff6f069132707b6da0e6d2adb43116d43743e2c31fd773: 429 Too Many Requests - Server message: toomanyrequests: You have reached your pull rate limit. 
You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit" image="docker.io/flannel/flannel:v0.22.0" Feb 13 20:45:21.636148 kubelet[2423]: E0213 20:45:21.635664 2423 kuberuntime_manager.go:1341] "Unhandled Error" err="init container &Container{Name:install-cni,Image:docker.io/flannel/flannel:v0.22.0,Command:[cp],Args:[-f /etc/kube-flannel/cni-conf.json /etc/cni/net.d/10-flannel.conflist],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:cni,ReadOnly:false,MountPath:/etc/cni/net.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:flannel-cfg,ReadOnly:false,MountPath:/etc/kube-flannel/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-bk9w5,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod kube-flannel-ds-pdsbm_kube-flannel(e24b7648-6eb5-485b-97d3-9e8d6d3764bf): ErrImagePull: failed to pull and unpack image \"docker.io/flannel/flannel:v0.22.0\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/flannel/flannel/manifests/sha256:de969d65cb6e570ce2ff6f069132707b6da0e6d2adb43116d43743e2c31fd773: 429 Too Many Requests - Server message: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit" logger="UnhandledError" Feb 13 20:45:21.637386 kubelet[2423]: E0213 20:45:21.637339 2423 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"install-cni\" with ErrImagePull: \"failed to pull and unpack image \\\"docker.io/flannel/flannel:v0.22.0\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/flannel/flannel/manifests/sha256:de969d65cb6e570ce2ff6f069132707b6da0e6d2adb43116d43743e2c31fd773: 429 Too Many Requests - Server message: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit\"" pod="kube-flannel/kube-flannel-ds-pdsbm" podUID="e24b7648-6eb5-485b-97d3-9e8d6d3764bf" Feb 13 20:45:22.388653 kubelet[2423]: E0213 20:45:22.388610 2423 kubelet.go:3008] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Feb 13 20:45:25.281457 systemd[1]: Started sshd@69-10.0.0.8:22-10.0.0.1:58136.service - OpenSSH per-connection server daemon (10.0.0.1:58136). Feb 13 20:45:25.315032 sshd[3827]: Accepted publickey for core from 10.0.0.1 port 58136 ssh2: RSA SHA256:ijJ+MGEzuaViQlbMobaPQLogVrdZNUxIS0COzLzMHAQ Feb 13 20:45:25.316576 sshd[3827]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 20:45:25.319936 systemd-logind[1417]: New session 70 of user core. 
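
The PullImage failure containerd[1426] logs at 20:45:21 can be reproduced outside kubelet's sync loop, which cleanly separates registry-side rate limiting from kubelet or CNI problems. A sketch using the containerd Go client, assuming the pre-2.0 client API and the "k8s.io" namespace that kubelet's CRI integration uses:

```go
// pull.go: reproduce kubelet's failing image pull directly against
// containerd, outside the kubelet retry/backoff machinery.
package main

import (
	"context"
	"fmt"

	"github.com/containerd/containerd"
	"github.com/containerd/containerd/namespaces"
)

func main() {
	client, err := containerd.New("/run/containerd/containerd.sock")
	if err != nil {
		panic(err)
	}
	defer client.Close()

	ctx := namespaces.WithNamespace(context.Background(), "k8s.io")

	// A 429 from registry-1.docker.io surfaces here as a plain error,
	// with none of kubelet's ImagePullBackOff wrapping in the way.
	img, err := client.Pull(ctx, "docker.io/flannel/flannel:v0.22.0",
		containerd.WithPullUnpack)
	if err != nil {
		fmt.Println("pull failed:", err)
		return
	}
	fmt.Println("pulled:", img.Name())
}
```
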
Feb 13 20:45:25.332052 systemd[1]: Started session-70.scope - Session 70 of User core. Feb 13 20:45:25.442547 sshd[3827]: pam_unix(sshd:session): session closed for user core Feb 13 20:45:25.446099 systemd[1]: sshd@69-10.0.0.8:22-10.0.0.1:58136.service: Deactivated successfully. Feb 13 20:45:25.448686 systemd[1]: session-70.scope: Deactivated successfully. Feb 13 20:45:25.449765 systemd-logind[1417]: Session 70 logged out. Waiting for processes to exit. Feb 13 20:45:25.450608 systemd-logind[1417]: Removed session 70. Feb 13 20:45:27.389624 kubelet[2423]: E0213 20:45:27.389581 2423 kubelet.go:3008] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Feb 13 20:45:30.453467 systemd[1]: Started sshd@70-10.0.0.8:22-10.0.0.1:58148.service - OpenSSH per-connection server daemon (10.0.0.1:58148). Feb 13 20:45:30.485231 sshd[3842]: Accepted publickey for core from 10.0.0.1 port 58148 ssh2: RSA SHA256:ijJ+MGEzuaViQlbMobaPQLogVrdZNUxIS0COzLzMHAQ Feb 13 20:45:30.486368 sshd[3842]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 20:45:30.489794 systemd-logind[1417]: New session 71 of user core. Feb 13 20:45:30.499029 systemd[1]: Started session-71.scope - Session 71 of User core. Feb 13 20:45:30.604640 sshd[3842]: pam_unix(sshd:session): session closed for user core Feb 13 20:45:30.607684 systemd[1]: sshd@70-10.0.0.8:22-10.0.0.1:58148.service: Deactivated successfully. Feb 13 20:45:30.609272 systemd[1]: session-71.scope: Deactivated successfully. Feb 13 20:45:30.610391 systemd-logind[1417]: Session 71 logged out. Waiting for processes to exit. Feb 13 20:45:30.611236 systemd-logind[1417]: Removed session 71. Feb 13 20:45:32.390773 kubelet[2423]: E0213 20:45:32.390734 2423 kubelet.go:3008] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Feb 13 20:45:33.265875 kubelet[2423]: E0213 20:45:33.265681 2423 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 20:45:33.266308 kubelet[2423]: E0213 20:45:33.266278 2423 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"install-cni\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/flannel/flannel:v0.22.0\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/flannel/flannel:v0.22.0\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/flannel/flannel/manifests/sha256:de969d65cb6e570ce2ff6f069132707b6da0e6d2adb43116d43743e2c31fd773: 429 Too Many Requests - Server message: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit\"" pod="kube-flannel/kube-flannel-ds-pdsbm" podUID="e24b7648-6eb5-485b-97d3-9e8d6d3764bf" Feb 13 20:45:35.615401 systemd[1]: Started sshd@71-10.0.0.8:22-10.0.0.1:46290.service - OpenSSH per-connection server daemon (10.0.0.1:46290). 
Feb 13 20:45:35.647320 sshd[3857]: Accepted publickey for core from 10.0.0.1 port 46290 ssh2: RSA SHA256:ijJ+MGEzuaViQlbMobaPQLogVrdZNUxIS0COzLzMHAQ Feb 13 20:45:35.648487 sshd[3857]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 20:45:35.652566 systemd-logind[1417]: New session 72 of user core. Feb 13 20:45:35.663024 systemd[1]: Started session-72.scope - Session 72 of User core. Feb 13 20:45:35.770165 sshd[3857]: pam_unix(sshd:session): session closed for user core Feb 13 20:45:35.773291 systemd[1]: sshd@71-10.0.0.8:22-10.0.0.1:46290.service: Deactivated successfully. Feb 13 20:45:35.775764 systemd[1]: session-72.scope: Deactivated successfully. Feb 13 20:45:35.776453 systemd-logind[1417]: Session 72 logged out. Waiting for processes to exit. Feb 13 20:45:35.777280 systemd-logind[1417]: Removed session 72. Feb 13 20:45:37.391960 kubelet[2423]: E0213 20:45:37.391924 2423 kubelet.go:3008] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Feb 13 20:45:40.780473 systemd[1]: Started sshd@72-10.0.0.8:22-10.0.0.1:46306.service - OpenSSH per-connection server daemon (10.0.0.1:46306). Feb 13 20:45:40.812233 sshd[3873]: Accepted publickey for core from 10.0.0.1 port 46306 ssh2: RSA SHA256:ijJ+MGEzuaViQlbMobaPQLogVrdZNUxIS0COzLzMHAQ Feb 13 20:45:40.813394 sshd[3873]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 20:45:40.816803 systemd-logind[1417]: New session 73 of user core. Feb 13 20:45:40.825100 systemd[1]: Started session-73.scope - Session 73 of User core. Feb 13 20:45:40.931686 sshd[3873]: pam_unix(sshd:session): session closed for user core Feb 13 20:45:40.934725 systemd[1]: sshd@72-10.0.0.8:22-10.0.0.1:46306.service: Deactivated successfully. Feb 13 20:45:40.936329 systemd[1]: session-73.scope: Deactivated successfully. Feb 13 20:45:40.936878 systemd-logind[1417]: Session 73 logged out. Waiting for processes to exit. Feb 13 20:45:40.937616 systemd-logind[1417]: Removed session 73. Feb 13 20:45:42.392851 kubelet[2423]: E0213 20:45:42.392813 2423 kubelet.go:3008] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Feb 13 20:45:45.266213 kubelet[2423]: E0213 20:45:45.266180 2423 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 20:45:45.267318 kubelet[2423]: E0213 20:45:45.267271 2423 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"install-cni\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/flannel/flannel:v0.22.0\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/flannel/flannel:v0.22.0\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/flannel/flannel/manifests/sha256:de969d65cb6e570ce2ff6f069132707b6da0e6d2adb43116d43743e2c31fd773: 429 Too Many Requests - Server message: toomanyrequests: You have reached your pull rate limit. 
You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit\"" pod="kube-flannel/kube-flannel-ds-pdsbm" podUID="e24b7648-6eb5-485b-97d3-9e8d6d3764bf" Feb 13 20:45:45.942358 systemd[1]: Started sshd@73-10.0.0.8:22-10.0.0.1:43664.service - OpenSSH per-connection server daemon (10.0.0.1:43664). Feb 13 20:45:45.974701 sshd[3889]: Accepted publickey for core from 10.0.0.1 port 43664 ssh2: RSA SHA256:ijJ+MGEzuaViQlbMobaPQLogVrdZNUxIS0COzLzMHAQ Feb 13 20:45:45.975843 sshd[3889]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 20:45:45.979306 systemd-logind[1417]: New session 74 of user core. Feb 13 20:45:45.989039 systemd[1]: Started session-74.scope - Session 74 of User core. Feb 13 20:45:46.094375 sshd[3889]: pam_unix(sshd:session): session closed for user core Feb 13 20:45:46.097603 systemd[1]: sshd@73-10.0.0.8:22-10.0.0.1:43664.service: Deactivated successfully. Feb 13 20:45:46.099421 systemd[1]: session-74.scope: Deactivated successfully. Feb 13 20:45:46.100053 systemd-logind[1417]: Session 74 logged out. Waiting for processes to exit. Feb 13 20:45:46.100806 systemd-logind[1417]: Removed session 74. Feb 13 20:45:47.393453 kubelet[2423]: E0213 20:45:47.393416 2423 kubelet.go:3008] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Feb 13 20:45:51.105348 systemd[1]: Started sshd@74-10.0.0.8:22-10.0.0.1:43676.service - OpenSSH per-connection server daemon (10.0.0.1:43676). Feb 13 20:45:51.137182 sshd[3904]: Accepted publickey for core from 10.0.0.1 port 43676 ssh2: RSA SHA256:ijJ+MGEzuaViQlbMobaPQLogVrdZNUxIS0COzLzMHAQ Feb 13 20:45:51.138393 sshd[3904]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 20:45:51.142218 systemd-logind[1417]: New session 75 of user core. Feb 13 20:45:51.155034 systemd[1]: Started session-75.scope - Session 75 of User core. Feb 13 20:45:51.261589 sshd[3904]: pam_unix(sshd:session): session closed for user core Feb 13 20:45:51.264618 systemd[1]: sshd@74-10.0.0.8:22-10.0.0.1:43676.service: Deactivated successfully. Feb 13 20:45:51.266192 systemd[1]: session-75.scope: Deactivated successfully. Feb 13 20:45:51.267353 systemd-logind[1417]: Session 75 logged out. Waiting for processes to exit. Feb 13 20:45:51.268179 systemd-logind[1417]: Removed session 75. Feb 13 20:45:52.394548 kubelet[2423]: E0213 20:45:52.394507 2423 kubelet.go:3008] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Feb 13 20:45:53.266399 kubelet[2423]: E0213 20:45:53.266306 2423 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 20:45:56.272342 systemd[1]: Started sshd@75-10.0.0.8:22-10.0.0.1:38168.service - OpenSSH per-connection server daemon (10.0.0.1:38168). Feb 13 20:45:56.304988 sshd[3919]: Accepted publickey for core from 10.0.0.1 port 38168 ssh2: RSA SHA256:ijJ+MGEzuaViQlbMobaPQLogVrdZNUxIS0COzLzMHAQ Feb 13 20:45:56.306234 sshd[3919]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 20:45:56.310068 systemd-logind[1417]: New session 76 of user core. Feb 13 20:45:56.320088 systemd[1]: Started session-76.scope - Session 76 of User core. 
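
The alternation between ErrImagePull and ImagePullBackOff in these entries follows kubelet's image-pull backoff. Assuming the defaults (10 s initial delay, doubling per failure, capped at 300 s), retries settle into roughly five-minute intervals, which matches the cadence of the pull attempts above. The schedule, sketched:

```go
// backoff.go: the doubling-with-cap schedule behind ImagePullBackOff.
// Kubelet's assumed defaults: 10s initial delay, factor 2, 300s cap.
package main

import (
	"fmt"
	"time"
)

func pullDelays(failures int, initial, maxDelay time.Duration) []time.Duration {
	delays := make([]time.Duration, 0, failures)
	d := initial
	for i := 0; i < failures; i++ {
		delays = append(delays, d)
		d *= 2
		if d > maxDelay {
			d = maxDelay
		}
	}
	return delays
}

func main() {
	// Prints [10s 20s 40s 1m20s 2m40s 5m0s 5m0s 5m0s]: after about six
	// failures, every retry is five minutes apart.
	fmt.Println(pullDelays(8, 10*time.Second, 300*time.Second))
}
```
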
Feb 13 20:45:56.426480 sshd[3919]: pam_unix(sshd:session): session closed for user core Feb 13 20:45:56.429646 systemd[1]: sshd@75-10.0.0.8:22-10.0.0.1:38168.service: Deactivated successfully. Feb 13 20:45:56.431187 systemd[1]: session-76.scope: Deactivated successfully. Feb 13 20:45:56.432451 systemd-logind[1417]: Session 76 logged out. Waiting for processes to exit. Feb 13 20:45:56.433278 systemd-logind[1417]: Removed session 76. Feb 13 20:45:57.267064 kubelet[2423]: E0213 20:45:57.266576 2423 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 20:45:57.267405 kubelet[2423]: E0213 20:45:57.267356 2423 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"install-cni\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/flannel/flannel:v0.22.0\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/flannel/flannel:v0.22.0\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/flannel/flannel/manifests/sha256:de969d65cb6e570ce2ff6f069132707b6da0e6d2adb43116d43743e2c31fd773: 429 Too Many Requests - Server message: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit\"" pod="kube-flannel/kube-flannel-ds-pdsbm" podUID="e24b7648-6eb5-485b-97d3-9e8d6d3764bf" Feb 13 20:45:57.395794 kubelet[2423]: E0213 20:45:57.395741 2423 kubelet.go:3008] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Feb 13 20:46:01.437480 systemd[1]: Started sshd@76-10.0.0.8:22-10.0.0.1:38178.service - OpenSSH per-connection server daemon (10.0.0.1:38178). Feb 13 20:46:01.469304 sshd[3933]: Accepted publickey for core from 10.0.0.1 port 38178 ssh2: RSA SHA256:ijJ+MGEzuaViQlbMobaPQLogVrdZNUxIS0COzLzMHAQ Feb 13 20:46:01.470471 sshd[3933]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 20:46:01.474179 systemd-logind[1417]: New session 77 of user core. Feb 13 20:46:01.481034 systemd[1]: Started session-77.scope - Session 77 of User core. Feb 13 20:46:01.588740 sshd[3933]: pam_unix(sshd:session): session closed for user core Feb 13 20:46:01.591844 systemd[1]: sshd@76-10.0.0.8:22-10.0.0.1:38178.service: Deactivated successfully. Feb 13 20:46:01.594367 systemd[1]: session-77.scope: Deactivated successfully. Feb 13 20:46:01.595015 systemd-logind[1417]: Session 77 logged out. Waiting for processes to exit. Feb 13 20:46:01.595863 systemd-logind[1417]: Removed session 77. Feb 13 20:46:02.396797 kubelet[2423]: E0213 20:46:02.396754 2423 kubelet.go:3008] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Feb 13 20:46:06.599484 systemd[1]: Started sshd@77-10.0.0.8:22-10.0.0.1:37452.service - OpenSSH per-connection server daemon (10.0.0.1:37452). Feb 13 20:46:06.631658 sshd[3948]: Accepted publickey for core from 10.0.0.1 port 37452 ssh2: RSA SHA256:ijJ+MGEzuaViQlbMobaPQLogVrdZNUxIS0COzLzMHAQ Feb 13 20:46:06.632870 sshd[3948]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 20:46:06.636378 systemd-logind[1417]: New session 78 of user core. 
Feb 13 20:46:06.655047 systemd[1]: Started session-78.scope - Session 78 of User core. Feb 13 20:46:06.763060 sshd[3948]: pam_unix(sshd:session): session closed for user core Feb 13 20:46:06.779470 systemd[1]: sshd@77-10.0.0.8:22-10.0.0.1:37452.service: Deactivated successfully. Feb 13 20:46:06.780895 systemd[1]: session-78.scope: Deactivated successfully. Feb 13 20:46:06.782189 systemd-logind[1417]: Session 78 logged out. Waiting for processes to exit. Feb 13 20:46:06.795352 systemd[1]: Started sshd@78-10.0.0.8:22-10.0.0.1:37462.service - OpenSSH per-connection server daemon (10.0.0.1:37462). Feb 13 20:46:06.796963 systemd-logind[1417]: Removed session 78. Feb 13 20:46:06.823397 sshd[3963]: Accepted publickey for core from 10.0.0.1 port 37462 ssh2: RSA SHA256:ijJ+MGEzuaViQlbMobaPQLogVrdZNUxIS0COzLzMHAQ Feb 13 20:46:06.824535 sshd[3963]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 20:46:06.828940 systemd-logind[1417]: New session 79 of user core. Feb 13 20:46:06.840094 systemd[1]: Started session-79.scope - Session 79 of User core. Feb 13 20:46:07.007014 sshd[3963]: pam_unix(sshd:session): session closed for user core Feb 13 20:46:07.019358 systemd[1]: sshd@78-10.0.0.8:22-10.0.0.1:37462.service: Deactivated successfully. Feb 13 20:46:07.020723 systemd[1]: session-79.scope: Deactivated successfully. Feb 13 20:46:07.021920 systemd-logind[1417]: Session 79 logged out. Waiting for processes to exit. Feb 13 20:46:07.023098 systemd[1]: Started sshd@79-10.0.0.8:22-10.0.0.1:37466.service - OpenSSH per-connection server daemon (10.0.0.1:37466). Feb 13 20:46:07.023942 systemd-logind[1417]: Removed session 79. Feb 13 20:46:07.056185 sshd[3978]: Accepted publickey for core from 10.0.0.1 port 37466 ssh2: RSA SHA256:ijJ+MGEzuaViQlbMobaPQLogVrdZNUxIS0COzLzMHAQ Feb 13 20:46:07.057368 sshd[3978]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 20:46:07.061528 systemd-logind[1417]: New session 80 of user core. Feb 13 20:46:07.072041 systemd[1]: Started session-80.scope - Session 80 of User core. Feb 13 20:46:07.398240 kubelet[2423]: E0213 20:46:07.398205 2423 kubelet.go:3008] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Feb 13 20:46:07.761104 sshd[3978]: pam_unix(sshd:session): session closed for user core Feb 13 20:46:07.772565 systemd[1]: sshd@79-10.0.0.8:22-10.0.0.1:37466.service: Deactivated successfully. Feb 13 20:46:07.775993 systemd[1]: session-80.scope: Deactivated successfully. Feb 13 20:46:07.778176 systemd-logind[1417]: Session 80 logged out. Waiting for processes to exit. Feb 13 20:46:07.783357 systemd[1]: Started sshd@80-10.0.0.8:22-10.0.0.1:37474.service - OpenSSH per-connection server daemon (10.0.0.1:37474). Feb 13 20:46:07.784529 systemd-logind[1417]: Removed session 80. Feb 13 20:46:07.815387 sshd[3998]: Accepted publickey for core from 10.0.0.1 port 37474 ssh2: RSA SHA256:ijJ+MGEzuaViQlbMobaPQLogVrdZNUxIS0COzLzMHAQ Feb 13 20:46:07.816736 sshd[3998]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 20:46:07.821106 systemd-logind[1417]: New session 81 of user core. Feb 13 20:46:07.828062 systemd[1]: Started session-81.scope - Session 81 of User core. Feb 13 20:46:08.042297 sshd[3998]: pam_unix(sshd:session): session closed for user core Feb 13 20:46:08.050971 systemd[1]: sshd@80-10.0.0.8:22-10.0.0.1:37474.service: Deactivated successfully. 
Feb 13 20:46:08.052464 systemd[1]: session-81.scope: Deactivated successfully. Feb 13 20:46:08.054390 systemd-logind[1417]: Session 81 logged out. Waiting for processes to exit. Feb 13 20:46:08.060171 systemd[1]: Started sshd@81-10.0.0.8:22-10.0.0.1:37490.service - OpenSSH per-connection server daemon (10.0.0.1:37490). Feb 13 20:46:08.061290 systemd-logind[1417]: Removed session 81. Feb 13 20:46:08.089232 sshd[4012]: Accepted publickey for core from 10.0.0.1 port 37490 ssh2: RSA SHA256:ijJ+MGEzuaViQlbMobaPQLogVrdZNUxIS0COzLzMHAQ Feb 13 20:46:08.090570 sshd[4012]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 20:46:08.094432 systemd-logind[1417]: New session 82 of user core. Feb 13 20:46:08.110089 systemd[1]: Started session-82.scope - Session 82 of User core. Feb 13 20:46:08.224335 sshd[4012]: pam_unix(sshd:session): session closed for user core Feb 13 20:46:08.227589 systemd[1]: sshd@81-10.0.0.8:22-10.0.0.1:37490.service: Deactivated successfully. Feb 13 20:46:08.229991 systemd[1]: session-82.scope: Deactivated successfully. Feb 13 20:46:08.230787 systemd-logind[1417]: Session 82 logged out. Waiting for processes to exit. Feb 13 20:46:08.231663 systemd-logind[1417]: Removed session 82. Feb 13 20:46:08.266165 kubelet[2423]: E0213 20:46:08.266137 2423 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 20:46:08.266296 kubelet[2423]: E0213 20:46:08.266244 2423 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 20:46:08.266911 kubelet[2423]: E0213 20:46:08.266832 2423 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"install-cni\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/flannel/flannel:v0.22.0\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/flannel/flannel:v0.22.0\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/flannel/flannel/manifests/sha256:de969d65cb6e570ce2ff6f069132707b6da0e6d2adb43116d43743e2c31fd773: 429 Too Many Requests - Server message: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit\"" pod="kube-flannel/kube-flannel-ds-pdsbm" podUID="e24b7648-6eb5-485b-97d3-9e8d6d3764bf" Feb 13 20:46:12.399921 kubelet[2423]: E0213 20:46:12.399863 2423 kubelet.go:3008] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Feb 13 20:46:13.238474 systemd[1]: Started sshd@82-10.0.0.8:22-10.0.0.1:58504.service - OpenSSH per-connection server daemon (10.0.0.1:58504). Feb 13 20:46:13.270380 sshd[4027]: Accepted publickey for core from 10.0.0.1 port 58504 ssh2: RSA SHA256:ijJ+MGEzuaViQlbMobaPQLogVrdZNUxIS0COzLzMHAQ Feb 13 20:46:13.271565 sshd[4027]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 20:46:13.274985 systemd-logind[1417]: New session 83 of user core. Feb 13 20:46:13.283059 systemd[1]: Started session-83.scope - Session 83 of User core. 
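
Interleaved with the pull failures, sessions 60 through 82 each open from 10.0.0.1, last well under a second in most cases, and close again on a near five-minute cadence — consistent with an external health probe or cron job rather than interactive logins, though the log alone cannot confirm that. A sketch that tallies the churn from a journal dump on stdin:

```go
// sessioncount.go: tally SSH session churn in a journal dump by
// pairing systemd-logind "New session N" / "Removed session N" entries.
package main

import (
	"bufio"
	"fmt"
	"os"
	"regexp"
)

var (
	openRe  = regexp.MustCompile(`New session (\d+) of user core`)
	closeRe = regexp.MustCompile(`Removed session (\d+)\.`)
)

func main() {
	open := map[string]bool{}
	opened, closed := 0, 0

	sc := bufio.NewScanner(os.Stdin)
	sc.Buffer(make([]byte, 1024*1024), 1024*1024) // journal lines can be long
	for sc.Scan() {
		line := sc.Text()
		for _, m := range openRe.FindAllStringSubmatch(line, -1) {
			open[m[1]] = true
			opened++
		}
		for _, m := range closeRe.FindAllStringSubmatch(line, -1) {
			delete(open, m[1])
			closed++
		}
	}
	if err := sc.Err(); err != nil {
		panic(err)
	}
	fmt.Printf("opened=%d closed=%d still-open=%d\n", opened, closed, len(open))
}
```

Run as, for example, `journalctl | go run sessioncount.go` on the affected node.
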
Feb 13 20:46:13.388319 sshd[4027]: pam_unix(sshd:session): session closed for user core Feb 13 20:46:13.391420 systemd[1]: sshd@82-10.0.0.8:22-10.0.0.1:58504.service: Deactivated successfully. Feb 13 20:46:13.393025 systemd[1]: session-83.scope: Deactivated successfully. Feb 13 20:46:13.393561 systemd-logind[1417]: Session 83 logged out. Waiting for processes to exit. Feb 13 20:46:13.394387 systemd-logind[1417]: Removed session 83. Feb 13 20:46:16.266332 kubelet[2423]: E0213 20:46:16.266295 2423 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 20:46:17.400749 kubelet[2423]: E0213 20:46:17.400716 2423 kubelet.go:3008] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Feb 13 20:46:18.265935 kubelet[2423]: E0213 20:46:18.265827 2423 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 20:46:18.399443 systemd[1]: Started sshd@83-10.0.0.8:22-10.0.0.1:58512.service - OpenSSH per-connection server daemon (10.0.0.1:58512). Feb 13 20:46:18.430943 sshd[4043]: Accepted publickey for core from 10.0.0.1 port 58512 ssh2: RSA SHA256:ijJ+MGEzuaViQlbMobaPQLogVrdZNUxIS0COzLzMHAQ Feb 13 20:46:18.432247 sshd[4043]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 20:46:18.435971 systemd-logind[1417]: New session 84 of user core. Feb 13 20:46:18.446081 systemd[1]: Started session-84.scope - Session 84 of User core. Feb 13 20:46:18.551116 sshd[4043]: pam_unix(sshd:session): session closed for user core Feb 13 20:46:18.554208 systemd[1]: sshd@83-10.0.0.8:22-10.0.0.1:58512.service: Deactivated successfully. Feb 13 20:46:18.555860 systemd[1]: session-84.scope: Deactivated successfully. Feb 13 20:46:18.557095 systemd-logind[1417]: Session 84 logged out. Waiting for processes to exit. Feb 13 20:46:18.558043 systemd-logind[1417]: Removed session 84. Feb 13 20:46:20.266093 kubelet[2423]: E0213 20:46:20.266053 2423 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 20:46:20.266093 kubelet[2423]: E0213 20:46:20.266602 2423 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"install-cni\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/flannel/flannel:v0.22.0\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/flannel/flannel:v0.22.0\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/flannel/flannel/manifests/sha256:de969d65cb6e570ce2ff6f069132707b6da0e6d2adb43116d43743e2c31fd773: 429 Too Many Requests - Server message: toomanyrequests: You have reached your pull rate limit. 
You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit\"" pod="kube-flannel/kube-flannel-ds-pdsbm" podUID="e24b7648-6eb5-485b-97d3-9e8d6d3764bf" Feb 13 20:46:22.401971 kubelet[2423]: E0213 20:46:22.401935 2423 kubelet.go:3008] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Feb 13 20:46:23.562406 systemd[1]: Started sshd@84-10.0.0.8:22-10.0.0.1:32956.service - OpenSSH per-connection server daemon (10.0.0.1:32956). Feb 13 20:46:23.596476 sshd[4058]: Accepted publickey for core from 10.0.0.1 port 32956 ssh2: RSA SHA256:ijJ+MGEzuaViQlbMobaPQLogVrdZNUxIS0COzLzMHAQ Feb 13 20:46:23.597627 sshd[4058]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 20:46:23.601788 systemd-logind[1417]: New session 85 of user core. Feb 13 20:46:23.623043 systemd[1]: Started session-85.scope - Session 85 of User core. Feb 13 20:46:23.727861 sshd[4058]: pam_unix(sshd:session): session closed for user core Feb 13 20:46:23.731511 systemd[1]: sshd@84-10.0.0.8:22-10.0.0.1:32956.service: Deactivated successfully. Feb 13 20:46:23.733246 systemd[1]: session-85.scope: Deactivated successfully. Feb 13 20:46:23.734404 systemd-logind[1417]: Session 85 logged out. Waiting for processes to exit. Feb 13 20:46:23.735227 systemd-logind[1417]: Removed session 85. Feb 13 20:46:27.403223 kubelet[2423]: E0213 20:46:27.403185 2423 kubelet.go:3008] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Feb 13 20:46:28.738527 systemd[1]: Started sshd@85-10.0.0.8:22-10.0.0.1:32964.service - OpenSSH per-connection server daemon (10.0.0.1:32964). Feb 13 20:46:28.770205 sshd[4073]: Accepted publickey for core from 10.0.0.1 port 32964 ssh2: RSA SHA256:ijJ+MGEzuaViQlbMobaPQLogVrdZNUxIS0COzLzMHAQ Feb 13 20:46:28.771426 sshd[4073]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 20:46:28.775043 systemd-logind[1417]: New session 86 of user core. Feb 13 20:46:28.785564 systemd[1]: Started session-86.scope - Session 86 of User core. Feb 13 20:46:28.887470 sshd[4073]: pam_unix(sshd:session): session closed for user core Feb 13 20:46:28.890495 systemd[1]: sshd@85-10.0.0.8:22-10.0.0.1:32964.service: Deactivated successfully. Feb 13 20:46:28.892768 systemd[1]: session-86.scope: Deactivated successfully. Feb 13 20:46:28.893703 systemd-logind[1417]: Session 86 logged out. Waiting for processes to exit. Feb 13 20:46:28.894757 systemd-logind[1417]: Removed session 86. Feb 13 20:46:32.404498 kubelet[2423]: E0213 20:46:32.404409 2423 kubelet.go:3008] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Feb 13 20:46:33.898710 systemd[1]: Started sshd@86-10.0.0.8:22-10.0.0.1:59646.service - OpenSSH per-connection server daemon (10.0.0.1:59646). Feb 13 20:46:33.930820 sshd[4088]: Accepted publickey for core from 10.0.0.1 port 59646 ssh2: RSA SHA256:ijJ+MGEzuaViQlbMobaPQLogVrdZNUxIS0COzLzMHAQ Feb 13 20:46:33.932036 sshd[4088]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 20:46:33.935765 systemd-logind[1417]: New session 87 of user core. Feb 13 20:46:33.955080 systemd[1]: Started session-87.scope - Session 87 of User core. 
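
The persistent "cni plugin not initialized" state and the image-pull failures are the same incident: the install-cni init container whose spec kuberuntime_manager dumped at 20:45:21 does nothing but `cp -f /etc/kube-flannel/cni-conf.json /etc/cni/net.d/10-flannel.conflist`, and it can never start while its image pull is rate-limited, so kubelet never finds a CNI config. Placing the conflist by hand clears kubelet's NetworkReady check, though pods still need the flannel daemon itself for actual connectivity. A sketch, assuming the JSON matches the stock kube-flannel cni-conf.json — verify against the DaemonSet's ConfigMap before using it:

```go
// writeconflist.go: place the flannel CNI config that the blocked
// install-cni init container would normally copy into /etc/cni/net.d.
package main

import "os"

// Assumed to match the stock kube-flannel cni-conf.json.
const conflist = `{
  "name": "cbr0",
  "cniVersion": "0.3.1",
  "plugins": [
    {
      "type": "flannel",
      "delegate": {
        "hairpinMode": true,
        "isDefaultGateway": true
      }
    },
    {
      "type": "portmap",
      "capabilities": {
        "portMappings": true
      }
    }
  ]
}
`

func main() {
	if err := os.MkdirAll("/etc/cni/net.d", 0o755); err != nil {
		panic(err)
	}
	// 10-flannel.conflist is the filename from the init container spec
	// logged by kuberuntime_manager above.
	if err := os.WriteFile("/etc/cni/net.d/10-flannel.conflist",
		[]byte(conflist), 0o644); err != nil {
		panic(err)
	}
}
```
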
Feb 13 20:46:34.059372 sshd[4088]: pam_unix(sshd:session): session closed for user core Feb 13 20:46:34.061891 systemd[1]: sshd@86-10.0.0.8:22-10.0.0.1:59646.service: Deactivated successfully. Feb 13 20:46:34.063543 systemd[1]: session-87.scope: Deactivated successfully. Feb 13 20:46:34.064863 systemd-logind[1417]: Session 87 logged out. Waiting for processes to exit. Feb 13 20:46:34.065715 systemd-logind[1417]: Removed session 87. Feb 13 20:46:34.266089 kubelet[2423]: E0213 20:46:34.265978 2423 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 20:46:34.266892 kubelet[2423]: E0213 20:46:34.266857 2423 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"install-cni\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/flannel/flannel:v0.22.0\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/flannel/flannel:v0.22.0\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/flannel/flannel/manifests/sha256:de969d65cb6e570ce2ff6f069132707b6da0e6d2adb43116d43743e2c31fd773: 429 Too Many Requests - Server message: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit\"" pod="kube-flannel/kube-flannel-ds-pdsbm" podUID="e24b7648-6eb5-485b-97d3-9e8d6d3764bf" Feb 13 20:46:37.405979 kubelet[2423]: E0213 20:46:37.405942 2423 kubelet.go:3008] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Feb 13 20:46:39.070516 systemd[1]: Started sshd@87-10.0.0.8:22-10.0.0.1:59660.service - OpenSSH per-connection server daemon (10.0.0.1:59660). Feb 13 20:46:39.102432 sshd[4105]: Accepted publickey for core from 10.0.0.1 port 59660 ssh2: RSA SHA256:ijJ+MGEzuaViQlbMobaPQLogVrdZNUxIS0COzLzMHAQ Feb 13 20:46:39.103619 sshd[4105]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 20:46:39.107060 systemd-logind[1417]: New session 88 of user core. Feb 13 20:46:39.116050 systemd[1]: Started session-88.scope - Session 88 of User core. Feb 13 20:46:39.221281 sshd[4105]: pam_unix(sshd:session): session closed for user core Feb 13 20:46:39.224587 systemd[1]: sshd@87-10.0.0.8:22-10.0.0.1:59660.service: Deactivated successfully. Feb 13 20:46:39.227116 systemd[1]: session-88.scope: Deactivated successfully. Feb 13 20:46:39.228000 systemd-logind[1417]: Session 88 logged out. Waiting for processes to exit. Feb 13 20:46:39.228863 systemd-logind[1417]: Removed session 88. Feb 13 20:46:42.406770 kubelet[2423]: E0213 20:46:42.406727 2423 kubelet.go:3008] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Feb 13 20:46:44.235639 systemd[1]: Started sshd@88-10.0.0.8:22-10.0.0.1:56976.service - OpenSSH per-connection server daemon (10.0.0.1:56976). Feb 13 20:46:44.268120 sshd[4121]: Accepted publickey for core from 10.0.0.1 port 56976 ssh2: RSA SHA256:ijJ+MGEzuaViQlbMobaPQLogVrdZNUxIS0COzLzMHAQ Feb 13 20:46:44.269359 sshd[4121]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 20:46:44.272616 systemd-logind[1417]: New session 89 of user core. 
Feb 13 20:46:44.280031 systemd[1]: Started session-89.scope - Session 89 of User core. Feb 13 20:46:44.385023 sshd[4121]: pam_unix(sshd:session): session closed for user core Feb 13 20:46:44.388048 systemd-logind[1417]: Session 89 logged out. Waiting for processes to exit. Feb 13 20:46:44.388207 systemd[1]: sshd@88-10.0.0.8:22-10.0.0.1:56976.service: Deactivated successfully. Feb 13 20:46:44.389695 systemd[1]: session-89.scope: Deactivated successfully. Feb 13 20:46:44.391217 systemd-logind[1417]: Removed session 89. Feb 13 20:46:47.266157 kubelet[2423]: E0213 20:46:47.266115 2423 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 20:46:47.266872 kubelet[2423]: E0213 20:46:47.266824 2423 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"install-cni\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/flannel/flannel:v0.22.0\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/flannel/flannel:v0.22.0\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/flannel/flannel/manifests/sha256:de969d65cb6e570ce2ff6f069132707b6da0e6d2adb43116d43743e2c31fd773: 429 Too Many Requests - Server message: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit\"" pod="kube-flannel/kube-flannel-ds-pdsbm" podUID="e24b7648-6eb5-485b-97d3-9e8d6d3764bf" Feb 13 20:46:47.408178 kubelet[2423]: E0213 20:46:47.408139 2423 kubelet.go:3008] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Feb 13 20:46:49.395487 systemd[1]: Started sshd@89-10.0.0.8:22-10.0.0.1:56988.service - OpenSSH per-connection server daemon (10.0.0.1:56988). Feb 13 20:46:49.427809 sshd[4135]: Accepted publickey for core from 10.0.0.1 port 56988 ssh2: RSA SHA256:ijJ+MGEzuaViQlbMobaPQLogVrdZNUxIS0COzLzMHAQ Feb 13 20:46:49.429163 sshd[4135]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 20:46:49.433205 systemd-logind[1417]: New session 90 of user core. Feb 13 20:46:49.445056 systemd[1]: Started session-90.scope - Session 90 of User core. Feb 13 20:46:49.547097 sshd[4135]: pam_unix(sshd:session): session closed for user core Feb 13 20:46:49.550246 systemd[1]: sshd@89-10.0.0.8:22-10.0.0.1:56988.service: Deactivated successfully. Feb 13 20:46:49.551956 systemd[1]: session-90.scope: Deactivated successfully. Feb 13 20:46:49.552556 systemd-logind[1417]: Session 90 logged out. Waiting for processes to exit. Feb 13 20:46:49.553592 systemd-logind[1417]: Removed session 90. Feb 13 20:46:52.409124 kubelet[2423]: E0213 20:46:52.409064 2423 kubelet.go:3008] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Feb 13 20:46:54.557502 systemd[1]: Started sshd@90-10.0.0.8:22-10.0.0.1:52292.service - OpenSSH per-connection server daemon (10.0.0.1:52292). 
Feb 13 20:46:54.590138 sshd[4149]: Accepted publickey for core from 10.0.0.1 port 52292 ssh2: RSA SHA256:ijJ+MGEzuaViQlbMobaPQLogVrdZNUxIS0COzLzMHAQ Feb 13 20:46:54.591339 sshd[4149]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 20:46:54.595372 systemd-logind[1417]: New session 91 of user core. Feb 13 20:46:54.602040 systemd[1]: Started session-91.scope - Session 91 of User core. Feb 13 20:46:54.708289 sshd[4149]: pam_unix(sshd:session): session closed for user core Feb 13 20:46:54.711406 systemd[1]: sshd@90-10.0.0.8:22-10.0.0.1:52292.service: Deactivated successfully. Feb 13 20:46:54.712993 systemd[1]: session-91.scope: Deactivated successfully. Feb 13 20:46:54.713532 systemd-logind[1417]: Session 91 logged out. Waiting for processes to exit. Feb 13 20:46:54.714317 systemd-logind[1417]: Removed session 91. Feb 13 20:46:57.410333 kubelet[2423]: E0213 20:46:57.410284 2423 kubelet.go:3008] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Feb 13 20:46:59.266473 kubelet[2423]: E0213 20:46:59.266372 2423 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 20:46:59.267089 kubelet[2423]: E0213 20:46:59.266994 2423 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"install-cni\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/flannel/flannel:v0.22.0\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/flannel/flannel:v0.22.0\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/flannel/flannel/manifests/sha256:de969d65cb6e570ce2ff6f069132707b6da0e6d2adb43116d43743e2c31fd773: 429 Too Many Requests - Server message: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit\"" pod="kube-flannel/kube-flannel-ds-pdsbm" podUID="e24b7648-6eb5-485b-97d3-9e8d6d3764bf" Feb 13 20:46:59.719357 systemd[1]: Started sshd@91-10.0.0.8:22-10.0.0.1:52296.service - OpenSSH per-connection server daemon (10.0.0.1:52296). Feb 13 20:46:59.751202 sshd[4165]: Accepted publickey for core from 10.0.0.1 port 52296 ssh2: RSA SHA256:ijJ+MGEzuaViQlbMobaPQLogVrdZNUxIS0COzLzMHAQ Feb 13 20:46:59.752322 sshd[4165]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 20:46:59.755970 systemd-logind[1417]: New session 92 of user core. Feb 13 20:46:59.764080 systemd[1]: Started session-92.scope - Session 92 of User core. Feb 13 20:46:59.868744 sshd[4165]: pam_unix(sshd:session): session closed for user core Feb 13 20:46:59.872210 systemd[1]: sshd@91-10.0.0.8:22-10.0.0.1:52296.service: Deactivated successfully. Feb 13 20:46:59.873875 systemd[1]: session-92.scope: Deactivated successfully. Feb 13 20:46:59.875399 systemd-logind[1417]: Session 92 logged out. Waiting for processes to exit. Feb 13 20:46:59.876405 systemd-logind[1417]: Removed session 92. 
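
The durable fix the registry message points at is authenticated pulls. In Kubernetes terms that is a kubernetes.io/dockerconfigjson secret referenced from imagePullSecrets in the kube-flannel DaemonSet's pod spec. A sketch that builds the secret payload with the standard library; the credentials are placeholders:

```go
// dockercfg.go: build the .dockerconfigjson payload for a
// kubernetes.io/dockerconfigjson pull secret.
package main

import (
	"encoding/base64"
	"encoding/json"
	"fmt"
)

func main() {
	user, token := "example-user", "example-access-token" // placeholders

	cfg := map[string]any{
		"auths": map[string]any{
			"https://index.docker.io/v1/": map[string]string{
				"username": user,
				"password": token,
				// auth is base64("user:password"), per the Docker config format.
				"auth": base64.StdEncoding.EncodeToString([]byte(user + ":" + token)),
			},
		},
	}
	out, err := json.MarshalIndent(cfg, "", "  ")
	if err != nil {
		panic(err)
	}
	// Feed the output to:
	//   kubectl create secret generic regcred \
	//     --from-file=.dockerconfigjson=<file> \
	//     --type=kubernetes.io/dockerconfigjson
	fmt.Println(string(out))
}
```

The DaemonSet then references it with `imagePullSecrets: [{name: regcred}]`, after which pulls count against the authenticated account's quota instead of the node IP's anonymous quota.
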
Feb 13 20:47:02.411564 kubelet[2423]: E0213 20:47:02.411506 2423 kubelet.go:3008] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Feb 13 20:47:04.883740 systemd[1]: Started sshd@92-10.0.0.8:22-10.0.0.1:56764.service - OpenSSH per-connection server daemon (10.0.0.1:56764). Feb 13 20:47:04.915445 sshd[4180]: Accepted publickey for core from 10.0.0.1 port 56764 ssh2: RSA SHA256:ijJ+MGEzuaViQlbMobaPQLogVrdZNUxIS0COzLzMHAQ Feb 13 20:47:04.916632 sshd[4180]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 20:47:04.922952 systemd-logind[1417]: New session 93 of user core. Feb 13 20:47:04.934238 systemd[1]: Started session-93.scope - Session 93 of User core. Feb 13 20:47:05.040124 sshd[4180]: pam_unix(sshd:session): session closed for user core Feb 13 20:47:05.043331 systemd[1]: sshd@92-10.0.0.8:22-10.0.0.1:56764.service: Deactivated successfully. Feb 13 20:47:05.045082 systemd[1]: session-93.scope: Deactivated successfully. Feb 13 20:47:05.045749 systemd-logind[1417]: Session 93 logged out. Waiting for processes to exit. Feb 13 20:47:05.046851 systemd-logind[1417]: Removed session 93. Feb 13 20:47:07.412916 kubelet[2423]: E0213 20:47:07.412865 2423 kubelet.go:3008] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Feb 13 20:47:10.050486 systemd[1]: Started sshd@93-10.0.0.8:22-10.0.0.1:56776.service - OpenSSH per-connection server daemon (10.0.0.1:56776). Feb 13 20:47:10.082871 sshd[4195]: Accepted publickey for core from 10.0.0.1 port 56776 ssh2: RSA SHA256:ijJ+MGEzuaViQlbMobaPQLogVrdZNUxIS0COzLzMHAQ Feb 13 20:47:10.084073 sshd[4195]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 20:47:10.088344 systemd-logind[1417]: New session 94 of user core. Feb 13 20:47:10.098036 systemd[1]: Started session-94.scope - Session 94 of User core. Feb 13 20:47:10.203615 sshd[4195]: pam_unix(sshd:session): session closed for user core Feb 13 20:47:10.206170 systemd[1]: sshd@93-10.0.0.8:22-10.0.0.1:56776.service: Deactivated successfully. Feb 13 20:47:10.207751 systemd[1]: session-94.scope: Deactivated successfully. Feb 13 20:47:10.209011 systemd-logind[1417]: Session 94 logged out. Waiting for processes to exit. Feb 13 20:47:10.209873 systemd-logind[1417]: Removed session 94. Feb 13 20:47:10.265709 kubelet[2423]: E0213 20:47:10.265656 2423 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 20:47:10.266423 kubelet[2423]: E0213 20:47:10.266391 2423 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"install-cni\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/flannel/flannel:v0.22.0\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/flannel/flannel:v0.22.0\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/flannel/flannel/manifests/sha256:de969d65cb6e570ce2ff6f069132707b6da0e6d2adb43116d43743e2c31fd773: 429 Too Many Requests - Server message: toomanyrequests: You have reached your pull rate limit. 
You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit\"" pod="kube-flannel/kube-flannel-ds-pdsbm" podUID="e24b7648-6eb5-485b-97d3-9e8d6d3764bf" Feb 13 20:47:12.414001 kubelet[2423]: E0213 20:47:12.413939 2423 kubelet.go:3008] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Feb 13 20:47:15.217465 systemd[1]: Started sshd@94-10.0.0.8:22-10.0.0.1:46594.service - OpenSSH per-connection server daemon (10.0.0.1:46594). Feb 13 20:47:15.249730 sshd[4213]: Accepted publickey for core from 10.0.0.1 port 46594 ssh2: RSA SHA256:ijJ+MGEzuaViQlbMobaPQLogVrdZNUxIS0COzLzMHAQ Feb 13 20:47:15.251014 sshd[4213]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 20:47:15.255361 systemd-logind[1417]: New session 95 of user core. Feb 13 20:47:15.262046 systemd[1]: Started session-95.scope - Session 95 of User core. Feb 13 20:47:15.368430 sshd[4213]: pam_unix(sshd:session): session closed for user core Feb 13 20:47:15.371562 systemd[1]: sshd@94-10.0.0.8:22-10.0.0.1:46594.service: Deactivated successfully. Feb 13 20:47:15.373218 systemd[1]: session-95.scope: Deactivated successfully. Feb 13 20:47:15.373782 systemd-logind[1417]: Session 95 logged out. Waiting for processes to exit. Feb 13 20:47:15.374738 systemd-logind[1417]: Removed session 95. Feb 13 20:47:16.266082 kubelet[2423]: E0213 20:47:16.266038 2423 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 20:47:17.415416 kubelet[2423]: E0213 20:47:17.415357 2423 kubelet.go:3008] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Feb 13 20:47:18.266090 kubelet[2423]: E0213 20:47:18.266049 2423 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 20:47:20.382463 systemd[1]: Started sshd@95-10.0.0.8:22-10.0.0.1:46608.service - OpenSSH per-connection server daemon (10.0.0.1:46608). Feb 13 20:47:20.414049 sshd[4229]: Accepted publickey for core from 10.0.0.1 port 46608 ssh2: RSA SHA256:ijJ+MGEzuaViQlbMobaPQLogVrdZNUxIS0COzLzMHAQ Feb 13 20:47:20.415254 sshd[4229]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 20:47:20.419503 systemd-logind[1417]: New session 96 of user core. Feb 13 20:47:20.433045 systemd[1]: Started session-96.scope - Session 96 of User core. Feb 13 20:47:20.538832 sshd[4229]: pam_unix(sshd:session): session closed for user core Feb 13 20:47:20.542023 systemd[1]: sshd@95-10.0.0.8:22-10.0.0.1:46608.service: Deactivated successfully. Feb 13 20:47:20.543859 systemd[1]: session-96.scope: Deactivated successfully. Feb 13 20:47:20.544533 systemd-logind[1417]: Session 96 logged out. Waiting for processes to exit. Feb 13 20:47:20.545286 systemd-logind[1417]: Removed session 96. 
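
An alternative that avoids Docker Hub entirely is a pull-through mirror configured at the containerd level. With containerd's certs.d layout — which assumes `config_path = "/etc/containerd/certs.d"` is set in containerd's CRI registry config — a hosts.toml under /etc/containerd/certs.d/docker.io/ redirects docker.io pulls to a mirror. A sketch that writes one; mirror.example.internal is a placeholder:

```go
// writemirror.go: point containerd's docker.io pulls at a mirror via
// the certs.d hosts.toml layout.
package main

import (
	"os"
	"path/filepath"
)

const hostsTOML = `server = "https://registry-1.docker.io"

[host."https://mirror.example.internal"]
  capabilities = ["pull", "resolve"]
`

func main() {
	dir := "/etc/containerd/certs.d/docker.io"
	if err := os.MkdirAll(dir, 0o755); err != nil {
		panic(err)
	}
	if err := os.WriteFile(filepath.Join(dir, "hosts.toml"),
		[]byte(hostsTOML), 0o644); err != nil {
		panic(err)
	}
}
```

Containerd falls back to the upstream `server` when the mirror misses, so a partially populated mirror still reduces the rate-limited traffic.
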
Feb 13 20:47:22.416492 kubelet[2423]: E0213 20:47:22.416406 2423 kubelet.go:3008] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Feb 13 20:47:23.265816 kubelet[2423]: E0213 20:47:23.265782 2423 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 13 20:47:23.266466 kubelet[2423]: E0213 20:47:23.266415 2423 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"install-cni\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/flannel/flannel:v0.22.0\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/flannel/flannel:v0.22.0\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/flannel/flannel/manifests/sha256:de969d65cb6e570ce2ff6f069132707b6da0e6d2adb43116d43743e2c31fd773: 429 Too Many Requests - Server message: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit\"" pod="kube-flannel/kube-flannel-ds-pdsbm" podUID="e24b7648-6eb5-485b-97d3-9e8d6d3764bf"
Feb 13 20:47:25.552440 systemd[1]: Started sshd@96-10.0.0.8:22-10.0.0.1:39758.service - OpenSSH per-connection server daemon (10.0.0.1:39758).
Feb 13 20:47:25.583990 sshd[4244]: Accepted publickey for core from 10.0.0.1 port 39758 ssh2: RSA SHA256:ijJ+MGEzuaViQlbMobaPQLogVrdZNUxIS0COzLzMHAQ
Feb 13 20:47:25.585176 sshd[4244]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 20:47:25.588483 systemd-logind[1417]: New session 97 of user core.
Feb 13 20:47:25.599123 systemd[1]: Started session-97.scope - Session 97 of User core.
Feb 13 20:47:25.704282 sshd[4244]: pam_unix(sshd:session): session closed for user core
Feb 13 20:47:25.707671 systemd[1]: sshd@96-10.0.0.8:22-10.0.0.1:39758.service: Deactivated successfully.
Feb 13 20:47:25.710160 systemd[1]: session-97.scope: Deactivated successfully.
Feb 13 20:47:25.711123 systemd-logind[1417]: Session 97 logged out. Waiting for processes to exit.
Feb 13 20:47:25.711990 systemd-logind[1417]: Removed session 97.
Feb 13 20:47:27.417995 kubelet[2423]: E0213 20:47:27.417940 2423 kubelet.go:3008] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Feb 13 20:47:30.714439 systemd[1]: Started sshd@97-10.0.0.8:22-10.0.0.1:39762.service - OpenSSH per-connection server daemon (10.0.0.1:39762).
Feb 13 20:47:30.746782 sshd[4259]: Accepted publickey for core from 10.0.0.1 port 39762 ssh2: RSA SHA256:ijJ+MGEzuaViQlbMobaPQLogVrdZNUxIS0COzLzMHAQ
Feb 13 20:47:30.748100 sshd[4259]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 20:47:30.751387 systemd-logind[1417]: New session 98 of user core.
Feb 13 20:47:30.766035 systemd[1]: Started session-98.scope - Session 98 of User core.
Feb 13 20:47:30.870656 sshd[4259]: pam_unix(sshd:session): session closed for user core
Feb 13 20:47:30.874240 systemd[1]: sshd@97-10.0.0.8:22-10.0.0.1:39762.service: Deactivated successfully.
Feb 13 20:47:30.876063 systemd[1]: session-98.scope: Deactivated successfully.
Feb 13 20:47:30.876700 systemd-logind[1417]: Session 98 logged out. Waiting for processes to exit.
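The kubelet.go:3008 "cni plugin not initialized" entries are a downstream symptom of the same pull failure: kubelet keeps the node's network unready until a CNI config appears in /etc/cni/net.d, and the install-cni container that would write one never starts. For reference, a sketch of what flannel would drop there, mirroring the defaults in the stock kube-flannel manifest (the path and contents are assumptions from that manifest, and writing the file by hand only silences the kubelet check; pods still need the flannel daemon and its /run/flannel/subnet.env):

# Assumed path and contents, per the stock kube-flannel manifest.
cat <<'EOF' > /etc/cni/net.d/10-flannel.conflist
{
  "name": "cbr0",
  "cniVersion": "0.3.1",
  "plugins": [
    { "type": "flannel", "delegate": { "hairpinMode": true, "isDefaultGateway": true } },
    { "type": "portmap", "capabilities": { "portMappings": true } }
  ]
}
EOF
# kubelet polls this directory; the "network not ready" entries stop
# once a valid conflist is present and the plugin can actually run.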
Feb 13 20:47:30.877577 systemd-logind[1417]: Removed session 98.
Feb 13 20:47:32.418748 kubelet[2423]: E0213 20:47:32.418692 2423 kubelet.go:3008] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Feb 13 20:47:35.267308 kubelet[2423]: E0213 20:47:35.267260 2423 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 13 20:47:35.267742 kubelet[2423]: E0213 20:47:35.267361 2423 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 13 20:47:35.267948 kubelet[2423]: E0213 20:47:35.267889 2423 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"install-cni\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/flannel/flannel:v0.22.0\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/flannel/flannel:v0.22.0\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/flannel/flannel/manifests/sha256:de969d65cb6e570ce2ff6f069132707b6da0e6d2adb43116d43743e2c31fd773: 429 Too Many Requests - Server message: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit\"" pod="kube-flannel/kube-flannel-ds-pdsbm" podUID="e24b7648-6eb5-485b-97d3-9e8d6d3764bf"
Feb 13 20:47:35.882408 systemd[1]: Started sshd@98-10.0.0.8:22-10.0.0.1:41544.service - OpenSSH per-connection server daemon (10.0.0.1:41544).
Feb 13 20:47:35.914543 sshd[4273]: Accepted publickey for core from 10.0.0.1 port 41544 ssh2: RSA SHA256:ijJ+MGEzuaViQlbMobaPQLogVrdZNUxIS0COzLzMHAQ
Feb 13 20:47:35.915665 sshd[4273]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 20:47:35.919248 systemd-logind[1417]: New session 99 of user core.
Feb 13 20:47:35.933037 systemd[1]: Started session-99.scope - Session 99 of User core.
Feb 13 20:47:36.039529 sshd[4273]: pam_unix(sshd:session): session closed for user core
Feb 13 20:47:36.042734 systemd[1]: sshd@98-10.0.0.8:22-10.0.0.1:41544.service: Deactivated successfully.
Feb 13 20:47:36.044362 systemd[1]: session-99.scope: Deactivated successfully.
Feb 13 20:47:36.044931 systemd-logind[1417]: Session 99 logged out. Waiting for processes to exit.
Feb 13 20:47:36.045688 systemd-logind[1417]: Removed session 99.
Feb 13 20:47:37.419855 kubelet[2423]: E0213 20:47:37.419815 2423 kubelet.go:3008] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Feb 13 20:47:41.049501 systemd[1]: Started sshd@99-10.0.0.8:22-10.0.0.1:41550.service - OpenSSH per-connection server daemon (10.0.0.1:41550).
Feb 13 20:47:41.080426 sshd[4289]: Accepted publickey for core from 10.0.0.1 port 41550 ssh2: RSA SHA256:ijJ+MGEzuaViQlbMobaPQLogVrdZNUxIS0COzLzMHAQ
Feb 13 20:47:41.081615 sshd[4289]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 20:47:41.085577 systemd-logind[1417]: New session 100 of user core.
Feb 13 20:47:41.097044 systemd[1]: Started session-100.scope - Session 100 of User core.
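The dns.go:153 "Nameserver limits exceeded" entries are independent of the flannel problem: the node's /etc/resolv.conf lists more nameservers than the glibc resolver reads (three, its MAXNS limit), so kubelet warns that it truncated the list it copies into pods; the message shows the three entries that survived. A sketch of the check and the trim, assuming resolv.conf is a plain file on this host rather than a symlink managed by systemd-resolved:

# Count the nameserver lines kubelet saw; more than three triggers the warning.
grep -c '^nameserver' /etc/resolv.conf

# Trimming to three entries clears the noise; which three to keep is site
# policy. These are the ones the log says were applied.
printf 'nameserver 1.1.1.1\nnameserver 1.0.0.1\nnameserver 8.8.8.8\n' > /etc/resolv.conf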
Feb 13 20:47:41.200515 sshd[4289]: pam_unix(sshd:session): session closed for user core
Feb 13 20:47:41.203334 systemd[1]: sshd@99-10.0.0.8:22-10.0.0.1:41550.service: Deactivated successfully.
Feb 13 20:47:41.206350 systemd[1]: session-100.scope: Deactivated successfully.
Feb 13 20:47:41.206948 systemd-logind[1417]: Session 100 logged out. Waiting for processes to exit.
Feb 13 20:47:41.207772 systemd-logind[1417]: Removed session 100.
Feb 13 20:47:42.421343 kubelet[2423]: E0213 20:47:42.421303 2423 kubelet.go:3008] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Feb 13 20:47:43.266026 kubelet[2423]: E0213 20:47:43.265997 2423 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 13 20:47:46.210410 systemd[1]: Started sshd@100-10.0.0.8:22-10.0.0.1:46176.service - OpenSSH per-connection server daemon (10.0.0.1:46176).
Feb 13 20:47:46.242484 sshd[4306]: Accepted publickey for core from 10.0.0.1 port 46176 ssh2: RSA SHA256:ijJ+MGEzuaViQlbMobaPQLogVrdZNUxIS0COzLzMHAQ
Feb 13 20:47:46.243678 sshd[4306]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 20:47:46.247247 systemd-logind[1417]: New session 101 of user core.
Feb 13 20:47:46.258102 systemd[1]: Started session-101.scope - Session 101 of User core.
Feb 13 20:47:46.362928 sshd[4306]: pam_unix(sshd:session): session closed for user core
Feb 13 20:47:46.366061 systemd[1]: sshd@100-10.0.0.8:22-10.0.0.1:46176.service: Deactivated successfully.
Feb 13 20:47:46.368414 systemd[1]: session-101.scope: Deactivated successfully.
Feb 13 20:47:46.369259 systemd-logind[1417]: Session 101 logged out. Waiting for processes to exit.
Feb 13 20:47:46.370179 systemd-logind[1417]: Removed session 101.
Feb 13 20:47:47.422333 kubelet[2423]: E0213 20:47:47.422295 2423 kubelet.go:3008] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Feb 13 20:47:48.266187 kubelet[2423]: E0213 20:47:48.266154 2423 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 13 20:47:48.266807 kubelet[2423]: E0213 20:47:48.266774 2423 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"install-cni\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/flannel/flannel:v0.22.0\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/flannel/flannel:v0.22.0\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/flannel/flannel/manifests/sha256:de969d65cb6e570ce2ff6f069132707b6da0e6d2adb43116d43743e2c31fd773: 429 Too Many Requests - Server message: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit\"" pod="kube-flannel/kube-flannel-ds-pdsbm" podUID="e24b7648-6eb5-485b-97d3-9e8d6d3764bf"
Feb 13 20:47:51.373649 systemd[1]: Started sshd@101-10.0.0.8:22-10.0.0.1:46184.service - OpenSSH per-connection server daemon (10.0.0.1:46184).
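The numbered sshd@N-<local>:22-<remote>:<port>.service units bracketing every session are systemd socket activation in per-connection mode: a socket unit owns port 22 with Accept=yes, and each inbound TCP connection spawns a fresh templated sshd instance, which is why every login (or probe) leaves a Started/Deactivated pair in the journal. A minimal sketch of that wiring, with generic unit names and paths; the units this image actually ships may differ in detail:

# sshd.socket -- one service instance per accepted connection:
#   [Socket]
#   ListenStream=22
#   Accept=yes
#
# sshd@.service -- runs sshd in inetd mode against the accepted socket:
#   [Service]
#   ExecStart=-/usr/sbin/sshd -i
#   StandardInput=socket
systemctl enable --now sshd.socket   # the socket owns port 22; sshd@ instances come and go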
Feb 13 20:47:51.405482 sshd[4321]: Accepted publickey for core from 10.0.0.1 port 46184 ssh2: RSA SHA256:ijJ+MGEzuaViQlbMobaPQLogVrdZNUxIS0COzLzMHAQ
Feb 13 20:47:51.406659 sshd[4321]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 20:47:51.410650 systemd-logind[1417]: New session 102 of user core.
Feb 13 20:47:51.425062 systemd[1]: Started session-102.scope - Session 102 of User core.
Feb 13 20:47:51.529644 sshd[4321]: pam_unix(sshd:session): session closed for user core
Feb 13 20:47:51.532782 systemd[1]: sshd@101-10.0.0.8:22-10.0.0.1:46184.service: Deactivated successfully.
Feb 13 20:47:51.534462 systemd[1]: session-102.scope: Deactivated successfully.
Feb 13 20:47:51.535377 systemd-logind[1417]: Session 102 logged out. Waiting for processes to exit.
Feb 13 20:47:51.536129 systemd-logind[1417]: Removed session 102.
Feb 13 20:47:52.423980 kubelet[2423]: E0213 20:47:52.423945 2423 kubelet.go:3008] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Feb 13 20:47:56.542474 systemd[1]: Started sshd@102-10.0.0.8:22-10.0.0.1:54666.service - OpenSSH per-connection server daemon (10.0.0.1:54666).
Feb 13 20:47:56.574336 sshd[4335]: Accepted publickey for core from 10.0.0.1 port 54666 ssh2: RSA SHA256:ijJ+MGEzuaViQlbMobaPQLogVrdZNUxIS0COzLzMHAQ
Feb 13 20:47:56.575494 sshd[4335]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 20:47:56.579433 systemd-logind[1417]: New session 103 of user core.
Feb 13 20:47:56.590119 systemd[1]: Started session-103.scope - Session 103 of User core.
Feb 13 20:47:56.692117 sshd[4335]: pam_unix(sshd:session): session closed for user core
Feb 13 20:47:56.695518 systemd-logind[1417]: Session 103 logged out. Waiting for processes to exit.
Feb 13 20:47:56.695868 systemd[1]: sshd@102-10.0.0.8:22-10.0.0.1:54666.service: Deactivated successfully.
Feb 13 20:47:56.698100 systemd[1]: session-103.scope: Deactivated successfully.
Feb 13 20:47:56.700176 systemd-logind[1417]: Removed session 103.
Feb 13 20:47:57.425154 kubelet[2423]: E0213 20:47:57.425117 2423 kubelet.go:3008] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Feb 13 20:48:01.266648 kubelet[2423]: E0213 20:48:01.266375 2423 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 13 20:48:01.267128 kubelet[2423]: E0213 20:48:01.267083 2423 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"install-cni\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/flannel/flannel:v0.22.0\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/flannel/flannel:v0.22.0\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/flannel/flannel/manifests/sha256:de969d65cb6e570ce2ff6f069132707b6da0e6d2adb43116d43743e2c31fd773: 429 Too Many Requests - Server message: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit\"" pod="kube-flannel/kube-flannel-ds-pdsbm" podUID="e24b7648-6eb5-485b-97d3-9e8d6d3764bf"
Feb 13 20:48:01.702406 systemd[1]: Started sshd@103-10.0.0.8:22-10.0.0.1:54682.service - OpenSSH per-connection server daemon (10.0.0.1:54682).
Feb 13 20:48:01.734066 sshd[4349]: Accepted publickey for core from 10.0.0.1 port 54682 ssh2: RSA SHA256:ijJ+MGEzuaViQlbMobaPQLogVrdZNUxIS0COzLzMHAQ
Feb 13 20:48:01.735206 sshd[4349]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 20:48:01.739046 systemd-logind[1417]: New session 104 of user core.
Feb 13 20:48:01.745027 systemd[1]: Started session-104.scope - Session 104 of User core.
Feb 13 20:48:01.847285 sshd[4349]: pam_unix(sshd:session): session closed for user core
Feb 13 20:48:01.850375 systemd[1]: sshd@103-10.0.0.8:22-10.0.0.1:54682.service: Deactivated successfully.
Feb 13 20:48:01.852581 systemd[1]: session-104.scope: Deactivated successfully.
Feb 13 20:48:01.853229 systemd-logind[1417]: Session 104 logged out. Waiting for processes to exit.
Feb 13 20:48:01.854147 systemd-logind[1417]: Removed session 104.
Feb 13 20:48:02.426575 kubelet[2423]: E0213 20:48:02.426516 2423 kubelet.go:3008] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Feb 13 20:48:06.858433 systemd[1]: Started sshd@104-10.0.0.8:22-10.0.0.1:49162.service - OpenSSH per-connection server daemon (10.0.0.1:49162).
Feb 13 20:48:06.890325 sshd[4364]: Accepted publickey for core from 10.0.0.1 port 49162 ssh2: RSA SHA256:ijJ+MGEzuaViQlbMobaPQLogVrdZNUxIS0COzLzMHAQ
Feb 13 20:48:06.891520 sshd[4364]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 20:48:06.895688 systemd-logind[1417]: New session 105 of user core.
Feb 13 20:48:06.907025 systemd[1]: Started session-105.scope - Session 105 of User core.
Feb 13 20:48:07.008227 sshd[4364]: pam_unix(sshd:session): session closed for user core
Feb 13 20:48:07.011408 systemd[1]: sshd@104-10.0.0.8:22-10.0.0.1:49162.service: Deactivated successfully.
Feb 13 20:48:07.013015 systemd[1]: session-105.scope: Deactivated successfully.
Feb 13 20:48:07.014189 systemd-logind[1417]: Session 105 logged out. Waiting for processes to exit.
Feb 13 20:48:07.015164 systemd-logind[1417]: Removed session 105.
Feb 13 20:48:07.427329 kubelet[2423]: E0213 20:48:07.427277 2423 kubelet.go:3008] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Feb 13 20:48:12.018556 systemd[1]: Started sshd@105-10.0.0.8:22-10.0.0.1:49172.service - OpenSSH per-connection server daemon (10.0.0.1:49172).
Feb 13 20:48:12.050521 sshd[4378]: Accepted publickey for core from 10.0.0.1 port 49172 ssh2: RSA SHA256:ijJ+MGEzuaViQlbMobaPQLogVrdZNUxIS0COzLzMHAQ
Feb 13 20:48:12.051718 sshd[4378]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 20:48:12.055048 systemd-logind[1417]: New session 106 of user core.
Feb 13 20:48:12.062022 systemd[1]: Started session-106.scope - Session 106 of User core.
Feb 13 20:48:12.165646 sshd[4378]: pam_unix(sshd:session): session closed for user core
Feb 13 20:48:12.168883 systemd[1]: sshd@105-10.0.0.8:22-10.0.0.1:49172.service: Deactivated successfully.
Feb 13 20:48:12.170448 systemd[1]: session-106.scope: Deactivated successfully.
Feb 13 20:48:12.171846 systemd-logind[1417]: Session 106 logged out. Waiting for processes to exit.
Feb 13 20:48:12.172675 systemd-logind[1417]: Removed session 106.
Feb 13 20:48:12.428159 kubelet[2423]: E0213 20:48:12.428105 2423 kubelet.go:3008] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Feb 13 20:48:14.265496 kubelet[2423]: E0213 20:48:14.265461 2423 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 13 20:48:14.266354 kubelet[2423]: E0213 20:48:14.266018 2423 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"install-cni\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/flannel/flannel:v0.22.0\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/flannel/flannel:v0.22.0\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/flannel/flannel/manifests/sha256:de969d65cb6e570ce2ff6f069132707b6da0e6d2adb43116d43743e2c31fd773: 429 Too Many Requests - Server message: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit\"" pod="kube-flannel/kube-flannel-ds-pdsbm" podUID="e24b7648-6eb5-485b-97d3-9e8d6d3764bf"
Feb 13 20:48:17.176245 systemd[1]: Started sshd@106-10.0.0.8:22-10.0.0.1:54612.service - OpenSSH per-connection server daemon (10.0.0.1:54612).
Feb 13 20:48:17.208497 sshd[4395]: Accepted publickey for core from 10.0.0.1 port 54612 ssh2: RSA SHA256:ijJ+MGEzuaViQlbMobaPQLogVrdZNUxIS0COzLzMHAQ
Feb 13 20:48:17.209705 sshd[4395]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 20:48:17.213601 systemd-logind[1417]: New session 107 of user core.
Feb 13 20:48:17.226038 systemd[1]: Started session-107.scope - Session 107 of User core.
Feb 13 20:48:17.330117 sshd[4395]: pam_unix(sshd:session): session closed for user core
Feb 13 20:48:17.332576 systemd[1]: session-107.scope: Deactivated successfully.
Feb 13 20:48:17.333809 systemd[1]: sshd@106-10.0.0.8:22-10.0.0.1:54612.service: Deactivated successfully.
Feb 13 20:48:17.336769 systemd-logind[1417]: Session 107 logged out. Waiting for processes to exit.
Feb 13 20:48:17.337680 systemd-logind[1417]: Removed session 107.
Feb 13 20:48:17.429086 kubelet[2423]: E0213 20:48:17.428998 2423 kubelet.go:3008] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Feb 13 20:48:22.344631 systemd[1]: Started sshd@107-10.0.0.8:22-10.0.0.1:54618.service - OpenSSH per-connection server daemon (10.0.0.1:54618).
Feb 13 20:48:22.376505 sshd[4410]: Accepted publickey for core from 10.0.0.1 port 54618 ssh2: RSA SHA256:ijJ+MGEzuaViQlbMobaPQLogVrdZNUxIS0COzLzMHAQ
Feb 13 20:48:22.377748 sshd[4410]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 20:48:22.381697 systemd-logind[1417]: New session 108 of user core.
Feb 13 20:48:22.389028 systemd[1]: Started session-108.scope - Session 108 of User core.
Feb 13 20:48:22.429813 kubelet[2423]: E0213 20:48:22.429779 2423 kubelet.go:3008] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Feb 13 20:48:22.492047 sshd[4410]: pam_unix(sshd:session): session closed for user core
Feb 13 20:48:22.495258 systemd[1]: sshd@107-10.0.0.8:22-10.0.0.1:54618.service: Deactivated successfully.
Feb 13 20:48:22.496881 systemd[1]: session-108.scope: Deactivated successfully.
Feb 13 20:48:22.497530 systemd-logind[1417]: Session 108 logged out. Waiting for processes to exit.
Feb 13 20:48:22.498493 systemd-logind[1417]: Removed session 108.
Feb 13 20:48:27.431033 kubelet[2423]: E0213 20:48:27.430989 2423 kubelet.go:3008] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Feb 13 20:48:27.505546 systemd[1]: Started sshd@108-10.0.0.8:22-10.0.0.1:38198.service - OpenSSH per-connection server daemon (10.0.0.1:38198).
Feb 13 20:48:27.537456 sshd[4425]: Accepted publickey for core from 10.0.0.1 port 38198 ssh2: RSA SHA256:ijJ+MGEzuaViQlbMobaPQLogVrdZNUxIS0COzLzMHAQ
Feb 13 20:48:27.538638 sshd[4425]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 20:48:27.542728 systemd-logind[1417]: New session 109 of user core.
Feb 13 20:48:27.556041 systemd[1]: Started session-109.scope - Session 109 of User core.
Feb 13 20:48:27.662151 sshd[4425]: pam_unix(sshd:session): session closed for user core
Feb 13 20:48:27.664881 systemd[1]: sshd@108-10.0.0.8:22-10.0.0.1:38198.service: Deactivated successfully.
Feb 13 20:48:27.666563 systemd[1]: session-109.scope: Deactivated successfully.
Feb 13 20:48:27.667986 systemd-logind[1417]: Session 109 logged out. Waiting for processes to exit.
Feb 13 20:48:27.668968 systemd-logind[1417]: Removed session 109.
Feb 13 20:48:29.265862 kubelet[2423]: E0213 20:48:29.265643 2423 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 13 20:48:29.266295 kubelet[2423]: E0213 20:48:29.266244 2423 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"install-cni\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/flannel/flannel:v0.22.0\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/flannel/flannel:v0.22.0\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/flannel/flannel/manifests/sha256:de969d65cb6e570ce2ff6f069132707b6da0e6d2adb43116d43743e2c31fd773: 429 Too Many Requests - Server message: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit\"" pod="kube-flannel/kube-flannel-ds-pdsbm" podUID="e24b7648-6eb5-485b-97d3-9e8d6d3764bf"
Feb 13 20:48:32.432563 kubelet[2423]: E0213 20:48:32.432523 2423 kubelet.go:3008] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Feb 13 20:48:32.672525 systemd[1]: Started sshd@109-10.0.0.8:22-10.0.0.1:60804.service - OpenSSH per-connection server daemon (10.0.0.1:60804).
Feb 13 20:48:32.704643 sshd[4440]: Accepted publickey for core from 10.0.0.1 port 60804 ssh2: RSA SHA256:ijJ+MGEzuaViQlbMobaPQLogVrdZNUxIS0COzLzMHAQ
Feb 13 20:48:32.705767 sshd[4440]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 20:48:32.709234 systemd-logind[1417]: New session 110 of user core.
Feb 13 20:48:32.720036 systemd[1]: Started session-110.scope - Session 110 of User core.
Feb 13 20:48:32.824595 sshd[4440]: pam_unix(sshd:session): session closed for user core
Feb 13 20:48:32.827879 systemd[1]: sshd@109-10.0.0.8:22-10.0.0.1:60804.service: Deactivated successfully.
Feb 13 20:48:32.829608 systemd[1]: session-110.scope: Deactivated successfully.
Feb 13 20:48:32.830305 systemd-logind[1417]: Session 110 logged out. Waiting for processes to exit.
Feb 13 20:48:32.831619 systemd-logind[1417]: Removed session 110.
Feb 13 20:48:36.265884 kubelet[2423]: E0213 20:48:36.265781 2423 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 13 20:48:37.433428 kubelet[2423]: E0213 20:48:37.433382 2423 kubelet.go:3008] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Feb 13 20:48:37.835620 systemd[1]: Started sshd@110-10.0.0.8:22-10.0.0.1:60806.service - OpenSSH per-connection server daemon (10.0.0.1:60806).
Feb 13 20:48:37.869539 sshd[4456]: Accepted publickey for core from 10.0.0.1 port 60806 ssh2: RSA SHA256:ijJ+MGEzuaViQlbMobaPQLogVrdZNUxIS0COzLzMHAQ
Feb 13 20:48:37.870757 sshd[4456]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 20:48:37.874343 systemd-logind[1417]: New session 111 of user core.
Feb 13 20:48:37.886023 systemd[1]: Started session-111.scope - Session 111 of User core.
Feb 13 20:48:37.989830 sshd[4456]: pam_unix(sshd:session): session closed for user core
Feb 13 20:48:37.993677 systemd[1]: sshd@110-10.0.0.8:22-10.0.0.1:60806.service: Deactivated successfully.
Feb 13 20:48:37.996167 systemd[1]: session-111.scope: Deactivated successfully.
Feb 13 20:48:37.997042 systemd-logind[1417]: Session 111 logged out. Waiting for processes to exit.
Feb 13 20:48:37.997853 systemd-logind[1417]: Removed session 111.
Feb 13 20:48:39.265966 kubelet[2423]: E0213 20:48:39.265887 2423 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 13 20:48:42.434246 kubelet[2423]: E0213 20:48:42.434193 2423 kubelet.go:3008] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Feb 13 20:48:43.000408 systemd[1]: Started sshd@111-10.0.0.8:22-10.0.0.1:34052.service - OpenSSH per-connection server daemon (10.0.0.1:34052).
Feb 13 20:48:43.031870 sshd[4470]: Accepted publickey for core from 10.0.0.1 port 34052 ssh2: RSA SHA256:ijJ+MGEzuaViQlbMobaPQLogVrdZNUxIS0COzLzMHAQ
Feb 13 20:48:43.033052 sshd[4470]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 20:48:43.036464 systemd-logind[1417]: New session 112 of user core.
Feb 13 20:48:43.050091 systemd[1]: Started session-112.scope - Session 112 of User core.
Feb 13 20:48:43.152830 sshd[4470]: pam_unix(sshd:session): session closed for user core
Feb 13 20:48:43.155993 systemd[1]: sshd@111-10.0.0.8:22-10.0.0.1:34052.service: Deactivated successfully.
Feb 13 20:48:43.158118 systemd[1]: session-112.scope: Deactivated successfully.
Feb 13 20:48:43.158842 systemd-logind[1417]: Session 112 logged out. Waiting for processes to exit.
Feb 13 20:48:43.159660 systemd-logind[1417]: Removed session 112.
Feb 13 20:48:44.266210 kubelet[2423]: E0213 20:48:44.266176 2423 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 13 20:48:44.266732 kubelet[2423]: E0213 20:48:44.266404 2423 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 13 20:48:44.267600 kubelet[2423]: E0213 20:48:44.267541 2423 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"install-cni\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/flannel/flannel:v0.22.0\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/flannel/flannel:v0.22.0\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/flannel/flannel/manifests/sha256:de969d65cb6e570ce2ff6f069132707b6da0e6d2adb43116d43743e2c31fd773: 429 Too Many Requests - Server message: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit\"" pod="kube-flannel/kube-flannel-ds-pdsbm" podUID="e24b7648-6eb5-485b-97d3-9e8d6d3764bf"
Feb 13 20:48:47.435568 kubelet[2423]: E0213 20:48:47.435510 2423 kubelet.go:3008] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Feb 13 20:48:48.163677 systemd[1]: Started sshd@112-10.0.0.8:22-10.0.0.1:34066.service - OpenSSH per-connection server daemon (10.0.0.1:34066).
Feb 13 20:48:48.196413 sshd[4486]: Accepted publickey for core from 10.0.0.1 port 34066 ssh2: RSA SHA256:ijJ+MGEzuaViQlbMobaPQLogVrdZNUxIS0COzLzMHAQ
Feb 13 20:48:48.197628 sshd[4486]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 20:48:48.201089 systemd-logind[1417]: New session 113 of user core.
Feb 13 20:48:48.213044 systemd[1]: Started session-113.scope - Session 113 of User core.
Feb 13 20:48:48.318446 sshd[4486]: pam_unix(sshd:session): session closed for user core
Feb 13 20:48:48.321871 systemd[1]: sshd@112-10.0.0.8:22-10.0.0.1:34066.service: Deactivated successfully.
Feb 13 20:48:48.323620 systemd[1]: session-113.scope: Deactivated successfully.
Feb 13 20:48:48.324367 systemd-logind[1417]: Session 113 logged out. Waiting for processes to exit.
Feb 13 20:48:48.325182 systemd-logind[1417]: Removed session 113.
Feb 13 20:48:52.266155 kubelet[2423]: E0213 20:48:52.266117 2423 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 13 20:48:52.436921 kubelet[2423]: E0213 20:48:52.436876 2423 kubelet.go:3008] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Feb 13 20:48:53.328892 systemd[1]: Started sshd@113-10.0.0.8:22-10.0.0.1:37034.service - OpenSSH per-connection server daemon (10.0.0.1:37034).
Feb 13 20:48:53.361192 sshd[4500]: Accepted publickey for core from 10.0.0.1 port 37034 ssh2: RSA SHA256:ijJ+MGEzuaViQlbMobaPQLogVrdZNUxIS0COzLzMHAQ
Feb 13 20:48:53.362459 sshd[4500]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 20:48:53.365863 systemd-logind[1417]: New session 114 of user core.
Feb 13 20:48:53.381088 systemd[1]: Started session-114.scope - Session 114 of User core.
Feb 13 20:48:53.482491 sshd[4500]: pam_unix(sshd:session): session closed for user core
Feb 13 20:48:53.485672 systemd[1]: sshd@113-10.0.0.8:22-10.0.0.1:37034.service: Deactivated successfully.
Feb 13 20:48:53.487290 systemd[1]: session-114.scope: Deactivated successfully.
Feb 13 20:48:53.487868 systemd-logind[1417]: Session 114 logged out. Waiting for processes to exit.
Feb 13 20:48:53.488788 systemd-logind[1417]: Removed session 114.
Feb 13 20:48:55.266731 kubelet[2423]: E0213 20:48:55.266353 2423 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 13 20:48:55.267467 kubelet[2423]: E0213 20:48:55.267378 2423 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"install-cni\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/flannel/flannel:v0.22.0\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/flannel/flannel:v0.22.0\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/flannel/flannel/manifests/sha256:de969d65cb6e570ce2ff6f069132707b6da0e6d2adb43116d43743e2c31fd773: 429 Too Many Requests - Server message: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit\"" pod="kube-flannel/kube-flannel-ds-pdsbm" podUID="e24b7648-6eb5-485b-97d3-9e8d6d3764bf"
Feb 13 20:48:57.438016 kubelet[2423]: E0213 20:48:57.437978 2423 kubelet.go:3008] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Feb 13 20:48:58.493490 systemd[1]: Started sshd@114-10.0.0.8:22-10.0.0.1:37050.service - OpenSSH per-connection server daemon (10.0.0.1:37050).
Feb 13 20:48:58.525262 sshd[4515]: Accepted publickey for core from 10.0.0.1 port 37050 ssh2: RSA SHA256:ijJ+MGEzuaViQlbMobaPQLogVrdZNUxIS0COzLzMHAQ
Feb 13 20:48:58.526511 sshd[4515]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 20:48:58.530092 systemd-logind[1417]: New session 115 of user core.
Feb 13 20:48:58.543028 systemd[1]: Started session-115.scope - Session 115 of User core.
Feb 13 20:48:58.646852 sshd[4515]: pam_unix(sshd:session): session closed for user core
Feb 13 20:48:58.650046 systemd[1]: sshd@114-10.0.0.8:22-10.0.0.1:37050.service: Deactivated successfully.
Feb 13 20:48:58.652443 systemd[1]: session-115.scope: Deactivated successfully.
Feb 13 20:48:58.653321 systemd-logind[1417]: Session 115 logged out. Waiting for processes to exit.
Feb 13 20:48:58.654128 systemd-logind[1417]: Removed session 115.
Feb 13 20:49:02.439535 kubelet[2423]: E0213 20:49:02.439417 2423 kubelet.go:3008] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Feb 13 20:49:03.657698 systemd[1]: Started sshd@115-10.0.0.8:22-10.0.0.1:44282.service - OpenSSH per-connection server daemon (10.0.0.1:44282).
Feb 13 20:49:03.689315 sshd[4530]: Accepted publickey for core from 10.0.0.1 port 44282 ssh2: RSA SHA256:ijJ+MGEzuaViQlbMobaPQLogVrdZNUxIS0COzLzMHAQ
Feb 13 20:49:03.690541 sshd[4530]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 20:49:03.694666 systemd-logind[1417]: New session 116 of user core.
Feb 13 20:49:03.704056 systemd[1]: Started session-116.scope - Session 116 of User core.
Feb 13 20:49:03.809134 sshd[4530]: pam_unix(sshd:session): session closed for user core
Feb 13 20:49:03.812363 systemd[1]: sshd@115-10.0.0.8:22-10.0.0.1:44282.service: Deactivated successfully.
Feb 13 20:49:03.815025 systemd[1]: session-116.scope: Deactivated successfully.
Feb 13 20:49:03.816003 systemd-logind[1417]: Session 116 logged out. Waiting for processes to exit.
Feb 13 20:49:03.816885 systemd-logind[1417]: Removed session 116.
Feb 13 20:49:07.440452 kubelet[2423]: E0213 20:49:07.440414 2423 kubelet.go:3008] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Feb 13 20:49:08.265582 kubelet[2423]: E0213 20:49:08.265542 2423 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 13 20:49:08.266249 kubelet[2423]: E0213 20:49:08.266202 2423 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"install-cni\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/flannel/flannel:v0.22.0\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/flannel/flannel:v0.22.0\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/flannel/flannel/manifests/sha256:de969d65cb6e570ce2ff6f069132707b6da0e6d2adb43116d43743e2c31fd773: 429 Too Many Requests - Server message: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit\"" pod="kube-flannel/kube-flannel-ds-pdsbm" podUID="e24b7648-6eb5-485b-97d3-9e8d6d3764bf"
Feb 13 20:49:08.819960 systemd[1]: Started sshd@116-10.0.0.8:22-10.0.0.1:44290.service - OpenSSH per-connection server daemon (10.0.0.1:44290).
Feb 13 20:49:08.852160 sshd[4545]: Accepted publickey for core from 10.0.0.1 port 44290 ssh2: RSA SHA256:ijJ+MGEzuaViQlbMobaPQLogVrdZNUxIS0COzLzMHAQ
Feb 13 20:49:08.853415 sshd[4545]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 20:49:08.856883 systemd-logind[1417]: New session 117 of user core.
Feb 13 20:49:08.871085 systemd[1]: Started session-117.scope - Session 117 of User core.
Feb 13 20:49:08.972737 sshd[4545]: pam_unix(sshd:session): session closed for user core
Feb 13 20:49:08.975961 systemd[1]: sshd@116-10.0.0.8:22-10.0.0.1:44290.service: Deactivated successfully.
Feb 13 20:49:08.977639 systemd[1]: session-117.scope: Deactivated successfully.
Feb 13 20:49:08.978356 systemd-logind[1417]: Session 117 logged out. Waiting for processes to exit.
Feb 13 20:49:08.979228 systemd-logind[1417]: Removed session 117.