Feb 13 20:27:53.961688 kernel: Booting Linux on physical CPU 0x0000000000 [0x413fd0c1]
Feb 13 20:27:53.961709 kernel: Linux version 6.6.74-flatcar (build@pony-truck.infra.kinvolk.io) (aarch64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p17) 13.3.1 20240614, GNU ld (Gentoo 2.42 p3) 2.42.0) #1 SMP PREEMPT Thu Feb 13 18:13:29 -00 2025
Feb 13 20:27:53.961719 kernel: KASLR enabled
Feb 13 20:27:53.961726 kernel: efi: EFI v2.7 by EDK II
Feb 13 20:27:53.961732 kernel: efi: SMBIOS 3.0=0xdced0000 MEMATTR=0xdba86018 ACPI 2.0=0xd9710018 RNG=0xd971e498 MEMRESERVE=0xd9b43d18
Feb 13 20:27:53.961739 kernel: random: crng init done
Feb 13 20:27:53.961746 kernel: ACPI: Early table checksum verification disabled
Feb 13 20:27:53.961753 kernel: ACPI: RSDP 0x00000000D9710018 000024 (v02 BOCHS )
Feb 13 20:27:53.961760 kernel: ACPI: XSDT 0x00000000D971FE98 000064 (v01 BOCHS BXPC 00000001 01000013)
Feb 13 20:27:53.961768 kernel: ACPI: FACP 0x00000000D971FA98 000114 (v06 BOCHS BXPC 00000001 BXPC 00000001)
Feb 13 20:27:53.961775 kernel: ACPI: DSDT 0x00000000D9717518 0014A2 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Feb 13 20:27:53.961781 kernel: ACPI: APIC 0x00000000D971FC18 0001A8 (v04 BOCHS BXPC 00000001 BXPC 00000001)
Feb 13 20:27:53.961788 kernel: ACPI: PPTT 0x00000000D971D898 00009C (v02 BOCHS BXPC 00000001 BXPC 00000001)
Feb 13 20:27:53.961794 kernel: ACPI: GTDT 0x00000000D971E818 000060 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Feb 13 20:27:53.961802 kernel: ACPI: MCFG 0x00000000D971E918 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001)
Feb 13 20:27:53.961810 kernel: ACPI: SPCR 0x00000000D971FF98 000050 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Feb 13 20:27:53.961825 kernel: ACPI: DBG2 0x00000000D971E418 000057 (v00 BOCHS BXPC 00000001 BXPC 00000001)
Feb 13 20:27:53.961832 kernel: ACPI: IORT 0x00000000D971E718 000080 (v03 BOCHS BXPC 00000001 BXPC 00000001)
Feb 13 20:27:53.961839 kernel: ACPI: SPCR: console: pl011,mmio,0x9000000,9600
Feb 13 20:27:53.961846 kernel: NUMA: Failed to initialise from firmware
Feb 13 20:27:53.961854 kernel: NUMA: Faking a node at [mem 0x0000000040000000-0x00000000dcffffff]
Feb 13 20:27:53.961861 kernel: NUMA: NODE_DATA [mem 0xdc958800-0xdc95dfff]
Feb 13 20:27:53.961867 kernel: Zone ranges:
Feb 13 20:27:53.961874 kernel: DMA [mem 0x0000000040000000-0x00000000dcffffff]
Feb 13 20:27:53.961881 kernel: DMA32 empty
Feb 13 20:27:53.961890 kernel: Normal empty
Feb 13 20:27:53.961897 kernel: Movable zone start for each node
Feb 13 20:27:53.961903 kernel: Early memory node ranges
Feb 13 20:27:53.961910 kernel: node 0: [mem 0x0000000040000000-0x00000000d976ffff]
Feb 13 20:27:53.961917 kernel: node 0: [mem 0x00000000d9770000-0x00000000d9b3ffff]
Feb 13 20:27:53.961924 kernel: node 0: [mem 0x00000000d9b40000-0x00000000dce1ffff]
Feb 13 20:27:53.961931 kernel: node 0: [mem 0x00000000dce20000-0x00000000dceaffff]
Feb 13 20:27:53.961938 kernel: node 0: [mem 0x00000000dceb0000-0x00000000dcebffff]
Feb 13 20:27:53.961945 kernel: node 0: [mem 0x00000000dcec0000-0x00000000dcfdffff]
Feb 13 20:27:53.961952 kernel: node 0: [mem 0x00000000dcfe0000-0x00000000dcffffff]
Feb 13 20:27:53.961959 kernel: Initmem setup node 0 [mem 0x0000000040000000-0x00000000dcffffff]
Feb 13 20:27:53.961965 kernel: On node 0, zone DMA: 12288 pages in unavailable ranges
Feb 13 20:27:53.961973 kernel: psci: probing for conduit method from ACPI.
Feb 13 20:27:53.961980 kernel: psci: PSCIv1.1 detected in firmware.
Feb 13 20:27:53.961988 kernel: psci: Using standard PSCI v0.2 function IDs
Feb 13 20:27:53.961997 kernel: psci: Trusted OS migration not required
Feb 13 20:27:53.962005 kernel: psci: SMC Calling Convention v1.1
Feb 13 20:27:53.962012 kernel: smccc: KVM: hypervisor services detected (0x00000000 0x00000000 0x00000000 0x00000003)
Feb 13 20:27:53.962021 kernel: percpu: Embedded 31 pages/cpu s86696 r8192 d32088 u126976
Feb 13 20:27:53.962028 kernel: pcpu-alloc: s86696 r8192 d32088 u126976 alloc=31*4096
Feb 13 20:27:53.962036 kernel: pcpu-alloc: [0] 0 [0] 1 [0] 2 [0] 3
Feb 13 20:27:53.962043 kernel: Detected PIPT I-cache on CPU0
Feb 13 20:27:53.962050 kernel: CPU features: detected: GIC system register CPU interface
Feb 13 20:27:53.962058 kernel: CPU features: detected: Hardware dirty bit management
Feb 13 20:27:53.962065 kernel: CPU features: detected: Spectre-v4
Feb 13 20:27:53.962072 kernel: CPU features: detected: Spectre-BHB
Feb 13 20:27:53.962079 kernel: CPU features: kernel page table isolation forced ON by KASLR
Feb 13 20:27:53.962086 kernel: CPU features: detected: Kernel page table isolation (KPTI)
Feb 13 20:27:53.962095 kernel: CPU features: detected: ARM erratum 1418040
Feb 13 20:27:53.962103 kernel: CPU features: detected: SSBS not fully self-synchronizing
Feb 13 20:27:53.962110 kernel: alternatives: applying boot alternatives
Feb 13 20:27:53.962119 kernel: Kernel command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected acpi=force verity.usrhash=c15c751c06cfb933aa98417326b93d899c08a83ce060a940cd01082629c201a7
Feb 13 20:27:53.962126 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Feb 13 20:27:53.962134 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Feb 13 20:27:53.962141 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Feb 13 20:27:53.962148 kernel: Fallback order for Node 0: 0
Feb 13 20:27:53.962155 kernel: Built 1 zonelists, mobility grouping on. Total pages: 633024
Feb 13 20:27:53.962163 kernel: Policy zone: DMA
Feb 13 20:27:53.962170 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Feb 13 20:27:53.962178 kernel: software IO TLB: area num 4.
Feb 13 20:27:53.962186 kernel: software IO TLB: mapped [mem 0x00000000d2e00000-0x00000000d6e00000] (64MB)
Feb 13 20:27:53.962194 kernel: Memory: 2386532K/2572288K available (10240K kernel code, 2186K rwdata, 8096K rodata, 39360K init, 897K bss, 185756K reserved, 0K cma-reserved)
Feb 13 20:27:53.962201 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1
Feb 13 20:27:53.962214 kernel: rcu: Preemptible hierarchical RCU implementation.
Feb 13 20:27:53.962222 kernel: rcu: RCU event tracing is enabled.
Feb 13 20:27:53.962230 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4.
Feb 13 20:27:53.962237 kernel: Trampoline variant of Tasks RCU enabled.
Feb 13 20:27:53.962245 kernel: Tracing variant of Tasks RCU enabled.
Feb 13 20:27:53.962252 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Feb 13 20:27:53.962260 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4
Feb 13 20:27:53.962267 kernel: NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0
Feb 13 20:27:53.962276 kernel: GICv3: 256 SPIs implemented
Feb 13 20:27:53.962284 kernel: GICv3: 0 Extended SPIs implemented
Feb 13 20:27:53.962291 kernel: Root IRQ handler: gic_handle_irq
Feb 13 20:27:53.962306 kernel: GICv3: GICv3 features: 16 PPIs, DirectLPI
Feb 13 20:27:53.962314 kernel: GICv3: CPU0: found redistributor 0 region 0:0x00000000080a0000
Feb 13 20:27:53.962321 kernel: ITS [mem 0x08080000-0x0809ffff]
Feb 13 20:27:53.962329 kernel: ITS@0x0000000008080000: allocated 8192 Devices @400c0000 (indirect, esz 8, psz 64K, shr 1)
Feb 13 20:27:53.962336 kernel: ITS@0x0000000008080000: allocated 8192 Interrupt Collections @400d0000 (flat, esz 8, psz 64K, shr 1)
Feb 13 20:27:53.962344 kernel: GICv3: using LPI property table @0x00000000400f0000
Feb 13 20:27:53.962351 kernel: GICv3: CPU0: using allocated LPI pending table @0x0000000040100000
Feb 13 20:27:53.962358 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Feb 13 20:27:53.962367 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Feb 13 20:27:53.962375 kernel: arch_timer: cp15 timer(s) running at 25.00MHz (virt).
Feb 13 20:27:53.962383 kernel: clocksource: arch_sys_counter: mask: 0xffffffffffffff max_cycles: 0x5c40939b5, max_idle_ns: 440795202646 ns
Feb 13 20:27:53.962390 kernel: sched_clock: 56 bits at 25MHz, resolution 40ns, wraps every 4398046511100ns
Feb 13 20:27:53.962397 kernel: arm-pv: using stolen time PV
Feb 13 20:27:53.962405 kernel: Console: colour dummy device 80x25
Feb 13 20:27:53.962413 kernel: ACPI: Core revision 20230628
Feb 13 20:27:53.962421 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 50.00 BogoMIPS (lpj=25000)
Feb 13 20:27:53.962428 kernel: pid_max: default: 32768 minimum: 301
Feb 13 20:27:53.962436 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity
Feb 13 20:27:53.962444 kernel: landlock: Up and running.
Feb 13 20:27:53.962452 kernel: SELinux: Initializing.
Feb 13 20:27:53.962459 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Feb 13 20:27:53.962467 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Feb 13 20:27:53.962475 kernel: RCU Tasks: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Feb 13 20:27:53.962482 kernel: RCU Tasks Trace: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Feb 13 20:27:53.962490 kernel: rcu: Hierarchical SRCU implementation.
Feb 13 20:27:53.962498 kernel: rcu: Max phase no-delay instances is 400.
Feb 13 20:27:53.962505 kernel: Platform MSI: ITS@0x8080000 domain created
Feb 13 20:27:53.962514 kernel: PCI/MSI: ITS@0x8080000 domain created
Feb 13 20:27:53.962521 kernel: Remapping and enabling EFI services.
Feb 13 20:27:53.962529 kernel: smp: Bringing up secondary CPUs ...
Feb 13 20:27:53.962536 kernel: Detected PIPT I-cache on CPU1
Feb 13 20:27:53.962544 kernel: GICv3: CPU1: found redistributor 1 region 0:0x00000000080c0000
Feb 13 20:27:53.962552 kernel: GICv3: CPU1: using allocated LPI pending table @0x0000000040110000
Feb 13 20:27:53.962559 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Feb 13 20:27:53.962567 kernel: CPU1: Booted secondary processor 0x0000000001 [0x413fd0c1]
Feb 13 20:27:53.962574 kernel: Detected PIPT I-cache on CPU2
Feb 13 20:27:53.962582 kernel: GICv3: CPU2: found redistributor 2 region 0:0x00000000080e0000
Feb 13 20:27:53.962591 kernel: GICv3: CPU2: using allocated LPI pending table @0x0000000040120000
Feb 13 20:27:53.962599 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Feb 13 20:27:53.962612 kernel: CPU2: Booted secondary processor 0x0000000002 [0x413fd0c1]
Feb 13 20:27:53.962622 kernel: Detected PIPT I-cache on CPU3
Feb 13 20:27:53.962630 kernel: GICv3: CPU3: found redistributor 3 region 0:0x0000000008100000
Feb 13 20:27:53.962638 kernel: GICv3: CPU3: using allocated LPI pending table @0x0000000040130000
Feb 13 20:27:53.962646 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Feb 13 20:27:53.962653 kernel: CPU3: Booted secondary processor 0x0000000003 [0x413fd0c1]
Feb 13 20:27:53.962661 kernel: smp: Brought up 1 node, 4 CPUs
Feb 13 20:27:53.962671 kernel: SMP: Total of 4 processors activated.
Feb 13 20:27:53.962679 kernel: CPU features: detected: 32-bit EL0 Support
Feb 13 20:27:53.962687 kernel: CPU features: detected: Data cache clean to the PoU not required for I/D coherence
Feb 13 20:27:53.962695 kernel: CPU features: detected: Common not Private translations
Feb 13 20:27:53.962703 kernel: CPU features: detected: CRC32 instructions
Feb 13 20:27:53.962711 kernel: CPU features: detected: Enhanced Virtualization Traps
Feb 13 20:27:53.962719 kernel: CPU features: detected: RCpc load-acquire (LDAPR)
Feb 13 20:27:53.962739 kernel: CPU features: detected: LSE atomic instructions
Feb 13 20:27:53.962749 kernel: CPU features: detected: Privileged Access Never
Feb 13 20:27:53.962757 kernel: CPU features: detected: RAS Extension Support
Feb 13 20:27:53.962765 kernel: CPU features: detected: Speculative Store Bypassing Safe (SSBS)
Feb 13 20:27:53.962773 kernel: CPU: All CPU(s) started at EL1
Feb 13 20:27:53.962781 kernel: alternatives: applying system-wide alternatives
Feb 13 20:27:53.962789 kernel: devtmpfs: initialized
Feb 13 20:27:53.962797 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Feb 13 20:27:53.962805 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear)
Feb 13 20:27:53.962813 kernel: pinctrl core: initialized pinctrl subsystem
Feb 13 20:27:53.962823 kernel: SMBIOS 3.0.0 present.
Feb 13 20:27:53.962831 kernel: DMI: QEMU KVM Virtual Machine, BIOS edk2-20230524-3.fc38 05/24/2023
Feb 13 20:27:53.962839 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Feb 13 20:27:53.962847 kernel: DMA: preallocated 512 KiB GFP_KERNEL pool for atomic allocations
Feb 13 20:27:53.962855 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations
Feb 13 20:27:53.962863 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations
Feb 13 20:27:53.962871 kernel: audit: initializing netlink subsys (disabled)
Feb 13 20:27:53.962879 kernel: audit: type=2000 audit(0.025:1): state=initialized audit_enabled=0 res=1
Feb 13 20:27:53.962887 kernel: thermal_sys: Registered thermal governor 'step_wise'
Feb 13 20:27:53.962896 kernel: cpuidle: using governor menu
Feb 13 20:27:53.962904 kernel: hw-breakpoint: found 6 breakpoint and 4 watchpoint registers.
Feb 13 20:27:53.962912 kernel: ASID allocator initialised with 32768 entries
Feb 13 20:27:53.962920 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Feb 13 20:27:53.962928 kernel: Serial: AMBA PL011 UART driver
Feb 13 20:27:53.962936 kernel: Modules: 2G module region forced by RANDOMIZE_MODULE_REGION_FULL
Feb 13 20:27:53.962944 kernel: Modules: 0 pages in range for non-PLT usage
Feb 13 20:27:53.962952 kernel: Modules: 509040 pages in range for PLT usage
Feb 13 20:27:53.962960 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Feb 13 20:27:53.962969 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 1.00 GiB page
Feb 13 20:27:53.962977 kernel: HugeTLB: registered 32.0 MiB page size, pre-allocated 0 pages
Feb 13 20:27:53.962985 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 32.0 MiB page
Feb 13 20:27:53.962993 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Feb 13 20:27:53.963001 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 2.00 MiB page
Feb 13 20:27:53.963009 kernel: HugeTLB: registered 64.0 KiB page size, pre-allocated 0 pages
Feb 13 20:27:53.963017 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 64.0 KiB page
Feb 13 20:27:53.963025 kernel: ACPI: Added _OSI(Module Device)
Feb 13 20:27:53.963033 kernel: ACPI: Added _OSI(Processor Device)
Feb 13 20:27:53.963042 kernel: ACPI: Added _OSI(3.0 _SCP Extensions)
Feb 13 20:27:53.963050 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Feb 13 20:27:53.963058 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Feb 13 20:27:53.963066 kernel: ACPI: Interpreter enabled
Feb 13 20:27:53.963074 kernel: ACPI: Using GIC for interrupt routing
Feb 13 20:27:53.963081 kernel: ACPI: MCFG table detected, 1 entries
Feb 13 20:27:53.963089 kernel: ARMH0011:00: ttyAMA0 at MMIO 0x9000000 (irq = 12, base_baud = 0) is a SBSA
Feb 13 20:27:53.963097 kernel: printk: console [ttyAMA0] enabled
Feb 13 20:27:53.963105 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Feb 13 20:27:53.963247 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Feb 13 20:27:53.963361 kernel: acpi PNP0A08:00: _OSC: platform does not support [LTR]
Feb 13 20:27:53.963434 kernel: acpi PNP0A08:00: _OSC: OS now controls [PCIeHotplug PME AER PCIeCapability]
Feb 13 20:27:53.963502 kernel: acpi PNP0A08:00: ECAM area [mem 0x4010000000-0x401fffffff] reserved by PNP0C02:00
Feb 13 20:27:53.963571 kernel: acpi PNP0A08:00: ECAM at [mem 0x4010000000-0x401fffffff] for [bus 00-ff]
Feb 13 20:27:53.963582 kernel: ACPI: Remapped I/O 0x000000003eff0000 to [io 0x0000-0xffff window]
Feb 13 20:27:53.963590 kernel: PCI host bridge to bus 0000:00
Feb 13 20:27:53.963667 kernel: pci_bus 0000:00: root bus resource [mem 0x10000000-0x3efeffff window]
Feb 13 20:27:53.963730 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0xffff window]
Feb 13 20:27:53.963794 kernel: pci_bus 0000:00: root bus resource [mem 0x8000000000-0xffffffffff window]
Feb 13 20:27:53.963856 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Feb 13 20:27:53.963942 kernel: pci 0000:00:00.0: [1b36:0008] type 00 class 0x060000
Feb 13 20:27:53.964027 kernel: pci 0000:00:01.0: [1af4:1005] type 00 class 0x00ff00
Feb 13 20:27:53.964104 kernel: pci 0000:00:01.0: reg 0x10: [io 0x0000-0x001f]
Feb 13 20:27:53.964177 kernel: pci 0000:00:01.0: reg 0x14: [mem 0x10000000-0x10000fff]
Feb 13 20:27:53.964322 kernel: pci 0000:00:01.0: reg 0x20: [mem 0x8000000000-0x8000003fff 64bit pref]
Feb 13 20:27:53.964407 kernel: pci 0000:00:01.0: BAR 4: assigned [mem 0x8000000000-0x8000003fff 64bit pref]
Feb 13 20:27:53.964482 kernel: pci 0000:00:01.0: BAR 1: assigned [mem 0x10000000-0x10000fff]
Feb 13 20:27:53.964555 kernel: pci 0000:00:01.0: BAR 0: assigned [io 0x1000-0x101f]
Feb 13 20:27:53.964621 kernel: pci_bus 0000:00: resource 4 [mem 0x10000000-0x3efeffff window]
Feb 13 20:27:53.964684 kernel: pci_bus 0000:00: resource 5 [io 0x0000-0xffff window]
Feb 13 20:27:53.964753 kernel: pci_bus 0000:00: resource 6 [mem 0x8000000000-0xffffffffff window]
Feb 13 20:27:53.964763 kernel: ACPI: PCI: Interrupt link GSI0 configured for IRQ 35
Feb 13 20:27:53.964772 kernel: ACPI: PCI: Interrupt link GSI1 configured for IRQ 36
Feb 13 20:27:53.964780 kernel: ACPI: PCI: Interrupt link GSI2 configured for IRQ 37
Feb 13 20:27:53.964788 kernel: ACPI: PCI: Interrupt link GSI3 configured for IRQ 38
Feb 13 20:27:53.964796 kernel: iommu: Default domain type: Translated
Feb 13 20:27:53.964804 kernel: iommu: DMA domain TLB invalidation policy: strict mode
Feb 13 20:27:53.964812 kernel: efivars: Registered efivars operations
Feb 13 20:27:53.964823 kernel: vgaarb: loaded
Feb 13 20:27:53.964831 kernel: clocksource: Switched to clocksource arch_sys_counter
Feb 13 20:27:53.964839 kernel: VFS: Disk quotas dquot_6.6.0
Feb 13 20:27:53.964847 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Feb 13 20:27:53.964855 kernel: pnp: PnP ACPI init
Feb 13 20:27:53.964930 kernel: system 00:00: [mem 0x4010000000-0x401fffffff window] could not be reserved
Feb 13 20:27:53.964942 kernel: pnp: PnP ACPI: found 1 devices
Feb 13 20:27:53.964950 kernel: NET: Registered PF_INET protocol family
Feb 13 20:27:53.964960 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Feb 13 20:27:53.964968 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Feb 13 20:27:53.964977 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Feb 13 20:27:53.964985 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Feb 13 20:27:53.964993 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
Feb 13 20:27:53.965001 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Feb 13 20:27:53.965009 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Feb 13 20:27:53.965017 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Feb 13 20:27:53.965025 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Feb 13 20:27:53.965035 kernel: PCI: CLS 0 bytes, default 64
Feb 13 20:27:53.965044 kernel: kvm [1]: HYP mode not available
Feb 13 20:27:53.965051 kernel: Initialise system trusted keyrings
Feb 13 20:27:53.965059 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Feb 13 20:27:53.965067 kernel: Key type asymmetric registered
Feb 13 20:27:53.965075 kernel: Asymmetric key parser 'x509' registered
Feb 13 20:27:53.965083 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250)
Feb 13 20:27:53.965091 kernel: io scheduler mq-deadline registered
Feb 13 20:27:53.965099 kernel: io scheduler kyber registered
Feb 13 20:27:53.965108 kernel: io scheduler bfq registered
Feb 13 20:27:53.965117 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0C:00/input/input0
Feb 13 20:27:53.965125 kernel: ACPI: button: Power Button [PWRB]
Feb 13 20:27:53.965133 kernel: ACPI: \_SB_.PCI0.GSI1: Enabled at IRQ 36
Feb 13 20:27:53.965212 kernel: virtio-pci 0000:00:01.0: enabling device (0005 -> 0007)
Feb 13 20:27:53.965224 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Feb 13 20:27:53.965232 kernel: thunder_xcv, ver 1.0
Feb 13 20:27:53.965240 kernel: thunder_bgx, ver 1.0
Feb 13 20:27:53.965248 kernel: nicpf, ver 1.0
Feb 13 20:27:53.965258 kernel: nicvf, ver 1.0
Feb 13 20:27:53.965358 kernel: rtc-efi rtc-efi.0: registered as rtc0
Feb 13 20:27:53.965427 kernel: rtc-efi rtc-efi.0: setting system clock to 2025-02-13T20:27:53 UTC (1739478473)
Feb 13 20:27:53.965438 kernel: hid: raw HID events driver (C) Jiri Kosina
Feb 13 20:27:53.965446 kernel: hw perfevents: enabled with armv8_pmuv3_0 PMU driver, 7 counters available
Feb 13 20:27:53.965455 kernel: watchdog: Delayed init of the lockup detector failed: -19
Feb 13 20:27:53.965463 kernel: watchdog: Hard watchdog permanently disabled
Feb 13 20:27:53.965471 kernel: NET: Registered PF_INET6 protocol family
Feb 13 20:27:53.965482 kernel: Segment Routing with IPv6
Feb 13 20:27:53.965490 kernel: In-situ OAM (IOAM) with IPv6
Feb 13 20:27:53.965498 kernel: NET: Registered PF_PACKET protocol family
Feb 13 20:27:53.965506 kernel: Key type dns_resolver registered
Feb 13 20:27:53.965514 kernel: registered taskstats version 1
Feb 13 20:27:53.965522 kernel: Loading compiled-in X.509 certificates
Feb 13 20:27:53.965530 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.74-flatcar: 8bd805622262697b24b0fa7c407ae82c4289ceec'
Feb 13 20:27:53.965538 kernel: Key type .fscrypt registered
Feb 13 20:27:53.965546 kernel: Key type fscrypt-provisioning registered
Feb 13 20:27:53.965555 kernel: ima: No TPM chip found, activating TPM-bypass!
Feb 13 20:27:53.965563 kernel: ima: Allocated hash algorithm: sha1
Feb 13 20:27:53.965571 kernel: ima: No architecture policies found
Feb 13 20:27:53.965579 kernel: alg: No test for fips(ansi_cprng) (fips_ansi_cprng)
Feb 13 20:27:53.965587 kernel: clk: Disabling unused clocks
Feb 13 20:27:53.965595 kernel: Freeing unused kernel memory: 39360K
Feb 13 20:27:53.965603 kernel: Run /init as init process
Feb 13 20:27:53.965611 kernel: with arguments:
Feb 13 20:27:53.965619 kernel: /init
Feb 13 20:27:53.965629 kernel: with environment:
Feb 13 20:27:53.965637 kernel: HOME=/
Feb 13 20:27:53.965645 kernel: TERM=linux
Feb 13 20:27:53.965652 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
Feb 13 20:27:53.965662 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Feb 13 20:27:53.965672 systemd[1]: Detected virtualization kvm.
Feb 13 20:27:53.965681 systemd[1]: Detected architecture arm64.
Feb 13 20:27:53.965689 systemd[1]: Running in initrd.
Feb 13 20:27:53.965698 systemd[1]: No hostname configured, using default hostname.
Feb 13 20:27:53.965707 systemd[1]: Hostname set to <localhost>.
Feb 13 20:27:53.965716 systemd[1]: Initializing machine ID from VM UUID.
Feb 13 20:27:53.965725 systemd[1]: Queued start job for default target initrd.target.
Feb 13 20:27:53.965734 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Feb 13 20:27:53.965744 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Feb 13 20:27:53.965753 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Feb 13 20:27:53.965761 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Feb 13 20:27:53.965772 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Feb 13 20:27:53.965781 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Feb 13 20:27:53.965791 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Feb 13 20:27:53.965800 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Feb 13 20:27:53.965808 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Feb 13 20:27:53.965817 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Feb 13 20:27:53.965827 systemd[1]: Reached target paths.target - Path Units.
Feb 13 20:27:53.965836 systemd[1]: Reached target slices.target - Slice Units.
Feb 13 20:27:53.965844 systemd[1]: Reached target swap.target - Swaps.
Feb 13 20:27:53.965853 systemd[1]: Reached target timers.target - Timer Units.
Feb 13 20:27:53.965862 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Feb 13 20:27:53.965870 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Feb 13 20:27:53.965879 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Feb 13 20:27:53.965888 systemd[1]: Listening on systemd-journald.socket - Journal Socket.
Feb 13 20:27:53.965897 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Feb 13 20:27:53.965907 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Feb 13 20:27:53.965916 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Feb 13 20:27:53.965924 systemd[1]: Reached target sockets.target - Socket Units.
Feb 13 20:27:53.965933 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Feb 13 20:27:53.965941 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Feb 13 20:27:53.965950 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Feb 13 20:27:53.965959 systemd[1]: Starting systemd-fsck-usr.service...
Feb 13 20:27:53.965967 systemd[1]: Starting systemd-journald.service - Journal Service...
Feb 13 20:27:53.965976 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Feb 13 20:27:53.965987 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Feb 13 20:27:53.965995 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Feb 13 20:27:53.966004 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Feb 13 20:27:53.966013 systemd[1]: Finished systemd-fsck-usr.service.
Feb 13 20:27:53.966022 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Feb 13 20:27:53.966032 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Feb 13 20:27:53.966042 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Feb 13 20:27:53.966051 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Feb 13 20:27:53.966076 systemd-journald[237]: Collecting audit messages is disabled.
Feb 13 20:27:53.966098 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Feb 13 20:27:53.966107 systemd-journald[237]: Journal started
Feb 13 20:27:53.966126 systemd-journald[237]: Runtime Journal (/run/log/journal/edcd30bdd4f54f1a87c6020f2a6b1065) is 5.9M, max 47.3M, 41.4M free.
Feb 13 20:27:53.956872 systemd-modules-load[239]: Inserted module 'overlay'
Feb 13 20:27:53.969222 systemd[1]: Started systemd-journald.service - Journal Service.
Feb 13 20:27:53.973321 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Feb 13 20:27:53.976570 kernel: Bridge firewalling registered
Feb 13 20:27:53.975705 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Feb 13 20:27:53.975869 systemd-modules-load[239]: Inserted module 'br_netfilter'
Feb 13 20:27:53.977217 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Feb 13 20:27:53.981255 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Feb 13 20:27:53.984786 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Feb 13 20:27:53.986862 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Feb 13 20:27:53.989894 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Feb 13 20:27:53.991507 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Feb 13 20:27:53.999173 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Feb 13 20:27:54.002250 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Feb 13 20:27:54.006043 dracut-cmdline[272]: dracut-dracut-053
Feb 13 20:27:54.008640 dracut-cmdline[272]: Using kernel command line parameters: rd.driver.pre=btrfs BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected acpi=force verity.usrhash=c15c751c06cfb933aa98417326b93d899c08a83ce060a940cd01082629c201a7
Feb 13 20:27:54.036290 systemd-resolved[281]: Positive Trust Anchors:
Feb 13 20:27:54.036316 systemd-resolved[281]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Feb 13 20:27:54.036348 systemd-resolved[281]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Feb 13 20:27:54.042232 systemd-resolved[281]: Defaulting to hostname 'linux'.
Feb 13 20:27:54.043485 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Feb 13 20:27:54.046058 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Feb 13 20:27:54.081310 kernel: SCSI subsystem initialized
Feb 13 20:27:54.084331 kernel: Loading iSCSI transport class v2.0-870.
Feb 13 20:27:54.092334 kernel: iscsi: registered transport (tcp)
Feb 13 20:27:54.105311 kernel: iscsi: registered transport (qla4xxx)
Feb 13 20:27:54.105333 kernel: QLogic iSCSI HBA Driver
Feb 13 20:27:54.152037 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Feb 13 20:27:54.168448 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Feb 13 20:27:54.185584 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Feb 13 20:27:54.185632 kernel: device-mapper: uevent: version 1.0.3
Feb 13 20:27:54.186741 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com
Feb 13 20:27:54.236328 kernel: raid6: neonx8 gen() 15789 MB/s
Feb 13 20:27:54.253318 kernel: raid6: neonx4 gen() 15637 MB/s
Feb 13 20:27:54.270314 kernel: raid6: neonx2 gen() 13207 MB/s
Feb 13 20:27:54.287316 kernel: raid6: neonx1 gen() 10479 MB/s
Feb 13 20:27:54.304320 kernel: raid6: int64x8 gen() 6956 MB/s
Feb 13 20:27:54.321321 kernel: raid6: int64x4 gen() 7334 MB/s
Feb 13 20:27:54.338319 kernel: raid6: int64x2 gen() 6123 MB/s
Feb 13 20:27:54.355417 kernel: raid6: int64x1 gen() 5044 MB/s
Feb 13 20:27:54.355435 kernel: raid6: using algorithm neonx8 gen() 15789 MB/s
Feb 13 20:27:54.373425 kernel: raid6: .... xor() 11926 MB/s, rmw enabled
Feb 13 20:27:54.373439 kernel: raid6: using neon recovery algorithm
Feb 13 20:27:54.378313 kernel: xor: measuring software checksum speed
Feb 13 20:27:54.379516 kernel: 8regs : 16749 MB/sec
Feb 13 20:27:54.379529 kernel: 32regs : 19650 MB/sec
Feb 13 20:27:54.380757 kernel: arm64_neon : 26927 MB/sec
Feb 13 20:27:54.380769 kernel: xor: using function: arm64_neon (26927 MB/sec)
Feb 13 20:27:54.430325 kernel: Btrfs loaded, zoned=no, fsverity=no
Feb 13 20:27:54.442359 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Feb 13 20:27:54.451488 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Feb 13 20:27:54.465217 systemd-udevd[460]: Using default interface naming scheme 'v255'.
Feb 13 20:27:54.468304 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Feb 13 20:27:54.470850 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Feb 13 20:27:54.485758 dracut-pre-trigger[468]: rd.md=0: removing MD RAID activation
Feb 13 20:27:54.512891 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Feb 13 20:27:54.525440 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Feb 13 20:27:54.564475 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Feb 13 20:27:54.571630 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Feb 13 20:27:54.584496 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Feb 13 20:27:54.586475 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Feb 13 20:27:54.590370 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Feb 13 20:27:54.591830 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Feb 13 20:27:54.605513 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Feb 13 20:27:54.617691 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Feb 13 20:27:54.621960 kernel: virtio_blk virtio1: 1/0/0 default/read/poll queues
Feb 13 20:27:54.629540 kernel: virtio_blk virtio1: [vda] 19775488 512-byte logical blocks (10.1 GB/9.43 GiB)
Feb 13 20:27:54.629648 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Feb 13 20:27:54.629660 kernel: GPT:9289727 != 19775487
Feb 13 20:27:54.629670 kernel: GPT:Alternate GPT header not at the end of the disk.
Feb 13 20:27:54.629680 kernel: GPT:9289727 != 19775487
Feb 13 20:27:54.629692 kernel: GPT: Use GNU Parted to correct GPT errors.
Feb 13 20:27:54.629703 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Feb 13 20:27:54.617764 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Feb 13 20:27:54.621346 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Feb 13 20:27:54.623072 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Feb 13 20:27:54.623136 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Feb 13 20:27:54.624370 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Feb 13 20:27:54.634469 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Feb 13 20:27:54.635962 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Feb 13 20:27:54.648021 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Feb 13 20:27:54.655547 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Feb 13 20:27:54.661418 kernel: BTRFS: device fsid 4bb2b262-8ef2-48e3-80f4-24f9d7a85bf6 devid 1 transid 40 /dev/vda3 scanned by (udev-worker) (504)
Feb 13 20:27:54.661452 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/vda6 scanned by (udev-worker) (508)
Feb 13 20:27:54.667372 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT.
Feb 13 20:27:54.671959 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM.
Feb 13 20:27:54.678558 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132.
Feb 13 20:27:54.679806 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A.
Feb 13 20:27:54.683098 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Feb 13 20:27:54.689943 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Feb 13 20:27:54.701452 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Feb 13 20:27:54.708588 disk-uuid[560]: Primary Header is updated.
Feb 13 20:27:54.708588 disk-uuid[560]: Secondary Entries is updated.
Feb 13 20:27:54.708588 disk-uuid[560]: Secondary Header is updated.
Feb 13 20:27:54.713323 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Feb 13 20:27:54.724343 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Feb 13 20:27:55.729350 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Feb 13 20:27:55.729996 disk-uuid[561]: The operation has completed successfully.
Feb 13 20:27:55.747685 systemd[1]: disk-uuid.service: Deactivated successfully.
Feb 13 20:27:55.747783 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Feb 13 20:27:55.777447 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
Feb 13 20:27:55.780208 sh[572]: Success
Feb 13 20:27:55.793326 kernel: device-mapper: verity: sha256 using implementation "sha256-ce"
Feb 13 20:27:55.823657 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
Feb 13 20:27:55.835701 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
Feb 13 20:27:55.837608 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
Feb 13 20:27:55.847529 kernel: BTRFS info (device dm-0): first mount of filesystem 4bb2b262-8ef2-48e3-80f4-24f9d7a85bf6
Feb 13 20:27:55.847573 kernel: BTRFS info (device dm-0): using crc32c (crc32c-generic) checksum algorithm
Feb 13 20:27:55.847585 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead
Feb 13 20:27:55.849945 kernel: BTRFS info (device dm-0): disabling log replay at mount time
Feb 13 20:27:55.849961 kernel: BTRFS info (device dm-0): using free space tree
Feb 13 20:27:55.853441 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
Feb 13 20:27:55.854755 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Feb 13 20:27:55.855529 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Feb 13 20:27:55.858426 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Feb 13 20:27:55.870393 kernel: BTRFS info (device vda6): first mount of filesystem 896fb6d3-4143-43a6-a44b-ca1ce10817e1
Feb 13 20:27:55.870440 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm
Feb 13 20:27:55.870452 kernel: BTRFS info (device vda6): using free space tree
Feb 13 20:27:55.873320 kernel: BTRFS info (device vda6): auto enabling async discard
Feb 13 20:27:55.880712 systemd[1]: mnt-oem.mount: Deactivated successfully.
Feb 13 20:27:55.883338 kernel: BTRFS info (device vda6): last unmount of filesystem 896fb6d3-4143-43a6-a44b-ca1ce10817e1
Feb 13 20:27:55.888583 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Feb 13 20:27:55.897470 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Feb 13 20:27:55.957774 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Feb 13 20:27:55.969657 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Feb 13 20:27:55.995281 systemd-networkd[765]: lo: Link UP
Feb 13 20:27:55.995293 systemd-networkd[765]: lo: Gained carrier
Feb 13 20:27:55.996021 systemd-networkd[765]: Enumeration completed
Feb 13 20:27:55.996379 systemd[1]: Started systemd-networkd.service - Network Configuration.
Feb 13 20:27:55.996540 systemd-networkd[765]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Feb 13 20:27:55.996543 systemd-networkd[765]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Feb 13 20:27:55.998289 systemd-networkd[765]: eth0: Link UP
Feb 13 20:27:55.998293 systemd-networkd[765]: eth0: Gained carrier
Feb 13 20:27:55.998358 systemd-networkd[765]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Feb 13 20:27:55.999435 systemd[1]: Reached target network.target - Network.
Feb 13 20:27:56.009138 ignition[665]: Ignition 2.19.0
Feb 13 20:27:56.009153 ignition[665]: Stage: fetch-offline
Feb 13 20:27:56.009196 ignition[665]: no configs at "/usr/lib/ignition/base.d"
Feb 13 20:27:56.009206 ignition[665]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Feb 13 20:27:56.009432 ignition[665]: parsed url from cmdline: ""
Feb 13 20:27:56.009436 ignition[665]: no config URL provided
Feb 13 20:27:56.009440 ignition[665]: reading system config file "/usr/lib/ignition/user.ign"
Feb 13 20:27:56.009448 ignition[665]: no config at "/usr/lib/ignition/user.ign"
Feb 13 20:27:56.009472 ignition[665]: op(1): [started] loading QEMU firmware config module
Feb 13 20:27:56.009477 ignition[665]: op(1): executing: "modprobe" "qemu_fw_cfg"
Feb 13 20:27:56.019361 systemd-networkd[765]: eth0: DHCPv4 address 10.0.0.6/16, gateway 10.0.0.1 acquired from 10.0.0.1
Feb 13 20:27:56.023378 ignition[665]: op(1): [finished] loading QEMU firmware config module
Feb 13 20:27:56.044742 ignition[665]: parsing config with SHA512: fa41ace071b6158b6e80d197a5bbd1a302c2f1f8632142eead12ba61f4138e6dc754f6adf7a3400ba0b07022649da7c8268834f6e36c8e477f6583641b2964ca
Feb 13 20:27:56.049312 unknown[665]: fetched base config from "system"
Feb 13 20:27:56.049324 unknown[665]: fetched user config from "qemu"
Feb 13 20:27:56.051352 ignition[665]: fetch-offline: fetch-offline passed
Feb 13 20:27:56.051445 ignition[665]: Ignition finished successfully
Feb 13 20:27:56.052962 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Feb 13 20:27:56.054434 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json).
Feb 13 20:27:56.068471 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Feb 13 20:27:56.080865 ignition[774]: Ignition 2.19.0
Feb 13 20:27:56.080876 ignition[774]: Stage: kargs
Feb 13 20:27:56.081048 ignition[774]: no configs at "/usr/lib/ignition/base.d"
Feb 13 20:27:56.081058 ignition[774]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Feb 13 20:27:56.081939 ignition[774]: kargs: kargs passed
Feb 13 20:27:56.085811 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Feb 13 20:27:56.081984 ignition[774]: Ignition finished successfully
Feb 13 20:27:56.094465 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Feb 13 20:27:56.104429 ignition[783]: Ignition 2.19.0
Feb 13 20:27:56.104438 ignition[783]: Stage: disks
Feb 13 20:27:56.104616 ignition[783]: no configs at "/usr/lib/ignition/base.d"
Feb 13 20:27:56.104625 ignition[783]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Feb 13 20:27:56.105508 ignition[783]: disks: disks passed
Feb 13 20:27:56.107588 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Feb 13 20:27:56.105554 ignition[783]: Ignition finished successfully
Feb 13 20:27:56.109092 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Feb 13 20:27:56.111493 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Feb 13 20:27:56.113508 systemd[1]: Reached target local-fs.target - Local File Systems.
Feb 13 20:27:56.115053 systemd[1]: Reached target sysinit.target - System Initialization.
Feb 13 20:27:56.116975 systemd[1]: Reached target basic.target - Basic System.
Feb 13 20:27:56.127432 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Feb 13 20:27:56.137749 systemd-fsck[795]: ROOT: clean, 14/553520 files, 52654/553472 blocks
Feb 13 20:27:56.141675 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Feb 13 20:27:56.157411 systemd[1]: Mounting sysroot.mount - /sysroot...
Feb 13 20:27:56.205319 kernel: EXT4-fs (vda9): mounted filesystem 9957d679-c6c4-49f4-b1b2-c3c1f3ba5699 r/w with ordered data mode. Quota mode: none.
Feb 13 20:27:56.206081 systemd[1]: Mounted sysroot.mount - /sysroot.
Feb 13 20:27:56.207403 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Feb 13 20:27:56.218396 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Feb 13 20:27:56.220836 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Feb 13 20:27:56.221902 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met.
Feb 13 20:27:56.221961 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Feb 13 20:27:56.221982 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Feb 13 20:27:56.228078 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Feb 13 20:27:56.230819 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Feb 13 20:27:56.235270 kernel: BTRFS: device label OEM devid 1 transid 13 /dev/vda6 scanned by mount (803)
Feb 13 20:27:56.235312 kernel: BTRFS info (device vda6): first mount of filesystem 896fb6d3-4143-43a6-a44b-ca1ce10817e1
Feb 13 20:27:56.235324 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm
Feb 13 20:27:56.237099 kernel: BTRFS info (device vda6): using free space tree
Feb 13 20:27:56.239310 kernel: BTRFS info (device vda6): auto enabling async discard
Feb 13 20:27:56.240521 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Feb 13 20:27:56.275191 initrd-setup-root[827]: cut: /sysroot/etc/passwd: No such file or directory
Feb 13 20:27:56.279966 initrd-setup-root[834]: cut: /sysroot/etc/group: No such file or directory
Feb 13 20:27:56.284187 initrd-setup-root[841]: cut: /sysroot/etc/shadow: No such file or directory
Feb 13 20:27:56.288481 initrd-setup-root[848]: cut: /sysroot/etc/gshadow: No such file or directory
Feb 13 20:27:56.360341 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Feb 13 20:27:56.372465 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Feb 13 20:27:56.374738 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Feb 13 20:27:56.379327 kernel: BTRFS info (device vda6): last unmount of filesystem 896fb6d3-4143-43a6-a44b-ca1ce10817e1
Feb 13 20:27:56.394480 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Feb 13 20:27:56.396873 ignition[917]: INFO : Ignition 2.19.0
Feb 13 20:27:56.396873 ignition[917]: INFO : Stage: mount
Feb 13 20:27:56.398461 ignition[917]: INFO : no configs at "/usr/lib/ignition/base.d"
Feb 13 20:27:56.398461 ignition[917]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Feb 13 20:27:56.398461 ignition[917]: INFO : mount: mount passed
Feb 13 20:27:56.398461 ignition[917]: INFO : Ignition finished successfully
Feb 13 20:27:56.400603 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Feb 13 20:27:56.411401 systemd[1]: Starting ignition-files.service - Ignition (files)...
Feb 13 20:27:56.846553 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Feb 13 20:27:56.857465 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Feb 13 20:27:56.864038 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/vda6 scanned by mount (930)
Feb 13 20:27:56.864070 kernel: BTRFS info (device vda6): first mount of filesystem 896fb6d3-4143-43a6-a44b-ca1ce10817e1
Feb 13 20:27:56.864081 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm
Feb 13 20:27:56.865692 kernel: BTRFS info (device vda6): using free space tree
Feb 13 20:27:56.868318 kernel: BTRFS info (device vda6): auto enabling async discard
Feb 13 20:27:56.868897 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Feb 13 20:27:56.883924 ignition[947]: INFO : Ignition 2.19.0
Feb 13 20:27:56.883924 ignition[947]: INFO : Stage: files
Feb 13 20:27:56.885562 ignition[947]: INFO : no configs at "/usr/lib/ignition/base.d"
Feb 13 20:27:56.885562 ignition[947]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Feb 13 20:27:56.885562 ignition[947]: DEBUG : files: compiled without relabeling support, skipping
Feb 13 20:27:56.888995 ignition[947]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Feb 13 20:27:56.888995 ignition[947]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Feb 13 20:27:56.888995 ignition[947]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Feb 13 20:27:56.888995 ignition[947]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Feb 13 20:27:56.888995 ignition[947]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Feb 13 20:27:56.888995 ignition[947]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.0-linux-arm64.tar.gz"
Feb 13 20:27:56.888995 ignition[947]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.0-linux-arm64.tar.gz: attempt #1
Feb 13 20:27:56.887698 unknown[947]: wrote ssh authorized keys file for user: core
Feb 13 20:27:57.228541 systemd-networkd[765]: eth0: Gained IPv6LL
Feb 13 20:27:57.962003 ignition[947]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
Feb 13 20:27:58.199806 ignition[947]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.0-linux-arm64.tar.gz"
Feb 13 20:27:58.199806 ignition[947]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh"
Feb 13 20:27:58.204306 ignition[947]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh"
Feb 13 20:27:58.204306 ignition[947]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml"
Feb 13 20:27:58.204306 ignition[947]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml"
Feb 13 20:27:58.204306 ignition[947]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Feb 13 20:27:58.204306 ignition[947]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Feb 13 20:27:58.204306 ignition[947]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Feb 13 20:27:58.204306 ignition[947]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Feb 13 20:27:58.204306 ignition[947]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf"
Feb 13 20:27:58.204306 ignition[947]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Feb 13 20:27:58.204306 ignition[947]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.0-arm64.raw"
Feb 13 20:27:58.204306 ignition[947]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.0-arm64.raw"
Feb 13 20:27:58.204306 ignition[947]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.0-arm64.raw"
Feb 13 20:27:58.204306 ignition[947]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.32.0-arm64.raw: attempt #1
Feb 13 20:27:58.538878 ignition[947]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK
Feb 13 20:27:58.846581 ignition[947]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.0-arm64.raw"
Feb 13 20:27:58.846581 ignition[947]: INFO : files: op(b): [started] processing unit "prepare-helm.service"
Feb 13 20:27:58.851138 ignition[947]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Feb 13 20:27:58.851138 ignition[947]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Feb 13 20:27:58.851138 ignition[947]: INFO : files: op(b): [finished] processing unit "prepare-helm.service"
Feb 13 20:27:58.851138 ignition[947]: INFO : files: op(d): [started] processing unit "coreos-metadata.service"
Feb 13 20:27:58.851138 ignition[947]: INFO : files: op(d): op(e): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Feb 13 20:27:58.851138 ignition[947]: INFO : files: op(d): op(e): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Feb 13 20:27:58.851138 ignition[947]: INFO : files: op(d): [finished] processing unit "coreos-metadata.service"
Feb 13 20:27:58.851138 ignition[947]: INFO : files: op(f): [started] setting preset to disabled for "coreos-metadata.service"
Feb 13 20:27:58.880486 ignition[947]: INFO : files: op(f): op(10): [started] removing enablement symlink(s) for "coreos-metadata.service"
Feb 13 20:27:58.884754 ignition[947]: INFO : files: op(f): op(10): [finished] removing enablement symlink(s) for "coreos-metadata.service"
Feb 13 20:27:58.887534 ignition[947]: INFO : files: op(f): [finished] setting preset to disabled for "coreos-metadata.service"
Feb 13 20:27:58.887534 ignition[947]: INFO : files: op(11): [started] setting preset to enabled for "prepare-helm.service"
Feb 13 20:27:58.887534 ignition[947]: INFO : files: op(11): [finished] setting preset to enabled for "prepare-helm.service"
Feb 13 20:27:58.887534 ignition[947]: INFO : files: createResultFile: createFiles: op(12): [started] writing file "/sysroot/etc/.ignition-result.json"
Feb 13 20:27:58.887534 ignition[947]: INFO : files: createResultFile: createFiles: op(12): [finished] writing file "/sysroot/etc/.ignition-result.json"
Feb 13 20:27:58.887534 ignition[947]: INFO : files: files passed
Feb 13 20:27:58.887534 ignition[947]: INFO : Ignition finished successfully
Feb 13 20:27:58.887916 systemd[1]: Finished ignition-files.service - Ignition (files).
Feb 13 20:27:58.909572 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Feb 13 20:27:58.911657 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Feb 13 20:27:58.915559 systemd[1]: ignition-quench.service: Deactivated successfully.
Feb 13 20:27:58.915651 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Feb 13 20:27:58.920268 initrd-setup-root-after-ignition[975]: grep: /sysroot/oem/oem-release: No such file or directory
Feb 13 20:27:58.923733 initrd-setup-root-after-ignition[977]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Feb 13 20:27:58.923733 initrd-setup-root-after-ignition[977]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Feb 13 20:27:58.927461 initrd-setup-root-after-ignition[981]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Feb 13 20:27:58.927039 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Feb 13 20:27:58.930050 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Feb 13 20:27:58.952561 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Feb 13 20:27:58.976539 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Feb 13 20:27:58.976684 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Feb 13 20:27:58.978968 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Feb 13 20:27:58.981005 systemd[1]: Reached target initrd.target - Initrd Default Target.
Feb 13 20:27:58.983044 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Feb 13 20:27:58.983983 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Feb 13 20:27:59.000405 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Feb 13 20:27:59.017531 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Feb 13 20:27:59.026639 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Feb 13 20:27:59.028049 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Feb 13 20:27:59.030269 systemd[1]: Stopped target timers.target - Timer Units.
Feb 13 20:27:59.032317 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Feb 13 20:27:59.032447 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Feb 13 20:27:59.035072 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Feb 13 20:27:59.037192 systemd[1]: Stopped target basic.target - Basic System.
Feb 13 20:27:59.039032 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Feb 13 20:27:59.040887 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Feb 13 20:27:59.042988 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Feb 13 20:27:59.045151 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Feb 13 20:27:59.047173 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
Feb 13 20:27:59.049180 systemd[1]: Stopped target sysinit.target - System Initialization.
Feb 13 20:27:59.051257 systemd[1]: Stopped target local-fs.target - Local File Systems.
Feb 13 20:27:59.053586 systemd[1]: Stopped target swap.target - Swaps.
Feb 13 20:27:59.055137 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Feb 13 20:27:59.055288 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
Feb 13 20:27:59.057864 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
Feb 13 20:27:59.060026 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Feb 13 20:27:59.062152 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Feb 13 20:27:59.065415 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Feb 13 20:27:59.066687 systemd[1]: dracut-initqueue.service: Deactivated successfully. Feb 13 20:27:59.066829 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Feb 13 20:27:59.069798 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Feb 13 20:27:59.069925 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Feb 13 20:27:59.072037 systemd[1]: Stopped target paths.target - Path Units. Feb 13 20:27:59.073724 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Feb 13 20:27:59.077354 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Feb 13 20:27:59.078758 systemd[1]: Stopped target slices.target - Slice Units. Feb 13 20:27:59.081014 systemd[1]: Stopped target sockets.target - Socket Units. Feb 13 20:27:59.082602 systemd[1]: iscsid.socket: Deactivated successfully. Feb 13 20:27:59.082733 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Feb 13 20:27:59.084273 systemd[1]: iscsiuio.socket: Deactivated successfully. Feb 13 20:27:59.084425 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Feb 13 20:27:59.085947 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Feb 13 20:27:59.086100 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Feb 13 20:27:59.087891 systemd[1]: ignition-files.service: Deactivated successfully. Feb 13 20:27:59.088033 systemd[1]: Stopped ignition-files.service - Ignition (files). Feb 13 20:27:59.097521 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Feb 13 20:27:59.099983 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Feb 13 20:27:59.100843 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Feb 13 20:27:59.101135 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Feb 13 20:27:59.102923 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Feb 13 20:27:59.103067 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Feb 13 20:27:59.110511 systemd[1]: initrd-cleanup.service: Deactivated successfully. Feb 13 20:27:59.110601 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Feb 13 20:27:59.113994 ignition[1001]: INFO : Ignition 2.19.0 Feb 13 20:27:59.113994 ignition[1001]: INFO : Stage: umount Feb 13 20:27:59.113994 ignition[1001]: INFO : no configs at "/usr/lib/ignition/base.d" Feb 13 20:27:59.113994 ignition[1001]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Feb 13 20:27:59.113994 ignition[1001]: INFO : umount: umount passed Feb 13 20:27:59.113994 ignition[1001]: INFO : Ignition finished successfully Feb 13 20:27:59.113689 systemd[1]: sysroot-boot.mount: Deactivated successfully. Feb 13 20:27:59.114161 systemd[1]: ignition-mount.service: Deactivated successfully. Feb 13 20:27:59.114253 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Feb 13 20:27:59.116320 systemd[1]: Stopped target network.target - Network. Feb 13 20:27:59.118492 systemd[1]: ignition-disks.service: Deactivated successfully. 
Feb 13 20:27:59.118563 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Feb 13 20:27:59.120727 systemd[1]: ignition-kargs.service: Deactivated successfully. Feb 13 20:27:59.120774 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Feb 13 20:27:59.122400 systemd[1]: ignition-setup.service: Deactivated successfully. Feb 13 20:27:59.122443 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Feb 13 20:27:59.124743 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Feb 13 20:27:59.124790 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Feb 13 20:27:59.126621 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Feb 13 20:27:59.128378 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Feb 13 20:27:59.134340 systemd-networkd[765]: eth0: DHCPv6 lease lost Feb 13 20:27:59.135325 systemd[1]: systemd-resolved.service: Deactivated successfully. Feb 13 20:27:59.135448 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Feb 13 20:27:59.137566 systemd[1]: systemd-networkd.service: Deactivated successfully. Feb 13 20:27:59.137691 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Feb 13 20:27:59.140289 systemd[1]: systemd-networkd.socket: Deactivated successfully. Feb 13 20:27:59.140376 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Feb 13 20:27:59.146407 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Feb 13 20:27:59.147727 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Feb 13 20:27:59.147788 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Feb 13 20:27:59.149797 systemd[1]: systemd-sysctl.service: Deactivated successfully. Feb 13 20:27:59.149844 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Feb 13 20:27:59.151734 systemd[1]: systemd-modules-load.service: Deactivated successfully. Feb 13 20:27:59.151779 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Feb 13 20:27:59.153614 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Feb 13 20:27:59.153658 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Feb 13 20:27:59.154927 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Feb 13 20:27:59.174585 systemd[1]: systemd-udevd.service: Deactivated successfully. Feb 13 20:27:59.174723 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Feb 13 20:27:59.177170 systemd[1]: network-cleanup.service: Deactivated successfully. Feb 13 20:27:59.177270 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Feb 13 20:27:59.179175 systemd[1]: sysroot-boot.service: Deactivated successfully. Feb 13 20:27:59.179272 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Feb 13 20:27:59.181500 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Feb 13 20:27:59.181549 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Feb 13 20:27:59.182607 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Feb 13 20:27:59.182640 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Feb 13 20:27:59.184279 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Feb 13 20:27:59.184339 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. 
Feb 13 20:27:59.186872 systemd[1]: dracut-cmdline.service: Deactivated successfully. Feb 13 20:27:59.186917 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Feb 13 20:27:59.189641 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Feb 13 20:27:59.189692 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Feb 13 20:27:59.192522 systemd[1]: initrd-setup-root.service: Deactivated successfully. Feb 13 20:27:59.192566 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Feb 13 20:27:59.203444 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Feb 13 20:27:59.204463 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Feb 13 20:27:59.204521 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Feb 13 20:27:59.206657 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully. Feb 13 20:27:59.206700 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Feb 13 20:27:59.208663 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Feb 13 20:27:59.208707 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Feb 13 20:27:59.210839 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Feb 13 20:27:59.210884 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Feb 13 20:27:59.213201 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Feb 13 20:27:59.214324 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Feb 13 20:27:59.216791 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Feb 13 20:27:59.219106 systemd[1]: Starting initrd-switch-root.service - Switch Root... Feb 13 20:27:59.229550 systemd[1]: Switching root. Feb 13 20:27:59.260441 systemd-journald[237]: Journal stopped Feb 13 20:28:00.002868 systemd-journald[237]: Received SIGTERM from PID 1 (systemd). Feb 13 20:28:00.002931 kernel: SELinux: policy capability network_peer_controls=1 Feb 13 20:28:00.002947 kernel: SELinux: policy capability open_perms=1 Feb 13 20:28:00.002958 kernel: SELinux: policy capability extended_socket_class=1 Feb 13 20:28:00.002967 kernel: SELinux: policy capability always_check_network=0 Feb 13 20:28:00.002979 kernel: SELinux: policy capability cgroup_seclabel=1 Feb 13 20:28:00.002989 kernel: SELinux: policy capability nnp_nosuid_transition=1 Feb 13 20:28:00.003003 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Feb 13 20:28:00.003014 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Feb 13 20:28:00.003024 kernel: audit: type=1403 audit(1739478479.416:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 Feb 13 20:28:00.003036 systemd[1]: Successfully loaded SELinux policy in 39.128ms. Feb 13 20:28:00.003054 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 11.174ms. Feb 13 20:28:00.003066 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Feb 13 20:28:00.003078 systemd[1]: Detected virtualization kvm. Feb 13 20:28:00.003089 systemd[1]: Detected architecture arm64. 
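
At this point the initramfs has handed off to the real root: the journal restarts, the SELinux policy loads in about 39 ms, and systemd 255 detects KVM on arm64. A few quick checks corresponding to these lines (a sketch; assumes a root shell on the running system):

    cat /sys/fs/selinux/enforce       # 0 = permissive, 1 = enforcing
    systemd-detect-virt               # should print "kvm", matching "Detected virtualization kvm"
    journalctl -b --no-pager | head   # the same boot journal this log was captured from
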
Feb 13 20:28:00.003099 systemd[1]: Detected first boot. Feb 13 20:28:00.003121 systemd[1]: Initializing machine ID from VM UUID. Feb 13 20:28:00.003132 zram_generator::config[1045]: No configuration found. Feb 13 20:28:00.003147 systemd[1]: Populated /etc with preset unit settings. Feb 13 20:28:00.003160 systemd[1]: initrd-switch-root.service: Deactivated successfully. Feb 13 20:28:00.003172 systemd[1]: Stopped initrd-switch-root.service - Switch Root. Feb 13 20:28:00.003183 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Feb 13 20:28:00.003194 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Feb 13 20:28:00.003205 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Feb 13 20:28:00.003216 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Feb 13 20:28:00.003226 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Feb 13 20:28:00.003238 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Feb 13 20:28:00.003251 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Feb 13 20:28:00.003261 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Feb 13 20:28:00.003272 systemd[1]: Created slice user.slice - User and Session Slice. Feb 13 20:28:00.003282 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Feb 13 20:28:00.003293 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Feb 13 20:28:00.003316 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Feb 13 20:28:00.003328 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Feb 13 20:28:00.003338 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. Feb 13 20:28:00.003350 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Feb 13 20:28:00.003362 systemd[1]: Expecting device dev-ttyAMA0.device - /dev/ttyAMA0... Feb 13 20:28:00.003387 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Feb 13 20:28:00.003398 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. Feb 13 20:28:00.003408 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. Feb 13 20:28:00.003419 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. Feb 13 20:28:00.003430 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Feb 13 20:28:00.003440 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Feb 13 20:28:00.003451 systemd[1]: Reached target remote-fs.target - Remote File Systems. Feb 13 20:28:00.003464 systemd[1]: Reached target slices.target - Slice Units. Feb 13 20:28:00.003475 systemd[1]: Reached target swap.target - Swaps. Feb 13 20:28:00.003486 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Feb 13 20:28:00.003498 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Feb 13 20:28:00.003509 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Feb 13 20:28:00.003520 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. 
Feb 13 20:28:00.003530 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Feb 13 20:28:00.003541 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Feb 13 20:28:00.003551 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Feb 13 20:28:00.003563 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Feb 13 20:28:00.003574 systemd[1]: Mounting media.mount - External Media Directory... Feb 13 20:28:00.003584 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Feb 13 20:28:00.003595 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Feb 13 20:28:00.003606 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... Feb 13 20:28:00.003616 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Feb 13 20:28:00.003627 systemd[1]: Reached target machines.target - Containers. Feb 13 20:28:00.003638 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Feb 13 20:28:00.003651 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Feb 13 20:28:00.003662 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Feb 13 20:28:00.003673 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Feb 13 20:28:00.003683 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Feb 13 20:28:00.003694 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Feb 13 20:28:00.003704 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Feb 13 20:28:00.003717 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Feb 13 20:28:00.003729 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Feb 13 20:28:00.003740 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Feb 13 20:28:00.003752 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Feb 13 20:28:00.003762 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. Feb 13 20:28:00.003773 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Feb 13 20:28:00.003783 systemd[1]: Stopped systemd-fsck-usr.service. Feb 13 20:28:00.003794 kernel: fuse: init (API version 7.39) Feb 13 20:28:00.003803 kernel: loop: module loaded Feb 13 20:28:00.003813 systemd[1]: Starting systemd-journald.service - Journal Service... Feb 13 20:28:00.003824 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Feb 13 20:28:00.003835 kernel: ACPI: bus type drm_connector registered Feb 13 20:28:00.003846 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Feb 13 20:28:00.003857 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Feb 13 20:28:00.003868 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Feb 13 20:28:00.003879 systemd[1]: verity-setup.service: Deactivated successfully. Feb 13 20:28:00.003890 systemd[1]: Stopped verity-setup.service. Feb 13 20:28:00.003901 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. 
Feb 13 20:28:00.003913 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Feb 13 20:28:00.003945 systemd-journald[1112]: Collecting audit messages is disabled. Feb 13 20:28:00.003985 systemd[1]: Mounted media.mount - External Media Directory. Feb 13 20:28:00.003996 systemd-journald[1112]: Journal started Feb 13 20:28:00.004018 systemd-journald[1112]: Runtime Journal (/run/log/journal/edcd30bdd4f54f1a87c6020f2a6b1065) is 5.9M, max 47.3M, 41.4M free. Feb 13 20:27:59.784412 systemd[1]: Queued start job for default target multi-user.target. Feb 13 20:27:59.802496 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6. Feb 13 20:27:59.802849 systemd[1]: systemd-journald.service: Deactivated successfully. Feb 13 20:28:00.006322 systemd[1]: Started systemd-journald.service - Journal Service. Feb 13 20:28:00.007929 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Feb 13 20:28:00.009341 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Feb 13 20:28:00.010645 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Feb 13 20:28:00.013363 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Feb 13 20:28:00.014832 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Feb 13 20:28:00.016512 systemd[1]: modprobe@configfs.service: Deactivated successfully. Feb 13 20:28:00.016688 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Feb 13 20:28:00.018200 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Feb 13 20:28:00.018406 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Feb 13 20:28:00.019901 systemd[1]: modprobe@drm.service: Deactivated successfully. Feb 13 20:28:00.020043 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Feb 13 20:28:00.021549 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Feb 13 20:28:00.021702 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Feb 13 20:28:00.023255 systemd[1]: modprobe@fuse.service: Deactivated successfully. Feb 13 20:28:00.023424 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Feb 13 20:28:00.024849 systemd[1]: modprobe@loop.service: Deactivated successfully. Feb 13 20:28:00.024996 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Feb 13 20:28:00.026512 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Feb 13 20:28:00.028180 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Feb 13 20:28:00.030047 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Feb 13 20:28:00.043324 systemd[1]: Reached target network-pre.target - Preparation for Network. Feb 13 20:28:00.051412 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Feb 13 20:28:00.053676 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Feb 13 20:28:00.054901 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Feb 13 20:28:00.054949 systemd[1]: Reached target local-fs.target - Local File Systems. Feb 13 20:28:00.057315 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink). Feb 13 20:28:00.059553 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... 
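
The runtime journal above lives on a tmpfs under /run/log/journal (5.9M used of a 47.3M cap) until systemd-journal-flush moves it to persistent storage, which happens a few lines below. Hedged inspection commands, assuming journalctl is available on the host:

    journalctl --disk-usage                 # combined size of runtime + persistent journals
    ls /run/log/journal /var/log/journal    # runtime vs. persistent journal directories
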
Feb 13 20:28:00.061821 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Feb 13 20:28:00.063014 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Feb 13 20:28:00.064530 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Feb 13 20:28:00.066525 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Feb 13 20:28:00.067718 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Feb 13 20:28:00.071482 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Feb 13 20:28:00.073704 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Feb 13 20:28:00.075862 systemd-journald[1112]: Time spent on flushing to /var/log/journal/edcd30bdd4f54f1a87c6020f2a6b1065 is 26.845ms for 854 entries. Feb 13 20:28:00.075862 systemd-journald[1112]: System Journal (/var/log/journal/edcd30bdd4f54f1a87c6020f2a6b1065) is 8.0M, max 195.6M, 187.6M free. Feb 13 20:28:00.116202 systemd-journald[1112]: Received client request to flush runtime journal. Feb 13 20:28:00.078439 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Feb 13 20:28:00.081193 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Feb 13 20:28:00.083699 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Feb 13 20:28:00.087390 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Feb 13 20:28:00.088883 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Feb 13 20:28:00.090249 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Feb 13 20:28:00.091795 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Feb 13 20:28:00.093585 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Feb 13 20:28:00.105008 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Feb 13 20:28:00.118529 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk... Feb 13 20:28:00.127122 kernel: loop0: detected capacity change from 0 to 114328 Feb 13 20:28:00.124355 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization... Feb 13 20:28:00.127345 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Feb 13 20:28:00.131368 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Feb 13 20:28:00.134115 systemd-tmpfiles[1157]: ACLs are not supported, ignoring. Feb 13 20:28:00.134132 systemd-tmpfiles[1157]: ACLs are not supported, ignoring. Feb 13 20:28:00.139669 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Feb 13 20:28:00.142438 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Feb 13 20:28:00.143223 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk. Feb 13 20:28:00.149350 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Feb 13 20:28:00.157567 systemd[1]: Starting systemd-sysusers.service - Create System Users... Feb 13 20:28:00.159185 udevadm[1168]: systemd-udev-settle.service is deprecated. 
Please fix lvm2-activation-early.service, lvm2-activation.service not to pull it in. Feb 13 20:28:00.176405 kernel: loop1: detected capacity change from 0 to 201592 Feb 13 20:28:00.183934 systemd[1]: Finished systemd-sysusers.service - Create System Users. Feb 13 20:28:00.194839 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Feb 13 20:28:00.206420 kernel: loop2: detected capacity change from 0 to 114432 Feb 13 20:28:00.207955 systemd-tmpfiles[1179]: ACLs are not supported, ignoring. Feb 13 20:28:00.208331 systemd-tmpfiles[1179]: ACLs are not supported, ignoring. Feb 13 20:28:00.213573 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Feb 13 20:28:00.249343 kernel: loop3: detected capacity change from 0 to 114328 Feb 13 20:28:00.255320 kernel: loop4: detected capacity change from 0 to 201592 Feb 13 20:28:00.263340 kernel: loop5: detected capacity change from 0 to 114432 Feb 13 20:28:00.266942 (sd-merge)[1183]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes'. Feb 13 20:28:00.267421 (sd-merge)[1183]: Merged extensions into '/usr'. Feb 13 20:28:00.274558 systemd[1]: Reloading requested from client PID 1156 ('systemd-sysext') (unit systemd-sysext.service)... Feb 13 20:28:00.274576 systemd[1]: Reloading... Feb 13 20:28:00.345118 zram_generator::config[1209]: No configuration found. Feb 13 20:28:00.361541 ldconfig[1151]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Feb 13 20:28:00.433708 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Feb 13 20:28:00.470566 systemd[1]: Reloading finished in 195 ms. Feb 13 20:28:00.503835 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Feb 13 20:28:00.505409 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Feb 13 20:28:00.516464 systemd[1]: Starting ensure-sysext.service... Feb 13 20:28:00.518351 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Feb 13 20:28:00.530549 systemd[1]: Reloading requested from client PID 1244 ('systemctl') (unit ensure-sysext.service)... Feb 13 20:28:00.530563 systemd[1]: Reloading... Feb 13 20:28:00.537566 systemd-tmpfiles[1245]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Feb 13 20:28:00.537818 systemd-tmpfiles[1245]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Feb 13 20:28:00.538470 systemd-tmpfiles[1245]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Feb 13 20:28:00.538681 systemd-tmpfiles[1245]: ACLs are not supported, ignoring. Feb 13 20:28:00.538732 systemd-tmpfiles[1245]: ACLs are not supported, ignoring. Feb 13 20:28:00.540791 systemd-tmpfiles[1245]: Detected autofs mount point /boot during canonicalization of boot. Feb 13 20:28:00.540803 systemd-tmpfiles[1245]: Skipping /boot Feb 13 20:28:00.547766 systemd-tmpfiles[1245]: Detected autofs mount point /boot during canonicalization of boot. Feb 13 20:28:00.547777 systemd-tmpfiles[1245]: Skipping /boot Feb 13 20:28:00.581333 zram_generator::config[1273]: No configuration found. 
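
The (sd-merge) lines above show systemd-sysext overlaying the containerd-flatcar, docker-flatcar, and kubernetes extension images onto /usr, which is what triggers the systemd reload that follows. A sketch for inspecting the merge on the running host:

    systemd-sysext status                             # hierarchies merged and the images backing them
    ls /etc/extensions /opt/extensions/kubernetes     # the raw images and the link Ignition wrote
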
Feb 13 20:28:00.661456 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Feb 13 20:28:00.697773 systemd[1]: Reloading finished in 166 ms. Feb 13 20:28:00.711287 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Feb 13 20:28:00.720716 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Feb 13 20:28:00.728707 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Feb 13 20:28:00.731444 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Feb 13 20:28:00.733828 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Feb 13 20:28:00.737548 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Feb 13 20:28:00.744233 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Feb 13 20:28:00.749667 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Feb 13 20:28:00.753059 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Feb 13 20:28:00.756218 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Feb 13 20:28:00.765191 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Feb 13 20:28:00.769018 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Feb 13 20:28:00.770124 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Feb 13 20:28:00.771704 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Feb 13 20:28:00.773452 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Feb 13 20:28:00.775338 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Feb 13 20:28:00.777285 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Feb 13 20:28:00.777534 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Feb 13 20:28:00.779483 systemd[1]: modprobe@loop.service: Deactivated successfully. Feb 13 20:28:00.779599 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Feb 13 20:28:00.792351 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Feb 13 20:28:00.794478 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Feb 13 20:28:00.798882 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Feb 13 20:28:00.805068 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Feb 13 20:28:00.806221 systemd-udevd[1314]: Using default interface naming scheme 'v255'. Feb 13 20:28:00.811570 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Feb 13 20:28:00.818574 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Feb 13 20:28:00.819805 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Feb 13 20:28:00.825571 systemd[1]: Starting systemd-update-done.service - Update is Completed... 
Feb 13 20:28:00.830504 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Feb 13 20:28:00.831417 systemd[1]: Started systemd-userdbd.service - User Database Manager. Feb 13 20:28:00.833620 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Feb 13 20:28:00.836633 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Feb 13 20:28:00.842856 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Feb 13 20:28:00.843022 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Feb 13 20:28:00.844610 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Feb 13 20:28:00.845075 augenrules[1361]: No rules Feb 13 20:28:00.845191 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Feb 13 20:28:00.847364 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Feb 13 20:28:00.850810 systemd[1]: modprobe@loop.service: Deactivated successfully. Feb 13 20:28:00.850940 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Feb 13 20:28:00.853049 systemd[1]: Finished systemd-update-done.service - Update is Completed. Feb 13 20:28:00.871321 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 40 scanned by (udev-worker) (1366) Feb 13 20:28:00.874653 systemd[1]: Finished ensure-sysext.service. Feb 13 20:28:00.877984 systemd[1]: Condition check resulted in dev-ttyAMA0.device - /dev/ttyAMA0 being skipped. Feb 13 20:28:00.881987 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Feb 13 20:28:00.888508 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Feb 13 20:28:00.890933 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Feb 13 20:28:00.893455 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Feb 13 20:28:00.896565 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Feb 13 20:28:00.898559 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Feb 13 20:28:00.907350 systemd[1]: Starting systemd-networkd.service - Network Configuration... Feb 13 20:28:00.908017 systemd-resolved[1312]: Positive Trust Anchors: Feb 13 20:28:00.909830 systemd-resolved[1312]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Feb 13 20:28:00.909868 systemd-resolved[1312]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Feb 13 20:28:00.913456 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization... 
Feb 13 20:28:00.914611 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Feb 13 20:28:00.915101 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Feb 13 20:28:00.915256 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Feb 13 20:28:00.916965 systemd[1]: modprobe@drm.service: Deactivated successfully. Feb 13 20:28:00.917109 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Feb 13 20:28:00.918773 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Feb 13 20:28:00.918889 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Feb 13 20:28:00.919113 systemd-resolved[1312]: Defaulting to hostname 'linux'. Feb 13 20:28:00.923753 systemd[1]: modprobe@loop.service: Deactivated successfully. Feb 13 20:28:00.923916 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Feb 13 20:28:00.925158 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Feb 13 20:28:00.934426 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Feb 13 20:28:00.935716 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Feb 13 20:28:00.935783 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Feb 13 20:28:00.940127 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Feb 13 20:28:00.949540 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Feb 13 20:28:00.970580 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Feb 13 20:28:00.972199 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization. Feb 13 20:28:00.974149 systemd[1]: Reached target time-set.target - System Time Set. Feb 13 20:28:00.981354 systemd-networkd[1387]: lo: Link UP Feb 13 20:28:00.981363 systemd-networkd[1387]: lo: Gained carrier Feb 13 20:28:00.982075 systemd-networkd[1387]: Enumeration completed Feb 13 20:28:00.982174 systemd[1]: Started systemd-networkd.service - Network Configuration. Feb 13 20:28:00.984228 systemd[1]: Reached target network.target - Network. Feb 13 20:28:00.985625 systemd-networkd[1387]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Feb 13 20:28:00.985637 systemd-networkd[1387]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Feb 13 20:28:00.986291 systemd-networkd[1387]: eth0: Link UP Feb 13 20:28:00.986715 systemd-networkd[1387]: eth0: Gained carrier Feb 13 20:28:00.986740 systemd-networkd[1387]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Feb 13 20:28:00.995541 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Feb 13 20:28:01.001422 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Feb 13 20:28:01.005356 systemd-networkd[1387]: eth0: DHCPv4 address 10.0.0.6/16, gateway 10.0.0.1 acquired from 10.0.0.1 Feb 13 20:28:01.006715 systemd-timesyncd[1388]: Network configuration changed, trying to establish connection. 
Feb 13 20:28:01.007083 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization. Feb 13 20:28:01.007529 systemd-timesyncd[1388]: Contacted time server 10.0.0.1:123 (10.0.0.1). Feb 13 20:28:01.007589 systemd-timesyncd[1388]: Initial clock synchronization to Thu 2025-02-13 20:28:00.727883 UTC. Feb 13 20:28:01.010016 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes... Feb 13 20:28:01.040321 lvm[1405]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Feb 13 20:28:01.044368 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Feb 13 20:28:01.076286 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes. Feb 13 20:28:01.078194 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Feb 13 20:28:01.079376 systemd[1]: Reached target sysinit.target - System Initialization. Feb 13 20:28:01.080539 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Feb 13 20:28:01.081772 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Feb 13 20:28:01.083160 systemd[1]: Started logrotate.timer - Daily rotation of log files. Feb 13 20:28:01.084338 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Feb 13 20:28:01.085543 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Feb 13 20:28:01.086741 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Feb 13 20:28:01.086783 systemd[1]: Reached target paths.target - Path Units. Feb 13 20:28:01.087799 systemd[1]: Reached target timers.target - Timer Units. Feb 13 20:28:01.089466 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Feb 13 20:28:01.091815 systemd[1]: Starting docker.socket - Docker Socket for the API... Feb 13 20:28:01.106212 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Feb 13 20:28:01.108546 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes... Feb 13 20:28:01.110252 systemd[1]: Listening on docker.socket - Docker Socket for the API. Feb 13 20:28:01.111505 systemd[1]: Reached target sockets.target - Socket Units. Feb 13 20:28:01.112498 systemd[1]: Reached target basic.target - Basic System. Feb 13 20:28:01.113503 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Feb 13 20:28:01.113536 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Feb 13 20:28:01.114448 systemd[1]: Starting containerd.service - containerd container runtime... Feb 13 20:28:01.117419 lvm[1413]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Feb 13 20:28:01.116465 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Feb 13 20:28:01.120335 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Feb 13 20:28:01.123592 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Feb 13 20:28:01.125514 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Feb 13 20:28:01.126521 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... 
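
By this point systemd-networkd has brought eth0 up with a DHCPv4 lease (10.0.0.6/16 via 10.0.0.1), systemd-resolved is running with the root DNSSEC trust anchor listed above, and systemd-timesyncd has synchronized against 10.0.0.1:123. Equivalent status queries (a sketch; all three tools ship with systemd):

    networkctl status eth0         # should show the 10.0.0.6/16 DHCPv4 address
    resolvectl status              # resolver state, including the configured trust anchors
    timedatectl timesync-status    # NTP peer; the log shows 10.0.0.1:123
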
Feb 13 20:28:01.130434 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Feb 13 20:28:01.131402 jq[1416]: false Feb 13 20:28:01.133599 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Feb 13 20:28:01.137938 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Feb 13 20:28:01.142465 systemd[1]: Starting systemd-logind.service - User Login Management... Feb 13 20:28:01.144709 dbus-daemon[1415]: [system] SELinux support is enabled Feb 13 20:28:01.145291 extend-filesystems[1417]: Found loop3 Feb 13 20:28:01.145291 extend-filesystems[1417]: Found loop4 Feb 13 20:28:01.145291 extend-filesystems[1417]: Found loop5 Feb 13 20:28:01.145291 extend-filesystems[1417]: Found vda Feb 13 20:28:01.145291 extend-filesystems[1417]: Found vda1 Feb 13 20:28:01.145291 extend-filesystems[1417]: Found vda2 Feb 13 20:28:01.145291 extend-filesystems[1417]: Found vda3 Feb 13 20:28:01.154374 extend-filesystems[1417]: Found usr Feb 13 20:28:01.154374 extend-filesystems[1417]: Found vda4 Feb 13 20:28:01.154374 extend-filesystems[1417]: Found vda6 Feb 13 20:28:01.154374 extend-filesystems[1417]: Found vda7 Feb 13 20:28:01.154374 extend-filesystems[1417]: Found vda9 Feb 13 20:28:01.154374 extend-filesystems[1417]: Checking size of /dev/vda9 Feb 13 20:28:01.149091 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Feb 13 20:28:01.165772 extend-filesystems[1417]: Resized partition /dev/vda9 Feb 13 20:28:01.169457 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 1864699 blocks Feb 13 20:28:01.169482 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 40 scanned by (udev-worker) (1349) Feb 13 20:28:01.151344 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Feb 13 20:28:01.169657 extend-filesystems[1436]: resize2fs 1.47.1 (20-May-2024) Feb 13 20:28:01.152339 systemd[1]: Starting update-engine.service - Update Engine... Feb 13 20:28:01.154467 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Feb 13 20:28:01.182714 jq[1435]: true Feb 13 20:28:01.158806 systemd[1]: Started dbus.service - D-Bus System Message Bus. Feb 13 20:28:01.163254 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes. Feb 13 20:28:01.178709 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Feb 13 20:28:01.178870 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Feb 13 20:28:01.179145 systemd[1]: motdgen.service: Deactivated successfully. Feb 13 20:28:01.179283 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Feb 13 20:28:01.182892 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Feb 13 20:28:01.183071 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. 
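
extend-filesystems.service enumerates the block devices above and then grows the root filesystem: the resize2fs run that starts here and completes just below takes /dev/vda9 from 553472 to 1864699 4k blocks online, i.e. while mounted at /. A rough manual equivalent (a sketch; run as root, and only after the underlying partition has been enlarged):

    lsblk /dev/vda9       # confirm the enlarged partition
    resize2fs /dev/vda9   # online grow of a mounted ext4 filesystem
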
Feb 13 20:28:01.197346 kernel: EXT4-fs (vda9): resized filesystem to 1864699 Feb 13 20:28:01.208971 update_engine[1431]: I20250213 20:28:01.202042 1431 main.cc:92] Flatcar Update Engine starting Feb 13 20:28:01.204659 (ntainerd)[1443]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Feb 13 20:28:01.205015 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Feb 13 20:28:01.205047 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Feb 13 20:28:01.206923 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Feb 13 20:28:01.206942 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Feb 13 20:28:01.210419 extend-filesystems[1436]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required Feb 13 20:28:01.210419 extend-filesystems[1436]: old_desc_blocks = 1, new_desc_blocks = 1 Feb 13 20:28:01.210419 extend-filesystems[1436]: The filesystem on /dev/vda9 is now 1864699 (4k) blocks long. Feb 13 20:28:01.214622 jq[1442]: true Feb 13 20:28:01.215450 extend-filesystems[1417]: Resized filesystem in /dev/vda9 Feb 13 20:28:01.215850 systemd[1]: extend-filesystems.service: Deactivated successfully. Feb 13 20:28:01.216024 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Feb 13 20:28:01.220393 update_engine[1431]: I20250213 20:28:01.219209 1431 update_check_scheduler.cc:74] Next update check in 11m19s Feb 13 20:28:01.221887 systemd[1]: Started update-engine.service - Update Engine. Feb 13 20:28:01.223800 tar[1441]: linux-arm64/LICENSE Feb 13 20:28:01.224013 tar[1441]: linux-arm64/helm Feb 13 20:28:01.236044 systemd-logind[1425]: Watching system buttons on /dev/input/event0 (Power Button) Feb 13 20:28:01.237478 systemd[1]: Started locksmithd.service - Cluster reboot manager. Feb 13 20:28:01.240706 systemd-logind[1425]: New seat seat0. Feb 13 20:28:01.243058 systemd[1]: Started systemd-logind.service - User Login Management. Feb 13 20:28:01.291286 locksmithd[1457]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Feb 13 20:28:01.293030 bash[1473]: Updated "/home/core/.ssh/authorized_keys" Feb 13 20:28:01.296341 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Feb 13 20:28:01.298219 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met. Feb 13 20:28:01.403747 containerd[1443]: time="2025-02-13T20:28:01.403664320Z" level=info msg="starting containerd" revision=174e0d1785eeda18dc2beba45e1d5a188771636b version=v1.7.21 Feb 13 20:28:01.433896 containerd[1443]: time="2025-02-13T20:28:01.433788440Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Feb 13 20:28:01.436288 containerd[1443]: time="2025-02-13T20:28:01.435278760Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." 
error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.74-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Feb 13 20:28:01.436288 containerd[1443]: time="2025-02-13T20:28:01.435326880Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Feb 13 20:28:01.436288 containerd[1443]: time="2025-02-13T20:28:01.435343160Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Feb 13 20:28:01.436288 containerd[1443]: time="2025-02-13T20:28:01.435497320Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1 Feb 13 20:28:01.436288 containerd[1443]: time="2025-02-13T20:28:01.435515480Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1 Feb 13 20:28:01.436288 containerd[1443]: time="2025-02-13T20:28:01.435567720Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1 Feb 13 20:28:01.436288 containerd[1443]: time="2025-02-13T20:28:01.435579640Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Feb 13 20:28:01.436288 containerd[1443]: time="2025-02-13T20:28:01.435730000Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Feb 13 20:28:01.436288 containerd[1443]: time="2025-02-13T20:28:01.435745400Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Feb 13 20:28:01.436288 containerd[1443]: time="2025-02-13T20:28:01.435757760Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1 Feb 13 20:28:01.436288 containerd[1443]: time="2025-02-13T20:28:01.435766920Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Feb 13 20:28:01.436530 containerd[1443]: time="2025-02-13T20:28:01.435845760Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Feb 13 20:28:01.436530 containerd[1443]: time="2025-02-13T20:28:01.436023280Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Feb 13 20:28:01.436530 containerd[1443]: time="2025-02-13T20:28:01.436128200Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Feb 13 20:28:01.436530 containerd[1443]: time="2025-02-13T20:28:01.436144360Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Feb 13 20:28:01.436530 containerd[1443]: time="2025-02-13T20:28:01.436215280Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." 
type=io.containerd.metadata.v1 Feb 13 20:28:01.436530 containerd[1443]: time="2025-02-13T20:28:01.436250960Z" level=info msg="metadata content store policy set" policy=shared Feb 13 20:28:01.440772 containerd[1443]: time="2025-02-13T20:28:01.440744640Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Feb 13 20:28:01.440886 containerd[1443]: time="2025-02-13T20:28:01.440871760Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Feb 13 20:28:01.441098 containerd[1443]: time="2025-02-13T20:28:01.441067160Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1 Feb 13 20:28:01.441264 containerd[1443]: time="2025-02-13T20:28:01.441187120Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1 Feb 13 20:28:01.441361 containerd[1443]: time="2025-02-13T20:28:01.441345760Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Feb 13 20:28:01.441629 containerd[1443]: time="2025-02-13T20:28:01.441609160Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Feb 13 20:28:01.442192 containerd[1443]: time="2025-02-13T20:28:01.442160000Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Feb 13 20:28:01.442425 containerd[1443]: time="2025-02-13T20:28:01.442405360Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2 Feb 13 20:28:01.442522 containerd[1443]: time="2025-02-13T20:28:01.442506040Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1 Feb 13 20:28:01.442604 containerd[1443]: time="2025-02-13T20:28:01.442589600Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1 Feb 13 20:28:01.442733 containerd[1443]: time="2025-02-13T20:28:01.442668840Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Feb 13 20:28:01.442733 containerd[1443]: time="2025-02-13T20:28:01.442687880Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Feb 13 20:28:01.442733 containerd[1443]: time="2025-02-13T20:28:01.442700560Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Feb 13 20:28:01.442733 containerd[1443]: time="2025-02-13T20:28:01.442713680Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Feb 13 20:28:01.442994 containerd[1443]: time="2025-02-13T20:28:01.442921800Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Feb 13 20:28:01.442994 containerd[1443]: time="2025-02-13T20:28:01.442949200Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Feb 13 20:28:01.442994 containerd[1443]: time="2025-02-13T20:28:01.442961800Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Feb 13 20:28:01.442994 containerd[1443]: time="2025-02-13T20:28:01.442974760Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." 
type=io.containerd.service.v1 Feb 13 20:28:01.443231 containerd[1443]: time="2025-02-13T20:28:01.443117440Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Feb 13 20:28:01.443231 containerd[1443]: time="2025-02-13T20:28:01.443146960Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Feb 13 20:28:01.443231 containerd[1443]: time="2025-02-13T20:28:01.443160200Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Feb 13 20:28:01.443231 containerd[1443]: time="2025-02-13T20:28:01.443181960Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Feb 13 20:28:01.444342 containerd[1443]: time="2025-02-13T20:28:01.444315600Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Feb 13 20:28:01.444568 containerd[1443]: time="2025-02-13T20:28:01.444547760Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Feb 13 20:28:01.444633 containerd[1443]: time="2025-02-13T20:28:01.444619800Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Feb 13 20:28:01.444699 containerd[1443]: time="2025-02-13T20:28:01.444684840Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Feb 13 20:28:01.444818 containerd[1443]: time="2025-02-13T20:28:01.444797680Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1 Feb 13 20:28:01.444879 containerd[1443]: time="2025-02-13T20:28:01.444866520Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1 Feb 13 20:28:01.444928 containerd[1443]: time="2025-02-13T20:28:01.444917000Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Feb 13 20:28:01.445026 containerd[1443]: time="2025-02-13T20:28:01.445010840Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1 Feb 13 20:28:01.445092 containerd[1443]: time="2025-02-13T20:28:01.445071000Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Feb 13 20:28:01.445176 containerd[1443]: time="2025-02-13T20:28:01.445156200Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1 Feb 13 20:28:01.445285 containerd[1443]: time="2025-02-13T20:28:01.445268240Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1 Feb 13 20:28:01.445362 containerd[1443]: time="2025-02-13T20:28:01.445349600Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Feb 13 20:28:01.445467 containerd[1443]: time="2025-02-13T20:28:01.445452760Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Feb 13 20:28:01.446370 containerd[1443]: time="2025-02-13T20:28:01.446342160Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Feb 13 20:28:01.446457 containerd[1443]: time="2025-02-13T20:28:01.446442080Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." 
error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1 Feb 13 20:28:01.446504 containerd[1443]: time="2025-02-13T20:28:01.446493120Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Feb 13 20:28:01.446623 containerd[1443]: time="2025-02-13T20:28:01.446556200Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1 Feb 13 20:28:01.446683 containerd[1443]: time="2025-02-13T20:28:01.446670600Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Feb 13 20:28:01.446739 containerd[1443]: time="2025-02-13T20:28:01.446728360Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1 Feb 13 20:28:01.446788 containerd[1443]: time="2025-02-13T20:28:01.446776280Z" level=info msg="NRI interface is disabled by configuration." Feb 13 20:28:01.446918 containerd[1443]: time="2025-02-13T20:28:01.446899760Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1 Feb 13 20:28:01.448130 containerd[1443]: time="2025-02-13T20:28:01.447413480Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false 
IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Feb 13 20:28:01.448130 containerd[1443]: time="2025-02-13T20:28:01.447506480Z" level=info msg="Connect containerd service" Feb 13 20:28:01.448130 containerd[1443]: time="2025-02-13T20:28:01.447538440Z" level=info msg="using legacy CRI server" Feb 13 20:28:01.448130 containerd[1443]: time="2025-02-13T20:28:01.447545640Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Feb 13 20:28:01.448130 containerd[1443]: time="2025-02-13T20:28:01.447636440Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Feb 13 20:28:01.448533 containerd[1443]: time="2025-02-13T20:28:01.448505080Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Feb 13 20:28:01.448761 containerd[1443]: time="2025-02-13T20:28:01.448685400Z" level=info msg="Start subscribing containerd event" Feb 13 20:28:01.448761 containerd[1443]: time="2025-02-13T20:28:01.448745600Z" level=info msg="Start recovering state" Feb 13 20:28:01.448966 containerd[1443]: time="2025-02-13T20:28:01.448918760Z" level=info msg="Start event monitor" Feb 13 20:28:01.448966 containerd[1443]: time="2025-02-13T20:28:01.448941840Z" level=info msg="Start snapshots syncer" Feb 13 20:28:01.449143 containerd[1443]: time="2025-02-13T20:28:01.448956440Z" level=info msg="Start cni network conf syncer for default" Feb 13 20:28:01.449143 containerd[1443]: time="2025-02-13T20:28:01.449056360Z" level=info msg="Start streaming server" Feb 13 20:28:01.449260 containerd[1443]: time="2025-02-13T20:28:01.449185800Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Feb 13 20:28:01.450330 containerd[1443]: time="2025-02-13T20:28:01.449330720Z" level=info msg=serving... address=/run/containerd/containerd.sock Feb 13 20:28:01.449477 systemd[1]: Started containerd.service - containerd container runtime. Feb 13 20:28:01.451088 containerd[1443]: time="2025-02-13T20:28:01.450993000Z" level=info msg="containerd successfully booted in 0.048623s" Feb 13 20:28:01.592733 tar[1441]: linux-arm64/README.md Feb 13 20:28:01.604822 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Feb 13 20:28:02.072216 sshd_keygen[1438]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Feb 13 20:28:02.090344 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Feb 13 20:28:02.099575 systemd[1]: Starting issuegen.service - Generate /run/issue... Feb 13 20:28:02.104449 systemd[1]: issuegen.service: Deactivated successfully. Feb 13 20:28:02.104656 systemd[1]: Finished issuegen.service - Generate /run/issue. Feb 13 20:28:02.107097 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Feb 13 20:28:02.119210 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Feb 13 20:28:02.121843 systemd[1]: Started getty@tty1.service - Getty on tty1. Feb 13 20:28:02.123854 systemd[1]: Started serial-getty@ttyAMA0.service - Serial Getty on ttyAMA0. Feb 13 20:28:02.125093 systemd[1]: Reached target getty.target - Login Prompts. 
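[Editor's note] The CRI plugin config dumped above (Snapshotter:overlayfs, runc with SystemdCgroup:true, sandbox image registry.k8s.io/pause:3.8, CNI dirs /opt/cni/bin and /etc/cni/net.d) corresponds to a containerd 1.7 config.toml roughly like the following sketch. This is an illustrative reconstruction from the logged values, not the node's actual file:

    version = 2
    [plugins."io.containerd.grpc.v1.cri"]
      sandbox_image = "registry.k8s.io/pause:3.8"
      [plugins."io.containerd.grpc.v1.cri".containerd]
        snapshotter = "overlayfs"
        default_runtime_name = "runc"
        [plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc]
          runtime_type = "io.containerd.runc.v2"
          [plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc.options]
            SystemdCgroup = true
      [plugins."io.containerd.grpc.v1.cri".cni]
        bin_dir = "/opt/cni/bin"
        conf_dir = "/etc/cni/net.d"

The "failed to load cni during init" error above is expected at this stage: /etc/cni/net.d is empty until a network add-on installs a conflist, and the CNI conf syncer started a few lines later keeps retrying until one appears.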
Feb 13 20:28:02.476407 systemd-networkd[1387]: eth0: Gained IPv6LL Feb 13 20:28:02.479114 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Feb 13 20:28:02.481065 systemd[1]: Reached target network-online.target - Network is Online. Feb 13 20:28:02.497509 systemd[1]: Starting coreos-metadata.service - QEMU metadata agent... Feb 13 20:28:02.499787 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Feb 13 20:28:02.501795 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Feb 13 20:28:02.515277 systemd[1]: coreos-metadata.service: Deactivated successfully. Feb 13 20:28:02.516118 systemd[1]: Finished coreos-metadata.service - QEMU metadata agent. Feb 13 20:28:02.518050 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Feb 13 20:28:02.522030 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Feb 13 20:28:03.015067 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Feb 13 20:28:03.016885 systemd[1]: Reached target multi-user.target - Multi-User System. Feb 13 20:28:03.018413 (kubelet)[1527]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Feb 13 20:28:03.018613 systemd[1]: Startup finished in 590ms (kernel) + 5.682s (initrd) + 3.645s (userspace) = 9.918s. Feb 13 20:28:03.403201 kubelet[1527]: E0213 20:28:03.403087 1527 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Feb 13 20:28:03.405524 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Feb 13 20:28:03.405664 systemd[1]: kubelet.service: Failed with result 'exit-code'. Feb 13 20:28:05.952063 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Feb 13 20:28:05.953215 systemd[1]: Started sshd@0-10.0.0.6:22-10.0.0.1:52612.service - OpenSSH per-connection server daemon (10.0.0.1:52612). Feb 13 20:28:06.004478 sshd[1541]: Accepted publickey for core from 10.0.0.1 port 52612 ssh2: RSA SHA256:TdhA+b7AFfsR49yUYzSzEma9Q+UWNVnDR75LIS9Grbw Feb 13 20:28:06.006148 sshd[1541]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 20:28:06.013250 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Feb 13 20:28:06.028531 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Feb 13 20:28:06.029966 systemd-logind[1425]: New session 1 of user core. Feb 13 20:28:06.037508 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Feb 13 20:28:06.039589 systemd[1]: Starting user@500.service - User Manager for UID 500... Feb 13 20:28:06.045912 (systemd)[1545]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Feb 13 20:28:06.130038 systemd[1545]: Queued start job for default target default.target. Feb 13 20:28:06.141132 systemd[1545]: Created slice app.slice - User Application Slice. Feb 13 20:28:06.141174 systemd[1545]: Reached target paths.target - Paths. Feb 13 20:28:06.141186 systemd[1545]: Reached target timers.target - Timers. Feb 13 20:28:06.142366 systemd[1545]: Starting dbus.socket - D-Bus User Message Bus Socket... 
Feb 13 20:28:06.151032 systemd[1545]: Listening on dbus.socket - D-Bus User Message Bus Socket. Feb 13 20:28:06.151089 systemd[1545]: Reached target sockets.target - Sockets. Feb 13 20:28:06.151100 systemd[1545]: Reached target basic.target - Basic System. Feb 13 20:28:06.151133 systemd[1545]: Reached target default.target - Main User Target. Feb 13 20:28:06.151157 systemd[1545]: Startup finished in 100ms. Feb 13 20:28:06.151364 systemd[1]: Started user@500.service - User Manager for UID 500. Feb 13 20:28:06.152711 systemd[1]: Started session-1.scope - Session 1 of User core. Feb 13 20:28:06.216808 systemd[1]: Started sshd@1-10.0.0.6:22-10.0.0.1:52618.service - OpenSSH per-connection server daemon (10.0.0.1:52618). Feb 13 20:28:06.253867 sshd[1556]: Accepted publickey for core from 10.0.0.1 port 52618 ssh2: RSA SHA256:TdhA+b7AFfsR49yUYzSzEma9Q+UWNVnDR75LIS9Grbw Feb 13 20:28:06.255159 sshd[1556]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 20:28:06.258733 systemd-logind[1425]: New session 2 of user core. Feb 13 20:28:06.272431 systemd[1]: Started session-2.scope - Session 2 of User core. Feb 13 20:28:06.323223 sshd[1556]: pam_unix(sshd:session): session closed for user core Feb 13 20:28:06.331396 systemd[1]: sshd@1-10.0.0.6:22-10.0.0.1:52618.service: Deactivated successfully. Feb 13 20:28:06.332514 systemd[1]: session-2.scope: Deactivated successfully. Feb 13 20:28:06.334363 systemd-logind[1425]: Session 2 logged out. Waiting for processes to exit. Feb 13 20:28:06.335485 systemd[1]: Started sshd@2-10.0.0.6:22-10.0.0.1:52626.service - OpenSSH per-connection server daemon (10.0.0.1:52626). Feb 13 20:28:06.336122 systemd-logind[1425]: Removed session 2. Feb 13 20:28:06.371769 sshd[1563]: Accepted publickey for core from 10.0.0.1 port 52626 ssh2: RSA SHA256:TdhA+b7AFfsR49yUYzSzEma9Q+UWNVnDR75LIS9Grbw Feb 13 20:28:06.372861 sshd[1563]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 20:28:06.376018 systemd-logind[1425]: New session 3 of user core. Feb 13 20:28:06.384466 systemd[1]: Started session-3.scope - Session 3 of User core. Feb 13 20:28:06.430623 sshd[1563]: pam_unix(sshd:session): session closed for user core Feb 13 20:28:06.439521 systemd[1]: sshd@2-10.0.0.6:22-10.0.0.1:52626.service: Deactivated successfully. Feb 13 20:28:06.440948 systemd[1]: session-3.scope: Deactivated successfully. Feb 13 20:28:06.442979 systemd-logind[1425]: Session 3 logged out. Waiting for processes to exit. Feb 13 20:28:06.443377 systemd[1]: Started sshd@3-10.0.0.6:22-10.0.0.1:52634.service - OpenSSH per-connection server daemon (10.0.0.1:52634). Feb 13 20:28:06.444543 systemd-logind[1425]: Removed session 3. Feb 13 20:28:06.480351 sshd[1570]: Accepted publickey for core from 10.0.0.1 port 52634 ssh2: RSA SHA256:TdhA+b7AFfsR49yUYzSzEma9Q+UWNVnDR75LIS9Grbw Feb 13 20:28:06.481544 sshd[1570]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 20:28:06.486475 systemd-logind[1425]: New session 4 of user core. Feb 13 20:28:06.498424 systemd[1]: Started session-4.scope - Session 4 of User core. Feb 13 20:28:06.548505 sshd[1570]: pam_unix(sshd:session): session closed for user core Feb 13 20:28:06.558513 systemd[1]: sshd@3-10.0.0.6:22-10.0.0.1:52634.service: Deactivated successfully. Feb 13 20:28:06.559810 systemd[1]: session-4.scope: Deactivated successfully. Feb 13 20:28:06.560933 systemd-logind[1425]: Session 4 logged out. Waiting for processes to exit. 
Feb 13 20:28:06.561980 systemd[1]: Started sshd@4-10.0.0.6:22-10.0.0.1:52648.service - OpenSSH per-connection server daemon (10.0.0.1:52648). Feb 13 20:28:06.562698 systemd-logind[1425]: Removed session 4. Feb 13 20:28:06.598132 sshd[1577]: Accepted publickey for core from 10.0.0.1 port 52648 ssh2: RSA SHA256:TdhA+b7AFfsR49yUYzSzEma9Q+UWNVnDR75LIS9Grbw Feb 13 20:28:06.599258 sshd[1577]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 20:28:06.602874 systemd-logind[1425]: New session 5 of user core. Feb 13 20:28:06.611415 systemd[1]: Started session-5.scope - Session 5 of User core. Feb 13 20:28:06.672419 sudo[1580]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Feb 13 20:28:06.672693 sudo[1580]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Feb 13 20:28:06.985538 systemd[1]: Starting docker.service - Docker Application Container Engine... Feb 13 20:28:06.985659 (dockerd)[1599]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Feb 13 20:28:07.261390 dockerd[1599]: time="2025-02-13T20:28:07.260153687Z" level=info msg="Starting up" Feb 13 20:28:07.333761 systemd[1]: var-lib-docker-check\x2doverlayfs\x2dsupport1630946554-merged.mount: Deactivated successfully. Feb 13 20:28:07.452398 dockerd[1599]: time="2025-02-13T20:28:07.452353195Z" level=info msg="Loading containers: start." Feb 13 20:28:07.541319 kernel: Initializing XFRM netlink socket Feb 13 20:28:07.605891 systemd-networkd[1387]: docker0: Link UP Feb 13 20:28:07.622496 dockerd[1599]: time="2025-02-13T20:28:07.622395920Z" level=info msg="Loading containers: done." Feb 13 20:28:07.637545 dockerd[1599]: time="2025-02-13T20:28:07.637495699Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Feb 13 20:28:07.637673 dockerd[1599]: time="2025-02-13T20:28:07.637598974Z" level=info msg="Docker daemon" commit=061aa95809be396a6b5542618d8a34b02a21ff77 containerd-snapshotter=false storage-driver=overlay2 version=26.1.0 Feb 13 20:28:07.637725 dockerd[1599]: time="2025-02-13T20:28:07.637707199Z" level=info msg="Daemon has completed initialization" Feb 13 20:28:07.664756 dockerd[1599]: time="2025-02-13T20:28:07.664630353Z" level=info msg="API listen on /run/docker.sock" Feb 13 20:28:07.664846 systemd[1]: Started docker.service - Docker Application Container Engine. Feb 13 20:28:08.162675 containerd[1443]: time="2025-02-13T20:28:08.162634447Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.2\"" Feb 13 20:28:08.895559 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount565239791.mount: Deactivated successfully. 
Feb 13 20:28:10.509599 containerd[1443]: time="2025-02-13T20:28:10.509550804Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.32.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 20:28:10.510362 containerd[1443]: time="2025-02-13T20:28:10.510096927Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.32.2: active requests=0, bytes read=26218238" Feb 13 20:28:10.511032 containerd[1443]: time="2025-02-13T20:28:10.510997581Z" level=info msg="ImageCreate event name:\"sha256:6417e1437b6d9a789e1ca789695a574e1df00a632bdbfbcae9695c9a7d500e32\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 20:28:10.513928 containerd[1443]: time="2025-02-13T20:28:10.513895284Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:c47449f3e751588ea0cb74e325e0f83db335a415f4f4c7fb147375dd6c84757f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 20:28:10.515380 containerd[1443]: time="2025-02-13T20:28:10.515341034Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.32.2\" with image id \"sha256:6417e1437b6d9a789e1ca789695a574e1df00a632bdbfbcae9695c9a7d500e32\", repo tag \"registry.k8s.io/kube-apiserver:v1.32.2\", repo digest \"registry.k8s.io/kube-apiserver@sha256:c47449f3e751588ea0cb74e325e0f83db335a415f4f4c7fb147375dd6c84757f\", size \"26215036\" in 2.352661819s" Feb 13 20:28:10.515380 containerd[1443]: time="2025-02-13T20:28:10.515377985Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.2\" returns image reference \"sha256:6417e1437b6d9a789e1ca789695a574e1df00a632bdbfbcae9695c9a7d500e32\"" Feb 13 20:28:10.516017 containerd[1443]: time="2025-02-13T20:28:10.515988406Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.2\"" Feb 13 20:28:12.552844 containerd[1443]: time="2025-02-13T20:28:12.552775495Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.32.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 20:28:12.553272 containerd[1443]: time="2025-02-13T20:28:12.553217355Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.32.2: active requests=0, bytes read=22528147" Feb 13 20:28:12.554254 containerd[1443]: time="2025-02-13T20:28:12.554218387Z" level=info msg="ImageCreate event name:\"sha256:3c9285acfd2ff7915bb451cc40ac060366ac519f3fef00c455f5aca0e0346c4d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 20:28:12.557107 containerd[1443]: time="2025-02-13T20:28:12.557056018Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:399aa50f4d1361c59dc458e634506d02de32613d03a9a614a21058741162ef90\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 20:28:12.558276 containerd[1443]: time="2025-02-13T20:28:12.558243600Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.32.2\" with image id \"sha256:3c9285acfd2ff7915bb451cc40ac060366ac519f3fef00c455f5aca0e0346c4d\", repo tag \"registry.k8s.io/kube-controller-manager:v1.32.2\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:399aa50f4d1361c59dc458e634506d02de32613d03a9a614a21058741162ef90\", size \"23968941\" in 2.042219599s" Feb 13 20:28:12.558334 containerd[1443]: time="2025-02-13T20:28:12.558280339Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.2\" returns image reference \"sha256:3c9285acfd2ff7915bb451cc40ac060366ac519f3fef00c455f5aca0e0346c4d\"" Feb 13 20:28:12.559188 
containerd[1443]: time="2025-02-13T20:28:12.559166477Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.2\"" Feb 13 20:28:13.559686 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Feb 13 20:28:13.573592 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Feb 13 20:28:13.706468 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Feb 13 20:28:13.710056 (kubelet)[1814]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Feb 13 20:28:13.743912 kubelet[1814]: E0213 20:28:13.743574 1814 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Feb 13 20:28:13.748213 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Feb 13 20:28:13.748375 systemd[1]: kubelet.service: Failed with result 'exit-code'. Feb 13 20:28:14.374955 containerd[1443]: time="2025-02-13T20:28:14.374903696Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.32.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 20:28:14.376519 containerd[1443]: time="2025-02-13T20:28:14.376452955Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.32.2: active requests=0, bytes read=17480802" Feb 13 20:28:14.377436 containerd[1443]: time="2025-02-13T20:28:14.377404450Z" level=info msg="ImageCreate event name:\"sha256:82dfa03f692fb5d84f66c17d6ee9126b081182152b25d28ea456d89b7d5d8911\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 20:28:14.380192 containerd[1443]: time="2025-02-13T20:28:14.380158210Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:45710d74cfd5aa10a001d0cf81747b77c28617444ffee0503d12f1dcd7450f76\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 20:28:14.381214 containerd[1443]: time="2025-02-13T20:28:14.381179253Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.32.2\" with image id \"sha256:82dfa03f692fb5d84f66c17d6ee9126b081182152b25d28ea456d89b7d5d8911\", repo tag \"registry.k8s.io/kube-scheduler:v1.32.2\", repo digest \"registry.k8s.io/kube-scheduler@sha256:45710d74cfd5aa10a001d0cf81747b77c28617444ffee0503d12f1dcd7450f76\", size \"18921614\" in 1.821983112s" Feb 13 20:28:14.381250 containerd[1443]: time="2025-02-13T20:28:14.381213570Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.2\" returns image reference \"sha256:82dfa03f692fb5d84f66c17d6ee9126b081182152b25d28ea456d89b7d5d8911\"" Feb 13 20:28:14.381864 containerd[1443]: time="2025-02-13T20:28:14.381591292Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.2\"" Feb 13 20:28:15.661089 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3018264241.mount: Deactivated successfully. 
Feb 13 20:28:15.988264 containerd[1443]: time="2025-02-13T20:28:15.988149358Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.32.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 20:28:15.989016 containerd[1443]: time="2025-02-13T20:28:15.988992930Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.32.2: active requests=0, bytes read=27363384" Feb 13 20:28:15.989976 containerd[1443]: time="2025-02-13T20:28:15.989919827Z" level=info msg="ImageCreate event name:\"sha256:e5aac5df76d9b8dc899ab8c4db25a7648e7fb25cafe7a155066247883c78f062\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 20:28:15.992447 containerd[1443]: time="2025-02-13T20:28:15.992402681Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:83c025f0faa6799fab6645102a98138e39a9a7db2be3bc792c79d72659b1805d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 20:28:15.993088 containerd[1443]: time="2025-02-13T20:28:15.992956767Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.32.2\" with image id \"sha256:e5aac5df76d9b8dc899ab8c4db25a7648e7fb25cafe7a155066247883c78f062\", repo tag \"registry.k8s.io/kube-proxy:v1.32.2\", repo digest \"registry.k8s.io/kube-proxy@sha256:83c025f0faa6799fab6645102a98138e39a9a7db2be3bc792c79d72659b1805d\", size \"27362401\" in 1.611245833s" Feb 13 20:28:15.993088 containerd[1443]: time="2025-02-13T20:28:15.992990240Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.2\" returns image reference \"sha256:e5aac5df76d9b8dc899ab8c4db25a7648e7fb25cafe7a155066247883c78f062\"" Feb 13 20:28:15.993547 containerd[1443]: time="2025-02-13T20:28:15.993521706Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\"" Feb 13 20:28:16.605342 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4218012498.mount: Deactivated successfully. 
Feb 13 20:28:17.473124 containerd[1443]: time="2025-02-13T20:28:17.473071413Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 20:28:17.473682 containerd[1443]: time="2025-02-13T20:28:17.473652699Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.3: active requests=0, bytes read=16951624" Feb 13 20:28:17.478453 containerd[1443]: time="2025-02-13T20:28:17.478403950Z" level=info msg="ImageCreate event name:\"sha256:2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 20:28:17.493384 containerd[1443]: time="2025-02-13T20:28:17.493282899Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 20:28:17.495110 containerd[1443]: time="2025-02-13T20:28:17.495078473Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.3\" with image id \"sha256:2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.3\", repo digest \"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\", size \"16948420\" in 1.501521529s" Feb 13 20:28:17.495327 containerd[1443]: time="2025-02-13T20:28:17.495196832Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\" returns image reference \"sha256:2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4\"" Feb 13 20:28:17.495925 containerd[1443]: time="2025-02-13T20:28:17.495776088Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" Feb 13 20:28:17.994239 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1999402612.mount: Deactivated successfully. 
Feb 13 20:28:17.998342 containerd[1443]: time="2025-02-13T20:28:17.998135116Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 20:28:17.999203 containerd[1443]: time="2025-02-13T20:28:17.999164519Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=268705" Feb 13 20:28:18.000097 containerd[1443]: time="2025-02-13T20:28:18.000057290Z" level=info msg="ImageCreate event name:\"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 20:28:18.003869 containerd[1443]: time="2025-02-13T20:28:18.002687766Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 20:28:18.003869 containerd[1443]: time="2025-02-13T20:28:18.003488847Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"267933\" in 507.683892ms" Feb 13 20:28:18.003869 containerd[1443]: time="2025-02-13T20:28:18.003514103Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\"" Feb 13 20:28:18.004221 containerd[1443]: time="2025-02-13T20:28:18.004195281Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\"" Feb 13 20:28:18.588945 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount400637323.mount: Deactivated successfully. Feb 13 20:28:21.806866 containerd[1443]: time="2025-02-13T20:28:21.806823079Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.16-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 20:28:21.807362 containerd[1443]: time="2025-02-13T20:28:21.807326843Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.16-0: active requests=0, bytes read=67812431" Feb 13 20:28:21.808208 containerd[1443]: time="2025-02-13T20:28:21.808185902Z" level=info msg="ImageCreate event name:\"sha256:7fc9d4aa817aa6a3e549f3cd49d1f7b496407be979fc36dd5f356d59ce8c3a82\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 20:28:21.811480 containerd[1443]: time="2025-02-13T20:28:21.811443951Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 20:28:21.814132 containerd[1443]: time="2025-02-13T20:28:21.814078968Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.16-0\" with image id \"sha256:7fc9d4aa817aa6a3e549f3cd49d1f7b496407be979fc36dd5f356d59ce8c3a82\", repo tag \"registry.k8s.io/etcd:3.5.16-0\", repo digest \"registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5\", size \"67941650\" in 3.809849016s" Feb 13 20:28:21.814132 containerd[1443]: time="2025-02-13T20:28:21.814120413Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\" returns image reference \"sha256:7fc9d4aa817aa6a3e549f3cd49d1f7b496407be979fc36dd5f356d59ce8c3a82\"" Feb 13 20:28:23.809881 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. 
Feb 13 20:28:23.819612 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Feb 13 20:28:23.924243 (kubelet)[1976]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Feb 13 20:28:23.924340 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Feb 13 20:28:23.972110 kubelet[1976]: E0213 20:28:23.972055 1976 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Feb 13 20:28:23.974218 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Feb 13 20:28:23.974368 systemd[1]: kubelet.service: Failed with result 'exit-code'. Feb 13 20:28:26.013569 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Feb 13 20:28:26.024508 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Feb 13 20:28:26.049247 systemd[1]: Reloading requested from client PID 1991 ('systemctl') (unit session-5.scope)... Feb 13 20:28:26.049264 systemd[1]: Reloading... Feb 13 20:28:26.110348 zram_generator::config[2030]: No configuration found. Feb 13 20:28:26.399517 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Feb 13 20:28:26.451671 systemd[1]: Reloading finished in 402 ms. Feb 13 20:28:26.499776 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Feb 13 20:28:26.499837 systemd[1]: kubelet.service: Failed with result 'signal'. Feb 13 20:28:26.500027 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Feb 13 20:28:26.502039 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Feb 13 20:28:26.604660 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Feb 13 20:28:26.608731 (kubelet)[2076]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Feb 13 20:28:26.643614 kubelet[2076]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Feb 13 20:28:26.643614 kubelet[2076]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Feb 13 20:28:26.643614 kubelet[2076]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
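[Editor's note] The "Referenced but unset environment variable" notices for KUBELET_EXTRA_ARGS and KUBELET_KUBEADM_ARGS come from the kubelet unit's drop-in. On kubeadm-managed systems that drop-in (10-kubeadm.conf) conventionally looks like the sketch below; Flatcar's actual unit may differ. The deprecated flags warned about above (--container-runtime-endpoint, --pod-infra-container-image, --volume-plugin-dir) are the ones kubeadm passes via kubeadm-flags.env rather than config.yaml:

    # Sketch of a typical kubeadm kubelet drop-in; illustrative only.
    [Service]
    Environment="KUBELET_KUBECONFIG_ARGS=--bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --kubeconfig=/etc/kubernetes/kubelet.conf"
    Environment="KUBELET_CONFIG_ARGS=--config=/var/lib/kubelet/config.yaml"
    # kubeadm writes the runtime flags seen in the deprecation warnings into this file:
    EnvironmentFile=-/var/lib/kubelet/kubeadm-flags.env
    EnvironmentFile=-/etc/default/kubelet
    ExecStart=
    ExecStart=/usr/bin/kubelet $KUBELET_KUBECONFIG_ARGS $KUBELET_CONFIG_ARGS $KUBELET_KUBEADM_ARGS $KUBELET_EXTRA_ARGS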
Feb 13 20:28:26.643963 kubelet[2076]: I0213 20:28:26.643671 2076 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Feb 13 20:28:27.621326 kubelet[2076]: I0213 20:28:27.620543 2076 server.go:520] "Kubelet version" kubeletVersion="v1.32.0" Feb 13 20:28:27.621326 kubelet[2076]: I0213 20:28:27.620577 2076 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Feb 13 20:28:27.621326 kubelet[2076]: I0213 20:28:27.620867 2076 server.go:954] "Client rotation is on, will bootstrap in background" Feb 13 20:28:27.668939 kubelet[2076]: E0213 20:28:27.668890 2076 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://10.0.0.6:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.6:6443: connect: connection refused" logger="UnhandledError" Feb 13 20:28:27.670774 kubelet[2076]: I0213 20:28:27.670742 2076 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Feb 13 20:28:27.680086 kubelet[2076]: E0213 20:28:27.680040 2076 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Feb 13 20:28:27.680086 kubelet[2076]: I0213 20:28:27.680069 2076 server.go:1421] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Feb 13 20:28:27.682611 kubelet[2076]: I0213 20:28:27.682593 2076 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Feb 13 20:28:27.683252 kubelet[2076]: I0213 20:28:27.683207 2076 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Feb 13 20:28:27.683460 kubelet[2076]: I0213 20:28:27.683249 2076 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Feb 13 20:28:27.683540 kubelet[2076]: I0213 20:28:27.683531 2076 topology_manager.go:138] "Creating topology manager with none policy" Feb 13 20:28:27.683573 kubelet[2076]: I0213 20:28:27.683542 2076 container_manager_linux.go:304] "Creating device plugin manager" Feb 13 20:28:27.683749 kubelet[2076]: I0213 20:28:27.683734 2076 state_mem.go:36] "Initialized new in-memory state store" Feb 13 20:28:27.687972 kubelet[2076]: I0213 20:28:27.687928 2076 kubelet.go:446] "Attempting to sync node with API server" Feb 13 20:28:27.687972 kubelet[2076]: I0213 20:28:27.687954 2076 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests" Feb 13 20:28:27.687972 kubelet[2076]: I0213 20:28:27.687974 2076 kubelet.go:352] "Adding apiserver pod source" Feb 13 20:28:27.688115 kubelet[2076]: I0213 20:28:27.687984 2076 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Feb 13 20:28:27.690950 kubelet[2076]: W0213 20:28:27.690894 2076 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.6:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.6:6443: connect: connection refused Feb 13 20:28:27.690989 kubelet[2076]: E0213 20:28:27.690954 2076 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.0.0.6:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.6:6443: connect: connection refused" logger="UnhandledError" Feb 13 20:28:27.691597 kubelet[2076]: W0213 20:28:27.691546 2076 reflector.go:569] 
k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.6:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.0.0.6:6443: connect: connection refused Feb 13 20:28:27.691634 kubelet[2076]: E0213 20:28:27.691597 2076 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.0.0.6:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.6:6443: connect: connection refused" logger="UnhandledError" Feb 13 20:28:27.693188 kubelet[2076]: I0213 20:28:27.692723 2076 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Feb 13 20:28:27.693323 kubelet[2076]: I0213 20:28:27.693277 2076 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Feb 13 20:28:27.693428 kubelet[2076]: W0213 20:28:27.693415 2076 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Feb 13 20:28:27.694215 kubelet[2076]: I0213 20:28:27.694188 2076 watchdog_linux.go:99] "Systemd watchdog is not enabled" Feb 13 20:28:27.694275 kubelet[2076]: I0213 20:28:27.694225 2076 server.go:1287] "Started kubelet" Feb 13 20:28:27.697924 kubelet[2076]: I0213 20:28:27.695911 2076 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Feb 13 20:28:27.697924 kubelet[2076]: I0213 20:28:27.696112 2076 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Feb 13 20:28:27.697924 kubelet[2076]: I0213 20:28:27.696168 2076 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Feb 13 20:28:27.697924 kubelet[2076]: I0213 20:28:27.696218 2076 server.go:169] "Starting to listen" address="0.0.0.0" port=10250 Feb 13 20:28:27.697924 kubelet[2076]: I0213 20:28:27.697077 2076 server.go:490] "Adding debug handlers to kubelet server" Feb 13 20:28:27.700896 kubelet[2076]: I0213 20:28:27.700130 2076 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Feb 13 20:28:27.700989 kubelet[2076]: I0213 20:28:27.700968 2076 volume_manager.go:297] "Starting Kubelet Volume Manager" Feb 13 20:28:27.701893 kubelet[2076]: E0213 20:28:27.701870 2076 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"localhost\" not found" Feb 13 20:28:27.702170 kubelet[2076]: E0213 20:28:27.702127 2076 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.6:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.6:6443: connect: connection refused" interval="200ms" Feb 13 20:28:27.703426 kubelet[2076]: E0213 20:28:27.703072 2076 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.6:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.6:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.1823de7c918a1ebf default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting 
kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-02-13 20:28:27.694202559 +0000 UTC m=+1.082465113,LastTimestamp:2025-02-13 20:28:27.694202559 +0000 UTC m=+1.082465113,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Feb 13 20:28:27.703535 kubelet[2076]: E0213 20:28:27.703483 2076 kubelet.go:1561] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Feb 13 20:28:27.703535 kubelet[2076]: I0213 20:28:27.703521 2076 desired_state_of_world_populator.go:149] "Desired state populator starts to run" Feb 13 20:28:27.703631 kubelet[2076]: I0213 20:28:27.703572 2076 reconciler.go:26] "Reconciler: start to sync state" Feb 13 20:28:27.703897 kubelet[2076]: W0213 20:28:27.703857 2076 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.6:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.6:6443: connect: connection refused Feb 13 20:28:27.703965 kubelet[2076]: E0213 20:28:27.703901 2076 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.0.0.6:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.6:6443: connect: connection refused" logger="UnhandledError" Feb 13 20:28:27.705871 kubelet[2076]: I0213 20:28:27.705845 2076 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Feb 13 20:28:27.707183 kubelet[2076]: I0213 20:28:27.707161 2076 factory.go:221] Registration of the containerd container factory successfully Feb 13 20:28:27.707183 kubelet[2076]: I0213 20:28:27.707176 2076 factory.go:221] Registration of the systemd container factory successfully Feb 13 20:28:27.714682 kubelet[2076]: I0213 20:28:27.714645 2076 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Feb 13 20:28:27.715788 kubelet[2076]: I0213 20:28:27.715759 2076 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Feb 13 20:28:27.715898 kubelet[2076]: I0213 20:28:27.715886 2076 status_manager.go:227] "Starting to sync pod status with apiserver" Feb 13 20:28:27.715958 kubelet[2076]: I0213 20:28:27.715949 2076 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." 
Feb 13 20:28:27.716016 kubelet[2076]: I0213 20:28:27.716007 2076 kubelet.go:2388] "Starting kubelet main sync loop" Feb 13 20:28:27.716123 kubelet[2076]: E0213 20:28:27.716105 2076 kubelet.go:2412] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Feb 13 20:28:27.717614 kubelet[2076]: W0213 20:28:27.717552 2076 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.6:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.6:6443: connect: connection refused Feb 13 20:28:27.717739 kubelet[2076]: E0213 20:28:27.717720 2076 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.0.0.6:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.6:6443: connect: connection refused" logger="UnhandledError" Feb 13 20:28:27.718969 kubelet[2076]: I0213 20:28:27.718948 2076 cpu_manager.go:221] "Starting CPU manager" policy="none" Feb 13 20:28:27.718969 kubelet[2076]: I0213 20:28:27.718964 2076 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Feb 13 20:28:27.718969 kubelet[2076]: I0213 20:28:27.718979 2076 state_mem.go:36] "Initialized new in-memory state store" Feb 13 20:28:27.720665 kubelet[2076]: I0213 20:28:27.720639 2076 policy_none.go:49] "None policy: Start" Feb 13 20:28:27.720665 kubelet[2076]: I0213 20:28:27.720660 2076 memory_manager.go:186] "Starting memorymanager" policy="None" Feb 13 20:28:27.720740 kubelet[2076]: I0213 20:28:27.720672 2076 state_mem.go:35] "Initializing new in-memory state store" Feb 13 20:28:27.725311 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Feb 13 20:28:27.746666 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Feb 13 20:28:27.749182 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. Feb 13 20:28:27.761051 kubelet[2076]: I0213 20:28:27.761023 2076 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Feb 13 20:28:27.761427 kubelet[2076]: I0213 20:28:27.761213 2076 eviction_manager.go:189] "Eviction manager: starting control loop" Feb 13 20:28:27.761427 kubelet[2076]: I0213 20:28:27.761234 2076 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Feb 13 20:28:27.761773 kubelet[2076]: I0213 20:28:27.761694 2076 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Feb 13 20:28:27.762808 kubelet[2076]: E0213 20:28:27.762774 2076 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Feb 13 20:28:27.762875 kubelet[2076]: E0213 20:28:27.762822 2076 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Feb 13 20:28:27.823262 systemd[1]: Created slice kubepods-burstable-podc72911152bbceda2f57fd8d59261e015.slice - libcontainer container kubepods-burstable-podc72911152bbceda2f57fd8d59261e015.slice. 
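[Editor's note] The kubepods-burstable-pod<uid>.slice units created here and just below are the per-pod cgroups for the control-plane static pods the kubelet found under its static pod path (/etc/kubernetes/manifests, per the "Adding static pod path" line above), and the "No need to create a mirror pod" errors that follow are the kubelet trying to register mirror pods for them while the API server at 10.0.0.6:6443 is still unreachable. For reference, a minimal static Pod manifest dropped into that directory would look like this sketch (name and image are illustrative):

    apiVersion: v1
    kind: Pod
    metadata:
      name: example-static-pod   # mirror pod is named <name>-<nodename>, e.g. example-static-pod-localhost
      namespace: kube-system
    spec:
      containers:
      - name: pause
        image: registry.k8s.io/pause:3.8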
Feb 13 20:28:27.837996 kubelet[2076]: E0213 20:28:27.837956 2076 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Feb 13 20:28:27.840623 systemd[1]: Created slice kubepods-burstable-pod84f635b79f6f22dd358ac5ed607d1438.slice - libcontainer container kubepods-burstable-pod84f635b79f6f22dd358ac5ed607d1438.slice. Feb 13 20:28:27.857403 kubelet[2076]: E0213 20:28:27.857326 2076 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Feb 13 20:28:27.858801 systemd[1]: Created slice kubepods-burstable-pod95ef9ac46cd4dbaadc63cb713310ae59.slice - libcontainer container kubepods-burstable-pod95ef9ac46cd4dbaadc63cb713310ae59.slice. Feb 13 20:28:27.860453 kubelet[2076]: E0213 20:28:27.860413 2076 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Feb 13 20:28:27.862336 kubelet[2076]: I0213 20:28:27.862309 2076 kubelet_node_status.go:76] "Attempting to register node" node="localhost" Feb 13 20:28:27.862868 kubelet[2076]: E0213 20:28:27.862826 2076 kubelet_node_status.go:108] "Unable to register node with API server" err="Post \"https://10.0.0.6:6443/api/v1/nodes\": dial tcp 10.0.0.6:6443: connect: connection refused" node="localhost" Feb 13 20:28:27.903455 kubelet[2076]: E0213 20:28:27.903328 2076 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.6:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.6:6443: connect: connection refused" interval="400ms" Feb 13 20:28:27.905698 kubelet[2076]: I0213 20:28:27.905543 2076 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/c72911152bbceda2f57fd8d59261e015-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"c72911152bbceda2f57fd8d59261e015\") " pod="kube-system/kube-controller-manager-localhost" Feb 13 20:28:27.905698 kubelet[2076]: I0213 20:28:27.905585 2076 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/c72911152bbceda2f57fd8d59261e015-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"c72911152bbceda2f57fd8d59261e015\") " pod="kube-system/kube-controller-manager-localhost" Feb 13 20:28:27.905698 kubelet[2076]: I0213 20:28:27.905602 2076 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/c72911152bbceda2f57fd8d59261e015-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"c72911152bbceda2f57fd8d59261e015\") " pod="kube-system/kube-controller-manager-localhost" Feb 13 20:28:27.905698 kubelet[2076]: I0213 20:28:27.905618 2076 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/95ef9ac46cd4dbaadc63cb713310ae59-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"95ef9ac46cd4dbaadc63cb713310ae59\") " pod="kube-system/kube-scheduler-localhost" Feb 13 20:28:27.905698 kubelet[2076]: I0213 20:28:27.905633 2076 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: 
\"kubernetes.io/host-path/84f635b79f6f22dd358ac5ed607d1438-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"84f635b79f6f22dd358ac5ed607d1438\") " pod="kube-system/kube-apiserver-localhost" Feb 13 20:28:27.905871 kubelet[2076]: I0213 20:28:27.905648 2076 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/84f635b79f6f22dd358ac5ed607d1438-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"84f635b79f6f22dd358ac5ed607d1438\") " pod="kube-system/kube-apiserver-localhost" Feb 13 20:28:27.905871 kubelet[2076]: I0213 20:28:27.905664 2076 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/c72911152bbceda2f57fd8d59261e015-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"c72911152bbceda2f57fd8d59261e015\") " pod="kube-system/kube-controller-manager-localhost" Feb 13 20:28:27.905871 kubelet[2076]: I0213 20:28:27.905682 2076 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/c72911152bbceda2f57fd8d59261e015-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"c72911152bbceda2f57fd8d59261e015\") " pod="kube-system/kube-controller-manager-localhost" Feb 13 20:28:27.905871 kubelet[2076]: I0213 20:28:27.905696 2076 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/84f635b79f6f22dd358ac5ed607d1438-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"84f635b79f6f22dd358ac5ed607d1438\") " pod="kube-system/kube-apiserver-localhost" Feb 13 20:28:28.064856 kubelet[2076]: I0213 20:28:28.064830 2076 kubelet_node_status.go:76] "Attempting to register node" node="localhost" Feb 13 20:28:28.065155 kubelet[2076]: E0213 20:28:28.065130 2076 kubelet_node_status.go:108] "Unable to register node with API server" err="Post \"https://10.0.0.6:6443/api/v1/nodes\": dial tcp 10.0.0.6:6443: connect: connection refused" node="localhost" Feb 13 20:28:28.138794 kubelet[2076]: E0213 20:28:28.138761 2076 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 20:28:28.141208 containerd[1443]: time="2025-02-13T20:28:28.141133299Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:c72911152bbceda2f57fd8d59261e015,Namespace:kube-system,Attempt:0,}" Feb 13 20:28:28.158077 kubelet[2076]: E0213 20:28:28.157989 2076 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 20:28:28.159061 containerd[1443]: time="2025-02-13T20:28:28.159030891Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:84f635b79f6f22dd358ac5ed607d1438,Namespace:kube-system,Attempt:0,}" Feb 13 20:28:28.161592 kubelet[2076]: E0213 20:28:28.161556 2076 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 20:28:28.161889 containerd[1443]: time="2025-02-13T20:28:28.161859937Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:95ef9ac46cd4dbaadc63cb713310ae59,Namespace:kube-system,Attempt:0,}" Feb 13 20:28:28.304729 kubelet[2076]: E0213 20:28:28.304700 2076 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.6:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.6:6443: connect: connection refused" interval="800ms" Feb 13 20:28:28.466484 kubelet[2076]: I0213 20:28:28.466407 2076 kubelet_node_status.go:76] "Attempting to register node" node="localhost" Feb 13 20:28:28.466728 kubelet[2076]: E0213 20:28:28.466694 2076 kubelet_node_status.go:108] "Unable to register node with API server" err="Post \"https://10.0.0.6:6443/api/v1/nodes\": dial tcp 10.0.0.6:6443: connect: connection refused" node="localhost" Feb 13 20:28:28.553694 kubelet[2076]: W0213 20:28:28.553579 2076 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.6:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.6:6443: connect: connection refused Feb 13 20:28:28.553694 kubelet[2076]: E0213 20:28:28.553657 2076 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.0.0.6:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.6:6443: connect: connection refused" logger="UnhandledError" Feb 13 20:28:28.604456 kubelet[2076]: W0213 20:28:28.604361 2076 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.6:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.6:6443: connect: connection refused Feb 13 20:28:28.604456 kubelet[2076]: E0213 20:28:28.604426 2076 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.0.0.6:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.6:6443: connect: connection refused" logger="UnhandledError" Feb 13 20:28:28.763584 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount828259467.mount: Deactivated successfully. 
Feb 13 20:28:28.769254 containerd[1443]: time="2025-02-13T20:28:28.769203884Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Feb 13 20:28:28.769960 containerd[1443]: time="2025-02-13T20:28:28.769912994Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Feb 13 20:28:28.770770 containerd[1443]: time="2025-02-13T20:28:28.770705533Z" level=info msg="ImageCreate event name:\"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Feb 13 20:28:28.772385 containerd[1443]: time="2025-02-13T20:28:28.772350905Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Feb 13 20:28:28.773340 containerd[1443]: time="2025-02-13T20:28:28.773311181Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Feb 13 20:28:28.774918 containerd[1443]: time="2025-02-13T20:28:28.774811391Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Feb 13 20:28:28.775690 containerd[1443]: time="2025-02-13T20:28:28.775660908Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=269175" Feb 13 20:28:28.776482 containerd[1443]: time="2025-02-13T20:28:28.776420563Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Feb 13 20:28:28.777580 containerd[1443]: time="2025-02-13T20:28:28.777491719Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 615.570368ms" Feb 13 20:28:28.778950 containerd[1443]: time="2025-02-13T20:28:28.778913894Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 619.823866ms" Feb 13 20:28:28.781620 containerd[1443]: time="2025-02-13T20:28:28.781582434Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 639.890382ms" Feb 13 20:28:28.797430 kubelet[2076]: W0213 20:28:28.797104 2076 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.6:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.0.0.6:6443: connect: connection refused Feb 13 
20:28:28.797430 kubelet[2076]: E0213 20:28:28.797178 2076 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.0.0.6:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.6:6443: connect: connection refused" logger="UnhandledError" Feb 13 20:28:28.917528 containerd[1443]: time="2025-02-13T20:28:28.917366453Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 20:28:28.917528 containerd[1443]: time="2025-02-13T20:28:28.917455156Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 20:28:28.917528 containerd[1443]: time="2025-02-13T20:28:28.917480089Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 20:28:28.917835 containerd[1443]: time="2025-02-13T20:28:28.917596123Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 20:28:28.918260 containerd[1443]: time="2025-02-13T20:28:28.918175853Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 20:28:28.918260 containerd[1443]: time="2025-02-13T20:28:28.918232831Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 20:28:28.918351 containerd[1443]: time="2025-02-13T20:28:28.918256685Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 20:28:28.918402 containerd[1443]: time="2025-02-13T20:28:28.918337797Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 20:28:28.918402 containerd[1443]: time="2025-02-13T20:28:28.918391419Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 20:28:28.918402 containerd[1443]: time="2025-02-13T20:28:28.918394256Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 20:28:28.918483 containerd[1443]: time="2025-02-13T20:28:28.918402886Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 20:28:28.918508 containerd[1443]: time="2025-02-13T20:28:28.918483998Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 20:28:28.942482 systemd[1]: Started cri-containerd-5d4df52ed08af4d16d77bce06be46ab7a1f11dd128030f9e30a3aec310cdadee.scope - libcontainer container 5d4df52ed08af4d16d77bce06be46ab7a1f11dd128030f9e30a3aec310cdadee. Feb 13 20:28:28.943563 systemd[1]: Started cri-containerd-ff258226dc696bc94dc810bb434d743ed4655138fa28423bc0ed2a386e73ac97.scope - libcontainer container ff258226dc696bc94dc810bb434d743ed4655138fa28423bc0ed2a386e73ac97. Feb 13 20:28:28.946819 systemd[1]: Started cri-containerd-e5d77f8572a052e9dd10ac4bfa41e6efab340c9f48ef0c945badc5e8ca247cfb.scope - libcontainer container e5d77f8572a052e9dd10ac4bfa41e6efab340c9f48ef0c945badc5e8ca247cfb. 
Feb 13 20:28:28.972085 containerd[1443]: time="2025-02-13T20:28:28.972038247Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:84f635b79f6f22dd358ac5ed607d1438,Namespace:kube-system,Attempt:0,} returns sandbox id \"5d4df52ed08af4d16d77bce06be46ab7a1f11dd128030f9e30a3aec310cdadee\"" Feb 13 20:28:28.973806 kubelet[2076]: E0213 20:28:28.973782 2076 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 20:28:28.986916 containerd[1443]: time="2025-02-13T20:28:28.981877476Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:c72911152bbceda2f57fd8d59261e015,Namespace:kube-system,Attempt:0,} returns sandbox id \"ff258226dc696bc94dc810bb434d743ed4655138fa28423bc0ed2a386e73ac97\"" Feb 13 20:28:28.986916 containerd[1443]: time="2025-02-13T20:28:28.982697385Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:95ef9ac46cd4dbaadc63cb713310ae59,Namespace:kube-system,Attempt:0,} returns sandbox id \"e5d77f8572a052e9dd10ac4bfa41e6efab340c9f48ef0c945badc5e8ca247cfb\"" Feb 13 20:28:28.987078 kubelet[2076]: E0213 20:28:28.982835 2076 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 20:28:28.987178 containerd[1443]: time="2025-02-13T20:28:28.987140517Z" level=info msg="CreateContainer within sandbox \"5d4df52ed08af4d16d77bce06be46ab7a1f11dd128030f9e30a3aec310cdadee\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Feb 13 20:28:28.987494 kubelet[2076]: E0213 20:28:28.987472 2076 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 20:28:28.988811 containerd[1443]: time="2025-02-13T20:28:28.988778417Z" level=info msg="CreateContainer within sandbox \"e5d77f8572a052e9dd10ac4bfa41e6efab340c9f48ef0c945badc5e8ca247cfb\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Feb 13 20:28:28.990613 containerd[1443]: time="2025-02-13T20:28:28.990514691Z" level=info msg="CreateContainer within sandbox \"ff258226dc696bc94dc810bb434d743ed4655138fa28423bc0ed2a386e73ac97\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Feb 13 20:28:29.008695 containerd[1443]: time="2025-02-13T20:28:29.008658217Z" level=info msg="CreateContainer within sandbox \"e5d77f8572a052e9dd10ac4bfa41e6efab340c9f48ef0c945badc5e8ca247cfb\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"a5765304f07d3d41745320005d031d13a4b20442a60617555cce6a2aa7ab6386\"" Feb 13 20:28:29.009366 containerd[1443]: time="2025-02-13T20:28:29.009251173Z" level=info msg="StartContainer for \"a5765304f07d3d41745320005d031d13a4b20442a60617555cce6a2aa7ab6386\"" Feb 13 20:28:29.009440 containerd[1443]: time="2025-02-13T20:28:29.009419853Z" level=info msg="CreateContainer within sandbox \"5d4df52ed08af4d16d77bce06be46ab7a1f11dd128030f9e30a3aec310cdadee\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"f595d42be1439da30fb8781dac1b35493d8f122deea0856e71e3af64b2752136\"" Feb 13 20:28:29.009957 containerd[1443]: time="2025-02-13T20:28:29.009917220Z" level=info msg="StartContainer for \"f595d42be1439da30fb8781dac1b35493d8f122deea0856e71e3af64b2752136\"" Feb 13 20:28:29.010663 
containerd[1443]: time="2025-02-13T20:28:29.010572277Z" level=info msg="CreateContainer within sandbox \"ff258226dc696bc94dc810bb434d743ed4655138fa28423bc0ed2a386e73ac97\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"1a9016502abee23e05524100b854900f78bd5bc5c9ef7c556afa73741748cf51\"" Feb 13 20:28:29.010913 containerd[1443]: time="2025-02-13T20:28:29.010885140Z" level=info msg="StartContainer for \"1a9016502abee23e05524100b854900f78bd5bc5c9ef7c556afa73741748cf51\"" Feb 13 20:28:29.049474 systemd[1]: Started cri-containerd-a5765304f07d3d41745320005d031d13a4b20442a60617555cce6a2aa7ab6386.scope - libcontainer container a5765304f07d3d41745320005d031d13a4b20442a60617555cce6a2aa7ab6386. Feb 13 20:28:29.051047 systemd[1]: Started cri-containerd-f595d42be1439da30fb8781dac1b35493d8f122deea0856e71e3af64b2752136.scope - libcontainer container f595d42be1439da30fb8781dac1b35493d8f122deea0856e71e3af64b2752136. Feb 13 20:28:29.055427 systemd[1]: Started cri-containerd-1a9016502abee23e05524100b854900f78bd5bc5c9ef7c556afa73741748cf51.scope - libcontainer container 1a9016502abee23e05524100b854900f78bd5bc5c9ef7c556afa73741748cf51. Feb 13 20:28:29.108944 kubelet[2076]: E0213 20:28:29.105577 2076 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.6:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.6:6443: connect: connection refused" interval="1.6s" Feb 13 20:28:29.116806 containerd[1443]: time="2025-02-13T20:28:29.116635130Z" level=info msg="StartContainer for \"f595d42be1439da30fb8781dac1b35493d8f122deea0856e71e3af64b2752136\" returns successfully" Feb 13 20:28:29.116887 containerd[1443]: time="2025-02-13T20:28:29.116854601Z" level=info msg="StartContainer for \"1a9016502abee23e05524100b854900f78bd5bc5c9ef7c556afa73741748cf51\" returns successfully" Feb 13 20:28:29.116935 containerd[1443]: time="2025-02-13T20:28:29.116895443Z" level=info msg="StartContainer for \"a5765304f07d3d41745320005d031d13a4b20442a60617555cce6a2aa7ab6386\" returns successfully" Feb 13 20:28:29.175969 kubelet[2076]: W0213 20:28:29.171243 2076 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.6:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.6:6443: connect: connection refused Feb 13 20:28:29.175969 kubelet[2076]: E0213 20:28:29.171285 2076 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.0.0.6:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.6:6443: connect: connection refused" logger="UnhandledError" Feb 13 20:28:29.270731 kubelet[2076]: I0213 20:28:29.270688 2076 kubelet_node_status.go:76] "Attempting to register node" node="localhost" Feb 13 20:28:29.271413 kubelet[2076]: E0213 20:28:29.271286 2076 kubelet_node_status.go:108] "Unable to register node with API server" err="Post \"https://10.0.0.6:6443/api/v1/nodes\": dial tcp 10.0.0.6:6443: connect: connection refused" node="localhost" Feb 13 20:28:29.735323 kubelet[2076]: E0213 20:28:29.735282 2076 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Feb 13 20:28:29.735443 kubelet[2076]: E0213 20:28:29.735424 2076 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some 
nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 20:28:29.739485 kubelet[2076]: E0213 20:28:29.739461 2076 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Feb 13 20:28:29.739583 kubelet[2076]: E0213 20:28:29.739567 2076 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 20:28:29.739894 kubelet[2076]: E0213 20:28:29.739848 2076 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Feb 13 20:28:29.740018 kubelet[2076]: E0213 20:28:29.740002 2076 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 20:28:30.745888 kubelet[2076]: E0213 20:28:30.745860 2076 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Feb 13 20:28:30.746236 kubelet[2076]: E0213 20:28:30.745982 2076 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 20:28:30.746236 kubelet[2076]: E0213 20:28:30.746177 2076 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Feb 13 20:28:30.746283 kubelet[2076]: E0213 20:28:30.746247 2076 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 20:28:30.872677 kubelet[2076]: I0213 20:28:30.872445 2076 kubelet_node_status.go:76] "Attempting to register node" node="localhost" Feb 13 20:28:30.915856 kubelet[2076]: E0213 20:28:30.915817 2076 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"localhost\" not found" node="localhost" Feb 13 20:28:30.988839 kubelet[2076]: I0213 20:28:30.988290 2076 kubelet_node_status.go:79] "Successfully registered node" node="localhost" Feb 13 20:28:31.002498 kubelet[2076]: I0213 20:28:31.002375 2076 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" Feb 13 20:28:31.008716 kubelet[2076]: E0213 20:28:31.008672 2076 kubelet.go:3202] "Failed creating a mirror pod" err="pods \"kube-controller-manager-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-localhost" Feb 13 20:28:31.008716 kubelet[2076]: I0213 20:28:31.008706 2076 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Feb 13 20:28:31.010322 kubelet[2076]: E0213 20:28:31.010268 2076 kubelet.go:3202] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-localhost" Feb 13 20:28:31.010322 kubelet[2076]: I0213 20:28:31.010289 2076 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Feb 13 20:28:31.011879 kubelet[2076]: E0213 20:28:31.011856 2076 kubelet.go:3202] "Failed creating a mirror pod" 
err="pods \"kube-apiserver-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-localhost" Feb 13 20:28:31.219617 kubelet[2076]: I0213 20:28:31.219578 2076 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" Feb 13 20:28:31.221655 kubelet[2076]: E0213 20:28:31.221604 2076 kubelet.go:3202] "Failed creating a mirror pod" err="pods \"kube-controller-manager-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-localhost" Feb 13 20:28:31.221779 kubelet[2076]: E0213 20:28:31.221761 2076 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 20:28:31.693670 kubelet[2076]: I0213 20:28:31.693577 2076 apiserver.go:52] "Watching apiserver" Feb 13 20:28:31.704150 kubelet[2076]: I0213 20:28:31.704117 2076 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world" Feb 13 20:28:32.424606 kubelet[2076]: I0213 20:28:32.424570 2076 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Feb 13 20:28:32.430364 kubelet[2076]: E0213 20:28:32.430333 2076 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 20:28:32.747808 kubelet[2076]: E0213 20:28:32.747703 2076 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 20:28:33.077400 systemd[1]: Reloading requested from client PID 2356 ('systemctl') (unit session-5.scope)... Feb 13 20:28:33.077416 systemd[1]: Reloading... Feb 13 20:28:33.141376 zram_generator::config[2395]: No configuration found. Feb 13 20:28:33.221775 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Feb 13 20:28:33.289166 systemd[1]: Reloading finished in 211 ms. Feb 13 20:28:33.325086 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Feb 13 20:28:33.343799 systemd[1]: kubelet.service: Deactivated successfully. Feb 13 20:28:33.344171 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Feb 13 20:28:33.344228 systemd[1]: kubelet.service: Consumed 1.515s CPU time, 125.4M memory peak, 0B memory swap peak. Feb 13 20:28:33.354535 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Feb 13 20:28:33.451246 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Feb 13 20:28:33.456049 (kubelet)[2437]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Feb 13 20:28:33.490709 kubelet[2437]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Feb 13 20:28:33.490709 kubelet[2437]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. 
Feb 13 20:28:33.491248 kubelet[2437]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Feb 13 20:28:33.491248 kubelet[2437]: I0213 20:28:33.491044 2437 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Feb 13 20:28:33.497108 kubelet[2437]: I0213 20:28:33.497071 2437 server.go:520] "Kubelet version" kubeletVersion="v1.32.0" Feb 13 20:28:33.497108 kubelet[2437]: I0213 20:28:33.497099 2437 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Feb 13 20:28:33.497339 kubelet[2437]: I0213 20:28:33.497319 2437 server.go:954] "Client rotation is on, will bootstrap in background" Feb 13 20:28:33.498524 kubelet[2437]: I0213 20:28:33.498502 2437 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Feb 13 20:28:33.500652 kubelet[2437]: I0213 20:28:33.500623 2437 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Feb 13 20:28:33.504535 kubelet[2437]: E0213 20:28:33.504498 2437 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Feb 13 20:28:33.504535 kubelet[2437]: I0213 20:28:33.504528 2437 server.go:1421] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Feb 13 20:28:33.506932 kubelet[2437]: I0213 20:28:33.506909 2437 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Feb 13 20:28:33.507139 kubelet[2437]: I0213 20:28:33.507107 2437 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Feb 13 20:28:33.507310 kubelet[2437]: I0213 20:28:33.507133 2437 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Feb 13 20:28:33.507393 kubelet[2437]: I0213 20:28:33.507347 2437 topology_manager.go:138] "Creating topology manager with none policy" Feb 13 20:28:33.507423 kubelet[2437]: I0213 20:28:33.507396 2437 container_manager_linux.go:304] "Creating device plugin manager" Feb 13 20:28:33.507457 kubelet[2437]: I0213 20:28:33.507447 2437 state_mem.go:36] "Initialized new in-memory state store" Feb 13 20:28:33.507613 kubelet[2437]: I0213 20:28:33.507592 2437 kubelet.go:446] "Attempting to sync node with API server" Feb 13 20:28:33.507613 kubelet[2437]: I0213 20:28:33.507610 2437 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests" Feb 13 20:28:33.507715 kubelet[2437]: I0213 20:28:33.507626 2437 kubelet.go:352] "Adding apiserver pod source" Feb 13 20:28:33.510395 kubelet[2437]: I0213 20:28:33.507635 2437 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Feb 13 20:28:33.511170 kubelet[2437]: I0213 20:28:33.510939 2437 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Feb 13 20:28:33.511519 kubelet[2437]: I0213 20:28:33.511486 2437 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Feb 13 20:28:33.514093 kubelet[2437]: I0213 20:28:33.511893 2437 watchdog_linux.go:99] "Systemd watchdog is not enabled" Feb 13 20:28:33.514093 kubelet[2437]: I0213 20:28:33.511924 2437 server.go:1287] "Started kubelet" Feb 13 20:28:33.514093 kubelet[2437]: I0213 20:28:33.512947 2437 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Feb 13 20:28:33.514093 kubelet[2437]: I0213 
20:28:33.513169 2437 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Feb 13 20:28:33.514093 kubelet[2437]: I0213 20:28:33.513213 2437 server.go:169] "Starting to listen" address="0.0.0.0" port=10250 Feb 13 20:28:33.514093 kubelet[2437]: I0213 20:28:33.514039 2437 server.go:490] "Adding debug handlers to kubelet server" Feb 13 20:28:33.516052 kubelet[2437]: I0213 20:28:33.515936 2437 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Feb 13 20:28:33.516180 kubelet[2437]: E0213 20:28:33.516077 2437 kubelet.go:1561] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Feb 13 20:28:33.518362 kubelet[2437]: I0213 20:28:33.517140 2437 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Feb 13 20:28:33.518362 kubelet[2437]: E0213 20:28:33.517615 2437 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"localhost\" not found" Feb 13 20:28:33.518362 kubelet[2437]: I0213 20:28:33.517644 2437 volume_manager.go:297] "Starting Kubelet Volume Manager" Feb 13 20:28:33.518362 kubelet[2437]: I0213 20:28:33.517809 2437 desired_state_of_world_populator.go:149] "Desired state populator starts to run" Feb 13 20:28:33.518362 kubelet[2437]: I0213 20:28:33.517928 2437 reconciler.go:26] "Reconciler: start to sync state" Feb 13 20:28:33.519197 kubelet[2437]: I0213 20:28:33.519166 2437 factory.go:221] Registration of the systemd container factory successfully Feb 13 20:28:33.519278 kubelet[2437]: I0213 20:28:33.519257 2437 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Feb 13 20:28:33.531597 kubelet[2437]: I0213 20:28:33.531564 2437 factory.go:221] Registration of the containerd container factory successfully Feb 13 20:28:33.535935 kubelet[2437]: I0213 20:28:33.535880 2437 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Feb 13 20:28:33.536748 kubelet[2437]: I0213 20:28:33.536716 2437 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Feb 13 20:28:33.536748 kubelet[2437]: I0213 20:28:33.536741 2437 status_manager.go:227] "Starting to sync pod status with apiserver" Feb 13 20:28:33.536907 kubelet[2437]: I0213 20:28:33.536881 2437 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." 
Feb 13 20:28:33.536907 kubelet[2437]: I0213 20:28:33.536897 2437 kubelet.go:2388] "Starting kubelet main sync loop" Feb 13 20:28:33.536964 kubelet[2437]: E0213 20:28:33.536942 2437 kubelet.go:2412] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Feb 13 20:28:33.564010 kubelet[2437]: I0213 20:28:33.563978 2437 cpu_manager.go:221] "Starting CPU manager" policy="none" Feb 13 20:28:33.564010 kubelet[2437]: I0213 20:28:33.563998 2437 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Feb 13 20:28:33.564010 kubelet[2437]: I0213 20:28:33.564017 2437 state_mem.go:36] "Initialized new in-memory state store" Feb 13 20:28:33.564196 kubelet[2437]: I0213 20:28:33.564176 2437 state_mem.go:88] "Updated default CPUSet" cpuSet="" Feb 13 20:28:33.564223 kubelet[2437]: I0213 20:28:33.564191 2437 state_mem.go:96] "Updated CPUSet assignments" assignments={} Feb 13 20:28:33.564223 kubelet[2437]: I0213 20:28:33.564212 2437 policy_none.go:49] "None policy: Start" Feb 13 20:28:33.564223 kubelet[2437]: I0213 20:28:33.564220 2437 memory_manager.go:186] "Starting memorymanager" policy="None" Feb 13 20:28:33.564281 kubelet[2437]: I0213 20:28:33.564229 2437 state_mem.go:35] "Initializing new in-memory state store" Feb 13 20:28:33.564355 kubelet[2437]: I0213 20:28:33.564341 2437 state_mem.go:75] "Updated machine memory state" Feb 13 20:28:33.567805 kubelet[2437]: I0213 20:28:33.567771 2437 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Feb 13 20:28:33.567949 kubelet[2437]: I0213 20:28:33.567929 2437 eviction_manager.go:189] "Eviction manager: starting control loop" Feb 13 20:28:33.568179 kubelet[2437]: I0213 20:28:33.567950 2437 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Feb 13 20:28:33.568212 kubelet[2437]: I0213 20:28:33.568188 2437 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Feb 13 20:28:33.568932 kubelet[2437]: E0213 20:28:33.568903 2437 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." 
err="no imagefs label for configured runtime" Feb 13 20:28:33.638352 kubelet[2437]: I0213 20:28:33.637669 2437 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" Feb 13 20:28:33.638352 kubelet[2437]: I0213 20:28:33.637799 2437 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Feb 13 20:28:33.638352 kubelet[2437]: I0213 20:28:33.637832 2437 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Feb 13 20:28:33.643871 kubelet[2437]: E0213 20:28:33.643839 2437 kubelet.go:3202] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost" Feb 13 20:28:33.672475 kubelet[2437]: I0213 20:28:33.672443 2437 kubelet_node_status.go:76] "Attempting to register node" node="localhost" Feb 13 20:28:33.679009 kubelet[2437]: I0213 20:28:33.678978 2437 kubelet_node_status.go:125] "Node was previously registered" node="localhost" Feb 13 20:28:33.679092 kubelet[2437]: I0213 20:28:33.679052 2437 kubelet_node_status.go:79] "Successfully registered node" node="localhost" Feb 13 20:28:33.820479 kubelet[2437]: I0213 20:28:33.820420 2437 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/84f635b79f6f22dd358ac5ed607d1438-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"84f635b79f6f22dd358ac5ed607d1438\") " pod="kube-system/kube-apiserver-localhost" Feb 13 20:28:33.820479 kubelet[2437]: I0213 20:28:33.820467 2437 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/84f635b79f6f22dd358ac5ed607d1438-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"84f635b79f6f22dd358ac5ed607d1438\") " pod="kube-system/kube-apiserver-localhost" Feb 13 20:28:33.820665 kubelet[2437]: I0213 20:28:33.820506 2437 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/84f635b79f6f22dd358ac5ed607d1438-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"84f635b79f6f22dd358ac5ed607d1438\") " pod="kube-system/kube-apiserver-localhost" Feb 13 20:28:33.820665 kubelet[2437]: I0213 20:28:33.820526 2437 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/c72911152bbceda2f57fd8d59261e015-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"c72911152bbceda2f57fd8d59261e015\") " pod="kube-system/kube-controller-manager-localhost" Feb 13 20:28:33.820665 kubelet[2437]: I0213 20:28:33.820544 2437 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/c72911152bbceda2f57fd8d59261e015-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"c72911152bbceda2f57fd8d59261e015\") " pod="kube-system/kube-controller-manager-localhost" Feb 13 20:28:33.820665 kubelet[2437]: I0213 20:28:33.820562 2437 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/95ef9ac46cd4dbaadc63cb713310ae59-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"95ef9ac46cd4dbaadc63cb713310ae59\") " pod="kube-system/kube-scheduler-localhost" Feb 13 
20:28:33.820665 kubelet[2437]: I0213 20:28:33.820578 2437 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/c72911152bbceda2f57fd8d59261e015-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"c72911152bbceda2f57fd8d59261e015\") " pod="kube-system/kube-controller-manager-localhost" Feb 13 20:28:33.820783 kubelet[2437]: I0213 20:28:33.820593 2437 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/c72911152bbceda2f57fd8d59261e015-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"c72911152bbceda2f57fd8d59261e015\") " pod="kube-system/kube-controller-manager-localhost" Feb 13 20:28:33.820783 kubelet[2437]: I0213 20:28:33.820619 2437 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/c72911152bbceda2f57fd8d59261e015-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"c72911152bbceda2f57fd8d59261e015\") " pod="kube-system/kube-controller-manager-localhost" Feb 13 20:28:33.944567 kubelet[2437]: E0213 20:28:33.944236 2437 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 20:28:33.944567 kubelet[2437]: E0213 20:28:33.944236 2437 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 20:28:33.944567 kubelet[2437]: E0213 20:28:33.944390 2437 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 20:28:34.511398 kubelet[2437]: I0213 20:28:34.511333 2437 apiserver.go:52] "Watching apiserver" Feb 13 20:28:34.518589 kubelet[2437]: I0213 20:28:34.518535 2437 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world" Feb 13 20:28:34.549461 kubelet[2437]: E0213 20:28:34.549414 2437 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 20:28:34.550094 kubelet[2437]: I0213 20:28:34.549630 2437 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Feb 13 20:28:34.550094 kubelet[2437]: I0213 20:28:34.549755 2437 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" Feb 13 20:28:34.557708 kubelet[2437]: E0213 20:28:34.557659 2437 kubelet.go:3202] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost" Feb 13 20:28:34.557868 kubelet[2437]: E0213 20:28:34.557814 2437 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 20:28:34.557969 kubelet[2437]: E0213 20:28:34.557659 2437 kubelet.go:3202] "Failed creating a mirror pod" err="pods \"kube-controller-manager-localhost\" already exists" pod="kube-system/kube-controller-manager-localhost" Feb 13 20:28:34.559223 kubelet[2437]: E0213 20:28:34.558796 2437 dns.go:153] "Nameserver limits exceeded" 
err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 20:28:34.576468 kubelet[2437]: I0213 20:28:34.576379 2437 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=1.575916055 podStartE2EDuration="1.575916055s" podCreationTimestamp="2025-02-13 20:28:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-02-13 20:28:34.569730405 +0000 UTC m=+1.110832529" watchObservedRunningTime="2025-02-13 20:28:34.575916055 +0000 UTC m=+1.117018179" Feb 13 20:28:34.584621 kubelet[2437]: I0213 20:28:34.584551 2437 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=1.584535096 podStartE2EDuration="1.584535096s" podCreationTimestamp="2025-02-13 20:28:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-02-13 20:28:34.577095706 +0000 UTC m=+1.118197830" watchObservedRunningTime="2025-02-13 20:28:34.584535096 +0000 UTC m=+1.125637220" Feb 13 20:28:34.593938 kubelet[2437]: I0213 20:28:34.593847 2437 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=2.593829452 podStartE2EDuration="2.593829452s" podCreationTimestamp="2025-02-13 20:28:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-02-13 20:28:34.584715956 +0000 UTC m=+1.125818120" watchObservedRunningTime="2025-02-13 20:28:34.593829452 +0000 UTC m=+1.134931576" Feb 13 20:28:34.896715 sudo[1580]: pam_unix(sudo:session): session closed for user root Feb 13 20:28:34.898524 sshd[1577]: pam_unix(sshd:session): session closed for user core Feb 13 20:28:34.901772 systemd[1]: sshd@4-10.0.0.6:22-10.0.0.1:52648.service: Deactivated successfully. Feb 13 20:28:34.903440 systemd[1]: session-5.scope: Deactivated successfully. Feb 13 20:28:34.903610 systemd[1]: session-5.scope: Consumed 5.607s CPU time, 154.6M memory peak, 0B memory swap peak. Feb 13 20:28:34.904116 systemd-logind[1425]: Session 5 logged out. Waiting for processes to exit. Feb 13 20:28:34.905035 systemd-logind[1425]: Removed session 5. 
Feb 13 20:28:35.551809 kubelet[2437]: E0213 20:28:35.551465 2437 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 20:28:35.551809 kubelet[2437]: E0213 20:28:35.551657 2437 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 20:28:35.552396 kubelet[2437]: E0213 20:28:35.552363 2437 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 20:28:36.552705 kubelet[2437]: E0213 20:28:36.552633 2437 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 20:28:36.553024 kubelet[2437]: E0213 20:28:36.552825 2437 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 20:28:38.137998 kubelet[2437]: E0213 20:28:38.137960 2437 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 20:28:38.397387 kubelet[2437]: I0213 20:28:38.397248 2437 kuberuntime_manager.go:1702] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Feb 13 20:28:38.397620 containerd[1443]: time="2025-02-13T20:28:38.397569199Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Feb 13 20:28:38.397909 kubelet[2437]: I0213 20:28:38.397797 2437 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Feb 13 20:28:39.444098 systemd[1]: Created slice kubepods-besteffort-pod43b0751b_f89b_44e4_9659_0ec2609251ad.slice - libcontainer container kubepods-besteffort-pod43b0751b_f89b_44e4_9659_0ec2609251ad.slice. 
Feb 13 20:28:39.454359 kubelet[2437]: I0213 20:28:39.454152 2437 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qk8n4\" (UniqueName: \"kubernetes.io/projected/43b0751b-f89b-44e4-9659-0ec2609251ad-kube-api-access-qk8n4\") pod \"kube-proxy-5kqjp\" (UID: \"43b0751b-f89b-44e4-9659-0ec2609251ad\") " pod="kube-system/kube-proxy-5kqjp" Feb 13 20:28:39.454359 kubelet[2437]: I0213 20:28:39.454192 2437 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/43b0751b-f89b-44e4-9659-0ec2609251ad-kube-proxy\") pod \"kube-proxy-5kqjp\" (UID: \"43b0751b-f89b-44e4-9659-0ec2609251ad\") " pod="kube-system/kube-proxy-5kqjp" Feb 13 20:28:39.454359 kubelet[2437]: I0213 20:28:39.454210 2437 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run\" (UniqueName: \"kubernetes.io/host-path/7d8d9a1b-1c8a-4252-bae8-b1cf43294240-run\") pod \"kube-flannel-ds-sgxqm\" (UID: \"7d8d9a1b-1c8a-4252-bae8-b1cf43294240\") " pod="kube-flannel/kube-flannel-ds-sgxqm" Feb 13 20:28:39.454359 kubelet[2437]: I0213 20:28:39.454228 2437 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/43b0751b-f89b-44e4-9659-0ec2609251ad-xtables-lock\") pod \"kube-proxy-5kqjp\" (UID: \"43b0751b-f89b-44e4-9659-0ec2609251ad\") " pod="kube-system/kube-proxy-5kqjp" Feb 13 20:28:39.454359 kubelet[2437]: I0213 20:28:39.454243 2437 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/43b0751b-f89b-44e4-9659-0ec2609251ad-lib-modules\") pod \"kube-proxy-5kqjp\" (UID: \"43b0751b-f89b-44e4-9659-0ec2609251ad\") " pod="kube-system/kube-proxy-5kqjp" Feb 13 20:28:39.454746 kubelet[2437]: I0213 20:28:39.454259 2437 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-plugin\" (UniqueName: \"kubernetes.io/host-path/7d8d9a1b-1c8a-4252-bae8-b1cf43294240-cni-plugin\") pod \"kube-flannel-ds-sgxqm\" (UID: \"7d8d9a1b-1c8a-4252-bae8-b1cf43294240\") " pod="kube-flannel/kube-flannel-ds-sgxqm" Feb 13 20:28:39.454746 kubelet[2437]: I0213 20:28:39.454274 2437 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/7d8d9a1b-1c8a-4252-bae8-b1cf43294240-xtables-lock\") pod \"kube-flannel-ds-sgxqm\" (UID: \"7d8d9a1b-1c8a-4252-bae8-b1cf43294240\") " pod="kube-flannel/kube-flannel-ds-sgxqm" Feb 13 20:28:39.454746 kubelet[2437]: I0213 20:28:39.454291 2437 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flannel-cfg\" (UniqueName: \"kubernetes.io/configmap/7d8d9a1b-1c8a-4252-bae8-b1cf43294240-flannel-cfg\") pod \"kube-flannel-ds-sgxqm\" (UID: \"7d8d9a1b-1c8a-4252-bae8-b1cf43294240\") " pod="kube-flannel/kube-flannel-ds-sgxqm" Feb 13 20:28:39.454746 kubelet[2437]: I0213 20:28:39.454347 2437 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qnvjj\" (UniqueName: \"kubernetes.io/projected/7d8d9a1b-1c8a-4252-bae8-b1cf43294240-kube-api-access-qnvjj\") pod \"kube-flannel-ds-sgxqm\" (UID: \"7d8d9a1b-1c8a-4252-bae8-b1cf43294240\") " pod="kube-flannel/kube-flannel-ds-sgxqm" Feb 13 20:28:39.454746 kubelet[2437]: I0213 20:28:39.454364 2437 
reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni\" (UniqueName: \"kubernetes.io/host-path/7d8d9a1b-1c8a-4252-bae8-b1cf43294240-cni\") pod \"kube-flannel-ds-sgxqm\" (UID: \"7d8d9a1b-1c8a-4252-bae8-b1cf43294240\") " pod="kube-flannel/kube-flannel-ds-sgxqm" Feb 13 20:28:39.458318 systemd[1]: Created slice kubepods-burstable-pod7d8d9a1b_1c8a_4252_bae8_b1cf43294240.slice - libcontainer container kubepods-burstable-pod7d8d9a1b_1c8a_4252_bae8_b1cf43294240.slice. Feb 13 20:28:39.755887 kubelet[2437]: E0213 20:28:39.755459 2437 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 20:28:39.756526 containerd[1443]: time="2025-02-13T20:28:39.756484664Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-5kqjp,Uid:43b0751b-f89b-44e4-9659-0ec2609251ad,Namespace:kube-system,Attempt:0,}" Feb 13 20:28:39.760291 kubelet[2437]: E0213 20:28:39.760249 2437 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 20:28:39.761141 containerd[1443]: time="2025-02-13T20:28:39.761092332Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-flannel-ds-sgxqm,Uid:7d8d9a1b-1c8a-4252-bae8-b1cf43294240,Namespace:kube-flannel,Attempt:0,}" Feb 13 20:28:39.776813 containerd[1443]: time="2025-02-13T20:28:39.776461145Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 20:28:39.776813 containerd[1443]: time="2025-02-13T20:28:39.776775092Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 20:28:39.776813 containerd[1443]: time="2025-02-13T20:28:39.776787093Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 20:28:39.777056 containerd[1443]: time="2025-02-13T20:28:39.776890422Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 20:28:39.787964 containerd[1443]: time="2025-02-13T20:28:39.787367423Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 20:28:39.787964 containerd[1443]: time="2025-02-13T20:28:39.787766697Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 20:28:39.787964 containerd[1443]: time="2025-02-13T20:28:39.787780138Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 20:28:39.787964 containerd[1443]: time="2025-02-13T20:28:39.787867505Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 20:28:39.795506 systemd[1]: Started cri-containerd-7c1cec2bb7b605de97f00d58cf2d2a4d21f4aa5e43bfaac8ce2e55d5642fc460.scope - libcontainer container 7c1cec2bb7b605de97f00d58cf2d2a4d21f4aa5e43bfaac8ce2e55d5642fc460. 
Feb 13 20:28:39.800471 systemd[1]: Started cri-containerd-5887e94f9bd8c166d4b1a966af1dcce60dc0496ebac99778e4e044c2ba40777a.scope - libcontainer container 5887e94f9bd8c166d4b1a966af1dcce60dc0496ebac99778e4e044c2ba40777a. Feb 13 20:28:39.817223 containerd[1443]: time="2025-02-13T20:28:39.817179733Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-5kqjp,Uid:43b0751b-f89b-44e4-9659-0ec2609251ad,Namespace:kube-system,Attempt:0,} returns sandbox id \"7c1cec2bb7b605de97f00d58cf2d2a4d21f4aa5e43bfaac8ce2e55d5642fc460\"" Feb 13 20:28:39.819368 kubelet[2437]: E0213 20:28:39.818876 2437 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 20:28:39.822264 containerd[1443]: time="2025-02-13T20:28:39.822224317Z" level=info msg="CreateContainer within sandbox \"7c1cec2bb7b605de97f00d58cf2d2a4d21f4aa5e43bfaac8ce2e55d5642fc460\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Feb 13 20:28:39.832255 containerd[1443]: time="2025-02-13T20:28:39.832226679Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-flannel-ds-sgxqm,Uid:7d8d9a1b-1c8a-4252-bae8-b1cf43294240,Namespace:kube-flannel,Attempt:0,} returns sandbox id \"5887e94f9bd8c166d4b1a966af1dcce60dc0496ebac99778e4e044c2ba40777a\"" Feb 13 20:28:39.833226 kubelet[2437]: E0213 20:28:39.833204 2437 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 20:28:39.834561 containerd[1443]: time="2025-02-13T20:28:39.834410743Z" level=info msg="PullImage \"docker.io/flannel/flannel-cni-plugin:v1.1.2\"" Feb 13 20:28:39.840176 containerd[1443]: time="2025-02-13T20:28:39.840116063Z" level=info msg="CreateContainer within sandbox \"7c1cec2bb7b605de97f00d58cf2d2a4d21f4aa5e43bfaac8ce2e55d5642fc460\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"8bc27b0e16a669e1fe4cec03d239322935a439d3803285699af3a6cea4e27f72\"" Feb 13 20:28:39.840881 containerd[1443]: time="2025-02-13T20:28:39.840646628Z" level=info msg="StartContainer for \"8bc27b0e16a669e1fe4cec03d239322935a439d3803285699af3a6cea4e27f72\"" Feb 13 20:28:39.862482 systemd[1]: Started cri-containerd-8bc27b0e16a669e1fe4cec03d239322935a439d3803285699af3a6cea4e27f72.scope - libcontainer container 8bc27b0e16a669e1fe4cec03d239322935a439d3803285699af3a6cea4e27f72. Feb 13 20:28:39.884651 containerd[1443]: time="2025-02-13T20:28:39.884285661Z" level=info msg="StartContainer for \"8bc27b0e16a669e1fe4cec03d239322935a439d3803285699af3a6cea4e27f72\" returns successfully" Feb 13 20:28:40.561053 kubelet[2437]: E0213 20:28:40.560993 2437 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 20:28:41.299959 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount634972456.mount: Deactivated successfully. 
Feb 13 20:28:41.326245 containerd[1443]: time="2025-02-13T20:28:41.326195234Z" level=info msg="ImageCreate event name:\"docker.io/flannel/flannel-cni-plugin:v1.1.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 20:28:41.326682 containerd[1443]: time="2025-02-13T20:28:41.326643988Z" level=info msg="stop pulling image docker.io/flannel/flannel-cni-plugin:v1.1.2: active requests=0, bytes read=3673532" Feb 13 20:28:41.327516 containerd[1443]: time="2025-02-13T20:28:41.327490731Z" level=info msg="ImageCreate event name:\"sha256:b45062ceea496fc421523388cb91166abc7715a15c2e2cbab4e6f8c9d5dc0ab8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 20:28:41.329560 containerd[1443]: time="2025-02-13T20:28:41.329529245Z" level=info msg="ImageCreate event name:\"docker.io/flannel/flannel-cni-plugin@sha256:bf4b62b131666d040f35a327d906ee5a3418280b68a88d9b9c7e828057210443\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 20:28:41.330560 containerd[1443]: time="2025-02-13T20:28:41.330524880Z" level=info msg="Pulled image \"docker.io/flannel/flannel-cni-plugin:v1.1.2\" with image id \"sha256:b45062ceea496fc421523388cb91166abc7715a15c2e2cbab4e6f8c9d5dc0ab8\", repo tag \"docker.io/flannel/flannel-cni-plugin:v1.1.2\", repo digest \"docker.io/flannel/flannel-cni-plugin@sha256:bf4b62b131666d040f35a327d906ee5a3418280b68a88d9b9c7e828057210443\", size \"3662650\" in 1.496082615s" Feb 13 20:28:41.330613 containerd[1443]: time="2025-02-13T20:28:41.330560003Z" level=info msg="PullImage \"docker.io/flannel/flannel-cni-plugin:v1.1.2\" returns image reference \"sha256:b45062ceea496fc421523388cb91166abc7715a15c2e2cbab4e6f8c9d5dc0ab8\"" Feb 13 20:28:41.334147 containerd[1443]: time="2025-02-13T20:28:41.333998743Z" level=info msg="CreateContainer within sandbox \"5887e94f9bd8c166d4b1a966af1dcce60dc0496ebac99778e4e044c2ba40777a\" for container &ContainerMetadata{Name:install-cni-plugin,Attempt:0,}" Feb 13 20:28:41.343079 containerd[1443]: time="2025-02-13T20:28:41.343024184Z" level=info msg="CreateContainer within sandbox \"5887e94f9bd8c166d4b1a966af1dcce60dc0496ebac99778e4e044c2ba40777a\" for &ContainerMetadata{Name:install-cni-plugin,Attempt:0,} returns container id \"ca49dfd9d32fade366744f5366bf740b9691dcd34f3f46e2df1ab948efbb776c\"" Feb 13 20:28:41.343175 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount618350423.mount: Deactivated successfully. Feb 13 20:28:41.344144 containerd[1443]: time="2025-02-13T20:28:41.344080064Z" level=info msg="StartContainer for \"ca49dfd9d32fade366744f5366bf740b9691dcd34f3f46e2df1ab948efbb776c\"" Feb 13 20:28:41.376446 systemd[1]: Started cri-containerd-ca49dfd9d32fade366744f5366bf740b9691dcd34f3f46e2df1ab948efbb776c.scope - libcontainer container ca49dfd9d32fade366744f5366bf740b9691dcd34f3f46e2df1ab948efbb776c. Feb 13 20:28:41.394674 containerd[1443]: time="2025-02-13T20:28:41.394626279Z" level=info msg="StartContainer for \"ca49dfd9d32fade366744f5366bf740b9691dcd34f3f46e2df1ab948efbb776c\" returns successfully" Feb 13 20:28:41.401131 systemd[1]: cri-containerd-ca49dfd9d32fade366744f5366bf740b9691dcd34f3f46e2df1ab948efbb776c.scope: Deactivated successfully. 
Feb 13 20:28:41.438828 containerd[1443]: time="2025-02-13T20:28:41.438723367Z" level=info msg="shim disconnected" id=ca49dfd9d32fade366744f5366bf740b9691dcd34f3f46e2df1ab948efbb776c namespace=k8s.io
Feb 13 20:28:41.438828 containerd[1443]: time="2025-02-13T20:28:41.438823534Z" level=warning msg="cleaning up after shim disconnected" id=ca49dfd9d32fade366744f5366bf740b9691dcd34f3f46e2df1ab948efbb776c namespace=k8s.io
Feb 13 20:28:41.439119 containerd[1443]: time="2025-02-13T20:28:41.438835455Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Feb 13 20:28:41.563148 kubelet[2437]: E0213 20:28:41.563125 2437 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 13 20:28:41.564348 containerd[1443]: time="2025-02-13T20:28:41.564257522Z" level=info msg="PullImage \"docker.io/flannel/flannel:v0.22.0\""
Feb 13 20:28:41.575076 kubelet[2437]: I0213 20:28:41.574945 2437 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-5kqjp" podStartSLOduration=2.574927807 podStartE2EDuration="2.574927807s" podCreationTimestamp="2025-02-13 20:28:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-02-13 20:28:40.570741764 +0000 UTC m=+7.111843888" watchObservedRunningTime="2025-02-13 20:28:41.574927807 +0000 UTC m=+8.116029931"
Feb 13 20:28:42.673118 containerd[1443]: time="2025-02-13T20:28:42.673058321Z" level=error msg="PullImage \"docker.io/flannel/flannel:v0.22.0\" failed" error="failed to pull and unpack image \"docker.io/flannel/flannel:v0.22.0\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/flannel/flannel/manifests/sha256:5f83f1243057458e27249157394e3859cf31cc075354af150d497f2ebc8b54db: 429 Too Many Requests - Server message: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit"
Feb 13 20:28:42.673118 containerd[1443]: time="2025-02-13T20:28:42.673111845Z" level=info msg="stop pulling image docker.io/flannel/flannel:v0.22.0: active requests=0, bytes read=11054"
Feb 13 20:28:42.673515 kubelet[2437]: E0213 20:28:42.673320 2437 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = failed to pull and unpack image \"docker.io/flannel/flannel:v0.22.0\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/flannel/flannel/manifests/sha256:5f83f1243057458e27249157394e3859cf31cc075354af150d497f2ebc8b54db: 429 Too Many Requests - Server message: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit" image="docker.io/flannel/flannel:v0.22.0"
Feb 13 20:28:42.673515 kubelet[2437]: E0213 20:28:42.673365 2437 kuberuntime_image.go:55] "Failed to pull image" err="failed to pull and unpack image \"docker.io/flannel/flannel:v0.22.0\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/flannel/flannel/manifests/sha256:5f83f1243057458e27249157394e3859cf31cc075354af150d497f2ebc8b54db: 429 Too Many Requests - Server message: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit" image="docker.io/flannel/flannel:v0.22.0"
Feb 13 20:28:42.674408 kubelet[2437]: E0213 20:28:42.673585 2437 kuberuntime_manager.go:1341] "Unhandled Error" err="init container &Container{Name:install-cni,Image:docker.io/flannel/flannel:v0.22.0,Command:[cp],Args:[-f /etc/kube-flannel/cni-conf.json /etc/cni/net.d/10-flannel.conflist],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:cni,ReadOnly:false,MountPath:/etc/cni/net.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:flannel-cfg,ReadOnly:false,MountPath:/etc/kube-flannel/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-qnvjj,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod kube-flannel-ds-sgxqm_kube-flannel(7d8d9a1b-1c8a-4252-bae8-b1cf43294240): ErrImagePull: failed to pull and unpack image \"docker.io/flannel/flannel:v0.22.0\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/flannel/flannel/manifests/sha256:5f83f1243057458e27249157394e3859cf31cc075354af150d497f2ebc8b54db: 429 Too Many Requests - Server message: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit" logger="UnhandledError"
Feb 13 20:28:42.675713 kubelet[2437]: E0213 20:28:42.675657 2437 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"install-cni\" with ErrImagePull: \"failed to pull and unpack image \\\"docker.io/flannel/flannel:v0.22.0\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/flannel/flannel/manifests/sha256:5f83f1243057458e27249157394e3859cf31cc075354af150d497f2ebc8b54db: 429 Too Many Requests - Server message: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit\"" pod="kube-flannel/kube-flannel-ds-sgxqm" podUID="7d8d9a1b-1c8a-4252-bae8-b1cf43294240"
Feb 13 20:28:43.566181 kubelet[2437]: E0213 20:28:43.566071 2437 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 13 20:28:43.567006 kubelet[2437]: E0213 20:28:43.566931 2437 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"install-cni\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/flannel/flannel:v0.22.0\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/flannel/flannel:v0.22.0\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/flannel/flannel/manifests/sha256:5f83f1243057458e27249157394e3859cf31cc075354af150d497f2ebc8b54db: 429 Too Many Requests - Server message: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit\"" pod="kube-flannel/kube-flannel-ds-sgxqm" podUID="7d8d9a1b-1c8a-4252-bae8-b1cf43294240"
Feb 13 20:28:43.786150 kubelet[2437]: E0213 20:28:43.786103 2437 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 13 20:28:44.568124 kubelet[2437]: E0213 20:28:44.568084 2437 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 13 20:28:45.570347 kubelet[2437]: E0213 20:28:45.570317 2437 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 13 20:28:46.105144 update_engine[1431]: I20250213 20:28:46.105065 1431 update_attempter.cc:509] Updating boot flags...
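The repeated PullImage failures above are Docker Hub's anonymous pull rate limit (HTTP 429 from registry-1.docker.io). A quick way to see how much quota this node's egress IP has left is to read the RateLimit headers Docker documents for the ratelimitpreview/test repository; the Python sketch below is a minimal version of that check, assuming the requests package is available on a machine behind the same egress IP.

    # Minimal sketch: read Docker Hub's pull-rate-limit headers as an
    # anonymous client, via the documented ratelimitpreview/test repo.
    import requests

    TOKEN_URL = ("https://auth.docker.io/token"
                 "?service=registry.docker.io"
                 "&scope=repository:ratelimitpreview/test:pull")
    MANIFEST_URL = ("https://registry-1.docker.io/v2/"
                    "ratelimitpreview/test/manifests/latest")

    token = requests.get(TOKEN_URL, timeout=10).json()["token"]
    # HEAD reports the counters without consuming a pull.
    resp = requests.head(MANIFEST_URL, timeout=10,
                         headers={"Authorization": "Bearer " + token})
    print("status:", resp.status_code)                            # 429 == already throttled
    print("limit:", resp.headers.get("ratelimit-limit"))          # e.g. 100;w=21600
    print("remaining:", resp.headers.get("ratelimit-remaining"))  # e.g. 0;w=21600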
Feb 13 20:28:46.126461 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 40 scanned by (udev-worker) (2823)
Feb 13 20:28:46.157319 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 40 scanned by (udev-worker) (2824)
Feb 13 20:28:46.184341 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 40 scanned by (udev-worker) (2824)
Feb 13 20:28:46.418984 kubelet[2437]: E0213 20:28:46.418130 2437 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 13 20:28:48.145530 kubelet[2437]: E0213 20:28:48.145476 2437 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 13 20:28:48.574123 kubelet[2437]: E0213 20:28:48.573715 2437 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 13 20:28:55.538236 kubelet[2437]: E0213 20:28:55.538131 2437 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 13 20:28:55.539577 containerd[1443]: time="2025-02-13T20:28:55.539540185Z" level=info msg="PullImage \"docker.io/flannel/flannel:v0.22.0\""
Feb 13 20:28:56.656343 containerd[1443]: time="2025-02-13T20:28:56.656215155Z" level=error msg="PullImage \"docker.io/flannel/flannel:v0.22.0\" failed" error="failed to pull and unpack image \"docker.io/flannel/flannel:v0.22.0\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/flannel/flannel/manifests/sha256:5f83f1243057458e27249157394e3859cf31cc075354af150d497f2ebc8b54db: 429 Too Many Requests - Server message: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit"
Feb 13 20:28:56.656343 containerd[1443]: time="2025-02-13T20:28:56.656330439Z" level=info msg="stop pulling image docker.io/flannel/flannel:v0.22.0: active requests=0, bytes read=11054"
Feb 13 20:28:56.656715 kubelet[2437]: E0213 20:28:56.656460 2437 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = failed to pull and unpack image \"docker.io/flannel/flannel:v0.22.0\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/flannel/flannel/manifests/sha256:5f83f1243057458e27249157394e3859cf31cc075354af150d497f2ebc8b54db: 429 Too Many Requests - Server message: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit" image="docker.io/flannel/flannel:v0.22.0"
Feb 13 20:28:56.656715 kubelet[2437]: E0213 20:28:56.656513 2437 kuberuntime_image.go:55] "Failed to pull image" err="failed to pull and unpack image \"docker.io/flannel/flannel:v0.22.0\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/flannel/flannel/manifests/sha256:5f83f1243057458e27249157394e3859cf31cc075354af150d497f2ebc8b54db: 429 Too Many Requests - Server message: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit" image="docker.io/flannel/flannel:v0.22.0"
Feb 13 20:28:56.656935 kubelet[2437]: E0213 20:28:56.656606 2437 kuberuntime_manager.go:1341] "Unhandled Error" err="init container &Container{Name:install-cni,Image:docker.io/flannel/flannel:v0.22.0,Command:[cp],Args:[-f /etc/kube-flannel/cni-conf.json /etc/cni/net.d/10-flannel.conflist],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:cni,ReadOnly:false,MountPath:/etc/cni/net.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:flannel-cfg,ReadOnly:false,MountPath:/etc/kube-flannel/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-qnvjj,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod kube-flannel-ds-sgxqm_kube-flannel(7d8d9a1b-1c8a-4252-bae8-b1cf43294240): ErrImagePull: failed to pull and unpack image \"docker.io/flannel/flannel:v0.22.0\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/flannel/flannel/manifests/sha256:5f83f1243057458e27249157394e3859cf31cc075354af150d497f2ebc8b54db: 429 Too Many Requests - Server message: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit" logger="UnhandledError"
Feb 13 20:28:56.657759 kubelet[2437]: E0213 20:28:56.657723 2437 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"install-cni\" with ErrImagePull: \"failed to pull and unpack image \\\"docker.io/flannel/flannel:v0.22.0\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/flannel/flannel/manifests/sha256:5f83f1243057458e27249157394e3859cf31cc075354af150d497f2ebc8b54db: 429 Too Many Requests - Server message: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit\"" pod="kube-flannel/kube-flannel-ds-sgxqm" podUID="7d8d9a1b-1c8a-4252-bae8-b1cf43294240"
Feb 13 20:28:59.787727 systemd[1]: Started sshd@5-10.0.0.6:22-10.0.0.1:59062.service - OpenSSH per-connection server daemon (10.0.0.1:59062).
Feb 13 20:28:59.825237 sshd[2834]: Accepted publickey for core from 10.0.0.1 port 59062 ssh2: RSA SHA256:TdhA+b7AFfsR49yUYzSzEma9Q+UWNVnDR75LIS9Grbw
Feb 13 20:28:59.826528 sshd[2834]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 20:28:59.830468 systemd-logind[1425]: New session 6 of user core.
Feb 13 20:28:59.838459 systemd[1]: Started session-6.scope - Session 6 of User core.
Feb 13 20:28:59.951505 sshd[2834]: pam_unix(sshd:session): session closed for user core
Feb 13 20:28:59.954619 systemd[1]: sshd@5-10.0.0.6:22-10.0.0.1:59062.service: Deactivated successfully.
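Since the node keeps retrying docker.io directly, one common mitigation is to route pulls through a registry mirror using containerd's certs.d host configuration. A minimal sketch follows; it assumes containerd's registry config_path is set to /etc/containerd/certs.d (supported since containerd 1.5), and "mirror.example.internal" is a placeholder that would have to be replaced with a real pull-through cache.

    # Minimal sketch: point docker.io pulls at a mirror via containerd's
    # certs.d mechanism. "mirror.example.internal" is a placeholder, and
    # config_path support in /etc/containerd/config.toml is assumed.
    from pathlib import Path

    HOSTS_TOML = (
        'server = "https://registry-1.docker.io"\n'
        '\n'
        '[host."https://mirror.example.internal"]\n'
        '  capabilities = ["pull", "resolve"]\n'
    )

    conf_dir = Path("/etc/containerd/certs.d/docker.io")
    conf_dir.mkdir(parents=True, exist_ok=True)
    (conf_dir / "hosts.toml").write_text(HOSTS_TOML)

containerd reads certs.d configuration per pull, so a change here takes effect without restarting the service.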
Feb 13 20:28:59.956197 systemd[1]: session-6.scope: Deactivated successfully. Feb 13 20:28:59.958924 systemd-logind[1425]: Session 6 logged out. Waiting for processes to exit. Feb 13 20:28:59.959728 systemd-logind[1425]: Removed session 6. Feb 13 20:29:04.961622 systemd[1]: Started sshd@6-10.0.0.6:22-10.0.0.1:47012.service - OpenSSH per-connection server daemon (10.0.0.1:47012). Feb 13 20:29:04.999045 sshd[2849]: Accepted publickey for core from 10.0.0.1 port 47012 ssh2: RSA SHA256:TdhA+b7AFfsR49yUYzSzEma9Q+UWNVnDR75LIS9Grbw Feb 13 20:29:05.000343 sshd[2849]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 20:29:05.004061 systemd-logind[1425]: New session 7 of user core. Feb 13 20:29:05.013493 systemd[1]: Started session-7.scope - Session 7 of User core. Feb 13 20:29:05.118462 sshd[2849]: pam_unix(sshd:session): session closed for user core Feb 13 20:29:05.121898 systemd[1]: sshd@6-10.0.0.6:22-10.0.0.1:47012.service: Deactivated successfully. Feb 13 20:29:05.123901 systemd[1]: session-7.scope: Deactivated successfully. Feb 13 20:29:05.124665 systemd-logind[1425]: Session 7 logged out. Waiting for processes to exit. Feb 13 20:29:05.125606 systemd-logind[1425]: Removed session 7. Feb 13 20:29:07.538785 kubelet[2437]: E0213 20:29:07.538738 2437 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 20:29:07.539745 kubelet[2437]: E0213 20:29:07.539286 2437 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"install-cni\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/flannel/flannel:v0.22.0\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/flannel/flannel:v0.22.0\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/flannel/flannel/manifests/sha256:5f83f1243057458e27249157394e3859cf31cc075354af150d497f2ebc8b54db: 429 Too Many Requests - Server message: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit\"" pod="kube-flannel/kube-flannel-ds-sgxqm" podUID="7d8d9a1b-1c8a-4252-bae8-b1cf43294240" Feb 13 20:29:10.132654 systemd[1]: Started sshd@7-10.0.0.6:22-10.0.0.1:47014.service - OpenSSH per-connection server daemon (10.0.0.1:47014). Feb 13 20:29:10.173835 sshd[2868]: Accepted publickey for core from 10.0.0.1 port 47014 ssh2: RSA SHA256:TdhA+b7AFfsR49yUYzSzEma9Q+UWNVnDR75LIS9Grbw Feb 13 20:29:10.175057 sshd[2868]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 20:29:10.178723 systemd-logind[1425]: New session 8 of user core. Feb 13 20:29:10.187471 systemd[1]: Started session-8.scope - Session 8 of User core. Feb 13 20:29:10.293113 sshd[2868]: pam_unix(sshd:session): session closed for user core Feb 13 20:29:10.295703 systemd[1]: sshd@7-10.0.0.6:22-10.0.0.1:47014.service: Deactivated successfully. Feb 13 20:29:10.297176 systemd[1]: session-8.scope: Deactivated successfully. Feb 13 20:29:10.298422 systemd-logind[1425]: Session 8 logged out. Waiting for processes to exit. Feb 13 20:29:10.299408 systemd-logind[1425]: Removed session 8. Feb 13 20:29:15.307791 systemd[1]: Started sshd@8-10.0.0.6:22-10.0.0.1:45890.service - OpenSSH per-connection server daemon (10.0.0.1:45890). 
Feb 13 20:29:15.345208 sshd[2883]: Accepted publickey for core from 10.0.0.1 port 45890 ssh2: RSA SHA256:TdhA+b7AFfsR49yUYzSzEma9Q+UWNVnDR75LIS9Grbw
Feb 13 20:29:15.346378 sshd[2883]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 20:29:15.351367 systemd-logind[1425]: New session 9 of user core.
Feb 13 20:29:15.366522 systemd[1]: Started session-9.scope - Session 9 of User core.
Feb 13 20:29:15.472961 sshd[2883]: pam_unix(sshd:session): session closed for user core
Feb 13 20:29:15.475418 systemd[1]: sshd@8-10.0.0.6:22-10.0.0.1:45890.service: Deactivated successfully.
Feb 13 20:29:15.477688 systemd[1]: session-9.scope: Deactivated successfully.
Feb 13 20:29:15.478869 systemd-logind[1425]: Session 9 logged out. Waiting for processes to exit.
Feb 13 20:29:15.479633 systemd-logind[1425]: Removed session 9.
Feb 13 20:29:20.483677 systemd[1]: Started sshd@9-10.0.0.6:22-10.0.0.1:45896.service - OpenSSH per-connection server daemon (10.0.0.1:45896).
Feb 13 20:29:20.521074 sshd[2898]: Accepted publickey for core from 10.0.0.1 port 45896 ssh2: RSA SHA256:TdhA+b7AFfsR49yUYzSzEma9Q+UWNVnDR75LIS9Grbw
Feb 13 20:29:20.522280 sshd[2898]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 20:29:20.526181 systemd-logind[1425]: New session 10 of user core.
Feb 13 20:29:20.538492 kubelet[2437]: E0213 20:29:20.538382 2437 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 13 20:29:20.538444 systemd[1]: Started session-10.scope - Session 10 of User core.
Feb 13 20:29:20.541318 containerd[1443]: time="2025-02-13T20:29:20.541246419Z" level=info msg="PullImage \"docker.io/flannel/flannel:v0.22.0\""
Feb 13 20:29:20.649448 sshd[2898]: pam_unix(sshd:session): session closed for user core
Feb 13 20:29:20.653202 systemd[1]: sshd@9-10.0.0.6:22-10.0.0.1:45896.service: Deactivated successfully.
Feb 13 20:29:20.654838 systemd[1]: session-10.scope: Deactivated successfully.
Feb 13 20:29:20.656734 systemd-logind[1425]: Session 10 logged out. Waiting for processes to exit.
Feb 13 20:29:20.657581 systemd-logind[1425]: Removed session 10.
Feb 13 20:29:21.658450 containerd[1443]: time="2025-02-13T20:29:21.658390020Z" level=error msg="PullImage \"docker.io/flannel/flannel:v0.22.0\" failed" error="failed to pull and unpack image \"docker.io/flannel/flannel:v0.22.0\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/flannel/flannel/manifests/sha256:5f83f1243057458e27249157394e3859cf31cc075354af150d497f2ebc8b54db: 429 Too Many Requests - Server message: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit"
Feb 13 20:29:21.658818 containerd[1443]: time="2025-02-13T20:29:21.658477021Z" level=info msg="stop pulling image docker.io/flannel/flannel:v0.22.0: active requests=0, bytes read=11054"
Feb 13 20:29:21.658849 kubelet[2437]: E0213 20:29:21.658653 2437 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = failed to pull and unpack image \"docker.io/flannel/flannel:v0.22.0\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/flannel/flannel/manifests/sha256:5f83f1243057458e27249157394e3859cf31cc075354af150d497f2ebc8b54db: 429 Too Many Requests - Server message: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit" image="docker.io/flannel/flannel:v0.22.0"
Feb 13 20:29:21.658849 kubelet[2437]: E0213 20:29:21.658701 2437 kuberuntime_image.go:55] "Failed to pull image" err="failed to pull and unpack image \"docker.io/flannel/flannel:v0.22.0\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/flannel/flannel/manifests/sha256:5f83f1243057458e27249157394e3859cf31cc075354af150d497f2ebc8b54db: 429 Too Many Requests - Server message: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit" image="docker.io/flannel/flannel:v0.22.0"
Feb 13 20:29:21.659067 kubelet[2437]: E0213 20:29:21.658841 2437 kuberuntime_manager.go:1341] "Unhandled Error" err="init container &Container{Name:install-cni,Image:docker.io/flannel/flannel:v0.22.0,Command:[cp],Args:[-f /etc/kube-flannel/cni-conf.json /etc/cni/net.d/10-flannel.conflist],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:cni,ReadOnly:false,MountPath:/etc/cni/net.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:flannel-cfg,ReadOnly:false,MountPath:/etc/kube-flannel/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-qnvjj,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod kube-flannel-ds-sgxqm_kube-flannel(7d8d9a1b-1c8a-4252-bae8-b1cf43294240): ErrImagePull: failed to pull and unpack image \"docker.io/flannel/flannel:v0.22.0\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/flannel/flannel/manifests/sha256:5f83f1243057458e27249157394e3859cf31cc075354af150d497f2ebc8b54db: 429 Too Many Requests - Server message: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit" logger="UnhandledError"
Feb 13 20:29:21.659996 kubelet[2437]: E0213 20:29:21.659963 2437 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"install-cni\" with ErrImagePull: \"failed to pull and unpack image \\\"docker.io/flannel/flannel:v0.22.0\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/flannel/flannel/manifests/sha256:5f83f1243057458e27249157394e3859cf31cc075354af150d497f2ebc8b54db: 429 Too Many Requests - Server message: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit\"" pod="kube-flannel/kube-flannel-ds-sgxqm" podUID="7d8d9a1b-1c8a-4252-bae8-b1cf43294240"
Feb 13 20:29:25.663723 systemd[1]: Started sshd@10-10.0.0.6:22-10.0.0.1:45328.service - OpenSSH per-connection server daemon (10.0.0.1:45328).
Feb 13 20:29:25.700960 sshd[2914]: Accepted publickey for core from 10.0.0.1 port 45328 ssh2: RSA SHA256:TdhA+b7AFfsR49yUYzSzEma9Q+UWNVnDR75LIS9Grbw
Feb 13 20:29:25.702132 sshd[2914]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 20:29:25.705660 systemd-logind[1425]: New session 11 of user core.
Feb 13 20:29:25.715428 systemd[1]: Started session-11.scope - Session 11 of User core.
Feb 13 20:29:25.818890 sshd[2914]: pam_unix(sshd:session): session closed for user core
Feb 13 20:29:25.822636 systemd[1]: sshd@10-10.0.0.6:22-10.0.0.1:45328.service: Deactivated successfully.
Feb 13 20:29:25.824849 systemd[1]: session-11.scope: Deactivated successfully.
Feb 13 20:29:25.825463 systemd-logind[1425]: Session 11 logged out. Waiting for processes to exit.
Feb 13 20:29:25.826662 systemd-logind[1425]: Removed session 11.
Feb 13 20:29:30.832820 systemd[1]: Started sshd@11-10.0.0.6:22-10.0.0.1:45334.service - OpenSSH per-connection server daemon (10.0.0.1:45334).
Feb 13 20:29:30.870862 sshd[2929]: Accepted publickey for core from 10.0.0.1 port 45334 ssh2: RSA SHA256:TdhA+b7AFfsR49yUYzSzEma9Q+UWNVnDR75LIS9Grbw
Feb 13 20:29:30.871993 sshd[2929]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 20:29:30.875590 systemd-logind[1425]: New session 12 of user core.
Feb 13 20:29:30.881484 systemd[1]: Started session-12.scope - Session 12 of User core.
Feb 13 20:29:30.985241 sshd[2929]: pam_unix(sshd:session): session closed for user core
Feb 13 20:29:30.988216 systemd[1]: sshd@11-10.0.0.6:22-10.0.0.1:45334.service: Deactivated successfully.
Feb 13 20:29:30.989914 systemd[1]: session-12.scope: Deactivated successfully.
Feb 13 20:29:30.990549 systemd-logind[1425]: Session 12 logged out. Waiting for processes to exit.
Feb 13 20:29:30.991352 systemd-logind[1425]: Removed session 12.
Feb 13 20:29:35.995774 systemd[1]: Started sshd@12-10.0.0.6:22-10.0.0.1:58970.service - OpenSSH per-connection server daemon (10.0.0.1:58970).
Feb 13 20:29:36.033137 sshd[2946]: Accepted publickey for core from 10.0.0.1 port 58970 ssh2: RSA SHA256:TdhA+b7AFfsR49yUYzSzEma9Q+UWNVnDR75LIS9Grbw
Feb 13 20:29:36.034526 sshd[2946]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 20:29:36.038998 systemd-logind[1425]: New session 13 of user core.
Feb 13 20:29:36.045538 systemd[1]: Started session-13.scope - Session 13 of User core.
Feb 13 20:29:36.150244 sshd[2946]: pam_unix(sshd:session): session closed for user core Feb 13 20:29:36.155432 systemd[1]: sshd@12-10.0.0.6:22-10.0.0.1:58970.service: Deactivated successfully. Feb 13 20:29:36.157028 systemd[1]: session-13.scope: Deactivated successfully. Feb 13 20:29:36.157785 systemd-logind[1425]: Session 13 logged out. Waiting for processes to exit. Feb 13 20:29:36.158669 systemd-logind[1425]: Removed session 13. Feb 13 20:29:36.537455 kubelet[2437]: E0213 20:29:36.537412 2437 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 20:29:36.539176 kubelet[2437]: E0213 20:29:36.538950 2437 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"install-cni\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/flannel/flannel:v0.22.0\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/flannel/flannel:v0.22.0\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/flannel/flannel/manifests/sha256:5f83f1243057458e27249157394e3859cf31cc075354af150d497f2ebc8b54db: 429 Too Many Requests - Server message: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit\"" pod="kube-flannel/kube-flannel-ds-sgxqm" podUID="7d8d9a1b-1c8a-4252-bae8-b1cf43294240" Feb 13 20:29:41.160691 systemd[1]: Started sshd@13-10.0.0.6:22-10.0.0.1:58980.service - OpenSSH per-connection server daemon (10.0.0.1:58980). Feb 13 20:29:41.197662 sshd[2964]: Accepted publickey for core from 10.0.0.1 port 58980 ssh2: RSA SHA256:TdhA+b7AFfsR49yUYzSzEma9Q+UWNVnDR75LIS9Grbw Feb 13 20:29:41.198868 sshd[2964]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 20:29:41.202303 systemd-logind[1425]: New session 14 of user core. Feb 13 20:29:41.216493 systemd[1]: Started session-14.scope - Session 14 of User core. Feb 13 20:29:41.321355 sshd[2964]: pam_unix(sshd:session): session closed for user core Feb 13 20:29:41.324292 systemd[1]: sshd@13-10.0.0.6:22-10.0.0.1:58980.service: Deactivated successfully. Feb 13 20:29:41.326493 systemd[1]: session-14.scope: Deactivated successfully. Feb 13 20:29:41.327629 systemd-logind[1425]: Session 14 logged out. Waiting for processes to exit. Feb 13 20:29:41.329176 systemd-logind[1425]: Removed session 14. Feb 13 20:29:46.331724 systemd[1]: Started sshd@14-10.0.0.6:22-10.0.0.1:60186.service - OpenSSH per-connection server daemon (10.0.0.1:60186). Feb 13 20:29:46.368820 sshd[2979]: Accepted publickey for core from 10.0.0.1 port 60186 ssh2: RSA SHA256:TdhA+b7AFfsR49yUYzSzEma9Q+UWNVnDR75LIS9Grbw Feb 13 20:29:46.369928 sshd[2979]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 20:29:46.373487 systemd-logind[1425]: New session 15 of user core. Feb 13 20:29:46.384434 systemd[1]: Started session-15.scope - Session 15 of User core. Feb 13 20:29:46.488216 sshd[2979]: pam_unix(sshd:session): session closed for user core Feb 13 20:29:46.491148 systemd[1]: sshd@14-10.0.0.6:22-10.0.0.1:60186.service: Deactivated successfully. Feb 13 20:29:46.492649 systemd[1]: session-15.scope: Deactivated successfully. Feb 13 20:29:46.493232 systemd-logind[1425]: Session 15 logged out. Waiting for processes to exit. Feb 13 20:29:46.493965 systemd-logind[1425]: Removed session 15. 
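The dns.go:153 warnings that repeat through this log are kubelet noticing that the host's /etc/resolv.conf lists more nameservers than the glibc resolver's limit of three, so everything past the first three is dropped; the applied line "1.1.1.1 1.0.0.1 8.8.8.8" in the log is the truncated result. Below is a sketch of that truncation, with a hypothetical fourth nameserver standing in for whatever this host's resolv.conf actually contained.

    # Minimal sketch of the truncation behind kubelet's "Nameserver limits
    # exceeded" warning: only the first MAXNS nameservers are applied.
    MAXNS = 3  # glibc resolver limit that kubelet checks against

    def applied_nameservers(resolv_conf_text):
        servers = [line.split()[1]
                   for line in resolv_conf_text.splitlines()
                   if line.startswith("nameserver") and len(line.split()) > 1]
        return servers[:MAXNS], servers[MAXNS:]

    # Hypothetical resolv.conf resembling the host above:
    text = ("nameserver 1.1.1.1\nnameserver 1.0.0.1\n"
            "nameserver 8.8.8.8\nnameserver 8.8.4.4\n")
    kept, omitted = applied_nameservers(text)
    print("applied:", " ".join(kept))     # applied: 1.1.1.1 1.0.0.1 8.8.8.8
    print("omitted:", " ".join(omitted))  # omitted: 8.8.4.4

Trimming the host file to three nameservers, or pointing kubelet's --resolv-conf at a trimmed copy, silences the warning.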
Feb 13 20:29:50.538308 kubelet[2437]: E0213 20:29:50.538255 2437 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 20:29:50.538944 kubelet[2437]: E0213 20:29:50.538900 2437 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"install-cni\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/flannel/flannel:v0.22.0\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/flannel/flannel:v0.22.0\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/flannel/flannel/manifests/sha256:5f83f1243057458e27249157394e3859cf31cc075354af150d497f2ebc8b54db: 429 Too Many Requests - Server message: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit\"" pod="kube-flannel/kube-flannel-ds-sgxqm" podUID="7d8d9a1b-1c8a-4252-bae8-b1cf43294240" Feb 13 20:29:51.498709 systemd[1]: Started sshd@15-10.0.0.6:22-10.0.0.1:60190.service - OpenSSH per-connection server daemon (10.0.0.1:60190). Feb 13 20:29:51.535951 sshd[2995]: Accepted publickey for core from 10.0.0.1 port 60190 ssh2: RSA SHA256:TdhA+b7AFfsR49yUYzSzEma9Q+UWNVnDR75LIS9Grbw Feb 13 20:29:51.537160 sshd[2995]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 20:29:51.541010 systemd-logind[1425]: New session 16 of user core. Feb 13 20:29:51.552465 systemd[1]: Started session-16.scope - Session 16 of User core. Feb 13 20:29:51.658107 sshd[2995]: pam_unix(sshd:session): session closed for user core Feb 13 20:29:51.660592 systemd[1]: sshd@15-10.0.0.6:22-10.0.0.1:60190.service: Deactivated successfully. Feb 13 20:29:51.662175 systemd[1]: session-16.scope: Deactivated successfully. Feb 13 20:29:51.663411 systemd-logind[1425]: Session 16 logged out. Waiting for processes to exit. Feb 13 20:29:51.664276 systemd-logind[1425]: Removed session 16. Feb 13 20:29:56.668633 systemd[1]: Started sshd@16-10.0.0.6:22-10.0.0.1:56194.service - OpenSSH per-connection server daemon (10.0.0.1:56194). Feb 13 20:29:56.706231 sshd[3011]: Accepted publickey for core from 10.0.0.1 port 56194 ssh2: RSA SHA256:TdhA+b7AFfsR49yUYzSzEma9Q+UWNVnDR75LIS9Grbw Feb 13 20:29:56.707519 sshd[3011]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 20:29:56.710826 systemd-logind[1425]: New session 17 of user core. Feb 13 20:29:56.718436 systemd[1]: Started session-17.scope - Session 17 of User core. Feb 13 20:29:56.820213 sshd[3011]: pam_unix(sshd:session): session closed for user core Feb 13 20:29:56.823621 systemd[1]: sshd@16-10.0.0.6:22-10.0.0.1:56194.service: Deactivated successfully. Feb 13 20:29:56.825763 systemd[1]: session-17.scope: Deactivated successfully. Feb 13 20:29:56.826480 systemd-logind[1425]: Session 17 logged out. Waiting for processes to exit. Feb 13 20:29:56.827294 systemd-logind[1425]: Removed session 17. Feb 13 20:29:58.538063 kubelet[2437]: E0213 20:29:58.538030 2437 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 20:30:01.830686 systemd[1]: Started sshd@17-10.0.0.6:22-10.0.0.1:56198.service - OpenSSH per-connection server daemon (10.0.0.1:56198). 
Feb 13 20:30:01.868024 sshd[3027]: Accepted publickey for core from 10.0.0.1 port 56198 ssh2: RSA SHA256:TdhA+b7AFfsR49yUYzSzEma9Q+UWNVnDR75LIS9Grbw
Feb 13 20:30:01.869225 sshd[3027]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 20:30:01.872894 systemd-logind[1425]: New session 18 of user core.
Feb 13 20:30:01.889455 systemd[1]: Started session-18.scope - Session 18 of User core.
Feb 13 20:30:01.994154 sshd[3027]: pam_unix(sshd:session): session closed for user core
Feb 13 20:30:01.997137 systemd[1]: sshd@17-10.0.0.6:22-10.0.0.1:56198.service: Deactivated successfully.
Feb 13 20:30:01.998752 systemd[1]: session-18.scope: Deactivated successfully.
Feb 13 20:30:01.999326 systemd-logind[1425]: Session 18 logged out. Waiting for processes to exit.
Feb 13 20:30:02.000250 systemd-logind[1425]: Removed session 18.
Feb 13 20:30:02.537635 kubelet[2437]: E0213 20:30:02.537600 2437 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 13 20:30:05.538113 kubelet[2437]: E0213 20:30:05.538078 2437 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 13 20:30:05.540059 containerd[1443]: time="2025-02-13T20:30:05.540005753Z" level=info msg="PullImage \"docker.io/flannel/flannel:v0.22.0\""
Feb 13 20:30:06.669016 containerd[1443]: time="2025-02-13T20:30:06.668963161Z" level=error msg="PullImage \"docker.io/flannel/flannel:v0.22.0\" failed" error="failed to pull and unpack image \"docker.io/flannel/flannel:v0.22.0\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/flannel/flannel/manifests/sha256:5f83f1243057458e27249157394e3859cf31cc075354af150d497f2ebc8b54db: 429 Too Many Requests - Server message: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit"
Feb 13 20:30:06.669546 containerd[1443]: time="2025-02-13T20:30:06.669013042Z" level=info msg="stop pulling image docker.io/flannel/flannel:v0.22.0: active requests=0, bytes read=11054"
Feb 13 20:30:06.669577 kubelet[2437]: E0213 20:30:06.669132 2437 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = failed to pull and unpack image \"docker.io/flannel/flannel:v0.22.0\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/flannel/flannel/manifests/sha256:5f83f1243057458e27249157394e3859cf31cc075354af150d497f2ebc8b54db: 429 Too Many Requests - Server message: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit" image="docker.io/flannel/flannel:v0.22.0"
Feb 13 20:30:06.669577 kubelet[2437]: E0213 20:30:06.669182 2437 kuberuntime_image.go:55] "Failed to pull image" err="failed to pull and unpack image \"docker.io/flannel/flannel:v0.22.0\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/flannel/flannel/manifests/sha256:5f83f1243057458e27249157394e3859cf31cc075354af150d497f2ebc8b54db: 429 Too Many Requests - Server message: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit" image="docker.io/flannel/flannel:v0.22.0"
Feb 13 20:30:06.669873 kubelet[2437]: E0213 20:30:06.669268 2437 kuberuntime_manager.go:1341] "Unhandled Error" err="init container &Container{Name:install-cni,Image:docker.io/flannel/flannel:v0.22.0,Command:[cp],Args:[-f /etc/kube-flannel/cni-conf.json /etc/cni/net.d/10-flannel.conflist],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:cni,ReadOnly:false,MountPath:/etc/cni/net.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:flannel-cfg,ReadOnly:false,MountPath:/etc/kube-flannel/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-qnvjj,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod kube-flannel-ds-sgxqm_kube-flannel(7d8d9a1b-1c8a-4252-bae8-b1cf43294240): ErrImagePull: failed to pull and unpack image \"docker.io/flannel/flannel:v0.22.0\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/flannel/flannel/manifests/sha256:5f83f1243057458e27249157394e3859cf31cc075354af150d497f2ebc8b54db: 429 Too Many Requests - Server message: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit" logger="UnhandledError"
Feb 13 20:30:06.670476 kubelet[2437]: E0213 20:30:06.670440 2437 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"install-cni\" with ErrImagePull: \"failed to pull and unpack image \\\"docker.io/flannel/flannel:v0.22.0\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/flannel/flannel/manifests/sha256:5f83f1243057458e27249157394e3859cf31cc075354af150d497f2ebc8b54db: 429 Too Many Requests - Server message: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit\"" pod="kube-flannel/kube-flannel-ds-sgxqm" podUID="7d8d9a1b-1c8a-4252-bae8-b1cf43294240"
Feb 13 20:30:07.009810 systemd[1]: Started sshd@18-10.0.0.6:22-10.0.0.1:59372.service - OpenSSH per-connection server daemon (10.0.0.1:59372).
Feb 13 20:30:07.047120 sshd[3043]: Accepted publickey for core from 10.0.0.1 port 59372 ssh2: RSA SHA256:TdhA+b7AFfsR49yUYzSzEma9Q+UWNVnDR75LIS9Grbw
Feb 13 20:30:07.048604 sshd[3043]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 20:30:07.052094 systemd-logind[1425]: New session 19 of user core.
Feb 13 20:30:07.058421 systemd[1]: Started session-19.scope - Session 19 of User core.
Feb 13 20:30:07.164290 sshd[3043]: pam_unix(sshd:session): session closed for user core
Feb 13 20:30:07.167376 systemd[1]: sshd@18-10.0.0.6:22-10.0.0.1:59372.service: Deactivated successfully.
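The spacing of the retries above (roughly 13 s, 24 s, then 44 s between a failure and the next PullImage, and about five minutes between the later ImagePullBackOff reports) is consistent with kubelet's default image-pull backoff: a 10-second base that doubles per consecutive failure, capped at 300 seconds. A sketch of that schedule, under the assumption of default kubelet settings:

    # Minimal sketch of kubelet's image-pull backoff schedule
    # (assumed defaults: 10s base, doubling, 300s cap).
    BASE_S, CAP_S = 10, 300

    def backoff_delays(failures):
        return [min(BASE_S * 2 ** n, CAP_S) for n in range(failures)]

    print(backoff_delays(7))  # [10, 20, 40, 80, 160, 300, 300]
    # The observed gaps are these delays plus a few seconds of pod-sync
    # latency, which is why later retries settle near five minutes.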
Feb 13 20:30:07.168933 systemd[1]: session-19.scope: Deactivated successfully. Feb 13 20:30:07.170350 systemd-logind[1425]: Session 19 logged out. Waiting for processes to exit. Feb 13 20:30:07.171549 systemd-logind[1425]: Removed session 19. Feb 13 20:30:07.537534 kubelet[2437]: E0213 20:30:07.537444 2437 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 20:30:10.538022 kubelet[2437]: E0213 20:30:10.537955 2437 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 20:30:12.175756 systemd[1]: Started sshd@19-10.0.0.6:22-10.0.0.1:59380.service - OpenSSH per-connection server daemon (10.0.0.1:59380). Feb 13 20:30:12.213169 sshd[3061]: Accepted publickey for core from 10.0.0.1 port 59380 ssh2: RSA SHA256:TdhA+b7AFfsR49yUYzSzEma9Q+UWNVnDR75LIS9Grbw Feb 13 20:30:12.214389 sshd[3061]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 20:30:12.217927 systemd-logind[1425]: New session 20 of user core. Feb 13 20:30:12.225526 systemd[1]: Started session-20.scope - Session 20 of User core. Feb 13 20:30:12.330522 sshd[3061]: pam_unix(sshd:session): session closed for user core Feb 13 20:30:12.333738 systemd[1]: sshd@19-10.0.0.6:22-10.0.0.1:59380.service: Deactivated successfully. Feb 13 20:30:12.335977 systemd[1]: session-20.scope: Deactivated successfully. Feb 13 20:30:12.336640 systemd-logind[1425]: Session 20 logged out. Waiting for processes to exit. Feb 13 20:30:12.337665 systemd-logind[1425]: Removed session 20. Feb 13 20:30:17.340622 systemd[1]: Started sshd@20-10.0.0.6:22-10.0.0.1:34146.service - OpenSSH per-connection server daemon (10.0.0.1:34146). Feb 13 20:30:17.378324 sshd[3076]: Accepted publickey for core from 10.0.0.1 port 34146 ssh2: RSA SHA256:TdhA+b7AFfsR49yUYzSzEma9Q+UWNVnDR75LIS9Grbw Feb 13 20:30:17.379476 sshd[3076]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 20:30:17.383260 systemd-logind[1425]: New session 21 of user core. Feb 13 20:30:17.390484 systemd[1]: Started session-21.scope - Session 21 of User core. Feb 13 20:30:17.492525 sshd[3076]: pam_unix(sshd:session): session closed for user core Feb 13 20:30:17.495675 systemd[1]: sshd@20-10.0.0.6:22-10.0.0.1:34146.service: Deactivated successfully. Feb 13 20:30:17.497345 systemd[1]: session-21.scope: Deactivated successfully. Feb 13 20:30:17.497941 systemd-logind[1425]: Session 21 logged out. Waiting for processes to exit. Feb 13 20:30:17.498747 systemd-logind[1425]: Removed session 21. 
Feb 13 20:30:17.537486 kubelet[2437]: E0213 20:30:17.537419 2437 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 20:30:17.538868 kubelet[2437]: E0213 20:30:17.538812 2437 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"install-cni\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/flannel/flannel:v0.22.0\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/flannel/flannel:v0.22.0\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/flannel/flannel/manifests/sha256:5f83f1243057458e27249157394e3859cf31cc075354af150d497f2ebc8b54db: 429 Too Many Requests - Server message: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit\"" pod="kube-flannel/kube-flannel-ds-sgxqm" podUID="7d8d9a1b-1c8a-4252-bae8-b1cf43294240" Feb 13 20:30:22.502880 systemd[1]: Started sshd@21-10.0.0.6:22-10.0.0.1:33642.service - OpenSSH per-connection server daemon (10.0.0.1:33642). Feb 13 20:30:22.540175 sshd[3092]: Accepted publickey for core from 10.0.0.1 port 33642 ssh2: RSA SHA256:TdhA+b7AFfsR49yUYzSzEma9Q+UWNVnDR75LIS9Grbw Feb 13 20:30:22.541439 sshd[3092]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 20:30:22.545658 systemd-logind[1425]: New session 22 of user core. Feb 13 20:30:22.556442 systemd[1]: Started session-22.scope - Session 22 of User core. Feb 13 20:30:22.663010 sshd[3092]: pam_unix(sshd:session): session closed for user core Feb 13 20:30:22.665830 systemd-logind[1425]: Session 22 logged out. Waiting for processes to exit. Feb 13 20:30:22.666148 systemd[1]: sshd@21-10.0.0.6:22-10.0.0.1:33642.service: Deactivated successfully. Feb 13 20:30:22.667892 systemd[1]: session-22.scope: Deactivated successfully. Feb 13 20:30:22.669339 systemd-logind[1425]: Removed session 22. Feb 13 20:30:27.673763 systemd[1]: Started sshd@22-10.0.0.6:22-10.0.0.1:33644.service - OpenSSH per-connection server daemon (10.0.0.1:33644). Feb 13 20:30:27.710880 sshd[3108]: Accepted publickey for core from 10.0.0.1 port 33644 ssh2: RSA SHA256:TdhA+b7AFfsR49yUYzSzEma9Q+UWNVnDR75LIS9Grbw Feb 13 20:30:27.712059 sshd[3108]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 20:30:27.715956 systemd-logind[1425]: New session 23 of user core. Feb 13 20:30:27.725494 systemd[1]: Started session-23.scope - Session 23 of User core. Feb 13 20:30:27.829162 sshd[3108]: pam_unix(sshd:session): session closed for user core Feb 13 20:30:27.832680 systemd[1]: sshd@22-10.0.0.6:22-10.0.0.1:33644.service: Deactivated successfully. Feb 13 20:30:27.834291 systemd[1]: session-23.scope: Deactivated successfully. Feb 13 20:30:27.835857 systemd-logind[1425]: Session 23 logged out. Waiting for processes to exit. Feb 13 20:30:27.836701 systemd-logind[1425]: Removed session 23. 
Feb 13 20:30:28.538450 kubelet[2437]: E0213 20:30:28.538387 2437 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 20:30:28.539153 kubelet[2437]: E0213 20:30:28.539119 2437 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"install-cni\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/flannel/flannel:v0.22.0\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/flannel/flannel:v0.22.0\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/flannel/flannel/manifests/sha256:5f83f1243057458e27249157394e3859cf31cc075354af150d497f2ebc8b54db: 429 Too Many Requests - Server message: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit\"" pod="kube-flannel/kube-flannel-ds-sgxqm" podUID="7d8d9a1b-1c8a-4252-bae8-b1cf43294240" Feb 13 20:30:32.840031 systemd[1]: Started sshd@23-10.0.0.6:22-10.0.0.1:58346.service - OpenSSH per-connection server daemon (10.0.0.1:58346). Feb 13 20:30:32.877273 sshd[3124]: Accepted publickey for core from 10.0.0.1 port 58346 ssh2: RSA SHA256:TdhA+b7AFfsR49yUYzSzEma9Q+UWNVnDR75LIS9Grbw Feb 13 20:30:32.878460 sshd[3124]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 20:30:32.881913 systemd-logind[1425]: New session 24 of user core. Feb 13 20:30:32.889433 systemd[1]: Started session-24.scope - Session 24 of User core. Feb 13 20:30:32.997545 sshd[3124]: pam_unix(sshd:session): session closed for user core Feb 13 20:30:33.000925 systemd[1]: sshd@23-10.0.0.6:22-10.0.0.1:58346.service: Deactivated successfully. Feb 13 20:30:33.002807 systemd[1]: session-24.scope: Deactivated successfully. Feb 13 20:30:33.003516 systemd-logind[1425]: Session 24 logged out. Waiting for processes to exit. Feb 13 20:30:33.004367 systemd-logind[1425]: Removed session 24. Feb 13 20:30:33.553015 kubelet[2437]: E0213 20:30:33.552963 2437 kubelet_node_status.go:461] "Node not becoming ready in time after startup" Feb 13 20:30:33.589305 kubelet[2437]: E0213 20:30:33.589231 2437 kubelet.go:3008] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Feb 13 20:30:38.007800 systemd[1]: Started sshd@24-10.0.0.6:22-10.0.0.1:58362.service - OpenSSH per-connection server daemon (10.0.0.1:58362). Feb 13 20:30:38.045939 sshd[3142]: Accepted publickey for core from 10.0.0.1 port 58362 ssh2: RSA SHA256:TdhA+b7AFfsR49yUYzSzEma9Q+UWNVnDR75LIS9Grbw Feb 13 20:30:38.047050 sshd[3142]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 20:30:38.051061 systemd-logind[1425]: New session 25 of user core. Feb 13 20:30:38.061482 systemd[1]: Started session-25.scope - Session 25 of User core. Feb 13 20:30:38.170725 sshd[3142]: pam_unix(sshd:session): session closed for user core Feb 13 20:30:38.173784 systemd[1]: sshd@24-10.0.0.6:22-10.0.0.1:58362.service: Deactivated successfully. Feb 13 20:30:38.175812 systemd[1]: session-25.scope: Deactivated successfully. Feb 13 20:30:38.176569 systemd-logind[1425]: Session 25 logged out. Waiting for processes to exit. Feb 13 20:30:38.177524 systemd-logind[1425]: Removed session 25. 
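Because install-cni never runs, nothing lands in /etc/cni/net.d, so kubelet keeps reporting "cni plugin not initialized" and, as logged above, the node never becomes Ready. For context, the file that the init container's `cp -f /etc/kube-flannel/cni-conf.json /etc/cni/net.d/10-flannel.conflist` was meant to install typically matches the stock flannel manifest; the sketch below writes that assumed content (taken from the upstream manifest, not recovered from this host).

    # Minimal sketch: the 10-flannel.conflist the failed init container was
    # supposed to copy. Content assumed from the stock flannel manifest.
    import json

    FLANNEL_CONFLIST = {
        "name": "cbr0",
        "cniVersion": "0.3.1",
        "plugins": [
            {"type": "flannel",
             "delegate": {"hairpinMode": True, "isDefaultGateway": True}},
            {"type": "portmap",
             "capabilities": {"portMappings": True}},
        ],
    }

    with open("/etc/cni/net.d/10-flannel.conflist", "w") as f:
        json.dump(FLANNEL_CONFLIST, f, indent=2)

kubelet rescans /etc/cni/net.d periodically, so NetworkReady flips to true shortly after a valid conflist appears.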
Feb 13 20:30:38.591034 kubelet[2437]: E0213 20:30:38.590988 2437 kubelet.go:3008] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Feb 13 20:30:41.540483 kubelet[2437]: E0213 20:30:41.540155 2437 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 20:30:41.540818 kubelet[2437]: E0213 20:30:41.540740 2437 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"install-cni\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/flannel/flannel:v0.22.0\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/flannel/flannel:v0.22.0\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/flannel/flannel/manifests/sha256:5f83f1243057458e27249157394e3859cf31cc075354af150d497f2ebc8b54db: 429 Too Many Requests - Server message: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit\"" pod="kube-flannel/kube-flannel-ds-sgxqm" podUID="7d8d9a1b-1c8a-4252-bae8-b1cf43294240" Feb 13 20:30:43.182570 systemd[1]: Started sshd@25-10.0.0.6:22-10.0.0.1:45412.service - OpenSSH per-connection server daemon (10.0.0.1:45412). Feb 13 20:30:43.219990 sshd[3160]: Accepted publickey for core from 10.0.0.1 port 45412 ssh2: RSA SHA256:TdhA+b7AFfsR49yUYzSzEma9Q+UWNVnDR75LIS9Grbw Feb 13 20:30:43.221367 sshd[3160]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 20:30:43.224881 systemd-logind[1425]: New session 26 of user core. Feb 13 20:30:43.244529 systemd[1]: Started session-26.scope - Session 26 of User core. Feb 13 20:30:43.348737 sshd[3160]: pam_unix(sshd:session): session closed for user core Feb 13 20:30:43.351506 systemd[1]: sshd@25-10.0.0.6:22-10.0.0.1:45412.service: Deactivated successfully. Feb 13 20:30:43.352993 systemd[1]: session-26.scope: Deactivated successfully. Feb 13 20:30:43.354098 systemd-logind[1425]: Session 26 logged out. Waiting for processes to exit. Feb 13 20:30:43.355055 systemd-logind[1425]: Removed session 26. Feb 13 20:30:43.592099 kubelet[2437]: E0213 20:30:43.592063 2437 kubelet.go:3008] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Feb 13 20:30:48.359851 systemd[1]: Started sshd@26-10.0.0.6:22-10.0.0.1:45416.service - OpenSSH per-connection server daemon (10.0.0.1:45416). Feb 13 20:30:48.397098 sshd[3175]: Accepted publickey for core from 10.0.0.1 port 45416 ssh2: RSA SHA256:TdhA+b7AFfsR49yUYzSzEma9Q+UWNVnDR75LIS9Grbw Feb 13 20:30:48.398249 sshd[3175]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 20:30:48.402285 systemd-logind[1425]: New session 27 of user core. Feb 13 20:30:48.412465 systemd[1]: Started session-27.scope - Session 27 of User core. Feb 13 20:30:48.515989 sshd[3175]: pam_unix(sshd:session): session closed for user core Feb 13 20:30:48.518240 systemd[1]: session-27.scope: Deactivated successfully. Feb 13 20:30:48.518861 systemd[1]: sshd@26-10.0.0.6:22-10.0.0.1:45416.service: Deactivated successfully. Feb 13 20:30:48.521283 systemd-logind[1425]: Session 27 logged out. Waiting for processes to exit. 
Feb 13 20:30:48.522279 systemd-logind[1425]: Removed session 27. Feb 13 20:30:48.593599 kubelet[2437]: E0213 20:30:48.593559 2437 kubelet.go:3008] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Feb 13 20:30:53.526784 systemd[1]: Started sshd@27-10.0.0.6:22-10.0.0.1:34600.service - OpenSSH per-connection server daemon (10.0.0.1:34600). Feb 13 20:30:53.564969 sshd[3190]: Accepted publickey for core from 10.0.0.1 port 34600 ssh2: RSA SHA256:TdhA+b7AFfsR49yUYzSzEma9Q+UWNVnDR75LIS9Grbw Feb 13 20:30:53.566107 sshd[3190]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 20:30:53.570285 systemd-logind[1425]: New session 28 of user core. Feb 13 20:30:53.574447 systemd[1]: Started session-28.scope - Session 28 of User core. Feb 13 20:30:53.595021 kubelet[2437]: E0213 20:30:53.594985 2437 kubelet.go:3008] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Feb 13 20:30:53.680576 sshd[3190]: pam_unix(sshd:session): session closed for user core Feb 13 20:30:53.683625 systemd[1]: sshd@27-10.0.0.6:22-10.0.0.1:34600.service: Deactivated successfully. Feb 13 20:30:53.685148 systemd[1]: session-28.scope: Deactivated successfully. Feb 13 20:30:53.686323 systemd-logind[1425]: Session 28 logged out. Waiting for processes to exit. Feb 13 20:30:53.687055 systemd-logind[1425]: Removed session 28. Feb 13 20:30:56.538114 kubelet[2437]: E0213 20:30:56.538072 2437 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 20:30:56.538690 kubelet[2437]: E0213 20:30:56.538661 2437 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"install-cni\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/flannel/flannel:v0.22.0\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/flannel/flannel:v0.22.0\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/flannel/flannel/manifests/sha256:5f83f1243057458e27249157394e3859cf31cc075354af150d497f2ebc8b54db: 429 Too Many Requests - Server message: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit\"" pod="kube-flannel/kube-flannel-ds-sgxqm" podUID="7d8d9a1b-1c8a-4252-bae8-b1cf43294240" Feb 13 20:30:58.595692 kubelet[2437]: E0213 20:30:58.595642 2437 kubelet.go:3008] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Feb 13 20:30:58.694953 systemd[1]: Started sshd@28-10.0.0.6:22-10.0.0.1:34608.service - OpenSSH per-connection server daemon (10.0.0.1:34608). Feb 13 20:30:58.732081 sshd[3205]: Accepted publickey for core from 10.0.0.1 port 34608 ssh2: RSA SHA256:TdhA+b7AFfsR49yUYzSzEma9Q+UWNVnDR75LIS9Grbw Feb 13 20:30:58.733351 sshd[3205]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 20:30:58.736657 systemd-logind[1425]: New session 29 of user core. Feb 13 20:30:58.753488 systemd[1]: Started session-29.scope - Session 29 of User core. 
Feb 13 20:30:58.859000 sshd[3205]: pam_unix(sshd:session): session closed for user core
Feb 13 20:30:58.862422 systemd[1]: sshd@28-10.0.0.6:22-10.0.0.1:34608.service: Deactivated successfully.
Feb 13 20:30:58.864099 systemd[1]: session-29.scope: Deactivated successfully.
Feb 13 20:30:58.865524 systemd-logind[1425]: Session 29 logged out. Waiting for processes to exit.
Feb 13 20:30:58.866407 systemd-logind[1425]: Removed session 29.
Feb 13 20:31:03.538904 kubelet[2437]: E0213 20:31:03.538795 2437 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 13 20:31:03.596663 kubelet[2437]: E0213 20:31:03.596596 2437 kubelet.go:3008] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Feb 13 20:31:03.869966 systemd[1]: Started sshd@29-10.0.0.6:22-10.0.0.1:48396.service - OpenSSH per-connection server daemon (10.0.0.1:48396).
Feb 13 20:31:03.912833 sshd[3222]: Accepted publickey for core from 10.0.0.1 port 48396 ssh2: RSA SHA256:TdhA+b7AFfsR49yUYzSzEma9Q+UWNVnDR75LIS9Grbw
Feb 13 20:31:03.914236 sshd[3222]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 20:31:03.919791 systemd-logind[1425]: New session 30 of user core.
Feb 13 20:31:03.931507 systemd[1]: Started session-30.scope - Session 30 of User core.
Feb 13 20:31:04.035545 sshd[3222]: pam_unix(sshd:session): session closed for user core
Feb 13 20:31:04.038546 systemd[1]: sshd@29-10.0.0.6:22-10.0.0.1:48396.service: Deactivated successfully.
Feb 13 20:31:04.040109 systemd[1]: session-30.scope: Deactivated successfully.
Feb 13 20:31:04.041520 systemd-logind[1425]: Session 30 logged out. Waiting for processes to exit.
Feb 13 20:31:04.042344 systemd-logind[1425]: Removed session 30.
Feb 13 20:31:04.537594 kubelet[2437]: E0213 20:31:04.537562 2437 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 13 20:31:08.537957 kubelet[2437]: E0213 20:31:08.537903 2437 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 13 20:31:08.538641 kubelet[2437]: E0213 20:31:08.538593 2437 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"install-cni\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/flannel/flannel:v0.22.0\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/flannel/flannel:v0.22.0\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/flannel/flannel/manifests/sha256:5f83f1243057458e27249157394e3859cf31cc075354af150d497f2ebc8b54db: 429 Too Many Requests - Server message: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit\"" pod="kube-flannel/kube-flannel-ds-sgxqm" podUID="7d8d9a1b-1c8a-4252-bae8-b1cf43294240"
Feb 13 20:31:08.597379 kubelet[2437]: E0213 20:31:08.597340 2437 kubelet.go:3008] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Feb 13 20:31:09.053853 systemd[1]: Started sshd@30-10.0.0.6:22-10.0.0.1:48404.service - OpenSSH per-connection server daemon (10.0.0.1:48404).
Feb 13 20:31:09.091204 sshd[3239]: Accepted publickey for core from 10.0.0.1 port 48404 ssh2: RSA SHA256:TdhA+b7AFfsR49yUYzSzEma9Q+UWNVnDR75LIS9Grbw
Feb 13 20:31:09.092458 sshd[3239]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 20:31:09.096318 systemd-logind[1425]: New session 31 of user core.
Feb 13 20:31:09.107429 systemd[1]: Started session-31.scope - Session 31 of User core.
Feb 13 20:31:09.214612 sshd[3239]: pam_unix(sshd:session): session closed for user core
Feb 13 20:31:09.217679 systemd[1]: sshd@30-10.0.0.6:22-10.0.0.1:48404.service: Deactivated successfully.
Feb 13 20:31:09.219174 systemd[1]: session-31.scope: Deactivated successfully.
Feb 13 20:31:09.219757 systemd-logind[1425]: Session 31 logged out. Waiting for processes to exit.
Feb 13 20:31:09.220626 systemd-logind[1425]: Removed session 31.
Feb 13 20:31:13.598005 kubelet[2437]: E0213 20:31:13.597968 2437 kubelet.go:3008] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Feb 13 20:31:14.225730 systemd[1]: Started sshd@31-10.0.0.6:22-10.0.0.1:46470.service - OpenSSH per-connection server daemon (10.0.0.1:46470).
Feb 13 20:31:14.262905 sshd[3258]: Accepted publickey for core from 10.0.0.1 port 46470 ssh2: RSA SHA256:TdhA+b7AFfsR49yUYzSzEma9Q+UWNVnDR75LIS9Grbw
Feb 13 20:31:14.264096 sshd[3258]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 20:31:14.267470 systemd-logind[1425]: New session 32 of user core.
Feb 13 20:31:14.284489 systemd[1]: Started session-32.scope - Session 32 of User core.
Feb 13 20:31:14.389219 sshd[3258]: pam_unix(sshd:session): session closed for user core
Feb 13 20:31:14.392213 systemd[1]: sshd@31-10.0.0.6:22-10.0.0.1:46470.service: Deactivated successfully.
Feb 13 20:31:14.394792 systemd[1]: session-32.scope: Deactivated successfully.
Feb 13 20:31:14.395357 systemd-logind[1425]: Session 32 logged out. Waiting for processes to exit.
Feb 13 20:31:14.396122 systemd-logind[1425]: Removed session 32.
Feb 13 20:31:17.537498 kubelet[2437]: E0213 20:31:17.537397 2437 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 13 20:31:18.599312 kubelet[2437]: E0213 20:31:18.599242 2437 kubelet.go:3008] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Feb 13 20:31:19.403879 systemd[1]: Started sshd@32-10.0.0.6:22-10.0.0.1:46476.service - OpenSSH per-connection server daemon (10.0.0.1:46476).
Feb 13 20:31:19.440682 sshd[3274]: Accepted publickey for core from 10.0.0.1 port 46476 ssh2: RSA SHA256:TdhA+b7AFfsR49yUYzSzEma9Q+UWNVnDR75LIS9Grbw
Feb 13 20:31:19.441825 sshd[3274]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 20:31:19.445740 systemd-logind[1425]: New session 33 of user core.
Feb 13 20:31:19.454444 systemd[1]: Started session-33.scope - Session 33 of User core.
Feb 13 20:31:19.537463 kubelet[2437]: E0213 20:31:19.537428 2437 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 13 20:31:19.538287 kubelet[2437]: E0213 20:31:19.538256 2437 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"install-cni\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/flannel/flannel:v0.22.0\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/flannel/flannel:v0.22.0\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/flannel/flannel/manifests/sha256:5f83f1243057458e27249157394e3859cf31cc075354af150d497f2ebc8b54db: 429 Too Many Requests - Server message: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit\"" pod="kube-flannel/kube-flannel-ds-sgxqm" podUID="7d8d9a1b-1c8a-4252-bae8-b1cf43294240"
Feb 13 20:31:19.561410 sshd[3274]: pam_unix(sshd:session): session closed for user core
Feb 13 20:31:19.565191 systemd[1]: sshd@32-10.0.0.6:22-10.0.0.1:46476.service: Deactivated successfully.
Feb 13 20:31:19.566918 systemd[1]: session-33.scope: Deactivated successfully.
Feb 13 20:31:19.567576 systemd-logind[1425]: Session 33 logged out. Waiting for processes to exit.
Feb 13 20:31:19.568394 systemd-logind[1425]: Removed session 33.
Feb 13 20:31:23.600930 kubelet[2437]: E0213 20:31:23.600889 2437 kubelet.go:3008] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Feb 13 20:31:24.571751 systemd[1]: Started sshd@33-10.0.0.6:22-10.0.0.1:52174.service - OpenSSH per-connection server daemon (10.0.0.1:52174).
Feb 13 20:31:24.609563 sshd[3289]: Accepted publickey for core from 10.0.0.1 port 52174 ssh2: RSA SHA256:TdhA+b7AFfsR49yUYzSzEma9Q+UWNVnDR75LIS9Grbw
Feb 13 20:31:24.610725 sshd[3289]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 20:31:24.614579 systemd-logind[1425]: New session 34 of user core.
Feb 13 20:31:24.624451 systemd[1]: Started session-34.scope - Session 34 of User core.
Feb 13 20:31:24.729168 sshd[3289]: pam_unix(sshd:session): session closed for user core
Feb 13 20:31:24.732426 systemd[1]: sshd@33-10.0.0.6:22-10.0.0.1:52174.service: Deactivated successfully.
Feb 13 20:31:24.733990 systemd[1]: session-34.scope: Deactivated successfully.
Feb 13 20:31:24.735167 systemd-logind[1425]: Session 34 logged out. Waiting for processes to exit.
Feb 13 20:31:24.736010 systemd-logind[1425]: Removed session 34.
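The recurring dns.go:153 "Nameserver limits exceeded" error is kubelet noticing more than three nameserver entries in the resolv.conf it reads for pod DNS; glibc resolvers only use the first three, so kubelet truncates the list (here to 1.1.1.1, 1.0.0.1, 8.8.8.8) and logs the rest as omitted. A sketch of the fix, assuming the node's resolver file is the source of the extra entries:

    # /etc/resolv.conf - keep at most three nameservers
    nameserver 1.1.1.1
    nameserver 1.0.0.1
    nameserver 8.8.8.8

Alternatively, point kubelet at a clean file via the resolvConf field of its KubeletConfiguration (the --resolv-conf flag on older setups) rather than editing the system file.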
Feb 13 20:31:28.601769 kubelet[2437]: E0213 20:31:28.601729 2437 kubelet.go:3008] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Feb 13 20:31:29.739840 systemd[1]: Started sshd@34-10.0.0.6:22-10.0.0.1:52190.service - OpenSSH per-connection server daemon (10.0.0.1:52190).
Feb 13 20:31:29.777069 sshd[3304]: Accepted publickey for core from 10.0.0.1 port 52190 ssh2: RSA SHA256:TdhA+b7AFfsR49yUYzSzEma9Q+UWNVnDR75LIS9Grbw
Feb 13 20:31:29.778184 sshd[3304]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 20:31:29.781361 systemd-logind[1425]: New session 35 of user core.
Feb 13 20:31:29.787437 systemd[1]: Started session-35.scope - Session 35 of User core.
Feb 13 20:31:29.892747 sshd[3304]: pam_unix(sshd:session): session closed for user core
Feb 13 20:31:29.896079 systemd[1]: sshd@34-10.0.0.6:22-10.0.0.1:52190.service: Deactivated successfully.
Feb 13 20:31:29.897814 systemd[1]: session-35.scope: Deactivated successfully.
Feb 13 20:31:29.898419 systemd-logind[1425]: Session 35 logged out. Waiting for processes to exit.
Feb 13 20:31:29.899277 systemd-logind[1425]: Removed session 35.
Feb 13 20:31:33.602444 kubelet[2437]: E0213 20:31:33.602403 2437 kubelet.go:3008] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Feb 13 20:31:34.537524 kubelet[2437]: E0213 20:31:34.537484 2437 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 13 20:31:34.538437 containerd[1443]: time="2025-02-13T20:31:34.538393417Z" level=info msg="PullImage \"docker.io/flannel/flannel:v0.22.0\""
Feb 13 20:31:34.903705 systemd[1]: Started sshd@35-10.0.0.6:22-10.0.0.1:51940.service - OpenSSH per-connection server daemon (10.0.0.1:51940).
Feb 13 20:31:34.941917 sshd[3321]: Accepted publickey for core from 10.0.0.1 port 51940 ssh2: RSA SHA256:TdhA+b7AFfsR49yUYzSzEma9Q+UWNVnDR75LIS9Grbw
Feb 13 20:31:34.943159 sshd[3321]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 20:31:34.946418 systemd-logind[1425]: New session 36 of user core.
Feb 13 20:31:34.956429 systemd[1]: Started session-36.scope - Session 36 of User core.
Feb 13 20:31:35.060420 sshd[3321]: pam_unix(sshd:session): session closed for user core
Feb 13 20:31:35.063479 systemd[1]: sshd@35-10.0.0.6:22-10.0.0.1:51940.service: Deactivated successfully.
Feb 13 20:31:35.065972 systemd[1]: session-36.scope: Deactivated successfully.
Feb 13 20:31:35.066891 systemd-logind[1425]: Session 36 logged out. Waiting for processes to exit.
Feb 13 20:31:35.067898 systemd-logind[1425]: Removed session 36.
Feb 13 20:31:35.712270 containerd[1443]: time="2025-02-13T20:31:35.712137921Z" level=error msg="PullImage \"docker.io/flannel/flannel:v0.22.0\" failed" error="failed to pull and unpack image \"docker.io/flannel/flannel:v0.22.0\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/flannel/flannel/manifests/sha256:5f83f1243057458e27249157394e3859cf31cc075354af150d497f2ebc8b54db: 429 Too Many Requests - Server message: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit"
Feb 13 20:31:35.712270 containerd[1443]: time="2025-02-13T20:31:35.712224002Z" level=info msg="stop pulling image docker.io/flannel/flannel:v0.22.0: active requests=0, bytes read=11054"
Feb 13 20:31:35.713155 kubelet[2437]: E0213 20:31:35.712639 2437 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = failed to pull and unpack image \"docker.io/flannel/flannel:v0.22.0\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/flannel/flannel/manifests/sha256:5f83f1243057458e27249157394e3859cf31cc075354af150d497f2ebc8b54db: 429 Too Many Requests - Server message: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit" image="docker.io/flannel/flannel:v0.22.0"
Feb 13 20:31:35.713155 kubelet[2437]: E0213 20:31:35.712684 2437 kuberuntime_image.go:55] "Failed to pull image" err="failed to pull and unpack image \"docker.io/flannel/flannel:v0.22.0\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/flannel/flannel/manifests/sha256:5f83f1243057458e27249157394e3859cf31cc075354af150d497f2ebc8b54db: 429 Too Many Requests - Server message: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit" image="docker.io/flannel/flannel:v0.22.0"
Feb 13 20:31:35.713423 kubelet[2437]: E0213 20:31:35.712765 2437 kuberuntime_manager.go:1341] "Unhandled Error" err="init container &Container{Name:install-cni,Image:docker.io/flannel/flannel:v0.22.0,Command:[cp],Args:[-f /etc/kube-flannel/cni-conf.json /etc/cni/net.d/10-flannel.conflist],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:cni,ReadOnly:false,MountPath:/etc/cni/net.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:flannel-cfg,ReadOnly:false,MountPath:/etc/kube-flannel/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-qnvjj,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod kube-flannel-ds-sgxqm_kube-flannel(7d8d9a1b-1c8a-4252-bae8-b1cf43294240): ErrImagePull: failed to pull and unpack image \"docker.io/flannel/flannel:v0.22.0\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/flannel/flannel/manifests/sha256:5f83f1243057458e27249157394e3859cf31cc075354af150d497f2ebc8b54db: 429 Too Many Requests - Server message: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit" logger="UnhandledError"
Feb 13 20:31:35.714165 kubelet[2437]: E0213 20:31:35.714120 2437 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"install-cni\" with ErrImagePull: \"failed to pull and unpack image \\\"docker.io/flannel/flannel:v0.22.0\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/flannel/flannel/manifests/sha256:5f83f1243057458e27249157394e3859cf31cc075354af150d497f2ebc8b54db: 429 Too Many Requests - Server message: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit\"" pod="kube-flannel/kube-flannel-ds-sgxqm" podUID="7d8d9a1b-1c8a-4252-bae8-b1cf43294240"
Feb 13 20:31:37.539472 kubelet[2437]: E0213 20:31:37.539400 2437 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 13 20:31:38.603489 kubelet[2437]: E0213 20:31:38.603438 2437 kubelet.go:3008] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Feb 13 20:31:40.070934 systemd[1]: Started sshd@36-10.0.0.6:22-10.0.0.1:51942.service - OpenSSH per-connection server daemon (10.0.0.1:51942).
Feb 13 20:31:40.108723 sshd[3337]: Accepted publickey for core from 10.0.0.1 port 51942 ssh2: RSA SHA256:TdhA+b7AFfsR49yUYzSzEma9Q+UWNVnDR75LIS9Grbw
Feb 13 20:31:40.109926 sshd[3337]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 20:31:40.113162 systemd-logind[1425]: New session 37 of user core.
Feb 13 20:31:40.120440 systemd[1]: Started session-37.scope - Session 37 of User core.
Feb 13 20:31:40.226361 sshd[3337]: pam_unix(sshd:session): session closed for user core
Feb 13 20:31:40.229426 systemd[1]: sshd@36-10.0.0.6:22-10.0.0.1:51942.service: Deactivated successfully.
Feb 13 20:31:40.231008 systemd[1]: session-37.scope: Deactivated successfully.
Feb 13 20:31:40.231644 systemd-logind[1425]: Session 37 logged out. Waiting for processes to exit.
Feb 13 20:31:40.232347 systemd-logind[1425]: Removed session 37.
Feb 13 20:31:43.604919 kubelet[2437]: E0213 20:31:43.604762 2437 kubelet.go:3008] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Feb 13 20:31:45.236688 systemd[1]: Started sshd@37-10.0.0.6:22-10.0.0.1:35224.service - OpenSSH per-connection server daemon (10.0.0.1:35224).
Feb 13 20:31:45.273787 sshd[3355]: Accepted publickey for core from 10.0.0.1 port 35224 ssh2: RSA SHA256:TdhA+b7AFfsR49yUYzSzEma9Q+UWNVnDR75LIS9Grbw
Feb 13 20:31:45.274973 sshd[3355]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 20:31:45.278449 systemd-logind[1425]: New session 38 of user core.
Feb 13 20:31:45.289434 systemd[1]: Started session-38.scope - Session 38 of User core.
Feb 13 20:31:45.393524 sshd[3355]: pam_unix(sshd:session): session closed for user core
Feb 13 20:31:45.396354 systemd-logind[1425]: Session 38 logged out. Waiting for processes to exit.
Feb 13 20:31:45.396519 systemd[1]: sshd@37-10.0.0.6:22-10.0.0.1:35224.service: Deactivated successfully.
Feb 13 20:31:45.397922 systemd[1]: session-38.scope: Deactivated successfully.
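The kuberuntime_manager dump above also shows why the CNI error persists: the only job of the install-cni init container is cp -f /etc/kube-flannel/cni-conf.json /etc/cni/net.d/10-flannel.conflist, and until that copy happens /etc/cni/net.d stays empty, so kubelet keeps logging "cni plugin not initialized". For orientation, the conflist that flannel's stock manifest ships typically resembles the following (taken from the upstream flannel deployment defaults, not recovered from this log):

    {
      "name": "cbr0",
      "cniVersion": "0.3.1",
      "plugins": [
        {
          "type": "flannel",
          "delegate": {
            "hairpinMode": true,
            "isDefaultGateway": true
          }
        },
        {
          "type": "portmap",
          "capabilities": {
            "portMappings": true
          }
        }
      ]
    }

Once a file like this lands in /etc/cni/net.d, the kubelet network-ready check flips on its next sync.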
Feb 13 20:31:45.399454 systemd-logind[1425]: Removed session 38.
Feb 13 20:31:48.605610 kubelet[2437]: E0213 20:31:48.605570 2437 kubelet.go:3008] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Feb 13 20:31:50.403812 systemd[1]: Started sshd@38-10.0.0.6:22-10.0.0.1:35240.service - OpenSSH per-connection server daemon (10.0.0.1:35240).
Feb 13 20:31:50.440767 sshd[3370]: Accepted publickey for core from 10.0.0.1 port 35240 ssh2: RSA SHA256:TdhA+b7AFfsR49yUYzSzEma9Q+UWNVnDR75LIS9Grbw
Feb 13 20:31:50.441913 sshd[3370]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 20:31:50.445224 systemd-logind[1425]: New session 39 of user core.
Feb 13 20:31:50.454492 systemd[1]: Started session-39.scope - Session 39 of User core.
Feb 13 20:31:50.538039 kubelet[2437]: E0213 20:31:50.537999 2437 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 13 20:31:50.539675 kubelet[2437]: E0213 20:31:50.539598 2437 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"install-cni\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/flannel/flannel:v0.22.0\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/flannel/flannel:v0.22.0\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/flannel/flannel/manifests/sha256:5f83f1243057458e27249157394e3859cf31cc075354af150d497f2ebc8b54db: 429 Too Many Requests - Server message: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit\"" pod="kube-flannel/kube-flannel-ds-sgxqm" podUID="7d8d9a1b-1c8a-4252-bae8-b1cf43294240"
Feb 13 20:31:50.559526 sshd[3370]: pam_unix(sshd:session): session closed for user core
Feb 13 20:31:50.562912 systemd[1]: sshd@38-10.0.0.6:22-10.0.0.1:35240.service: Deactivated successfully.
Feb 13 20:31:50.565159 systemd[1]: session-39.scope: Deactivated successfully.
Feb 13 20:31:50.565879 systemd-logind[1425]: Session 39 logged out. Waiting for processes to exit.
Feb 13 20:31:50.566760 systemd-logind[1425]: Removed session 39.
Feb 13 20:31:53.606310 kubelet[2437]: E0213 20:31:53.606265 2437 kubelet.go:3008] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Feb 13 20:31:55.569985 systemd[1]: Started sshd@39-10.0.0.6:22-10.0.0.1:54994.service - OpenSSH per-connection server daemon (10.0.0.1:54994).
Feb 13 20:31:55.607722 sshd[3386]: Accepted publickey for core from 10.0.0.1 port 54994 ssh2: RSA SHA256:TdhA+b7AFfsR49yUYzSzEma9Q+UWNVnDR75LIS9Grbw
Feb 13 20:31:55.608844 sshd[3386]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 20:31:55.612794 systemd-logind[1425]: New session 40 of user core.
Feb 13 20:31:55.622425 systemd[1]: Started session-40.scope - Session 40 of User core.
Feb 13 20:31:55.726235 sshd[3386]: pam_unix(sshd:session): session closed for user core
Feb 13 20:31:55.729388 systemd[1]: sshd@39-10.0.0.6:22-10.0.0.1:54994.service: Deactivated successfully.
Feb 13 20:31:55.731089 systemd[1]: session-40.scope: Deactivated successfully.
Feb 13 20:31:55.731740 systemd-logind[1425]: Session 40 logged out. Waiting for processes to exit.
Feb 13 20:31:55.732500 systemd-logind[1425]: Removed session 40.
Feb 13 20:31:58.607281 kubelet[2437]: E0213 20:31:58.607243 2437 kubelet.go:3008] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Feb 13 20:32:00.736692 systemd[1]: Started sshd@40-10.0.0.6:22-10.0.0.1:55010.service - OpenSSH per-connection server daemon (10.0.0.1:55010).
Feb 13 20:32:00.773920 sshd[3401]: Accepted publickey for core from 10.0.0.1 port 55010 ssh2: RSA SHA256:TdhA+b7AFfsR49yUYzSzEma9Q+UWNVnDR75LIS9Grbw
Feb 13 20:32:00.775073 sshd[3401]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 20:32:00.778540 systemd-logind[1425]: New session 41 of user core.
Feb 13 20:32:00.786435 systemd[1]: Started session-41.scope - Session 41 of User core.
Feb 13 20:32:00.897411 sshd[3401]: pam_unix(sshd:session): session closed for user core
Feb 13 20:32:00.903697 systemd[1]: sshd@40-10.0.0.6:22-10.0.0.1:55010.service: Deactivated successfully.
Feb 13 20:32:00.906529 systemd[1]: session-41.scope: Deactivated successfully.
Feb 13 20:32:00.907804 systemd-logind[1425]: Session 41 logged out. Waiting for processes to exit.
Feb 13 20:32:00.919567 systemd[1]: Started sshd@41-10.0.0.6:22-10.0.0.1:55016.service - OpenSSH per-connection server daemon (10.0.0.1:55016).
Feb 13 20:32:00.920549 systemd-logind[1425]: Removed session 41.
Feb 13 20:32:00.953019 sshd[3417]: Accepted publickey for core from 10.0.0.1 port 55016 ssh2: RSA SHA256:TdhA+b7AFfsR49yUYzSzEma9Q+UWNVnDR75LIS9Grbw
Feb 13 20:32:00.954196 sshd[3417]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 20:32:00.957629 systemd-logind[1425]: New session 42 of user core.
Feb 13 20:32:00.965452 systemd[1]: Started session-42.scope - Session 42 of User core.
Feb 13 20:32:01.107047 sshd[3417]: pam_unix(sshd:session): session closed for user core
Feb 13 20:32:01.116034 systemd[1]: sshd@41-10.0.0.6:22-10.0.0.1:55016.service: Deactivated successfully.
Feb 13 20:32:01.119233 systemd[1]: session-42.scope: Deactivated successfully.
Feb 13 20:32:01.121201 systemd-logind[1425]: Session 42 logged out. Waiting for processes to exit.
Feb 13 20:32:01.130591 systemd[1]: Started sshd@42-10.0.0.6:22-10.0.0.1:55026.service - OpenSSH per-connection server daemon (10.0.0.1:55026).
Feb 13 20:32:01.132275 systemd-logind[1425]: Removed session 42.
Feb 13 20:32:01.172338 sshd[3429]: Accepted publickey for core from 10.0.0.1 port 55026 ssh2: RSA SHA256:TdhA+b7AFfsR49yUYzSzEma9Q+UWNVnDR75LIS9Grbw
Feb 13 20:32:01.173669 sshd[3429]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 20:32:01.177514 systemd-logind[1425]: New session 43 of user core.
Feb 13 20:32:01.190474 systemd[1]: Started session-43.scope - Session 43 of User core.
Feb 13 20:32:01.296543 sshd[3429]: pam_unix(sshd:session): session closed for user core
Feb 13 20:32:01.299922 systemd[1]: sshd@42-10.0.0.6:22-10.0.0.1:55026.service: Deactivated successfully.
Feb 13 20:32:01.301522 systemd[1]: session-43.scope: Deactivated successfully.
Feb 13 20:32:01.302037 systemd-logind[1425]: Session 43 logged out. Waiting for processes to exit.
Feb 13 20:32:01.302729 systemd-logind[1425]: Removed session 43.
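A side note on the SSH pattern throughout this log: unit names like sshd@41-10.0.0.6:22-10.0.0.1:55016.service show that sshd here is socket-activated, with one transient systemd unit per TCP connection, and logind wraps each login in its own session-N.scope. The burst of sessions 41 through 43 opening and closing within a second is therefore three short-lived connections (likely a health check or automation), not a malfunction. A couple of commands to watch this churn live, assuming a modern systemd (hypothetical invocation, not taken from this log):

    loginctl list-sessions                              # active logind sessions
    systemctl list-units 'sshd@*' 'session-*.scope'     # transient per-connection units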
Feb 13 20:32:03.537871 kubelet[2437]: E0213 20:32:03.537533 2437 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 13 20:32:03.538248 kubelet[2437]: E0213 20:32:03.538060 2437 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"install-cni\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/flannel/flannel:v0.22.0\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/flannel/flannel:v0.22.0\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/flannel/flannel/manifests/sha256:5f83f1243057458e27249157394e3859cf31cc075354af150d497f2ebc8b54db: 429 Too Many Requests - Server message: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit\"" pod="kube-flannel/kube-flannel-ds-sgxqm" podUID="7d8d9a1b-1c8a-4252-bae8-b1cf43294240"
Feb 13 20:32:03.608631 kubelet[2437]: E0213 20:32:03.608570 2437 kubelet.go:3008] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Feb 13 20:32:04.537647 kubelet[2437]: E0213 20:32:04.537585 2437 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 13 20:32:06.307015 systemd[1]: Started sshd@43-10.0.0.6:22-10.0.0.1:51016.service - OpenSSH per-connection server daemon (10.0.0.1:51016).
Feb 13 20:32:06.344175 sshd[3443]: Accepted publickey for core from 10.0.0.1 port 51016 ssh2: RSA SHA256:TdhA+b7AFfsR49yUYzSzEma9Q+UWNVnDR75LIS9Grbw
Feb 13 20:32:06.345386 sshd[3443]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 20:32:06.348654 systemd-logind[1425]: New session 44 of user core.
Feb 13 20:32:06.358500 systemd[1]: Started session-44.scope - Session 44 of User core.
Feb 13 20:32:06.464464 sshd[3443]: pam_unix(sshd:session): session closed for user core
Feb 13 20:32:06.467378 systemd[1]: sshd@43-10.0.0.6:22-10.0.0.1:51016.service: Deactivated successfully.
Feb 13 20:32:06.469745 systemd[1]: session-44.scope: Deactivated successfully.
Feb 13 20:32:06.470654 systemd-logind[1425]: Session 44 logged out. Waiting for processes to exit.
Feb 13 20:32:06.471476 systemd-logind[1425]: Removed session 44.
Feb 13 20:32:08.609667 kubelet[2437]: E0213 20:32:08.609621 2437 kubelet.go:3008] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Feb 13 20:32:11.475522 systemd[1]: Started sshd@44-10.0.0.6:22-10.0.0.1:51024.service - OpenSSH per-connection server daemon (10.0.0.1:51024).
Feb 13 20:32:11.512594 sshd[3459]: Accepted publickey for core from 10.0.0.1 port 51024 ssh2: RSA SHA256:TdhA+b7AFfsR49yUYzSzEma9Q+UWNVnDR75LIS9Grbw
Feb 13 20:32:11.513769 sshd[3459]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 20:32:11.517374 systemd-logind[1425]: New session 45 of user core.
Feb 13 20:32:11.523435 systemd[1]: Started session-45.scope - Session 45 of User core.
Feb 13 20:32:11.630909 sshd[3459]: pam_unix(sshd:session): session closed for user core
Feb 13 20:32:11.634023 systemd[1]: sshd@44-10.0.0.6:22-10.0.0.1:51024.service: Deactivated successfully.
Feb 13 20:32:11.635571 systemd[1]: session-45.scope: Deactivated successfully.
Feb 13 20:32:11.636108 systemd-logind[1425]: Session 45 logged out. Waiting for processes to exit.
Feb 13 20:32:11.636843 systemd-logind[1425]: Removed session 45.
Feb 13 20:32:13.610811 kubelet[2437]: E0213 20:32:13.610770 2437 kubelet.go:3008] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Feb 13 20:32:16.537556 kubelet[2437]: E0213 20:32:16.537523 2437 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 13 20:32:16.538646 kubelet[2437]: E0213 20:32:16.538217 2437 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"install-cni\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/flannel/flannel:v0.22.0\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/flannel/flannel:v0.22.0\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/flannel/flannel/manifests/sha256:5f83f1243057458e27249157394e3859cf31cc075354af150d497f2ebc8b54db: 429 Too Many Requests - Server message: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit\"" pod="kube-flannel/kube-flannel-ds-sgxqm" podUID="7d8d9a1b-1c8a-4252-bae8-b1cf43294240"
Feb 13 20:32:16.641814 systemd[1]: Started sshd@45-10.0.0.6:22-10.0.0.1:37332.service - OpenSSH per-connection server daemon (10.0.0.1:37332).
Feb 13 20:32:16.678998 sshd[3474]: Accepted publickey for core from 10.0.0.1 port 37332 ssh2: RSA SHA256:TdhA+b7AFfsR49yUYzSzEma9Q+UWNVnDR75LIS9Grbw
Feb 13 20:32:16.680114 sshd[3474]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 20:32:16.684064 systemd-logind[1425]: New session 46 of user core.
Feb 13 20:32:16.698505 systemd[1]: Started session-46.scope - Session 46 of User core.
Feb 13 20:32:16.804989 sshd[3474]: pam_unix(sshd:session): session closed for user core
Feb 13 20:32:16.807958 systemd[1]: sshd@45-10.0.0.6:22-10.0.0.1:37332.service: Deactivated successfully.
Feb 13 20:32:16.809628 systemd[1]: session-46.scope: Deactivated successfully.
Feb 13 20:32:16.810860 systemd-logind[1425]: Session 46 logged out. Waiting for processes to exit.
Feb 13 20:32:16.811861 systemd-logind[1425]: Removed session 46.
Feb 13 20:32:18.612037 kubelet[2437]: E0213 20:32:18.611991 2437 kubelet.go:3008] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Feb 13 20:32:20.538173 kubelet[2437]: E0213 20:32:20.538114 2437 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 13 20:32:21.817675 systemd[1]: Started sshd@46-10.0.0.6:22-10.0.0.1:37344.service - OpenSSH per-connection server daemon (10.0.0.1:37344).
Feb 13 20:32:21.855190 sshd[3488]: Accepted publickey for core from 10.0.0.1 port 37344 ssh2: RSA SHA256:TdhA+b7AFfsR49yUYzSzEma9Q+UWNVnDR75LIS9Grbw
Feb 13 20:32:21.856434 sshd[3488]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 20:32:21.859773 systemd-logind[1425]: New session 47 of user core.
Feb 13 20:32:21.870439 systemd[1]: Started session-47.scope - Session 47 of User core.
Feb 13 20:32:21.973493 sshd[3488]: pam_unix(sshd:session): session closed for user core
Feb 13 20:32:21.976502 systemd[1]: sshd@46-10.0.0.6:22-10.0.0.1:37344.service: Deactivated successfully.
Feb 13 20:32:21.978759 systemd[1]: session-47.scope: Deactivated successfully.
Feb 13 20:32:21.979406 systemd-logind[1425]: Session 47 logged out. Waiting for processes to exit.
Feb 13 20:32:21.980532 systemd-logind[1425]: Removed session 47.
Feb 13 20:32:23.613447 kubelet[2437]: E0213 20:32:23.613402 2437 kubelet.go:3008] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Feb 13 20:32:26.538140 kubelet[2437]: E0213 20:32:26.538065 2437 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 13 20:32:26.989755 systemd[1]: Started sshd@47-10.0.0.6:22-10.0.0.1:48338.service - OpenSSH per-connection server daemon (10.0.0.1:48338).
Feb 13 20:32:27.026899 sshd[3502]: Accepted publickey for core from 10.0.0.1 port 48338 ssh2: RSA SHA256:TdhA+b7AFfsR49yUYzSzEma9Q+UWNVnDR75LIS9Grbw
Feb 13 20:32:27.028115 sshd[3502]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 20:32:27.031790 systemd-logind[1425]: New session 48 of user core.
Feb 13 20:32:27.040429 systemd[1]: Started session-48.scope - Session 48 of User core.
Feb 13 20:32:27.143331 sshd[3502]: pam_unix(sshd:session): session closed for user core
Feb 13 20:32:27.146526 systemd[1]: sshd@47-10.0.0.6:22-10.0.0.1:48338.service: Deactivated successfully.
Feb 13 20:32:27.148647 systemd[1]: session-48.scope: Deactivated successfully.
Feb 13 20:32:27.149340 systemd-logind[1425]: Session 48 logged out. Waiting for processes to exit.
Feb 13 20:32:27.150045 systemd-logind[1425]: Removed session 48.
Feb 13 20:32:28.614483 kubelet[2437]: E0213 20:32:28.614432 2437 kubelet.go:3008] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Feb 13 20:32:31.540258 kubelet[2437]: E0213 20:32:31.540088 2437 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 13 20:32:31.541050 kubelet[2437]: E0213 20:32:31.540954 2437 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"install-cni\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/flannel/flannel:v0.22.0\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/flannel/flannel:v0.22.0\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/flannel/flannel/manifests/sha256:5f83f1243057458e27249157394e3859cf31cc075354af150d497f2ebc8b54db: 429 Too Many Requests - Server message: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit\"" pod="kube-flannel/kube-flannel-ds-sgxqm" podUID="7d8d9a1b-1c8a-4252-bae8-b1cf43294240"
Feb 13 20:32:32.154078 systemd[1]: Started sshd@48-10.0.0.6:22-10.0.0.1:48354.service - OpenSSH per-connection server daemon (10.0.0.1:48354).
Feb 13 20:32:32.191332 sshd[3517]: Accepted publickey for core from 10.0.0.1 port 48354 ssh2: RSA SHA256:TdhA+b7AFfsR49yUYzSzEma9Q+UWNVnDR75LIS9Grbw
Feb 13 20:32:32.192490 sshd[3517]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 20:32:32.196126 systemd-logind[1425]: New session 49 of user core.
Feb 13 20:32:32.205433 systemd[1]: Started session-49.scope - Session 49 of User core.
Feb 13 20:32:32.313370 sshd[3517]: pam_unix(sshd:session): session closed for user core
Feb 13 20:32:32.316359 systemd[1]: sshd@48-10.0.0.6:22-10.0.0.1:48354.service: Deactivated successfully.
Feb 13 20:32:32.318854 systemd[1]: session-49.scope: Deactivated successfully.
Feb 13 20:32:32.319560 systemd-logind[1425]: Session 49 logged out. Waiting for processes to exit.
Feb 13 20:32:32.320344 systemd-logind[1425]: Removed session 49.
Feb 13 20:32:33.615232 kubelet[2437]: E0213 20:32:33.615195 2437 kubelet.go:3008] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Feb 13 20:32:37.323776 systemd[1]: Started sshd@49-10.0.0.6:22-10.0.0.1:39750.service - OpenSSH per-connection server daemon (10.0.0.1:39750).
Feb 13 20:32:37.361617 sshd[3534]: Accepted publickey for core from 10.0.0.1 port 39750 ssh2: RSA SHA256:TdhA+b7AFfsR49yUYzSzEma9Q+UWNVnDR75LIS9Grbw
Feb 13 20:32:37.362841 sshd[3534]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 20:32:37.366655 systemd-logind[1425]: New session 50 of user core.
Feb 13 20:32:37.370444 systemd[1]: Started session-50.scope - Session 50 of User core.
Feb 13 20:32:37.476471 sshd[3534]: pam_unix(sshd:session): session closed for user core
Feb 13 20:32:37.485652 systemd[1]: sshd@49-10.0.0.6:22-10.0.0.1:39750.service: Deactivated successfully.
Feb 13 20:32:37.490737 systemd[1]: session-50.scope: Deactivated successfully.
Feb 13 20:32:37.491376 systemd-logind[1425]: Session 50 logged out. Waiting for processes to exit.
Feb 13 20:32:37.492115 systemd-logind[1425]: Removed session 50.
Feb 13 20:32:38.616671 kubelet[2437]: E0213 20:32:38.616636 2437 kubelet.go:3008] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Feb 13 20:32:42.484631 systemd[1]: Started sshd@50-10.0.0.6:22-10.0.0.1:57258.service - OpenSSH per-connection server daemon (10.0.0.1:57258).
Feb 13 20:32:42.521972 sshd[3550]: Accepted publickey for core from 10.0.0.1 port 57258 ssh2: RSA SHA256:TdhA+b7AFfsR49yUYzSzEma9Q+UWNVnDR75LIS9Grbw
Feb 13 20:32:42.523128 sshd[3550]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 20:32:42.526755 systemd-logind[1425]: New session 51 of user core.
Feb 13 20:32:42.536438 systemd[1]: Started session-51.scope - Session 51 of User core.
Feb 13 20:32:42.638047 sshd[3550]: pam_unix(sshd:session): session closed for user core
Feb 13 20:32:42.641141 systemd[1]: sshd@50-10.0.0.6:22-10.0.0.1:57258.service: Deactivated successfully.
Feb 13 20:32:42.642595 systemd[1]: session-51.scope: Deactivated successfully.
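Note the spacing of the pull attempts: kubelet backs off failed image pulls, roughly doubling the delay up to a cap of about five minutes, so the containerd PullImage at 20:31:34 was the only real network attempt in this stretch; the intervening ImagePullBackOff errors are replays from the back-off cache, not fresh pulls. To inspect the event history or force a retry by hand once the rate-limit window resets (a sketch, assuming kubectl access; pod and image names are taken from the log itself):

    kubectl -n kube-flannel describe pod kube-flannel-ds-sgxqm   # Events show the back-off history
    ctr -n k8s.io images pull docker.io/flannel/flannel:v0.22.0  # pull directly via containerd on the node

A successful manual pull into the k8s.io namespace lets the next kubelet sync start the init container without waiting out the back-off.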
Feb 13 20:32:42.644072 systemd-logind[1425]: Session 51 logged out. Waiting for processes to exit.
Feb 13 20:32:42.645690 systemd-logind[1425]: Removed session 51.
Feb 13 20:32:43.617407 kubelet[2437]: E0213 20:32:43.617293 2437 kubelet.go:3008] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Feb 13 20:32:45.539812 kubelet[2437]: E0213 20:32:45.539753 2437 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 13 20:32:46.537957 kubelet[2437]: E0213 20:32:46.537916 2437 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 13 20:32:46.538693 kubelet[2437]: E0213 20:32:46.538625 2437 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"install-cni\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/flannel/flannel:v0.22.0\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/flannel/flannel:v0.22.0\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/flannel/flannel/manifests/sha256:5f83f1243057458e27249157394e3859cf31cc075354af150d497f2ebc8b54db: 429 Too Many Requests - Server message: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit\"" pod="kube-flannel/kube-flannel-ds-sgxqm" podUID="7d8d9a1b-1c8a-4252-bae8-b1cf43294240"
Feb 13 20:32:47.648819 systemd[1]: Started sshd@51-10.0.0.6:22-10.0.0.1:57266.service - OpenSSH per-connection server daemon (10.0.0.1:57266).
Feb 13 20:32:47.687196 sshd[3565]: Accepted publickey for core from 10.0.0.1 port 57266 ssh2: RSA SHA256:TdhA+b7AFfsR49yUYzSzEma9Q+UWNVnDR75LIS9Grbw
Feb 13 20:32:47.688458 sshd[3565]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 20:32:47.692314 systemd-logind[1425]: New session 52 of user core.
Feb 13 20:32:47.699441 systemd[1]: Started session-52.scope - Session 52 of User core.
Feb 13 20:32:47.805663 sshd[3565]: pam_unix(sshd:session): session closed for user core
Feb 13 20:32:47.808732 systemd[1]: sshd@51-10.0.0.6:22-10.0.0.1:57266.service: Deactivated successfully.
Feb 13 20:32:47.810917 systemd[1]: session-52.scope: Deactivated successfully.
Feb 13 20:32:47.811642 systemd-logind[1425]: Session 52 logged out. Waiting for processes to exit.
Feb 13 20:32:47.812685 systemd-logind[1425]: Removed session 52.
Feb 13 20:32:48.618272 kubelet[2437]: E0213 20:32:48.618228 2437 kubelet.go:3008] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Feb 13 20:32:52.815800 systemd[1]: Started sshd@52-10.0.0.6:22-10.0.0.1:41652.service - OpenSSH per-connection server daemon (10.0.0.1:41652).
Feb 13 20:32:52.853423 sshd[3579]: Accepted publickey for core from 10.0.0.1 port 41652 ssh2: RSA SHA256:TdhA+b7AFfsR49yUYzSzEma9Q+UWNVnDR75LIS9Grbw
Feb 13 20:32:52.854587 sshd[3579]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 20:32:52.858364 systemd-logind[1425]: New session 53 of user core.
Feb 13 20:32:52.865432 systemd[1]: Started session-53.scope - Session 53 of User core.
Feb 13 20:32:52.968439 sshd[3579]: pam_unix(sshd:session): session closed for user core
Feb 13 20:32:52.971675 systemd[1]: sshd@52-10.0.0.6:22-10.0.0.1:41652.service: Deactivated successfully.
Feb 13 20:32:52.974110 systemd[1]: session-53.scope: Deactivated successfully.
Feb 13 20:32:52.975076 systemd-logind[1425]: Session 53 logged out. Waiting for processes to exit.
Feb 13 20:32:52.975981 systemd-logind[1425]: Removed session 53.
Feb 13 20:32:53.619397 kubelet[2437]: E0213 20:32:53.619358 2437 kubelet.go:3008] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Feb 13 20:32:57.982769 systemd[1]: Started sshd@53-10.0.0.6:22-10.0.0.1:41664.service - OpenSSH per-connection server daemon (10.0.0.1:41664).
Feb 13 20:32:58.019934 sshd[3593]: Accepted publickey for core from 10.0.0.1 port 41664 ssh2: RSA SHA256:TdhA+b7AFfsR49yUYzSzEma9Q+UWNVnDR75LIS9Grbw
Feb 13 20:32:58.021101 sshd[3593]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 20:32:58.024818 systemd-logind[1425]: New session 54 of user core.
Feb 13 20:32:58.032434 systemd[1]: Started session-54.scope - Session 54 of User core.
Feb 13 20:32:58.137787 sshd[3593]: pam_unix(sshd:session): session closed for user core
Feb 13 20:32:58.140707 systemd[1]: sshd@53-10.0.0.6:22-10.0.0.1:41664.service: Deactivated successfully.
Feb 13 20:32:58.143500 systemd[1]: session-54.scope: Deactivated successfully.
Feb 13 20:32:58.144262 systemd-logind[1425]: Session 54 logged out. Waiting for processes to exit.
Feb 13 20:32:58.145261 systemd-logind[1425]: Removed session 54.
Feb 13 20:32:58.620937 kubelet[2437]: E0213 20:32:58.620887 2437 kubelet.go:3008] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Feb 13 20:32:59.538120 kubelet[2437]: E0213 20:32:59.538090 2437 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 13 20:32:59.538755 kubelet[2437]: E0213 20:32:59.538701 2437 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"install-cni\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/flannel/flannel:v0.22.0\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/flannel/flannel:v0.22.0\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/flannel/flannel/manifests/sha256:5f83f1243057458e27249157394e3859cf31cc075354af150d497f2ebc8b54db: 429 Too Many Requests - Server message: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit\"" pod="kube-flannel/kube-flannel-ds-sgxqm" podUID="7d8d9a1b-1c8a-4252-bae8-b1cf43294240"
Feb 13 20:33:03.148426 systemd[1]: Started sshd@54-10.0.0.6:22-10.0.0.1:55228.service - OpenSSH per-connection server daemon (10.0.0.1:55228).
Feb 13 20:33:03.185447 sshd[3608]: Accepted publickey for core from 10.0.0.1 port 55228 ssh2: RSA SHA256:TdhA+b7AFfsR49yUYzSzEma9Q+UWNVnDR75LIS9Grbw
Feb 13 20:33:03.186613 sshd[3608]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 20:33:03.189905 systemd-logind[1425]: New session 55 of user core.
Feb 13 20:33:03.200420 systemd[1]: Started session-55.scope - Session 55 of User core.
Feb 13 20:33:03.306030 sshd[3608]: pam_unix(sshd:session): session closed for user core
Feb 13 20:33:03.309418 systemd[1]: sshd@54-10.0.0.6:22-10.0.0.1:55228.service: Deactivated successfully.
Feb 13 20:33:03.311952 systemd[1]: session-55.scope: Deactivated successfully.
Feb 13 20:33:03.313053 systemd-logind[1425]: Session 55 logged out. Waiting for processes to exit.
Feb 13 20:33:03.313895 systemd-logind[1425]: Removed session 55.
Feb 13 20:33:03.621648 kubelet[2437]: E0213 20:33:03.621608 2437 kubelet.go:3008] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Feb 13 20:33:08.316782 systemd[1]: Started sshd@55-10.0.0.6:22-10.0.0.1:55242.service - OpenSSH per-connection server daemon (10.0.0.1:55242).
Feb 13 20:33:08.354384 sshd[3622]: Accepted publickey for core from 10.0.0.1 port 55242 ssh2: RSA SHA256:TdhA+b7AFfsR49yUYzSzEma9Q+UWNVnDR75LIS9Grbw
Feb 13 20:33:08.355595 sshd[3622]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 20:33:08.359005 systemd-logind[1425]: New session 56 of user core.
Feb 13 20:33:08.376494 systemd[1]: Started session-56.scope - Session 56 of User core.
Feb 13 20:33:08.483048 sshd[3622]: pam_unix(sshd:session): session closed for user core
Feb 13 20:33:08.486138 systemd[1]: sshd@55-10.0.0.6:22-10.0.0.1:55242.service: Deactivated successfully.
Feb 13 20:33:08.487678 systemd[1]: session-56.scope: Deactivated successfully.
Feb 13 20:33:08.488836 systemd-logind[1425]: Session 56 logged out. Waiting for processes to exit.
Feb 13 20:33:08.489788 systemd-logind[1425]: Removed session 56.
Feb 13 20:33:08.623043 kubelet[2437]: E0213 20:33:08.622920 2437 kubelet.go:3008] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Feb 13 20:33:12.537769 kubelet[2437]: E0213 20:33:12.537729 2437 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 13 20:33:12.537769 kubelet[2437]: E0213 20:33:12.538270 2437 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"install-cni\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/flannel/flannel:v0.22.0\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/flannel/flannel:v0.22.0\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/flannel/flannel/manifests/sha256:5f83f1243057458e27249157394e3859cf31cc075354af150d497f2ebc8b54db: 429 Too Many Requests - Server message: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit\"" pod="kube-flannel/kube-flannel-ds-sgxqm" podUID="7d8d9a1b-1c8a-4252-bae8-b1cf43294240"
Feb 13 20:33:13.493743 systemd[1]: Started sshd@56-10.0.0.6:22-10.0.0.1:56008.service - OpenSSH per-connection server daemon (10.0.0.1:56008).
Feb 13 20:33:13.537523 kubelet[2437]: E0213 20:33:13.537448 2437 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 13 20:33:13.538369 sshd[3638]: Accepted publickey for core from 10.0.0.1 port 56008 ssh2: RSA SHA256:TdhA+b7AFfsR49yUYzSzEma9Q+UWNVnDR75LIS9Grbw
Feb 13 20:33:13.539799 sshd[3638]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 20:33:13.545746 systemd-logind[1425]: New session 57 of user core.
Feb 13 20:33:13.555435 systemd[1]: Started session-57.scope - Session 57 of User core.
Feb 13 20:33:13.624110 kubelet[2437]: E0213 20:33:13.624081 2437 kubelet.go:3008] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Feb 13 20:33:13.663289 sshd[3638]: pam_unix(sshd:session): session closed for user core
Feb 13 20:33:13.666451 systemd[1]: sshd@56-10.0.0.6:22-10.0.0.1:56008.service: Deactivated successfully.
Feb 13 20:33:13.668817 systemd[1]: session-57.scope: Deactivated successfully.
Feb 13 20:33:13.669588 systemd-logind[1425]: Session 57 logged out. Waiting for processes to exit.
Feb 13 20:33:13.670513 systemd-logind[1425]: Removed session 57.
Feb 13 20:33:18.625356 kubelet[2437]: E0213 20:33:18.625289 2437 kubelet.go:3008] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Feb 13 20:33:18.673817 systemd[1]: Started sshd@57-10.0.0.6:22-10.0.0.1:56020.service - OpenSSH per-connection server daemon (10.0.0.1:56020).
Feb 13 20:33:18.711058 sshd[3652]: Accepted publickey for core from 10.0.0.1 port 56020 ssh2: RSA SHA256:TdhA+b7AFfsR49yUYzSzEma9Q+UWNVnDR75LIS9Grbw
Feb 13 20:33:18.712187 sshd[3652]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 20:33:18.715815 systemd-logind[1425]: New session 58 of user core.
Feb 13 20:33:18.727453 systemd[1]: Started session-58.scope - Session 58 of User core.
Feb 13 20:33:18.834600 sshd[3652]: pam_unix(sshd:session): session closed for user core
Feb 13 20:33:18.837578 systemd[1]: sshd@57-10.0.0.6:22-10.0.0.1:56020.service: Deactivated successfully.
Feb 13 20:33:18.839172 systemd[1]: session-58.scope: Deactivated successfully.
Feb 13 20:33:18.840375 systemd-logind[1425]: Session 58 logged out. Waiting for processes to exit.
Feb 13 20:33:18.841253 systemd-logind[1425]: Removed session 58.
Feb 13 20:33:23.538615 kubelet[2437]: E0213 20:33:23.538516 2437 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 13 20:33:23.539363 kubelet[2437]: E0213 20:33:23.539210 2437 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"install-cni\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/flannel/flannel:v0.22.0\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/flannel/flannel:v0.22.0\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/flannel/flannel/manifests/sha256:5f83f1243057458e27249157394e3859cf31cc075354af150d497f2ebc8b54db: 429 Too Many Requests - Server message: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit\"" pod="kube-flannel/kube-flannel-ds-sgxqm" podUID="7d8d9a1b-1c8a-4252-bae8-b1cf43294240"
Feb 13 20:33:23.626670 kubelet[2437]: E0213 20:33:23.626634 2437 kubelet.go:3008] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Feb 13 20:33:23.848998 systemd[1]: Started sshd@58-10.0.0.6:22-10.0.0.1:40446.service - OpenSSH per-connection server daemon (10.0.0.1:40446).
Feb 13 20:33:23.886603 sshd[3667]: Accepted publickey for core from 10.0.0.1 port 40446 ssh2: RSA SHA256:TdhA+b7AFfsR49yUYzSzEma9Q+UWNVnDR75LIS9Grbw
Feb 13 20:33:23.887756 sshd[3667]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 20:33:23.891651 systemd-logind[1425]: New session 59 of user core.
Feb 13 20:33:23.908443 systemd[1]: Started session-59.scope - Session 59 of User core.
Feb 13 20:33:24.015435 sshd[3667]: pam_unix(sshd:session): session closed for user core
Feb 13 20:33:24.018477 systemd[1]: sshd@58-10.0.0.6:22-10.0.0.1:40446.service: Deactivated successfully.
Feb 13 20:33:24.020012 systemd[1]: session-59.scope: Deactivated successfully.
Feb 13 20:33:24.020580 systemd-logind[1425]: Session 59 logged out. Waiting for processes to exit.
Feb 13 20:33:24.021392 systemd-logind[1425]: Removed session 59.
Feb 13 20:33:28.627431 kubelet[2437]: E0213 20:33:28.627374 2437 kubelet.go:3008] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Feb 13 20:33:29.026714 systemd[1]: Started sshd@59-10.0.0.6:22-10.0.0.1:40458.service - OpenSSH per-connection server daemon (10.0.0.1:40458).
Feb 13 20:33:29.064045 sshd[3681]: Accepted publickey for core from 10.0.0.1 port 40458 ssh2: RSA SHA256:TdhA+b7AFfsR49yUYzSzEma9Q+UWNVnDR75LIS9Grbw
Feb 13 20:33:29.065222 sshd[3681]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 20:33:29.068624 systemd-logind[1425]: New session 60 of user core.
Feb 13 20:33:29.078496 systemd[1]: Started session-60.scope - Session 60 of User core.
Feb 13 20:33:29.182422 sshd[3681]: pam_unix(sshd:session): session closed for user core
Feb 13 20:33:29.185535 systemd[1]: sshd@59-10.0.0.6:22-10.0.0.1:40458.service: Deactivated successfully.
Feb 13 20:33:29.187912 systemd[1]: session-60.scope: Deactivated successfully.
Feb 13 20:33:29.188543 systemd-logind[1425]: Session 60 logged out. Waiting for processes to exit.
Feb 13 20:33:29.189328 systemd-logind[1425]: Removed session 60.
Feb 13 20:33:33.628202 kubelet[2437]: E0213 20:33:33.628158 2437 kubelet.go:3008] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Feb 13 20:33:34.192717 systemd[1]: Started sshd@60-10.0.0.6:22-10.0.0.1:47020.service - OpenSSH per-connection server daemon (10.0.0.1:47020).
Feb 13 20:33:34.229582 sshd[3697]: Accepted publickey for core from 10.0.0.1 port 47020 ssh2: RSA SHA256:TdhA+b7AFfsR49yUYzSzEma9Q+UWNVnDR75LIS9Grbw
Feb 13 20:33:34.230735 sshd[3697]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 20:33:34.234507 systemd-logind[1425]: New session 61 of user core.
Feb 13 20:33:34.244450 systemd[1]: Started session-61.scope - Session 61 of User core.
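The 429 message itself names the other remedy: authenticated pulls get a higher Docker Hub quota than anonymous ones. One way to wire that in, assuming access to the cluster API (<user> and <token> are placeholders, not values from this log):

    kubectl -n kube-flannel create secret docker-registry dockerhub-creds \
      --docker-server=https://index.docker.io/v1/ \
      --docker-username=<user> --docker-password=<token>

and then reference the secret from the DaemonSet's pod spec so both the init container and the main container pull with credentials:

    spec:
      template:
        spec:
          imagePullSecrets:
            - name: dockerhub-creds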
Feb 13 20:33:34.350994 sshd[3697]: pam_unix(sshd:session): session closed for user core
Feb 13 20:33:34.354021 systemd[1]: sshd@60-10.0.0.6:22-10.0.0.1:47020.service: Deactivated successfully.
Feb 13 20:33:34.355553 systemd[1]: session-61.scope: Deactivated successfully.
Feb 13 20:33:34.356103 systemd-logind[1425]: Session 61 logged out. Waiting for processes to exit.
Feb 13 20:33:34.356899 systemd-logind[1425]: Removed session 61.
Feb 13 20:33:36.538159 kubelet[2437]: E0213 20:33:36.538121 2437 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 13 20:33:36.538904 kubelet[2437]: E0213 20:33:36.538846 2437 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"install-cni\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/flannel/flannel:v0.22.0\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/flannel/flannel:v0.22.0\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/flannel/flannel/manifests/sha256:5f83f1243057458e27249157394e3859cf31cc075354af150d497f2ebc8b54db: 429 Too Many Requests - Server message: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit\"" pod="kube-flannel/kube-flannel-ds-sgxqm" podUID="7d8d9a1b-1c8a-4252-bae8-b1cf43294240"
Feb 13 20:33:38.629676 kubelet[2437]: E0213 20:33:38.629623 2437 kubelet.go:3008] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Feb 13 20:33:39.361806 systemd[1]: Started sshd@61-10.0.0.6:22-10.0.0.1:47028.service - OpenSSH per-connection server daemon (10.0.0.1:47028).
Feb 13 20:33:39.399179 sshd[3712]: Accepted publickey for core from 10.0.0.1 port 47028 ssh2: RSA SHA256:TdhA+b7AFfsR49yUYzSzEma9Q+UWNVnDR75LIS9Grbw
Feb 13 20:33:39.400407 sshd[3712]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 20:33:39.405813 systemd-logind[1425]: New session 62 of user core.
Feb 13 20:33:39.412506 systemd[1]: Started session-62.scope - Session 62 of User core.
Feb 13 20:33:39.518916 sshd[3712]: pam_unix(sshd:session): session closed for user core
Feb 13 20:33:39.521943 systemd[1]: sshd@61-10.0.0.6:22-10.0.0.1:47028.service: Deactivated successfully.
Feb 13 20:33:39.523601 systemd[1]: session-62.scope: Deactivated successfully.
Feb 13 20:33:39.524164 systemd-logind[1425]: Session 62 logged out. Waiting for processes to exit.
Feb 13 20:33:39.524979 systemd-logind[1425]: Removed session 62.
Feb 13 20:33:43.630715 kubelet[2437]: E0213 20:33:43.630660 2437 kubelet.go:3008] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Feb 13 20:33:44.531912 systemd[1]: Started sshd@62-10.0.0.6:22-10.0.0.1:43182.service - OpenSSH per-connection server daemon (10.0.0.1:43182).
Feb 13 20:33:44.568808 sshd[3732]: Accepted publickey for core from 10.0.0.1 port 43182 ssh2: RSA SHA256:TdhA+b7AFfsR49yUYzSzEma9Q+UWNVnDR75LIS9Grbw
Feb 13 20:33:44.569900 sshd[3732]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 20:33:44.574046 systemd-logind[1425]: New session 63 of user core.
Feb 13 20:33:44.583456 systemd[1]: Started session-63.scope - Session 63 of User core. Feb 13 20:33:44.690356 sshd[3732]: pam_unix(sshd:session): session closed for user core Feb 13 20:33:44.693472 systemd[1]: sshd@62-10.0.0.6:22-10.0.0.1:43182.service: Deactivated successfully. Feb 13 20:33:44.695168 systemd[1]: session-63.scope: Deactivated successfully. Feb 13 20:33:44.695776 systemd-logind[1425]: Session 63 logged out. Waiting for processes to exit. Feb 13 20:33:44.696573 systemd-logind[1425]: Removed session 63. Feb 13 20:33:45.537714 kubelet[2437]: E0213 20:33:45.537675 2437 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 20:33:46.537679 kubelet[2437]: E0213 20:33:46.537643 2437 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 20:33:48.632050 kubelet[2437]: E0213 20:33:48.632013 2437 kubelet.go:3008] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Feb 13 20:33:49.540087 kubelet[2437]: E0213 20:33:49.539735 2437 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 20:33:49.540087 kubelet[2437]: E0213 20:33:49.539909 2437 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 20:33:49.540475 kubelet[2437]: E0213 20:33:49.540407 2437 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"install-cni\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/flannel/flannel:v0.22.0\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/flannel/flannel:v0.22.0\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/flannel/flannel/manifests/sha256:5f83f1243057458e27249157394e3859cf31cc075354af150d497f2ebc8b54db: 429 Too Many Requests - Server message: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit\"" pod="kube-flannel/kube-flannel-ds-sgxqm" podUID="7d8d9a1b-1c8a-4252-bae8-b1cf43294240" Feb 13 20:33:49.700959 systemd[1]: Started sshd@63-10.0.0.6:22-10.0.0.1:43194.service - OpenSSH per-connection server daemon (10.0.0.1:43194). Feb 13 20:33:49.738454 sshd[3746]: Accepted publickey for core from 10.0.0.1 port 43194 ssh2: RSA SHA256:TdhA+b7AFfsR49yUYzSzEma9Q+UWNVnDR75LIS9Grbw Feb 13 20:33:49.739656 sshd[3746]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 20:33:49.742906 systemd-logind[1425]: New session 64 of user core. Feb 13 20:33:49.753440 systemd[1]: Started session-64.scope - Session 64 of User core. Feb 13 20:33:49.859672 sshd[3746]: pam_unix(sshd:session): session closed for user core Feb 13 20:33:49.862887 systemd[1]: sshd@63-10.0.0.6:22-10.0.0.1:43194.service: Deactivated successfully. Feb 13 20:33:49.865562 systemd[1]: session-64.scope: Deactivated successfully. Feb 13 20:33:49.866252 systemd-logind[1425]: Session 64 logged out. Waiting for processes to exit. 
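
The recurring `429 Too Many Requests / toomanyrequests` failures above are Docker Hub's anonymous pull quota rejecting the flannel image pull, which is why the pod stays in ImagePullBackOff. A minimal remediation sketch, assuming a Docker Hub account is available; the secret name and credentials are illustrative, while the namespace (`kube-flannel`) and DaemonSet name (`kube-flannel-ds`, inferred from the pod name in the log) come from the entries above:

```sh
# Hypothetical fix sketch: authenticate pulls so they count against an account
# quota instead of the node's anonymous per-IP quota. Credentials are placeholders.
kubectl -n kube-flannel create secret docker-registry dockerhub-creds \
  --docker-username=<your-user> --docker-password=<your-access-token>

# Attach the pull secret to the flannel DaemonSet so kubelet uses it on retry.
kubectl -n kube-flannel patch daemonset kube-flannel-ds --type merge \
  -p '{"spec":{"template":{"spec":{"imagePullSecrets":[{"name":"dockerhub-creds"}]}}}}'
```

Mirroring `docker.io` through a private registry would work equally well; the point is only that unauthenticated pulls from this address have exhausted their window.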
Feb 13 20:33:49.867174 systemd-logind[1425]: Removed session 64. Feb 13 20:33:53.633258 kubelet[2437]: E0213 20:33:53.633217 2437 kubelet.go:3008] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Feb 13 20:33:54.870717 systemd[1]: Started sshd@64-10.0.0.6:22-10.0.0.1:48940.service - OpenSSH per-connection server daemon (10.0.0.1:48940). Feb 13 20:33:54.908224 sshd[3760]: Accepted publickey for core from 10.0.0.1 port 48940 ssh2: RSA SHA256:TdhA+b7AFfsR49yUYzSzEma9Q+UWNVnDR75LIS9Grbw Feb 13 20:33:54.909397 sshd[3760]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 20:33:54.913271 systemd-logind[1425]: New session 65 of user core. Feb 13 20:33:54.919430 systemd[1]: Started session-65.scope - Session 65 of User core. Feb 13 20:33:55.025043 sshd[3760]: pam_unix(sshd:session): session closed for user core Feb 13 20:33:55.028147 systemd[1]: sshd@64-10.0.0.6:22-10.0.0.1:48940.service: Deactivated successfully. Feb 13 20:33:55.029834 systemd[1]: session-65.scope: Deactivated successfully. Feb 13 20:33:55.030615 systemd-logind[1425]: Session 65 logged out. Waiting for processes to exit. Feb 13 20:33:55.031498 systemd-logind[1425]: Removed session 65. Feb 13 20:33:58.634287 kubelet[2437]: E0213 20:33:58.634248 2437 kubelet.go:3008] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Feb 13 20:34:00.035760 systemd[1]: Started sshd@65-10.0.0.6:22-10.0.0.1:48952.service - OpenSSH per-connection server daemon (10.0.0.1:48952). Feb 13 20:34:00.072957 sshd[3774]: Accepted publickey for core from 10.0.0.1 port 48952 ssh2: RSA SHA256:TdhA+b7AFfsR49yUYzSzEma9Q+UWNVnDR75LIS9Grbw Feb 13 20:34:00.074166 sshd[3774]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 20:34:00.077427 systemd-logind[1425]: New session 66 of user core. Feb 13 20:34:00.089452 systemd[1]: Started session-66.scope - Session 66 of User core. Feb 13 20:34:00.194829 sshd[3774]: pam_unix(sshd:session): session closed for user core Feb 13 20:34:00.197983 systemd[1]: sshd@65-10.0.0.6:22-10.0.0.1:48952.service: Deactivated successfully. Feb 13 20:34:00.199533 systemd[1]: session-66.scope: Deactivated successfully. Feb 13 20:34:00.200141 systemd-logind[1425]: Session 66 logged out. Waiting for processes to exit. Feb 13 20:34:00.200921 systemd-logind[1425]: Removed session 66. Feb 13 20:34:03.537755 kubelet[2437]: E0213 20:34:03.537672 2437 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 20:34:03.538878 kubelet[2437]: E0213 20:34:03.538106 2437 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"install-cni\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/flannel/flannel:v0.22.0\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/flannel/flannel:v0.22.0\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/flannel/flannel/manifests/sha256:5f83f1243057458e27249157394e3859cf31cc075354af150d497f2ebc8b54db: 429 Too Many Requests - Server message: toomanyrequests: You have reached your pull rate limit. 
You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit\"" pod="kube-flannel/kube-flannel-ds-sgxqm" podUID="7d8d9a1b-1c8a-4252-bae8-b1cf43294240" Feb 13 20:34:03.635392 kubelet[2437]: E0213 20:34:03.635356 2437 kubelet.go:3008] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Feb 13 20:34:05.206904 systemd[1]: Started sshd@66-10.0.0.6:22-10.0.0.1:59556.service - OpenSSH per-connection server daemon (10.0.0.1:59556). Feb 13 20:34:05.244290 sshd[3789]: Accepted publickey for core from 10.0.0.1 port 59556 ssh2: RSA SHA256:TdhA+b7AFfsR49yUYzSzEma9Q+UWNVnDR75LIS9Grbw Feb 13 20:34:05.245446 sshd[3789]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 20:34:05.249163 systemd-logind[1425]: New session 67 of user core. Feb 13 20:34:05.259491 systemd[1]: Started session-67.scope - Session 67 of User core. Feb 13 20:34:05.365016 sshd[3789]: pam_unix(sshd:session): session closed for user core Feb 13 20:34:05.368140 systemd[1]: sshd@66-10.0.0.6:22-10.0.0.1:59556.service: Deactivated successfully. Feb 13 20:34:05.369741 systemd[1]: session-67.scope: Deactivated successfully. Feb 13 20:34:05.370324 systemd-logind[1425]: Session 67 logged out. Waiting for processes to exit. Feb 13 20:34:05.371065 systemd-logind[1425]: Removed session 67. Feb 13 20:34:08.636116 kubelet[2437]: E0213 20:34:08.636068 2437 kubelet.go:3008] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Feb 13 20:34:10.375842 systemd[1]: Started sshd@67-10.0.0.6:22-10.0.0.1:59570.service - OpenSSH per-connection server daemon (10.0.0.1:59570). Feb 13 20:34:10.412950 sshd[3805]: Accepted publickey for core from 10.0.0.1 port 59570 ssh2: RSA SHA256:TdhA+b7AFfsR49yUYzSzEma9Q+UWNVnDR75LIS9Grbw Feb 13 20:34:10.414102 sshd[3805]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 20:34:10.417547 systemd-logind[1425]: New session 68 of user core. Feb 13 20:34:10.432446 systemd[1]: Started session-68.scope - Session 68 of User core. Feb 13 20:34:10.539768 sshd[3805]: pam_unix(sshd:session): session closed for user core Feb 13 20:34:10.542860 systemd[1]: sshd@67-10.0.0.6:22-10.0.0.1:59570.service: Deactivated successfully. Feb 13 20:34:10.544438 systemd[1]: session-68.scope: Deactivated successfully. Feb 13 20:34:10.545214 systemd-logind[1425]: Session 68 logged out. Waiting for processes to exit. Feb 13 20:34:10.546067 systemd-logind[1425]: Removed session 68. Feb 13 20:34:13.637194 kubelet[2437]: E0213 20:34:13.637159 2437 kubelet.go:3008] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Feb 13 20:34:15.550016 systemd[1]: Started sshd@68-10.0.0.6:22-10.0.0.1:50446.service - OpenSSH per-connection server daemon (10.0.0.1:50446). Feb 13 20:34:15.587347 sshd[3819]: Accepted publickey for core from 10.0.0.1 port 50446 ssh2: RSA SHA256:TdhA+b7AFfsR49yUYzSzEma9Q+UWNVnDR75LIS9Grbw Feb 13 20:34:15.588482 sshd[3819]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 20:34:15.592187 systemd-logind[1425]: New session 69 of user core. Feb 13 20:34:15.601437 systemd[1]: Started session-69.scope - Session 69 of User core. 
Feb 13 20:34:15.708726 sshd[3819]: pam_unix(sshd:session): session closed for user core Feb 13 20:34:15.711738 systemd[1]: sshd@68-10.0.0.6:22-10.0.0.1:50446.service: Deactivated successfully. Feb 13 20:34:15.714738 systemd[1]: session-69.scope: Deactivated successfully. Feb 13 20:34:15.715290 systemd-logind[1425]: Session 69 logged out. Waiting for processes to exit. Feb 13 20:34:15.716033 systemd-logind[1425]: Removed session 69. Feb 13 20:34:16.538015 kubelet[2437]: E0213 20:34:16.537985 2437 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 20:34:16.539052 containerd[1443]: time="2025-02-13T20:34:16.539015733Z" level=info msg="PullImage \"docker.io/flannel/flannel:v0.22.0\"" Feb 13 20:34:17.850053 containerd[1443]: time="2025-02-13T20:34:17.849999377Z" level=error msg="PullImage \"docker.io/flannel/flannel:v0.22.0\" failed" error="failed to pull and unpack image \"docker.io/flannel/flannel:v0.22.0\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/flannel/flannel/manifests/sha256:de969d65cb6e570ce2ff6f069132707b6da0e6d2adb43116d43743e2c31fd773: 429 Too Many Requests - Server message: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit" Feb 13 20:34:17.850451 containerd[1443]: time="2025-02-13T20:34:17.850079778Z" level=info msg="stop pulling image docker.io/flannel/flannel:v0.22.0: active requests=0, bytes read=13093" Feb 13 20:34:17.850494 kubelet[2437]: E0213 20:34:17.850196 2437 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = failed to pull and unpack image \"docker.io/flannel/flannel:v0.22.0\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/flannel/flannel/manifests/sha256:de969d65cb6e570ce2ff6f069132707b6da0e6d2adb43116d43743e2c31fd773: 429 Too Many Requests - Server message: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit" image="docker.io/flannel/flannel:v0.22.0" Feb 13 20:34:17.850494 kubelet[2437]: E0213 20:34:17.850240 2437 kuberuntime_image.go:55] "Failed to pull image" err="failed to pull and unpack image \"docker.io/flannel/flannel:v0.22.0\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/flannel/flannel/manifests/sha256:de969d65cb6e570ce2ff6f069132707b6da0e6d2adb43116d43743e2c31fd773: 429 Too Many Requests - Server message: toomanyrequests: You have reached your pull rate limit. 
You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit" image="docker.io/flannel/flannel:v0.22.0" Feb 13 20:34:17.850723 kubelet[2437]: E0213 20:34:17.850380 2437 kuberuntime_manager.go:1341] "Unhandled Error" err="init container &Container{Name:install-cni,Image:docker.io/flannel/flannel:v0.22.0,Command:[cp],Args:[-f /etc/kube-flannel/cni-conf.json /etc/cni/net.d/10-flannel.conflist],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:cni,ReadOnly:false,MountPath:/etc/cni/net.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:flannel-cfg,ReadOnly:false,MountPath:/etc/kube-flannel/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-qnvjj,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod kube-flannel-ds-sgxqm_kube-flannel(7d8d9a1b-1c8a-4252-bae8-b1cf43294240): ErrImagePull: failed to pull and unpack image \"docker.io/flannel/flannel:v0.22.0\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/flannel/flannel/manifests/sha256:de969d65cb6e570ce2ff6f069132707b6da0e6d2adb43116d43743e2c31fd773: 429 Too Many Requests - Server message: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit" logger="UnhandledError" Feb 13 20:34:17.851970 kubelet[2437]: E0213 20:34:17.851921 2437 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"install-cni\" with ErrImagePull: \"failed to pull and unpack image \\\"docker.io/flannel/flannel:v0.22.0\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/flannel/flannel/manifests/sha256:de969d65cb6e570ce2ff6f069132707b6da0e6d2adb43116d43743e2c31fd773: 429 Too Many Requests - Server message: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit\"" pod="kube-flannel/kube-flannel-ds-sgxqm" podUID="7d8d9a1b-1c8a-4252-bae8-b1cf43294240" Feb 13 20:34:18.638710 kubelet[2437]: E0213 20:34:18.638674 2437 kubelet.go:3008] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Feb 13 20:34:20.719809 systemd[1]: Started sshd@69-10.0.0.6:22-10.0.0.1:50456.service - OpenSSH per-connection server daemon (10.0.0.1:50456). Feb 13 20:34:20.756914 sshd[3835]: Accepted publickey for core from 10.0.0.1 port 50456 ssh2: RSA SHA256:TdhA+b7AFfsR49yUYzSzEma9Q+UWNVnDR75LIS9Grbw Feb 13 20:34:20.758025 sshd[3835]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 20:34:20.761171 systemd-logind[1425]: New session 70 of user core. 
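
The container spec dumped in the "Unhandled Error" entry above shows why the kubelet keeps logging "cni plugin not initialized": the `install-cni` init container is supposed to run `cp -f /etc/kube-flannel/cni-conf.json /etc/cni/net.d/10-flannel.conflist`, and since its image never pulls, `/etc/cni/net.d` stays empty. For reference, a sketch of the stock flannel conflist that step would install (contents assumed from upstream flannel defaults, not read from this host); placing it manually does not by itself restore networking, since flanneld still has to start:

```sh
# Illustrative only: the file the init container would have copied into place.
cat <<'EOF' | sudo tee /etc/cni/net.d/10-flannel.conflist
{
  "name": "cbr0",
  "cniVersion": "0.3.1",
  "plugins": [
    { "type": "flannel", "delegate": { "hairpinMode": true, "isDefaultGateway": true } },
    { "type": "portmap", "capabilities": { "portMappings": true } }
  ]
}
EOF
```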
Feb 13 20:34:20.776432 systemd[1]: Started session-70.scope - Session 70 of User core. Feb 13 20:34:20.882490 sshd[3835]: pam_unix(sshd:session): session closed for user core Feb 13 20:34:20.885779 systemd[1]: sshd@69-10.0.0.6:22-10.0.0.1:50456.service: Deactivated successfully. Feb 13 20:34:20.887505 systemd[1]: session-70.scope: Deactivated successfully. Feb 13 20:34:20.888053 systemd-logind[1425]: Session 70 logged out. Waiting for processes to exit. Feb 13 20:34:20.888767 systemd-logind[1425]: Removed session 70. Feb 13 20:34:23.639465 kubelet[2437]: E0213 20:34:23.639430 2437 kubelet.go:3008] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Feb 13 20:34:25.896763 systemd[1]: Started sshd@70-10.0.0.6:22-10.0.0.1:34520.service - OpenSSH per-connection server daemon (10.0.0.1:34520). Feb 13 20:34:25.933778 sshd[3850]: Accepted publickey for core from 10.0.0.1 port 34520 ssh2: RSA SHA256:TdhA+b7AFfsR49yUYzSzEma9Q+UWNVnDR75LIS9Grbw Feb 13 20:34:25.934948 sshd[3850]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 20:34:25.938162 systemd-logind[1425]: New session 71 of user core. Feb 13 20:34:25.947447 systemd[1]: Started session-71.scope - Session 71 of User core. Feb 13 20:34:26.054613 sshd[3850]: pam_unix(sshd:session): session closed for user core Feb 13 20:34:26.057831 systemd[1]: sshd@70-10.0.0.6:22-10.0.0.1:34520.service: Deactivated successfully. Feb 13 20:34:26.059792 systemd[1]: session-71.scope: Deactivated successfully. Feb 13 20:34:26.060612 systemd-logind[1425]: Session 71 logged out. Waiting for processes to exit. Feb 13 20:34:26.061583 systemd-logind[1425]: Removed session 71. Feb 13 20:34:28.538408 kubelet[2437]: E0213 20:34:28.538293 2437 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 20:34:28.539860 kubelet[2437]: E0213 20:34:28.539791 2437 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"install-cni\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/flannel/flannel:v0.22.0\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/flannel/flannel:v0.22.0\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/flannel/flannel/manifests/sha256:de969d65cb6e570ce2ff6f069132707b6da0e6d2adb43116d43743e2c31fd773: 429 Too Many Requests - Server message: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit\"" pod="kube-flannel/kube-flannel-ds-sgxqm" podUID="7d8d9a1b-1c8a-4252-bae8-b1cf43294240" Feb 13 20:34:28.640868 kubelet[2437]: E0213 20:34:28.640816 2437 kubelet.go:3008] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Feb 13 20:34:31.064923 systemd[1]: Started sshd@71-10.0.0.6:22-10.0.0.1:34534.service - OpenSSH per-connection server daemon (10.0.0.1:34534). 
Feb 13 20:34:31.102220 sshd[3865]: Accepted publickey for core from 10.0.0.1 port 34534 ssh2: RSA SHA256:TdhA+b7AFfsR49yUYzSzEma9Q+UWNVnDR75LIS9Grbw Feb 13 20:34:31.103470 sshd[3865]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 20:34:31.106932 systemd-logind[1425]: New session 72 of user core. Feb 13 20:34:31.119450 systemd[1]: Started session-72.scope - Session 72 of User core. Feb 13 20:34:31.225396 sshd[3865]: pam_unix(sshd:session): session closed for user core Feb 13 20:34:31.228503 systemd[1]: sshd@71-10.0.0.6:22-10.0.0.1:34534.service: Deactivated successfully. Feb 13 20:34:31.231987 systemd[1]: session-72.scope: Deactivated successfully. Feb 13 20:34:31.232652 systemd-logind[1425]: Session 72 logged out. Waiting for processes to exit. Feb 13 20:34:31.234543 systemd-logind[1425]: Removed session 72. Feb 13 20:34:33.641758 kubelet[2437]: E0213 20:34:33.641719 2437 kubelet.go:3008] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Feb 13 20:34:34.537786 kubelet[2437]: E0213 20:34:34.537430 2437 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 20:34:36.235708 systemd[1]: Started sshd@72-10.0.0.6:22-10.0.0.1:55918.service - OpenSSH per-connection server daemon (10.0.0.1:55918). Feb 13 20:34:36.273046 sshd[3882]: Accepted publickey for core from 10.0.0.1 port 55918 ssh2: RSA SHA256:TdhA+b7AFfsR49yUYzSzEma9Q+UWNVnDR75LIS9Grbw Feb 13 20:34:36.274183 sshd[3882]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 20:34:36.278574 systemd-logind[1425]: New session 73 of user core. Feb 13 20:34:36.290451 systemd[1]: Started session-73.scope - Session 73 of User core. Feb 13 20:34:36.395634 sshd[3882]: pam_unix(sshd:session): session closed for user core Feb 13 20:34:36.398631 systemd[1]: sshd@72-10.0.0.6:22-10.0.0.1:55918.service: Deactivated successfully. Feb 13 20:34:36.401011 systemd[1]: session-73.scope: Deactivated successfully. Feb 13 20:34:36.402088 systemd-logind[1425]: Session 73 logged out. Waiting for processes to exit. Feb 13 20:34:36.403465 systemd-logind[1425]: Removed session 73. Feb 13 20:34:38.642499 kubelet[2437]: E0213 20:34:38.642465 2437 kubelet.go:3008] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Feb 13 20:34:41.407258 systemd[1]: Started sshd@73-10.0.0.6:22-10.0.0.1:55934.service - OpenSSH per-connection server daemon (10.0.0.1:55934). Feb 13 20:34:41.444490 sshd[3899]: Accepted publickey for core from 10.0.0.1 port 55934 ssh2: RSA SHA256:TdhA+b7AFfsR49yUYzSzEma9Q+UWNVnDR75LIS9Grbw Feb 13 20:34:41.445722 sshd[3899]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 20:34:41.449111 systemd-logind[1425]: New session 74 of user core. Feb 13 20:34:41.460436 systemd[1]: Started session-74.scope - Session 74 of User core. Feb 13 20:34:41.565330 sshd[3899]: pam_unix(sshd:session): session closed for user core Feb 13 20:34:41.568349 systemd[1]: sshd@73-10.0.0.6:22-10.0.0.1:55934.service: Deactivated successfully. Feb 13 20:34:41.570404 systemd[1]: session-74.scope: Deactivated successfully. Feb 13 20:34:41.570964 systemd-logind[1425]: Session 74 logged out. Waiting for processes to exit. 
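
The periodic `dns.go` "Nameserver limits exceeded" warnings are a separate, benign-but-noisy issue: the resolver configuration kubelet reads lists more than the three nameservers glibc supports, so the extras are silently dropped. A hedged sketch of one way to quiet it, keeping only the three servers the log says were applied; the drop-in path assumes the node runs systemd-resolved, which Flatcar does by default:

```sh
# Trim the configured nameservers to three so kubelet stops warning.
sudo mkdir -p /etc/systemd/resolved.conf.d
cat <<'EOF' | sudo tee /etc/systemd/resolved.conf.d/10-dns.conf
[Resolve]
DNS=1.1.1.1 1.0.0.1
FallbackDNS=8.8.8.8
EOF
sudo systemctl restart systemd-resolved
```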
Feb 13 20:34:41.571743 systemd-logind[1425]: Removed session 74. Feb 13 20:34:43.538845 kubelet[2437]: E0213 20:34:43.538798 2437 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 20:34:43.539775 kubelet[2437]: E0213 20:34:43.539729 2437 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"install-cni\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/flannel/flannel:v0.22.0\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/flannel/flannel:v0.22.0\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/flannel/flannel/manifests/sha256:de969d65cb6e570ce2ff6f069132707b6da0e6d2adb43116d43743e2c31fd773: 429 Too Many Requests - Server message: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit\"" pod="kube-flannel/kube-flannel-ds-sgxqm" podUID="7d8d9a1b-1c8a-4252-bae8-b1cf43294240" Feb 13 20:34:43.643249 kubelet[2437]: E0213 20:34:43.643219 2437 kubelet.go:3008] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Feb 13 20:34:46.576894 systemd[1]: Started sshd@74-10.0.0.6:22-10.0.0.1:38450.service - OpenSSH per-connection server daemon (10.0.0.1:38450). Feb 13 20:34:46.613684 sshd[3913]: Accepted publickey for core from 10.0.0.1 port 38450 ssh2: RSA SHA256:TdhA+b7AFfsR49yUYzSzEma9Q+UWNVnDR75LIS9Grbw Feb 13 20:34:46.614827 sshd[3913]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 20:34:46.618378 systemd-logind[1425]: New session 75 of user core. Feb 13 20:34:46.630426 systemd[1]: Started session-75.scope - Session 75 of User core. Feb 13 20:34:46.735465 sshd[3913]: pam_unix(sshd:session): session closed for user core Feb 13 20:34:46.738611 systemd[1]: sshd@74-10.0.0.6:22-10.0.0.1:38450.service: Deactivated successfully. Feb 13 20:34:46.740844 systemd[1]: session-75.scope: Deactivated successfully. Feb 13 20:34:46.741521 systemd-logind[1425]: Session 75 logged out. Waiting for processes to exit. Feb 13 20:34:46.742380 systemd-logind[1425]: Removed session 75. Feb 13 20:34:48.644135 kubelet[2437]: E0213 20:34:48.644088 2437 kubelet.go:3008] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Feb 13 20:34:51.747688 systemd[1]: Started sshd@75-10.0.0.6:22-10.0.0.1:38458.service - OpenSSH per-connection server daemon (10.0.0.1:38458). Feb 13 20:34:51.785129 sshd[3927]: Accepted publickey for core from 10.0.0.1 port 38458 ssh2: RSA SHA256:TdhA+b7AFfsR49yUYzSzEma9Q+UWNVnDR75LIS9Grbw Feb 13 20:34:51.786267 sshd[3927]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 20:34:51.789944 systemd-logind[1425]: New session 76 of user core. Feb 13 20:34:51.796451 systemd[1]: Started session-76.scope - Session 76 of User core. Feb 13 20:34:51.903605 sshd[3927]: pam_unix(sshd:session): session closed for user core Feb 13 20:34:51.906655 systemd[1]: sshd@75-10.0.0.6:22-10.0.0.1:38458.service: Deactivated successfully. Feb 13 20:34:51.908240 systemd[1]: session-76.scope: Deactivated successfully. Feb 13 20:34:51.908895 systemd-logind[1425]: Session 76 logged out. 
Waiting for processes to exit. Feb 13 20:34:51.909653 systemd-logind[1425]: Removed session 76. Feb 13 20:34:52.537780 kubelet[2437]: E0213 20:34:52.537736 2437 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 20:34:53.645468 kubelet[2437]: E0213 20:34:53.645377 2437 kubelet.go:3008] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Feb 13 20:34:56.537876 kubelet[2437]: E0213 20:34:56.537816 2437 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 20:34:56.538571 kubelet[2437]: E0213 20:34:56.538532 2437 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"install-cni\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/flannel/flannel:v0.22.0\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/flannel/flannel:v0.22.0\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/flannel/flannel/manifests/sha256:de969d65cb6e570ce2ff6f069132707b6da0e6d2adb43116d43743e2c31fd773: 429 Too Many Requests - Server message: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit\"" pod="kube-flannel/kube-flannel-ds-sgxqm" podUID="7d8d9a1b-1c8a-4252-bae8-b1cf43294240" Feb 13 20:34:56.914711 systemd[1]: Started sshd@76-10.0.0.6:22-10.0.0.1:39644.service - OpenSSH per-connection server daemon (10.0.0.1:39644). Feb 13 20:34:56.952012 sshd[3943]: Accepted publickey for core from 10.0.0.1 port 39644 ssh2: RSA SHA256:TdhA+b7AFfsR49yUYzSzEma9Q+UWNVnDR75LIS9Grbw Feb 13 20:34:56.953215 sshd[3943]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 20:34:56.957089 systemd-logind[1425]: New session 77 of user core. Feb 13 20:34:56.971442 systemd[1]: Started session-77.scope - Session 77 of User core. Feb 13 20:34:57.078521 sshd[3943]: pam_unix(sshd:session): session closed for user core Feb 13 20:34:57.081636 systemd[1]: sshd@76-10.0.0.6:22-10.0.0.1:39644.service: Deactivated successfully. Feb 13 20:34:57.083810 systemd[1]: session-77.scope: Deactivated successfully. Feb 13 20:34:57.084502 systemd-logind[1425]: Session 77 logged out. Waiting for processes to exit. Feb 13 20:34:57.085603 systemd-logind[1425]: Removed session 77. Feb 13 20:34:58.647068 kubelet[2437]: E0213 20:34:58.647024 2437 kubelet.go:3008] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Feb 13 20:35:02.088906 systemd[1]: Started sshd@77-10.0.0.6:22-10.0.0.1:39652.service - OpenSSH per-connection server daemon (10.0.0.1:39652). Feb 13 20:35:02.127039 sshd[3958]: Accepted publickey for core from 10.0.0.1 port 39652 ssh2: RSA SHA256:TdhA+b7AFfsR49yUYzSzEma9Q+UWNVnDR75LIS9Grbw Feb 13 20:35:02.128278 sshd[3958]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 20:35:02.132595 systemd-logind[1425]: New session 78 of user core. Feb 13 20:35:02.142447 systemd[1]: Started session-78.scope - Session 78 of User core. 
Feb 13 20:35:02.249717 sshd[3958]: pam_unix(sshd:session): session closed for user core Feb 13 20:35:02.258680 systemd[1]: sshd@77-10.0.0.6:22-10.0.0.1:39652.service: Deactivated successfully. Feb 13 20:35:02.261659 systemd[1]: session-78.scope: Deactivated successfully. Feb 13 20:35:02.262860 systemd-logind[1425]: Session 78 logged out. Waiting for processes to exit. Feb 13 20:35:02.272627 systemd[1]: Started sshd@78-10.0.0.6:22-10.0.0.1:39666.service - OpenSSH per-connection server daemon (10.0.0.1:39666). Feb 13 20:35:02.273487 systemd-logind[1425]: Removed session 78. Feb 13 20:35:02.305616 sshd[3973]: Accepted publickey for core from 10.0.0.1 port 39666 ssh2: RSA SHA256:TdhA+b7AFfsR49yUYzSzEma9Q+UWNVnDR75LIS9Grbw Feb 13 20:35:02.306861 sshd[3973]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 20:35:02.310530 systemd-logind[1425]: New session 79 of user core. Feb 13 20:35:02.320418 systemd[1]: Started session-79.scope - Session 79 of User core. Feb 13 20:35:02.484185 sshd[3973]: pam_unix(sshd:session): session closed for user core Feb 13 20:35:02.492533 systemd[1]: sshd@78-10.0.0.6:22-10.0.0.1:39666.service: Deactivated successfully. Feb 13 20:35:02.495552 systemd[1]: session-79.scope: Deactivated successfully. Feb 13 20:35:02.496632 systemd-logind[1425]: Session 79 logged out. Waiting for processes to exit. Feb 13 20:35:02.497671 systemd[1]: Started sshd@79-10.0.0.6:22-10.0.0.1:43544.service - OpenSSH per-connection server daemon (10.0.0.1:43544). Feb 13 20:35:02.498380 systemd-logind[1425]: Removed session 79. Feb 13 20:35:02.535469 sshd[3987]: Accepted publickey for core from 10.0.0.1 port 43544 ssh2: RSA SHA256:TdhA+b7AFfsR49yUYzSzEma9Q+UWNVnDR75LIS9Grbw Feb 13 20:35:02.536649 sshd[3987]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 20:35:02.540760 systemd-logind[1425]: New session 80 of user core. Feb 13 20:35:02.546488 systemd[1]: Started session-80.scope - Session 80 of User core. Feb 13 20:35:03.124894 sshd[3987]: pam_unix(sshd:session): session closed for user core Feb 13 20:35:03.132928 systemd[1]: sshd@79-10.0.0.6:22-10.0.0.1:43544.service: Deactivated successfully. Feb 13 20:35:03.137409 systemd[1]: session-80.scope: Deactivated successfully. Feb 13 20:35:03.141634 systemd-logind[1425]: Session 80 logged out. Waiting for processes to exit. Feb 13 20:35:03.147571 systemd[1]: Started sshd@80-10.0.0.6:22-10.0.0.1:43550.service - OpenSSH per-connection server daemon (10.0.0.1:43550). Feb 13 20:35:03.148568 systemd-logind[1425]: Removed session 80. Feb 13 20:35:03.182427 sshd[4010]: Accepted publickey for core from 10.0.0.1 port 43550 ssh2: RSA SHA256:TdhA+b7AFfsR49yUYzSzEma9Q+UWNVnDR75LIS9Grbw Feb 13 20:35:03.183577 sshd[4010]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 20:35:03.186968 systemd-logind[1425]: New session 81 of user core. Feb 13 20:35:03.201423 systemd[1]: Started session-81.scope - Session 81 of User core. Feb 13 20:35:03.403177 sshd[4010]: pam_unix(sshd:session): session closed for user core Feb 13 20:35:03.413040 systemd[1]: sshd@80-10.0.0.6:22-10.0.0.1:43550.service: Deactivated successfully. Feb 13 20:35:03.414496 systemd[1]: session-81.scope: Deactivated successfully. Feb 13 20:35:03.415787 systemd-logind[1425]: Session 81 logged out. Waiting for processes to exit. Feb 13 20:35:03.427648 systemd[1]: Started sshd@81-10.0.0.6:22-10.0.0.1:43564.service - OpenSSH per-connection server daemon (10.0.0.1:43564). 
Feb 13 20:35:03.429004 systemd-logind[1425]: Removed session 81. Feb 13 20:35:03.461550 sshd[4023]: Accepted publickey for core from 10.0.0.1 port 43564 ssh2: RSA SHA256:TdhA+b7AFfsR49yUYzSzEma9Q+UWNVnDR75LIS9Grbw Feb 13 20:35:03.462865 sshd[4023]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 20:35:03.466577 systemd-logind[1425]: New session 82 of user core. Feb 13 20:35:03.477515 systemd[1]: Started session-82.scope - Session 82 of User core. Feb 13 20:35:03.539113 kubelet[2437]: E0213 20:35:03.538506 2437 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 20:35:03.588529 sshd[4023]: pam_unix(sshd:session): session closed for user core Feb 13 20:35:03.591731 systemd[1]: sshd@81-10.0.0.6:22-10.0.0.1:43564.service: Deactivated successfully. Feb 13 20:35:03.593457 systemd[1]: session-82.scope: Deactivated successfully. Feb 13 20:35:03.595903 systemd-logind[1425]: Session 82 logged out. Waiting for processes to exit. Feb 13 20:35:03.596659 systemd-logind[1425]: Removed session 82. Feb 13 20:35:03.648234 kubelet[2437]: E0213 20:35:03.648166 2437 kubelet.go:3008] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Feb 13 20:35:07.538828 kubelet[2437]: E0213 20:35:07.538786 2437 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 20:35:07.539867 kubelet[2437]: E0213 20:35:07.539671 2437 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"install-cni\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/flannel/flannel:v0.22.0\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/flannel/flannel:v0.22.0\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/flannel/flannel/manifests/sha256:de969d65cb6e570ce2ff6f069132707b6da0e6d2adb43116d43743e2c31fd773: 429 Too Many Requests - Server message: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit\"" pod="kube-flannel/kube-flannel-ds-sgxqm" podUID="7d8d9a1b-1c8a-4252-bae8-b1cf43294240" Feb 13 20:35:08.537796 kubelet[2437]: E0213 20:35:08.537753 2437 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 20:35:08.599722 systemd[1]: Started sshd@82-10.0.0.6:22-10.0.0.1:43576.service - OpenSSH per-connection server daemon (10.0.0.1:43576). Feb 13 20:35:08.637945 sshd[4037]: Accepted publickey for core from 10.0.0.1 port 43576 ssh2: RSA SHA256:TdhA+b7AFfsR49yUYzSzEma9Q+UWNVnDR75LIS9Grbw Feb 13 20:35:08.639113 sshd[4037]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 20:35:08.642956 systemd-logind[1425]: New session 83 of user core. Feb 13 20:35:08.649154 kubelet[2437]: E0213 20:35:08.649116 2437 kubelet.go:3008] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Feb 13 20:35:08.650433 systemd[1]: Started session-83.scope - Session 83 of User core. 
Feb 13 20:35:08.756217 sshd[4037]: pam_unix(sshd:session): session closed for user core Feb 13 20:35:08.758683 systemd[1]: sshd@82-10.0.0.6:22-10.0.0.1:43576.service: Deactivated successfully. Feb 13 20:35:08.760284 systemd[1]: session-83.scope: Deactivated successfully. Feb 13 20:35:08.762549 systemd-logind[1425]: Session 83 logged out. Waiting for processes to exit. Feb 13 20:35:08.763345 systemd-logind[1425]: Removed session 83. Feb 13 20:35:13.649969 kubelet[2437]: E0213 20:35:13.649934 2437 kubelet.go:3008] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Feb 13 20:35:13.770844 systemd[1]: Started sshd@83-10.0.0.6:22-10.0.0.1:47376.service - OpenSSH per-connection server daemon (10.0.0.1:47376). Feb 13 20:35:13.808217 sshd[4054]: Accepted publickey for core from 10.0.0.1 port 47376 ssh2: RSA SHA256:TdhA+b7AFfsR49yUYzSzEma9Q+UWNVnDR75LIS9Grbw Feb 13 20:35:13.809420 sshd[4054]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 20:35:13.813260 systemd-logind[1425]: New session 84 of user core. Feb 13 20:35:13.820456 systemd[1]: Started session-84.scope - Session 84 of User core. Feb 13 20:35:13.925944 sshd[4054]: pam_unix(sshd:session): session closed for user core Feb 13 20:35:13.929461 systemd[1]: sshd@83-10.0.0.6:22-10.0.0.1:47376.service: Deactivated successfully. Feb 13 20:35:13.932124 systemd[1]: session-84.scope: Deactivated successfully. Feb 13 20:35:13.934075 systemd-logind[1425]: Session 84 logged out. Waiting for processes to exit. Feb 13 20:35:13.934920 systemd-logind[1425]: Removed session 84. Feb 13 20:35:18.650931 kubelet[2437]: E0213 20:35:18.650880 2437 kubelet.go:3008] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Feb 13 20:35:18.936813 systemd[1]: Started sshd@84-10.0.0.6:22-10.0.0.1:47378.service - OpenSSH per-connection server daemon (10.0.0.1:47378). Feb 13 20:35:18.974078 sshd[4068]: Accepted publickey for core from 10.0.0.1 port 47378 ssh2: RSA SHA256:TdhA+b7AFfsR49yUYzSzEma9Q+UWNVnDR75LIS9Grbw Feb 13 20:35:18.975272 sshd[4068]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 20:35:18.979078 systemd-logind[1425]: New session 85 of user core. Feb 13 20:35:18.985460 systemd[1]: Started session-85.scope - Session 85 of User core. Feb 13 20:35:19.089588 sshd[4068]: pam_unix(sshd:session): session closed for user core Feb 13 20:35:19.092566 systemd[1]: sshd@84-10.0.0.6:22-10.0.0.1:47378.service: Deactivated successfully. Feb 13 20:35:19.094164 systemd[1]: session-85.scope: Deactivated successfully. Feb 13 20:35:19.094747 systemd-logind[1425]: Session 85 logged out. Waiting for processes to exit. Feb 13 20:35:19.095580 systemd-logind[1425]: Removed session 85. 
Feb 13 20:35:21.537889 kubelet[2437]: E0213 20:35:21.537849 2437 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 20:35:21.538651 kubelet[2437]: E0213 20:35:21.538598 2437 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"install-cni\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/flannel/flannel:v0.22.0\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/flannel/flannel:v0.22.0\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/flannel/flannel/manifests/sha256:de969d65cb6e570ce2ff6f069132707b6da0e6d2adb43116d43743e2c31fd773: 429 Too Many Requests - Server message: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit\"" pod="kube-flannel/kube-flannel-ds-sgxqm" podUID="7d8d9a1b-1c8a-4252-bae8-b1cf43294240" Feb 13 20:35:23.651831 kubelet[2437]: E0213 20:35:23.651786 2437 kubelet.go:3008] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Feb 13 20:35:24.103801 systemd[1]: Started sshd@85-10.0.0.6:22-10.0.0.1:50816.service - OpenSSH per-connection server daemon (10.0.0.1:50816). Feb 13 20:35:24.141129 sshd[4083]: Accepted publickey for core from 10.0.0.1 port 50816 ssh2: RSA SHA256:TdhA+b7AFfsR49yUYzSzEma9Q+UWNVnDR75LIS9Grbw Feb 13 20:35:24.142432 sshd[4083]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 20:35:24.146071 systemd-logind[1425]: New session 86 of user core. Feb 13 20:35:24.153446 systemd[1]: Started session-86.scope - Session 86 of User core. Feb 13 20:35:24.257252 sshd[4083]: pam_unix(sshd:session): session closed for user core Feb 13 20:35:24.260386 systemd[1]: sshd@85-10.0.0.6:22-10.0.0.1:50816.service: Deactivated successfully. Feb 13 20:35:24.262673 systemd[1]: session-86.scope: Deactivated successfully. Feb 13 20:35:24.263351 systemd-logind[1425]: Session 86 logged out. Waiting for processes to exit. Feb 13 20:35:24.264048 systemd-logind[1425]: Removed session 86. Feb 13 20:35:28.652679 kubelet[2437]: E0213 20:35:28.652627 2437 kubelet.go:3008] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Feb 13 20:35:29.271765 systemd[1]: Started sshd@86-10.0.0.6:22-10.0.0.1:50832.service - OpenSSH per-connection server daemon (10.0.0.1:50832). Feb 13 20:35:29.308892 sshd[4097]: Accepted publickey for core from 10.0.0.1 port 50832 ssh2: RSA SHA256:TdhA+b7AFfsR49yUYzSzEma9Q+UWNVnDR75LIS9Grbw Feb 13 20:35:29.310110 sshd[4097]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 20:35:29.313750 systemd-logind[1425]: New session 87 of user core. Feb 13 20:35:29.326518 systemd[1]: Started session-87.scope - Session 87 of User core. Feb 13 20:35:29.431112 sshd[4097]: pam_unix(sshd:session): session closed for user core Feb 13 20:35:29.434222 systemd[1]: sshd@86-10.0.0.6:22-10.0.0.1:50832.service: Deactivated successfully. Feb 13 20:35:29.436327 systemd[1]: session-87.scope: Deactivated successfully. Feb 13 20:35:29.436933 systemd-logind[1425]: Session 87 logged out. Waiting for processes to exit. 
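
At this point the node has been in the same back-off loop for over an hour, which suggests the quota window is being re-exhausted by other pulls from the same address. A diagnostic sketch (assumes `curl` and `jq` are present on the host) that asks Docker Hub for the anonymous quota headers on the same repository, to see the limit and remaining count without guessing:

```sh
# Fetch an anonymous pull token for flannel/flannel, then read the rate-limit
# headers from a manifest HEAD request.
TOKEN=$(curl -sSL "https://auth.docker.io/token?service=registry.docker.io&scope=repository:flannel/flannel:pull" | jq -r .token)
curl -sSI -H "Authorization: Bearer $TOKEN" \
  "https://registry-1.docker.io/v2/flannel/flannel/manifests/v0.22.0" \
  | grep -i '^ratelimit'
```

Typical output is of the form `ratelimit-limit: 100;w=21600` and `ratelimit-remaining: 0;w=21600`, i.e. pulls per six-hour window; a remaining count of zero matches the 429s in the log.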
Feb 13 20:35:29.437976 systemd-logind[1425]: Removed session 87. Feb 13 20:35:33.653679 kubelet[2437]: E0213 20:35:33.653520 2437 kubelet.go:3008] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Feb 13 20:35:34.441839 systemd[1]: Started sshd@87-10.0.0.6:22-10.0.0.1:42570.service - OpenSSH per-connection server daemon (10.0.0.1:42570). Feb 13 20:35:34.479474 sshd[4113]: Accepted publickey for core from 10.0.0.1 port 42570 ssh2: RSA SHA256:TdhA+b7AFfsR49yUYzSzEma9Q+UWNVnDR75LIS9Grbw Feb 13 20:35:34.480646 sshd[4113]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 20:35:34.484031 systemd-logind[1425]: New session 88 of user core. Feb 13 20:35:34.495455 systemd[1]: Started session-88.scope - Session 88 of User core. Feb 13 20:35:34.603040 sshd[4113]: pam_unix(sshd:session): session closed for user core Feb 13 20:35:34.606259 systemd[1]: sshd@87-10.0.0.6:22-10.0.0.1:42570.service: Deactivated successfully. Feb 13 20:35:34.607831 systemd[1]: session-88.scope: Deactivated successfully. Feb 13 20:35:34.610024 systemd-logind[1425]: Session 88 logged out. Waiting for processes to exit. Feb 13 20:35:34.610881 systemd-logind[1425]: Removed session 88. Feb 13 20:35:36.537864 kubelet[2437]: E0213 20:35:36.537818 2437 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 20:35:36.538555 kubelet[2437]: E0213 20:35:36.538515 2437 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"install-cni\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/flannel/flannel:v0.22.0\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/flannel/flannel:v0.22.0\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/flannel/flannel/manifests/sha256:de969d65cb6e570ce2ff6f069132707b6da0e6d2adb43116d43743e2c31fd773: 429 Too Many Requests - Server message: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit\"" pod="kube-flannel/kube-flannel-ds-sgxqm" podUID="7d8d9a1b-1c8a-4252-bae8-b1cf43294240" Feb 13 20:35:38.654488 kubelet[2437]: E0213 20:35:38.654443 2437 kubelet.go:3008] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Feb 13 20:35:39.616862 systemd[1]: Started sshd@88-10.0.0.6:22-10.0.0.1:42582.service - OpenSSH per-connection server daemon (10.0.0.1:42582). Feb 13 20:35:39.654725 sshd[4129]: Accepted publickey for core from 10.0.0.1 port 42582 ssh2: RSA SHA256:TdhA+b7AFfsR49yUYzSzEma9Q+UWNVnDR75LIS9Grbw Feb 13 20:35:39.655930 sshd[4129]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 20:35:39.659464 systemd-logind[1425]: New session 89 of user core. Feb 13 20:35:39.668451 systemd[1]: Started session-89.scope - Session 89 of User core. Feb 13 20:35:39.773442 sshd[4129]: pam_unix(sshd:session): session closed for user core Feb 13 20:35:39.776535 systemd[1]: sshd@88-10.0.0.6:22-10.0.0.1:42582.service: Deactivated successfully. Feb 13 20:35:39.778719 systemd[1]: session-89.scope: Deactivated successfully. Feb 13 20:35:39.779328 systemd-logind[1425]: Session 89 logged out. 
Waiting for processes to exit. Feb 13 20:35:39.780515 systemd-logind[1425]: Removed session 89. Feb 13 20:35:43.656029 kubelet[2437]: E0213 20:35:43.655969 2437 kubelet.go:3008] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Feb 13 20:35:44.788870 systemd[1]: Started sshd@89-10.0.0.6:22-10.0.0.1:59100.service - OpenSSH per-connection server daemon (10.0.0.1:59100). Feb 13 20:35:44.826050 sshd[4145]: Accepted publickey for core from 10.0.0.1 port 59100 ssh2: RSA SHA256:TdhA+b7AFfsR49yUYzSzEma9Q+UWNVnDR75LIS9Grbw Feb 13 20:35:44.827402 sshd[4145]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 20:35:44.831521 systemd-logind[1425]: New session 90 of user core. Feb 13 20:35:44.841489 systemd[1]: Started session-90.scope - Session 90 of User core. Feb 13 20:35:44.946980 sshd[4145]: pam_unix(sshd:session): session closed for user core Feb 13 20:35:44.950005 systemd[1]: sshd@89-10.0.0.6:22-10.0.0.1:59100.service: Deactivated successfully. Feb 13 20:35:44.951617 systemd[1]: session-90.scope: Deactivated successfully. Feb 13 20:35:44.952219 systemd-logind[1425]: Session 90 logged out. Waiting for processes to exit. Feb 13 20:35:44.952974 systemd-logind[1425]: Removed session 90. Feb 13 20:35:48.657123 kubelet[2437]: E0213 20:35:48.657087 2437 kubelet.go:3008] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Feb 13 20:35:49.960845 systemd[1]: Started sshd@90-10.0.0.6:22-10.0.0.1:59102.service - OpenSSH per-connection server daemon (10.0.0.1:59102). Feb 13 20:35:49.998358 sshd[4159]: Accepted publickey for core from 10.0.0.1 port 59102 ssh2: RSA SHA256:TdhA+b7AFfsR49yUYzSzEma9Q+UWNVnDR75LIS9Grbw Feb 13 20:35:49.999536 sshd[4159]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 20:35:50.003410 systemd-logind[1425]: New session 91 of user core. Feb 13 20:35:50.009438 systemd[1]: Started session-91.scope - Session 91 of User core. Feb 13 20:35:50.114406 sshd[4159]: pam_unix(sshd:session): session closed for user core Feb 13 20:35:50.117511 systemd[1]: sshd@90-10.0.0.6:22-10.0.0.1:59102.service: Deactivated successfully. Feb 13 20:35:50.119176 systemd[1]: session-91.scope: Deactivated successfully. Feb 13 20:35:50.119759 systemd-logind[1425]: Session 91 logged out. Waiting for processes to exit. Feb 13 20:35:50.120525 systemd-logind[1425]: Removed session 91. Feb 13 20:35:51.537800 kubelet[2437]: E0213 20:35:51.537751 2437 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 20:35:51.538572 kubelet[2437]: E0213 20:35:51.538542 2437 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"install-cni\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/flannel/flannel:v0.22.0\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/flannel/flannel:v0.22.0\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/flannel/flannel/manifests/sha256:de969d65cb6e570ce2ff6f069132707b6da0e6d2adb43116d43743e2c31fd773: 429 Too Many Requests - Server message: toomanyrequests: You have reached your pull rate limit. 
You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit\"" pod="kube-flannel/kube-flannel-ds-sgxqm" podUID="7d8d9a1b-1c8a-4252-bae8-b1cf43294240" Feb 13 20:35:53.658423 kubelet[2437]: E0213 20:35:53.658374 2437 kubelet.go:3008] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Feb 13 20:35:55.124815 systemd[1]: Started sshd@91-10.0.0.6:22-10.0.0.1:37182.service - OpenSSH per-connection server daemon (10.0.0.1:37182). Feb 13 20:35:55.162599 sshd[4173]: Accepted publickey for core from 10.0.0.1 port 37182 ssh2: RSA SHA256:TdhA+b7AFfsR49yUYzSzEma9Q+UWNVnDR75LIS9Grbw Feb 13 20:35:55.163749 sshd[4173]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 20:35:55.167371 systemd-logind[1425]: New session 92 of user core. Feb 13 20:35:55.176432 systemd[1]: Started session-92.scope - Session 92 of User core. Feb 13 20:35:55.281691 sshd[4173]: pam_unix(sshd:session): session closed for user core Feb 13 20:35:55.284758 systemd[1]: sshd@91-10.0.0.6:22-10.0.0.1:37182.service: Deactivated successfully. Feb 13 20:35:55.286344 systemd[1]: session-92.scope: Deactivated successfully. Feb 13 20:35:55.286924 systemd-logind[1425]: Session 92 logged out. Waiting for processes to exit. Feb 13 20:35:55.287658 systemd-logind[1425]: Removed session 92. Feb 13 20:35:58.659954 kubelet[2437]: E0213 20:35:58.659917 2437 kubelet.go:3008] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Feb 13 20:36:00.290830 systemd[1]: Started sshd@92-10.0.0.6:22-10.0.0.1:37186.service - OpenSSH per-connection server daemon (10.0.0.1:37186). Feb 13 20:36:00.328308 sshd[4188]: Accepted publickey for core from 10.0.0.1 port 37186 ssh2: RSA SHA256:TdhA+b7AFfsR49yUYzSzEma9Q+UWNVnDR75LIS9Grbw Feb 13 20:36:00.329509 sshd[4188]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 20:36:00.332847 systemd-logind[1425]: New session 93 of user core. Feb 13 20:36:00.342520 systemd[1]: Started session-93.scope - Session 93 of User core. Feb 13 20:36:00.447516 sshd[4188]: pam_unix(sshd:session): session closed for user core Feb 13 20:36:00.450690 systemd[1]: sshd@92-10.0.0.6:22-10.0.0.1:37186.service: Deactivated successfully. Feb 13 20:36:00.452966 systemd[1]: session-93.scope: Deactivated successfully. Feb 13 20:36:00.453783 systemd-logind[1425]: Session 93 logged out. Waiting for processes to exit. Feb 13 20:36:00.454606 systemd-logind[1425]: Removed session 93. Feb 13 20:36:03.538212 kubelet[2437]: E0213 20:36:03.538167 2437 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 20:36:03.661402 kubelet[2437]: E0213 20:36:03.661351 2437 kubelet.go:3008] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Feb 13 20:36:05.457789 systemd[1]: Started sshd@93-10.0.0.6:22-10.0.0.1:55992.service - OpenSSH per-connection server daemon (10.0.0.1:55992). 
Feb 13 20:36:05.495205 sshd[4203]: Accepted publickey for core from 10.0.0.1 port 55992 ssh2: RSA SHA256:TdhA+b7AFfsR49yUYzSzEma9Q+UWNVnDR75LIS9Grbw Feb 13 20:36:05.496473 sshd[4203]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 20:36:05.500148 systemd-logind[1425]: New session 94 of user core. Feb 13 20:36:05.509441 systemd[1]: Started session-94.scope - Session 94 of User core. Feb 13 20:36:05.538157 kubelet[2437]: E0213 20:36:05.537854 2437 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 20:36:05.538676 kubelet[2437]: E0213 20:36:05.538634 2437 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"install-cni\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/flannel/flannel:v0.22.0\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/flannel/flannel:v0.22.0\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/flannel/flannel/manifests/sha256:de969d65cb6e570ce2ff6f069132707b6da0e6d2adb43116d43743e2c31fd773: 429 Too Many Requests - Server message: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit\"" pod="kube-flannel/kube-flannel-ds-sgxqm" podUID="7d8d9a1b-1c8a-4252-bae8-b1cf43294240" Feb 13 20:36:05.616129 sshd[4203]: pam_unix(sshd:session): session closed for user core Feb 13 20:36:05.619768 systemd[1]: sshd@93-10.0.0.6:22-10.0.0.1:55992.service: Deactivated successfully. Feb 13 20:36:05.621827 systemd[1]: session-94.scope: Deactivated successfully. Feb 13 20:36:05.622380 systemd-logind[1425]: Session 94 logged out. Waiting for processes to exit. Feb 13 20:36:05.623105 systemd-logind[1425]: Removed session 94. Feb 13 20:36:08.662540 kubelet[2437]: E0213 20:36:08.662500 2437 kubelet.go:3008] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Feb 13 20:36:10.632707 systemd[1]: Started sshd@94-10.0.0.6:22-10.0.0.1:55998.service - OpenSSH per-connection server daemon (10.0.0.1:55998). Feb 13 20:36:10.669796 sshd[4221]: Accepted publickey for core from 10.0.0.1 port 55998 ssh2: RSA SHA256:TdhA+b7AFfsR49yUYzSzEma9Q+UWNVnDR75LIS9Grbw Feb 13 20:36:10.670957 sshd[4221]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 20:36:10.674598 systemd-logind[1425]: New session 95 of user core. Feb 13 20:36:10.681497 systemd[1]: Started session-95.scope - Session 95 of User core. Feb 13 20:36:10.785027 sshd[4221]: pam_unix(sshd:session): session closed for user core Feb 13 20:36:10.788253 systemd[1]: sshd@94-10.0.0.6:22-10.0.0.1:55998.service: Deactivated successfully. Feb 13 20:36:10.789852 systemd[1]: session-95.scope: Deactivated successfully. Feb 13 20:36:10.790491 systemd-logind[1425]: Session 95 logged out. Waiting for processes to exit. Feb 13 20:36:10.791619 systemd-logind[1425]: Removed session 95. 
Feb 13 20:36:12.538503 kubelet[2437]: E0213 20:36:12.538470 2437 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 13 20:36:13.663276 kubelet[2437]: E0213 20:36:13.663228 2437 kubelet.go:3008] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Feb 13 20:36:15.537597 kubelet[2437]: E0213 20:36:15.537558 2437 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 13 20:36:15.795681 systemd[1]: Started sshd@95-10.0.0.6:22-10.0.0.1:36440.service - OpenSSH per-connection server daemon (10.0.0.1:36440).
Feb 13 20:36:15.832892 sshd[4236]: Accepted publickey for core from 10.0.0.1 port 36440 ssh2: RSA SHA256:TdhA+b7AFfsR49yUYzSzEma9Q+UWNVnDR75LIS9Grbw
Feb 13 20:36:15.834116 sshd[4236]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 20:36:15.837360 systemd-logind[1425]: New session 96 of user core.
Feb 13 20:36:15.852513 systemd[1]: Started session-96.scope - Session 96 of User core.
Feb 13 20:36:15.955889 sshd[4236]: pam_unix(sshd:session): session closed for user core
Feb 13 20:36:15.958943 systemd[1]: sshd@95-10.0.0.6:22-10.0.0.1:36440.service: Deactivated successfully.
Feb 13 20:36:15.960470 systemd[1]: session-96.scope: Deactivated successfully.
Feb 13 20:36:15.960978 systemd-logind[1425]: Session 96 logged out. Waiting for processes to exit.
Feb 13 20:36:15.961795 systemd-logind[1425]: Removed session 96.
Feb 13 20:36:16.537697 kubelet[2437]: E0213 20:36:16.537655 2437 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 13 20:36:16.538343 kubelet[2437]: E0213 20:36:16.538290 2437 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"install-cni\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/flannel/flannel:v0.22.0\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/flannel/flannel:v0.22.0\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/flannel/flannel/manifests/sha256:de969d65cb6e570ce2ff6f069132707b6da0e6d2adb43116d43743e2c31fd773: 429 Too Many Requests - Server message: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit\"" pod="kube-flannel/kube-flannel-ds-sgxqm" podUID="7d8d9a1b-1c8a-4252-bae8-b1cf43294240"
Feb 13 20:36:18.663915 kubelet[2437]: E0213 20:36:18.663868 2437 kubelet.go:3008] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Feb 13 20:36:20.966755 systemd[1]: Started sshd@96-10.0.0.6:22-10.0.0.1:36456.service - OpenSSH per-connection server daemon (10.0.0.1:36456).
Feb 13 20:36:21.004286 sshd[4250]: Accepted publickey for core from 10.0.0.1 port 36456 ssh2: RSA SHA256:TdhA+b7AFfsR49yUYzSzEma9Q+UWNVnDR75LIS9Grbw
Feb 13 20:36:21.005481 sshd[4250]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 20:36:21.009231 systemd-logind[1425]: New session 97 of user core.
Feb 13 20:36:21.015519 systemd[1]: Started session-97.scope - Session 97 of User core.
Feb 13 20:36:21.120234 sshd[4250]: pam_unix(sshd:session): session closed for user core
Feb 13 20:36:21.122738 systemd[1]: sshd@96-10.0.0.6:22-10.0.0.1:36456.service: Deactivated successfully.
Feb 13 20:36:21.124368 systemd[1]: session-97.scope: Deactivated successfully.
Feb 13 20:36:21.125549 systemd-logind[1425]: Session 97 logged out. Waiting for processes to exit.
Feb 13 20:36:21.126615 systemd-logind[1425]: Removed session 97.
Feb 13 20:36:23.538348 kubelet[2437]: E0213 20:36:23.538002 2437 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 13 20:36:23.664495 kubelet[2437]: E0213 20:36:23.664457 2437 kubelet.go:3008] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Feb 13 20:36:26.130917 systemd[1]: Started sshd@97-10.0.0.6:22-10.0.0.1:44744.service - OpenSSH per-connection server daemon (10.0.0.1:44744).
Feb 13 20:36:26.168681 sshd[4270]: Accepted publickey for core from 10.0.0.1 port 44744 ssh2: RSA SHA256:TdhA+b7AFfsR49yUYzSzEma9Q+UWNVnDR75LIS9Grbw
Feb 13 20:36:26.169862 sshd[4270]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 20:36:26.173355 systemd-logind[1425]: New session 98 of user core.
Feb 13 20:36:26.183456 systemd[1]: Started session-98.scope - Session 98 of User core.
Feb 13 20:36:26.288675 sshd[4270]: pam_unix(sshd:session): session closed for user core
Feb 13 20:36:26.291239 systemd-logind[1425]: Session 98 logged out. Waiting for processes to exit.
Feb 13 20:36:26.291516 systemd[1]: sshd@97-10.0.0.6:22-10.0.0.1:44744.service: Deactivated successfully.
Feb 13 20:36:26.293797 systemd[1]: session-98.scope: Deactivated successfully.
Feb 13 20:36:26.295987 systemd-logind[1425]: Removed session 98.
Feb 13 20:36:28.665308 kubelet[2437]: E0213 20:36:28.665260 2437 kubelet.go:3008] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Feb 13 20:36:29.540018 kubelet[2437]: E0213 20:36:29.539991 2437 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 13 20:36:29.540677 kubelet[2437]: E0213 20:36:29.540627 2437 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"install-cni\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/flannel/flannel:v0.22.0\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/flannel/flannel:v0.22.0\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/flannel/flannel/manifests/sha256:de969d65cb6e570ce2ff6f069132707b6da0e6d2adb43116d43743e2c31fd773: 429 Too Many Requests - Server message: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit\"" pod="kube-flannel/kube-flannel-ds-sgxqm" podUID="7d8d9a1b-1c8a-4252-bae8-b1cf43294240"
Feb 13 20:36:31.298866 systemd[1]: Started sshd@98-10.0.0.6:22-10.0.0.1:44754.service - OpenSSH per-connection server daemon (10.0.0.1:44754).
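Note: the kubelet.go:3008 "cni plugin not initialized" entries repeating every five seconds are downstream of the same pull failure: flannel's install-cni container is what writes the CNI config under /etc/cni/net.d, so until the image arrives the node's pod network stays NotReady. If the node itself cannot reach Docker Hub with credentials, one workaround is to sideload the image. A sketch, assuming a workstation with authenticated Docker Hub access and containerd on the node (k8s.io is the containerd namespace kubelet uses):

  # On a machine that can pull the image:
  docker pull docker.io/flannel/flannel:v0.22.0
  docker save docker.io/flannel/flannel:v0.22.0 -o flannel.tar

  # Copy flannel.tar to the node, then import it into containerd:
  ctr -n k8s.io images import flannel.tar

  # Confirm the CRI layer can now see it:
  crictl images | grep flannel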
Feb 13 20:36:31.336338 sshd[4284]: Accepted publickey for core from 10.0.0.1 port 44754 ssh2: RSA SHA256:TdhA+b7AFfsR49yUYzSzEma9Q+UWNVnDR75LIS9Grbw
Feb 13 20:36:31.337542 sshd[4284]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 20:36:31.340661 systemd-logind[1425]: New session 99 of user core.
Feb 13 20:36:31.350511 systemd[1]: Started session-99.scope - Session 99 of User core.
Feb 13 20:36:31.455712 sshd[4284]: pam_unix(sshd:session): session closed for user core
Feb 13 20:36:31.459485 systemd[1]: sshd@98-10.0.0.6:22-10.0.0.1:44754.service: Deactivated successfully.
Feb 13 20:36:31.461052 systemd[1]: session-99.scope: Deactivated successfully.
Feb 13 20:36:31.462749 systemd-logind[1425]: Session 99 logged out. Waiting for processes to exit.
Feb 13 20:36:31.463559 systemd-logind[1425]: Removed session 99.
Feb 13 20:36:33.666352 kubelet[2437]: E0213 20:36:33.666270 2437 kubelet.go:3008] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Feb 13 20:36:36.466900 systemd[1]: Started sshd@99-10.0.0.6:22-10.0.0.1:44524.service - OpenSSH per-connection server daemon (10.0.0.1:44524).
Feb 13 20:36:36.504314 sshd[4300]: Accepted publickey for core from 10.0.0.1 port 44524 ssh2: RSA SHA256:TdhA+b7AFfsR49yUYzSzEma9Q+UWNVnDR75LIS9Grbw
Feb 13 20:36:36.505505 sshd[4300]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 20:36:36.509462 systemd-logind[1425]: New session 100 of user core.
Feb 13 20:36:36.520496 systemd[1]: Started session-100.scope - Session 100 of User core.
Feb 13 20:36:36.625357 sshd[4300]: pam_unix(sshd:session): session closed for user core
Feb 13 20:36:36.628474 systemd[1]: sshd@99-10.0.0.6:22-10.0.0.1:44524.service: Deactivated successfully.
Feb 13 20:36:36.630533 systemd[1]: session-100.scope: Deactivated successfully.
Feb 13 20:36:36.631161 systemd-logind[1425]: Session 100 logged out. Waiting for processes to exit.
Feb 13 20:36:36.632105 systemd-logind[1425]: Removed session 100.
Feb 13 20:36:38.667323 kubelet[2437]: E0213 20:36:38.667245 2437 kubelet.go:3008] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Feb 13 20:36:40.537712 kubelet[2437]: E0213 20:36:40.537676 2437 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 13 20:36:40.538633 kubelet[2437]: E0213 20:36:40.538432 2437 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"install-cni\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/flannel/flannel:v0.22.0\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/flannel/flannel:v0.22.0\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/flannel/flannel/manifests/sha256:de969d65cb6e570ce2ff6f069132707b6da0e6d2adb43116d43743e2c31fd773: 429 Too Many Requests - Server message: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit\"" pod="kube-flannel/kube-flannel-ds-sgxqm" podUID="7d8d9a1b-1c8a-4252-bae8-b1cf43294240"
Feb 13 20:36:41.635642 systemd[1]: Started sshd@100-10.0.0.6:22-10.0.0.1:44526.service - OpenSSH per-connection server daemon (10.0.0.1:44526).
Feb 13 20:36:41.673390 sshd[4316]: Accepted publickey for core from 10.0.0.1 port 44526 ssh2: RSA SHA256:TdhA+b7AFfsR49yUYzSzEma9Q+UWNVnDR75LIS9Grbw
Feb 13 20:36:41.674573 sshd[4316]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 20:36:41.677892 systemd-logind[1425]: New session 101 of user core.
Feb 13 20:36:41.684430 systemd[1]: Started session-101.scope - Session 101 of User core.
Feb 13 20:36:41.790679 sshd[4316]: pam_unix(sshd:session): session closed for user core
Feb 13 20:36:41.793651 systemd[1]: sshd@100-10.0.0.6:22-10.0.0.1:44526.service: Deactivated successfully.
Feb 13 20:36:41.795846 systemd[1]: session-101.scope: Deactivated successfully.
Feb 13 20:36:41.796499 systemd-logind[1425]: Session 101 logged out. Waiting for processes to exit.
Feb 13 20:36:41.797411 systemd-logind[1425]: Removed session 101.
Feb 13 20:36:43.668338 kubelet[2437]: E0213 20:36:43.668280 2437 kubelet.go:3008] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Feb 13 20:36:46.801696 systemd[1]: Started sshd@101-10.0.0.6:22-10.0.0.1:48296.service - OpenSSH per-connection server daemon (10.0.0.1:48296).
Feb 13 20:36:46.839811 sshd[4330]: Accepted publickey for core from 10.0.0.1 port 48296 ssh2: RSA SHA256:TdhA+b7AFfsR49yUYzSzEma9Q+UWNVnDR75LIS9Grbw
Feb 13 20:36:46.840964 sshd[4330]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 20:36:46.844588 systemd-logind[1425]: New session 102 of user core.
Feb 13 20:36:46.852454 systemd[1]: Started session-102.scope - Session 102 of User core.
Feb 13 20:36:46.957240 sshd[4330]: pam_unix(sshd:session): session closed for user core
Feb 13 20:36:46.959493 systemd[1]: session-102.scope: Deactivated successfully.
Feb 13 20:36:46.960686 systemd-logind[1425]: Session 102 logged out. Waiting for processes to exit.
Feb 13 20:36:46.960850 systemd[1]: sshd@101-10.0.0.6:22-10.0.0.1:48296.service: Deactivated successfully.
Feb 13 20:36:46.963138 systemd-logind[1425]: Removed session 102.
Feb 13 20:36:48.669236 kubelet[2437]: E0213 20:36:48.669184 2437 kubelet.go:3008] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Feb 13 20:36:51.967863 systemd[1]: Started sshd@102-10.0.0.6:22-10.0.0.1:48298.service - OpenSSH per-connection server daemon (10.0.0.1:48298).
Feb 13 20:36:52.005254 sshd[4345]: Accepted publickey for core from 10.0.0.1 port 48298 ssh2: RSA SHA256:TdhA+b7AFfsR49yUYzSzEma9Q+UWNVnDR75LIS9Grbw
Feb 13 20:36:52.006477 sshd[4345]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 20:36:52.010005 systemd-logind[1425]: New session 103 of user core.
Feb 13 20:36:52.021463 systemd[1]: Started session-103.scope - Session 103 of User core.
Feb 13 20:36:52.126914 sshd[4345]: pam_unix(sshd:session): session closed for user core
Feb 13 20:36:52.130049 systemd[1]: sshd@102-10.0.0.6:22-10.0.0.1:48298.service: Deactivated successfully.
Feb 13 20:36:52.131647 systemd[1]: session-103.scope: Deactivated successfully.
Feb 13 20:36:52.132961 systemd-logind[1425]: Session 103 logged out. Waiting for processes to exit.
Feb 13 20:36:52.134252 systemd-logind[1425]: Removed session 103.
Feb 13 20:36:52.537948 kubelet[2437]: E0213 20:36:52.537907 2437 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 13 20:36:52.538667 kubelet[2437]: E0213 20:36:52.538622 2437 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"install-cni\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/flannel/flannel:v0.22.0\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/flannel/flannel:v0.22.0\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/flannel/flannel/manifests/sha256:de969d65cb6e570ce2ff6f069132707b6da0e6d2adb43116d43743e2c31fd773: 429 Too Many Requests - Server message: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit\"" pod="kube-flannel/kube-flannel-ds-sgxqm" podUID="7d8d9a1b-1c8a-4252-bae8-b1cf43294240"
Feb 13 20:36:53.670147 kubelet[2437]: E0213 20:36:53.670101 2437 kubelet.go:3008] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Feb 13 20:36:57.141233 systemd[1]: Started sshd@103-10.0.0.6:22-10.0.0.1:50686.service - OpenSSH per-connection server daemon (10.0.0.1:50686).
Feb 13 20:36:57.178359 sshd[4359]: Accepted publickey for core from 10.0.0.1 port 50686 ssh2: RSA SHA256:TdhA+b7AFfsR49yUYzSzEma9Q+UWNVnDR75LIS9Grbw
Feb 13 20:36:57.179562 sshd[4359]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 20:36:57.183197 systemd-logind[1425]: New session 104 of user core.
Feb 13 20:36:57.200503 systemd[1]: Started session-104.scope - Session 104 of User core.
Feb 13 20:36:57.305486 sshd[4359]: pam_unix(sshd:session): session closed for user core
Feb 13 20:36:57.308574 systemd[1]: sshd@103-10.0.0.6:22-10.0.0.1:50686.service: Deactivated successfully.
Feb 13 20:36:57.310351 systemd[1]: session-104.scope: Deactivated successfully.
Feb 13 20:36:57.311020 systemd-logind[1425]: Session 104 logged out. Waiting for processes to exit.
Feb 13 20:36:57.312274 systemd-logind[1425]: Removed session 104.
Feb 13 20:36:58.671845 kubelet[2437]: E0213 20:36:58.671800 2437 kubelet.go:3008] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Feb 13 20:37:02.321025 systemd[1]: Started sshd@104-10.0.0.6:22-10.0.0.1:50694.service - OpenSSH per-connection server daemon (10.0.0.1:50694).
Feb 13 20:37:02.358186 sshd[4375]: Accepted publickey for core from 10.0.0.1 port 50694 ssh2: RSA SHA256:TdhA+b7AFfsR49yUYzSzEma9Q+UWNVnDR75LIS9Grbw
Feb 13 20:37:02.359364 sshd[4375]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 20:37:02.362557 systemd-logind[1425]: New session 105 of user core.
Feb 13 20:37:02.379498 systemd[1]: Started session-105.scope - Session 105 of User core.
Feb 13 20:37:02.486515 sshd[4375]: pam_unix(sshd:session): session closed for user core
Feb 13 20:37:02.489214 systemd-logind[1425]: Session 105 logged out. Waiting for processes to exit.
Feb 13 20:37:02.489498 systemd[1]: sshd@104-10.0.0.6:22-10.0.0.1:50694.service: Deactivated successfully.
Feb 13 20:37:02.491915 systemd[1]: session-105.scope: Deactivated successfully.
Feb 13 20:37:02.493830 systemd-logind[1425]: Removed session 105.
Feb 13 20:37:03.672971 kubelet[2437]: E0213 20:37:03.672904 2437 kubelet.go:3008] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Feb 13 20:37:06.537961 kubelet[2437]: E0213 20:37:06.537902 2437 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 13 20:37:06.538860 kubelet[2437]: E0213 20:37:06.538818 2437 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"install-cni\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/flannel/flannel:v0.22.0\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/flannel/flannel:v0.22.0\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/flannel/flannel/manifests/sha256:de969d65cb6e570ce2ff6f069132707b6da0e6d2adb43116d43743e2c31fd773: 429 Too Many Requests - Server message: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit\"" pod="kube-flannel/kube-flannel-ds-sgxqm" podUID="7d8d9a1b-1c8a-4252-bae8-b1cf43294240"
Feb 13 20:37:07.500650 systemd[1]: Started sshd@105-10.0.0.6:22-10.0.0.1:35424.service - OpenSSH per-connection server daemon (10.0.0.1:35424).
Feb 13 20:37:07.538750 sshd[4390]: Accepted publickey for core from 10.0.0.1 port 35424 ssh2: RSA SHA256:TdhA+b7AFfsR49yUYzSzEma9Q+UWNVnDR75LIS9Grbw
Feb 13 20:37:07.540799 sshd[4390]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 20:37:07.544369 systemd-logind[1425]: New session 106 of user core.
Feb 13 20:37:07.558436 systemd[1]: Started session-106.scope - Session 106 of User core.
Feb 13 20:37:07.662946 sshd[4390]: pam_unix(sshd:session): session closed for user core
Feb 13 20:37:07.666171 systemd[1]: sshd@105-10.0.0.6:22-10.0.0.1:35424.service: Deactivated successfully.
Feb 13 20:37:07.668818 systemd[1]: session-106.scope: Deactivated successfully.
Feb 13 20:37:07.669928 systemd-logind[1425]: Session 106 logged out. Waiting for processes to exit.
Feb 13 20:37:07.670776 systemd-logind[1425]: Removed session 106.
Feb 13 20:37:08.674430 kubelet[2437]: E0213 20:37:08.674393 2437 kubelet.go:3008] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Feb 13 20:37:12.673894 systemd[1]: Started sshd@106-10.0.0.6:22-10.0.0.1:60006.service - OpenSSH per-connection server daemon (10.0.0.1:60006).
Feb 13 20:37:12.711237 sshd[4407]: Accepted publickey for core from 10.0.0.1 port 60006 ssh2: RSA SHA256:TdhA+b7AFfsR49yUYzSzEma9Q+UWNVnDR75LIS9Grbw
Feb 13 20:37:12.712396 sshd[4407]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 20:37:12.715613 systemd-logind[1425]: New session 107 of user core.
Feb 13 20:37:12.726506 systemd[1]: Started session-107.scope - Session 107 of User core.
Feb 13 20:37:12.832469 sshd[4407]: pam_unix(sshd:session): session closed for user core
Feb 13 20:37:12.835495 systemd[1]: sshd@106-10.0.0.6:22-10.0.0.1:60006.service: Deactivated successfully.
Feb 13 20:37:12.837686 systemd[1]: session-107.scope: Deactivated successfully.
Feb 13 20:37:12.838219 systemd-logind[1425]: Session 107 logged out. Waiting for processes to exit.
Feb 13 20:37:12.839260 systemd-logind[1425]: Removed session 107.
Feb 13 20:37:13.675998 kubelet[2437]: E0213 20:37:13.675956 2437 kubelet.go:3008] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Feb 13 20:37:17.842772 systemd[1]: Started sshd@107-10.0.0.6:22-10.0.0.1:60008.service - OpenSSH per-connection server daemon (10.0.0.1:60008).
Feb 13 20:37:17.880210 sshd[4422]: Accepted publickey for core from 10.0.0.1 port 60008 ssh2: RSA SHA256:TdhA+b7AFfsR49yUYzSzEma9Q+UWNVnDR75LIS9Grbw
Feb 13 20:37:17.881405 sshd[4422]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 20:37:17.884755 systemd-logind[1425]: New session 108 of user core.
Feb 13 20:37:17.895426 systemd[1]: Started session-108.scope - Session 108 of User core.
Feb 13 20:37:17.999423 sshd[4422]: pam_unix(sshd:session): session closed for user core
Feb 13 20:37:18.002376 systemd-logind[1425]: Session 108 logged out. Waiting for processes to exit.
Feb 13 20:37:18.002631 systemd[1]: sshd@107-10.0.0.6:22-10.0.0.1:60008.service: Deactivated successfully.
Feb 13 20:37:18.004042 systemd[1]: session-108.scope: Deactivated successfully.
Feb 13 20:37:18.004812 systemd-logind[1425]: Removed session 108.
Feb 13 20:37:18.677109 kubelet[2437]: E0213 20:37:18.677052 2437 kubelet.go:3008] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Feb 13 20:37:20.537634 kubelet[2437]: E0213 20:37:20.537506 2437 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 13 20:37:20.538111 kubelet[2437]: E0213 20:37:20.538064 2437 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"install-cni\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/flannel/flannel:v0.22.0\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/flannel/flannel:v0.22.0\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/flannel/flannel/manifests/sha256:de969d65cb6e570ce2ff6f069132707b6da0e6d2adb43116d43743e2c31fd773: 429 Too Many Requests - Server message: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit\"" pod="kube-flannel/kube-flannel-ds-sgxqm" podUID="7d8d9a1b-1c8a-4252-bae8-b1cf43294240"
Feb 13 20:37:23.009804 systemd[1]: Started sshd@108-10.0.0.6:22-10.0.0.1:60066.service - OpenSSH per-connection server daemon (10.0.0.1:60066).
Feb 13 20:37:23.047599 sshd[4436]: Accepted publickey for core from 10.0.0.1 port 60066 ssh2: RSA SHA256:TdhA+b7AFfsR49yUYzSzEma9Q+UWNVnDR75LIS9Grbw
Feb 13 20:37:23.048752 sshd[4436]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 20:37:23.053245 systemd-logind[1425]: New session 109 of user core.
Feb 13 20:37:23.063422 systemd[1]: Started session-109.scope - Session 109 of User core.
Feb 13 20:37:23.170433 sshd[4436]: pam_unix(sshd:session): session closed for user core
Feb 13 20:37:23.173669 systemd[1]: sshd@108-10.0.0.6:22-10.0.0.1:60066.service: Deactivated successfully.
Feb 13 20:37:23.175119 systemd[1]: session-109.scope: Deactivated successfully.
Feb 13 20:37:23.176779 systemd-logind[1425]: Session 109 logged out. Waiting for processes to exit.
Feb 13 20:37:23.177538 systemd-logind[1425]: Removed session 109.
Feb 13 20:37:23.678446 kubelet[2437]: E0213 20:37:23.678408 2437 kubelet.go:3008] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Feb 13 20:37:25.538136 kubelet[2437]: E0213 20:37:25.537786 2437 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 13 20:37:28.184769 systemd[1]: Started sshd@109-10.0.0.6:22-10.0.0.1:60078.service - OpenSSH per-connection server daemon (10.0.0.1:60078).
Feb 13 20:37:28.222024 sshd[4452]: Accepted publickey for core from 10.0.0.1 port 60078 ssh2: RSA SHA256:TdhA+b7AFfsR49yUYzSzEma9Q+UWNVnDR75LIS9Grbw
Feb 13 20:37:28.223224 sshd[4452]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 20:37:28.226607 systemd-logind[1425]: New session 110 of user core.
Feb 13 20:37:28.241459 systemd[1]: Started session-110.scope - Session 110 of User core.
Feb 13 20:37:28.346315 sshd[4452]: pam_unix(sshd:session): session closed for user core
Feb 13 20:37:28.349787 systemd[1]: sshd@109-10.0.0.6:22-10.0.0.1:60078.service: Deactivated successfully.
Feb 13 20:37:28.351479 systemd[1]: session-110.scope: Deactivated successfully.
Feb 13 20:37:28.352706 systemd-logind[1425]: Session 110 logged out. Waiting for processes to exit.
Feb 13 20:37:28.353589 systemd-logind[1425]: Removed session 110.
Feb 13 20:37:28.679623 kubelet[2437]: E0213 20:37:28.679584 2437 kubelet.go:3008] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Feb 13 20:37:31.537929 kubelet[2437]: E0213 20:37:31.537648 2437 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 13 20:37:31.538270 kubelet[2437]: E0213 20:37:31.538230 2437 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"install-cni\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/flannel/flannel:v0.22.0\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/flannel/flannel:v0.22.0\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/flannel/flannel/manifests/sha256:de969d65cb6e570ce2ff6f069132707b6da0e6d2adb43116d43743e2c31fd773: 429 Too Many Requests - Server message: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit\"" pod="kube-flannel/kube-flannel-ds-sgxqm" podUID="7d8d9a1b-1c8a-4252-bae8-b1cf43294240"
Feb 13 20:37:33.357863 systemd[1]: Started sshd@110-10.0.0.6:22-10.0.0.1:59944.service - OpenSSH per-connection server daemon (10.0.0.1:59944).
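Note: since the back-off has now recurred for over an hour of log time, a more durable fix than per-pod secrets is to route docker.io pulls through a mirror or caching proxy at the container runtime level, so every node stops hitting the anonymous limit. A sketch for containerd's CRI registry mirrors, assuming containerd 1.6-era configuration (newer releases prefer the hosts.toml config_path mechanism instead); the mirror endpoint is illustrative:

  # Append a docker.io mirror to /etc/containerd/config.toml (fragment)
  cat <<'EOF' >>/etc/containerd/config.toml
  [plugins."io.containerd.grpc.v1.cri".registry.mirrors."docker.io"]
    endpoint = ["https://registry-mirror.example.internal"]
  EOF

  # Reload the runtime so the mirror takes effect
  systemctl restart containerd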
Feb 13 20:37:33.395694 sshd[4467]: Accepted publickey for core from 10.0.0.1 port 59944 ssh2: RSA SHA256:TdhA+b7AFfsR49yUYzSzEma9Q+UWNVnDR75LIS9Grbw
Feb 13 20:37:33.396903 sshd[4467]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 20:37:33.400584 systemd-logind[1425]: New session 111 of user core.
Feb 13 20:37:33.410486 systemd[1]: Started session-111.scope - Session 111 of User core.
Feb 13 20:37:33.514502 sshd[4467]: pam_unix(sshd:session): session closed for user core
Feb 13 20:37:33.517840 systemd[1]: sshd@110-10.0.0.6:22-10.0.0.1:59944.service: Deactivated successfully.
Feb 13 20:37:33.520783 systemd[1]: session-111.scope: Deactivated successfully.
Feb 13 20:37:33.521367 systemd-logind[1425]: Session 111 logged out. Waiting for processes to exit.
Feb 13 20:37:33.522072 systemd-logind[1425]: Removed session 111.
Feb 13 20:37:33.680906 kubelet[2437]: E0213 20:37:33.680788 2437 kubelet.go:3008] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Feb 13 20:37:35.538196 kubelet[2437]: E0213 20:37:35.538157 2437 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 13 20:37:38.524727 systemd[1]: Started sshd@111-10.0.0.6:22-10.0.0.1:59954.service - OpenSSH per-connection server daemon (10.0.0.1:59954).
Feb 13 20:37:38.562031 sshd[4484]: Accepted publickey for core from 10.0.0.1 port 59954 ssh2: RSA SHA256:TdhA+b7AFfsR49yUYzSzEma9Q+UWNVnDR75LIS9Grbw
Feb 13 20:37:38.563200 sshd[4484]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 20:37:38.566650 systemd-logind[1425]: New session 112 of user core.
Feb 13 20:37:38.575460 systemd[1]: Started session-112.scope - Session 112 of User core.
Feb 13 20:37:38.679268 sshd[4484]: pam_unix(sshd:session): session closed for user core
Feb 13 20:37:38.682090 systemd-logind[1425]: Session 112 logged out. Waiting for processes to exit.
Feb 13 20:37:38.682399 systemd[1]: sshd@111-10.0.0.6:22-10.0.0.1:59954.service: Deactivated successfully.
Feb 13 20:37:38.682663 kubelet[2437]: E0213 20:37:38.682629 2437 kubelet.go:3008] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Feb 13 20:37:38.683846 systemd[1]: session-112.scope: Deactivated successfully.
Feb 13 20:37:38.685992 systemd-logind[1425]: Removed session 112.
Feb 13 20:37:43.537583 kubelet[2437]: E0213 20:37:43.537543 2437 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 13 20:37:43.538563 kubelet[2437]: E0213 20:37:43.538210 2437 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"install-cni\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/flannel/flannel:v0.22.0\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/flannel/flannel:v0.22.0\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/flannel/flannel/manifests/sha256:de969d65cb6e570ce2ff6f069132707b6da0e6d2adb43116d43743e2c31fd773: 429 Too Many Requests - Server message: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit\"" pod="kube-flannel/kube-flannel-ds-sgxqm" podUID="7d8d9a1b-1c8a-4252-bae8-b1cf43294240"
Feb 13 20:37:43.684263 kubelet[2437]: E0213 20:37:43.684221 2437 kubelet.go:3008] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Feb 13 20:37:43.689646 systemd[1]: Started sshd@112-10.0.0.6:22-10.0.0.1:53194.service - OpenSSH per-connection server daemon (10.0.0.1:53194).
Feb 13 20:37:43.727630 sshd[4501]: Accepted publickey for core from 10.0.0.1 port 53194 ssh2: RSA SHA256:TdhA+b7AFfsR49yUYzSzEma9Q+UWNVnDR75LIS9Grbw
Feb 13 20:37:43.728800 sshd[4501]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 20:37:43.732361 systemd-logind[1425]: New session 113 of user core.
Feb 13 20:37:43.739425 systemd[1]: Started session-113.scope - Session 113 of User core.
Feb 13 20:37:43.845904 sshd[4501]: pam_unix(sshd:session): session closed for user core
Feb 13 20:37:43.849063 systemd[1]: sshd@112-10.0.0.6:22-10.0.0.1:53194.service: Deactivated successfully.
Feb 13 20:37:43.850707 systemd[1]: session-113.scope: Deactivated successfully.
Feb 13 20:37:43.852090 systemd-logind[1425]: Session 113 logged out. Waiting for processes to exit.
Feb 13 20:37:43.852963 systemd-logind[1425]: Removed session 113.
Feb 13 20:37:44.537802 kubelet[2437]: E0213 20:37:44.537764 2437 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 13 20:37:48.685409 kubelet[2437]: E0213 20:37:48.685369 2437 kubelet.go:3008] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Feb 13 20:37:48.857225 systemd[1]: Started sshd@113-10.0.0.6:22-10.0.0.1:53200.service - OpenSSH per-connection server daemon (10.0.0.1:53200).
Feb 13 20:37:48.894736 sshd[4515]: Accepted publickey for core from 10.0.0.1 port 53200 ssh2: RSA SHA256:TdhA+b7AFfsR49yUYzSzEma9Q+UWNVnDR75LIS9Grbw
Feb 13 20:37:48.895890 sshd[4515]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 20:37:48.900260 systemd-logind[1425]: New session 114 of user core.
Feb 13 20:37:48.910448 systemd[1]: Started session-114.scope - Session 114 of User core.
Feb 13 20:37:49.015892 sshd[4515]: pam_unix(sshd:session): session closed for user core
Feb 13 20:37:49.019260 systemd[1]: sshd@113-10.0.0.6:22-10.0.0.1:53200.service: Deactivated successfully.
Feb 13 20:37:49.021688 systemd[1]: session-114.scope: Deactivated successfully.
Feb 13 20:37:49.022449 systemd-logind[1425]: Session 114 logged out. Waiting for processes to exit.
Feb 13 20:37:49.023213 systemd-logind[1425]: Removed session 114.
Feb 13 20:37:50.538313 kubelet[2437]: E0213 20:37:50.538268 2437 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 13 20:37:53.686003 kubelet[2437]: E0213 20:37:53.685956 2437 kubelet.go:3008] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Feb 13 20:37:54.029735 systemd[1]: Started sshd@114-10.0.0.6:22-10.0.0.1:33944.service - OpenSSH per-connection server daemon (10.0.0.1:33944).
Feb 13 20:37:54.067215 sshd[4529]: Accepted publickey for core from 10.0.0.1 port 33944 ssh2: RSA SHA256:TdhA+b7AFfsR49yUYzSzEma9Q+UWNVnDR75LIS9Grbw
Feb 13 20:37:54.068417 sshd[4529]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 20:37:54.072365 systemd-logind[1425]: New session 115 of user core.
Feb 13 20:37:54.078453 systemd[1]: Started session-115.scope - Session 115 of User core.
Feb 13 20:37:54.182794 sshd[4529]: pam_unix(sshd:session): session closed for user core
Feb 13 20:37:54.186106 systemd[1]: sshd@114-10.0.0.6:22-10.0.0.1:33944.service: Deactivated successfully.
Feb 13 20:37:54.187715 systemd[1]: session-115.scope: Deactivated successfully.
Feb 13 20:37:54.188334 systemd-logind[1425]: Session 115 logged out. Waiting for processes to exit.
Feb 13 20:37:54.189172 systemd-logind[1425]: Removed session 115.
Feb 13 20:37:57.538822 kubelet[2437]: E0213 20:37:57.538777 2437 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 13 20:37:57.539303 kubelet[2437]: E0213 20:37:57.539251 2437 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"install-cni\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/flannel/flannel:v0.22.0\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/flannel/flannel:v0.22.0\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/flannel/flannel/manifests/sha256:de969d65cb6e570ce2ff6f069132707b6da0e6d2adb43116d43743e2c31fd773: 429 Too Many Requests - Server message: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit\"" pod="kube-flannel/kube-flannel-ds-sgxqm" podUID="7d8d9a1b-1c8a-4252-bae8-b1cf43294240"
Feb 13 20:37:58.687542 kubelet[2437]: E0213 20:37:58.687495 2437 kubelet.go:3008] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Feb 13 20:37:59.193839 systemd[1]: Started sshd@115-10.0.0.6:22-10.0.0.1:33950.service - OpenSSH per-connection server daemon (10.0.0.1:33950).
Feb 13 20:37:59.230759 sshd[4544]: Accepted publickey for core from 10.0.0.1 port 33950 ssh2: RSA SHA256:TdhA+b7AFfsR49yUYzSzEma9Q+UWNVnDR75LIS9Grbw
Feb 13 20:37:59.231881 sshd[4544]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 20:37:59.235364 systemd-logind[1425]: New session 116 of user core.
Feb 13 20:37:59.242450 systemd[1]: Started session-116.scope - Session 116 of User core.
Feb 13 20:37:59.348534 sshd[4544]: pam_unix(sshd:session): session closed for user core
Feb 13 20:37:59.351527 systemd[1]: sshd@115-10.0.0.6:22-10.0.0.1:33950.service: Deactivated successfully.
Feb 13 20:37:59.353071 systemd[1]: session-116.scope: Deactivated successfully.
Feb 13 20:37:59.353900 systemd-logind[1425]: Session 116 logged out. Waiting for processes to exit.
Feb 13 20:37:59.354882 systemd-logind[1425]: Removed session 116.
Feb 13 20:38:03.688637 kubelet[2437]: E0213 20:38:03.688570 2437 kubelet.go:3008] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Feb 13 20:38:04.358676 systemd[1]: Started sshd@116-10.0.0.6:22-10.0.0.1:34528.service - OpenSSH per-connection server daemon (10.0.0.1:34528).
Feb 13 20:38:04.396608 sshd[4559]: Accepted publickey for core from 10.0.0.1 port 34528 ssh2: RSA SHA256:TdhA+b7AFfsR49yUYzSzEma9Q+UWNVnDR75LIS9Grbw
Feb 13 20:38:04.397698 sshd[4559]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 20:38:04.401841 systemd-logind[1425]: New session 117 of user core.
Feb 13 20:38:04.408446 systemd[1]: Started session-117.scope - Session 117 of User core.
Feb 13 20:38:04.512731 sshd[4559]: pam_unix(sshd:session): session closed for user core
Feb 13 20:38:04.515899 systemd[1]: sshd@116-10.0.0.6:22-10.0.0.1:34528.service: Deactivated successfully.
Feb 13 20:38:04.518058 systemd[1]: session-117.scope: Deactivated successfully.
Feb 13 20:38:04.518920 systemd-logind[1425]: Session 117 logged out. Waiting for processes to exit.
Feb 13 20:38:04.519724 systemd-logind[1425]: Removed session 117.