Feb 13 20:16:14.981039 kernel: Booting Linux on physical CPU 0x0000000000 [0x413fd0c1]
Feb 13 20:16:14.981122 kernel: Linux version 6.6.74-flatcar (build@pony-truck.infra.kinvolk.io) (aarch64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p17) 13.3.1 20240614, GNU ld (Gentoo 2.42 p3) 2.42.0) #1 SMP PREEMPT Thu Feb 13 18:13:29 -00 2025
Feb 13 20:16:14.981133 kernel: KASLR enabled
Feb 13 20:16:14.981139 kernel: efi: EFI v2.7 by EDK II
Feb 13 20:16:14.981145 kernel: efi: SMBIOS 3.0=0xdced0000 MEMATTR=0xdba86018 ACPI 2.0=0xd9710018 RNG=0xd971e498 MEMRESERVE=0xd9b43d18
Feb 13 20:16:14.981150 kernel: random: crng init done
Feb 13 20:16:14.981157 kernel: ACPI: Early table checksum verification disabled
Feb 13 20:16:14.981163 kernel: ACPI: RSDP 0x00000000D9710018 000024 (v02 BOCHS )
Feb 13 20:16:14.981170 kernel: ACPI: XSDT 0x00000000D971FE98 000064 (v01 BOCHS BXPC 00000001 01000013)
Feb 13 20:16:14.981177 kernel: ACPI: FACP 0x00000000D971FA98 000114 (v06 BOCHS BXPC 00000001 BXPC 00000001)
Feb 13 20:16:14.981184 kernel: ACPI: DSDT 0x00000000D9717518 0014A2 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Feb 13 20:16:14.981189 kernel: ACPI: APIC 0x00000000D971FC18 0001A8 (v04 BOCHS BXPC 00000001 BXPC 00000001)
Feb 13 20:16:14.981195 kernel: ACPI: PPTT 0x00000000D971D898 00009C (v02 BOCHS BXPC 00000001 BXPC 00000001)
Feb 13 20:16:14.981207 kernel: ACPI: GTDT 0x00000000D971E818 000060 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Feb 13 20:16:14.981225 kernel: ACPI: MCFG 0x00000000D971E918 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001)
Feb 13 20:16:14.981234 kernel: ACPI: SPCR 0x00000000D971FF98 000050 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Feb 13 20:16:14.981241 kernel: ACPI: DBG2 0x00000000D971E418 000057 (v00 BOCHS BXPC 00000001 BXPC 00000001)
Feb 13 20:16:14.981247 kernel: ACPI: IORT 0x00000000D971E718 000080 (v03 BOCHS BXPC 00000001 BXPC 00000001)
Feb 13 20:16:14.981253 kernel: ACPI: SPCR: console: pl011,mmio,0x9000000,9600
Feb 13 20:16:14.981260 kernel: NUMA: Failed to initialise from firmware
Feb 13 20:16:14.981266 kernel: NUMA: Faking a node at [mem 0x0000000040000000-0x00000000dcffffff]
Feb 13 20:16:14.981272 kernel: NUMA: NODE_DATA [mem 0xdc958800-0xdc95dfff]
Feb 13 20:16:14.981278 kernel: Zone ranges:
Feb 13 20:16:14.981285 kernel: DMA [mem 0x0000000040000000-0x00000000dcffffff]
Feb 13 20:16:14.981292 kernel: DMA32 empty
Feb 13 20:16:14.981300 kernel: Normal empty
Feb 13 20:16:14.981306 kernel: Movable zone start for each node
Feb 13 20:16:14.981312 kernel: Early memory node ranges
Feb 13 20:16:14.981319 kernel: node 0: [mem 0x0000000040000000-0x00000000d976ffff]
Feb 13 20:16:14.981326 kernel: node 0: [mem 0x00000000d9770000-0x00000000d9b3ffff]
Feb 13 20:16:14.981332 kernel: node 0: [mem 0x00000000d9b40000-0x00000000dce1ffff]
Feb 13 20:16:14.981339 kernel: node 0: [mem 0x00000000dce20000-0x00000000dceaffff]
Feb 13 20:16:14.981346 kernel: node 0: [mem 0x00000000dceb0000-0x00000000dcebffff]
Feb 13 20:16:14.981353 kernel: node 0: [mem 0x00000000dcec0000-0x00000000dcfdffff]
Feb 13 20:16:14.981359 kernel: node 0: [mem 0x00000000dcfe0000-0x00000000dcffffff]
Feb 13 20:16:14.981365 kernel: Initmem setup node 0 [mem 0x0000000040000000-0x00000000dcffffff]
Feb 13 20:16:14.981372 kernel: On node 0, zone DMA: 12288 pages in unavailable ranges
Feb 13 20:16:14.981380 kernel: psci: probing for conduit method from ACPI.
Feb 13 20:16:14.981386 kernel: psci: PSCIv1.1 detected in firmware.
Feb 13 20:16:14.981393 kernel: psci: Using standard PSCI v0.2 function IDs
Feb 13 20:16:14.981402 kernel: psci: Trusted OS migration not required
Feb 13 20:16:14.981409 kernel: psci: SMC Calling Convention v1.1
Feb 13 20:16:14.981416 kernel: smccc: KVM: hypervisor services detected (0x00000000 0x00000000 0x00000000 0x00000003)
Feb 13 20:16:14.981424 kernel: percpu: Embedded 31 pages/cpu s86696 r8192 d32088 u126976
Feb 13 20:16:14.981431 kernel: pcpu-alloc: s86696 r8192 d32088 u126976 alloc=31*4096
Feb 13 20:16:14.981438 kernel: pcpu-alloc: [0] 0 [0] 1 [0] 2 [0] 3
Feb 13 20:16:14.981444 kernel: Detected PIPT I-cache on CPU0
Feb 13 20:16:14.981451 kernel: CPU features: detected: GIC system register CPU interface
Feb 13 20:16:14.981458 kernel: CPU features: detected: Hardware dirty bit management
Feb 13 20:16:14.981465 kernel: CPU features: detected: Spectre-v4
Feb 13 20:16:14.981471 kernel: CPU features: detected: Spectre-BHB
Feb 13 20:16:14.981479 kernel: CPU features: kernel page table isolation forced ON by KASLR
Feb 13 20:16:14.981486 kernel: CPU features: detected: Kernel page table isolation (KPTI)
Feb 13 20:16:14.981495 kernel: CPU features: detected: ARM erratum 1418040
Feb 13 20:16:14.981501 kernel: CPU features: detected: SSBS not fully self-synchronizing
Feb 13 20:16:14.981508 kernel: alternatives: applying boot alternatives
Feb 13 20:16:14.981516 kernel: Kernel command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected acpi=force verity.usrhash=c15c751c06cfb933aa98417326b93d899c08a83ce060a940cd01082629c201a7
Feb 13 20:16:14.981523 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Feb 13 20:16:14.981530 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Feb 13 20:16:14.981537 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Feb 13 20:16:14.981544 kernel: Fallback order for Node 0: 0
Feb 13 20:16:14.981550 kernel: Built 1 zonelists, mobility grouping on. Total pages: 633024
Feb 13 20:16:14.981557 kernel: Policy zone: DMA
Feb 13 20:16:14.981564 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Feb 13 20:16:14.981572 kernel: software IO TLB: area num 4.
Feb 13 20:16:14.981579 kernel: software IO TLB: mapped [mem 0x00000000d2e00000-0x00000000d6e00000] (64MB)
Feb 13 20:16:14.981587 kernel: Memory: 2386532K/2572288K available (10240K kernel code, 2186K rwdata, 8096K rodata, 39360K init, 897K bss, 185756K reserved, 0K cma-reserved)
Feb 13 20:16:14.981594 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1
Feb 13 20:16:14.981601 kernel: rcu: Preemptible hierarchical RCU implementation.
Feb 13 20:16:14.981608 kernel: rcu: RCU event tracing is enabled.
Feb 13 20:16:14.981615 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4.
Feb 13 20:16:14.981622 kernel: Trampoline variant of Tasks RCU enabled.
Feb 13 20:16:14.981629 kernel: Tracing variant of Tasks RCU enabled.
Feb 13 20:16:14.981655 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Feb 13 20:16:14.981662 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4
Feb 13 20:16:14.981669 kernel: NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0
Feb 13 20:16:14.981679 kernel: GICv3: 256 SPIs implemented
Feb 13 20:16:14.981686 kernel: GICv3: 0 Extended SPIs implemented
Feb 13 20:16:14.981692 kernel: Root IRQ handler: gic_handle_irq
Feb 13 20:16:14.981711 kernel: GICv3: GICv3 features: 16 PPIs, DirectLPI
Feb 13 20:16:14.981718 kernel: GICv3: CPU0: found redistributor 0 region 0:0x00000000080a0000
Feb 13 20:16:14.981725 kernel: ITS [mem 0x08080000-0x0809ffff]
Feb 13 20:16:14.981732 kernel: ITS@0x0000000008080000: allocated 8192 Devices @400c0000 (indirect, esz 8, psz 64K, shr 1)
Feb 13 20:16:14.981739 kernel: ITS@0x0000000008080000: allocated 8192 Interrupt Collections @400d0000 (flat, esz 8, psz 64K, shr 1)
Feb 13 20:16:14.981746 kernel: GICv3: using LPI property table @0x00000000400f0000
Feb 13 20:16:14.981753 kernel: GICv3: CPU0: using allocated LPI pending table @0x0000000040100000
Feb 13 20:16:14.981760 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Feb 13 20:16:14.981768 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Feb 13 20:16:14.981775 kernel: arch_timer: cp15 timer(s) running at 25.00MHz (virt).
Feb 13 20:16:14.981782 kernel: clocksource: arch_sys_counter: mask: 0xffffffffffffff max_cycles: 0x5c40939b5, max_idle_ns: 440795202646 ns
Feb 13 20:16:14.981789 kernel: sched_clock: 56 bits at 25MHz, resolution 40ns, wraps every 4398046511100ns
Feb 13 20:16:14.981796 kernel: arm-pv: using stolen time PV
Feb 13 20:16:14.981803 kernel: Console: colour dummy device 80x25
Feb 13 20:16:14.981810 kernel: ACPI: Core revision 20230628
Feb 13 20:16:14.981817 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 50.00 BogoMIPS (lpj=25000)
Feb 13 20:16:14.981824 kernel: pid_max: default: 32768 minimum: 301
Feb 13 20:16:14.981831 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity
Feb 13 20:16:14.981839 kernel: landlock: Up and running.
Feb 13 20:16:14.981846 kernel: SELinux: Initializing.
Feb 13 20:16:14.981853 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Feb 13 20:16:14.981860 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Feb 13 20:16:14.981867 kernel: RCU Tasks: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Feb 13 20:16:14.981874 kernel: RCU Tasks Trace: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Feb 13 20:16:14.981881 kernel: rcu: Hierarchical SRCU implementation.
Feb 13 20:16:14.981888 kernel: rcu: Max phase no-delay instances is 400.
Feb 13 20:16:14.981895 kernel: Platform MSI: ITS@0x8080000 domain created
Feb 13 20:16:14.981903 kernel: PCI/MSI: ITS@0x8080000 domain created
Feb 13 20:16:14.981910 kernel: Remapping and enabling EFI services.
Feb 13 20:16:14.981917 kernel: smp: Bringing up secondary CPUs ...
Feb 13 20:16:14.981924 kernel: Detected PIPT I-cache on CPU1
Feb 13 20:16:14.981931 kernel: GICv3: CPU1: found redistributor 1 region 0:0x00000000080c0000
Feb 13 20:16:14.981938 kernel: GICv3: CPU1: using allocated LPI pending table @0x0000000040110000
Feb 13 20:16:14.981945 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Feb 13 20:16:14.981952 kernel: CPU1: Booted secondary processor 0x0000000001 [0x413fd0c1]
Feb 13 20:16:14.981959 kernel: Detected PIPT I-cache on CPU2
Feb 13 20:16:14.981966 kernel: GICv3: CPU2: found redistributor 2 region 0:0x00000000080e0000
Feb 13 20:16:14.981975 kernel: GICv3: CPU2: using allocated LPI pending table @0x0000000040120000
Feb 13 20:16:14.981982 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Feb 13 20:16:14.981995 kernel: CPU2: Booted secondary processor 0x0000000002 [0x413fd0c1]
Feb 13 20:16:14.982003 kernel: Detected PIPT I-cache on CPU3
Feb 13 20:16:14.982011 kernel: GICv3: CPU3: found redistributor 3 region 0:0x0000000008100000
Feb 13 20:16:14.982018 kernel: GICv3: CPU3: using allocated LPI pending table @0x0000000040130000
Feb 13 20:16:14.982025 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Feb 13 20:16:14.982032 kernel: CPU3: Booted secondary processor 0x0000000003 [0x413fd0c1]
Feb 13 20:16:14.982040 kernel: smp: Brought up 1 node, 4 CPUs
Feb 13 20:16:14.982089 kernel: SMP: Total of 4 processors activated.
Feb 13 20:16:14.982097 kernel: CPU features: detected: 32-bit EL0 Support
Feb 13 20:16:14.982105 kernel: CPU features: detected: Data cache clean to the PoU not required for I/D coherence
Feb 13 20:16:14.982113 kernel: CPU features: detected: Common not Private translations
Feb 13 20:16:14.982120 kernel: CPU features: detected: CRC32 instructions
Feb 13 20:16:14.982128 kernel: CPU features: detected: Enhanced Virtualization Traps
Feb 13 20:16:14.982135 kernel: CPU features: detected: RCpc load-acquire (LDAPR)
Feb 13 20:16:14.982142 kernel: CPU features: detected: LSE atomic instructions
Feb 13 20:16:14.982152 kernel: CPU features: detected: Privileged Access Never
Feb 13 20:16:14.982160 kernel: CPU features: detected: RAS Extension Support
Feb 13 20:16:14.982167 kernel: CPU features: detected: Speculative Store Bypassing Safe (SSBS)
Feb 13 20:16:14.982174 kernel: CPU: All CPU(s) started at EL1
Feb 13 20:16:14.982182 kernel: alternatives: applying system-wide alternatives
Feb 13 20:16:14.982189 kernel: devtmpfs: initialized
Feb 13 20:16:14.982196 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Feb 13 20:16:14.982208 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear)
Feb 13 20:16:14.982264 kernel: pinctrl core: initialized pinctrl subsystem
Feb 13 20:16:14.982275 kernel: SMBIOS 3.0.0 present.
Feb 13 20:16:14.982282 kernel: DMI: QEMU KVM Virtual Machine, BIOS edk2-20230524-3.fc38 05/24/2023
Feb 13 20:16:14.982289 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Feb 13 20:16:14.982297 kernel: DMA: preallocated 512 KiB GFP_KERNEL pool for atomic allocations
Feb 13 20:16:14.982304 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations
Feb 13 20:16:14.982311 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations
Feb 13 20:16:14.982319 kernel: audit: initializing netlink subsys (disabled)
Feb 13 20:16:14.982326 kernel: audit: type=2000 audit(0.023:1): state=initialized audit_enabled=0 res=1
Feb 13 20:16:14.982334 kernel: thermal_sys: Registered thermal governor 'step_wise'
Feb 13 20:16:14.982342 kernel: cpuidle: using governor menu
Feb 13 20:16:14.982350 kernel: hw-breakpoint: found 6 breakpoint and 4 watchpoint registers.
Feb 13 20:16:14.982357 kernel: ASID allocator initialised with 32768 entries
Feb 13 20:16:14.982364 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Feb 13 20:16:14.982371 kernel: Serial: AMBA PL011 UART driver
Feb 13 20:16:14.982379 kernel: Modules: 2G module region forced by RANDOMIZE_MODULE_REGION_FULL
Feb 13 20:16:14.982386 kernel: Modules: 0 pages in range for non-PLT usage
Feb 13 20:16:14.982393 kernel: Modules: 509040 pages in range for PLT usage
Feb 13 20:16:14.982400 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Feb 13 20:16:14.982409 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 1.00 GiB page
Feb 13 20:16:14.982417 kernel: HugeTLB: registered 32.0 MiB page size, pre-allocated 0 pages
Feb 13 20:16:14.982424 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 32.0 MiB page
Feb 13 20:16:14.982977 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Feb 13 20:16:14.982992 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 2.00 MiB page
Feb 13 20:16:14.983000 kernel: HugeTLB: registered 64.0 KiB page size, pre-allocated 0 pages
Feb 13 20:16:14.983009 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 64.0 KiB page
Feb 13 20:16:14.983016 kernel: ACPI: Added _OSI(Module Device)
Feb 13 20:16:14.983024 kernel: ACPI: Added _OSI(Processor Device)
Feb 13 20:16:14.983040 kernel: ACPI: Added _OSI(3.0 _SCP Extensions)
Feb 13 20:16:14.983080 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Feb 13 20:16:14.983089 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Feb 13 20:16:14.983096 kernel: ACPI: Interpreter enabled
Feb 13 20:16:14.983104 kernel: ACPI: Using GIC for interrupt routing
Feb 13 20:16:14.983111 kernel: ACPI: MCFG table detected, 1 entries
Feb 13 20:16:14.983119 kernel: ARMH0011:00: ttyAMA0 at MMIO 0x9000000 (irq = 12, base_baud = 0) is a SBSA
Feb 13 20:16:14.983126 kernel: printk: console [ttyAMA0] enabled
Feb 13 20:16:14.983133 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Feb 13 20:16:14.983330 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Feb 13 20:16:14.983407 kernel: acpi PNP0A08:00: _OSC: platform does not support [LTR]
Feb 13 20:16:14.983474 kernel: acpi PNP0A08:00: _OSC: OS now controls [PCIeHotplug PME AER PCIeCapability]
Feb 13 20:16:14.983539 kernel: acpi PNP0A08:00: ECAM area [mem 0x4010000000-0x401fffffff] reserved by PNP0C02:00
Feb 13 20:16:14.983605 kernel: acpi PNP0A08:00: ECAM at [mem 0x4010000000-0x401fffffff] for [bus 00-ff]
Feb 13 20:16:14.983615 kernel: ACPI: Remapped I/O 0x000000003eff0000 to [io 0x0000-0xffff window]
Feb 13 20:16:14.983623 kernel: PCI host bridge to bus 0000:00
Feb 13 20:16:14.983708 kernel: pci_bus 0000:00: root bus resource [mem 0x10000000-0x3efeffff window]
Feb 13 20:16:14.983780 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0xffff window]
Feb 13 20:16:14.983845 kernel: pci_bus 0000:00: root bus resource [mem 0x8000000000-0xffffffffff window]
Feb 13 20:16:14.983925 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Feb 13 20:16:14.984020 kernel: pci 0000:00:00.0: [1b36:0008] type 00 class 0x060000
Feb 13 20:16:14.984204 kernel: pci 0000:00:01.0: [1af4:1005] type 00 class 0x00ff00
Feb 13 20:16:14.984319 kernel: pci 0000:00:01.0: reg 0x10: [io 0x0000-0x001f]
Feb 13 20:16:14.984388 kernel: pci 0000:00:01.0: reg 0x14: [mem 0x10000000-0x10000fff]
Feb 13 20:16:14.984456 kernel: pci 0000:00:01.0: reg 0x20: [mem 0x8000000000-0x8000003fff 64bit pref]
Feb 13 20:16:14.984524 kernel: pci 0000:00:01.0: BAR 4: assigned [mem 0x8000000000-0x8000003fff 64bit pref]
Feb 13 20:16:14.984595 kernel: pci 0000:00:01.0: BAR 1: assigned [mem 0x10000000-0x10000fff]
Feb 13 20:16:14.984668 kernel: pci 0000:00:01.0: BAR 0: assigned [io 0x1000-0x101f]
Feb 13 20:16:14.984733 kernel: pci_bus 0000:00: resource 4 [mem 0x10000000-0x3efeffff window]
Feb 13 20:16:14.984797 kernel: pci_bus 0000:00: resource 5 [io 0x0000-0xffff window]
Feb 13 20:16:14.984859 kernel: pci_bus 0000:00: resource 6 [mem 0x8000000000-0xffffffffff window]
Feb 13 20:16:14.984869 kernel: ACPI: PCI: Interrupt link GSI0 configured for IRQ 35
Feb 13 20:16:14.984877 kernel: ACPI: PCI: Interrupt link GSI1 configured for IRQ 36
Feb 13 20:16:14.984885 kernel: ACPI: PCI: Interrupt link GSI2 configured for IRQ 37
Feb 13 20:16:14.984892 kernel: ACPI: PCI: Interrupt link GSI3 configured for IRQ 38
Feb 13 20:16:14.984900 kernel: iommu: Default domain type: Translated
Feb 13 20:16:14.984908 kernel: iommu: DMA domain TLB invalidation policy: strict mode
Feb 13 20:16:14.984915 kernel: efivars: Registered efivars operations
Feb 13 20:16:14.984925 kernel: vgaarb: loaded
Feb 13 20:16:14.984932 kernel: clocksource: Switched to clocksource arch_sys_counter
Feb 13 20:16:14.984940 kernel: VFS: Disk quotas dquot_6.6.0
Feb 13 20:16:14.984948 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Feb 13 20:16:14.984955 kernel: pnp: PnP ACPI init
Feb 13 20:16:14.985031 kernel: system 00:00: [mem 0x4010000000-0x401fffffff window] could not be reserved
Feb 13 20:16:14.985043 kernel: pnp: PnP ACPI: found 1 devices
Feb 13 20:16:14.985107 kernel: NET: Registered PF_INET protocol family
Feb 13 20:16:14.985123 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Feb 13 20:16:14.985131 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Feb 13 20:16:14.985139 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Feb 13 20:16:14.985147 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Feb 13 20:16:14.985155 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
Feb 13 20:16:14.985163 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Feb 13 20:16:14.985170 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Feb 13 20:16:14.985178 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Feb 13 20:16:14.985185 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Feb 13 20:16:14.985194 kernel: PCI: CLS 0 bytes, default 64
Feb 13 20:16:14.985208 kernel: kvm [1]: HYP mode not available
Feb 13 20:16:14.985226 kernel: Initialise system trusted keyrings
Feb 13 20:16:14.985233 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Feb 13 20:16:14.985241 kernel: Key type asymmetric registered
Feb 13 20:16:14.985248 kernel: Asymmetric key parser 'x509' registered
Feb 13 20:16:14.985256 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250)
Feb 13 20:16:14.985263 kernel: io scheduler mq-deadline registered
Feb 13 20:16:14.985271 kernel: io scheduler kyber registered
Feb 13 20:16:14.985281 kernel: io scheduler bfq registered
Feb 13 20:16:14.985289 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0C:00/input/input0
Feb 13 20:16:14.985297 kernel: ACPI: button: Power Button [PWRB]
Feb 13 20:16:14.985305 kernel: ACPI: \_SB_.PCI0.GSI1: Enabled at IRQ 36
Feb 13 20:16:14.985402 kernel: virtio-pci 0000:00:01.0: enabling device (0005 -> 0007)
Feb 13 20:16:14.985414 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Feb 13 20:16:14.985422 kernel: thunder_xcv, ver 1.0
Feb 13 20:16:14.985429 kernel: thunder_bgx, ver 1.0
Feb 13 20:16:14.985437 kernel: nicpf, ver 1.0
Feb 13 20:16:14.985447 kernel: nicvf, ver 1.0
Feb 13 20:16:14.985533 kernel: rtc-efi rtc-efi.0: registered as rtc0
Feb 13 20:16:14.985602 kernel: rtc-efi rtc-efi.0: setting system clock to 2025-02-13T20:16:14 UTC (1739477774)
Feb 13 20:16:14.985612 kernel: hid: raw HID events driver (C) Jiri Kosina
Feb 13 20:16:14.985620 kernel: hw perfevents: enabled with armv8_pmuv3_0 PMU driver, 7 counters available
Feb 13 20:16:14.985629 kernel: watchdog: Delayed init of the lockup detector failed: -19
Feb 13 20:16:14.985636 kernel: watchdog: Hard watchdog permanently disabled
Feb 13 20:16:14.985645 kernel: NET: Registered PF_INET6 protocol family
Feb 13 20:16:14.985671 kernel: Segment Routing with IPv6
Feb 13 20:16:14.985683 kernel: In-situ OAM (IOAM) with IPv6
Feb 13 20:16:14.985692 kernel: NET: Registered PF_PACKET protocol family
Feb 13 20:16:14.985700 kernel: Key type dns_resolver registered
Feb 13 20:16:14.985707 kernel: registered taskstats version 1
Feb 13 20:16:14.985715 kernel: Loading compiled-in X.509 certificates
Feb 13 20:16:14.985723 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.74-flatcar: 8bd805622262697b24b0fa7c407ae82c4289ceec'
Feb 13 20:16:14.985731 kernel: Key type .fscrypt registered
Feb 13 20:16:14.985739 kernel: Key type fscrypt-provisioning registered
Feb 13 20:16:14.985750 kernel: ima: No TPM chip found, activating TPM-bypass!
Feb 13 20:16:14.985758 kernel: ima: Allocated hash algorithm: sha1
Feb 13 20:16:14.985766 kernel: ima: No architecture policies found
Feb 13 20:16:14.985773 kernel: alg: No test for fips(ansi_cprng) (fips_ansi_cprng)
Feb 13 20:16:14.985781 kernel: clk: Disabling unused clocks
Feb 13 20:16:14.985788 kernel: Freeing unused kernel memory: 39360K
Feb 13 20:16:14.985796 kernel: Run /init as init process
Feb 13 20:16:14.985804 kernel: with arguments:
Feb 13 20:16:14.985816 kernel: /init
Feb 13 20:16:14.985827 kernel: with environment:
Feb 13 20:16:14.985835 kernel: HOME=/
Feb 13 20:16:14.985844 kernel: TERM=linux
Feb 13 20:16:14.985851 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
Feb 13 20:16:14.985861 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Feb 13 20:16:14.985872 systemd[1]: Detected virtualization kvm.
Feb 13 20:16:14.985880 systemd[1]: Detected architecture arm64.
Feb 13 20:16:14.985890 systemd[1]: Running in initrd.
Feb 13 20:16:14.985898 systemd[1]: No hostname configured, using default hostname.
Feb 13 20:16:14.985956 systemd[1]: Hostname set to .
Feb 13 20:16:14.985967 systemd[1]: Initializing machine ID from VM UUID.
Feb 13 20:16:14.985975 systemd[1]: Queued start job for default target initrd.target.
Feb 13 20:16:14.985983 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Feb 13 20:16:14.985992 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Feb 13 20:16:14.986001 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Feb 13 20:16:14.986013 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Feb 13 20:16:14.986022 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Feb 13 20:16:14.986030 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Feb 13 20:16:14.986040 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Feb 13 20:16:14.986049 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Feb 13 20:16:14.986117 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Feb 13 20:16:14.986126 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Feb 13 20:16:14.986137 systemd[1]: Reached target paths.target - Path Units.
Feb 13 20:16:14.986146 systemd[1]: Reached target slices.target - Slice Units.
Feb 13 20:16:14.986154 systemd[1]: Reached target swap.target - Swaps.
Feb 13 20:16:14.986162 systemd[1]: Reached target timers.target - Timer Units.
Feb 13 20:16:14.986170 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Feb 13 20:16:14.986178 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Feb 13 20:16:14.986187 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Feb 13 20:16:14.986195 systemd[1]: Listening on systemd-journald.socket - Journal Socket.
Feb 13 20:16:14.986207 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Feb 13 20:16:14.986241 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Feb 13 20:16:14.986250 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Feb 13 20:16:14.986259 systemd[1]: Reached target sockets.target - Socket Units.
Feb 13 20:16:14.986267 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Feb 13 20:16:14.986276 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Feb 13 20:16:14.986284 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Feb 13 20:16:14.986292 systemd[1]: Starting systemd-fsck-usr.service...
Feb 13 20:16:14.986304 systemd[1]: Starting systemd-journald.service - Journal Service...
Feb 13 20:16:14.986318 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Feb 13 20:16:14.986326 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Feb 13 20:16:14.986335 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Feb 13 20:16:14.986344 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Feb 13 20:16:14.986352 systemd[1]: Finished systemd-fsck-usr.service.
Feb 13 20:16:14.986361 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Feb 13 20:16:14.986372 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Feb 13 20:16:14.986380 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Feb 13 20:16:14.986418 systemd-journald[237]: Collecting audit messages is disabled.
Feb 13 20:16:14.986441 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Feb 13 20:16:14.986450 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Feb 13 20:16:14.986458 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Feb 13 20:16:14.986468 systemd-journald[237]: Journal started
Feb 13 20:16:14.986487 systemd-journald[237]: Runtime Journal (/run/log/journal/db4c5402ecaf4752a71484d2227f7253) is 5.9M, max 47.3M, 41.4M free.
Feb 13 20:16:14.971397 systemd-modules-load[238]: Inserted module 'overlay'
Feb 13 20:16:14.990238 kernel: Bridge firewalling registered
Feb 13 20:16:14.990270 systemd[1]: Started systemd-journald.service - Journal Service.
Feb 13 20:16:14.989627 systemd-modules-load[238]: Inserted module 'br_netfilter'
Feb 13 20:16:14.992815 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Feb 13 20:16:15.001399 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Feb 13 20:16:15.003529 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Feb 13 20:16:15.005613 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Feb 13 20:16:15.009660 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Feb 13 20:16:15.010909 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Feb 13 20:16:15.013481 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Feb 13 20:16:15.015563 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Feb 13 20:16:15.019129 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Feb 13 20:16:15.028353 dracut-cmdline[275]: dracut-dracut-053
Feb 13 20:16:15.030781 dracut-cmdline[275]: Using kernel command line parameters: rd.driver.pre=btrfs BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected acpi=force verity.usrhash=c15c751c06cfb933aa98417326b93d899c08a83ce060a940cd01082629c201a7
Feb 13 20:16:15.052680 systemd-resolved[277]: Positive Trust Anchors:
Feb 13 20:16:15.052694 systemd-resolved[277]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Feb 13 20:16:15.052726 systemd-resolved[277]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Feb 13 20:16:15.057451 systemd-resolved[277]: Defaulting to hostname 'linux'.
Feb 13 20:16:15.058711 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Feb 13 20:16:15.062469 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Feb 13 20:16:15.104243 kernel: SCSI subsystem initialized
Feb 13 20:16:15.110321 kernel: Loading iSCSI transport class v2.0-870.
Feb 13 20:16:15.120250 kernel: iscsi: registered transport (tcp)
Feb 13 20:16:15.139582 kernel: iscsi: registered transport (qla4xxx)
Feb 13 20:16:15.139651 kernel: QLogic iSCSI HBA Driver
Feb 13 20:16:15.188246 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Feb 13 20:16:15.202354 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Feb 13 20:16:15.223311 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Feb 13 20:16:15.223365 kernel: device-mapper: uevent: version 1.0.3
Feb 13 20:16:15.224956 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com
Feb 13 20:16:15.273246 kernel: raid6: neonx8 gen() 15700 MB/s
Feb 13 20:16:15.289362 kernel: raid6: neonx4 gen() 15581 MB/s
Feb 13 20:16:15.306236 kernel: raid6: neonx2 gen() 13113 MB/s
Feb 13 20:16:15.323245 kernel: raid6: neonx1 gen() 10431 MB/s
Feb 13 20:16:15.340232 kernel: raid6: int64x8 gen() 6952 MB/s
Feb 13 20:16:15.357234 kernel: raid6: int64x4 gen() 7322 MB/s
Feb 13 20:16:15.374239 kernel: raid6: int64x2 gen() 6102 MB/s
Feb 13 20:16:15.391371 kernel: raid6: int64x1 gen() 5034 MB/s
Feb 13 20:16:15.391396 kernel: raid6: using algorithm neonx8 gen() 15700 MB/s
Feb 13 20:16:15.409309 kernel: raid6: .... xor() 11930 MB/s, rmw enabled
Feb 13 20:16:15.409322 kernel: raid6: using neon recovery algorithm
Feb 13 20:16:15.415722 kernel: xor: measuring software checksum speed
Feb 13 20:16:15.415740 kernel: 8regs : 19788 MB/sec
Feb 13 20:16:15.415749 kernel: 32regs : 19650 MB/sec
Feb 13 20:16:15.416409 kernel: arm64_neon : 27007 MB/sec
Feb 13 20:16:15.416421 kernel: xor: using function: arm64_neon (27007 MB/sec)
Feb 13 20:16:15.478015 kernel: Btrfs loaded, zoned=no, fsverity=no
Feb 13 20:16:15.492248 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Feb 13 20:16:15.501433 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Feb 13 20:16:15.512799 systemd-udevd[459]: Using default interface naming scheme 'v255'.
Feb 13 20:16:15.515933 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Feb 13 20:16:15.526524 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Feb 13 20:16:15.537550 dracut-pre-trigger[468]: rd.md=0: removing MD RAID activation
Feb 13 20:16:15.562983 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Feb 13 20:16:15.579379 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Feb 13 20:16:15.618564 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Feb 13 20:16:15.627388 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Feb 13 20:16:15.643296 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Feb 13 20:16:15.645646 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Feb 13 20:16:15.649390 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Feb 13 20:16:15.651742 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Feb 13 20:16:15.659444 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Feb 13 20:16:15.664267 kernel: virtio_blk virtio1: 1/0/0 default/read/poll queues
Feb 13 20:16:15.675457 kernel: virtio_blk virtio1: [vda] 19775488 512-byte logical blocks (10.1 GB/9.43 GiB)
Feb 13 20:16:15.675562 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Feb 13 20:16:15.675581 kernel: GPT:9289727 != 19775487
Feb 13 20:16:15.675592 kernel: GPT:Alternate GPT header not at the end of the disk.
Feb 13 20:16:15.675601 kernel: GPT:9289727 != 19775487
Feb 13 20:16:15.675613 kernel: GPT: Use GNU Parted to correct GPT errors.
Feb 13 20:16:15.675623 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Feb 13 20:16:15.670095 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Feb 13 20:16:15.670240 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Feb 13 20:16:15.675834 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Feb 13 20:16:15.677408 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Feb 13 20:16:15.677554 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Feb 13 20:16:15.680002 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Feb 13 20:16:15.691237 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/vda6 scanned by (udev-worker) (506)
Feb 13 20:16:15.692481 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Feb 13 20:16:15.695286 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Feb 13 20:16:15.698914 kernel: BTRFS: device fsid 4bb2b262-8ef2-48e3-80f4-24f9d7a85bf6 devid 1 transid 40 /dev/vda3 scanned by (udev-worker) (522)
Feb 13 20:16:15.708253 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Feb 13 20:16:15.721025 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM.
Feb 13 20:16:15.726227 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT.
Feb 13 20:16:15.731373 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Feb 13 20:16:15.735745 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132.
Feb 13 20:16:15.736968 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A.
Feb 13 20:16:15.748358 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Feb 13 20:16:15.750225 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Feb 13 20:16:15.757000 disk-uuid[550]: Primary Header is updated.
Feb 13 20:16:15.757000 disk-uuid[550]: Secondary Entries is updated.
Feb 13 20:16:15.757000 disk-uuid[550]: Secondary Header is updated.
Feb 13 20:16:15.765948 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Feb 13 20:16:15.771095 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Feb 13 20:16:16.773246 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Feb 13 20:16:16.773589 disk-uuid[551]: The operation has completed successfully.
Feb 13 20:16:16.807593 systemd[1]: disk-uuid.service: Deactivated successfully.
Feb 13 20:16:16.807694 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Feb 13 20:16:16.835418 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
Feb 13 20:16:16.838322 sh[574]: Success
Feb 13 20:16:16.853251 kernel: device-mapper: verity: sha256 using implementation "sha256-ce"
Feb 13 20:16:16.898885 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
Feb 13 20:16:16.900177 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
Feb 13 20:16:16.903942 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
Feb 13 20:16:16.917274 kernel: BTRFS info (device dm-0): first mount of filesystem 4bb2b262-8ef2-48e3-80f4-24f9d7a85bf6
Feb 13 20:16:16.917323 kernel: BTRFS info (device dm-0): using crc32c (crc32c-generic) checksum algorithm
Feb 13 20:16:16.917337 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead
Feb 13 20:16:16.919825 kernel: BTRFS info (device dm-0): disabling log replay at mount time
Feb 13 20:16:16.919845 kernel: BTRFS info (device dm-0): using free space tree
Feb 13 20:16:16.925943 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
Feb 13 20:16:16.927330 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Feb 13 20:16:16.935388 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Feb 13 20:16:16.937097 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Feb 13 20:16:16.946428 kernel: BTRFS info (device vda6): first mount of filesystem 896fb6d3-4143-43a6-a44b-ca1ce10817e1
Feb 13 20:16:16.946479 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm
Feb 13 20:16:16.947323 kernel: BTRFS info (device vda6): using free space tree
Feb 13 20:16:16.950269 kernel: BTRFS info (device vda6): auto enabling async discard
Feb 13 20:16:16.960236 kernel: BTRFS info (device vda6): last unmount of filesystem 896fb6d3-4143-43a6-a44b-ca1ce10817e1
Feb 13 20:16:16.960259 systemd[1]: mnt-oem.mount: Deactivated successfully.
Feb 13 20:16:17.008427 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Feb 13 20:16:17.019426 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Feb 13 20:16:17.033477 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Feb 13 20:16:17.046389 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Feb 13 20:16:17.078394 systemd-networkd[757]: lo: Link UP
Feb 13 20:16:17.078407 systemd-networkd[757]: lo: Gained carrier
Feb 13 20:16:17.079089 systemd-networkd[757]: Enumeration completed
Feb 13 20:16:17.080397 systemd-networkd[757]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Feb 13 20:16:17.080401 systemd-networkd[757]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Feb 13 20:16:17.080869 systemd[1]: Started systemd-networkd.service - Network Configuration.
Feb 13 20:16:17.081299 systemd-networkd[757]: eth0: Link UP
Feb 13 20:16:17.081303 systemd-networkd[757]: eth0: Gained carrier
Feb 13 20:16:17.081310 systemd-networkd[757]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Feb 13 20:16:17.084394 systemd[1]: Reached target network.target - Network.
Feb 13 20:16:17.095273 systemd-networkd[757]: eth0: DHCPv4 address 10.0.0.6/16, gateway 10.0.0.1 acquired from 10.0.0.1
Feb 13 20:16:17.139037 ignition[730]: Ignition 2.19.0
Feb 13 20:16:17.139048 ignition[730]: Stage: fetch-offline
Feb 13 20:16:17.139086 ignition[730]: no configs at "/usr/lib/ignition/base.d"
Feb 13 20:16:17.139095 ignition[730]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Feb 13 20:16:17.139324 ignition[730]: parsed url from cmdline: ""
Feb 13 20:16:17.139327 ignition[730]: no config URL provided
Feb 13 20:16:17.139331 ignition[730]: reading system config file "/usr/lib/ignition/user.ign"
Feb 13 20:16:17.139339 ignition[730]: no config at "/usr/lib/ignition/user.ign"
Feb 13 20:16:17.139362 ignition[730]: op(1): [started] loading QEMU firmware config module
Feb 13 20:16:17.139367 ignition[730]: op(1): executing: "modprobe" "qemu_fw_cfg"
Feb 13 20:16:17.155222 ignition[730]: op(1): [finished] loading QEMU firmware config module
Feb 13 20:16:17.176721 ignition[730]: parsing config with SHA512: bdc73bc814c687191b64b1708908cbca7e94ec5a2ba4550a491db5c3516f4dcc835eb88f648dea7e55e839a4b9ac9a5c0932a35733f8ebdbde27d2391d5acbbd
Feb 13 20:16:17.182297 unknown[730]: fetched base config from "system"
Feb 13 20:16:17.182311 unknown[730]: fetched user config from "qemu"
Feb 13 20:16:17.183477 ignition[730]: fetch-offline: fetch-offline passed
Feb 13 20:16:17.185863 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Feb 13 20:16:17.183563 ignition[730]: Ignition finished successfully
Feb 13 20:16:17.187289 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json).
Feb 13 20:16:17.193380 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Feb 13 20:16:17.203872 ignition[770]: Ignition 2.19.0
Feb 13 20:16:17.203881 ignition[770]: Stage: kargs
Feb 13 20:16:17.204039 ignition[770]: no configs at "/usr/lib/ignition/base.d"
Feb 13 20:16:17.204054 ignition[770]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Feb 13 20:16:17.206670 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Feb 13 20:16:17.204969 ignition[770]: kargs: kargs passed
Feb 13 20:16:17.205015 ignition[770]: Ignition finished successfully
Feb 13 20:16:17.218407 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Feb 13 20:16:17.227517 ignition[778]: Ignition 2.19.0
Feb 13 20:16:17.227528 ignition[778]: Stage: disks
Feb 13 20:16:17.227707 ignition[778]: no configs at "/usr/lib/ignition/base.d"
Feb 13 20:16:17.230485 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Feb 13 20:16:17.227716 ignition[778]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Feb 13 20:16:17.231831 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Feb 13 20:16:17.228584 ignition[778]: disks: disks passed
Feb 13 20:16:17.233549 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Feb 13 20:16:17.228634 ignition[778]: Ignition finished successfully
Feb 13 20:16:17.235681 systemd[1]: Reached target local-fs.target - Local File Systems.
Feb 13 20:16:17.237625 systemd[1]: Reached target sysinit.target - System Initialization.
Feb 13 20:16:17.239124 systemd[1]: Reached target basic.target - Basic System.
Feb 13 20:16:17.254384 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Feb 13 20:16:17.268674 systemd-fsck[789]: ROOT: clean, 14/553520 files, 52654/553472 blocks
Feb 13 20:16:17.275768 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Feb 13 20:16:17.285342 systemd[1]: Mounting sysroot.mount - /sysroot...
Feb 13 20:16:17.329245 kernel: EXT4-fs (vda9): mounted filesystem 9957d679-c6c4-49f4-b1b2-c3c1f3ba5699 r/w with ordered data mode. Quota mode: none.
Feb 13 20:16:17.330034 systemd[1]: Mounted sysroot.mount - /sysroot.
Feb 13 20:16:17.331340 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Feb 13 20:16:17.344322 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Feb 13 20:16:17.346022 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Feb 13 20:16:17.347177 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met.
Feb 13 20:16:17.347239 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Feb 13 20:16:17.354307 kernel: BTRFS: device label OEM devid 1 transid 13 /dev/vda6 scanned by mount (797)
Feb 13 20:16:17.347262 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Feb 13 20:16:17.358798 kernel: BTRFS info (device vda6): first mount of filesystem 896fb6d3-4143-43a6-a44b-ca1ce10817e1
Feb 13 20:16:17.358818 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm
Feb 13 20:16:17.358828 kernel: BTRFS info (device vda6): using free space tree
Feb 13 20:16:17.352000 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Feb 13 20:16:17.360884 kernel: BTRFS info (device vda6): auto enabling async discard
Feb 13 20:16:17.357905 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Feb 13 20:16:17.364260 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Feb 13 20:16:17.404623 initrd-setup-root[821]: cut: /sysroot/etc/passwd: No such file or directory
Feb 13 20:16:17.409166 initrd-setup-root[828]: cut: /sysroot/etc/group: No such file or directory
Feb 13 20:16:17.413305 initrd-setup-root[835]: cut: /sysroot/etc/shadow: No such file or directory
Feb 13 20:16:17.417452 initrd-setup-root[842]: cut: /sysroot/etc/gshadow: No such file or directory
Feb 13 20:16:17.501025 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Feb 13 20:16:17.518373 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Feb 13 20:16:17.521089 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Feb 13 20:16:17.526229 kernel: BTRFS info (device vda6): last unmount of filesystem 896fb6d3-4143-43a6-a44b-ca1ce10817e1
Feb 13 20:16:17.545630 ignition[910]: INFO : Ignition 2.19.0
Feb 13 20:16:17.545630 ignition[910]: INFO : Stage: mount
Feb 13 20:16:17.545630 ignition[910]: INFO : no configs at "/usr/lib/ignition/base.d"
Feb 13 20:16:17.545630 ignition[910]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Feb 13 20:16:17.545630 ignition[910]: INFO : mount: mount passed
Feb 13 20:16:17.545630 ignition[910]: INFO : Ignition finished successfully
Feb 13 20:16:17.545245 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Feb 13 20:16:17.547788 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Feb 13 20:16:17.559329 systemd[1]: Starting ignition-files.service - Ignition (files)...
Feb 13 20:16:17.914935 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Feb 13 20:16:17.928472 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Feb 13 20:16:17.934231 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/vda6 scanned by mount (924)
Feb 13 20:16:17.936411 kernel: BTRFS info (device vda6): first mount of filesystem 896fb6d3-4143-43a6-a44b-ca1ce10817e1
Feb 13 20:16:17.936433 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm
Feb 13 20:16:17.936444 kernel: BTRFS info (device vda6): using free space tree
Feb 13 20:16:17.939230 kernel: BTRFS info (device vda6): auto enabling async discard
Feb 13 20:16:17.940426 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Feb 13 20:16:17.965049 ignition[941]: INFO : Ignition 2.19.0
Feb 13 20:16:17.965049 ignition[941]: INFO : Stage: files
Feb 13 20:16:17.966661 ignition[941]: INFO : no configs at "/usr/lib/ignition/base.d"
Feb 13 20:16:17.966661 ignition[941]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Feb 13 20:16:17.966661 ignition[941]: DEBUG : files: compiled without relabeling support, skipping
Feb 13 20:16:17.969991 ignition[941]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Feb 13 20:16:17.969991 ignition[941]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Feb 13 20:16:17.969991 ignition[941]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Feb 13 20:16:17.969991 ignition[941]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Feb 13 20:16:17.969991 ignition[941]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Feb 13 20:16:17.969145 unknown[941]: wrote ssh authorized keys file for user: core
Feb 13 20:16:17.977268 ignition[941]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/etc/flatcar-cgroupv1"
Feb 13 20:16:17.977268 ignition[941]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/etc/flatcar-cgroupv1"
Feb 13 20:16:17.977268 ignition[941]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-arm64.tar.gz"
Feb 13 20:16:17.977268 ignition[941]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://get.helm.sh/helm-v3.13.2-linux-arm64.tar.gz: attempt #1
Feb 13 20:16:18.043015 ignition[941]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK
Feb 13 20:16:18.330099 ignition[941]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-arm64.tar.gz"
Feb 13 20:16:18.330099 ignition[941]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/install.sh"
Feb 13 20:16:18.335024 ignition[941]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh"
Feb 13 20:16:18.335024 ignition[941]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nginx.yaml"
Feb 13 20:16:18.335024 ignition[941]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml"
Feb 13 20:16:18.335024 ignition[941]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Feb 13 20:16:18.335024 ignition[941]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Feb 13 20:16:18.335024 ignition[941]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Feb 13 20:16:18.335024 ignition[941]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Feb 13 20:16:18.335024 ignition[941]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf"
Feb 13 20:16:18.335024 ignition[941]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Feb 13 20:16:18.335024 ignition[941]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.30.1-arm64.raw"
Feb 13 20:16:18.335024 ignition[941]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.30.1-arm64.raw"
Feb 13 20:16:18.335024 ignition[941]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.30.1-arm64.raw"
Feb 13 20:16:18.335024 ignition[941]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.30.1-arm64.raw: attempt #1
Feb 13 20:16:18.472457 systemd-networkd[757]: eth0: Gained IPv6LL
Feb 13 20:16:18.689606 ignition[941]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET result: OK
Feb 13 20:16:18.892699 ignition[941]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.30.1-arm64.raw"
Feb 13 20:16:18.892699 ignition[941]: INFO : files: op(c): [started] processing unit "containerd.service"
Feb 13 20:16:18.896267 ignition[941]: INFO : files: op(c): op(d): [started] writing systemd drop-in "10-use-cgroupfs.conf" at "/sysroot/etc/systemd/system/containerd.service.d/10-use-cgroupfs.conf"
Feb 13 20:16:18.896267 ignition[941]: INFO : files: op(c): op(d): [finished] writing systemd drop-in "10-use-cgroupfs.conf" at "/sysroot/etc/systemd/system/containerd.service.d/10-use-cgroupfs.conf"
Feb 13 20:16:18.896267 ignition[941]: INFO : files: op(c): [finished] processing unit "containerd.service"
Feb 13 20:16:18.896267 ignition[941]: INFO : files: op(e): [started] processing unit "prepare-helm.service"
Feb 13 20:16:18.896267 ignition[941]: INFO : files: op(e): op(f): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Feb 13 20:16:18.896267 ignition[941]: INFO : files: op(e): op(f): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Feb 13 20:16:18.896267 ignition[941]: INFO : files: op(e): [finished] processing unit "prepare-helm.service"
Feb 13 20:16:18.896267 ignition[941]: INFO : files: op(10): [started] processing unit "coreos-metadata.service"
Feb 13 20:16:18.896267 ignition[941]: INFO : files: op(10): op(11): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Feb 13 20:16:18.896267 ignition[941]: INFO : files: op(10): op(11): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Feb 13 20:16:18.896267 ignition[941]: INFO : files: op(10): [finished] processing unit "coreos-metadata.service"
Feb 13 20:16:18.896267 ignition[941]: INFO : files: op(12): [started] setting preset to disabled for "coreos-metadata.service"
Feb 13 20:16:18.918041 ignition[941]: INFO : files: op(12): op(13): [started] removing enablement symlink(s) for "coreos-metadata.service"
Feb 13 20:16:18.921661 ignition[941]: INFO : files: op(12): op(13): [finished] removing enablement symlink(s) for "coreos-metadata.service"
Feb 13 20:16:18.924294 ignition[941]: INFO : files: op(12): [finished] setting preset to disabled for "coreos-metadata.service"
Feb 13 20:16:18.924294 ignition[941]: INFO : files: op(14): [started] setting preset to enabled for "prepare-helm.service"
Feb 13 20:16:18.924294 ignition[941]: INFO : files: op(14): [finished] setting preset to enabled for "prepare-helm.service"
Feb 13 20:16:18.924294 ignition[941]: INFO : files: createResultFile: createFiles: op(15): [started] writing file "/sysroot/etc/.ignition-result.json"
Feb 13 20:16:18.924294 ignition[941]: INFO : files: createResultFile: createFiles: op(15): [finished] writing file "/sysroot/etc/.ignition-result.json"
Feb 13 20:16:18.924294 ignition[941]: INFO : files: files passed
Feb 13 20:16:18.924294 ignition[941]: INFO : Ignition finished successfully
Feb 13 20:16:18.925102 systemd[1]: Finished ignition-files.service - Ignition (files).
Feb 13 20:16:18.936362 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Feb 13 20:16:18.938687 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Feb 13 20:16:18.940222 systemd[1]: ignition-quench.service: Deactivated successfully.
Feb 13 20:16:18.940308 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Feb 13 20:16:18.946454 initrd-setup-root-after-ignition[970]: grep: /sysroot/oem/oem-release: No such file or directory
Feb 13 20:16:18.948627 initrd-setup-root-after-ignition[972]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Feb 13 20:16:18.948627 initrd-setup-root-after-ignition[972]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Feb 13 20:16:18.951693 initrd-setup-root-after-ignition[976]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Feb 13 20:16:18.950559 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Feb 13 20:16:18.952972 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Feb 13 20:16:18.962350 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Feb 13 20:16:18.983616 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Feb 13 20:16:18.983724 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Feb 13 20:16:18.985935 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Feb 13 20:16:18.987788 systemd[1]: Reached target initrd.target - Initrd Default Target.
Feb 13 20:16:18.989687 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Feb 13 20:16:18.990434 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Feb 13 20:16:19.005856 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Feb 13 20:16:19.020440 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Feb 13 20:16:19.028418 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Feb 13 20:16:19.029721 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Feb 13 20:16:19.031900 systemd[1]: Stopped target timers.target - Timer Units.
Feb 13 20:16:19.033660 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Feb 13 20:16:19.033786 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Feb 13 20:16:19.036233 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Feb 13 20:16:19.038287 systemd[1]: Stopped target basic.target - Basic System.
Feb 13 20:16:19.039954 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Feb 13 20:16:19.041765 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Feb 13 20:16:19.043810 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Feb 13 20:16:19.045869 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Feb 13 20:16:19.047796 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
Feb 13 20:16:19.049822 systemd[1]: Stopped target sysinit.target - System Initialization.
Feb 13 20:16:19.051872 systemd[1]: Stopped target local-fs.target - Local File Systems.
Feb 13 20:16:19.053698 systemd[1]: Stopped target swap.target - Swaps.
Feb 13 20:16:19.055288 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Feb 13 20:16:19.055415 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
Feb 13 20:16:19.057844 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
Feb 13 20:16:19.059904 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Feb 13 20:16:19.061930 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
Feb 13 20:16:19.066269 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Feb 13 20:16:19.067579 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Feb 13 20:16:19.067700 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
Feb 13 20:16:19.070687 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Feb 13 20:16:19.070821 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
Feb 13 20:16:19.072915 systemd[1]: Stopped target paths.target - Path Units.
Feb 13 20:16:19.074547 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Feb 13 20:16:19.075371 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Feb 13 20:16:19.076813 systemd[1]: Stopped target slices.target - Slice Units.
Feb 13 20:16:19.078381 systemd[1]: Stopped target sockets.target - Socket Units.
Feb 13 20:16:19.080180 systemd[1]: iscsid.socket: Deactivated successfully.
Feb 13 20:16:19.080301 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
Feb 13 20:16:19.082515 systemd[1]: iscsiuio.socket: Deactivated successfully.
Feb 13 20:16:19.082605 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Feb 13 20:16:19.084231 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Feb 13 20:16:19.084357 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
Feb 13 20:16:19.086167 systemd[1]: ignition-files.service: Deactivated successfully.
Feb 13 20:16:19.086301 systemd[1]: Stopped ignition-files.service - Ignition (files).
Feb 13 20:16:19.099404 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
Feb 13 20:16:19.101034 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
Feb 13 20:16:19.102012 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Feb 13 20:16:19.102148 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
Feb 13 20:16:19.104174 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Feb 13 20:16:19.104310 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
Feb 13 20:16:19.110679 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Feb 13 20:16:19.111908 ignition[996]: INFO : Ignition 2.19.0
Feb 13 20:16:19.111908 ignition[996]: INFO : Stage: umount
Feb 13 20:16:19.111908 ignition[996]: INFO : no configs at "/usr/lib/ignition/base.d"
Feb 13 20:16:19.111908 ignition[996]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Feb 13 20:16:19.119198 ignition[996]: INFO : umount: umount passed
Feb 13 20:16:19.119198 ignition[996]: INFO : Ignition finished successfully
Feb 13 20:16:19.112248 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
Feb 13 20:16:19.115166 systemd[1]: ignition-mount.service: Deactivated successfully.
Feb 13 20:16:19.115287 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
Feb 13 20:16:19.117524 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Feb 13 20:16:19.118694 systemd[1]: Stopped target network.target - Network.
Feb 13 20:16:19.122075 systemd[1]: ignition-disks.service: Deactivated successfully.
Feb 13 20:16:19.122151 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
Feb 13 20:16:19.124137 systemd[1]: ignition-kargs.service: Deactivated successfully.
Feb 13 20:16:19.124199 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
Feb 13 20:16:19.126197 systemd[1]: ignition-setup.service: Deactivated successfully.
Feb 13 20:16:19.126263 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
Feb 13 20:16:19.128133 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
Feb 13 20:16:19.128198 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
Feb 13 20:16:19.130259 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
Feb 13 20:16:19.132007 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
Feb 13 20:16:19.136615 systemd[1]: systemd-resolved.service: Deactivated successfully.
Feb 13 20:16:19.136724 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
Feb 13 20:16:19.137264 systemd-networkd[757]: eth0: DHCPv6 lease lost
Feb 13 20:16:19.139110 systemd[1]: systemd-networkd.service: Deactivated successfully.
Feb 13 20:16:19.139750 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
Feb 13 20:16:19.141630 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Feb 13 20:16:19.141689 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
Feb 13 20:16:19.149417 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
Feb 13 20:16:19.150919 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Feb 13 20:16:19.150990 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Feb 13 20:16:19.153072 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Feb 13 20:16:19.153123 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Feb 13 20:16:19.154958 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Feb 13 20:16:19.155009 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
Feb 13 20:16:19.156980 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Feb 13 20:16:19.157028 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
Feb 13 20:16:19.159103 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
Feb 13 20:16:19.168773 systemd[1]: network-cleanup.service: Deactivated successfully.
Feb 13 20:16:19.168890 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
Feb 13 20:16:19.175825 systemd[1]: systemd-udevd.service: Deactivated successfully. Feb 13 20:16:19.175966 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Feb 13 20:16:19.178283 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Feb 13 20:16:19.178322 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Feb 13 20:16:19.180389 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Feb 13 20:16:19.180421 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Feb 13 20:16:19.182253 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Feb 13 20:16:19.182302 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Feb 13 20:16:19.185017 systemd[1]: dracut-cmdline.service: Deactivated successfully. Feb 13 20:16:19.185067 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Feb 13 20:16:19.187848 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Feb 13 20:16:19.187897 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Feb 13 20:16:19.202394 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Feb 13 20:16:19.203503 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Feb 13 20:16:19.203574 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Feb 13 20:16:19.205767 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully. Feb 13 20:16:19.205824 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Feb 13 20:16:19.207896 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Feb 13 20:16:19.207944 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Feb 13 20:16:19.210098 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Feb 13 20:16:19.210148 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Feb 13 20:16:19.212514 systemd[1]: sysroot-boot.service: Deactivated successfully. Feb 13 20:16:19.214239 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Feb 13 20:16:19.215688 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Feb 13 20:16:19.215770 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Feb 13 20:16:19.218469 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Feb 13 20:16:19.219575 systemd[1]: initrd-setup-root.service: Deactivated successfully. Feb 13 20:16:19.219637 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Feb 13 20:16:19.222393 systemd[1]: Starting initrd-switch-root.service - Switch Root... Feb 13 20:16:19.232942 systemd[1]: Switching root. Feb 13 20:16:19.270328 systemd-journald[237]: Journal stopped Feb 13 20:16:20.042368 systemd-journald[237]: Received SIGTERM from PID 1 (systemd). 
Feb 13 20:16:20.042428 kernel: SELinux: policy capability network_peer_controls=1 Feb 13 20:16:20.042441 kernel: SELinux: policy capability open_perms=1 Feb 13 20:16:20.042451 kernel: SELinux: policy capability extended_socket_class=1 Feb 13 20:16:20.042464 kernel: SELinux: policy capability always_check_network=0 Feb 13 20:16:20.042473 kernel: SELinux: policy capability cgroup_seclabel=1 Feb 13 20:16:20.042483 kernel: SELinux: policy capability nnp_nosuid_transition=1 Feb 13 20:16:20.042493 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Feb 13 20:16:20.042502 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Feb 13 20:16:20.042512 kernel: audit: type=1403 audit(1739477779.496:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 Feb 13 20:16:20.042523 systemd[1]: Successfully loaded SELinux policy in 31.135ms. Feb 13 20:16:20.042544 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 9.233ms. Feb 13 20:16:20.042556 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Feb 13 20:16:20.042570 systemd[1]: Detected virtualization kvm. Feb 13 20:16:20.042580 systemd[1]: Detected architecture arm64. Feb 13 20:16:20.042591 systemd[1]: Detected first boot. Feb 13 20:16:20.042601 systemd[1]: Initializing machine ID from VM UUID. Feb 13 20:16:20.042612 zram_generator::config[1061]: No configuration found. Feb 13 20:16:20.042622 systemd[1]: Populated /etc with preset unit settings. Feb 13 20:16:20.042633 systemd[1]: Queued start job for default target multi-user.target. Feb 13 20:16:20.042645 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6. Feb 13 20:16:20.042658 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Feb 13 20:16:20.042669 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Feb 13 20:16:20.042679 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Feb 13 20:16:20.042690 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Feb 13 20:16:20.042701 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Feb 13 20:16:20.042712 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Feb 13 20:16:20.042722 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Feb 13 20:16:20.042733 systemd[1]: Created slice user.slice - User and Session Slice. Feb 13 20:16:20.042746 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Feb 13 20:16:20.042756 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Feb 13 20:16:20.042767 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Feb 13 20:16:20.042778 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Feb 13 20:16:20.042788 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. Feb 13 20:16:20.042799 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... 
Feb 13 20:16:20.042810 systemd[1]: Expecting device dev-ttyAMA0.device - /dev/ttyAMA0... Feb 13 20:16:20.042820 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Feb 13 20:16:20.042831 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Feb 13 20:16:20.042843 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Feb 13 20:16:20.042854 systemd[1]: Reached target remote-fs.target - Remote File Systems. Feb 13 20:16:20.042864 systemd[1]: Reached target slices.target - Slice Units. Feb 13 20:16:20.042875 systemd[1]: Reached target swap.target - Swaps. Feb 13 20:16:20.042886 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Feb 13 20:16:20.042897 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Feb 13 20:16:20.042907 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Feb 13 20:16:20.042917 systemd[1]: Listening on systemd-journald.socket - Journal Socket. Feb 13 20:16:20.042928 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Feb 13 20:16:20.042940 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Feb 13 20:16:20.042951 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Feb 13 20:16:20.042962 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Feb 13 20:16:20.042972 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Feb 13 20:16:20.042982 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Feb 13 20:16:20.042993 systemd[1]: Mounting media.mount - External Media Directory... Feb 13 20:16:20.043003 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Feb 13 20:16:20.043014 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Feb 13 20:16:20.043024 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... Feb 13 20:16:20.043036 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Feb 13 20:16:20.043047 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Feb 13 20:16:20.043058 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Feb 13 20:16:20.043069 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Feb 13 20:16:20.043079 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Feb 13 20:16:20.043089 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Feb 13 20:16:20.043100 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Feb 13 20:16:20.043111 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Feb 13 20:16:20.043123 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Feb 13 20:16:20.043134 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Feb 13 20:16:20.043145 systemd[1]: systemd-journald.service: unit configures an IP firewall, but the local system does not support BPF/cgroup firewalling. Feb 13 20:16:20.043156 systemd[1]: systemd-journald.service: (This warning is only shown for the first unit using IP firewalling.) 
Feb 13 20:16:20.043166 kernel: ACPI: bus type drm_connector registered Feb 13 20:16:20.043176 kernel: fuse: init (API version 7.39) Feb 13 20:16:20.043195 systemd[1]: Starting systemd-journald.service - Journal Service... Feb 13 20:16:20.043208 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Feb 13 20:16:20.043226 kernel: loop: module loaded Feb 13 20:16:20.043239 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Feb 13 20:16:20.043250 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Feb 13 20:16:20.043260 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Feb 13 20:16:20.043271 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Feb 13 20:16:20.043281 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Feb 13 20:16:20.043315 systemd-journald[1147]: Collecting audit messages is disabled. Feb 13 20:16:20.043336 systemd[1]: Mounted media.mount - External Media Directory. Feb 13 20:16:20.043349 systemd-journald[1147]: Journal started Feb 13 20:16:20.043370 systemd-journald[1147]: Runtime Journal (/run/log/journal/db4c5402ecaf4752a71484d2227f7253) is 5.9M, max 47.3M, 41.4M free. Feb 13 20:16:20.046299 systemd[1]: Started systemd-journald.service - Journal Service. Feb 13 20:16:20.047287 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Feb 13 20:16:20.048564 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Feb 13 20:16:20.049797 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Feb 13 20:16:20.051068 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Feb 13 20:16:20.052547 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Feb 13 20:16:20.054060 systemd[1]: modprobe@configfs.service: Deactivated successfully. Feb 13 20:16:20.054240 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Feb 13 20:16:20.055695 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Feb 13 20:16:20.055853 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Feb 13 20:16:20.057228 systemd[1]: modprobe@drm.service: Deactivated successfully. Feb 13 20:16:20.057383 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Feb 13 20:16:20.058693 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Feb 13 20:16:20.058850 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Feb 13 20:16:20.060316 systemd[1]: modprobe@fuse.service: Deactivated successfully. Feb 13 20:16:20.060469 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Feb 13 20:16:20.062006 systemd[1]: modprobe@loop.service: Deactivated successfully. Feb 13 20:16:20.062248 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Feb 13 20:16:20.063622 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Feb 13 20:16:20.065233 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Feb 13 20:16:20.066916 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Feb 13 20:16:20.078566 systemd[1]: Reached target network-pre.target - Preparation for Network. Feb 13 20:16:20.089334 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... 
Feb 13 20:16:20.091478 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Feb 13 20:16:20.092685 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Feb 13 20:16:20.094489 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Feb 13 20:16:20.096751 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Feb 13 20:16:20.097966 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Feb 13 20:16:20.101400 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Feb 13 20:16:20.102646 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Feb 13 20:16:20.103705 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Feb 13 20:16:20.104735 systemd-journald[1147]: Time spent on flushing to /var/log/journal/db4c5402ecaf4752a71484d2227f7253 is 20.831ms for 844 entries. Feb 13 20:16:20.104735 systemd-journald[1147]: System Journal (/var/log/journal/db4c5402ecaf4752a71484d2227f7253) is 8.0M, max 195.6M, 187.6M free. Feb 13 20:16:20.129813 systemd-journald[1147]: Received client request to flush runtime journal. Feb 13 20:16:20.108569 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Feb 13 20:16:20.113756 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Feb 13 20:16:20.115163 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Feb 13 20:16:20.117166 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Feb 13 20:16:20.124392 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization... Feb 13 20:16:20.125850 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Feb 13 20:16:20.132106 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Feb 13 20:16:20.134368 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Feb 13 20:16:20.137142 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Feb 13 20:16:20.141670 udevadm[1201]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation.service, lvm2-activation-early.service not to pull it in. Feb 13 20:16:20.142982 systemd-tmpfiles[1193]: ACLs are not supported, ignoring. Feb 13 20:16:20.142999 systemd-tmpfiles[1193]: ACLs are not supported, ignoring. Feb 13 20:16:20.147163 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Feb 13 20:16:20.158504 systemd[1]: Starting systemd-sysusers.service - Create System Users... Feb 13 20:16:20.180722 systemd[1]: Finished systemd-sysusers.service - Create System Users. Feb 13 20:16:20.193488 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Feb 13 20:16:20.205028 systemd-tmpfiles[1214]: ACLs are not supported, ignoring. Feb 13 20:16:20.205345 systemd-tmpfiles[1214]: ACLs are not supported, ignoring. Feb 13 20:16:20.209076 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Feb 13 20:16:20.536551 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. 
Feb 13 20:16:20.548470 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Feb 13 20:16:20.567654 systemd-udevd[1220]: Using default interface naming scheme 'v255'. Feb 13 20:16:20.581497 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Feb 13 20:16:20.597495 systemd[1]: Starting systemd-networkd.service - Network Configuration... Feb 13 20:16:20.601806 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Feb 13 20:16:20.613161 systemd[1]: Found device dev-ttyAMA0.device - /dev/ttyAMA0. Feb 13 20:16:20.629248 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 40 scanned by (udev-worker) (1227) Feb 13 20:16:20.663722 systemd[1]: Started systemd-userdbd.service - User Database Manager. Feb 13 20:16:20.674534 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Feb 13 20:16:20.712446 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Feb 13 20:16:20.720693 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization. Feb 13 20:16:20.723577 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes... Feb 13 20:16:20.724590 systemd-networkd[1224]: lo: Link UP Feb 13 20:16:20.724839 systemd-networkd[1224]: lo: Gained carrier Feb 13 20:16:20.725590 systemd-networkd[1224]: Enumeration completed Feb 13 20:16:20.725758 systemd[1]: Started systemd-networkd.service - Network Configuration. Feb 13 20:16:20.726151 systemd-networkd[1224]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Feb 13 20:16:20.726155 systemd-networkd[1224]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Feb 13 20:16:20.726905 systemd-networkd[1224]: eth0: Link UP Feb 13 20:16:20.726908 systemd-networkd[1224]: eth0: Gained carrier Feb 13 20:16:20.726919 systemd-networkd[1224]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Feb 13 20:16:20.729736 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Feb 13 20:16:20.739801 lvm[1257]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Feb 13 20:16:20.750157 systemd-networkd[1224]: eth0: DHCPv4 address 10.0.0.6/16, gateway 10.0.0.1 acquired from 10.0.0.1 Feb 13 20:16:20.758131 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Feb 13 20:16:20.764727 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes. Feb 13 20:16:20.766326 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Feb 13 20:16:20.778382 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes... Feb 13 20:16:20.781565 lvm[1266]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Feb 13 20:16:20.815587 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes. Feb 13 20:16:20.817075 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Feb 13 20:16:20.818331 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Feb 13 20:16:20.818366 systemd[1]: Reached target local-fs.target - Local File Systems. 
Feb 13 20:16:20.819369 systemd[1]: Reached target machines.target - Containers. Feb 13 20:16:20.821276 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink). Feb 13 20:16:20.833390 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Feb 13 20:16:20.835635 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Feb 13 20:16:20.836808 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Feb 13 20:16:20.837754 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Feb 13 20:16:20.840395 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk... Feb 13 20:16:20.844373 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Feb 13 20:16:20.849624 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Feb 13 20:16:20.857276 kernel: loop0: detected capacity change from 0 to 114328 Feb 13 20:16:20.863136 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Feb 13 20:16:20.865727 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Feb 13 20:16:20.867699 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk. Feb 13 20:16:20.870384 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Feb 13 20:16:20.896242 kernel: loop1: detected capacity change from 0 to 194096 Feb 13 20:16:20.931248 kernel: loop2: detected capacity change from 0 to 114432 Feb 13 20:16:20.972262 kernel: loop3: detected capacity change from 0 to 114328 Feb 13 20:16:20.977550 kernel: loop4: detected capacity change from 0 to 194096 Feb 13 20:16:20.984226 kernel: loop5: detected capacity change from 0 to 114432 Feb 13 20:16:20.987333 (sd-merge)[1287]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes'. Feb 13 20:16:20.987706 (sd-merge)[1287]: Merged extensions into '/usr'. Feb 13 20:16:20.992173 systemd[1]: Reloading requested from client PID 1274 ('systemd-sysext') (unit systemd-sysext.service)... Feb 13 20:16:20.992194 systemd[1]: Reloading... Feb 13 20:16:21.033410 zram_generator::config[1313]: No configuration found. Feb 13 20:16:21.080657 ldconfig[1271]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Feb 13 20:16:21.126691 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Feb 13 20:16:21.169922 systemd[1]: Reloading finished in 177 ms. Feb 13 20:16:21.184961 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Feb 13 20:16:21.186493 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Feb 13 20:16:21.202367 systemd[1]: Starting ensure-sysext.service... Feb 13 20:16:21.204305 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Feb 13 20:16:21.209421 systemd[1]: Reloading requested from client PID 1356 ('systemctl') (unit ensure-sysext.service)... Feb 13 20:16:21.209436 systemd[1]: Reloading... Feb 13 20:16:21.221095 systemd-tmpfiles[1357]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. 
Feb 13 20:16:21.221397 systemd-tmpfiles[1357]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Feb 13 20:16:21.222025 systemd-tmpfiles[1357]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Feb 13 20:16:21.222299 systemd-tmpfiles[1357]: ACLs are not supported, ignoring. Feb 13 20:16:21.222354 systemd-tmpfiles[1357]: ACLs are not supported, ignoring. Feb 13 20:16:21.224644 systemd-tmpfiles[1357]: Detected autofs mount point /boot during canonicalization of boot. Feb 13 20:16:21.224659 systemd-tmpfiles[1357]: Skipping /boot Feb 13 20:16:21.231711 systemd-tmpfiles[1357]: Detected autofs mount point /boot during canonicalization of boot. Feb 13 20:16:21.231729 systemd-tmpfiles[1357]: Skipping /boot Feb 13 20:16:21.259249 zram_generator::config[1390]: No configuration found. Feb 13 20:16:21.342237 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Feb 13 20:16:21.384924 systemd[1]: Reloading finished in 175 ms. Feb 13 20:16:21.399575 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Feb 13 20:16:21.417789 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Feb 13 20:16:21.420157 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Feb 13 20:16:21.422343 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Feb 13 20:16:21.425342 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Feb 13 20:16:21.430317 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Feb 13 20:16:21.436539 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Feb 13 20:16:21.439028 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Feb 13 20:16:21.449442 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Feb 13 20:16:21.451580 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Feb 13 20:16:21.452802 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Feb 13 20:16:21.453783 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Feb 13 20:16:21.455498 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Feb 13 20:16:21.455636 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Feb 13 20:16:21.459654 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Feb 13 20:16:21.459794 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Feb 13 20:16:21.461560 systemd[1]: modprobe@loop.service: Deactivated successfully. Feb 13 20:16:21.461746 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Feb 13 20:16:21.467381 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Feb 13 20:16:21.472477 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Feb 13 20:16:21.478399 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Feb 13 20:16:21.481443 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... 
Feb 13 20:16:21.484358 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Feb 13 20:16:21.486411 systemd[1]: Starting systemd-update-done.service - Update is Completed... Feb 13 20:16:21.489372 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Feb 13 20:16:21.490968 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Feb 13 20:16:21.491100 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Feb 13 20:16:21.491760 augenrules[1467]: No rules Feb 13 20:16:21.492967 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Feb 13 20:16:21.494573 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Feb 13 20:16:21.494712 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Feb 13 20:16:21.496436 systemd[1]: modprobe@loop.service: Deactivated successfully. Feb 13 20:16:21.496613 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Feb 13 20:16:21.502306 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Feb 13 20:16:21.504221 systemd[1]: Finished systemd-update-done.service - Update is Completed. Feb 13 20:16:21.509109 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Feb 13 20:16:21.517348 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Feb 13 20:16:21.519313 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Feb 13 20:16:21.520715 systemd-resolved[1432]: Positive Trust Anchors: Feb 13 20:16:21.520731 systemd-resolved[1432]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Feb 13 20:16:21.520762 systemd-resolved[1432]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Feb 13 20:16:21.522367 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Feb 13 20:16:21.526378 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Feb 13 20:16:21.527559 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Feb 13 20:16:21.527615 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Feb 13 20:16:21.528119 systemd[1]: Finished ensure-sysext.service. Feb 13 20:16:21.528247 systemd-resolved[1432]: Defaulting to hostname 'linux'. Feb 13 20:16:21.530114 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Feb 13 20:16:21.530452 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Feb 13 20:16:21.532154 systemd[1]: modprobe@drm.service: Deactivated successfully. Feb 13 20:16:21.532314 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. 
Feb 13 20:16:21.533592 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Feb 13 20:16:21.533736 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Feb 13 20:16:21.535147 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Feb 13 20:16:21.536559 systemd[1]: modprobe@loop.service: Deactivated successfully. Feb 13 20:16:21.536750 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Feb 13 20:16:21.541935 systemd[1]: Reached target network.target - Network. Feb 13 20:16:21.542870 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Feb 13 20:16:21.544053 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Feb 13 20:16:21.544126 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Feb 13 20:16:21.557430 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization... Feb 13 20:16:21.599868 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization. Feb 13 20:16:21.600580 systemd-timesyncd[1500]: Contacted time server 10.0.0.1:123 (10.0.0.1). Feb 13 20:16:21.600626 systemd-timesyncd[1500]: Initial clock synchronization to Thu 2025-02-13 20:16:21.634834 UTC. Feb 13 20:16:21.601456 systemd[1]: Reached target sysinit.target - System Initialization. Feb 13 20:16:21.602562 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Feb 13 20:16:21.603770 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Feb 13 20:16:21.604993 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Feb 13 20:16:21.606244 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Feb 13 20:16:21.606279 systemd[1]: Reached target paths.target - Path Units. Feb 13 20:16:21.607139 systemd[1]: Reached target time-set.target - System Time Set. Feb 13 20:16:21.608305 systemd[1]: Started logrotate.timer - Daily rotation of log files. Feb 13 20:16:21.609421 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Feb 13 20:16:21.610611 systemd[1]: Reached target timers.target - Timer Units. Feb 13 20:16:21.612128 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Feb 13 20:16:21.614473 systemd[1]: Starting docker.socket - Docker Socket for the API... Feb 13 20:16:21.616689 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Feb 13 20:16:21.622189 systemd[1]: Listening on docker.socket - Docker Socket for the API. Feb 13 20:16:21.623252 systemd[1]: Reached target sockets.target - Socket Units. Feb 13 20:16:21.624188 systemd[1]: Reached target basic.target - Basic System. Feb 13 20:16:21.625242 systemd[1]: System is tainted: cgroupsv1 Feb 13 20:16:21.625289 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Feb 13 20:16:21.625308 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Feb 13 20:16:21.626342 systemd[1]: Starting containerd.service - containerd container runtime... Feb 13 20:16:21.628302 systemd[1]: Starting dbus.service - D-Bus System Message Bus... 
Feb 13 20:16:21.630114 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Feb 13 20:16:21.634374 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Feb 13 20:16:21.635370 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Feb 13 20:16:21.636356 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Feb 13 20:16:21.643289 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Feb 13 20:16:21.644319 jq[1506]: false Feb 13 20:16:21.650382 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Feb 13 20:16:21.652512 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Feb 13 20:16:21.658030 systemd[1]: Starting systemd-logind.service - User Login Management... Feb 13 20:16:21.663844 extend-filesystems[1508]: Found loop3 Feb 13 20:16:21.663844 extend-filesystems[1508]: Found loop4 Feb 13 20:16:21.663844 extend-filesystems[1508]: Found loop5 Feb 13 20:16:21.663844 extend-filesystems[1508]: Found vda Feb 13 20:16:21.663844 extend-filesystems[1508]: Found vda1 Feb 13 20:16:21.663844 extend-filesystems[1508]: Found vda2 Feb 13 20:16:21.663844 extend-filesystems[1508]: Found vda3 Feb 13 20:16:21.663844 extend-filesystems[1508]: Found usr Feb 13 20:16:21.663844 extend-filesystems[1508]: Found vda4 Feb 13 20:16:21.663844 extend-filesystems[1508]: Found vda6 Feb 13 20:16:21.663844 extend-filesystems[1508]: Found vda7 Feb 13 20:16:21.663844 extend-filesystems[1508]: Found vda9 Feb 13 20:16:21.663844 extend-filesystems[1508]: Checking size of /dev/vda9 Feb 13 20:16:21.694351 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 1864699 blocks Feb 13 20:16:21.662603 dbus-daemon[1505]: [system] SELinux support is enabled Feb 13 20:16:21.665340 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Feb 13 20:16:21.694672 extend-filesystems[1508]: Resized partition /dev/vda9 Feb 13 20:16:21.667415 systemd[1]: Starting update-engine.service - Update Engine... Feb 13 20:16:21.696252 extend-filesystems[1534]: resize2fs 1.47.1 (20-May-2024) Feb 13 20:16:21.672332 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Feb 13 20:16:21.697341 jq[1528]: true Feb 13 20:16:21.677409 systemd[1]: Started dbus.service - D-Bus System Message Bus. Feb 13 20:16:21.682443 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Feb 13 20:16:21.682659 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Feb 13 20:16:21.682882 systemd[1]: motdgen.service: Deactivated successfully. Feb 13 20:16:21.683070 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Feb 13 20:16:21.686630 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Feb 13 20:16:21.686835 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. 
Feb 13 20:16:21.706374 (ntainerd)[1539]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Feb 13 20:16:21.707088 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Feb 13 20:16:21.707121 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Feb 13 20:16:21.709748 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Feb 13 20:16:21.709776 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Feb 13 20:16:21.716656 jq[1538]: true Feb 13 20:16:21.717266 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 40 scanned by (udev-worker) (1228) Feb 13 20:16:21.727663 update_engine[1527]: I20250213 20:16:21.727066 1527 main.cc:92] Flatcar Update Engine starting Feb 13 20:16:21.733281 update_engine[1527]: I20250213 20:16:21.733232 1527 update_check_scheduler.cc:74] Next update check in 5m34s Feb 13 20:16:21.733384 systemd[1]: Started update-engine.service - Update Engine. Feb 13 20:16:21.734928 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Feb 13 20:16:21.738362 kernel: EXT4-fs (vda9): resized filesystem to 1864699 Feb 13 20:16:21.739221 tar[1536]: linux-arm64/helm Feb 13 20:16:21.742951 systemd[1]: Started locksmithd.service - Cluster reboot manager. Feb 13 20:16:21.753637 systemd-logind[1521]: Watching system buttons on /dev/input/event0 (Power Button) Feb 13 20:16:21.754012 extend-filesystems[1534]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required Feb 13 20:16:21.754012 extend-filesystems[1534]: old_desc_blocks = 1, new_desc_blocks = 1 Feb 13 20:16:21.754012 extend-filesystems[1534]: The filesystem on /dev/vda9 is now 1864699 (4k) blocks long. Feb 13 20:16:21.767309 extend-filesystems[1508]: Resized filesystem in /dev/vda9 Feb 13 20:16:21.755651 systemd-logind[1521]: New seat seat0. Feb 13 20:16:21.757095 systemd[1]: extend-filesystems.service: Deactivated successfully. Feb 13 20:16:21.757390 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Feb 13 20:16:21.767623 systemd[1]: Started systemd-logind.service - User Login Management. Feb 13 20:16:21.807538 bash[1568]: Updated "/home/core/.ssh/authorized_keys" Feb 13 20:16:21.807541 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Feb 13 20:16:21.810003 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met. Feb 13 20:16:21.815606 locksmithd[1556]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Feb 13 20:16:21.925363 containerd[1539]: time="2025-02-13T20:16:21.925228160Z" level=info msg="starting containerd" revision=174e0d1785eeda18dc2beba45e1d5a188771636b version=v1.7.21 Feb 13 20:16:21.949879 containerd[1539]: time="2025-02-13T20:16:21.949840680Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Feb 13 20:16:21.951263 containerd[1539]: time="2025-02-13T20:16:21.951227880Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." 
error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.74-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Feb 13 20:16:21.951685 containerd[1539]: time="2025-02-13T20:16:21.951381960Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Feb 13 20:16:21.951685 containerd[1539]: time="2025-02-13T20:16:21.951415880Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Feb 13 20:16:21.951685 containerd[1539]: time="2025-02-13T20:16:21.951549720Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1 Feb 13 20:16:21.951685 containerd[1539]: time="2025-02-13T20:16:21.951566960Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1 Feb 13 20:16:21.951685 containerd[1539]: time="2025-02-13T20:16:21.951615920Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1 Feb 13 20:16:21.951685 containerd[1539]: time="2025-02-13T20:16:21.951629800Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Feb 13 20:16:21.952134 containerd[1539]: time="2025-02-13T20:16:21.952111880Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Feb 13 20:16:21.952610 containerd[1539]: time="2025-02-13T20:16:21.952259920Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Feb 13 20:16:21.952610 containerd[1539]: time="2025-02-13T20:16:21.952285520Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1 Feb 13 20:16:21.952610 containerd[1539]: time="2025-02-13T20:16:21.952295800Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Feb 13 20:16:21.952610 containerd[1539]: time="2025-02-13T20:16:21.952390520Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Feb 13 20:16:21.952610 containerd[1539]: time="2025-02-13T20:16:21.952580760Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Feb 13 20:16:21.952981 containerd[1539]: time="2025-02-13T20:16:21.952951360Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Feb 13 20:16:21.953090 containerd[1539]: time="2025-02-13T20:16:21.953073640Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Feb 13 20:16:21.953295 containerd[1539]: time="2025-02-13T20:16:21.953277240Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." 
type=io.containerd.metadata.v1 Feb 13 20:16:21.953451 containerd[1539]: time="2025-02-13T20:16:21.953435000Z" level=info msg="metadata content store policy set" policy=shared Feb 13 20:16:21.957407 containerd[1539]: time="2025-02-13T20:16:21.957005040Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Feb 13 20:16:21.957407 containerd[1539]: time="2025-02-13T20:16:21.957043840Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Feb 13 20:16:21.957407 containerd[1539]: time="2025-02-13T20:16:21.957064080Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1 Feb 13 20:16:21.957407 containerd[1539]: time="2025-02-13T20:16:21.957083240Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1 Feb 13 20:16:21.957407 containerd[1539]: time="2025-02-13T20:16:21.957100760Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Feb 13 20:16:21.957407 containerd[1539]: time="2025-02-13T20:16:21.957248920Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Feb 13 20:16:21.963608 containerd[1539]: time="2025-02-13T20:16:21.963577440Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Feb 13 20:16:21.964584 containerd[1539]: time="2025-02-13T20:16:21.964554640Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2 Feb 13 20:16:21.964735 containerd[1539]: time="2025-02-13T20:16:21.964712920Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1 Feb 13 20:16:21.964794 containerd[1539]: time="2025-02-13T20:16:21.964780360Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1 Feb 13 20:16:21.965386 containerd[1539]: time="2025-02-13T20:16:21.964872480Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Feb 13 20:16:21.965386 containerd[1539]: time="2025-02-13T20:16:21.964896440Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Feb 13 20:16:21.965386 containerd[1539]: time="2025-02-13T20:16:21.964926600Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Feb 13 20:16:21.965386 containerd[1539]: time="2025-02-13T20:16:21.964949160Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Feb 13 20:16:21.965386 containerd[1539]: time="2025-02-13T20:16:21.964964160Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Feb 13 20:16:21.965386 containerd[1539]: time="2025-02-13T20:16:21.964980880Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Feb 13 20:16:21.965386 containerd[1539]: time="2025-02-13T20:16:21.964996600Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Feb 13 20:16:21.965386 containerd[1539]: time="2025-02-13T20:16:21.965023160Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." 
type=io.containerd.service.v1 Feb 13 20:16:21.965386 containerd[1539]: time="2025-02-13T20:16:21.965048400Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Feb 13 20:16:21.965386 containerd[1539]: time="2025-02-13T20:16:21.965071840Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Feb 13 20:16:21.965386 containerd[1539]: time="2025-02-13T20:16:21.965088680Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Feb 13 20:16:21.965386 containerd[1539]: time="2025-02-13T20:16:21.965104280Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Feb 13 20:16:21.965386 containerd[1539]: time="2025-02-13T20:16:21.965116960Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Feb 13 20:16:21.965386 containerd[1539]: time="2025-02-13T20:16:21.965132680Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Feb 13 20:16:21.965855 containerd[1539]: time="2025-02-13T20:16:21.965147240Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Feb 13 20:16:21.965855 containerd[1539]: time="2025-02-13T20:16:21.965163960Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Feb 13 20:16:21.965855 containerd[1539]: time="2025-02-13T20:16:21.965192160Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1 Feb 13 20:16:21.965855 containerd[1539]: time="2025-02-13T20:16:21.965226880Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1 Feb 13 20:16:21.965855 containerd[1539]: time="2025-02-13T20:16:21.965244160Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Feb 13 20:16:21.965855 containerd[1539]: time="2025-02-13T20:16:21.965256440Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1 Feb 13 20:16:21.965855 containerd[1539]: time="2025-02-13T20:16:21.965272920Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Feb 13 20:16:21.965855 containerd[1539]: time="2025-02-13T20:16:21.965293760Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1 Feb 13 20:16:21.965855 containerd[1539]: time="2025-02-13T20:16:21.965319240Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1 Feb 13 20:16:21.965855 containerd[1539]: time="2025-02-13T20:16:21.965335160Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Feb 13 20:16:21.965855 containerd[1539]: time="2025-02-13T20:16:21.965349440Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Feb 13 20:16:21.965855 containerd[1539]: time="2025-02-13T20:16:21.965723120Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Feb 13 20:16:21.965855 containerd[1539]: time="2025-02-13T20:16:21.965763160Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." 
error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1 Feb 13 20:16:21.965855 containerd[1539]: time="2025-02-13T20:16:21.965779160Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Feb 13 20:16:21.966087 containerd[1539]: time="2025-02-13T20:16:21.965797320Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1 Feb 13 20:16:21.966087 containerd[1539]: time="2025-02-13T20:16:21.965810840Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Feb 13 20:16:21.966087 containerd[1539]: time="2025-02-13T20:16:21.965825600Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1 Feb 13 20:16:21.966087 containerd[1539]: time="2025-02-13T20:16:21.965839000Z" level=info msg="NRI interface is disabled by configuration." Feb 13 20:16:21.966087 containerd[1539]: time="2025-02-13T20:16:21.965854120Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1 Feb 13 20:16:21.967246 containerd[1539]: time="2025-02-13T20:16:21.966431400Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:false] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:false SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false 
IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Feb 13 20:16:21.967246 containerd[1539]: time="2025-02-13T20:16:21.966551800Z" level=info msg="Connect containerd service" Feb 13 20:16:21.967246 containerd[1539]: time="2025-02-13T20:16:21.966588080Z" level=info msg="using legacy CRI server" Feb 13 20:16:21.967246 containerd[1539]: time="2025-02-13T20:16:21.966595440Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Feb 13 20:16:21.967246 containerd[1539]: time="2025-02-13T20:16:21.966669720Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Feb 13 20:16:21.967500 containerd[1539]: time="2025-02-13T20:16:21.967441120Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Feb 13 20:16:21.967666 containerd[1539]: time="2025-02-13T20:16:21.967628880Z" level=info msg="Start subscribing containerd event" Feb 13 20:16:21.967709 containerd[1539]: time="2025-02-13T20:16:21.967681800Z" level=info msg="Start recovering state" Feb 13 20:16:21.967762 containerd[1539]: time="2025-02-13T20:16:21.967748080Z" level=info msg="Start event monitor" Feb 13 20:16:21.967797 containerd[1539]: time="2025-02-13T20:16:21.967763760Z" level=info msg="Start snapshots syncer" Feb 13 20:16:21.967797 containerd[1539]: time="2025-02-13T20:16:21.967774160Z" level=info msg="Start cni network conf syncer for default" Feb 13 20:16:21.967797 containerd[1539]: time="2025-02-13T20:16:21.967781800Z" level=info msg="Start streaming server" Feb 13 20:16:21.968097 containerd[1539]: time="2025-02-13T20:16:21.968078200Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Feb 13 20:16:21.968258 containerd[1539]: time="2025-02-13T20:16:21.968130520Z" level=info msg=serving... address=/run/containerd/containerd.sock Feb 13 20:16:21.968420 systemd[1]: Started containerd.service - containerd container runtime. Feb 13 20:16:21.969589 containerd[1539]: time="2025-02-13T20:16:21.968305480Z" level=info msg="containerd successfully booted in 0.046842s" Feb 13 20:16:22.056378 systemd-networkd[1224]: eth0: Gained IPv6LL Feb 13 20:16:22.059998 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Feb 13 20:16:22.063444 systemd[1]: Reached target network-online.target - Network is Online. Feb 13 20:16:22.071965 systemd[1]: Starting coreos-metadata.service - QEMU metadata agent... Feb 13 20:16:22.074793 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Feb 13 20:16:22.077583 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Feb 13 20:16:22.097177 systemd[1]: coreos-metadata.service: Deactivated successfully. Feb 13 20:16:22.097462 systemd[1]: Finished coreos-metadata.service - QEMU metadata agent. Feb 13 20:16:22.100206 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Feb 13 20:16:22.117643 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. 
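containerd is now serving on /run/containerd/containerd.sock, but the CRI plugin reports "failed to load cni during init" because /etc/cni/net.d contains no network config this early in boot. A minimal sketch of those two readiness checks in Python, assuming only the paths quoted in the log (the helper names are illustrative, not containerd's own code):

import os

CONTAINERD_SOCK = "/run/containerd/containerd.sock"  # from "msg=serving..." above
CNI_CONF_DIR = "/etc/cni/net.d"                      # from the cni load error

def containerd_ready() -> bool:
    """containerd is up for clients once its UNIX socket exists."""
    return os.path.exists(CONTAINERD_SOCK)

def cni_configured() -> bool:
    """The CRI plugin wants at least one config file in the CNI conf dir."""
    try:
        return any(
            name.endswith((".conf", ".conflist", ".json"))
            for name in os.listdir(CNI_CONF_DIR)
        )
    except FileNotFoundError:
        return False

if __name__ == "__main__":
    print("containerd socket up:", containerd_ready())
    # False at this point in the boot, matching "cni plugin not initialized".
    print("CNI config present: ", cni_configured())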
Feb 13 20:16:22.119322 tar[1536]: linux-arm64/LICENSE Feb 13 20:16:22.119383 tar[1536]: linux-arm64/README.md Feb 13 20:16:22.133964 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Feb 13 20:16:22.561146 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Feb 13 20:16:22.564879 (kubelet)[1624]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Feb 13 20:16:22.791666 sshd_keygen[1529]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Feb 13 20:16:22.809802 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Feb 13 20:16:22.818514 systemd[1]: Starting issuegen.service - Generate /run/issue... Feb 13 20:16:22.823049 systemd[1]: issuegen.service: Deactivated successfully. Feb 13 20:16:22.823274 systemd[1]: Finished issuegen.service - Generate /run/issue. Feb 13 20:16:22.826179 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Feb 13 20:16:22.839362 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Feb 13 20:16:22.852560 systemd[1]: Started getty@tty1.service - Getty on tty1. Feb 13 20:16:22.854659 systemd[1]: Started serial-getty@ttyAMA0.service - Serial Getty on ttyAMA0. Feb 13 20:16:22.856048 systemd[1]: Reached target getty.target - Login Prompts. Feb 13 20:16:22.857159 systemd[1]: Reached target multi-user.target - Multi-User System. Feb 13 20:16:22.858388 systemd[1]: Startup finished in 5.371s (kernel) + 3.397s (userspace) = 8.769s. Feb 13 20:16:23.035782 kubelet[1624]: E0213 20:16:23.035727 1624 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Feb 13 20:16:23.038379 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Feb 13 20:16:23.038567 systemd[1]: kubelet.service: Failed with result 'exit-code'. Feb 13 20:16:28.028586 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Feb 13 20:16:28.040447 systemd[1]: Started sshd@0-10.0.0.6:22-10.0.0.1:39650.service - OpenSSH per-connection server daemon (10.0.0.1:39650). Feb 13 20:16:28.096324 sshd[1658]: Accepted publickey for core from 10.0.0.1 port 39650 ssh2: RSA SHA256:LaFXDQ5kJ4LcD/Es1CLb+6ve3k3017804ZdZMiXaUQg Feb 13 20:16:28.098922 sshd[1658]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 20:16:28.110797 systemd-logind[1521]: New session 1 of user core. Feb 13 20:16:28.111659 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Feb 13 20:16:28.129545 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Feb 13 20:16:28.140494 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Feb 13 20:16:28.142523 systemd[1]: Starting user@500.service - User Manager for UID 500... Feb 13 20:16:28.148868 (systemd)[1664]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Feb 13 20:16:28.229062 systemd[1664]: Queued start job for default target default.target. Feb 13 20:16:28.229394 systemd[1664]: Created slice app.slice - User Application Slice. Feb 13 20:16:28.229416 systemd[1664]: Reached target paths.target - Paths. Feb 13 20:16:28.229427 systemd[1664]: Reached target timers.target - Timers. 
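The kubelet failure above is the expected pre-kubeadm state: /var/lib/kubelet/config.yaml does not exist until "kubeadm init" writes it, so every early start exits with status 1 and systemd schedules a restart. A simplified Python stand-in for that startup gate (same check and exit path; not kubelet's actual loader):

import sys
from pathlib import Path

KUBELET_CONFIG = Path("/var/lib/kubelet/config.yaml")  # path from the error above

def load_kubelet_config(path: Path) -> str:
    if not path.is_file():
        # kubelet surfaces the underlying ENOENT the same way:
        # "open /var/lib/kubelet/config.yaml: no such file or directory"
        raise FileNotFoundError(f"failed to load Kubelet config file {path}")
    return path.read_text()

if __name__ == "__main__":
    try:
        load_kubelet_config(KUBELET_CONFIG)
    except FileNotFoundError as err:
        print(f'"command failed" err="{err}"', file=sys.stderr)
        sys.exit(1)  # systemd logs status=1/FAILURE and schedules a restart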
Feb 13 20:16:28.237292 systemd[1664]: Starting dbus.socket - D-Bus User Message Bus Socket... Feb 13 20:16:28.243361 systemd[1664]: Listening on dbus.socket - D-Bus User Message Bus Socket. Feb 13 20:16:28.243419 systemd[1664]: Reached target sockets.target - Sockets. Feb 13 20:16:28.243430 systemd[1664]: Reached target basic.target - Basic System. Feb 13 20:16:28.243469 systemd[1664]: Reached target default.target - Main User Target. Feb 13 20:16:28.243493 systemd[1664]: Startup finished in 89ms. Feb 13 20:16:28.243612 systemd[1]: Started user@500.service - User Manager for UID 500. Feb 13 20:16:28.244791 systemd[1]: Started session-1.scope - Session 1 of User core. Feb 13 20:16:28.303557 systemd[1]: Started sshd@1-10.0.0.6:22-10.0.0.1:39660.service - OpenSSH per-connection server daemon (10.0.0.1:39660). Feb 13 20:16:28.335618 sshd[1676]: Accepted publickey for core from 10.0.0.1 port 39660 ssh2: RSA SHA256:LaFXDQ5kJ4LcD/Es1CLb+6ve3k3017804ZdZMiXaUQg Feb 13 20:16:28.337309 sshd[1676]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 20:16:28.341511 systemd-logind[1521]: New session 2 of user core. Feb 13 20:16:28.352438 systemd[1]: Started session-2.scope - Session 2 of User core. Feb 13 20:16:28.404145 sshd[1676]: pam_unix(sshd:session): session closed for user core Feb 13 20:16:28.423555 systemd[1]: Started sshd@2-10.0.0.6:22-10.0.0.1:39662.service - OpenSSH per-connection server daemon (10.0.0.1:39662). Feb 13 20:16:28.423952 systemd[1]: sshd@1-10.0.0.6:22-10.0.0.1:39660.service: Deactivated successfully. Feb 13 20:16:28.425628 systemd-logind[1521]: Session 2 logged out. Waiting for processes to exit. Feb 13 20:16:28.426229 systemd[1]: session-2.scope: Deactivated successfully. Feb 13 20:16:28.427349 systemd-logind[1521]: Removed session 2. Feb 13 20:16:28.455425 sshd[1681]: Accepted publickey for core from 10.0.0.1 port 39662 ssh2: RSA SHA256:LaFXDQ5kJ4LcD/Es1CLb+6ve3k3017804ZdZMiXaUQg Feb 13 20:16:28.456668 sshd[1681]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 20:16:28.460426 systemd-logind[1521]: New session 3 of user core. Feb 13 20:16:28.468449 systemd[1]: Started session-3.scope - Session 3 of User core. Feb 13 20:16:28.515824 sshd[1681]: pam_unix(sshd:session): session closed for user core Feb 13 20:16:28.530479 systemd[1]: Started sshd@3-10.0.0.6:22-10.0.0.1:39666.service - OpenSSH per-connection server daemon (10.0.0.1:39666). Feb 13 20:16:28.530854 systemd[1]: sshd@2-10.0.0.6:22-10.0.0.1:39662.service: Deactivated successfully. Feb 13 20:16:28.533282 systemd[1]: session-3.scope: Deactivated successfully. Feb 13 20:16:28.533824 systemd-logind[1521]: Session 3 logged out. Waiting for processes to exit. Feb 13 20:16:28.534780 systemd-logind[1521]: Removed session 3. Feb 13 20:16:28.562368 sshd[1689]: Accepted publickey for core from 10.0.0.1 port 39666 ssh2: RSA SHA256:LaFXDQ5kJ4LcD/Es1CLb+6ve3k3017804ZdZMiXaUQg Feb 13 20:16:28.563625 sshd[1689]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 20:16:28.567422 systemd-logind[1521]: New session 4 of user core. Feb 13 20:16:28.577444 systemd[1]: Started session-4.scope - Session 4 of User core. Feb 13 20:16:28.629539 sshd[1689]: pam_unix(sshd:session): session closed for user core Feb 13 20:16:28.641489 systemd[1]: Started sshd@4-10.0.0.6:22-10.0.0.1:39674.service - OpenSSH per-connection server daemon (10.0.0.1:39674). 
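The per-connection unit names above (sshd@0-10.0.0.6:22-10.0.0.1:39650.service and so on) appear to follow systemd's instance naming for Accept=yes sockets: a connection counter, then the local endpoint, then the peer endpoint. A small parser sketch under that assumption, IPv4 only; the regex and field names are illustrative:

import re

# "<seq>-<local-ip>:<port>-<peer-ip>:<port>" is the assumed instance format.
UNIT_RE = re.compile(
    r"sshd@(?P<seq>\d+)"
    r"-(?P<local_ip>[^:]+):(?P<local_port>\d+)"
    r"-(?P<peer_ip>[^:]+):(?P<peer_port>\d+)\.service"
)

def parse_sshd_unit(unit: str) -> dict:
    match = UNIT_RE.fullmatch(unit)
    if match is None:
        raise ValueError(f"unexpected unit name: {unit}")
    return match.groupdict()

print(parse_sshd_unit("sshd@1-10.0.0.6:22-10.0.0.1:39660.service"))
# {'seq': '1', 'local_ip': '10.0.0.6', 'local_port': '22',
#  'peer_ip': '10.0.0.1', 'peer_port': '39660'}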
Feb 13 20:16:28.641855 systemd[1]: sshd@3-10.0.0.6:22-10.0.0.1:39666.service: Deactivated successfully. Feb 13 20:16:28.643685 systemd-logind[1521]: Session 4 logged out. Waiting for processes to exit. Feb 13 20:16:28.644195 systemd[1]: session-4.scope: Deactivated successfully. Feb 13 20:16:28.645540 systemd-logind[1521]: Removed session 4. Feb 13 20:16:28.673505 sshd[1697]: Accepted publickey for core from 10.0.0.1 port 39674 ssh2: RSA SHA256:LaFXDQ5kJ4LcD/Es1CLb+6ve3k3017804ZdZMiXaUQg Feb 13 20:16:28.674620 sshd[1697]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 20:16:28.678464 systemd-logind[1521]: New session 5 of user core. Feb 13 20:16:28.688448 systemd[1]: Started session-5.scope - Session 5 of User core. Feb 13 20:16:28.748407 sudo[1704]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Feb 13 20:16:28.748683 sudo[1704]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Feb 13 20:16:29.070458 systemd[1]: Starting docker.service - Docker Application Container Engine... Feb 13 20:16:29.070613 (dockerd)[1723]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Feb 13 20:16:29.351443 dockerd[1723]: time="2025-02-13T20:16:29.351317343Z" level=info msg="Starting up" Feb 13 20:16:29.644931 dockerd[1723]: time="2025-02-13T20:16:29.644816960Z" level=info msg="Loading containers: start." Feb 13 20:16:29.733290 kernel: Initializing XFRM netlink socket Feb 13 20:16:29.800428 systemd-networkd[1224]: docker0: Link UP Feb 13 20:16:29.819527 dockerd[1723]: time="2025-02-13T20:16:29.819480396Z" level=info msg="Loading containers: done." Feb 13 20:16:29.839678 dockerd[1723]: time="2025-02-13T20:16:29.839615936Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Feb 13 20:16:29.839838 dockerd[1723]: time="2025-02-13T20:16:29.839731333Z" level=info msg="Docker daemon" commit=061aa95809be396a6b5542618d8a34b02a21ff77 containerd-snapshotter=false storage-driver=overlay2 version=26.1.0 Feb 13 20:16:29.839868 dockerd[1723]: time="2025-02-13T20:16:29.839845368Z" level=info msg="Daemon has completed initialization" Feb 13 20:16:29.868709 dockerd[1723]: time="2025-02-13T20:16:29.868276101Z" level=info msg="API listen on /run/docker.sock" Feb 13 20:16:29.868743 systemd[1]: Started docker.service - Docker Application Container Engine. Feb 13 20:16:30.415971 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck2918425571-merged.mount: Deactivated successfully. Feb 13 20:16:30.671445 containerd[1539]: time="2025-02-13T20:16:30.671200328Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.30.10\"" Feb 13 20:16:31.233347 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4119494166.mount: Deactivated successfully. 
Feb 13 20:16:33.105308 containerd[1539]: time="2025-02-13T20:16:33.105255728Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.30.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 20:16:33.106326 containerd[1539]: time="2025-02-13T20:16:33.106092930Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.30.10: active requests=0, bytes read=29865209" Feb 13 20:16:33.106885 containerd[1539]: time="2025-02-13T20:16:33.106855334Z" level=info msg="ImageCreate event name:\"sha256:deaeae5e8513d8c5921aee5b515f0fc2ac63b71dfe965318f71eb49468e74a4f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 20:16:33.109987 containerd[1539]: time="2025-02-13T20:16:33.109935299Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:63b2b4b4e9b5dcb5b1b6cec9d5f5f538791a40cd8cb273ef530e6d6535aa0b43\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 20:16:33.111203 containerd[1539]: time="2025-02-13T20:16:33.111125354Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.30.10\" with image id \"sha256:deaeae5e8513d8c5921aee5b515f0fc2ac63b71dfe965318f71eb49468e74a4f\", repo tag \"registry.k8s.io/kube-apiserver:v1.30.10\", repo digest \"registry.k8s.io/kube-apiserver@sha256:63b2b4b4e9b5dcb5b1b6cec9d5f5f538791a40cd8cb273ef530e6d6535aa0b43\", size \"29862007\" in 2.439825461s" Feb 13 20:16:33.111203 containerd[1539]: time="2025-02-13T20:16:33.111161271Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.30.10\" returns image reference \"sha256:deaeae5e8513d8c5921aee5b515f0fc2ac63b71dfe965318f71eb49468e74a4f\"" Feb 13 20:16:33.129680 containerd[1539]: time="2025-02-13T20:16:33.129595938Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.30.10\"" Feb 13 20:16:33.288867 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Feb 13 20:16:33.298423 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Feb 13 20:16:33.384501 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Feb 13 20:16:33.388299 (kubelet)[1952]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Feb 13 20:16:33.426383 kubelet[1952]: E0213 20:16:33.426295 1952 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Feb 13 20:16:33.429194 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Feb 13 20:16:33.429389 systemd[1]: kubelet.service: Failed with result 'exit-code'. 
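The pull record above reports both the bytes read and the wall-clock time, so a rough effective throughput for the registry fetch falls straight out of the ratio. A quick check with the figures copied from the log:

# Figures copied from the kube-apiserver:v1.30.10 pull above.
bytes_read = 29_865_209   # "active requests=0, bytes read=29865209"
elapsed_s = 2.439825461   # "... in 2.439825461s"

rate = bytes_read / elapsed_s
print(f"{rate / 1e6:.1f} MB/s ({rate / 2**20:.1f} MiB/s)")
# -> 12.2 MB/s (11.7 MiB/s) effective pull throughput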
Feb 13 20:16:35.200235 containerd[1539]: time="2025-02-13T20:16:35.200168186Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.30.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 20:16:35.201178 containerd[1539]: time="2025-02-13T20:16:35.201148414Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.30.10: active requests=0, bytes read=26898596" Feb 13 20:16:35.201983 containerd[1539]: time="2025-02-13T20:16:35.201953880Z" level=info msg="ImageCreate event name:\"sha256:e31753dd49b05da8fcb7deb26f2a5942a6747a0e6d4492f3dc8544123b97a3a2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 20:16:35.205502 containerd[1539]: time="2025-02-13T20:16:35.205466173Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:99b3336343ea48be24f1e64774825e9f8d5170bd2ed482ff336548eb824f5f58\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 20:16:35.206545 containerd[1539]: time="2025-02-13T20:16:35.206509980Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.30.10\" with image id \"sha256:e31753dd49b05da8fcb7deb26f2a5942a6747a0e6d4492f3dc8544123b97a3a2\", repo tag \"registry.k8s.io/kube-controller-manager:v1.30.10\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:99b3336343ea48be24f1e64774825e9f8d5170bd2ed482ff336548eb824f5f58\", size \"28302323\" in 2.076876685s" Feb 13 20:16:35.206588 containerd[1539]: time="2025-02-13T20:16:35.206549136Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.30.10\" returns image reference \"sha256:e31753dd49b05da8fcb7deb26f2a5942a6747a0e6d4492f3dc8544123b97a3a2\"" Feb 13 20:16:35.224401 containerd[1539]: time="2025-02-13T20:16:35.224233798Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.30.10\"" Feb 13 20:16:36.527411 containerd[1539]: time="2025-02-13T20:16:36.527359075Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.30.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 20:16:36.528576 containerd[1539]: time="2025-02-13T20:16:36.528541182Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.30.10: active requests=0, bytes read=16164936" Feb 13 20:16:36.529232 containerd[1539]: time="2025-02-13T20:16:36.529045459Z" level=info msg="ImageCreate event name:\"sha256:ea60c047fad7c01bf50f1f0259a4aeea2cc4401850d5a95802cc1d07d9021eb4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 20:16:36.532012 containerd[1539]: time="2025-02-13T20:16:36.531949902Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:cf7eb256192f1f51093fe278c209a9368f0675eb61ed01b148af47d2f21c002d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 20:16:36.533224 containerd[1539]: time="2025-02-13T20:16:36.533136172Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.30.10\" with image id \"sha256:ea60c047fad7c01bf50f1f0259a4aeea2cc4401850d5a95802cc1d07d9021eb4\", repo tag \"registry.k8s.io/kube-scheduler:v1.30.10\", repo digest \"registry.k8s.io/kube-scheduler@sha256:cf7eb256192f1f51093fe278c209a9368f0675eb61ed01b148af47d2f21c002d\", size \"17568681\" in 1.308867062s" Feb 13 20:16:36.533224 containerd[1539]: time="2025-02-13T20:16:36.533168040Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.30.10\" returns image reference \"sha256:ea60c047fad7c01bf50f1f0259a4aeea2cc4401850d5a95802cc1d07d9021eb4\"" Feb 13 20:16:36.550478 
containerd[1539]: time="2025-02-13T20:16:36.550416379Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.30.10\"" Feb 13 20:16:37.695546 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount951084895.mount: Deactivated successfully. Feb 13 20:16:38.062234 containerd[1539]: time="2025-02-13T20:16:38.062058310Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.30.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 20:16:38.063689 containerd[1539]: time="2025-02-13T20:16:38.063657210Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.30.10: active requests=0, bytes read=25663372" Feb 13 20:16:38.064735 containerd[1539]: time="2025-02-13T20:16:38.064680672Z" level=info msg="ImageCreate event name:\"sha256:fa8af75a6512774cc93242474a9841ace82a7d0646001149fc65d92a8bb0c00a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 20:16:38.067044 containerd[1539]: time="2025-02-13T20:16:38.066986272Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:d112e804e548fce28d9f1e3282c9ce54e374451e6a2c41b1ca9d7fca5d1fcc48\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 20:16:38.067810 containerd[1539]: time="2025-02-13T20:16:38.067771631Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.30.10\" with image id \"sha256:fa8af75a6512774cc93242474a9841ace82a7d0646001149fc65d92a8bb0c00a\", repo tag \"registry.k8s.io/kube-proxy:v1.30.10\", repo digest \"registry.k8s.io/kube-proxy@sha256:d112e804e548fce28d9f1e3282c9ce54e374451e6a2c41b1ca9d7fca5d1fcc48\", size \"25662389\" in 1.517292638s" Feb 13 20:16:38.067856 containerd[1539]: time="2025-02-13T20:16:38.067810021Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.30.10\" returns image reference \"sha256:fa8af75a6512774cc93242474a9841ace82a7d0646001149fc65d92a8bb0c00a\"" Feb 13 20:16:38.086659 containerd[1539]: time="2025-02-13T20:16:38.086509255Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\"" Feb 13 20:16:38.796969 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2640731165.mount: Deactivated successfully. 
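Unit names like var-lib-containerd-tmpmounts-containerd\x2dmount951084895.mount above are systemd-escaped mount paths: "/" becomes "-" and a literal "-" becomes "\x2d". A small decoder sketch for that convention (an inverse of systemd-escape --path, written here only for illustration):

import re

def systemd_unescape_mount(unit: str) -> str:
    """Recover the mount path from a .mount unit name.

    Order matters: plain "-" separators become "/" first, so that the
    "\\x2d" sequences (which contain no dash themselves) survive until
    the final pass decodes them back into literal dashes.
    """
    name = unit.removesuffix(".mount")
    path = "/" + name.replace("-", "/")
    return re.sub(r"\\x([0-9a-fA-F]{2})",
                  lambda m: chr(int(m.group(1), 16)), path)

print(systemd_unescape_mount(
    r"var-lib-containerd-tmpmounts-containerd\x2dmount951084895.mount"))
# -> /var/lib/containerd/tmpmounts/containerd-mount951084895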
Feb 13 20:16:39.551171 containerd[1539]: time="2025-02-13T20:16:39.551002711Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 20:16:39.552060 containerd[1539]: time="2025-02-13T20:16:39.551874575Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.1: active requests=0, bytes read=16485383" Feb 13 20:16:39.552732 containerd[1539]: time="2025-02-13T20:16:39.552671305Z" level=info msg="ImageCreate event name:\"sha256:2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 20:16:39.559270 containerd[1539]: time="2025-02-13T20:16:39.559202139Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 20:16:39.560534 containerd[1539]: time="2025-02-13T20:16:39.560490861Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.1\" with image id \"sha256:2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.1\", repo digest \"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\", size \"16482581\" in 1.473944098s" Feb 13 20:16:39.560534 containerd[1539]: time="2025-02-13T20:16:39.560530129Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\" returns image reference \"sha256:2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93\"" Feb 13 20:16:39.579460 containerd[1539]: time="2025-02-13T20:16:39.579240080Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\"" Feb 13 20:16:40.099261 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2756661946.mount: Deactivated successfully. 
Feb 13 20:16:40.102781 containerd[1539]: time="2025-02-13T20:16:40.102736710Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 20:16:40.103447 containerd[1539]: time="2025-02-13T20:16:40.103413564Z" level=info msg="stop pulling image registry.k8s.io/pause:3.9: active requests=0, bytes read=268823" Feb 13 20:16:40.104136 containerd[1539]: time="2025-02-13T20:16:40.104081012Z" level=info msg="ImageCreate event name:\"sha256:829e9de338bd5fdd3f16f68f83a9fb288fbc8453e881e5d5cfd0f6f2ff72b43e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 20:16:40.106303 containerd[1539]: time="2025-02-13T20:16:40.106275044Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 20:16:40.107298 containerd[1539]: time="2025-02-13T20:16:40.107232206Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.9\" with image id \"sha256:829e9de338bd5fdd3f16f68f83a9fb288fbc8453e881e5d5cfd0f6f2ff72b43e\", repo tag \"registry.k8s.io/pause:3.9\", repo digest \"registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097\", size \"268051\" in 527.934966ms" Feb 13 20:16:40.107298 containerd[1539]: time="2025-02-13T20:16:40.107287443Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\" returns image reference \"sha256:829e9de338bd5fdd3f16f68f83a9fb288fbc8453e881e5d5cfd0f6f2ff72b43e\"" Feb 13 20:16:40.124874 containerd[1539]: time="2025-02-13T20:16:40.124824931Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.12-0\"" Feb 13 20:16:40.811366 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3244630526.mount: Deactivated successfully. Feb 13 20:16:43.511728 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Feb 13 20:16:43.522638 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Feb 13 20:16:43.610080 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Feb 13 20:16:43.614054 (kubelet)[2112]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Feb 13 20:16:43.654840 kubelet[2112]: E0213 20:16:43.654753 2112 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Feb 13 20:16:43.658167 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Feb 13 20:16:43.658367 systemd[1]: kubelet.service: Failed with result 'exit-code'. 
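This is the second pass through the same kubelet crash loop, and the timestamps give the cadence: each "Scheduled restart job" lands roughly ten seconds after the preceding failure, which suggests a unit restart delay near 10 s (inferred from the spacing alone; the kubelet.service unit file is not shown in this log):

from datetime import datetime

FMT = "%H:%M:%S.%f"  # timestamps copied from the systemd lines above
exited_1 = datetime.strptime("20:16:23.038379", FMT)   # status=1/FAILURE
restart_1 = datetime.strptime("20:16:33.288867", FMT)  # restart counter at 1
exited_2 = datetime.strptime("20:16:33.429194", FMT)   # status=1/FAILURE
restart_2 = datetime.strptime("20:16:43.511728", FMT)  # restart counter at 2

for exited, restarted in ((exited_1, restart_1), (exited_2, restart_2)):
    print(f"{(restarted - exited).total_seconds():.2f} s from failure to restart")
# -> 10.25 s and 10.08 s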
Feb 13 20:16:43.947431 containerd[1539]: time="2025-02-13T20:16:43.946438842Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.12-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 20:16:43.948004 containerd[1539]: time="2025-02-13T20:16:43.947973331Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.12-0: active requests=0, bytes read=66191474" Feb 13 20:16:43.949357 containerd[1539]: time="2025-02-13T20:16:43.949317394Z" level=info msg="ImageCreate event name:\"sha256:014faa467e29798aeef733fe6d1a3b5e382688217b053ad23410e6cccd5d22fd\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 20:16:43.952011 containerd[1539]: time="2025-02-13T20:16:43.951976904Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:44a8e24dcbba3470ee1fee21d5e88d128c936e9b55d4bc51fbef8086f8ed123b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 20:16:43.954298 containerd[1539]: time="2025-02-13T20:16:43.954267291Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.12-0\" with image id \"sha256:014faa467e29798aeef733fe6d1a3b5e382688217b053ad23410e6cccd5d22fd\", repo tag \"registry.k8s.io/etcd:3.5.12-0\", repo digest \"registry.k8s.io/etcd@sha256:44a8e24dcbba3470ee1fee21d5e88d128c936e9b55d4bc51fbef8086f8ed123b\", size \"66189079\" in 3.829401534s" Feb 13 20:16:43.954298 containerd[1539]: time="2025-02-13T20:16:43.954301830Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.12-0\" returns image reference \"sha256:014faa467e29798aeef733fe6d1a3b5e382688217b053ad23410e6cccd5d22fd\"" Feb 13 20:16:49.073183 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Feb 13 20:16:49.083550 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Feb 13 20:16:49.097581 systemd[1]: Reloading requested from client PID 2208 ('systemctl') (unit session-5.scope)... Feb 13 20:16:49.097701 systemd[1]: Reloading... Feb 13 20:16:49.159249 zram_generator::config[2246]: No configuration found. Feb 13 20:16:49.261838 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Feb 13 20:16:49.310415 systemd[1]: Reloading finished in 212 ms. Feb 13 20:16:49.348127 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Feb 13 20:16:49.348189 systemd[1]: kubelet.service: Failed with result 'signal'. Feb 13 20:16:49.348479 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Feb 13 20:16:49.350461 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Feb 13 20:16:49.436195 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Feb 13 20:16:49.439904 (kubelet)[2305]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Feb 13 20:16:49.477584 kubelet[2305]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Feb 13 20:16:49.477584 kubelet[2305]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. 
Feb 13 20:16:49.477584 kubelet[2305]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Feb 13 20:16:49.478790 kubelet[2305]: I0213 20:16:49.478450 2305 server.go:205] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Feb 13 20:16:50.486022 kubelet[2305]: I0213 20:16:50.485978 2305 server.go:484] "Kubelet version" kubeletVersion="v1.30.1" Feb 13 20:16:50.486022 kubelet[2305]: I0213 20:16:50.486006 2305 server.go:486] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Feb 13 20:16:50.486419 kubelet[2305]: I0213 20:16:50.486201 2305 server.go:927] "Client rotation is on, will bootstrap in background" Feb 13 20:16:50.536864 kubelet[2305]: E0213 20:16:50.536808 2305 certificate_manager.go:562] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post "https://10.0.0.6:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 10.0.0.6:6443: connect: connection refused Feb 13 20:16:50.536990 kubelet[2305]: I0213 20:16:50.536915 2305 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Feb 13 20:16:50.548286 kubelet[2305]: I0213 20:16:50.548264 2305 server.go:742] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Feb 13 20:16:50.549563 kubelet[2305]: I0213 20:16:50.549524 2305 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Feb 13 20:16:50.549731 kubelet[2305]: I0213 20:16:50.549564 2305 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"cgroupfs","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null} Feb 13 20:16:50.549814 kubelet[2305]: I0213 20:16:50.549791 2305 topology_manager.go:138] "Creating topology manager with 
none policy" Feb 13 20:16:50.549814 kubelet[2305]: I0213 20:16:50.549800 2305 container_manager_linux.go:301] "Creating device plugin manager" Feb 13 20:16:50.550077 kubelet[2305]: I0213 20:16:50.550057 2305 state_mem.go:36] "Initialized new in-memory state store" Feb 13 20:16:50.551068 kubelet[2305]: I0213 20:16:50.551048 2305 kubelet.go:400] "Attempting to sync node with API server" Feb 13 20:16:50.551102 kubelet[2305]: I0213 20:16:50.551070 2305 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests" Feb 13 20:16:50.551174 kubelet[2305]: I0213 20:16:50.551159 2305 kubelet.go:312] "Adding apiserver pod source" Feb 13 20:16:50.551321 kubelet[2305]: I0213 20:16:50.551307 2305 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Feb 13 20:16:50.554154 kubelet[2305]: W0213 20:16:50.554107 2305 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.6:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.0.0.6:6443: connect: connection refused Feb 13 20:16:50.554154 kubelet[2305]: E0213 20:16:50.554154 2305 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.0.0.6:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.0.0.6:6443: connect: connection refused Feb 13 20:16:50.554562 kubelet[2305]: W0213 20:16:50.554443 2305 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.6:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.6:6443: connect: connection refused Feb 13 20:16:50.554562 kubelet[2305]: E0213 20:16:50.554473 2305 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://10.0.0.6:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.6:6443: connect: connection refused Feb 13 20:16:50.555237 kubelet[2305]: I0213 20:16:50.555205 2305 kuberuntime_manager.go:261] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Feb 13 20:16:50.555658 kubelet[2305]: I0213 20:16:50.555645 2305 kubelet.go:815] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Feb 13 20:16:50.555957 kubelet[2305]: W0213 20:16:50.555936 2305 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. 
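Every client-go call here fails with "dial tcp 10.0.0.6:6443: connect: connection refused" because the kubelet is bootstrapping the very API server it is polling, as static pods from /etc/kubernetes/manifests. A minimal wait-loop sketch for that gap using a plain TCP probe; the host and port come from the log, while the retry interval is an assumed value:

import socket
import time

API_HOST, API_PORT = "10.0.0.6", 6443  # endpoint from the refused dials above
RETRY_SECONDS = 2                      # assumed polling interval, not from the log

def apiserver_reachable(timeout: float = 1.0) -> bool:
    try:
        with socket.create_connection((API_HOST, API_PORT), timeout=timeout):
            return True
    except OSError:  # "connect: connection refused" until the static pod runs
        return False

if __name__ == "__main__":
    while not apiserver_reachable():
        print("apiserver not up yet, retrying...")
        time.sleep(RETRY_SECONDS)
    print("apiserver port open")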
Feb 13 20:16:50.556727 kubelet[2305]: I0213 20:16:50.556710 2305 server.go:1264] "Started kubelet" Feb 13 20:16:50.557378 kubelet[2305]: I0213 20:16:50.557351 2305 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Feb 13 20:16:50.560227 kubelet[2305]: I0213 20:16:50.558894 2305 server.go:455] "Adding debug handlers to kubelet server" Feb 13 20:16:50.560227 kubelet[2305]: I0213 20:16:50.557484 2305 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Feb 13 20:16:50.560227 kubelet[2305]: I0213 20:16:50.559664 2305 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Feb 13 20:16:50.560227 kubelet[2305]: I0213 20:16:50.559759 2305 server.go:227] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Feb 13 20:16:50.561820 kubelet[2305]: E0213 20:16:50.561673 2305 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.6:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.6:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.1823ddda40e7e338 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-02-13 20:16:50.556691256 +0000 UTC m=+1.113830311,LastTimestamp:2025-02-13 20:16:50.556691256 +0000 UTC m=+1.113830311,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Feb 13 20:16:50.566623 kubelet[2305]: E0213 20:16:50.566598 2305 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" Feb 13 20:16:50.566877 kubelet[2305]: I0213 20:16:50.566865 2305 volume_manager.go:291] "Starting Kubelet Volume Manager" Feb 13 20:16:50.567048 kubelet[2305]: I0213 20:16:50.567033 2305 desired_state_of_world_populator.go:149] "Desired state populator starts to run" Feb 13 20:16:50.568400 kubelet[2305]: W0213 20:16:50.568365 2305 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.6:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.6:6443: connect: connection refused Feb 13 20:16:50.569125 kubelet[2305]: E0213 20:16:50.568417 2305 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://10.0.0.6:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.6:6443: connect: connection refused Feb 13 20:16:50.569125 kubelet[2305]: I0213 20:16:50.569047 2305 reconciler.go:26] "Reconciler: start to sync state" Feb 13 20:16:50.570465 kubelet[2305]: E0213 20:16:50.570436 2305 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.6:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.6:6443: connect: connection refused" interval="200ms" Feb 13 20:16:50.570939 kubelet[2305]: I0213 20:16:50.570921 2305 factory.go:221] Registration of the containerd container factory successfully Feb 13 20:16:50.570939 kubelet[2305]: I0213 20:16:50.570936 2305 factory.go:221] Registration of the systemd container factory successfully Feb 13 20:16:50.571014 kubelet[2305]: E0213 20:16:50.570966 2305 
kubelet.go:1467] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Feb 13 20:16:50.571014 kubelet[2305]: I0213 20:16:50.571006 2305 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Feb 13 20:16:50.580562 kubelet[2305]: I0213 20:16:50.580522 2305 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Feb 13 20:16:50.581524 kubelet[2305]: I0213 20:16:50.581494 2305 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Feb 13 20:16:50.581569 kubelet[2305]: I0213 20:16:50.581529 2305 status_manager.go:217] "Starting to sync pod status with apiserver" Feb 13 20:16:50.581569 kubelet[2305]: I0213 20:16:50.581545 2305 kubelet.go:2337] "Starting kubelet main sync loop" Feb 13 20:16:50.581622 kubelet[2305]: E0213 20:16:50.581590 2305 kubelet.go:2361] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Feb 13 20:16:50.582201 kubelet[2305]: W0213 20:16:50.582173 2305 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.6:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.6:6443: connect: connection refused Feb 13 20:16:50.582287 kubelet[2305]: E0213 20:16:50.582207 2305 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://10.0.0.6:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.6:6443: connect: connection refused Feb 13 20:16:50.590614 kubelet[2305]: I0213 20:16:50.590598 2305 cpu_manager.go:214] "Starting CPU manager" policy="none" Feb 13 20:16:50.590614 kubelet[2305]: I0213 20:16:50.590611 2305 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Feb 13 20:16:50.590702 kubelet[2305]: I0213 20:16:50.590627 2305 state_mem.go:36] "Initialized new in-memory state store" Feb 13 20:16:50.668259 kubelet[2305]: I0213 20:16:50.668230 2305 kubelet_node_status.go:73] "Attempting to register node" node="localhost" Feb 13 20:16:50.668594 kubelet[2305]: E0213 20:16:50.668551 2305 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.0.0.6:6443/api/v1/nodes\": dial tcp 10.0.0.6:6443: connect: connection refused" node="localhost" Feb 13 20:16:50.676643 kubelet[2305]: I0213 20:16:50.676621 2305 policy_none.go:49] "None policy: Start" Feb 13 20:16:50.677166 kubelet[2305]: I0213 20:16:50.677147 2305 memory_manager.go:170] "Starting memorymanager" policy="None" Feb 13 20:16:50.677166 kubelet[2305]: I0213 20:16:50.677171 2305 state_mem.go:35] "Initializing new in-memory state store" Feb 13 20:16:50.682263 kubelet[2305]: E0213 20:16:50.681870 2305 kubelet.go:2361] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Feb 13 20:16:50.682263 kubelet[2305]: I0213 20:16:50.682056 2305 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Feb 13 20:16:50.682263 kubelet[2305]: I0213 20:16:50.682237 2305 container_log_manager.go:186] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Feb 13 20:16:50.682382 kubelet[2305]: I0213 20:16:50.682334 2305 plugin_manager.go:118] "Starting 
Kubelet Plugin Manager" Feb 13 20:16:50.684188 kubelet[2305]: E0213 20:16:50.684156 2305 eviction_manager.go:282] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Feb 13 20:16:50.771861 kubelet[2305]: E0213 20:16:50.771803 2305 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.6:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.6:6443: connect: connection refused" interval="400ms" Feb 13 20:16:50.870034 kubelet[2305]: I0213 20:16:50.870007 2305 kubelet_node_status.go:73] "Attempting to register node" node="localhost" Feb 13 20:16:50.870324 kubelet[2305]: E0213 20:16:50.870293 2305 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.0.0.6:6443/api/v1/nodes\": dial tcp 10.0.0.6:6443: connect: connection refused" node="localhost" Feb 13 20:16:50.882533 kubelet[2305]: I0213 20:16:50.882485 2305 topology_manager.go:215] "Topology Admit Handler" podUID="ddde3c4d6e99fa315afbf28cd3d02c93" podNamespace="kube-system" podName="kube-apiserver-localhost" Feb 13 20:16:50.883259 kubelet[2305]: I0213 20:16:50.883229 2305 topology_manager.go:215] "Topology Admit Handler" podUID="dd3721fb1a67092819e35b40473f4063" podNamespace="kube-system" podName="kube-controller-manager-localhost" Feb 13 20:16:50.885463 kubelet[2305]: I0213 20:16:50.884705 2305 topology_manager.go:215] "Topology Admit Handler" podUID="8d610d6c43052dbc8df47eb68906a982" podNamespace="kube-system" podName="kube-scheduler-localhost" Feb 13 20:16:50.971625 kubelet[2305]: I0213 20:16:50.971594 2305 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/dd3721fb1a67092819e35b40473f4063-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"dd3721fb1a67092819e35b40473f4063\") " pod="kube-system/kube-controller-manager-localhost" Feb 13 20:16:50.971793 kubelet[2305]: I0213 20:16:50.971767 2305 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/8d610d6c43052dbc8df47eb68906a982-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"8d610d6c43052dbc8df47eb68906a982\") " pod="kube-system/kube-scheduler-localhost" Feb 13 20:16:50.971873 kubelet[2305]: I0213 20:16:50.971861 2305 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/ddde3c4d6e99fa315afbf28cd3d02c93-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"ddde3c4d6e99fa315afbf28cd3d02c93\") " pod="kube-system/kube-apiserver-localhost" Feb 13 20:16:50.971958 kubelet[2305]: I0213 20:16:50.971946 2305 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/dd3721fb1a67092819e35b40473f4063-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"dd3721fb1a67092819e35b40473f4063\") " pod="kube-system/kube-controller-manager-localhost" Feb 13 20:16:50.972048 kubelet[2305]: I0213 20:16:50.972037 2305 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/dd3721fb1a67092819e35b40473f4063-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"dd3721fb1a67092819e35b40473f4063\") " 
pod="kube-system/kube-controller-manager-localhost" Feb 13 20:16:50.972155 kubelet[2305]: I0213 20:16:50.972141 2305 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/dd3721fb1a67092819e35b40473f4063-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"dd3721fb1a67092819e35b40473f4063\") " pod="kube-system/kube-controller-manager-localhost" Feb 13 20:16:50.972261 kubelet[2305]: I0213 20:16:50.972248 2305 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/ddde3c4d6e99fa315afbf28cd3d02c93-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"ddde3c4d6e99fa315afbf28cd3d02c93\") " pod="kube-system/kube-apiserver-localhost" Feb 13 20:16:50.972395 kubelet[2305]: I0213 20:16:50.972326 2305 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/ddde3c4d6e99fa315afbf28cd3d02c93-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"ddde3c4d6e99fa315afbf28cd3d02c93\") " pod="kube-system/kube-apiserver-localhost" Feb 13 20:16:50.972470 kubelet[2305]: I0213 20:16:50.972367 2305 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/dd3721fb1a67092819e35b40473f4063-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"dd3721fb1a67092819e35b40473f4063\") " pod="kube-system/kube-controller-manager-localhost" Feb 13 20:16:51.172596 kubelet[2305]: E0213 20:16:51.172480 2305 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.6:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.6:6443: connect: connection refused" interval="800ms" Feb 13 20:16:51.188788 kubelet[2305]: E0213 20:16:51.188762 2305 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 20:16:51.189445 containerd[1539]: time="2025-02-13T20:16:51.189356438Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:dd3721fb1a67092819e35b40473f4063,Namespace:kube-system,Attempt:0,}" Feb 13 20:16:51.190621 kubelet[2305]: E0213 20:16:51.190504 2305 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 20:16:51.191040 kubelet[2305]: E0213 20:16:51.190674 2305 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 20:16:51.191100 containerd[1539]: time="2025-02-13T20:16:51.190810078Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:ddde3c4d6e99fa315afbf28cd3d02c93,Namespace:kube-system,Attempt:0,}" Feb 13 20:16:51.191100 containerd[1539]: time="2025-02-13T20:16:51.190990657Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:8d610d6c43052dbc8df47eb68906a982,Namespace:kube-system,Attempt:0,}" Feb 13 20:16:51.272315 kubelet[2305]: I0213 20:16:51.272282 2305 kubelet_node_status.go:73] "Attempting to register node" node="localhost" Feb 13 20:16:51.272572 
kubelet[2305]: E0213 20:16:51.272551 2305 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.0.0.6:6443/api/v1/nodes\": dial tcp 10.0.0.6:6443: connect: connection refused" node="localhost" Feb 13 20:16:51.417725 kubelet[2305]: W0213 20:16:51.417660 2305 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.6:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.6:6443: connect: connection refused Feb 13 20:16:51.417725 kubelet[2305]: E0213 20:16:51.417725 2305 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://10.0.0.6:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.6:6443: connect: connection refused Feb 13 20:16:51.524557 kubelet[2305]: W0213 20:16:51.524524 2305 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.6:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.6:6443: connect: connection refused Feb 13 20:16:51.524892 kubelet[2305]: E0213 20:16:51.524564 2305 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://10.0.0.6:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.6:6443: connect: connection refused Feb 13 20:16:51.649957 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4016908070.mount: Deactivated successfully. Feb 13 20:16:51.653680 containerd[1539]: time="2025-02-13T20:16:51.653630701Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Feb 13 20:16:51.655067 containerd[1539]: time="2025-02-13T20:16:51.655027082Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Feb 13 20:16:51.656266 containerd[1539]: time="2025-02-13T20:16:51.656242043Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=269175" Feb 13 20:16:51.656828 containerd[1539]: time="2025-02-13T20:16:51.656800387Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Feb 13 20:16:51.657439 containerd[1539]: time="2025-02-13T20:16:51.657415310Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Feb 13 20:16:51.658054 containerd[1539]: time="2025-02-13T20:16:51.657918156Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Feb 13 20:16:51.658757 containerd[1539]: time="2025-02-13T20:16:51.658723982Z" level=info msg="ImageCreate event name:\"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Feb 13 20:16:51.662555 containerd[1539]: time="2025-02-13T20:16:51.662498548Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" 
value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Feb 13 20:16:51.664251 containerd[1539]: time="2025-02-13T20:16:51.663365714Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 473.934491ms" Feb 13 20:16:51.666609 containerd[1539]: time="2025-02-13T20:16:51.666572252Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 475.529857ms" Feb 13 20:16:51.667364 containerd[1539]: time="2025-02-13T20:16:51.667317618Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 476.450282ms" Feb 13 20:16:51.794338 containerd[1539]: time="2025-02-13T20:16:51.794100100Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 20:16:51.794338 containerd[1539]: time="2025-02-13T20:16:51.794149916Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 20:16:51.794338 containerd[1539]: time="2025-02-13T20:16:51.794174204Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 20:16:51.794338 containerd[1539]: time="2025-02-13T20:16:51.794280079Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 20:16:51.797279 containerd[1539]: time="2025-02-13T20:16:51.797183677Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 20:16:51.797279 containerd[1539]: time="2025-02-13T20:16:51.797245458Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 20:16:51.797279 containerd[1539]: time="2025-02-13T20:16:51.797261343Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 20:16:51.797406 containerd[1539]: time="2025-02-13T20:16:51.797341649Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 20:16:51.799283 containerd[1539]: time="2025-02-13T20:16:51.799029927Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 20:16:51.799283 containerd[1539]: time="2025-02-13T20:16:51.799085105Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 20:16:51.799283 containerd[1539]: time="2025-02-13T20:16:51.799096028Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 20:16:51.799283 containerd[1539]: time="2025-02-13T20:16:51.799171733Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 20:16:51.836043 containerd[1539]: time="2025-02-13T20:16:51.835986643Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:dd3721fb1a67092819e35b40473f4063,Namespace:kube-system,Attempt:0,} returns sandbox id \"85d0e2da98b1b516b678930e9d397471827786a6e7c9cdc2b31ee245cb05acb6\"" Feb 13 20:16:51.837743 kubelet[2305]: E0213 20:16:51.837717 2305 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 20:16:51.840289 containerd[1539]: time="2025-02-13T20:16:51.838534244Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:8d610d6c43052dbc8df47eb68906a982,Namespace:kube-system,Attempt:0,} returns sandbox id \"8034a36c0612f77374e0ede99f76149299c599811d924ca87d4be03cd598e437\"" Feb 13 20:16:51.843536 kubelet[2305]: E0213 20:16:51.843509 2305 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 20:16:51.843606 containerd[1539]: time="2025-02-13T20:16:51.843527332Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:ddde3c4d6e99fa315afbf28cd3d02c93,Namespace:kube-system,Attempt:0,} returns sandbox id \"e02eedd8e6f004360cf6aaf3c0c97081e2c659194f650c7598e4846b49bc6c9d\"" Feb 13 20:16:51.844005 kubelet[2305]: E0213 20:16:51.843986 2305 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 20:16:51.848079 containerd[1539]: time="2025-02-13T20:16:51.848051385Z" level=info msg="CreateContainer within sandbox \"85d0e2da98b1b516b678930e9d397471827786a6e7c9cdc2b31ee245cb05acb6\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Feb 13 20:16:51.848079 containerd[1539]: time="2025-02-13T20:16:51.848084076Z" level=info msg="CreateContainer within sandbox \"8034a36c0612f77374e0ede99f76149299c599811d924ca87d4be03cd598e437\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Feb 13 20:16:51.848699 containerd[1539]: time="2025-02-13T20:16:51.848659066Z" level=info msg="CreateContainer within sandbox \"e02eedd8e6f004360cf6aaf3c0c97081e2c659194f650c7598e4846b49bc6c9d\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Feb 13 20:16:51.864496 containerd[1539]: time="2025-02-13T20:16:51.864456239Z" level=info msg="CreateContainer within sandbox \"e02eedd8e6f004360cf6aaf3c0c97081e2c659194f650c7598e4846b49bc6c9d\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"b5c35ce370c5a9c86f78218fe9443ba104bfc2bfa5c039002325b6537e540646\"" Feb 13 20:16:51.864998 containerd[1539]: time="2025-02-13T20:16:51.864971249Z" level=info msg="StartContainer for \"b5c35ce370c5a9c86f78218fe9443ba104bfc2bfa5c039002325b6537e540646\"" Feb 13 20:16:51.869620 containerd[1539]: time="2025-02-13T20:16:51.869587332Z" level=info msg="CreateContainer within sandbox \"85d0e2da98b1b516b678930e9d397471827786a6e7c9cdc2b31ee245cb05acb6\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id 
\"3e47fbc2f8db2e1079261c91958a021c2a089f144bf84556cf565711513435a5\"" Feb 13 20:16:51.870200 containerd[1539]: time="2025-02-13T20:16:51.870176247Z" level=info msg="StartContainer for \"3e47fbc2f8db2e1079261c91958a021c2a089f144bf84556cf565711513435a5\"" Feb 13 20:16:51.870725 containerd[1539]: time="2025-02-13T20:16:51.870687135Z" level=info msg="CreateContainer within sandbox \"8034a36c0612f77374e0ede99f76149299c599811d924ca87d4be03cd598e437\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"5985dcfed5fdd033aed877b5807db3ff6fe8befdb82de685004fd29de8073b86\"" Feb 13 20:16:51.871847 containerd[1539]: time="2025-02-13T20:16:51.870982473Z" level=info msg="StartContainer for \"5985dcfed5fdd033aed877b5807db3ff6fe8befdb82de685004fd29de8073b86\"" Feb 13 20:16:51.920056 containerd[1539]: time="2025-02-13T20:16:51.920014295Z" level=info msg="StartContainer for \"5985dcfed5fdd033aed877b5807db3ff6fe8befdb82de685004fd29de8073b86\" returns successfully" Feb 13 20:16:51.922755 containerd[1539]: time="2025-02-13T20:16:51.922713986Z" level=info msg="StartContainer for \"b5c35ce370c5a9c86f78218fe9443ba104bfc2bfa5c039002325b6537e540646\" returns successfully" Feb 13 20:16:51.934694 containerd[1539]: time="2025-02-13T20:16:51.934665210Z" level=info msg="StartContainer for \"3e47fbc2f8db2e1079261c91958a021c2a089f144bf84556cf565711513435a5\" returns successfully" Feb 13 20:16:51.972847 kubelet[2305]: E0213 20:16:51.972809 2305 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.6:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.6:6443: connect: connection refused" interval="1.6s" Feb 13 20:16:52.039850 kubelet[2305]: W0213 20:16:52.039735 2305 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.6:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.0.0.6:6443: connect: connection refused Feb 13 20:16:52.040242 kubelet[2305]: E0213 20:16:52.040201 2305 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.0.0.6:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.0.0.6:6443: connect: connection refused Feb 13 20:16:52.075349 kubelet[2305]: I0213 20:16:52.075260 2305 kubelet_node_status.go:73] "Attempting to register node" node="localhost" Feb 13 20:16:52.075765 kubelet[2305]: E0213 20:16:52.075729 2305 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.0.0.6:6443/api/v1/nodes\": dial tcp 10.0.0.6:6443: connect: connection refused" node="localhost" Feb 13 20:16:52.095496 kubelet[2305]: W0213 20:16:52.095399 2305 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.6:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.6:6443: connect: connection refused Feb 13 20:16:52.095496 kubelet[2305]: E0213 20:16:52.095474 2305 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://10.0.0.6:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.6:6443: connect: connection refused Feb 13 20:16:52.588523 kubelet[2305]: E0213 20:16:52.587750 2305 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 
8.8.8.8" Feb 13 20:16:52.589332 kubelet[2305]: E0213 20:16:52.589312 2305 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 20:16:52.592709 kubelet[2305]: E0213 20:16:52.592692 2305 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 20:16:53.555518 kubelet[2305]: I0213 20:16:53.555466 2305 apiserver.go:52] "Watching apiserver" Feb 13 20:16:53.567709 kubelet[2305]: I0213 20:16:53.567518 2305 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world" Feb 13 20:16:53.577982 kubelet[2305]: E0213 20:16:53.577940 2305 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"localhost\" not found" node="localhost" Feb 13 20:16:53.595353 kubelet[2305]: E0213 20:16:53.595325 2305 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 20:16:53.595808 kubelet[2305]: E0213 20:16:53.595787 2305 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 20:16:53.677028 kubelet[2305]: I0213 20:16:53.676989 2305 kubelet_node_status.go:73] "Attempting to register node" node="localhost" Feb 13 20:16:53.683232 kubelet[2305]: I0213 20:16:53.682308 2305 kubelet_node_status.go:76] "Successfully registered node" node="localhost" Feb 13 20:16:54.251917 kubelet[2305]: E0213 20:16:54.251881 2305 kubelet.go:1928] "Failed creating a mirror pod for" err="pods \"kube-controller-manager-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-localhost" Feb 13 20:16:54.252309 kubelet[2305]: E0213 20:16:54.252288 2305 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 20:16:55.394752 systemd[1]: Reloading requested from client PID 2588 ('systemctl') (unit session-5.scope)... Feb 13 20:16:55.394765 systemd[1]: Reloading... Feb 13 20:16:55.457247 zram_generator::config[2627]: No configuration found. Feb 13 20:16:55.618366 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Feb 13 20:16:55.673380 systemd[1]: Reloading finished in 278 ms. Feb 13 20:16:55.697817 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Feb 13 20:16:55.715095 systemd[1]: kubelet.service: Deactivated successfully. Feb 13 20:16:55.715398 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Feb 13 20:16:55.725506 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Feb 13 20:16:55.810267 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
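[Editor's note] The recurring "Nameserver limits exceeded" warnings above come from kubelet's resolv.conf handling: kubelet allows at most three nameservers in a pod's resolver configuration and drops the rest, which is why only 1.1.1.1, 1.0.0.1 and 8.8.8.8 survive in the applied nameserver line. A minimal sketch of that truncation, assuming the three-server cap shown in the log (this is illustrative, not kubelet's actual implementation):

```python
# Illustrative sketch: mimic kubelet's cap of 3 nameservers when building a
# pod's resolv.conf. The limit of 3 matches the warnings in this log; the
# parsing helper itself is hypothetical, not kubelet's real code.
MAX_NAMESERVERS = 3  # kubelet's fixed nameserver limit

def effective_nameservers(resolv_conf_text: str) -> list:
    servers = [
        line.split()[1]
        for line in resolv_conf_text.splitlines()
        if line.startswith("nameserver") and len(line.split()) >= 2
    ]
    if len(servers) > MAX_NAMESERVERS:
        # kubelet logs "Nameserver limits exceeded" and keeps the first 3
        servers = servers[:MAX_NAMESERVERS]
    return servers

conf = "nameserver 1.1.1.1\nnameserver 1.0.0.1\nnameserver 8.8.8.8\nnameserver 9.9.9.9\n"
print(effective_nameservers(conf))  # ['1.1.1.1', '1.0.0.1', '8.8.8.8']
```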
Feb 13 20:16:55.814397 (kubelet)[2679]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Feb 13 20:16:55.855155 kubelet[2679]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Feb 13 20:16:55.855155 kubelet[2679]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Feb 13 20:16:55.855155 kubelet[2679]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Feb 13 20:16:55.855544 kubelet[2679]: I0213 20:16:55.855190 2679 server.go:205] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Feb 13 20:16:55.861432 kubelet[2679]: I0213 20:16:55.860216 2679 server.go:484] "Kubelet version" kubeletVersion="v1.30.1" Feb 13 20:16:55.861432 kubelet[2679]: I0213 20:16:55.860238 2679 server.go:486] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Feb 13 20:16:55.861432 kubelet[2679]: I0213 20:16:55.860395 2679 server.go:927] "Client rotation is on, will bootstrap in background" Feb 13 20:16:55.862371 kubelet[2679]: I0213 20:16:55.862354 2679 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Feb 13 20:16:55.863645 kubelet[2679]: I0213 20:16:55.863618 2679 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Feb 13 20:16:55.868447 kubelet[2679]: I0213 20:16:55.868424 2679 server.go:742] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Feb 13 20:16:55.868876 kubelet[2679]: I0213 20:16:55.868842 2679 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Feb 13 20:16:55.869037 kubelet[2679]: I0213 20:16:55.868872 2679 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"cgroupfs","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null} Feb 13 20:16:55.869037 kubelet[2679]: I0213 20:16:55.869033 2679 topology_manager.go:138] "Creating topology manager with none policy" Feb 13 20:16:55.869130 kubelet[2679]: I0213 20:16:55.869042 2679 container_manager_linux.go:301] "Creating device plugin manager" Feb 13 20:16:55.869130 kubelet[2679]: I0213 20:16:55.869083 2679 state_mem.go:36] "Initialized new in-memory state store" Feb 13 20:16:55.869200 kubelet[2679]: I0213 20:16:55.869191 2679 kubelet.go:400] "Attempting to sync node with API server" Feb 13 20:16:55.869240 kubelet[2679]: I0213 20:16:55.869204 2679 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests" Feb 13 20:16:55.869261 kubelet[2679]: I0213 20:16:55.869254 2679 kubelet.go:312] "Adding apiserver pod source" Feb 13 20:16:55.869283 kubelet[2679]: I0213 20:16:55.869272 2679 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Feb 13 20:16:55.870206 kubelet[2679]: I0213 20:16:55.870181 2679 kuberuntime_manager.go:261] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Feb 13 20:16:55.873225 kubelet[2679]: I0213 20:16:55.870404 2679 kubelet.go:815] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Feb 13 20:16:55.873225 kubelet[2679]: I0213 20:16:55.870758 2679 server.go:1264] "Started kubelet" Feb 13 20:16:55.877267 kubelet[2679]: I0213 20:16:55.873768 2679 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Feb 13 20:16:55.877267 kubelet[2679]: I0213 20:16:55.874347 2679 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Feb 13 20:16:55.877267 kubelet[2679]: I0213 20:16:55.876011 2679 server.go:455] "Adding debug handlers to 
kubelet server" Feb 13 20:16:55.879273 kubelet[2679]: I0213 20:16:55.879091 2679 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Feb 13 20:16:55.879579 kubelet[2679]: I0213 20:16:55.879556 2679 server.go:227] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Feb 13 20:16:55.883559 kubelet[2679]: I0213 20:16:55.883530 2679 volume_manager.go:291] "Starting Kubelet Volume Manager" Feb 13 20:16:55.883665 kubelet[2679]: I0213 20:16:55.883649 2679 desired_state_of_world_populator.go:149] "Desired state populator starts to run" Feb 13 20:16:55.886128 kubelet[2679]: I0213 20:16:55.883800 2679 reconciler.go:26] "Reconciler: start to sync state" Feb 13 20:16:55.886747 kubelet[2679]: I0213 20:16:55.886724 2679 factory.go:221] Registration of the systemd container factory successfully Feb 13 20:16:55.889202 kubelet[2679]: E0213 20:16:55.889175 2679 kubelet.go:1467] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Feb 13 20:16:55.895755 kubelet[2679]: I0213 20:16:55.895713 2679 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Feb 13 20:16:55.898061 kubelet[2679]: I0213 20:16:55.898044 2679 factory.go:221] Registration of the containerd container factory successfully Feb 13 20:16:55.903512 kubelet[2679]: I0213 20:16:55.903473 2679 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Feb 13 20:16:55.904285 kubelet[2679]: I0213 20:16:55.904260 2679 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Feb 13 20:16:55.904334 kubelet[2679]: I0213 20:16:55.904295 2679 status_manager.go:217] "Starting to sync pod status with apiserver" Feb 13 20:16:55.904334 kubelet[2679]: I0213 20:16:55.904311 2679 kubelet.go:2337] "Starting kubelet main sync loop" Feb 13 20:16:55.904377 kubelet[2679]: E0213 20:16:55.904346 2679 kubelet.go:2361] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Feb 13 20:16:55.937889 kubelet[2679]: I0213 20:16:55.937436 2679 cpu_manager.go:214] "Starting CPU manager" policy="none" Feb 13 20:16:55.937889 kubelet[2679]: I0213 20:16:55.937453 2679 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Feb 13 20:16:55.937889 kubelet[2679]: I0213 20:16:55.937473 2679 state_mem.go:36] "Initialized new in-memory state store" Feb 13 20:16:55.937889 kubelet[2679]: I0213 20:16:55.937617 2679 state_mem.go:88] "Updated default CPUSet" cpuSet="" Feb 13 20:16:55.937889 kubelet[2679]: I0213 20:16:55.937627 2679 state_mem.go:96] "Updated CPUSet assignments" assignments={} Feb 13 20:16:55.937889 kubelet[2679]: I0213 20:16:55.937644 2679 policy_none.go:49] "None policy: Start" Feb 13 20:16:55.938855 kubelet[2679]: I0213 20:16:55.938615 2679 memory_manager.go:170] "Starting memorymanager" policy="None" Feb 13 20:16:55.938855 kubelet[2679]: I0213 20:16:55.938640 2679 state_mem.go:35] "Initializing new in-memory state store" Feb 13 20:16:55.938855 kubelet[2679]: I0213 20:16:55.938770 2679 state_mem.go:75] "Updated machine memory state" Feb 13 20:16:55.939821 kubelet[2679]: I0213 20:16:55.939790 2679 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Feb 13 20:16:55.939982 
kubelet[2679]: I0213 20:16:55.939951 2679 container_log_manager.go:186] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Feb 13 20:16:55.940058 kubelet[2679]: I0213 20:16:55.940047 2679 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Feb 13 20:16:55.987729 kubelet[2679]: I0213 20:16:55.987704 2679 kubelet_node_status.go:73] "Attempting to register node" node="localhost" Feb 13 20:16:55.993365 kubelet[2679]: I0213 20:16:55.993340 2679 kubelet_node_status.go:112] "Node was previously registered" node="localhost" Feb 13 20:16:55.993452 kubelet[2679]: I0213 20:16:55.993415 2679 kubelet_node_status.go:76] "Successfully registered node" node="localhost" Feb 13 20:16:56.005133 kubelet[2679]: I0213 20:16:56.005088 2679 topology_manager.go:215] "Topology Admit Handler" podUID="dd3721fb1a67092819e35b40473f4063" podNamespace="kube-system" podName="kube-controller-manager-localhost" Feb 13 20:16:56.005312 kubelet[2679]: I0213 20:16:56.005285 2679 topology_manager.go:215] "Topology Admit Handler" podUID="8d610d6c43052dbc8df47eb68906a982" podNamespace="kube-system" podName="kube-scheduler-localhost" Feb 13 20:16:56.005374 kubelet[2679]: I0213 20:16:56.005347 2679 topology_manager.go:215] "Topology Admit Handler" podUID="ddde3c4d6e99fa315afbf28cd3d02c93" podNamespace="kube-system" podName="kube-apiserver-localhost" Feb 13 20:16:56.185691 kubelet[2679]: I0213 20:16:56.185454 2679 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/dd3721fb1a67092819e35b40473f4063-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"dd3721fb1a67092819e35b40473f4063\") " pod="kube-system/kube-controller-manager-localhost" Feb 13 20:16:56.185691 kubelet[2679]: I0213 20:16:56.185493 2679 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/dd3721fb1a67092819e35b40473f4063-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"dd3721fb1a67092819e35b40473f4063\") " pod="kube-system/kube-controller-manager-localhost" Feb 13 20:16:56.185691 kubelet[2679]: I0213 20:16:56.185513 2679 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/dd3721fb1a67092819e35b40473f4063-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"dd3721fb1a67092819e35b40473f4063\") " pod="kube-system/kube-controller-manager-localhost" Feb 13 20:16:56.185691 kubelet[2679]: I0213 20:16:56.185537 2679 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/dd3721fb1a67092819e35b40473f4063-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"dd3721fb1a67092819e35b40473f4063\") " pod="kube-system/kube-controller-manager-localhost" Feb 13 20:16:56.185691 kubelet[2679]: I0213 20:16:56.185555 2679 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/ddde3c4d6e99fa315afbf28cd3d02c93-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"ddde3c4d6e99fa315afbf28cd3d02c93\") " pod="kube-system/kube-apiserver-localhost" Feb 13 20:16:56.185899 kubelet[2679]: I0213 20:16:56.185572 2679 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: 
\"kubernetes.io/host-path/ddde3c4d6e99fa315afbf28cd3d02c93-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"ddde3c4d6e99fa315afbf28cd3d02c93\") " pod="kube-system/kube-apiserver-localhost" Feb 13 20:16:56.185899 kubelet[2679]: I0213 20:16:56.185597 2679 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/dd3721fb1a67092819e35b40473f4063-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"dd3721fb1a67092819e35b40473f4063\") " pod="kube-system/kube-controller-manager-localhost" Feb 13 20:16:56.185899 kubelet[2679]: I0213 20:16:56.185618 2679 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/8d610d6c43052dbc8df47eb68906a982-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"8d610d6c43052dbc8df47eb68906a982\") " pod="kube-system/kube-scheduler-localhost" Feb 13 20:16:56.185899 kubelet[2679]: I0213 20:16:56.185633 2679 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/ddde3c4d6e99fa315afbf28cd3d02c93-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"ddde3c4d6e99fa315afbf28cd3d02c93\") " pod="kube-system/kube-apiserver-localhost" Feb 13 20:16:56.318052 kubelet[2679]: E0213 20:16:56.318026 2679 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 20:16:56.318653 kubelet[2679]: E0213 20:16:56.318433 2679 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 20:16:56.318653 kubelet[2679]: E0213 20:16:56.318587 2679 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 20:16:56.870471 kubelet[2679]: I0213 20:16:56.870392 2679 apiserver.go:52] "Watching apiserver" Feb 13 20:16:56.884208 kubelet[2679]: I0213 20:16:56.884165 2679 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world" Feb 13 20:16:56.914927 kubelet[2679]: E0213 20:16:56.914559 2679 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 20:16:56.914927 kubelet[2679]: E0213 20:16:56.914740 2679 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 20:16:56.914927 kubelet[2679]: E0213 20:16:56.914738 2679 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 20:16:56.939093 kubelet[2679]: I0213 20:16:56.939032 2679 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=0.939016959 podStartE2EDuration="939.016959ms" podCreationTimestamp="2025-02-13 20:16:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-02-13 20:16:56.937638989 +0000 
UTC m=+1.119719203" watchObservedRunningTime="2025-02-13 20:16:56.939016959 +0000 UTC m=+1.121097173" Feb 13 20:16:56.939235 kubelet[2679]: I0213 20:16:56.939131 2679 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=0.939126105 podStartE2EDuration="939.126105ms" podCreationTimestamp="2025-02-13 20:16:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-02-13 20:16:56.931392656 +0000 UTC m=+1.113472830" watchObservedRunningTime="2025-02-13 20:16:56.939126105 +0000 UTC m=+1.121206279" Feb 13 20:16:56.947956 kubelet[2679]: I0213 20:16:56.947892 2679 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=0.947857152 podStartE2EDuration="947.857152ms" podCreationTimestamp="2025-02-13 20:16:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-02-13 20:16:56.947040756 +0000 UTC m=+1.129120890" watchObservedRunningTime="2025-02-13 20:16:56.947857152 +0000 UTC m=+1.129937326" Feb 13 20:16:57.127410 sudo[1704]: pam_unix(sudo:session): session closed for user root Feb 13 20:16:57.129036 sshd[1697]: pam_unix(sshd:session): session closed for user core Feb 13 20:16:57.132179 systemd[1]: sshd@4-10.0.0.6:22-10.0.0.1:39674.service: Deactivated successfully. Feb 13 20:16:57.134579 systemd-logind[1521]: Session 5 logged out. Waiting for processes to exit. Feb 13 20:16:57.134736 systemd[1]: session-5.scope: Deactivated successfully. Feb 13 20:16:57.136217 systemd-logind[1521]: Removed session 5. Feb 13 20:16:57.916234 kubelet[2679]: E0213 20:16:57.916061 2679 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 20:16:59.307463 kubelet[2679]: E0213 20:16:59.307430 2679 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 20:17:01.488845 kubelet[2679]: E0213 20:17:01.488776 2679 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 20:17:01.577595 kubelet[2679]: E0213 20:17:01.577551 2679 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 20:17:01.924064 kubelet[2679]: E0213 20:17:01.922310 2679 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 20:17:01.924064 kubelet[2679]: E0213 20:17:01.922994 2679 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 20:17:02.923919 kubelet[2679]: E0213 20:17:02.923832 2679 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 20:17:07.099853 update_engine[1527]: I20250213 20:17:07.099781 1527 update_attempter.cc:509] Updating boot flags... 
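[Editor's note] The pod_startup_latency_tracker entries above encode a simple computation: for static pods that pull no image (firstStartedPulling/lastFinishedPulling are zero timestamps), podStartSLOduration is just observedRunningTime minus podCreationTimestamp. A quick check against the kube-controller-manager numbers, with the timestamps copied from the log (the reproduction is illustrative):

```python
from datetime import datetime, timezone

# Values taken from the pod_startup_latency_tracker entry above.
created = datetime(2025, 2, 13, 20, 16, 56, tzinfo=timezone.utc)
# observedRunningTime 20:16:56.939016959, truncated to microseconds here
observed_running = datetime(2025, 2, 13, 20, 16, 56, 939016, tzinfo=timezone.utc)

# With no image pull involved, SLO duration = running - created.
print((observed_running - created).total_seconds())  # ~0.939017 s, matching podStartSLOduration
```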
Feb 13 20:17:07.123267 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 40 scanned by (udev-worker) (2751) Feb 13 20:17:07.145271 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 40 scanned by (udev-worker) (2753) Feb 13 20:17:09.314889 kubelet[2679]: E0213 20:17:09.314811 2679 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 20:17:09.890157 kubelet[2679]: I0213 20:17:09.890120 2679 kuberuntime_manager.go:1523] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Feb 13 20:17:09.891040 containerd[1539]: time="2025-02-13T20:17:09.890934678Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Feb 13 20:17:09.891976 kubelet[2679]: I0213 20:17:09.891173 2679 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Feb 13 20:17:10.648873 kubelet[2679]: I0213 20:17:10.648810 2679 topology_manager.go:215] "Topology Admit Handler" podUID="97bb4c5b-b519-476d-9b98-54fe5b11faf2" podNamespace="kube-system" podName="kube-proxy-7plvd" Feb 13 20:17:10.651163 kubelet[2679]: I0213 20:17:10.651134 2679 topology_manager.go:215] "Topology Admit Handler" podUID="36c00cd4-9622-461d-ac3c-90892608fdc2" podNamespace="kube-flannel" podName="kube-flannel-ds-ftfqg" Feb 13 20:17:10.681942 kubelet[2679]: I0213 20:17:10.681899 2679 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/97bb4c5b-b519-476d-9b98-54fe5b11faf2-kube-proxy\") pod \"kube-proxy-7plvd\" (UID: \"97bb4c5b-b519-476d-9b98-54fe5b11faf2\") " pod="kube-system/kube-proxy-7plvd" Feb 13 20:17:10.681942 kubelet[2679]: I0213 20:17:10.681941 2679 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wc5pr\" (UniqueName: \"kubernetes.io/projected/97bb4c5b-b519-476d-9b98-54fe5b11faf2-kube-api-access-wc5pr\") pod \"kube-proxy-7plvd\" (UID: \"97bb4c5b-b519-476d-9b98-54fe5b11faf2\") " pod="kube-system/kube-proxy-7plvd" Feb 13 20:17:10.682109 kubelet[2679]: I0213 20:17:10.681965 2679 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni\" (UniqueName: \"kubernetes.io/host-path/36c00cd4-9622-461d-ac3c-90892608fdc2-cni\") pod \"kube-flannel-ds-ftfqg\" (UID: \"36c00cd4-9622-461d-ac3c-90892608fdc2\") " pod="kube-flannel/kube-flannel-ds-ftfqg" Feb 13 20:17:10.682109 kubelet[2679]: I0213 20:17:10.681983 2679 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flannel-cfg\" (UniqueName: \"kubernetes.io/configmap/36c00cd4-9622-461d-ac3c-90892608fdc2-flannel-cfg\") pod \"kube-flannel-ds-ftfqg\" (UID: \"36c00cd4-9622-461d-ac3c-90892608fdc2\") " pod="kube-flannel/kube-flannel-ds-ftfqg" Feb 13 20:17:10.682109 kubelet[2679]: I0213 20:17:10.682001 2679 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/97bb4c5b-b519-476d-9b98-54fe5b11faf2-lib-modules\") pod \"kube-proxy-7plvd\" (UID: \"97bb4c5b-b519-476d-9b98-54fe5b11faf2\") " pod="kube-system/kube-proxy-7plvd" Feb 13 20:17:10.682109 kubelet[2679]: I0213 20:17:10.682020 2679 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: 
\"kubernetes.io/host-path/97bb4c5b-b519-476d-9b98-54fe5b11faf2-xtables-lock\") pod \"kube-proxy-7plvd\" (UID: \"97bb4c5b-b519-476d-9b98-54fe5b11faf2\") " pod="kube-system/kube-proxy-7plvd" Feb 13 20:17:10.682109 kubelet[2679]: I0213 20:17:10.682056 2679 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run\" (UniqueName: \"kubernetes.io/host-path/36c00cd4-9622-461d-ac3c-90892608fdc2-run\") pod \"kube-flannel-ds-ftfqg\" (UID: \"36c00cd4-9622-461d-ac3c-90892608fdc2\") " pod="kube-flannel/kube-flannel-ds-ftfqg" Feb 13 20:17:10.682224 kubelet[2679]: I0213 20:17:10.682101 2679 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/36c00cd4-9622-461d-ac3c-90892608fdc2-xtables-lock\") pod \"kube-flannel-ds-ftfqg\" (UID: \"36c00cd4-9622-461d-ac3c-90892608fdc2\") " pod="kube-flannel/kube-flannel-ds-ftfqg" Feb 13 20:17:10.682224 kubelet[2679]: I0213 20:17:10.682125 2679 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-plugin\" (UniqueName: \"kubernetes.io/host-path/36c00cd4-9622-461d-ac3c-90892608fdc2-cni-plugin\") pod \"kube-flannel-ds-ftfqg\" (UID: \"36c00cd4-9622-461d-ac3c-90892608fdc2\") " pod="kube-flannel/kube-flannel-ds-ftfqg" Feb 13 20:17:10.682224 kubelet[2679]: I0213 20:17:10.682150 2679 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sdrbt\" (UniqueName: \"kubernetes.io/projected/36c00cd4-9622-461d-ac3c-90892608fdc2-kube-api-access-sdrbt\") pod \"kube-flannel-ds-ftfqg\" (UID: \"36c00cd4-9622-461d-ac3c-90892608fdc2\") " pod="kube-flannel/kube-flannel-ds-ftfqg" Feb 13 20:17:10.951230 kubelet[2679]: E0213 20:17:10.951095 2679 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 20:17:10.952322 containerd[1539]: time="2025-02-13T20:17:10.952058961Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-7plvd,Uid:97bb4c5b-b519-476d-9b98-54fe5b11faf2,Namespace:kube-system,Attempt:0,}" Feb 13 20:17:10.962206 kubelet[2679]: E0213 20:17:10.961948 2679 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 20:17:10.963474 containerd[1539]: time="2025-02-13T20:17:10.963435783Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-flannel-ds-ftfqg,Uid:36c00cd4-9622-461d-ac3c-90892608fdc2,Namespace:kube-flannel,Attempt:0,}" Feb 13 20:17:10.979108 containerd[1539]: time="2025-02-13T20:17:10.978568729Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 20:17:10.979108 containerd[1539]: time="2025-02-13T20:17:10.979071138Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 20:17:10.979108 containerd[1539]: time="2025-02-13T20:17:10.979086419Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 20:17:10.979315 containerd[1539]: time="2025-02-13T20:17:10.979228393Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 20:17:10.987646 containerd[1539]: time="2025-02-13T20:17:10.987390983Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 20:17:10.987646 containerd[1539]: time="2025-02-13T20:17:10.987446629Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 20:17:10.987646 containerd[1539]: time="2025-02-13T20:17:10.987461630Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 20:17:10.987646 containerd[1539]: time="2025-02-13T20:17:10.987536558Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 20:17:11.012521 containerd[1539]: time="2025-02-13T20:17:11.012466304Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-7plvd,Uid:97bb4c5b-b519-476d-9b98-54fe5b11faf2,Namespace:kube-system,Attempt:0,} returns sandbox id \"ace09b8b8bcae31e235f0eb22e9cdd97c5474a72089b21861de05b6a187dcdc7\"" Feb 13 20:17:11.013381 kubelet[2679]: E0213 20:17:11.013299 2679 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 20:17:11.021985 containerd[1539]: time="2025-02-13T20:17:11.021942564Z" level=info msg="CreateContainer within sandbox \"ace09b8b8bcae31e235f0eb22e9cdd97c5474a72089b21861de05b6a187dcdc7\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Feb 13 20:17:11.033331 containerd[1539]: time="2025-02-13T20:17:11.033289675Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-flannel-ds-ftfqg,Uid:36c00cd4-9622-461d-ac3c-90892608fdc2,Namespace:kube-flannel,Attempt:0,} returns sandbox id \"b618db7db9f56895fd4f0e241d2d4f6a7f21c944d5d25cf91d67d2ed86703da0\"" Feb 13 20:17:11.034111 kubelet[2679]: E0213 20:17:11.034083 2679 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 20:17:11.035461 containerd[1539]: time="2025-02-13T20:17:11.035333500Z" level=info msg="PullImage \"docker.io/flannel/flannel-cni-plugin:v1.1.2\"" Feb 13 20:17:11.044984 containerd[1539]: time="2025-02-13T20:17:11.044933492Z" level=info msg="CreateContainer within sandbox \"ace09b8b8bcae31e235f0eb22e9cdd97c5474a72089b21861de05b6a187dcdc7\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"606af375609c1b98cc3e61e937cd6e72f640f6a83c0b6b7a42bc02be8805943c\"" Feb 13 20:17:11.045536 containerd[1539]: time="2025-02-13T20:17:11.045482982Z" level=info msg="StartContainer for \"606af375609c1b98cc3e61e937cd6e72f640f6a83c0b6b7a42bc02be8805943c\"" Feb 13 20:17:11.097504 containerd[1539]: time="2025-02-13T20:17:11.097369493Z" level=info msg="StartContainer for \"606af375609c1b98cc3e61e937cd6e72f640f6a83c0b6b7a42bc02be8805943c\" returns successfully" Feb 13 20:17:11.942155 kubelet[2679]: E0213 20:17:11.941402 2679 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 20:17:11.951668 kubelet[2679]: I0213 20:17:11.951547 2679 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-7plvd" 
podStartSLOduration=1.951530289 podStartE2EDuration="1.951530289s" podCreationTimestamp="2025-02-13 20:17:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-02-13 20:17:11.950276776 +0000 UTC m=+16.132356950" watchObservedRunningTime="2025-02-13 20:17:11.951530289 +0000 UTC m=+16.133610463" Feb 13 20:17:12.171840 containerd[1539]: time="2025-02-13T20:17:12.171787121Z" level=error msg="PullImage \"docker.io/flannel/flannel-cni-plugin:v1.1.2\" failed" error="failed to pull and unpack image \"docker.io/flannel/flannel-cni-plugin:v1.1.2\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/flannel/flannel-cni-plugin/manifests/sha256:bf4b62b131666d040f35a327d906ee5a3418280b68a88d9b9c7e828057210443: 429 Too Many Requests - Server message: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit" Feb 13 20:17:12.172309 containerd[1539]: time="2025-02-13T20:17:12.171866688Z" level=info msg="stop pulling image docker.io/flannel/flannel-cni-plugin:v1.1.2: active requests=0, bytes read=11110" Feb 13 20:17:12.172351 kubelet[2679]: E0213 20:17:12.172028 2679 remote_image.go:180] "PullImage from image service failed" err="rpc error: code = Unknown desc = failed to pull and unpack image \"docker.io/flannel/flannel-cni-plugin:v1.1.2\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/flannel/flannel-cni-plugin/manifests/sha256:bf4b62b131666d040f35a327d906ee5a3418280b68a88d9b9c7e828057210443: 429 Too Many Requests - Server message: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit" image="docker.io/flannel/flannel-cni-plugin:v1.1.2" Feb 13 20:17:12.172351 kubelet[2679]: E0213 20:17:12.172092 2679 kuberuntime_image.go:55] "Failed to pull image" err="failed to pull and unpack image \"docker.io/flannel/flannel-cni-plugin:v1.1.2\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/flannel/flannel-cni-plugin/manifests/sha256:bf4b62b131666d040f35a327d906ee5a3418280b68a88d9b9c7e828057210443: 429 Too Many Requests - Server message: toomanyrequests: You have reached your pull rate limit. 
You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit" image="docker.io/flannel/flannel-cni-plugin:v1.1.2" Feb 13 20:17:12.172448 kubelet[2679]: E0213 20:17:12.172291 2679 kuberuntime_manager.go:1256] init container &Container{Name:install-cni-plugin,Image:docker.io/flannel/flannel-cni-plugin:v1.1.2,Command:[cp],Args:[-f /flannel /opt/cni/bin/flannel],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:cni-plugin,ReadOnly:false,MountPath:/opt/cni/bin,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-sdrbt,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod kube-flannel-ds-ftfqg_kube-flannel(36c00cd4-9622-461d-ac3c-90892608fdc2): ErrImagePull: failed to pull and unpack image "docker.io/flannel/flannel-cni-plugin:v1.1.2": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/flannel/flannel-cni-plugin/manifests/sha256:bf4b62b131666d040f35a327d906ee5a3418280b68a88d9b9c7e828057210443: 429 Too Many Requests - Server message: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit Feb 13 20:17:12.172498 kubelet[2679]: E0213 20:17:12.172324 2679 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"install-cni-plugin\" with ErrImagePull: \"failed to pull and unpack image \\\"docker.io/flannel/flannel-cni-plugin:v1.1.2\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/flannel/flannel-cni-plugin/manifests/sha256:bf4b62b131666d040f35a327d906ee5a3418280b68a88d9b9c7e828057210443: 429 Too Many Requests - Server message: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit\"" pod="kube-flannel/kube-flannel-ds-ftfqg" podUID="36c00cd4-9622-461d-ac3c-90892608fdc2" Feb 13 20:17:12.942460 kubelet[2679]: E0213 20:17:12.942431 2679 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 20:17:12.943293 kubelet[2679]: E0213 20:17:12.943239 2679 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"install-cni-plugin\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/flannel/flannel-cni-plugin:v1.1.2\\\"\"" pod="kube-flannel/kube-flannel-ds-ftfqg" podUID="36c00cd4-9622-461d-ac3c-90892608fdc2" Feb 13 20:17:21.333545 systemd[1]: Started sshd@5-10.0.0.6:22-10.0.0.1:47696.service - OpenSSH per-connection server daemon (10.0.0.1:47696). 
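[Editor's note] The flannel pulls above fail with HTTP 429 from registry-1.docker.io: anonymous pulls against Docker Hub are rate-limited per source IP. Docker documents a way to inspect the current quota without consuming a pull, by fetching an anonymous token and issuing a HEAD request for a special test manifest; the ratelimit-limit and ratelimit-remaining response headers carry the quota. A sketch along those documented lines (endpoints and header names as published by Docker; verify against current documentation before relying on them):

```python
# Sketch: query Docker Hub's anonymous pull-rate quota per Docker's
# documented procedure (a HEAD on the ratelimitpreset/test manifest does
# not count against the limit). Endpoints/headers are Docker's published
# ones, but double-check current docs before depending on this.
import json
import urllib.request

TOKEN_URL = ("https://auth.docker.io/token"
             "?service=registry.docker.io"
             "&scope=repository:ratelimitpreset/test:pull")
MANIFEST_URL = "https://registry-1.docker.io/v2/ratelimitpreset/test/manifests/latest"

with urllib.request.urlopen(TOKEN_URL) as resp:
    token = json.load(resp)["token"]

req = urllib.request.Request(MANIFEST_URL, method="HEAD",
                             headers={"Authorization": f"Bearer {token}"})
with urllib.request.urlopen(req) as resp:
    # e.g. "100;w=21600" means 100 pulls per 21600-second (6-hour) window
    print("limit:    ", resp.headers.get("ratelimit-limit"))
    print("remaining:", resp.headers.get("ratelimit-remaining"))
```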
Feb 13 20:17:21.366969 sshd[3002]: Accepted publickey for core from 10.0.0.1 port 47696 ssh2: RSA SHA256:LaFXDQ5kJ4LcD/Es1CLb+6ve3k3017804ZdZMiXaUQg Feb 13 20:17:21.368166 sshd[3002]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 20:17:21.372186 systemd-logind[1521]: New session 6 of user core. Feb 13 20:17:21.384532 systemd[1]: Started session-6.scope - Session 6 of User core. Feb 13 20:17:21.501033 sshd[3002]: pam_unix(sshd:session): session closed for user core Feb 13 20:17:21.504476 systemd[1]: sshd@5-10.0.0.6:22-10.0.0.1:47696.service: Deactivated successfully. Feb 13 20:17:21.506749 systemd-logind[1521]: Session 6 logged out. Waiting for processes to exit. Feb 13 20:17:21.507360 systemd[1]: session-6.scope: Deactivated successfully. Feb 13 20:17:21.508655 systemd-logind[1521]: Removed session 6. Feb 13 20:17:23.907895 kubelet[2679]: E0213 20:17:23.906178 2679 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 20:17:23.908308 containerd[1539]: time="2025-02-13T20:17:23.907472089Z" level=info msg="PullImage \"docker.io/flannel/flannel-cni-plugin:v1.1.2\"" Feb 13 20:17:25.022067 containerd[1539]: time="2025-02-13T20:17:25.022006937Z" level=error msg="PullImage \"docker.io/flannel/flannel-cni-plugin:v1.1.2\" failed" error="failed to pull and unpack image \"docker.io/flannel/flannel-cni-plugin:v1.1.2\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/flannel/flannel-cni-plugin/manifests/sha256:bf4b62b131666d040f35a327d906ee5a3418280b68a88d9b9c7e828057210443: 429 Too Many Requests - Server message: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit" Feb 13 20:17:25.022534 containerd[1539]: time="2025-02-13T20:17:25.022093300Z" level=info msg="stop pulling image docker.io/flannel/flannel-cni-plugin:v1.1.2: active requests=0, bytes read=11110" Feb 13 20:17:25.022825 kubelet[2679]: E0213 20:17:25.022618 2679 remote_image.go:180] "PullImage from image service failed" err="rpc error: code = Unknown desc = failed to pull and unpack image \"docker.io/flannel/flannel-cni-plugin:v1.1.2\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/flannel/flannel-cni-plugin/manifests/sha256:bf4b62b131666d040f35a327d906ee5a3418280b68a88d9b9c7e828057210443: 429 Too Many Requests - Server message: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit" image="docker.io/flannel/flannel-cni-plugin:v1.1.2" Feb 13 20:17:25.022825 kubelet[2679]: E0213 20:17:25.022669 2679 kuberuntime_image.go:55] "Failed to pull image" err="failed to pull and unpack image \"docker.io/flannel/flannel-cni-plugin:v1.1.2\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/flannel/flannel-cni-plugin/manifests/sha256:bf4b62b131666d040f35a327d906ee5a3418280b68a88d9b9c7e828057210443: 429 Too Many Requests - Server message: toomanyrequests: You have reached your pull rate limit. 
You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit" image="docker.io/flannel/flannel-cni-plugin:v1.1.2" Feb 13 20:17:25.023124 kubelet[2679]: E0213 20:17:25.022763 2679 kuberuntime_manager.go:1256] init container &Container{Name:install-cni-plugin,Image:docker.io/flannel/flannel-cni-plugin:v1.1.2,Command:[cp],Args:[-f /flannel /opt/cni/bin/flannel],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:cni-plugin,ReadOnly:false,MountPath:/opt/cni/bin,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-sdrbt,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod kube-flannel-ds-ftfqg_kube-flannel(36c00cd4-9622-461d-ac3c-90892608fdc2): ErrImagePull: failed to pull and unpack image "docker.io/flannel/flannel-cni-plugin:v1.1.2": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/flannel/flannel-cni-plugin/manifests/sha256:bf4b62b131666d040f35a327d906ee5a3418280b68a88d9b9c7e828057210443: 429 Too Many Requests - Server message: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit Feb 13 20:17:25.023224 kubelet[2679]: E0213 20:17:25.022792 2679 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"install-cni-plugin\" with ErrImagePull: \"failed to pull and unpack image \\\"docker.io/flannel/flannel-cni-plugin:v1.1.2\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/flannel/flannel-cni-plugin/manifests/sha256:bf4b62b131666d040f35a327d906ee5a3418280b68a88d9b9c7e828057210443: 429 Too Many Requests - Server message: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit\"" pod="kube-flannel/kube-flannel-ds-ftfqg" podUID="36c00cd4-9622-461d-ac3c-90892608fdc2" Feb 13 20:17:26.524507 systemd[1]: Started sshd@6-10.0.0.6:22-10.0.0.1:60488.service - OpenSSH per-connection server daemon (10.0.0.1:60488). Feb 13 20:17:26.557587 sshd[3018]: Accepted publickey for core from 10.0.0.1 port 60488 ssh2: RSA SHA256:LaFXDQ5kJ4LcD/Es1CLb+6ve3k3017804ZdZMiXaUQg Feb 13 20:17:26.558691 sshd[3018]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 20:17:26.562432 systemd-logind[1521]: New session 7 of user core. Feb 13 20:17:26.576435 systemd[1]: Started session-7.scope - Session 7 of User core. Feb 13 20:17:26.680885 sshd[3018]: pam_unix(sshd:session): session closed for user core Feb 13 20:17:26.684439 systemd[1]: sshd@6-10.0.0.6:22-10.0.0.1:60488.service: Deactivated successfully. Feb 13 20:17:26.686719 systemd-logind[1521]: Session 7 logged out. Waiting for processes to exit. Feb 13 20:17:26.686729 systemd[1]: session-7.scope: Deactivated successfully. 
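[Editor's note] After each ErrImagePull the pod flips to ImagePullBackOff rather than retrying immediately: kubelet spaces image-pull retries with an exponential backoff, which is why the pull attempts in this log land roughly at 20:17:11, 20:17:23 and 20:17:52 with widening gaps. A toy model of that retry spacing; the 10s initial / factor-2 / 300s cap parameters are kubelet's commonly cited defaults quoted from memory, so treat them as an assumption:

```python
# Toy model of kubelet's image-pull backoff: exponential with a cap.
# The defaults (10s initial, factor 2, 300s cap) are an assumption based
# on kubelet's commonly documented behavior, not read from this system.
def backoff_delays(initial=10.0, factor=2.0, cap=300.0, attempts=6):
    delay = initial
    for _ in range(attempts):
        yield delay
        delay = min(delay * factor, cap)

print(list(backoff_delays()))  # [10.0, 20.0, 40.0, 80.0, 160.0, 300.0]
```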
Feb 13 20:17:26.688242 systemd-logind[1521]: Removed session 7. Feb 13 20:17:31.697673 systemd[1]: Started sshd@7-10.0.0.6:22-10.0.0.1:60494.service - OpenSSH per-connection server daemon (10.0.0.1:60494). Feb 13 20:17:31.729332 sshd[3034]: Accepted publickey for core from 10.0.0.1 port 60494 ssh2: RSA SHA256:LaFXDQ5kJ4LcD/Es1CLb+6ve3k3017804ZdZMiXaUQg Feb 13 20:17:31.730510 sshd[3034]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 20:17:31.734448 systemd-logind[1521]: New session 8 of user core. Feb 13 20:17:31.745449 systemd[1]: Started session-8.scope - Session 8 of User core. Feb 13 20:17:31.853478 sshd[3034]: pam_unix(sshd:session): session closed for user core Feb 13 20:17:31.856997 systemd[1]: sshd@7-10.0.0.6:22-10.0.0.1:60494.service: Deactivated successfully. Feb 13 20:17:31.859907 systemd[1]: session-8.scope: Deactivated successfully. Feb 13 20:17:31.860608 systemd-logind[1521]: Session 8 logged out. Waiting for processes to exit. Feb 13 20:17:31.861414 systemd-logind[1521]: Removed session 8. Feb 13 20:17:36.876503 systemd[1]: Started sshd@8-10.0.0.6:22-10.0.0.1:32890.service - OpenSSH per-connection server daemon (10.0.0.1:32890). Feb 13 20:17:36.908479 sshd[3052]: Accepted publickey for core from 10.0.0.1 port 32890 ssh2: RSA SHA256:LaFXDQ5kJ4LcD/Es1CLb+6ve3k3017804ZdZMiXaUQg Feb 13 20:17:36.909615 sshd[3052]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 20:17:36.913271 systemd-logind[1521]: New session 9 of user core. Feb 13 20:17:36.926460 systemd[1]: Started session-9.scope - Session 9 of User core. Feb 13 20:17:37.038420 sshd[3052]: pam_unix(sshd:session): session closed for user core Feb 13 20:17:37.041033 systemd[1]: sshd@8-10.0.0.6:22-10.0.0.1:32890.service: Deactivated successfully. Feb 13 20:17:37.043519 systemd-logind[1521]: Session 9 logged out. Waiting for processes to exit. Feb 13 20:17:37.043704 systemd[1]: session-9.scope: Deactivated successfully. Feb 13 20:17:37.044825 systemd-logind[1521]: Removed session 9. Feb 13 20:17:38.905388 kubelet[2679]: E0213 20:17:38.905287 2679 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 20:17:38.906062 kubelet[2679]: E0213 20:17:38.906020 2679 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"install-cni-plugin\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/flannel/flannel-cni-plugin:v1.1.2\\\"\"" pod="kube-flannel/kube-flannel-ds-ftfqg" podUID="36c00cd4-9622-461d-ac3c-90892608fdc2" Feb 13 20:17:42.047483 systemd[1]: Started sshd@9-10.0.0.6:22-10.0.0.1:32900.service - OpenSSH per-connection server daemon (10.0.0.1:32900). Feb 13 20:17:42.081704 sshd[3070]: Accepted publickey for core from 10.0.0.1 port 32900 ssh2: RSA SHA256:LaFXDQ5kJ4LcD/Es1CLb+6ve3k3017804ZdZMiXaUQg Feb 13 20:17:42.082942 sshd[3070]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 20:17:42.086840 systemd-logind[1521]: New session 10 of user core. Feb 13 20:17:42.096471 systemd[1]: Started session-10.scope - Session 10 of User core. Feb 13 20:17:42.200727 sshd[3070]: pam_unix(sshd:session): session closed for user core Feb 13 20:17:42.203937 systemd-logind[1521]: Session 10 logged out. Waiting for processes to exit. Feb 13 20:17:42.204201 systemd[1]: sshd@9-10.0.0.6:22-10.0.0.1:32900.service: Deactivated successfully. 
Feb 13 20:17:42.205822 systemd[1]: session-10.scope: Deactivated successfully. Feb 13 20:17:42.207508 systemd-logind[1521]: Removed session 10. Feb 13 20:17:47.211465 systemd[1]: Started sshd@10-10.0.0.6:22-10.0.0.1:60794.service - OpenSSH per-connection server daemon (10.0.0.1:60794). Feb 13 20:17:47.244251 sshd[3086]: Accepted publickey for core from 10.0.0.1 port 60794 ssh2: RSA SHA256:LaFXDQ5kJ4LcD/Es1CLb+6ve3k3017804ZdZMiXaUQg Feb 13 20:17:47.244785 sshd[3086]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 20:17:47.248098 systemd-logind[1521]: New session 11 of user core. Feb 13 20:17:47.257524 systemd[1]: Started session-11.scope - Session 11 of User core. Feb 13 20:17:47.366442 sshd[3086]: pam_unix(sshd:session): session closed for user core Feb 13 20:17:47.369717 systemd[1]: sshd@10-10.0.0.6:22-10.0.0.1:60794.service: Deactivated successfully. Feb 13 20:17:47.371752 systemd[1]: session-11.scope: Deactivated successfully. Feb 13 20:17:47.372293 systemd-logind[1521]: Session 11 logged out. Waiting for processes to exit. Feb 13 20:17:47.373104 systemd-logind[1521]: Removed session 11. Feb 13 20:17:52.378466 systemd[1]: Started sshd@11-10.0.0.6:22-10.0.0.1:60800.service - OpenSSH per-connection server daemon (10.0.0.1:60800). Feb 13 20:17:52.410815 sshd[3102]: Accepted publickey for core from 10.0.0.1 port 60800 ssh2: RSA SHA256:LaFXDQ5kJ4LcD/Es1CLb+6ve3k3017804ZdZMiXaUQg Feb 13 20:17:52.411969 sshd[3102]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 20:17:52.415779 systemd-logind[1521]: New session 12 of user core. Feb 13 20:17:52.422439 systemd[1]: Started session-12.scope - Session 12 of User core. Feb 13 20:17:52.524399 sshd[3102]: pam_unix(sshd:session): session closed for user core Feb 13 20:17:52.527882 systemd[1]: sshd@11-10.0.0.6:22-10.0.0.1:60800.service: Deactivated successfully. Feb 13 20:17:52.529636 systemd-logind[1521]: Session 12 logged out. Waiting for processes to exit. Feb 13 20:17:52.529786 systemd[1]: session-12.scope: Deactivated successfully. Feb 13 20:17:52.531048 systemd-logind[1521]: Removed session 12. Feb 13 20:17:52.905843 kubelet[2679]: E0213 20:17:52.905803 2679 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 20:17:52.909412 containerd[1539]: time="2025-02-13T20:17:52.908778668Z" level=info msg="PullImage \"docker.io/flannel/flannel-cni-plugin:v1.1.2\"" Feb 13 20:17:54.041871 containerd[1539]: time="2025-02-13T20:17:54.041770260Z" level=error msg="PullImage \"docker.io/flannel/flannel-cni-plugin:v1.1.2\" failed" error="failed to pull and unpack image \"docker.io/flannel/flannel-cni-plugin:v1.1.2\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/flannel/flannel-cni-plugin/manifests/sha256:bf4b62b131666d040f35a327d906ee5a3418280b68a88d9b9c7e828057210443: 429 Too Many Requests - Server message: toomanyrequests: You have reached your pull rate limit. 
You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit" Feb 13 20:17:54.041871 containerd[1539]: time="2025-02-13T20:17:54.041793300Z" level=info msg="stop pulling image docker.io/flannel/flannel-cni-plugin:v1.1.2: active requests=0, bytes read=11110" Feb 13 20:17:54.042453 kubelet[2679]: E0213 20:17:54.041929 2679 remote_image.go:180] "PullImage from image service failed" err="rpc error: code = Unknown desc = failed to pull and unpack image \"docker.io/flannel/flannel-cni-plugin:v1.1.2\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/flannel/flannel-cni-plugin/manifests/sha256:bf4b62b131666d040f35a327d906ee5a3418280b68a88d9b9c7e828057210443: 429 Too Many Requests - Server message: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit" image="docker.io/flannel/flannel-cni-plugin:v1.1.2" Feb 13 20:17:54.042453 kubelet[2679]: E0213 20:17:54.041970 2679 kuberuntime_image.go:55] "Failed to pull image" err="failed to pull and unpack image \"docker.io/flannel/flannel-cni-plugin:v1.1.2\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/flannel/flannel-cni-plugin/manifests/sha256:bf4b62b131666d040f35a327d906ee5a3418280b68a88d9b9c7e828057210443: 429 Too Many Requests - Server message: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit" image="docker.io/flannel/flannel-cni-plugin:v1.1.2" Feb 13 20:17:54.042722 kubelet[2679]: E0213 20:17:54.042051 2679 kuberuntime_manager.go:1256] init container &Container{Name:install-cni-plugin,Image:docker.io/flannel/flannel-cni-plugin:v1.1.2,Command:[cp],Args:[-f /flannel /opt/cni/bin/flannel],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:cni-plugin,ReadOnly:false,MountPath:/opt/cni/bin,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-sdrbt,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod kube-flannel-ds-ftfqg_kube-flannel(36c00cd4-9622-461d-ac3c-90892608fdc2): ErrImagePull: failed to pull and unpack image "docker.io/flannel/flannel-cni-plugin:v1.1.2": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/flannel/flannel-cni-plugin/manifests/sha256:bf4b62b131666d040f35a327d906ee5a3418280b68a88d9b9c7e828057210443: 429 Too Many Requests - Server message: toomanyrequests: You have reached your pull rate limit. 
You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit Feb 13 20:17:54.042775 kubelet[2679]: E0213 20:17:54.042088 2679 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"install-cni-plugin\" with ErrImagePull: \"failed to pull and unpack image \\\"docker.io/flannel/flannel-cni-plugin:v1.1.2\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/flannel/flannel-cni-plugin/manifests/sha256:bf4b62b131666d040f35a327d906ee5a3418280b68a88d9b9c7e828057210443: 429 Too Many Requests - Server message: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit\"" pod="kube-flannel/kube-flannel-ds-ftfqg" podUID="36c00cd4-9622-461d-ac3c-90892608fdc2" Feb 13 20:17:57.535443 systemd[1]: Started sshd@12-10.0.0.6:22-10.0.0.1:51950.service - OpenSSH per-connection server daemon (10.0.0.1:51950). Feb 13 20:17:57.567318 sshd[3120]: Accepted publickey for core from 10.0.0.1 port 51950 ssh2: RSA SHA256:LaFXDQ5kJ4LcD/Es1CLb+6ve3k3017804ZdZMiXaUQg Feb 13 20:17:57.568480 sshd[3120]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 20:17:57.571951 systemd-logind[1521]: New session 13 of user core. Feb 13 20:17:57.578589 systemd[1]: Started session-13.scope - Session 13 of User core. Feb 13 20:17:57.684926 sshd[3120]: pam_unix(sshd:session): session closed for user core Feb 13 20:17:57.688143 systemd[1]: sshd@12-10.0.0.6:22-10.0.0.1:51950.service: Deactivated successfully. Feb 13 20:17:57.689981 systemd-logind[1521]: Session 13 logged out. Waiting for processes to exit. Feb 13 20:17:57.690059 systemd[1]: session-13.scope: Deactivated successfully. Feb 13 20:17:57.691347 systemd-logind[1521]: Removed session 13. Feb 13 20:18:02.700589 systemd[1]: Started sshd@13-10.0.0.6:22-10.0.0.1:39678.service - OpenSSH per-connection server daemon (10.0.0.1:39678). Feb 13 20:18:02.733465 sshd[3136]: Accepted publickey for core from 10.0.0.1 port 39678 ssh2: RSA SHA256:LaFXDQ5kJ4LcD/Es1CLb+6ve3k3017804ZdZMiXaUQg Feb 13 20:18:02.734884 sshd[3136]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 20:18:02.738642 systemd-logind[1521]: New session 14 of user core. Feb 13 20:18:02.755477 systemd[1]: Started session-14.scope - Session 14 of User core. Feb 13 20:18:02.859516 sshd[3136]: pam_unix(sshd:session): session closed for user core Feb 13 20:18:02.862076 systemd[1]: sshd@13-10.0.0.6:22-10.0.0.1:39678.service: Deactivated successfully. Feb 13 20:18:02.864447 systemd-logind[1521]: Session 14 logged out. Waiting for processes to exit. Feb 13 20:18:02.864587 systemd[1]: session-14.scope: Deactivated successfully. Feb 13 20:18:02.866075 systemd-logind[1521]: Removed session 14. Feb 13 20:18:04.905443 kubelet[2679]: E0213 20:18:04.905385 2679 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 20:18:07.874453 systemd[1]: Started sshd@14-10.0.0.6:22-10.0.0.1:39692.service - OpenSSH per-connection server daemon (10.0.0.1:39692). 
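The 429 responses from registry-1.docker.io above mean this node has exhausted Docker Hub's anonymous pull quota, so every retry of the flannel-cni-plugin pull fails the same way. One quick way to see how much quota remains is Docker's documented probe repository; the sketch below assumes the token endpoint, probe manifest URL, and ratelimit-* headers described in Docker Hub's rate-limit documentation, and uses a HEAD request, which that documentation says does not count against the quota:

```python
# Probe Docker Hub's anonymous pull quota from this host. Endpoints and the
# ratelimit-* headers follow Docker's published rate-limit documentation;
# treat them as assumptions if that documentation has since changed.
import json
import urllib.error
import urllib.request

TOKEN_URL = ("https://auth.docker.io/token"
             "?service=registry.docker.io"
             "&scope=repository:ratelimitpreview/test:pull")
PROBE_URL = "https://registry-1.docker.io/v2/ratelimitpreview/test/manifests/latest"

def remaining_pulls():
    # Anonymous bearer token scoped to the documented probe repository.
    with urllib.request.urlopen(TOKEN_URL) as resp:
        token = json.load(resp)["token"]
    req = urllib.request.Request(PROBE_URL, method="HEAD")
    req.add_header("Authorization", "Bearer " + token)
    try:
        with urllib.request.urlopen(req) as resp:
            headers = resp.headers
    except urllib.error.HTTPError as err:
        headers = err.headers  # a 429 response still carries the headers
    # Values look like "100;w=21600": pulls allowed per 21600-second window.
    return headers.get("ratelimit-limit"), headers.get("ratelimit-remaining")

if __name__ == "__main__":
    limit, remaining = remaining_pulls()
    print(f"limit={limit} remaining={remaining}")
```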
Feb 13 20:18:07.905155 kubelet[2679]: E0213 20:18:07.905113 2679 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 20:18:07.906331 kubelet[2679]: E0213 20:18:07.906298 2679 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"install-cni-plugin\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/flannel/flannel-cni-plugin:v1.1.2\\\"\"" pod="kube-flannel/kube-flannel-ds-ftfqg" podUID="36c00cd4-9622-461d-ac3c-90892608fdc2" Feb 13 20:18:07.906468 sshd[3152]: Accepted publickey for core from 10.0.0.1 port 39692 ssh2: RSA SHA256:LaFXDQ5kJ4LcD/Es1CLb+6ve3k3017804ZdZMiXaUQg Feb 13 20:18:07.907820 sshd[3152]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 20:18:07.913114 systemd-logind[1521]: New session 15 of user core. Feb 13 20:18:07.920595 systemd[1]: Started session-15.scope - Session 15 of User core. Feb 13 20:18:08.025192 sshd[3152]: pam_unix(sshd:session): session closed for user core Feb 13 20:18:08.028382 systemd[1]: sshd@14-10.0.0.6:22-10.0.0.1:39692.service: Deactivated successfully. Feb 13 20:18:08.031089 systemd[1]: session-15.scope: Deactivated successfully. Feb 13 20:18:08.031544 systemd-logind[1521]: Session 15 logged out. Waiting for processes to exit. Feb 13 20:18:08.032539 systemd-logind[1521]: Removed session 15. Feb 13 20:18:13.035445 systemd[1]: Started sshd@15-10.0.0.6:22-10.0.0.1:35066.service - OpenSSH per-connection server daemon (10.0.0.1:35066). Feb 13 20:18:13.067057 sshd[3171]: Accepted publickey for core from 10.0.0.1 port 35066 ssh2: RSA SHA256:LaFXDQ5kJ4LcD/Es1CLb+6ve3k3017804ZdZMiXaUQg Feb 13 20:18:13.068180 sshd[3171]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 20:18:13.072175 systemd-logind[1521]: New session 16 of user core. Feb 13 20:18:13.080454 systemd[1]: Started session-16.scope - Session 16 of User core. Feb 13 20:18:13.180810 sshd[3171]: pam_unix(sshd:session): session closed for user core Feb 13 20:18:13.183258 systemd[1]: sshd@15-10.0.0.6:22-10.0.0.1:35066.service: Deactivated successfully. Feb 13 20:18:13.185782 systemd[1]: session-16.scope: Deactivated successfully. Feb 13 20:18:13.186420 systemd-logind[1521]: Session 16 logged out. Waiting for processes to exit. Feb 13 20:18:13.187329 systemd-logind[1521]: Removed session 16. Feb 13 20:18:13.905708 kubelet[2679]: E0213 20:18:13.905667 2679 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 20:18:18.195428 systemd[1]: Started sshd@16-10.0.0.6:22-10.0.0.1:35074.service - OpenSSH per-connection server daemon (10.0.0.1:35074). Feb 13 20:18:18.228231 sshd[3187]: Accepted publickey for core from 10.0.0.1 port 35074 ssh2: RSA SHA256:LaFXDQ5kJ4LcD/Es1CLb+6ve3k3017804ZdZMiXaUQg Feb 13 20:18:18.229341 sshd[3187]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 20:18:18.233159 systemd-logind[1521]: New session 17 of user core. Feb 13 20:18:18.250534 systemd[1]: Started session-17.scope - Session 17 of User core. Feb 13 20:18:18.359559 sshd[3187]: pam_unix(sshd:session): session closed for user core Feb 13 20:18:18.362736 systemd[1]: sshd@16-10.0.0.6:22-10.0.0.1:35074.service: Deactivated successfully. Feb 13 20:18:18.364603 systemd-logind[1521]: Session 17 logged out. 
Waiting for processes to exit. Feb 13 20:18:18.364681 systemd[1]: session-17.scope: Deactivated successfully. Feb 13 20:18:18.365520 systemd-logind[1521]: Removed session 17. Feb 13 20:18:18.906279 kubelet[2679]: E0213 20:18:18.906176 2679 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 20:18:18.907632 kubelet[2679]: E0213 20:18:18.907590 2679 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"install-cni-plugin\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/flannel/flannel-cni-plugin:v1.1.2\\\"\"" pod="kube-flannel/kube-flannel-ds-ftfqg" podUID="36c00cd4-9622-461d-ac3c-90892608fdc2" Feb 13 20:18:19.905583 kubelet[2679]: E0213 20:18:19.905478 2679 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 20:18:23.370441 systemd[1]: Started sshd@17-10.0.0.6:22-10.0.0.1:51866.service - OpenSSH per-connection server daemon (10.0.0.1:51866). Feb 13 20:18:23.403022 sshd[3205]: Accepted publickey for core from 10.0.0.1 port 51866 ssh2: RSA SHA256:LaFXDQ5kJ4LcD/Es1CLb+6ve3k3017804ZdZMiXaUQg Feb 13 20:18:23.404178 sshd[3205]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 20:18:23.407728 systemd-logind[1521]: New session 18 of user core. Feb 13 20:18:23.417538 systemd[1]: Started session-18.scope - Session 18 of User core. Feb 13 20:18:23.523668 sshd[3205]: pam_unix(sshd:session): session closed for user core Feb 13 20:18:23.526824 systemd[1]: sshd@17-10.0.0.6:22-10.0.0.1:51866.service: Deactivated successfully. Feb 13 20:18:23.528712 systemd-logind[1521]: Session 18 logged out. Waiting for processes to exit. Feb 13 20:18:23.528800 systemd[1]: session-18.scope: Deactivated successfully. Feb 13 20:18:23.529606 systemd-logind[1521]: Removed session 18. Feb 13 20:18:25.905374 kubelet[2679]: E0213 20:18:25.905273 2679 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 20:18:28.534440 systemd[1]: Started sshd@18-10.0.0.6:22-10.0.0.1:51874.service - OpenSSH per-connection server daemon (10.0.0.1:51874). Feb 13 20:18:28.566096 sshd[3222]: Accepted publickey for core from 10.0.0.1 port 51874 ssh2: RSA SHA256:LaFXDQ5kJ4LcD/Es1CLb+6ve3k3017804ZdZMiXaUQg Feb 13 20:18:28.567256 sshd[3222]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 20:18:28.571156 systemd-logind[1521]: New session 19 of user core. Feb 13 20:18:28.581532 systemd[1]: Started session-19.scope - Session 19 of User core. Feb 13 20:18:28.685963 sshd[3222]: pam_unix(sshd:session): session closed for user core Feb 13 20:18:28.688452 systemd[1]: sshd@18-10.0.0.6:22-10.0.0.1:51874.service: Deactivated successfully. Feb 13 20:18:28.690952 systemd[1]: session-19.scope: Deactivated successfully. Feb 13 20:18:28.691000 systemd-logind[1521]: Session 19 logged out. Waiting for processes to exit. Feb 13 20:18:28.693740 systemd-logind[1521]: Removed session 19. Feb 13 20:18:33.703543 systemd[1]: Started sshd@19-10.0.0.6:22-10.0.0.1:45292.service - OpenSSH per-connection server daemon (10.0.0.1:45292). 
Feb 13 20:18:33.736077 sshd[3238]: Accepted publickey for core from 10.0.0.1 port 45292 ssh2: RSA SHA256:LaFXDQ5kJ4LcD/Es1CLb+6ve3k3017804ZdZMiXaUQg Feb 13 20:18:33.737262 sshd[3238]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 20:18:33.740487 systemd-logind[1521]: New session 20 of user core. Feb 13 20:18:33.750557 systemd[1]: Started session-20.scope - Session 20 of User core. Feb 13 20:18:33.852063 sshd[3238]: pam_unix(sshd:session): session closed for user core Feb 13 20:18:33.855156 systemd[1]: sshd@19-10.0.0.6:22-10.0.0.1:45292.service: Deactivated successfully. Feb 13 20:18:33.856974 systemd-logind[1521]: Session 20 logged out. Waiting for processes to exit. Feb 13 20:18:33.857031 systemd[1]: session-20.scope: Deactivated successfully. Feb 13 20:18:33.857941 systemd-logind[1521]: Removed session 20. Feb 13 20:18:33.906002 kubelet[2679]: E0213 20:18:33.905969 2679 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 20:18:33.907861 kubelet[2679]: E0213 20:18:33.907563 2679 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"install-cni-plugin\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/flannel/flannel-cni-plugin:v1.1.2\\\"\"" pod="kube-flannel/kube-flannel-ds-ftfqg" podUID="36c00cd4-9622-461d-ac3c-90892608fdc2" Feb 13 20:18:38.864441 systemd[1]: Started sshd@20-10.0.0.6:22-10.0.0.1:45298.service - OpenSSH per-connection server daemon (10.0.0.1:45298). Feb 13 20:18:38.896479 sshd[3255]: Accepted publickey for core from 10.0.0.1 port 45298 ssh2: RSA SHA256:LaFXDQ5kJ4LcD/Es1CLb+6ve3k3017804ZdZMiXaUQg Feb 13 20:18:38.897638 sshd[3255]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 20:18:38.901350 systemd-logind[1521]: New session 21 of user core. Feb 13 20:18:38.910439 systemd[1]: Started session-21.scope - Session 21 of User core. Feb 13 20:18:39.015781 sshd[3255]: pam_unix(sshd:session): session closed for user core Feb 13 20:18:39.018391 systemd[1]: sshd@20-10.0.0.6:22-10.0.0.1:45298.service: Deactivated successfully. Feb 13 20:18:39.020798 systemd-logind[1521]: Session 21 logged out. Waiting for processes to exit. Feb 13 20:18:39.021327 systemd[1]: session-21.scope: Deactivated successfully. Feb 13 20:18:39.022343 systemd-logind[1521]: Removed session 21. Feb 13 20:18:44.034429 systemd[1]: Started sshd@21-10.0.0.6:22-10.0.0.1:40398.service - OpenSSH per-connection server daemon (10.0.0.1:40398). Feb 13 20:18:44.065994 sshd[3274]: Accepted publickey for core from 10.0.0.1 port 40398 ssh2: RSA SHA256:LaFXDQ5kJ4LcD/Es1CLb+6ve3k3017804ZdZMiXaUQg Feb 13 20:18:44.067148 sshd[3274]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 20:18:44.070420 systemd-logind[1521]: New session 22 of user core. Feb 13 20:18:44.076455 systemd[1]: Started session-22.scope - Session 22 of User core. Feb 13 20:18:44.181035 sshd[3274]: pam_unix(sshd:session): session closed for user core Feb 13 20:18:44.184299 systemd[1]: sshd@21-10.0.0.6:22-10.0.0.1:40398.service: Deactivated successfully. Feb 13 20:18:44.186610 systemd-logind[1521]: Session 22 logged out. Waiting for processes to exit. Feb 13 20:18:44.187374 systemd[1]: session-22.scope: Deactivated successfully. Feb 13 20:18:44.188265 systemd-logind[1521]: Removed session 22. 
Feb 13 20:18:48.904773 kubelet[2679]: E0213 20:18:48.904734 2679 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 20:18:48.905605 containerd[1539]: time="2025-02-13T20:18:48.905564785Z" level=info msg="PullImage \"docker.io/flannel/flannel-cni-plugin:v1.1.2\"" Feb 13 20:18:49.192475 systemd[1]: Started sshd@22-10.0.0.6:22-10.0.0.1:40402.service - OpenSSH per-connection server daemon (10.0.0.1:40402). Feb 13 20:18:49.225018 sshd[3291]: Accepted publickey for core from 10.0.0.1 port 40402 ssh2: RSA SHA256:LaFXDQ5kJ4LcD/Es1CLb+6ve3k3017804ZdZMiXaUQg Feb 13 20:18:49.226189 sshd[3291]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 20:18:49.229853 systemd-logind[1521]: New session 23 of user core. Feb 13 20:18:49.238593 systemd[1]: Started session-23.scope - Session 23 of User core. Feb 13 20:18:49.341418 sshd[3291]: pam_unix(sshd:session): session closed for user core Feb 13 20:18:49.344009 systemd-logind[1521]: Session 23 logged out. Waiting for processes to exit. Feb 13 20:18:49.344247 systemd[1]: sshd@22-10.0.0.6:22-10.0.0.1:40402.service: Deactivated successfully. Feb 13 20:18:49.347049 systemd[1]: session-23.scope: Deactivated successfully. Feb 13 20:18:49.348087 systemd-logind[1521]: Removed session 23. Feb 13 20:18:50.018233 containerd[1539]: time="2025-02-13T20:18:50.018085049Z" level=error msg="PullImage \"docker.io/flannel/flannel-cni-plugin:v1.1.2\" failed" error="failed to pull and unpack image \"docker.io/flannel/flannel-cni-plugin:v1.1.2\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/flannel/flannel-cni-plugin/manifests/sha256:bf4b62b131666d040f35a327d906ee5a3418280b68a88d9b9c7e828057210443: 429 Too Many Requests - Server message: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit" Feb 13 20:18:50.018233 containerd[1539]: time="2025-02-13T20:18:50.018196609Z" level=info msg="stop pulling image docker.io/flannel/flannel-cni-plugin:v1.1.2: active requests=0, bytes read=11110" Feb 13 20:18:50.018651 kubelet[2679]: E0213 20:18:50.018299 2679 remote_image.go:180] "PullImage from image service failed" err="rpc error: code = Unknown desc = failed to pull and unpack image \"docker.io/flannel/flannel-cni-plugin:v1.1.2\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/flannel/flannel-cni-plugin/manifests/sha256:bf4b62b131666d040f35a327d906ee5a3418280b68a88d9b9c7e828057210443: 429 Too Many Requests - Server message: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit" image="docker.io/flannel/flannel-cni-plugin:v1.1.2" Feb 13 20:18:50.018651 kubelet[2679]: E0213 20:18:50.018353 2679 kuberuntime_image.go:55] "Failed to pull image" err="failed to pull and unpack image \"docker.io/flannel/flannel-cni-plugin:v1.1.2\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/flannel/flannel-cni-plugin/manifests/sha256:bf4b62b131666d040f35a327d906ee5a3418280b68a88d9b9c7e828057210443: 429 Too Many Requests - Server message: toomanyrequests: You have reached your pull rate limit. 
You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit" image="docker.io/flannel/flannel-cni-plugin:v1.1.2" Feb 13 20:18:50.018902 kubelet[2679]: E0213 20:18:50.018441 2679 kuberuntime_manager.go:1256] init container &Container{Name:install-cni-plugin,Image:docker.io/flannel/flannel-cni-plugin:v1.1.2,Command:[cp],Args:[-f /flannel /opt/cni/bin/flannel],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:cni-plugin,ReadOnly:false,MountPath:/opt/cni/bin,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-sdrbt,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod kube-flannel-ds-ftfqg_kube-flannel(36c00cd4-9622-461d-ac3c-90892608fdc2): ErrImagePull: failed to pull and unpack image "docker.io/flannel/flannel-cni-plugin:v1.1.2": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/flannel/flannel-cni-plugin/manifests/sha256:bf4b62b131666d040f35a327d906ee5a3418280b68a88d9b9c7e828057210443: 429 Too Many Requests - Server message: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit Feb 13 20:18:50.018959 kubelet[2679]: E0213 20:18:50.018471 2679 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"install-cni-plugin\" with ErrImagePull: \"failed to pull and unpack image \\\"docker.io/flannel/flannel-cni-plugin:v1.1.2\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/flannel/flannel-cni-plugin/manifests/sha256:bf4b62b131666d040f35a327d906ee5a3418280b68a88d9b9c7e828057210443: 429 Too Many Requests - Server message: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit\"" pod="kube-flannel/kube-flannel-ds-ftfqg" podUID="36c00cd4-9622-461d-ac3c-90892608fdc2" Feb 13 20:18:54.355631 systemd[1]: Started sshd@23-10.0.0.6:22-10.0.0.1:48926.service - OpenSSH per-connection server daemon (10.0.0.1:48926). Feb 13 20:18:54.387302 sshd[3308]: Accepted publickey for core from 10.0.0.1 port 48926 ssh2: RSA SHA256:LaFXDQ5kJ4LcD/Es1CLb+6ve3k3017804ZdZMiXaUQg Feb 13 20:18:54.388507 sshd[3308]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 20:18:54.392359 systemd-logind[1521]: New session 24 of user core. Feb 13 20:18:54.397459 systemd[1]: Started session-24.scope - Session 24 of User core. Feb 13 20:18:54.504023 sshd[3308]: pam_unix(sshd:session): session closed for user core Feb 13 20:18:54.507169 systemd[1]: sshd@23-10.0.0.6:22-10.0.0.1:48926.service: Deactivated successfully. Feb 13 20:18:54.509197 systemd-logind[1521]: Session 24 logged out. Waiting for processes to exit. Feb 13 20:18:54.509306 systemd[1]: session-24.scope: Deactivated successfully. 
Feb 13 20:18:54.510550 systemd-logind[1521]: Removed session 24. Feb 13 20:18:55.910203 kubelet[2679]: E0213 20:18:55.910167 2679 kubelet_node_status.go:456] "Node not becoming ready in time after startup" Feb 13 20:18:55.969109 kubelet[2679]: E0213 20:18:55.969059 2679 kubelet.go:2900] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Feb 13 20:18:59.513437 systemd[1]: Started sshd@24-10.0.0.6:22-10.0.0.1:48936.service - OpenSSH per-connection server daemon (10.0.0.1:48936). Feb 13 20:18:59.545175 sshd[3326]: Accepted publickey for core from 10.0.0.1 port 48936 ssh2: RSA SHA256:LaFXDQ5kJ4LcD/Es1CLb+6ve3k3017804ZdZMiXaUQg Feb 13 20:18:59.546461 sshd[3326]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 20:18:59.549894 systemd-logind[1521]: New session 25 of user core. Feb 13 20:18:59.557431 systemd[1]: Started session-25.scope - Session 25 of User core. Feb 13 20:18:59.667775 sshd[3326]: pam_unix(sshd:session): session closed for user core Feb 13 20:18:59.672588 systemd[1]: sshd@24-10.0.0.6:22-10.0.0.1:48936.service: Deactivated successfully. Feb 13 20:18:59.674512 systemd-logind[1521]: Session 25 logged out. Waiting for processes to exit. Feb 13 20:18:59.674550 systemd[1]: session-25.scope: Deactivated successfully. Feb 13 20:18:59.675761 systemd-logind[1521]: Removed session 25. Feb 13 20:19:00.970009 kubelet[2679]: E0213 20:19:00.969966 2679 kubelet.go:2900] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Feb 13 20:19:02.904867 kubelet[2679]: E0213 20:19:02.904804 2679 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 20:19:02.905781 kubelet[2679]: E0213 20:19:02.905563 2679 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"install-cni-plugin\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/flannel/flannel-cni-plugin:v1.1.2\\\"\"" pod="kube-flannel/kube-flannel-ds-ftfqg" podUID="36c00cd4-9622-461d-ac3c-90892608fdc2" Feb 13 20:19:04.681420 systemd[1]: Started sshd@25-10.0.0.6:22-10.0.0.1:58540.service - OpenSSH per-connection server daemon (10.0.0.1:58540). Feb 13 20:19:04.714256 sshd[3343]: Accepted publickey for core from 10.0.0.1 port 58540 ssh2: RSA SHA256:LaFXDQ5kJ4LcD/Es1CLb+6ve3k3017804ZdZMiXaUQg Feb 13 20:19:04.715353 sshd[3343]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 20:19:04.719418 systemd-logind[1521]: New session 26 of user core. Feb 13 20:19:04.726423 systemd[1]: Started session-26.scope - Session 26 of User core. Feb 13 20:19:04.828908 sshd[3343]: pam_unix(sshd:session): session closed for user core Feb 13 20:19:04.831946 systemd[1]: sshd@25-10.0.0.6:22-10.0.0.1:58540.service: Deactivated successfully. Feb 13 20:19:04.833774 systemd-logind[1521]: Session 26 logged out. Waiting for processes to exit. Feb 13 20:19:04.833918 systemd[1]: session-26.scope: Deactivated successfully. Feb 13 20:19:04.835139 systemd-logind[1521]: Removed session 26. 
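At 20:18:55 the failure chain becomes explicit: the install-cni-plugin image can never be pulled, so flannel never writes a CNI config, the runtime reports NetworkPluginNotReady, and the node never becomes Ready. A rough triage sketch for tallying these recurring signatures from a saved journal excerpt; the patterns are copied verbatim from the lines above, and "journal.txt" is a placeholder input path:

```python
# Count the recurring error signatures in a saved journal excerpt to make the
# dependency chain visible: pull rate-limited -> image backoff -> CNI never
# initialized -> node not Ready. Patterns are taken from the log lines above.
import re
from collections import Counter

SIGNATURES = {
    "429 pull rate limited": re.compile(r"429 Too Many Requests"),
    "image pull backoff": re.compile(r"ImagePullBackOff"),
    "cni not initialized": re.compile(r"cni plugin not initialized"),
    "nameserver limit": re.compile(r"Nameserver limits exceeded"),
}

def tally(log_path="journal.txt"):
    counts = Counter()
    with open(log_path) as fh:
        for line in fh:
            for name, pattern in SIGNATURES.items():
                if pattern.search(line):
                    counts[name] += 1
    return counts

if __name__ == "__main__":
    for name, count in tally().most_common():
        print(f"{count:5d}  {name}")
```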
Feb 13 20:19:05.971176 kubelet[2679]: E0213 20:19:05.971137 2679 kubelet.go:2900] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Feb 13 20:19:09.839552 systemd[1]: Started sshd@26-10.0.0.6:22-10.0.0.1:58554.service - OpenSSH per-connection server daemon (10.0.0.1:58554). Feb 13 20:19:09.871095 sshd[3359]: Accepted publickey for core from 10.0.0.1 port 58554 ssh2: RSA SHA256:LaFXDQ5kJ4LcD/Es1CLb+6ve3k3017804ZdZMiXaUQg Feb 13 20:19:09.872239 sshd[3359]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 20:19:09.875929 systemd-logind[1521]: New session 27 of user core. Feb 13 20:19:09.879535 systemd[1]: Started session-27.scope - Session 27 of User core. Feb 13 20:19:09.988272 sshd[3359]: pam_unix(sshd:session): session closed for user core Feb 13 20:19:09.991838 systemd[1]: sshd@26-10.0.0.6:22-10.0.0.1:58554.service: Deactivated successfully. Feb 13 20:19:09.994193 systemd[1]: session-27.scope: Deactivated successfully. Feb 13 20:19:09.994280 systemd-logind[1521]: Session 27 logged out. Waiting for processes to exit. Feb 13 20:19:09.995288 systemd-logind[1521]: Removed session 27. Feb 13 20:19:10.972140 kubelet[2679]: E0213 20:19:10.972101 2679 kubelet.go:2900] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Feb 13 20:19:14.905230 kubelet[2679]: E0213 20:19:14.905156 2679 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 20:19:14.906415 kubelet[2679]: E0213 20:19:14.906381 2679 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"install-cni-plugin\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/flannel/flannel-cni-plugin:v1.1.2\\\"\"" pod="kube-flannel/kube-flannel-ds-ftfqg" podUID="36c00cd4-9622-461d-ac3c-90892608fdc2" Feb 13 20:19:14.998433 systemd[1]: Started sshd@27-10.0.0.6:22-10.0.0.1:51380.service - OpenSSH per-connection server daemon (10.0.0.1:51380). Feb 13 20:19:15.030183 sshd[3378]: Accepted publickey for core from 10.0.0.1 port 51380 ssh2: RSA SHA256:LaFXDQ5kJ4LcD/Es1CLb+6ve3k3017804ZdZMiXaUQg Feb 13 20:19:15.031332 sshd[3378]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 20:19:15.034605 systemd-logind[1521]: New session 28 of user core. Feb 13 20:19:15.043443 systemd[1]: Started session-28.scope - Session 28 of User core. Feb 13 20:19:15.151414 sshd[3378]: pam_unix(sshd:session): session closed for user core Feb 13 20:19:15.154307 systemd[1]: sshd@27-10.0.0.6:22-10.0.0.1:51380.service: Deactivated successfully. Feb 13 20:19:15.156172 systemd-logind[1521]: Session 28 logged out. Waiting for processes to exit. Feb 13 20:19:15.156268 systemd[1]: session-28.scope: Deactivated successfully. Feb 13 20:19:15.157240 systemd-logind[1521]: Removed session 28. Feb 13 20:19:15.972711 kubelet[2679]: E0213 20:19:15.972668 2679 kubelet.go:2900] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Feb 13 20:19:20.170508 systemd[1]: Started sshd@28-10.0.0.6:22-10.0.0.1:51388.service - OpenSSH per-connection server daemon (10.0.0.1:51388). 
Feb 13 20:19:20.202398 sshd[3395]: Accepted publickey for core from 10.0.0.1 port 51388 ssh2: RSA SHA256:LaFXDQ5kJ4LcD/Es1CLb+6ve3k3017804ZdZMiXaUQg Feb 13 20:19:20.203518 sshd[3395]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 20:19:20.206980 systemd-logind[1521]: New session 29 of user core. Feb 13 20:19:20.216496 systemd[1]: Started session-29.scope - Session 29 of User core. Feb 13 20:19:20.328993 sshd[3395]: pam_unix(sshd:session): session closed for user core Feb 13 20:19:20.332371 systemd[1]: sshd@28-10.0.0.6:22-10.0.0.1:51388.service: Deactivated successfully. Feb 13 20:19:20.334278 systemd-logind[1521]: Session 29 logged out. Waiting for processes to exit. Feb 13 20:19:20.334355 systemd[1]: session-29.scope: Deactivated successfully. Feb 13 20:19:20.335918 systemd-logind[1521]: Removed session 29. Feb 13 20:19:20.973383 kubelet[2679]: E0213 20:19:20.973333 2679 kubelet.go:2900] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Feb 13 20:19:25.340503 systemd[1]: Started sshd@29-10.0.0.6:22-10.0.0.1:54012.service - OpenSSH per-connection server daemon (10.0.0.1:54012). Feb 13 20:19:25.373608 sshd[3413]: Accepted publickey for core from 10.0.0.1 port 54012 ssh2: RSA SHA256:LaFXDQ5kJ4LcD/Es1CLb+6ve3k3017804ZdZMiXaUQg Feb 13 20:19:25.374779 sshd[3413]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 20:19:25.378985 systemd-logind[1521]: New session 30 of user core. Feb 13 20:19:25.397452 systemd[1]: Started session-30.scope - Session 30 of User core. Feb 13 20:19:25.501961 sshd[3413]: pam_unix(sshd:session): session closed for user core Feb 13 20:19:25.505047 systemd[1]: sshd@29-10.0.0.6:22-10.0.0.1:54012.service: Deactivated successfully. Feb 13 20:19:25.506848 systemd-logind[1521]: Session 30 logged out. Waiting for processes to exit. Feb 13 20:19:25.506929 systemd[1]: session-30.scope: Deactivated successfully. Feb 13 20:19:25.507932 systemd-logind[1521]: Removed session 30. Feb 13 20:19:25.974325 kubelet[2679]: E0213 20:19:25.974280 2679 kubelet.go:2900] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Feb 13 20:19:26.905180 kubelet[2679]: E0213 20:19:26.905136 2679 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 20:19:26.906091 kubelet[2679]: E0213 20:19:26.905874 2679 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"install-cni-plugin\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/flannel/flannel-cni-plugin:v1.1.2\\\"\"" pod="kube-flannel/kube-flannel-ds-ftfqg" podUID="36c00cd4-9622-461d-ac3c-90892608fdc2" Feb 13 20:19:29.905314 kubelet[2679]: E0213 20:19:29.905275 2679 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 20:19:30.513515 systemd[1]: Started sshd@30-10.0.0.6:22-10.0.0.1:54020.service - OpenSSH per-connection server daemon (10.0.0.1:54020). 
Feb 13 20:19:30.545966 sshd[3431]: Accepted publickey for core from 10.0.0.1 port 54020 ssh2: RSA SHA256:LaFXDQ5kJ4LcD/Es1CLb+6ve3k3017804ZdZMiXaUQg Feb 13 20:19:30.547164 sshd[3431]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 20:19:30.551109 systemd-logind[1521]: New session 31 of user core. Feb 13 20:19:30.564430 systemd[1]: Started session-31.scope - Session 31 of User core. Feb 13 20:19:30.672965 sshd[3431]: pam_unix(sshd:session): session closed for user core Feb 13 20:19:30.675863 systemd[1]: sshd@30-10.0.0.6:22-10.0.0.1:54020.service: Deactivated successfully. Feb 13 20:19:30.677692 systemd-logind[1521]: Session 31 logged out. Waiting for processes to exit. Feb 13 20:19:30.677825 systemd[1]: session-31.scope: Deactivated successfully. Feb 13 20:19:30.678551 systemd-logind[1521]: Removed session 31. Feb 13 20:19:30.975363 kubelet[2679]: E0213 20:19:30.975235 2679 kubelet.go:2900] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Feb 13 20:19:32.905804 kubelet[2679]: E0213 20:19:32.905708 2679 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 20:19:35.684543 systemd[1]: Started sshd@31-10.0.0.6:22-10.0.0.1:50612.service - OpenSSH per-connection server daemon (10.0.0.1:50612). Feb 13 20:19:35.716779 sshd[3448]: Accepted publickey for core from 10.0.0.1 port 50612 ssh2: RSA SHA256:LaFXDQ5kJ4LcD/Es1CLb+6ve3k3017804ZdZMiXaUQg Feb 13 20:19:35.717952 sshd[3448]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 20:19:35.721990 systemd-logind[1521]: New session 32 of user core. Feb 13 20:19:35.732449 systemd[1]: Started session-32.scope - Session 32 of User core. Feb 13 20:19:35.837814 sshd[3448]: pam_unix(sshd:session): session closed for user core Feb 13 20:19:35.840302 systemd[1]: sshd@31-10.0.0.6:22-10.0.0.1:50612.service: Deactivated successfully. Feb 13 20:19:35.843440 systemd-logind[1521]: Session 32 logged out. Waiting for processes to exit. Feb 13 20:19:35.843628 systemd[1]: session-32.scope: Deactivated successfully. Feb 13 20:19:35.844416 systemd-logind[1521]: Removed session 32. Feb 13 20:19:35.975999 kubelet[2679]: E0213 20:19:35.975897 2679 kubelet.go:2900] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Feb 13 20:19:38.904938 kubelet[2679]: E0213 20:19:38.904896 2679 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 20:19:38.905864 kubelet[2679]: E0213 20:19:38.905646 2679 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"install-cni-plugin\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/flannel/flannel-cni-plugin:v1.1.2\\\"\"" pod="kube-flannel/kube-flannel-ds-ftfqg" podUID="36c00cd4-9622-461d-ac3c-90892608fdc2" Feb 13 20:19:40.852525 systemd[1]: Started sshd@32-10.0.0.6:22-10.0.0.1:50624.service - OpenSSH per-connection server daemon (10.0.0.1:50624). 
Feb 13 20:19:40.884153 sshd[3464]: Accepted publickey for core from 10.0.0.1 port 50624 ssh2: RSA SHA256:LaFXDQ5kJ4LcD/Es1CLb+6ve3k3017804ZdZMiXaUQg Feb 13 20:19:40.885337 sshd[3464]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 20:19:40.889609 systemd-logind[1521]: New session 33 of user core. Feb 13 20:19:40.907438 systemd[1]: Started session-33.scope - Session 33 of User core. Feb 13 20:19:40.976649 kubelet[2679]: E0213 20:19:40.976611 2679 kubelet.go:2900] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Feb 13 20:19:41.014176 sshd[3464]: pam_unix(sshd:session): session closed for user core Feb 13 20:19:41.017686 systemd-logind[1521]: Session 33 logged out. Waiting for processes to exit. Feb 13 20:19:41.017844 systemd[1]: sshd@32-10.0.0.6:22-10.0.0.1:50624.service: Deactivated successfully. Feb 13 20:19:41.020163 systemd[1]: session-33.scope: Deactivated successfully. Feb 13 20:19:41.020794 systemd-logind[1521]: Removed session 33. Feb 13 20:19:41.905560 kubelet[2679]: E0213 20:19:41.905464 2679 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 20:19:45.977548 kubelet[2679]: E0213 20:19:45.977514 2679 kubelet.go:2900] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Feb 13 20:19:46.029535 systemd[1]: Started sshd@33-10.0.0.6:22-10.0.0.1:40642.service - OpenSSH per-connection server daemon (10.0.0.1:40642). Feb 13 20:19:46.061688 sshd[3482]: Accepted publickey for core from 10.0.0.1 port 40642 ssh2: RSA SHA256:LaFXDQ5kJ4LcD/Es1CLb+6ve3k3017804ZdZMiXaUQg Feb 13 20:19:46.063072 sshd[3482]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 20:19:46.066688 systemd-logind[1521]: New session 34 of user core. Feb 13 20:19:46.073418 systemd[1]: Started session-34.scope - Session 34 of User core. Feb 13 20:19:46.181409 sshd[3482]: pam_unix(sshd:session): session closed for user core Feb 13 20:19:46.184653 systemd[1]: sshd@33-10.0.0.6:22-10.0.0.1:40642.service: Deactivated successfully. Feb 13 20:19:46.186503 systemd-logind[1521]: Session 34 logged out. Waiting for processes to exit. Feb 13 20:19:46.186557 systemd[1]: session-34.scope: Deactivated successfully. Feb 13 20:19:46.188183 systemd-logind[1521]: Removed session 34. 
Feb 13 20:19:50.905509 kubelet[2679]: E0213 20:19:50.905478 2679 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 20:19:50.906445 kubelet[2679]: E0213 20:19:50.906240 2679 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"install-cni-plugin\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/flannel/flannel-cni-plugin:v1.1.2\\\"\"" pod="kube-flannel/kube-flannel-ds-ftfqg" podUID="36c00cd4-9622-461d-ac3c-90892608fdc2" Feb 13 20:19:50.979030 kubelet[2679]: E0213 20:19:50.979001 2679 kubelet.go:2900] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Feb 13 20:19:51.192458 systemd[1]: Started sshd@34-10.0.0.6:22-10.0.0.1:40658.service - OpenSSH per-connection server daemon (10.0.0.1:40658). Feb 13 20:19:51.224161 sshd[3499]: Accepted publickey for core from 10.0.0.1 port 40658 ssh2: RSA SHA256:LaFXDQ5kJ4LcD/Es1CLb+6ve3k3017804ZdZMiXaUQg Feb 13 20:19:51.225376 sshd[3499]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 20:19:51.228985 systemd-logind[1521]: New session 35 of user core. Feb 13 20:19:51.236458 systemd[1]: Started session-35.scope - Session 35 of User core. Feb 13 20:19:51.342905 sshd[3499]: pam_unix(sshd:session): session closed for user core Feb 13 20:19:51.345438 systemd[1]: sshd@34-10.0.0.6:22-10.0.0.1:40658.service: Deactivated successfully. Feb 13 20:19:51.348614 systemd-logind[1521]: Session 35 logged out. Waiting for processes to exit. Feb 13 20:19:51.349076 systemd[1]: session-35.scope: Deactivated successfully. Feb 13 20:19:51.349926 systemd-logind[1521]: Removed session 35. Feb 13 20:19:53.905985 kubelet[2679]: E0213 20:19:53.905941 2679 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 20:19:55.980201 kubelet[2679]: E0213 20:19:55.980141 2679 kubelet.go:2900] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Feb 13 20:19:56.354484 systemd[1]: Started sshd@35-10.0.0.6:22-10.0.0.1:39280.service - OpenSSH per-connection server daemon (10.0.0.1:39280). Feb 13 20:19:56.385958 sshd[3518]: Accepted publickey for core from 10.0.0.1 port 39280 ssh2: RSA SHA256:LaFXDQ5kJ4LcD/Es1CLb+6ve3k3017804ZdZMiXaUQg Feb 13 20:19:56.387065 sshd[3518]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 20:19:56.390879 systemd-logind[1521]: New session 36 of user core. Feb 13 20:19:56.401431 systemd[1]: Started session-36.scope - Session 36 of User core. Feb 13 20:19:56.509967 sshd[3518]: pam_unix(sshd:session): session closed for user core Feb 13 20:19:56.513030 systemd[1]: sshd@35-10.0.0.6:22-10.0.0.1:39280.service: Deactivated successfully. Feb 13 20:19:56.515081 systemd-logind[1521]: Session 36 logged out. Waiting for processes to exit. Feb 13 20:19:56.515131 systemd[1]: session-36.scope: Deactivated successfully. Feb 13 20:19:56.517844 systemd-logind[1521]: Removed session 36. 
Feb 13 20:20:00.981517 kubelet[2679]: E0213 20:20:00.981466 2679 kubelet.go:2900] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Feb 13 20:20:01.523447 systemd[1]: Started sshd@36-10.0.0.6:22-10.0.0.1:39292.service - OpenSSH per-connection server daemon (10.0.0.1:39292). Feb 13 20:20:01.555566 sshd[3534]: Accepted publickey for core from 10.0.0.1 port 39292 ssh2: RSA SHA256:LaFXDQ5kJ4LcD/Es1CLb+6ve3k3017804ZdZMiXaUQg Feb 13 20:20:01.556698 sshd[3534]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 20:20:01.560373 systemd-logind[1521]: New session 37 of user core. Feb 13 20:20:01.567461 systemd[1]: Started session-37.scope - Session 37 of User core. Feb 13 20:20:01.674524 sshd[3534]: pam_unix(sshd:session): session closed for user core Feb 13 20:20:01.678038 systemd[1]: sshd@36-10.0.0.6:22-10.0.0.1:39292.service: Deactivated successfully. Feb 13 20:20:01.679951 systemd[1]: session-37.scope: Deactivated successfully. Feb 13 20:20:01.679960 systemd-logind[1521]: Session 37 logged out. Waiting for processes to exit. Feb 13 20:20:01.681381 systemd-logind[1521]: Removed session 37. Feb 13 20:20:02.905353 kubelet[2679]: E0213 20:20:02.905313 2679 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 20:20:02.905986 kubelet[2679]: E0213 20:20:02.905948 2679 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"install-cni-plugin\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/flannel/flannel-cni-plugin:v1.1.2\\\"\"" pod="kube-flannel/kube-flannel-ds-ftfqg" podUID="36c00cd4-9622-461d-ac3c-90892608fdc2" Feb 13 20:20:05.982092 kubelet[2679]: E0213 20:20:05.982058 2679 kubelet.go:2900] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Feb 13 20:20:06.689419 systemd[1]: Started sshd@37-10.0.0.6:22-10.0.0.1:50666.service - OpenSSH per-connection server daemon (10.0.0.1:50666). Feb 13 20:20:06.721076 sshd[3550]: Accepted publickey for core from 10.0.0.1 port 50666 ssh2: RSA SHA256:LaFXDQ5kJ4LcD/Es1CLb+6ve3k3017804ZdZMiXaUQg Feb 13 20:20:06.722187 sshd[3550]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 20:20:06.726024 systemd-logind[1521]: New session 38 of user core. Feb 13 20:20:06.738438 systemd[1]: Started session-38.scope - Session 38 of User core. Feb 13 20:20:06.844105 sshd[3550]: pam_unix(sshd:session): session closed for user core Feb 13 20:20:06.847370 systemd[1]: sshd@37-10.0.0.6:22-10.0.0.1:50666.service: Deactivated successfully. Feb 13 20:20:06.849399 systemd[1]: session-38.scope: Deactivated successfully. Feb 13 20:20:06.850085 systemd-logind[1521]: Session 38 logged out. Waiting for processes to exit. Feb 13 20:20:06.850996 systemd-logind[1521]: Removed session 38. Feb 13 20:20:10.982860 kubelet[2679]: E0213 20:20:10.982822 2679 kubelet.go:2900] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Feb 13 20:20:11.853424 systemd[1]: Started sshd@38-10.0.0.6:22-10.0.0.1:50678.service - OpenSSH per-connection server daemon (10.0.0.1:50678). 
Feb 13 20:20:11.885346 sshd[3569]: Accepted publickey for core from 10.0.0.1 port 50678 ssh2: RSA SHA256:LaFXDQ5kJ4LcD/Es1CLb+6ve3k3017804ZdZMiXaUQg Feb 13 20:20:11.886462 sshd[3569]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 20:20:11.889974 systemd-logind[1521]: New session 39 of user core. Feb 13 20:20:11.902515 systemd[1]: Started session-39.scope - Session 39 of User core. Feb 13 20:20:12.008924 sshd[3569]: pam_unix(sshd:session): session closed for user core Feb 13 20:20:12.012194 systemd[1]: sshd@38-10.0.0.6:22-10.0.0.1:50678.service: Deactivated successfully. Feb 13 20:20:12.014071 systemd-logind[1521]: Session 39 logged out. Waiting for processes to exit. Feb 13 20:20:12.014149 systemd[1]: session-39.scope: Deactivated successfully. Feb 13 20:20:12.015145 systemd-logind[1521]: Removed session 39. Feb 13 20:20:15.905134 kubelet[2679]: E0213 20:20:15.904900 2679 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 20:20:15.905978 containerd[1539]: time="2025-02-13T20:20:15.905938803Z" level=info msg="PullImage \"docker.io/flannel/flannel-cni-plugin:v1.1.2\"" Feb 13 20:20:15.984525 kubelet[2679]: E0213 20:20:15.984481 2679 kubelet.go:2900] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Feb 13 20:20:17.023463 systemd[1]: Started sshd@39-10.0.0.6:22-10.0.0.1:39896.service - OpenSSH per-connection server daemon (10.0.0.1:39896). Feb 13 20:20:17.028777 containerd[1539]: time="2025-02-13T20:20:17.028698638Z" level=error msg="PullImage \"docker.io/flannel/flannel-cni-plugin:v1.1.2\" failed" error="failed to pull and unpack image \"docker.io/flannel/flannel-cni-plugin:v1.1.2\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/flannel/flannel-cni-plugin/manifests/sha256:bf4b62b131666d040f35a327d906ee5a3418280b68a88d9b9c7e828057210443: 429 Too Many Requests - Server message: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit" Feb 13 20:20:17.029130 containerd[1539]: time="2025-02-13T20:20:17.028781759Z" level=info msg="stop pulling image docker.io/flannel/flannel-cni-plugin:v1.1.2: active requests=0, bytes read=11110" Feb 13 20:20:17.029170 kubelet[2679]: E0213 20:20:17.028920 2679 remote_image.go:180] "PullImage from image service failed" err="rpc error: code = Unknown desc = failed to pull and unpack image \"docker.io/flannel/flannel-cni-plugin:v1.1.2\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/flannel/flannel-cni-plugin/manifests/sha256:bf4b62b131666d040f35a327d906ee5a3418280b68a88d9b9c7e828057210443: 429 Too Many Requests - Server message: toomanyrequests: You have reached your pull rate limit. 
You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit" image="docker.io/flannel/flannel-cni-plugin:v1.1.2" Feb 13 20:20:17.029170 kubelet[2679]: E0213 20:20:17.028966 2679 kuberuntime_image.go:55] "Failed to pull image" err="failed to pull and unpack image \"docker.io/flannel/flannel-cni-plugin:v1.1.2\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/flannel/flannel-cni-plugin/manifests/sha256:bf4b62b131666d040f35a327d906ee5a3418280b68a88d9b9c7e828057210443: 429 Too Many Requests - Server message: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit" image="docker.io/flannel/flannel-cni-plugin:v1.1.2" Feb 13 20:20:17.029438 kubelet[2679]: E0213 20:20:17.029044 2679 kuberuntime_manager.go:1256] init container &Container{Name:install-cni-plugin,Image:docker.io/flannel/flannel-cni-plugin:v1.1.2,Command:[cp],Args:[-f /flannel /opt/cni/bin/flannel],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:cni-plugin,ReadOnly:false,MountPath:/opt/cni/bin,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-sdrbt,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod kube-flannel-ds-ftfqg_kube-flannel(36c00cd4-9622-461d-ac3c-90892608fdc2): ErrImagePull: failed to pull and unpack image "docker.io/flannel/flannel-cni-plugin:v1.1.2": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/flannel/flannel-cni-plugin/manifests/sha256:bf4b62b131666d040f35a327d906ee5a3418280b68a88d9b9c7e828057210443: 429 Too Many Requests - Server message: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit Feb 13 20:20:17.029505 kubelet[2679]: E0213 20:20:17.029071 2679 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"install-cni-plugin\" with ErrImagePull: \"failed to pull and unpack image \\\"docker.io/flannel/flannel-cni-plugin:v1.1.2\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/flannel/flannel-cni-plugin/manifests/sha256:bf4b62b131666d040f35a327d906ee5a3418280b68a88d9b9c7e828057210443: 429 Too Many Requests - Server message: toomanyrequests: You have reached your pull rate limit. 
You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit\"" pod="kube-flannel/kube-flannel-ds-ftfqg" podUID="36c00cd4-9622-461d-ac3c-90892608fdc2" Feb 13 20:20:17.055846 sshd[3585]: Accepted publickey for core from 10.0.0.1 port 39896 ssh2: RSA SHA256:LaFXDQ5kJ4LcD/Es1CLb+6ve3k3017804ZdZMiXaUQg Feb 13 20:20:17.056951 sshd[3585]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 20:20:17.060580 systemd-logind[1521]: New session 40 of user core. Feb 13 20:20:17.073442 systemd[1]: Started session-40.scope - Session 40 of User core. Feb 13 20:20:17.178536 sshd[3585]: pam_unix(sshd:session): session closed for user core Feb 13 20:20:17.181529 systemd[1]: sshd@39-10.0.0.6:22-10.0.0.1:39896.service: Deactivated successfully. Feb 13 20:20:17.183380 systemd-logind[1521]: Session 40 logged out. Waiting for processes to exit. Feb 13 20:20:17.183447 systemd[1]: session-40.scope: Deactivated successfully. Feb 13 20:20:17.184395 systemd-logind[1521]: Removed session 40. Feb 13 20:20:20.986146 kubelet[2679]: E0213 20:20:20.986093 2679 kubelet.go:2900] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Feb 13 20:20:22.191431 systemd[1]: Started sshd@40-10.0.0.6:22-10.0.0.1:39912.service - OpenSSH per-connection server daemon (10.0.0.1:39912). Feb 13 20:20:22.223394 sshd[3601]: Accepted publickey for core from 10.0.0.1 port 39912 ssh2: RSA SHA256:LaFXDQ5kJ4LcD/Es1CLb+6ve3k3017804ZdZMiXaUQg Feb 13 20:20:22.224654 sshd[3601]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 20:20:22.228095 systemd-logind[1521]: New session 41 of user core. Feb 13 20:20:22.239444 systemd[1]: Started session-41.scope - Session 41 of User core. Feb 13 20:20:22.345418 sshd[3601]: pam_unix(sshd:session): session closed for user core Feb 13 20:20:22.355477 systemd[1]: Started sshd@41-10.0.0.6:22-10.0.0.1:39914.service - OpenSSH per-connection server daemon (10.0.0.1:39914). Feb 13 20:20:22.355847 systemd[1]: sshd@40-10.0.0.6:22-10.0.0.1:39912.service: Deactivated successfully. Feb 13 20:20:22.358988 systemd-logind[1521]: Session 41 logged out. Waiting for processes to exit. Feb 13 20:20:22.359045 systemd[1]: session-41.scope: Deactivated successfully. Feb 13 20:20:22.360638 systemd-logind[1521]: Removed session 41. Feb 13 20:20:22.387298 sshd[3615]: Accepted publickey for core from 10.0.0.1 port 39914 ssh2: RSA SHA256:LaFXDQ5kJ4LcD/Es1CLb+6ve3k3017804ZdZMiXaUQg Feb 13 20:20:22.388491 sshd[3615]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 20:20:22.392269 systemd-logind[1521]: New session 42 of user core. Feb 13 20:20:22.404530 systemd[1]: Started session-42.scope - Session 42 of User core. Feb 13 20:20:22.543069 sshd[3615]: pam_unix(sshd:session): session closed for user core Feb 13 20:20:22.547099 systemd[1]: Started sshd@42-10.0.0.6:22-10.0.0.1:45688.service - OpenSSH per-connection server daemon (10.0.0.1:45688). Feb 13 20:20:22.549728 systemd[1]: sshd@41-10.0.0.6:22-10.0.0.1:39914.service: Deactivated successfully. Feb 13 20:20:22.553339 systemd-logind[1521]: Session 42 logged out. Waiting for processes to exit. Feb 13 20:20:22.553777 systemd[1]: session-42.scope: Deactivated successfully. Feb 13 20:20:22.571295 systemd-logind[1521]: Removed session 42. 
Feb 13 20:20:22.604322 sshd[3628]: Accepted publickey for core from 10.0.0.1 port 45688 ssh2: RSA SHA256:LaFXDQ5kJ4LcD/Es1CLb+6ve3k3017804ZdZMiXaUQg Feb 13 20:20:22.605500 sshd[3628]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 20:20:22.609097 systemd-logind[1521]: New session 43 of user core. Feb 13 20:20:22.618516 systemd[1]: Started session-43.scope - Session 43 of User core. Feb 13 20:20:22.724397 sshd[3628]: pam_unix(sshd:session): session closed for user core Feb 13 20:20:22.727566 systemd[1]: sshd@42-10.0.0.6:22-10.0.0.1:45688.service: Deactivated successfully. Feb 13 20:20:22.729849 systemd-logind[1521]: Session 43 logged out. Waiting for processes to exit. Feb 13 20:20:22.729921 systemd[1]: session-43.scope: Deactivated successfully. Feb 13 20:20:22.731778 systemd-logind[1521]: Removed session 43. Feb 13 20:20:25.987061 kubelet[2679]: E0213 20:20:25.987022 2679 kubelet.go:2900] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Feb 13 20:20:27.733554 systemd[1]: Started sshd@43-10.0.0.6:22-10.0.0.1:45694.service - OpenSSH per-connection server daemon (10.0.0.1:45694). Feb 13 20:20:27.765930 sshd[3647]: Accepted publickey for core from 10.0.0.1 port 45694 ssh2: RSA SHA256:LaFXDQ5kJ4LcD/Es1CLb+6ve3k3017804ZdZMiXaUQg Feb 13 20:20:27.767080 sshd[3647]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 20:20:27.770626 systemd-logind[1521]: New session 44 of user core. Feb 13 20:20:27.782486 systemd[1]: Started session-44.scope - Session 44 of User core. Feb 13 20:20:27.888589 sshd[3647]: pam_unix(sshd:session): session closed for user core Feb 13 20:20:27.891775 systemd[1]: sshd@43-10.0.0.6:22-10.0.0.1:45694.service: Deactivated successfully. Feb 13 20:20:27.893715 systemd-logind[1521]: Session 44 logged out. Waiting for processes to exit. Feb 13 20:20:27.893801 systemd[1]: session-44.scope: Deactivated successfully. Feb 13 20:20:27.894784 systemd-logind[1521]: Removed session 44. Feb 13 20:20:29.905665 kubelet[2679]: E0213 20:20:29.905617 2679 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 20:20:29.906459 kubelet[2679]: E0213 20:20:29.906429 2679 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"install-cni-plugin\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/flannel/flannel-cni-plugin:v1.1.2\\\"\"" pod="kube-flannel/kube-flannel-ds-ftfqg" podUID="36c00cd4-9622-461d-ac3c-90892608fdc2" Feb 13 20:20:30.988729 kubelet[2679]: E0213 20:20:30.988686 2679 kubelet.go:2900] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Feb 13 20:20:32.904485 systemd[1]: Started sshd@44-10.0.0.6:22-10.0.0.1:60064.service - OpenSSH per-connection server daemon (10.0.0.1:60064). Feb 13 20:20:32.936329 sshd[3663]: Accepted publickey for core from 10.0.0.1 port 60064 ssh2: RSA SHA256:LaFXDQ5kJ4LcD/Es1CLb+6ve3k3017804ZdZMiXaUQg Feb 13 20:20:32.937521 sshd[3663]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 20:20:32.941275 systemd-logind[1521]: New session 45 of user core. Feb 13 20:20:32.951434 systemd[1]: Started session-45.scope - Session 45 of User core. 
Feb 13 20:20:33.068447 sshd[3663]: pam_unix(sshd:session): session closed for user core Feb 13 20:20:33.071774 systemd[1]: sshd@44-10.0.0.6:22-10.0.0.1:60064.service: Deactivated successfully. Feb 13 20:20:33.073730 systemd-logind[1521]: Session 45 logged out. Waiting for processes to exit. Feb 13 20:20:33.074103 systemd[1]: session-45.scope: Deactivated successfully. Feb 13 20:20:33.075008 systemd-logind[1521]: Removed session 45. Feb 13 20:20:35.989426 kubelet[2679]: E0213 20:20:35.989391 2679 kubelet.go:2900] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Feb 13 20:20:38.078583 systemd[1]: Started sshd@45-10.0.0.6:22-10.0.0.1:60076.service - OpenSSH per-connection server daemon (10.0.0.1:60076). Feb 13 20:20:38.110667 sshd[3678]: Accepted publickey for core from 10.0.0.1 port 60076 ssh2: RSA SHA256:LaFXDQ5kJ4LcD/Es1CLb+6ve3k3017804ZdZMiXaUQg Feb 13 20:20:38.111804 sshd[3678]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 20:20:38.115275 systemd-logind[1521]: New session 46 of user core. Feb 13 20:20:38.126452 systemd[1]: Started session-46.scope - Session 46 of User core. Feb 13 20:20:38.230716 sshd[3678]: pam_unix(sshd:session): session closed for user core Feb 13 20:20:38.234042 systemd[1]: sshd@45-10.0.0.6:22-10.0.0.1:60076.service: Deactivated successfully. Feb 13 20:20:38.235868 systemd-logind[1521]: Session 46 logged out. Waiting for processes to exit. Feb 13 20:20:38.235989 systemd[1]: session-46.scope: Deactivated successfully. Feb 13 20:20:38.236915 systemd-logind[1521]: Removed session 46. Feb 13 20:20:39.905985 kubelet[2679]: E0213 20:20:39.905897 2679 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 20:20:40.991047 kubelet[2679]: E0213 20:20:40.991015 2679 kubelet.go:2900] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Feb 13 20:20:43.242424 systemd[1]: Started sshd@46-10.0.0.6:22-10.0.0.1:36530.service - OpenSSH per-connection server daemon (10.0.0.1:36530). Feb 13 20:20:43.275223 sshd[3695]: Accepted publickey for core from 10.0.0.1 port 36530 ssh2: RSA SHA256:LaFXDQ5kJ4LcD/Es1CLb+6ve3k3017804ZdZMiXaUQg Feb 13 20:20:43.276379 sshd[3695]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 20:20:43.280702 systemd-logind[1521]: New session 47 of user core. Feb 13 20:20:43.290576 systemd[1]: Started session-47.scope - Session 47 of User core. Feb 13 20:20:43.397739 sshd[3695]: pam_unix(sshd:session): session closed for user core Feb 13 20:20:43.400855 systemd[1]: sshd@46-10.0.0.6:22-10.0.0.1:36530.service: Deactivated successfully. Feb 13 20:20:43.402708 systemd-logind[1521]: Session 47 logged out. Waiting for processes to exit. Feb 13 20:20:43.402792 systemd[1]: session-47.scope: Deactivated successfully. Feb 13 20:20:43.403715 systemd-logind[1521]: Removed session 47. 
Feb 13 20:20:43.906195 kubelet[2679]: E0213 20:20:43.906163 2679 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 20:20:43.907093 kubelet[2679]: E0213 20:20:43.906879 2679 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"install-cni-plugin\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/flannel/flannel-cni-plugin:v1.1.2\\\"\"" pod="kube-flannel/kube-flannel-ds-ftfqg" podUID="36c00cd4-9622-461d-ac3c-90892608fdc2" Feb 13 20:20:45.991643 kubelet[2679]: E0213 20:20:45.991587 2679 kubelet.go:2900] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Feb 13 20:20:47.905342 kubelet[2679]: E0213 20:20:47.905252 2679 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 20:20:48.411445 systemd[1]: Started sshd@47-10.0.0.6:22-10.0.0.1:36532.service - OpenSSH per-connection server daemon (10.0.0.1:36532). Feb 13 20:20:48.443207 sshd[3711]: Accepted publickey for core from 10.0.0.1 port 36532 ssh2: RSA SHA256:LaFXDQ5kJ4LcD/Es1CLb+6ve3k3017804ZdZMiXaUQg Feb 13 20:20:48.444336 sshd[3711]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 20:20:48.448017 systemd-logind[1521]: New session 48 of user core. Feb 13 20:20:48.461465 systemd[1]: Started session-48.scope - Session 48 of User core. Feb 13 20:20:48.567584 sshd[3711]: pam_unix(sshd:session): session closed for user core Feb 13 20:20:48.570001 systemd[1]: sshd@47-10.0.0.6:22-10.0.0.1:36532.service: Deactivated successfully. Feb 13 20:20:48.572681 systemd[1]: session-48.scope: Deactivated successfully. Feb 13 20:20:48.573318 systemd-logind[1521]: Session 48 logged out. Waiting for processes to exit. Feb 13 20:20:48.574293 systemd-logind[1521]: Removed session 48. Feb 13 20:20:50.992757 kubelet[2679]: E0213 20:20:50.992664 2679 kubelet.go:2900] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Feb 13 20:20:53.586415 systemd[1]: Started sshd@48-10.0.0.6:22-10.0.0.1:45106.service - OpenSSH per-connection server daemon (10.0.0.1:45106). Feb 13 20:20:53.621684 sshd[3726]: Accepted publickey for core from 10.0.0.1 port 45106 ssh2: RSA SHA256:LaFXDQ5kJ4LcD/Es1CLb+6ve3k3017804ZdZMiXaUQg Feb 13 20:20:53.622808 sshd[3726]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 20:20:53.626699 systemd-logind[1521]: New session 49 of user core. Feb 13 20:20:53.635448 systemd[1]: Started session-49.scope - Session 49 of User core. Feb 13 20:20:53.740648 sshd[3726]: pam_unix(sshd:session): session closed for user core Feb 13 20:20:53.743545 systemd[1]: sshd@48-10.0.0.6:22-10.0.0.1:45106.service: Deactivated successfully. Feb 13 20:20:53.745444 systemd[1]: session-49.scope: Deactivated successfully. Feb 13 20:20:53.745446 systemd-logind[1521]: Session 49 logged out. Waiting for processes to exit. Feb 13 20:20:53.746716 systemd-logind[1521]: Removed session 49. 
Feb 13 20:20:54.905774 kubelet[2679]: E0213 20:20:54.905694 2679 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 20:20:54.906427 kubelet[2679]: E0213 20:20:54.906402 2679 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"install-cni-plugin\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/flannel/flannel-cni-plugin:v1.1.2\\\"\"" pod="kube-flannel/kube-flannel-ds-ftfqg" podUID="36c00cd4-9622-461d-ac3c-90892608fdc2" Feb 13 20:20:55.994068 kubelet[2679]: E0213 20:20:55.994008 2679 kubelet.go:2900] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Feb 13 20:20:58.753418 systemd[1]: Started sshd@49-10.0.0.6:22-10.0.0.1:45112.service - OpenSSH per-connection server daemon (10.0.0.1:45112). Feb 13 20:20:58.785075 sshd[3743]: Accepted publickey for core from 10.0.0.1 port 45112 ssh2: RSA SHA256:LaFXDQ5kJ4LcD/Es1CLb+6ve3k3017804ZdZMiXaUQg Feb 13 20:20:58.786252 sshd[3743]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 20:20:58.790167 systemd-logind[1521]: New session 50 of user core. Feb 13 20:20:58.804467 systemd[1]: Started session-50.scope - Session 50 of User core. Feb 13 20:20:58.912010 sshd[3743]: pam_unix(sshd:session): session closed for user core Feb 13 20:20:58.915180 systemd[1]: sshd@49-10.0.0.6:22-10.0.0.1:45112.service: Deactivated successfully. Feb 13 20:20:58.917155 systemd-logind[1521]: Session 50 logged out. Waiting for processes to exit. Feb 13 20:20:58.917235 systemd[1]: session-50.scope: Deactivated successfully. Feb 13 20:20:58.918135 systemd-logind[1521]: Removed session 50. Feb 13 20:21:00.905972 kubelet[2679]: E0213 20:21:00.905888 2679 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 20:21:00.994725 kubelet[2679]: E0213 20:21:00.994697 2679 kubelet.go:2900] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Feb 13 20:21:03.928427 systemd[1]: Started sshd@50-10.0.0.6:22-10.0.0.1:54266.service - OpenSSH per-connection server daemon (10.0.0.1:54266). Feb 13 20:21:03.960329 sshd[3758]: Accepted publickey for core from 10.0.0.1 port 54266 ssh2: RSA SHA256:LaFXDQ5kJ4LcD/Es1CLb+6ve3k3017804ZdZMiXaUQg Feb 13 20:21:03.961481 sshd[3758]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 20:21:03.965066 systemd-logind[1521]: New session 51 of user core. Feb 13 20:21:03.979433 systemd[1]: Started session-51.scope - Session 51 of User core. Feb 13 20:21:04.083555 sshd[3758]: pam_unix(sshd:session): session closed for user core Feb 13 20:21:04.086574 systemd[1]: sshd@50-10.0.0.6:22-10.0.0.1:54266.service: Deactivated successfully. Feb 13 20:21:04.088614 systemd[1]: session-51.scope: Deactivated successfully. Feb 13 20:21:04.088627 systemd-logind[1521]: Session 51 logged out. Waiting for processes to exit. Feb 13 20:21:04.089940 systemd-logind[1521]: Removed session 51. 
Feb 13 20:21:05.996250 kubelet[2679]: E0213 20:21:05.996164 2679 kubelet.go:2900] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Feb 13 20:21:09.094420 systemd[1]: Started sshd@51-10.0.0.6:22-10.0.0.1:54278.service - OpenSSH per-connection server daemon (10.0.0.1:54278). Feb 13 20:21:09.126462 sshd[3773]: Accepted publickey for core from 10.0.0.1 port 54278 ssh2: RSA SHA256:LaFXDQ5kJ4LcD/Es1CLb+6ve3k3017804ZdZMiXaUQg Feb 13 20:21:09.127604 sshd[3773]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 20:21:09.130992 systemd-logind[1521]: New session 52 of user core. Feb 13 20:21:09.142426 systemd[1]: Started session-52.scope - Session 52 of User core. Feb 13 20:21:09.246598 sshd[3773]: pam_unix(sshd:session): session closed for user core Feb 13 20:21:09.249718 systemd[1]: sshd@51-10.0.0.6:22-10.0.0.1:54278.service: Deactivated successfully. Feb 13 20:21:09.251656 systemd[1]: session-52.scope: Deactivated successfully. Feb 13 20:21:09.251671 systemd-logind[1521]: Session 52 logged out. Waiting for processes to exit. Feb 13 20:21:09.253426 systemd-logind[1521]: Removed session 52. Feb 13 20:21:09.905405 kubelet[2679]: E0213 20:21:09.905366 2679 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 20:21:09.906354 kubelet[2679]: E0213 20:21:09.906325 2679 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"install-cni-plugin\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/flannel/flannel-cni-plugin:v1.1.2\\\"\"" pod="kube-flannel/kube-flannel-ds-ftfqg" podUID="36c00cd4-9622-461d-ac3c-90892608fdc2" Feb 13 20:21:10.997787 kubelet[2679]: E0213 20:21:10.997748 2679 kubelet.go:2900] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Feb 13 20:21:14.259418 systemd[1]: Started sshd@52-10.0.0.6:22-10.0.0.1:39722.service - OpenSSH per-connection server daemon (10.0.0.1:39722). Feb 13 20:21:14.291395 sshd[3791]: Accepted publickey for core from 10.0.0.1 port 39722 ssh2: RSA SHA256:LaFXDQ5kJ4LcD/Es1CLb+6ve3k3017804ZdZMiXaUQg Feb 13 20:21:14.292547 sshd[3791]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 20:21:14.296029 systemd-logind[1521]: New session 53 of user core. Feb 13 20:21:14.303429 systemd[1]: Started session-53.scope - Session 53 of User core. Feb 13 20:21:14.405342 sshd[3791]: pam_unix(sshd:session): session closed for user core Feb 13 20:21:14.407865 systemd[1]: sshd@52-10.0.0.6:22-10.0.0.1:39722.service: Deactivated successfully. Feb 13 20:21:14.410402 systemd-logind[1521]: Session 53 logged out. Waiting for processes to exit. Feb 13 20:21:14.410497 systemd[1]: session-53.scope: Deactivated successfully. Feb 13 20:21:14.411997 systemd-logind[1521]: Removed session 53. Feb 13 20:21:15.998338 kubelet[2679]: E0213 20:21:15.998293 2679 kubelet.go:2900] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Feb 13 20:21:19.416419 systemd[1]: Started sshd@53-10.0.0.6:22-10.0.0.1:39732.service - OpenSSH per-connection server daemon (10.0.0.1:39732). 
Feb 13 20:21:19.449713 sshd[3806]: Accepted publickey for core from 10.0.0.1 port 39732 ssh2: RSA SHA256:LaFXDQ5kJ4LcD/Es1CLb+6ve3k3017804ZdZMiXaUQg Feb 13 20:21:19.451024 sshd[3806]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 20:21:19.454839 systemd-logind[1521]: New session 54 of user core. Feb 13 20:21:19.464442 systemd[1]: Started session-54.scope - Session 54 of User core. Feb 13 20:21:19.567620 sshd[3806]: pam_unix(sshd:session): session closed for user core Feb 13 20:21:19.570762 systemd[1]: sshd@53-10.0.0.6:22-10.0.0.1:39732.service: Deactivated successfully. Feb 13 20:21:19.572751 systemd-logind[1521]: Session 54 logged out. Waiting for processes to exit. Feb 13 20:21:19.572837 systemd[1]: session-54.scope: Deactivated successfully. Feb 13 20:21:19.573994 systemd-logind[1521]: Removed session 54. Feb 13 20:21:20.905284 kubelet[2679]: E0213 20:21:20.905207 2679 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 20:21:20.999652 kubelet[2679]: E0213 20:21:20.999599 2679 kubelet.go:2900] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Feb 13 20:21:21.905739 kubelet[2679]: E0213 20:21:21.905698 2679 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 20:21:21.906361 kubelet[2679]: E0213 20:21:21.906330 2679 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"install-cni-plugin\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/flannel/flannel-cni-plugin:v1.1.2\\\"\"" pod="kube-flannel/kube-flannel-ds-ftfqg" podUID="36c00cd4-9622-461d-ac3c-90892608fdc2" Feb 13 20:21:24.582536 systemd[1]: Started sshd@54-10.0.0.6:22-10.0.0.1:43272.service - OpenSSH per-connection server daemon (10.0.0.1:43272). Feb 13 20:21:24.614230 sshd[3822]: Accepted publickey for core from 10.0.0.1 port 43272 ssh2: RSA SHA256:LaFXDQ5kJ4LcD/Es1CLb+6ve3k3017804ZdZMiXaUQg Feb 13 20:21:24.615366 sshd[3822]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 20:21:24.618977 systemd-logind[1521]: New session 55 of user core. Feb 13 20:21:24.626429 systemd[1]: Started session-55.scope - Session 55 of User core. Feb 13 20:21:24.731331 sshd[3822]: pam_unix(sshd:session): session closed for user core Feb 13 20:21:24.734796 systemd[1]: sshd@54-10.0.0.6:22-10.0.0.1:43272.service: Deactivated successfully. Feb 13 20:21:24.736769 systemd-logind[1521]: Session 55 logged out. Waiting for processes to exit. Feb 13 20:21:24.736773 systemd[1]: session-55.scope: Deactivated successfully. Feb 13 20:21:24.738610 systemd-logind[1521]: Removed session 55. Feb 13 20:21:26.001047 kubelet[2679]: E0213 20:21:26.001010 2679 kubelet.go:2900] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Feb 13 20:21:29.745442 systemd[1]: Started sshd@55-10.0.0.6:22-10.0.0.1:43274.service - OpenSSH per-connection server daemon (10.0.0.1:43274). 
Feb 13 20:21:29.777374 sshd[3838]: Accepted publickey for core from 10.0.0.1 port 43274 ssh2: RSA SHA256:LaFXDQ5kJ4LcD/Es1CLb+6ve3k3017804ZdZMiXaUQg Feb 13 20:21:29.778531 sshd[3838]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 20:21:29.782177 systemd-logind[1521]: New session 56 of user core. Feb 13 20:21:29.795457 systemd[1]: Started session-56.scope - Session 56 of User core. Feb 13 20:21:29.899727 sshd[3838]: pam_unix(sshd:session): session closed for user core Feb 13 20:21:29.902613 systemd-logind[1521]: Session 56 logged out. Waiting for processes to exit. Feb 13 20:21:29.902744 systemd[1]: sshd@55-10.0.0.6:22-10.0.0.1:43274.service: Deactivated successfully. Feb 13 20:21:29.905118 systemd[1]: session-56.scope: Deactivated successfully. Feb 13 20:21:29.906198 systemd-logind[1521]: Removed session 56. Feb 13 20:21:31.002226 kubelet[2679]: E0213 20:21:31.002171 2679 kubelet.go:2900] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Feb 13 20:21:32.905778 kubelet[2679]: E0213 20:21:32.905738 2679 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 20:21:32.906699 kubelet[2679]: E0213 20:21:32.906476 2679 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"install-cni-plugin\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/flannel/flannel-cni-plugin:v1.1.2\\\"\"" pod="kube-flannel/kube-flannel-ds-ftfqg" podUID="36c00cd4-9622-461d-ac3c-90892608fdc2" Feb 13 20:21:34.910469 systemd[1]: Started sshd@56-10.0.0.6:22-10.0.0.1:56506.service - OpenSSH per-connection server daemon (10.0.0.1:56506). Feb 13 20:21:34.942840 sshd[3853]: Accepted publickey for core from 10.0.0.1 port 56506 ssh2: RSA SHA256:LaFXDQ5kJ4LcD/Es1CLb+6ve3k3017804ZdZMiXaUQg Feb 13 20:21:34.944098 sshd[3853]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 20:21:34.947977 systemd-logind[1521]: New session 57 of user core. Feb 13 20:21:34.959619 systemd[1]: Started session-57.scope - Session 57 of User core. Feb 13 20:21:35.071874 sshd[3853]: pam_unix(sshd:session): session closed for user core Feb 13 20:21:35.076413 systemd[1]: sshd@56-10.0.0.6:22-10.0.0.1:56506.service: Deactivated successfully. Feb 13 20:21:35.078052 systemd[1]: session-57.scope: Deactivated successfully. Feb 13 20:21:35.078768 systemd-logind[1521]: Session 57 logged out. Waiting for processes to exit. Feb 13 20:21:35.080405 systemd-logind[1521]: Removed session 57. Feb 13 20:21:36.003278 kubelet[2679]: E0213 20:21:36.003230 2679 kubelet.go:2900] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Feb 13 20:21:40.080441 systemd[1]: Started sshd@57-10.0.0.6:22-10.0.0.1:56512.service - OpenSSH per-connection server daemon (10.0.0.1:56512). Feb 13 20:21:40.112335 sshd[3869]: Accepted publickey for core from 10.0.0.1 port 56512 ssh2: RSA SHA256:LaFXDQ5kJ4LcD/Es1CLb+6ve3k3017804ZdZMiXaUQg Feb 13 20:21:40.113574 sshd[3869]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 20:21:40.117204 systemd-logind[1521]: New session 58 of user core. Feb 13 20:21:40.127441 systemd[1]: Started session-58.scope - Session 58 of User core. 
Feb 13 20:21:40.232678 sshd[3869]: pam_unix(sshd:session): session closed for user core Feb 13 20:21:40.235132 systemd[1]: sshd@57-10.0.0.6:22-10.0.0.1:56512.service: Deactivated successfully. Feb 13 20:21:40.237502 systemd-logind[1521]: Session 58 logged out. Waiting for processes to exit. Feb 13 20:21:40.237673 systemd[1]: session-58.scope: Deactivated successfully. Feb 13 20:21:40.238842 systemd-logind[1521]: Removed session 58. Feb 13 20:21:41.004711 kubelet[2679]: E0213 20:21:41.004673 2679 kubelet.go:2900] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Feb 13 20:21:45.252422 systemd[1]: Started sshd@58-10.0.0.6:22-10.0.0.1:43564.service - OpenSSH per-connection server daemon (10.0.0.1:43564). Feb 13 20:21:45.284919 sshd[3887]: Accepted publickey for core from 10.0.0.1 port 43564 ssh2: RSA SHA256:LaFXDQ5kJ4LcD/Es1CLb+6ve3k3017804ZdZMiXaUQg Feb 13 20:21:45.286057 sshd[3887]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 20:21:45.289336 systemd-logind[1521]: New session 59 of user core. Feb 13 20:21:45.305423 systemd[1]: Started session-59.scope - Session 59 of User core. Feb 13 20:21:45.409227 sshd[3887]: pam_unix(sshd:session): session closed for user core Feb 13 20:21:45.412490 systemd[1]: sshd@58-10.0.0.6:22-10.0.0.1:43564.service: Deactivated successfully. Feb 13 20:21:45.414327 systemd-logind[1521]: Session 59 logged out. Waiting for processes to exit. Feb 13 20:21:45.414398 systemd[1]: session-59.scope: Deactivated successfully. Feb 13 20:21:45.415532 systemd-logind[1521]: Removed session 59. Feb 13 20:21:45.906754 kubelet[2679]: E0213 20:21:45.906726 2679 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 20:21:45.907386 kubelet[2679]: E0213 20:21:45.907288 2679 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"install-cni-plugin\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/flannel/flannel-cni-plugin:v1.1.2\\\"\"" pod="kube-flannel/kube-flannel-ds-ftfqg" podUID="36c00cd4-9622-461d-ac3c-90892608fdc2" Feb 13 20:21:46.006673 kubelet[2679]: E0213 20:21:46.006635 2679 kubelet.go:2900] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Feb 13 20:21:50.422425 systemd[1]: Started sshd@59-10.0.0.6:22-10.0.0.1:43572.service - OpenSSH per-connection server daemon (10.0.0.1:43572). Feb 13 20:21:50.454382 sshd[3902]: Accepted publickey for core from 10.0.0.1 port 43572 ssh2: RSA SHA256:LaFXDQ5kJ4LcD/Es1CLb+6ve3k3017804ZdZMiXaUQg Feb 13 20:21:50.455497 sshd[3902]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 20:21:50.458636 systemd-logind[1521]: New session 60 of user core. Feb 13 20:21:50.471433 systemd[1]: Started session-60.scope - Session 60 of User core. Feb 13 20:21:50.576862 sshd[3902]: pam_unix(sshd:session): session closed for user core Feb 13 20:21:50.579360 systemd[1]: sshd@59-10.0.0.6:22-10.0.0.1:43572.service: Deactivated successfully. Feb 13 20:21:50.582115 systemd[1]: session-60.scope: Deactivated successfully. Feb 13 20:21:50.582318 systemd-logind[1521]: Session 60 logged out. Waiting for processes to exit. Feb 13 20:21:50.584060 systemd-logind[1521]: Removed session 60. 
Feb 13 20:21:51.007518 kubelet[2679]: E0213 20:21:51.007467 2679 kubelet.go:2900] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Feb 13 20:21:55.594524 systemd[1]: Started sshd@60-10.0.0.6:22-10.0.0.1:38352.service - OpenSSH per-connection server daemon (10.0.0.1:38352). Feb 13 20:21:55.626066 sshd[3917]: Accepted publickey for core from 10.0.0.1 port 38352 ssh2: RSA SHA256:LaFXDQ5kJ4LcD/Es1CLb+6ve3k3017804ZdZMiXaUQg Feb 13 20:21:55.627316 sshd[3917]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 20:21:55.630929 systemd-logind[1521]: New session 61 of user core. Feb 13 20:21:55.645428 systemd[1]: Started session-61.scope - Session 61 of User core. Feb 13 20:21:55.748699 sshd[3917]: pam_unix(sshd:session): session closed for user core Feb 13 20:21:55.751574 systemd[1]: sshd@60-10.0.0.6:22-10.0.0.1:38352.service: Deactivated successfully. Feb 13 20:21:55.753432 systemd[1]: session-61.scope: Deactivated successfully. Feb 13 20:21:55.753444 systemd-logind[1521]: Session 61 logged out. Waiting for processes to exit. Feb 13 20:21:55.754823 systemd-logind[1521]: Removed session 61. Feb 13 20:21:56.008177 kubelet[2679]: E0213 20:21:56.008070 2679 kubelet.go:2900] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Feb 13 20:21:56.089779 update_engine[1527]: I20250213 20:21:56.089704 1527 prefs.cc:52] certificate-report-to-send-update not present in /var/lib/update_engine/prefs Feb 13 20:21:56.089779 update_engine[1527]: I20250213 20:21:56.089769 1527 prefs.cc:52] certificate-report-to-send-download not present in /var/lib/update_engine/prefs Feb 13 20:21:56.090165 update_engine[1527]: I20250213 20:21:56.090045 1527 prefs.cc:52] aleph-version not present in /var/lib/update_engine/prefs Feb 13 20:21:56.090524 update_engine[1527]: I20250213 20:21:56.090415 1527 omaha_request_params.cc:62] Current group set to lts Feb 13 20:21:56.090524 update_engine[1527]: I20250213 20:21:56.090502 1527 update_attempter.cc:499] Already updated boot flags. Skipping. Feb 13 20:21:56.090524 update_engine[1527]: I20250213 20:21:56.090512 1527 update_attempter.cc:643] Scheduling an action processor start. 
Feb 13 20:21:56.090608 update_engine[1527]: I20250213 20:21:56.090528 1527 action_processor.cc:36] ActionProcessor::StartProcessing: OmahaRequestAction Feb 13 20:21:56.090608 update_engine[1527]: I20250213 20:21:56.090552 1527 prefs.cc:52] previous-version not present in /var/lib/update_engine/prefs Feb 13 20:21:56.090608 update_engine[1527]: I20250213 20:21:56.090600 1527 omaha_request_action.cc:271] Posting an Omaha request to disabled Feb 13 20:21:56.090664 update_engine[1527]: I20250213 20:21:56.090610 1527 omaha_request_action.cc:272] Request: Feb 13 20:21:56.090664 update_engine[1527]: I20250213 20:21:56.090615 1527 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Feb 13 20:21:56.091019 locksmithd[1556]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_CHECKING_FOR_UPDATE" NewVersion=0.0.0 NewSize=0 Feb 13 20:21:56.091726 update_engine[1527]: I20250213 20:21:56.091688 1527 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Feb 13 20:21:56.091941 update_engine[1527]: I20250213 20:21:56.091912 1527 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. Feb 13 20:21:56.099886 update_engine[1527]: E20250213 20:21:56.099838 1527 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Feb 13 20:21:56.099960 update_engine[1527]: I20250213 20:21:56.099933 1527 libcurl_http_fetcher.cc:283] No HTTP response, retry 1 Feb 13 20:21:58.905306 kubelet[2679]: E0213 20:21:58.905266 2679 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 20:21:58.905904 kubelet[2679]: E0213 20:21:58.905867 2679 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"install-cni-plugin\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/flannel/flannel-cni-plugin:v1.1.2\\\"\"" pod="kube-flannel/kube-flannel-ds-ftfqg" podUID="36c00cd4-9622-461d-ac3c-90892608fdc2" Feb 13 20:22:00.761428 systemd[1]: Started sshd@61-10.0.0.6:22-10.0.0.1:38368.service - OpenSSH per-connection server daemon (10.0.0.1:38368). Feb 13 20:22:00.793200 sshd[3936]: Accepted publickey for core from 10.0.0.1 port 38368 ssh2: RSA SHA256:LaFXDQ5kJ4LcD/Es1CLb+6ve3k3017804ZdZMiXaUQg Feb 13 20:22:00.794427 sshd[3936]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 20:22:00.799037 systemd-logind[1521]: New session 62 of user core. Feb 13 20:22:00.808502 systemd[1]: Started session-62.scope - Session 62 of User core. Feb 13 20:22:00.911937 sshd[3936]: pam_unix(sshd:session): session closed for user core Feb 13 20:22:00.914882 systemd[1]: sshd@61-10.0.0.6:22-10.0.0.1:38368.service: Deactivated successfully. Feb 13 20:22:00.916847 systemd-logind[1521]: Session 62 logged out. Waiting for processes to exit. Feb 13 20:22:00.917332 systemd[1]: session-62.scope: Deactivated successfully. Feb 13 20:22:00.918161 systemd-logind[1521]: Removed session 62.
Feb 13 20:22:01.009602 kubelet[2679]: E0213 20:22:01.009564 2679 kubelet.go:2900] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Feb 13 20:22:04.905692 kubelet[2679]: E0213 20:22:04.905660 2679 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 20:22:05.907022 kubelet[2679]: E0213 20:22:05.906994 2679 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 20:22:05.926521 systemd[1]: Started sshd@62-10.0.0.6:22-10.0.0.1:38034.service - OpenSSH per-connection server daemon (10.0.0.1:38034). Feb 13 20:22:05.958327 sshd[3955]: Accepted publickey for core from 10.0.0.1 port 38034 ssh2: RSA SHA256:LaFXDQ5kJ4LcD/Es1CLb+6ve3k3017804ZdZMiXaUQg Feb 13 20:22:05.959492 sshd[3955]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 20:22:05.963416 systemd-logind[1521]: New session 63 of user core. Feb 13 20:22:05.975457 systemd[1]: Started session-63.scope - Session 63 of User core. Feb 13 20:22:06.011057 kubelet[2679]: E0213 20:22:06.010919 2679 kubelet.go:2900] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Feb 13 20:22:06.079630 sshd[3955]: pam_unix(sshd:session): session closed for user core Feb 13 20:22:06.082079 systemd[1]: sshd@62-10.0.0.6:22-10.0.0.1:38034.service: Deactivated successfully. Feb 13 20:22:06.084821 systemd[1]: session-63.scope: Deactivated successfully. Feb 13 20:22:06.085116 systemd-logind[1521]: Session 63 logged out. Waiting for processes to exit. Feb 13 20:22:06.086060 systemd-logind[1521]: Removed session 63. Feb 13 20:22:06.089575 update_engine[1527]: I20250213 20:22:06.089529 1527 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Feb 13 20:22:06.089862 update_engine[1527]: I20250213 20:22:06.089733 1527 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Feb 13 20:22:06.089913 update_engine[1527]: I20250213 20:22:06.089883 1527 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. 
Feb 13 20:22:06.097609 update_engine[1527]: E20250213 20:22:06.097577 1527 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Feb 13 20:22:06.097666 update_engine[1527]: I20250213 20:22:06.097628 1527 libcurl_http_fetcher.cc:283] No HTTP response, retry 2 Feb 13 20:22:10.905461 kubelet[2679]: E0213 20:22:10.905398 2679 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 20:22:10.906052 kubelet[2679]: E0213 20:22:10.906009 2679 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"install-cni-plugin\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/flannel/flannel-cni-plugin:v1.1.2\\\"\"" pod="kube-flannel/kube-flannel-ds-ftfqg" podUID="36c00cd4-9622-461d-ac3c-90892608fdc2" Feb 13 20:22:11.012553 kubelet[2679]: E0213 20:22:11.012504 2679 kubelet.go:2900] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Feb 13 20:22:11.095422 systemd[1]: Started sshd@63-10.0.0.6:22-10.0.0.1:38046.service - OpenSSH per-connection server daemon (10.0.0.1:38046). Feb 13 20:22:11.127383 sshd[3970]: Accepted publickey for core from 10.0.0.1 port 38046 ssh2: RSA SHA256:LaFXDQ5kJ4LcD/Es1CLb+6ve3k3017804ZdZMiXaUQg Feb 13 20:22:11.128541 sshd[3970]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 20:22:11.132301 systemd-logind[1521]: New session 64 of user core. Feb 13 20:22:11.143421 systemd[1]: Started session-64.scope - Session 64 of User core. Feb 13 20:22:11.248605 sshd[3970]: pam_unix(sshd:session): session closed for user core Feb 13 20:22:11.253478 systemd[1]: sshd@63-10.0.0.6:22-10.0.0.1:38046.service: Deactivated successfully. Feb 13 20:22:11.253815 systemd-logind[1521]: Session 64 logged out. Waiting for processes to exit. Feb 13 20:22:11.255897 systemd[1]: session-64.scope: Deactivated successfully. Feb 13 20:22:11.256736 systemd-logind[1521]: Removed session 64. Feb 13 20:22:16.013315 kubelet[2679]: E0213 20:22:16.013242 2679 kubelet.go:2900] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Feb 13 20:22:16.091475 update_engine[1527]: I20250213 20:22:16.089823 1527 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Feb 13 20:22:16.091956 update_engine[1527]: I20250213 20:22:16.091917 1527 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Feb 13 20:22:16.092125 update_engine[1527]: I20250213 20:22:16.092095 1527 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. Feb 13 20:22:16.096296 update_engine[1527]: E20250213 20:22:16.096256 1527 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Feb 13 20:22:16.096371 update_engine[1527]: I20250213 20:22:16.096315 1527 libcurl_http_fetcher.cc:283] No HTTP response, retry 3 Feb 13 20:22:16.264433 systemd[1]: Started sshd@64-10.0.0.6:22-10.0.0.1:37570.service - OpenSSH per-connection server daemon (10.0.0.1:37570). 
Feb 13 20:22:16.296142 sshd[3987]: Accepted publickey for core from 10.0.0.1 port 37570 ssh2: RSA SHA256:LaFXDQ5kJ4LcD/Es1CLb+6ve3k3017804ZdZMiXaUQg Feb 13 20:22:16.297277 sshd[3987]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 20:22:16.300946 systemd-logind[1521]: New session 65 of user core. Feb 13 20:22:16.309601 systemd[1]: Started session-65.scope - Session 65 of User core. Feb 13 20:22:16.413675 sshd[3987]: pam_unix(sshd:session): session closed for user core Feb 13 20:22:16.416037 systemd[1]: sshd@64-10.0.0.6:22-10.0.0.1:37570.service: Deactivated successfully. Feb 13 20:22:16.418517 systemd-logind[1521]: Session 65 logged out. Waiting for processes to exit. Feb 13 20:22:16.419055 systemd[1]: session-65.scope: Deactivated successfully. Feb 13 20:22:16.419948 systemd-logind[1521]: Removed session 65. Feb 13 20:22:16.905839 kubelet[2679]: E0213 20:22:16.905798 2679 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 20:22:21.013966 kubelet[2679]: E0213 20:22:21.013882 2679 kubelet.go:2900] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Feb 13 20:22:21.426419 systemd[1]: Started sshd@65-10.0.0.6:22-10.0.0.1:37574.service - OpenSSH per-connection server daemon (10.0.0.1:37574). Feb 13 20:22:21.458395 sshd[4003]: Accepted publickey for core from 10.0.0.1 port 37574 ssh2: RSA SHA256:LaFXDQ5kJ4LcD/Es1CLb+6ve3k3017804ZdZMiXaUQg Feb 13 20:22:21.459524 sshd[4003]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 20:22:21.462911 systemd-logind[1521]: New session 66 of user core. Feb 13 20:22:21.471430 systemd[1]: Started session-66.scope - Session 66 of User core. Feb 13 20:22:21.575482 sshd[4003]: pam_unix(sshd:session): session closed for user core Feb 13 20:22:21.578643 systemd[1]: sshd@65-10.0.0.6:22-10.0.0.1:37574.service: Deactivated successfully. Feb 13 20:22:21.580906 systemd-logind[1521]: Session 66 logged out. Waiting for processes to exit. Feb 13 20:22:21.580977 systemd[1]: session-66.scope: Deactivated successfully. Feb 13 20:22:21.582370 systemd-logind[1521]: Removed session 66. 
Feb 13 20:22:24.905610 kubelet[2679]: E0213 20:22:24.905502 2679 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 20:22:24.906160 kubelet[2679]: E0213 20:22:24.906128 2679 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"install-cni-plugin\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/flannel/flannel-cni-plugin:v1.1.2\\\"\"" pod="kube-flannel/kube-flannel-ds-ftfqg" podUID="36c00cd4-9622-461d-ac3c-90892608fdc2" Feb 13 20:22:26.014969 kubelet[2679]: E0213 20:22:26.014916 2679 kubelet.go:2900] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Feb 13 20:22:26.090343 update_engine[1527]: I20250213 20:22:26.090263 1527 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Feb 13 20:22:26.090694 update_engine[1527]: I20250213 20:22:26.090526 1527 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Feb 13 20:22:26.090719 update_engine[1527]: I20250213 20:22:26.090695 1527 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. Feb 13 20:22:26.099775 update_engine[1527]: E20250213 20:22:26.099731 1527 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Feb 13 20:22:26.099828 update_engine[1527]: I20250213 20:22:26.099788 1527 libcurl_http_fetcher.cc:297] Transfer resulted in an error (0), 0 bytes downloaded Feb 13 20:22:26.099828 update_engine[1527]: I20250213 20:22:26.099798 1527 omaha_request_action.cc:617] Omaha request response: Feb 13 20:22:26.099892 update_engine[1527]: E20250213 20:22:26.099869 1527 omaha_request_action.cc:636] Omaha request network transfer failed. Feb 13 20:22:26.099892 update_engine[1527]: I20250213 20:22:26.099890 1527 action_processor.cc:68] ActionProcessor::ActionComplete: OmahaRequestAction action failed. Aborting processing. Feb 13 20:22:26.099941 update_engine[1527]: I20250213 20:22:26.099896 1527 action_processor.cc:73] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction Feb 13 20:22:26.099941 update_engine[1527]: I20250213 20:22:26.099900 1527 update_attempter.cc:306] Processing Done. Feb 13 20:22:26.099941 update_engine[1527]: E20250213 20:22:26.099913 1527 update_attempter.cc:619] Update failed. Feb 13 20:22:26.099941 update_engine[1527]: I20250213 20:22:26.099920 1527 utils.cc:600] Converting error code 2000 to kActionCodeOmahaErrorInHTTPResponse Feb 13 20:22:26.099941 update_engine[1527]: I20250213 20:22:26.099924 1527 payload_state.cc:97] Updating payload state for error code: 37 (kActionCodeOmahaErrorInHTTPResponse) Feb 13 20:22:26.099941 update_engine[1527]: I20250213 20:22:26.099929 1527 payload_state.cc:103] Ignoring failures until we get a valid Omaha response. 
Feb 13 20:22:26.100055 update_engine[1527]: I20250213 20:22:26.099993 1527 action_processor.cc:36] ActionProcessor::StartProcessing: OmahaRequestAction Feb 13 20:22:26.100055 update_engine[1527]: I20250213 20:22:26.100014 1527 omaha_request_action.cc:271] Posting an Omaha request to disabled Feb 13 20:22:26.100055 update_engine[1527]: I20250213 20:22:26.100019 1527 omaha_request_action.cc:272] Request: Feb 13 20:22:26.100055 update_engine[1527]: I20250213 20:22:26.100024 1527 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Feb 13 20:22:26.100257 update_engine[1527]: I20250213 20:22:26.100163 1527 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Feb 13 20:22:26.100323 locksmithd[1556]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_REPORTING_ERROR_EVENT" NewVersion=0.0.0 NewSize=0 Feb 13 20:22:26.100540 update_engine[1527]: I20250213 20:22:26.100308 1527 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. Feb 13 20:22:26.111085 update_engine[1527]: E20250213 20:22:26.111048 1527 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Feb 13 20:22:26.111137 update_engine[1527]: I20250213 20:22:26.111099 1527 libcurl_http_fetcher.cc:297] Transfer resulted in an error (0), 0 bytes downloaded Feb 13 20:22:26.111137 update_engine[1527]: I20250213 20:22:26.111109 1527 omaha_request_action.cc:617] Omaha request response: Feb 13 20:22:26.111137 update_engine[1527]: I20250213 20:22:26.111115 1527 action_processor.cc:65] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction Feb 13 20:22:26.111137 update_engine[1527]: I20250213 20:22:26.111124 1527 action_processor.cc:73] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction Feb 13 20:22:26.111137 update_engine[1527]: I20250213 20:22:26.111127 1527 update_attempter.cc:306] Processing Done. Feb 13 20:22:26.111137 update_engine[1527]: I20250213 20:22:26.111132 1527 update_attempter.cc:310] Error event sent. Feb 13 20:22:26.111281 update_engine[1527]: I20250213 20:22:26.111141 1527 update_check_scheduler.cc:74] Next update check in 48m41s Feb 13 20:22:26.111419 locksmithd[1556]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_IDLE" NewVersion=0.0.0 NewSize=0 Feb 13 20:22:26.585414 systemd[1]: Started sshd@66-10.0.0.6:22-10.0.0.1:55020.service - OpenSSH per-connection server daemon (10.0.0.1:55020). Feb 13 20:22:26.617339 sshd[4019]: Accepted publickey for core from 10.0.0.1 port 55020 ssh2: RSA SHA256:LaFXDQ5kJ4LcD/Es1CLb+6ve3k3017804ZdZMiXaUQg Feb 13 20:22:26.618560 sshd[4019]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 20:22:26.622181 systemd-logind[1521]: New session 67 of user core. Feb 13 20:22:26.630449 systemd[1]: Started session-67.scope - Session 67 of User core. Feb 13 20:22:26.735424 sshd[4019]: pam_unix(sshd:session): session closed for user core Feb 13 20:22:26.738447 systemd-logind[1521]: Session 67 logged out. Waiting for processes to exit. Feb 13 20:22:26.738581 systemd[1]: sshd@66-10.0.0.6:22-10.0.0.1:55020.service: Deactivated successfully. Feb 13 20:22:26.740804 systemd[1]: session-67.scope: Deactivated successfully. Feb 13 20:22:26.741662 systemd-logind[1521]: Removed session 67.
Feb 13 20:22:31.015980 kubelet[2679]: E0213 20:22:31.015878 2679 kubelet.go:2900] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Feb 13 20:22:31.746535 systemd[1]: Started sshd@67-10.0.0.6:22-10.0.0.1:55030.service - OpenSSH per-connection server daemon (10.0.0.1:55030). Feb 13 20:22:31.778467 sshd[4034]: Accepted publickey for core from 10.0.0.1 port 55030 ssh2: RSA SHA256:LaFXDQ5kJ4LcD/Es1CLb+6ve3k3017804ZdZMiXaUQg Feb 13 20:22:31.779623 sshd[4034]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 20:22:31.783152 systemd-logind[1521]: New session 68 of user core. Feb 13 20:22:31.798497 systemd[1]: Started session-68.scope - Session 68 of User core. Feb 13 20:22:31.901924 sshd[4034]: pam_unix(sshd:session): session closed for user core Feb 13 20:22:31.905641 systemd[1]: sshd@67-10.0.0.6:22-10.0.0.1:55030.service: Deactivated successfully. Feb 13 20:22:31.907758 systemd-logind[1521]: Session 68 logged out. Waiting for processes to exit. Feb 13 20:22:31.908107 systemd[1]: session-68.scope: Deactivated successfully. Feb 13 20:22:31.909191 systemd-logind[1521]: Removed session 68. Feb 13 20:22:36.017377 kubelet[2679]: E0213 20:22:36.017322 2679 kubelet.go:2900] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Feb 13 20:22:36.904935 kubelet[2679]: E0213 20:22:36.904853 2679 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 20:22:36.905719 kubelet[2679]: E0213 20:22:36.905503 2679 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"install-cni-plugin\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/flannel/flannel-cni-plugin:v1.1.2\\\"\"" pod="kube-flannel/kube-flannel-ds-ftfqg" podUID="36c00cd4-9622-461d-ac3c-90892608fdc2" Feb 13 20:22:36.915437 systemd[1]: Started sshd@68-10.0.0.6:22-10.0.0.1:57682.service - OpenSSH per-connection server daemon (10.0.0.1:57682). Feb 13 20:22:36.946941 sshd[4051]: Accepted publickey for core from 10.0.0.1 port 57682 ssh2: RSA SHA256:LaFXDQ5kJ4LcD/Es1CLb+6ve3k3017804ZdZMiXaUQg Feb 13 20:22:36.948069 sshd[4051]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 20:22:36.951844 systemd-logind[1521]: New session 69 of user core. Feb 13 20:22:36.957435 systemd[1]: Started session-69.scope - Session 69 of User core. Feb 13 20:22:37.063710 sshd[4051]: pam_unix(sshd:session): session closed for user core Feb 13 20:22:37.066154 systemd[1]: sshd@68-10.0.0.6:22-10.0.0.1:57682.service: Deactivated successfully. Feb 13 20:22:37.068982 systemd[1]: session-69.scope: Deactivated successfully. Feb 13 20:22:37.069038 systemd-logind[1521]: Session 69 logged out. Waiting for processes to exit. Feb 13 20:22:37.070050 systemd-logind[1521]: Removed session 69. Feb 13 20:22:41.018393 kubelet[2679]: E0213 20:22:41.018356 2679 kubelet.go:2900] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Feb 13 20:22:42.090527 systemd[1]: Started sshd@69-10.0.0.6:22-10.0.0.1:57686.service - OpenSSH per-connection server daemon (10.0.0.1:57686). 
Feb 13 20:22:42.122494 sshd[4068]: Accepted publickey for core from 10.0.0.1 port 57686 ssh2: RSA SHA256:LaFXDQ5kJ4LcD/Es1CLb+6ve3k3017804ZdZMiXaUQg Feb 13 20:22:42.123626 sshd[4068]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 20:22:42.127268 systemd-logind[1521]: New session 70 of user core. Feb 13 20:22:42.138572 systemd[1]: Started session-70.scope - Session 70 of User core. Feb 13 20:22:42.243846 sshd[4068]: pam_unix(sshd:session): session closed for user core Feb 13 20:22:42.246868 systemd[1]: sshd@69-10.0.0.6:22-10.0.0.1:57686.service: Deactivated successfully. Feb 13 20:22:42.249082 systemd-logind[1521]: Session 70 logged out. Waiting for processes to exit. Feb 13 20:22:42.249088 systemd[1]: session-70.scope: Deactivated successfully. Feb 13 20:22:42.250329 systemd-logind[1521]: Removed session 70. Feb 13 20:22:46.019678 kubelet[2679]: E0213 20:22:46.019624 2679 kubelet.go:2900] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Feb 13 20:22:47.254438 systemd[1]: Started sshd@70-10.0.0.6:22-10.0.0.1:42130.service - OpenSSH per-connection server daemon (10.0.0.1:42130). Feb 13 20:22:47.287880 sshd[4083]: Accepted publickey for core from 10.0.0.1 port 42130 ssh2: RSA SHA256:LaFXDQ5kJ4LcD/Es1CLb+6ve3k3017804ZdZMiXaUQg Feb 13 20:22:47.289023 sshd[4083]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 20:22:47.292704 systemd-logind[1521]: New session 71 of user core. Feb 13 20:22:47.302426 systemd[1]: Started session-71.scope - Session 71 of User core. Feb 13 20:22:47.407178 sshd[4083]: pam_unix(sshd:session): session closed for user core Feb 13 20:22:47.410360 systemd[1]: sshd@70-10.0.0.6:22-10.0.0.1:42130.service: Deactivated successfully. Feb 13 20:22:47.412093 systemd-logind[1521]: Session 71 logged out. Waiting for processes to exit. Feb 13 20:22:47.412170 systemd[1]: session-71.scope: Deactivated successfully. Feb 13 20:22:47.412944 systemd-logind[1521]: Removed session 71. Feb 13 20:22:48.905799 kubelet[2679]: E0213 20:22:48.905725 2679 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 20:22:51.020919 kubelet[2679]: E0213 20:22:51.020880 2679 kubelet.go:2900] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Feb 13 20:22:51.905418 kubelet[2679]: E0213 20:22:51.905205 2679 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 20:22:51.906064 kubelet[2679]: E0213 20:22:51.905861 2679 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"install-cni-plugin\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/flannel/flannel-cni-plugin:v1.1.2\\\"\"" pod="kube-flannel/kube-flannel-ds-ftfqg" podUID="36c00cd4-9622-461d-ac3c-90892608fdc2" Feb 13 20:22:52.422445 systemd[1]: Started sshd@71-10.0.0.6:22-10.0.0.1:42142.service - OpenSSH per-connection server daemon (10.0.0.1:42142). 
Feb 13 20:22:52.454612 sshd[4099]: Accepted publickey for core from 10.0.0.1 port 42142 ssh2: RSA SHA256:LaFXDQ5kJ4LcD/Es1CLb+6ve3k3017804ZdZMiXaUQg Feb 13 20:22:52.455772 sshd[4099]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 20:22:52.459809 systemd-logind[1521]: New session 72 of user core. Feb 13 20:22:52.469493 systemd[1]: Started session-72.scope - Session 72 of User core. Feb 13 20:22:52.576798 sshd[4099]: pam_unix(sshd:session): session closed for user core Feb 13 20:22:52.579710 systemd[1]: sshd@71-10.0.0.6:22-10.0.0.1:42142.service: Deactivated successfully. Feb 13 20:22:52.582839 systemd-logind[1521]: Session 72 logged out. Waiting for processes to exit. Feb 13 20:22:52.583375 systemd[1]: session-72.scope: Deactivated successfully. Feb 13 20:22:52.584714 systemd-logind[1521]: Removed session 72. Feb 13 20:22:56.022477 kubelet[2679]: E0213 20:22:56.022424 2679 kubelet.go:2900] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Feb 13 20:22:57.586452 systemd[1]: Started sshd@72-10.0.0.6:22-10.0.0.1:40042.service - OpenSSH per-connection server daemon (10.0.0.1:40042). Feb 13 20:22:57.618556 sshd[4118]: Accepted publickey for core from 10.0.0.1 port 40042 ssh2: RSA SHA256:LaFXDQ5kJ4LcD/Es1CLb+6ve3k3017804ZdZMiXaUQg Feb 13 20:22:57.619675 sshd[4118]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 20:22:57.623292 systemd-logind[1521]: New session 73 of user core. Feb 13 20:22:57.629436 systemd[1]: Started session-73.scope - Session 73 of User core. Feb 13 20:22:57.734042 sshd[4118]: pam_unix(sshd:session): session closed for user core Feb 13 20:22:57.737186 systemd[1]: sshd@72-10.0.0.6:22-10.0.0.1:40042.service: Deactivated successfully. Feb 13 20:22:57.739122 systemd[1]: session-73.scope: Deactivated successfully. Feb 13 20:22:57.739148 systemd-logind[1521]: Session 73 logged out. Waiting for processes to exit. Feb 13 20:22:57.740348 systemd-logind[1521]: Removed session 73. Feb 13 20:23:01.023312 kubelet[2679]: E0213 20:23:01.023275 2679 kubelet.go:2900] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Feb 13 20:23:02.743449 systemd[1]: Started sshd@73-10.0.0.6:22-10.0.0.1:34768.service - OpenSSH per-connection server daemon (10.0.0.1:34768). Feb 13 20:23:02.775629 sshd[4134]: Accepted publickey for core from 10.0.0.1 port 34768 ssh2: RSA SHA256:LaFXDQ5kJ4LcD/Es1CLb+6ve3k3017804ZdZMiXaUQg Feb 13 20:23:02.776803 sshd[4134]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 20:23:02.780621 systemd-logind[1521]: New session 74 of user core. Feb 13 20:23:02.788483 systemd[1]: Started session-74.scope - Session 74 of User core. Feb 13 20:23:02.894967 sshd[4134]: pam_unix(sshd:session): session closed for user core Feb 13 20:23:02.898005 systemd[1]: sshd@73-10.0.0.6:22-10.0.0.1:34768.service: Deactivated successfully. Feb 13 20:23:02.899906 systemd-logind[1521]: Session 74 logged out. Waiting for processes to exit. Feb 13 20:23:02.899909 systemd[1]: session-74.scope: Deactivated successfully. Feb 13 20:23:02.901072 systemd-logind[1521]: Removed session 74. 
Feb 13 20:23:05.905428 kubelet[2679]: E0213 20:23:05.905318 2679 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 13 20:23:05.906263 containerd[1539]: time="2025-02-13T20:23:05.906230779Z" level=info msg="PullImage \"docker.io/flannel/flannel-cni-plugin:v1.1.2\""
Feb 13 20:23:06.024701 kubelet[2679]: E0213 20:23:06.024640 2679 kubelet.go:2900] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Feb 13 20:23:07.207126 containerd[1539]: time="2025-02-13T20:23:07.207054590Z" level=error msg="PullImage \"docker.io/flannel/flannel-cni-plugin:v1.1.2\" failed" error="failed to pull and unpack image \"docker.io/flannel/flannel-cni-plugin:v1.1.2\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/flannel/flannel-cni-plugin/manifests/sha256:773049a6464eca8d26c97fb59410cb8385fe59b2f9a68ce7ae2fbe2d49d63585: 429 Too Many Requests - Server message: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit"
Feb 13 20:23:07.207524 containerd[1539]: time="2025-02-13T20:23:07.207128631Z" level=info msg="stop pulling image docker.io/flannel/flannel-cni-plugin:v1.1.2: active requests=0, bytes read=13144"
Feb 13 20:23:07.207561 kubelet[2679]: E0213 20:23:07.207298 2679 remote_image.go:180] "PullImage from image service failed" err="rpc error: code = Unknown desc = failed to pull and unpack image \"docker.io/flannel/flannel-cni-plugin:v1.1.2\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/flannel/flannel-cni-plugin/manifests/sha256:773049a6464eca8d26c97fb59410cb8385fe59b2f9a68ce7ae2fbe2d49d63585: 429 Too Many Requests - Server message: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit" image="docker.io/flannel/flannel-cni-plugin:v1.1.2"
Feb 13 20:23:07.207561 kubelet[2679]: E0213 20:23:07.207350 2679 kuberuntime_image.go:55] "Failed to pull image" err="failed to pull and unpack image \"docker.io/flannel/flannel-cni-plugin:v1.1.2\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/flannel/flannel-cni-plugin/manifests/sha256:773049a6464eca8d26c97fb59410cb8385fe59b2f9a68ce7ae2fbe2d49d63585: 429 Too Many Requests - Server message: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit" image="docker.io/flannel/flannel-cni-plugin:v1.1.2"
Feb 13 20:23:07.207825 kubelet[2679]: E0213 20:23:07.207435 2679 kuberuntime_manager.go:1256] init container &Container{Name:install-cni-plugin,Image:docker.io/flannel/flannel-cni-plugin:v1.1.2,Command:[cp],Args:[-f /flannel /opt/cni/bin/flannel],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:cni-plugin,ReadOnly:false,MountPath:/opt/cni/bin,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-sdrbt,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod kube-flannel-ds-ftfqg_kube-flannel(36c00cd4-9622-461d-ac3c-90892608fdc2): ErrImagePull: failed to pull and unpack image "docker.io/flannel/flannel-cni-plugin:v1.1.2": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/flannel/flannel-cni-plugin/manifests/sha256:773049a6464eca8d26c97fb59410cb8385fe59b2f9a68ce7ae2fbe2d49d63585: 429 Too Many Requests - Server message: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit
Feb 13 20:23:07.207887 kubelet[2679]: E0213 20:23:07.207468 2679 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"install-cni-plugin\" with ErrImagePull: \"failed to pull and unpack image \\\"docker.io/flannel/flannel-cni-plugin:v1.1.2\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/flannel/flannel-cni-plugin/manifests/sha256:773049a6464eca8d26c97fb59410cb8385fe59b2f9a68ce7ae2fbe2d49d63585: 429 Too Many Requests - Server message: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit\"" pod="kube-flannel/kube-flannel-ds-ftfqg" podUID="36c00cd4-9622-461d-ac3c-90892608fdc2"
Feb 13 20:23:07.905412 systemd[1]: Started sshd@74-10.0.0.6:22-10.0.0.1:34778.service - OpenSSH per-connection server daemon (10.0.0.1:34778).
Feb 13 20:23:07.937820 sshd[4149]: Accepted publickey for core from 10.0.0.1 port 34778 ssh2: RSA SHA256:LaFXDQ5kJ4LcD/Es1CLb+6ve3k3017804ZdZMiXaUQg
Feb 13 20:23:07.939019 sshd[4149]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 20:23:07.942383 systemd-logind[1521]: New session 75 of user core.
Feb 13 20:23:07.953421 systemd[1]: Started session-75.scope - Session 75 of User core.
Feb 13 20:23:08.058208 sshd[4149]: pam_unix(sshd:session): session closed for user core
Feb 13 20:23:08.061818 systemd[1]: sshd@74-10.0.0.6:22-10.0.0.1:34778.service: Deactivated successfully.
Feb 13 20:23:08.063725 systemd[1]: session-75.scope: Deactivated successfully.
Feb 13 20:23:08.063729 systemd-logind[1521]: Session 75 logged out. Waiting for processes to exit.
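These records capture the whole failure chain: registry-1.docker.io answers the manifest request with HTTP 429 (anonymous pulls of docker.io/flannel/flannel-cni-plugin:v1.1.2 are rate-limited), containerd surfaces that as a pull error, the kubelet records ErrImagePull for the install-cni-plugin init container, and subsequent syncs park the container in ImagePullBackOff with exponentially growing retry delays. A rough Go sketch of that backoff shape; the 10 s initial delay and 5 min cap are the commonly cited kubelet defaults, assumed here rather than read from this log:

    // pullbackoff.go - sketch of ImagePullBackOff-style retries:
    // each failed pull doubles the wait, capped at a maximum delay.
    package main

    import (
    	"fmt"
    	"time"
    )

    func main() {
    	delay := 10 * time.Second    // assumed kubelet-like initial backoff
    	maxDelay := 5 * time.Minute  // assumed cap
    	for attempt := 1; attempt <= 8; attempt++ {
    		fmt.Printf("attempt %d failed (429 Too Many Requests); next retry in %v\n", attempt, delay)
    		delay *= 2
    		if delay > maxDelay {
    			delay = maxDelay
    		}
    	}
    }

Authenticating the node's runtime against Docker Hub or mirroring the image into an unthrottled registry would break the loop; as the rest of this section shows, neither happens and the back-off messages keep repeating.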
Feb 13 20:23:08.064815 systemd-logind[1521]: Removed session 75.
Feb 13 20:23:11.025802 kubelet[2679]: E0213 20:23:11.025743 2679 kubelet.go:2900] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Feb 13 20:23:13.068416 systemd[1]: Started sshd@75-10.0.0.6:22-10.0.0.1:57516.service - OpenSSH per-connection server daemon (10.0.0.1:57516).
Feb 13 20:23:13.100313 sshd[4166]: Accepted publickey for core from 10.0.0.1 port 57516 ssh2: RSA SHA256:LaFXDQ5kJ4LcD/Es1CLb+6ve3k3017804ZdZMiXaUQg
Feb 13 20:23:13.101461 sshd[4166]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 20:23:13.105171 systemd-logind[1521]: New session 76 of user core.
Feb 13 20:23:13.117469 systemd[1]: Started session-76.scope - Session 76 of User core.
Feb 13 20:23:13.223026 sshd[4166]: pam_unix(sshd:session): session closed for user core
Feb 13 20:23:13.226455 systemd[1]: sshd@75-10.0.0.6:22-10.0.0.1:57516.service: Deactivated successfully.
Feb 13 20:23:13.228353 systemd[1]: session-76.scope: Deactivated successfully.
Feb 13 20:23:13.228716 systemd-logind[1521]: Session 76 logged out. Waiting for processes to exit.
Feb 13 20:23:13.229691 systemd-logind[1521]: Removed session 76.
Feb 13 20:23:16.027085 kubelet[2679]: E0213 20:23:16.027042 2679 kubelet.go:2900] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Feb 13 20:23:17.905131 kubelet[2679]: E0213 20:23:17.905092 2679 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 13 20:23:17.906153 kubelet[2679]: E0213 20:23:17.905900 2679 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"install-cni-plugin\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/flannel/flannel-cni-plugin:v1.1.2\\\"\"" pod="kube-flannel/kube-flannel-ds-ftfqg" podUID="36c00cd4-9622-461d-ac3c-90892608fdc2"
Feb 13 20:23:18.237764 systemd[1]: Started sshd@76-10.0.0.6:22-10.0.0.1:57522.service - OpenSSH per-connection server daemon (10.0.0.1:57522).
Feb 13 20:23:18.269034 sshd[4183]: Accepted publickey for core from 10.0.0.1 port 57522 ssh2: RSA SHA256:LaFXDQ5kJ4LcD/Es1CLb+6ve3k3017804ZdZMiXaUQg
Feb 13 20:23:18.270166 sshd[4183]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 20:23:18.274158 systemd-logind[1521]: New session 77 of user core.
Feb 13 20:23:18.288574 systemd[1]: Started session-77.scope - Session 77 of User core.
Feb 13 20:23:18.396559 sshd[4183]: pam_unix(sshd:session): session closed for user core
Feb 13 20:23:18.399431 systemd-logind[1521]: Session 77 logged out. Waiting for processes to exit.
Feb 13 20:23:18.399534 systemd[1]: sshd@76-10.0.0.6:22-10.0.0.1:57522.service: Deactivated successfully.
Feb 13 20:23:18.401708 systemd[1]: session-77.scope: Deactivated successfully.
Feb 13 20:23:18.402557 systemd-logind[1521]: Removed session 77.
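The recurring dns.go:153 error is the kubelet truncating a pod's resolver configuration: glibc resolvers honor at most three nameserver entries, so when the node supplies more, the kubelet applies the first three (here 1.1.1.1, 1.0.0.1, and 8.8.8.8) and reports the rest as omitted. A simplified Go sketch of that truncation, not the kubelet's actual implementation; the fourth entry in the sample input is hypothetical, since the log only shows the three that survived:

    // dnslimit.go - sketch of the nameserver cap behind the dns.go warning:
    // keep the first three nameservers, warn about the rest.
    package main

    import (
    	"bufio"
    	"fmt"
    	"strings"
    )

    const maxNameservers = 3 // glibc resolver limit (MAXNS)

    func main() {
    	// Stand-in for a node resolv.conf with too many entries
    	// (the 8.8.4.4 line is a hypothetical fourth server).
    	resolvConf := "nameserver 1.1.1.1\nnameserver 1.0.0.1\nnameserver 8.8.8.8\nnameserver 8.8.4.4\n"

    	var servers []string
    	sc := bufio.NewScanner(strings.NewReader(resolvConf))
    	for sc.Scan() {
    		fields := strings.Fields(sc.Text())
    		if len(fields) == 2 && fields[0] == "nameserver" {
    			servers = append(servers, fields[1])
    		}
    	}
    	if len(servers) > maxNameservers {
    		fmt.Printf("Nameserver limits exceeded, omitting %d of %d entries\n",
    			len(servers)-maxNameservers, len(servers))
    		servers = servers[:maxNameservers]
    	}
    	fmt.Println("applied nameserver line:", strings.Join(servers, " "))
    }

Its output, "applied nameserver line: 1.1.1.1 1.0.0.1 8.8.8.8", matches the applied line the kubelet logs throughout this section.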
Feb 13 20:23:21.028456 kubelet[2679]: E0213 20:23:21.028415 2679 kubelet.go:2900] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Feb 13 20:23:23.413427 systemd[1]: Started sshd@77-10.0.0.6:22-10.0.0.1:57740.service - OpenSSH per-connection server daemon (10.0.0.1:57740).
Feb 13 20:23:23.445191 sshd[4199]: Accepted publickey for core from 10.0.0.1 port 57740 ssh2: RSA SHA256:LaFXDQ5kJ4LcD/Es1CLb+6ve3k3017804ZdZMiXaUQg
Feb 13 20:23:23.446293 sshd[4199]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 20:23:23.450179 systemd-logind[1521]: New session 78 of user core.
Feb 13 20:23:23.460490 systemd[1]: Started session-78.scope - Session 78 of User core.
Feb 13 20:23:23.565462 sshd[4199]: pam_unix(sshd:session): session closed for user core
Feb 13 20:23:23.575437 systemd[1]: Started sshd@78-10.0.0.6:22-10.0.0.1:57746.service - OpenSSH per-connection server daemon (10.0.0.1:57746).
Feb 13 20:23:23.575792 systemd[1]: sshd@77-10.0.0.6:22-10.0.0.1:57740.service: Deactivated successfully.
Feb 13 20:23:23.578410 systemd[1]: session-78.scope: Deactivated successfully.
Feb 13 20:23:23.579115 systemd-logind[1521]: Session 78 logged out. Waiting for processes to exit.
Feb 13 20:23:23.579944 systemd-logind[1521]: Removed session 78.
Feb 13 20:23:23.607515 sshd[4211]: Accepted publickey for core from 10.0.0.1 port 57746 ssh2: RSA SHA256:LaFXDQ5kJ4LcD/Es1CLb+6ve3k3017804ZdZMiXaUQg
Feb 13 20:23:23.608746 sshd[4211]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 20:23:23.612163 systemd-logind[1521]: New session 79 of user core.
Feb 13 20:23:23.626563 systemd[1]: Started session-79.scope - Session 79 of User core.
Feb 13 20:23:23.894342 sshd[4211]: pam_unix(sshd:session): session closed for user core
Feb 13 20:23:23.906791 systemd[1]: Started sshd@79-10.0.0.6:22-10.0.0.1:57756.service - OpenSSH per-connection server daemon (10.0.0.1:57756).
Feb 13 20:23:23.907242 systemd[1]: sshd@78-10.0.0.6:22-10.0.0.1:57746.service: Deactivated successfully.
Feb 13 20:23:23.909755 systemd[1]: session-79.scope: Deactivated successfully.
Feb 13 20:23:23.910441 systemd-logind[1521]: Session 79 logged out. Waiting for processes to exit.
Feb 13 20:23:23.912146 systemd-logind[1521]: Removed session 79.
Feb 13 20:23:23.940875 sshd[4225]: Accepted publickey for core from 10.0.0.1 port 57756 ssh2: RSA SHA256:LaFXDQ5kJ4LcD/Es1CLb+6ve3k3017804ZdZMiXaUQg
Feb 13 20:23:23.941974 sshd[4225]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 20:23:23.945413 systemd-logind[1521]: New session 80 of user core.
Feb 13 20:23:23.960613 systemd[1]: Started session-80.scope - Session 80 of User core.
Feb 13 20:24:24.905107 kubelet[2679]: E0213 20:23:24.905073 2679 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 13 20:23:25.133022 sshd[4225]: pam_unix(sshd:session): session closed for user core
Feb 13 20:23:25.143516 systemd[1]: Started sshd@80-10.0.0.6:22-10.0.0.1:57766.service - OpenSSH per-connection server daemon (10.0.0.1:57766).
Feb 13 20:23:25.143915 systemd[1]: sshd@79-10.0.0.6:22-10.0.0.1:57756.service: Deactivated successfully.
Feb 13 20:23:25.146752 systemd[1]: session-80.scope: Deactivated successfully.
Feb 13 20:23:25.149441 systemd-logind[1521]: Session 80 logged out. Waiting for processes to exit.
Feb 13 20:23:25.151354 systemd-logind[1521]: Removed session 80.
Feb 13 20:23:25.180281 sshd[4245]: Accepted publickey for core from 10.0.0.1 port 57766 ssh2: RSA SHA256:LaFXDQ5kJ4LcD/Es1CLb+6ve3k3017804ZdZMiXaUQg
Feb 13 20:23:25.181594 sshd[4245]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 20:23:25.185170 systemd-logind[1521]: New session 81 of user core.
Feb 13 20:23:25.194555 systemd[1]: Started session-81.scope - Session 81 of User core.
Feb 13 20:23:25.394922 sshd[4245]: pam_unix(sshd:session): session closed for user core
Feb 13 20:23:25.402511 systemd[1]: Started sshd@81-10.0.0.6:22-10.0.0.1:57782.service - OpenSSH per-connection server daemon (10.0.0.1:57782).
Feb 13 20:23:25.403014 systemd[1]: sshd@80-10.0.0.6:22-10.0.0.1:57766.service: Deactivated successfully.
Feb 13 20:23:25.405190 systemd-logind[1521]: Session 81 logged out. Waiting for processes to exit.
Feb 13 20:23:25.405308 systemd[1]: session-81.scope: Deactivated successfully.
Feb 13 20:23:25.406957 systemd-logind[1521]: Removed session 81.
Feb 13 20:23:25.436410 sshd[4261]: Accepted publickey for core from 10.0.0.1 port 57782 ssh2: RSA SHA256:LaFXDQ5kJ4LcD/Es1CLb+6ve3k3017804ZdZMiXaUQg
Feb 13 20:23:25.437669 sshd[4261]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 20:23:25.441483 systemd-logind[1521]: New session 82 of user core.
Feb 13 20:23:25.451615 systemd[1]: Started session-82.scope - Session 82 of User core.
Feb 13 20:23:25.557346 sshd[4261]: pam_unix(sshd:session): session closed for user core
Feb 13 20:23:25.560017 systemd[1]: sshd@81-10.0.0.6:22-10.0.0.1:57782.service: Deactivated successfully.
Feb 13 20:23:25.562542 systemd-logind[1521]: Session 82 logged out. Waiting for processes to exit.
Feb 13 20:23:25.562720 systemd[1]: session-82.scope: Deactivated successfully.
Feb 13 20:23:25.564138 systemd-logind[1521]: Removed session 82.
Feb 13 20:23:26.029537 kubelet[2679]: E0213 20:23:26.029504 2679 kubelet.go:2900] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Feb 13 20:23:30.567444 systemd[1]: Started sshd@82-10.0.0.6:22-10.0.0.1:57790.service - OpenSSH per-connection server daemon (10.0.0.1:57790).
Feb 13 20:23:30.599577 sshd[4279]: Accepted publickey for core from 10.0.0.1 port 57790 ssh2: RSA SHA256:LaFXDQ5kJ4LcD/Es1CLb+6ve3k3017804ZdZMiXaUQg
Feb 13 20:23:30.600873 sshd[4279]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 20:23:30.604870 systemd-logind[1521]: New session 83 of user core.
Feb 13 20:23:30.615478 systemd[1]: Started session-83.scope - Session 83 of User core.
Feb 13 20:23:30.721164 sshd[4279]: pam_unix(sshd:session): session closed for user core
Feb 13 20:23:30.724343 systemd[1]: sshd@82-10.0.0.6:22-10.0.0.1:57790.service: Deactivated successfully.
Feb 13 20:23:30.726186 systemd-logind[1521]: Session 83 logged out. Waiting for processes to exit.
Feb 13 20:23:30.726265 systemd[1]: session-83.scope: Deactivated successfully.
Feb 13 20:23:30.727150 systemd-logind[1521]: Removed session 83.
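In the burst above (sessions 78 through 82, several within the same second), each TCP connection gets its own transient systemd unit named sshd@<counter>-<local address>:<port>-<peer address>:<port>.service, which is why a new instance can start while the previous one is still being deactivated; in this section the connection counter happens to run one behind the logind session number. A small Go sketch that decomposes one of these unit names, taken verbatim from the log above:

    // sshdunit.go - sketch: split a per-connection sshd unit name
    // into its connection counter, local endpoint, and peer endpoint.
    package main

    import (
    	"fmt"
    	"regexp"
    )

    func main() {
    	re := regexp.MustCompile(`^sshd@(\d+)-([\d.]+:\d+)-([\d.]+:\d+)\.service$`)
    	unit := "sshd@81-10.0.0.6:22-10.0.0.1:57782.service" // from the log above
    	if m := re.FindStringSubmatch(unit); m != nil {
    		fmt.Printf("connection #%s local=%s peer=%s\n", m[1], m[2], m[3])
    	}
    }

The climbing ephemeral peer ports (57740, 57746, 57756, 57766, 57782) confirm these are separate short-lived connections from the same client, not one multiplexed session.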
Feb 13 20:23:31.031506 kubelet[2679]: E0213 20:23:31.031459 2679 kubelet.go:2900] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Feb 13 20:23:31.905653 kubelet[2679]: E0213 20:23:31.905546 2679 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 13 20:23:31.906744 kubelet[2679]: E0213 20:23:31.906691 2679 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"install-cni-plugin\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/flannel/flannel-cni-plugin:v1.1.2\\\"\"" pod="kube-flannel/kube-flannel-ds-ftfqg" podUID="36c00cd4-9622-461d-ac3c-90892608fdc2"
Feb 13 20:23:32.905760 kubelet[2679]: E0213 20:23:32.905720 2679 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 13 20:23:35.733438 systemd[1]: Started sshd@83-10.0.0.6:22-10.0.0.1:60492.service - OpenSSH per-connection server daemon (10.0.0.1:60492).
Feb 13 20:23:35.765340 sshd[4295]: Accepted publickey for core from 10.0.0.1 port 60492 ssh2: RSA SHA256:LaFXDQ5kJ4LcD/Es1CLb+6ve3k3017804ZdZMiXaUQg
Feb 13 20:23:35.766861 sshd[4295]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 20:23:35.770886 systemd-logind[1521]: New session 84 of user core.
Feb 13 20:23:35.782437 systemd[1]: Started session-84.scope - Session 84 of User core.
Feb 13 20:23:35.885420 sshd[4295]: pam_unix(sshd:session): session closed for user core
Feb 13 20:23:35.889015 systemd[1]: sshd@83-10.0.0.6:22-10.0.0.1:60492.service: Deactivated successfully.
Feb 13 20:23:35.891132 systemd-logind[1521]: Session 84 logged out. Waiting for processes to exit.
Feb 13 20:23:35.891235 systemd[1]: session-84.scope: Deactivated successfully.
Feb 13 20:23:35.892404 systemd-logind[1521]: Removed session 84.
Feb 13 20:23:36.032077 kubelet[2679]: E0213 20:23:36.032041 2679 kubelet.go:2900] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Feb 13 20:23:40.895425 systemd[1]: Started sshd@84-10.0.0.6:22-10.0.0.1:60500.service - OpenSSH per-connection server daemon (10.0.0.1:60500).
Feb 13 20:23:40.927514 sshd[4311]: Accepted publickey for core from 10.0.0.1 port 60500 ssh2: RSA SHA256:LaFXDQ5kJ4LcD/Es1CLb+6ve3k3017804ZdZMiXaUQg
Feb 13 20:23:40.928698 sshd[4311]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 20:23:40.932080 systemd-logind[1521]: New session 85 of user core.
Feb 13 20:23:40.941478 systemd[1]: Started session-85.scope - Session 85 of User core.
Feb 13 20:23:41.033300 kubelet[2679]: E0213 20:23:41.033237 2679 kubelet.go:2900] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Feb 13 20:23:41.046412 sshd[4311]: pam_unix(sshd:session): session closed for user core
Feb 13 20:23:41.049749 systemd[1]: sshd@84-10.0.0.6:22-10.0.0.1:60500.service: Deactivated successfully.
Feb 13 20:23:41.051747 systemd[1]: session-85.scope: Deactivated successfully.
Feb 13 20:23:41.052201 systemd-logind[1521]: Session 85 logged out. Waiting for processes to exit.
Feb 13 20:23:41.053527 systemd-logind[1521]: Removed session 85.
Feb 13 20:23:42.905173 kubelet[2679]: E0213 20:23:42.905075 2679 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 13 20:23:45.905357 kubelet[2679]: E0213 20:23:45.905312 2679 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 13 20:23:45.906150 kubelet[2679]: E0213 20:23:45.906111 2679 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"install-cni-plugin\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/flannel/flannel-cni-plugin:v1.1.2\\\"\"" pod="kube-flannel/kube-flannel-ds-ftfqg" podUID="36c00cd4-9622-461d-ac3c-90892608fdc2"
Feb 13 20:23:46.034624 kubelet[2679]: E0213 20:23:46.034586 2679 kubelet.go:2900] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Feb 13 20:23:46.056437 systemd[1]: Started sshd@85-10.0.0.6:22-10.0.0.1:43036.service - OpenSSH per-connection server daemon (10.0.0.1:43036).
Feb 13 20:23:46.088155 sshd[4328]: Accepted publickey for core from 10.0.0.1 port 43036 ssh2: RSA SHA256:LaFXDQ5kJ4LcD/Es1CLb+6ve3k3017804ZdZMiXaUQg
Feb 13 20:23:46.089313 sshd[4328]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 20:23:46.093299 systemd-logind[1521]: New session 86 of user core.
Feb 13 20:23:46.102434 systemd[1]: Started session-86.scope - Session 86 of User core.
Feb 13 20:23:46.206931 sshd[4328]: pam_unix(sshd:session): session closed for user core
Feb 13 20:23:46.210056 systemd[1]: sshd@85-10.0.0.6:22-10.0.0.1:43036.service: Deactivated successfully.
Feb 13 20:23:46.212758 systemd[1]: session-86.scope: Deactivated successfully.
Feb 13 20:23:46.212937 systemd-logind[1521]: Session 86 logged out. Waiting for processes to exit.
Feb 13 20:23:46.214315 systemd-logind[1521]: Removed session 86.
Feb 13 20:23:51.035307 kubelet[2679]: E0213 20:23:51.035204 2679 kubelet.go:2900] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Feb 13 20:23:51.216483 systemd[1]: Started sshd@86-10.0.0.6:22-10.0.0.1:43048.service - OpenSSH per-connection server daemon (10.0.0.1:43048).
Feb 13 20:23:51.249003 sshd[4344]: Accepted publickey for core from 10.0.0.1 port 43048 ssh2: RSA SHA256:LaFXDQ5kJ4LcD/Es1CLb+6ve3k3017804ZdZMiXaUQg
Feb 13 20:23:51.250154 sshd[4344]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 20:23:51.253898 systemd-logind[1521]: New session 87 of user core.
Feb 13 20:23:51.264435 systemd[1]: Started session-87.scope - Session 87 of User core.
Feb 13 20:23:51.369565 sshd[4344]: pam_unix(sshd:session): session closed for user core
Feb 13 20:23:51.373270 systemd[1]: sshd@86-10.0.0.6:22-10.0.0.1:43048.service: Deactivated successfully.
Feb 13 20:23:51.375111 systemd-logind[1521]: Session 87 logged out. Waiting for processes to exit.
Feb 13 20:23:51.375162 systemd[1]: session-87.scope: Deactivated successfully.
Feb 13 20:23:51.377281 systemd-logind[1521]: Removed session 87.
Feb 13 20:23:56.035972 kubelet[2679]: E0213 20:23:56.035927 2679 kubelet.go:2900] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Feb 13 20:23:56.379552 systemd[1]: Started sshd@87-10.0.0.6:22-10.0.0.1:45502.service - OpenSSH per-connection server daemon (10.0.0.1:45502).
Feb 13 20:23:56.414894 sshd[4362]: Accepted publickey for core from 10.0.0.1 port 45502 ssh2: RSA SHA256:LaFXDQ5kJ4LcD/Es1CLb+6ve3k3017804ZdZMiXaUQg
Feb 13 20:23:56.416077 sshd[4362]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 20:23:56.419800 systemd-logind[1521]: New session 88 of user core.
Feb 13 20:23:56.427594 systemd[1]: Started session-88.scope - Session 88 of User core.
Feb 13 20:23:56.533655 sshd[4362]: pam_unix(sshd:session): session closed for user core
Feb 13 20:23:56.536706 systemd[1]: sshd@87-10.0.0.6:22-10.0.0.1:45502.service: Deactivated successfully.
Feb 13 20:23:56.538556 systemd-logind[1521]: Session 88 logged out. Waiting for processes to exit.
Feb 13 20:23:56.538643 systemd[1]: session-88.scope: Deactivated successfully.
Feb 13 20:23:56.539603 systemd-logind[1521]: Removed session 88.
Feb 13 20:23:56.905821 kubelet[2679]: E0213 20:23:56.905794 2679 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 13 20:23:57.905706 kubelet[2679]: E0213 20:23:57.905379 2679 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 13 20:23:57.906369 kubelet[2679]: E0213 20:23:57.906344 2679 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"install-cni-plugin\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/flannel/flannel-cni-plugin:v1.1.2\\\"\"" pod="kube-flannel/kube-flannel-ds-ftfqg" podUID="36c00cd4-9622-461d-ac3c-90892608fdc2"
Feb 13 20:24:01.036950 kubelet[2679]: E0213 20:24:01.036911 2679 kubelet.go:2900] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Feb 13 20:24:01.545444 systemd[1]: Started sshd@88-10.0.0.6:22-10.0.0.1:45506.service - OpenSSH per-connection server daemon (10.0.0.1:45506).
Feb 13 20:24:01.577344 sshd[4377]: Accepted publickey for core from 10.0.0.1 port 45506 ssh2: RSA SHA256:LaFXDQ5kJ4LcD/Es1CLb+6ve3k3017804ZdZMiXaUQg
Feb 13 20:24:01.578576 sshd[4377]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 20:24:01.582027 systemd-logind[1521]: New session 89 of user core.
Feb 13 20:24:01.589443 systemd[1]: Started session-89.scope - Session 89 of User core.
Feb 13 20:24:01.695466 sshd[4377]: pam_unix(sshd:session): session closed for user core
Feb 13 20:24:01.698870 systemd[1]: sshd@88-10.0.0.6:22-10.0.0.1:45506.service: Deactivated successfully.
Feb 13 20:24:01.700855 systemd-logind[1521]: Session 89 logged out. Waiting for processes to exit.
Feb 13 20:24:01.700954 systemd[1]: session-89.scope: Deactivated successfully.
Feb 13 20:24:01.701903 systemd-logind[1521]: Removed session 89.
Feb 13 20:24:06.038013 kubelet[2679]: E0213 20:24:06.037958 2679 kubelet.go:2900] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Feb 13 20:24:06.703427 systemd[1]: Started sshd@89-10.0.0.6:22-10.0.0.1:34726.service - OpenSSH per-connection server daemon (10.0.0.1:34726).
Feb 13 20:24:06.735199 sshd[4393]: Accepted publickey for core from 10.0.0.1 port 34726 ssh2: RSA SHA256:LaFXDQ5kJ4LcD/Es1CLb+6ve3k3017804ZdZMiXaUQg
Feb 13 20:24:06.736364 sshd[4393]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 20:24:06.740276 systemd-logind[1521]: New session 90 of user core.
Feb 13 20:24:06.754432 systemd[1]: Started session-90.scope - Session 90 of User core.
Feb 13 20:24:06.857658 sshd[4393]: pam_unix(sshd:session): session closed for user core
Feb 13 20:24:06.860777 systemd[1]: sshd@89-10.0.0.6:22-10.0.0.1:34726.service: Deactivated successfully.
Feb 13 20:24:06.862935 systemd-logind[1521]: Session 90 logged out. Waiting for processes to exit.
Feb 13 20:24:06.863068 systemd[1]: session-90.scope: Deactivated successfully.
Feb 13 20:24:06.864132 systemd-logind[1521]: Removed session 90.
Feb 13 20:24:11.038690 kubelet[2679]: E0213 20:24:11.038643 2679 kubelet.go:2900] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Feb 13 20:24:11.873471 systemd[1]: Started sshd@90-10.0.0.6:22-10.0.0.1:34736.service - OpenSSH per-connection server daemon (10.0.0.1:34736).
Feb 13 20:24:11.905061 kubelet[2679]: E0213 20:24:11.904862 2679 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 13 20:24:11.905840 kubelet[2679]: E0213 20:24:11.905807 2679 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"install-cni-plugin\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/flannel/flannel-cni-plugin:v1.1.2\\\"\"" pod="kube-flannel/kube-flannel-ds-ftfqg" podUID="36c00cd4-9622-461d-ac3c-90892608fdc2"
Feb 13 20:24:11.906228 sshd[4411]: Accepted publickey for core from 10.0.0.1 port 34736 ssh2: RSA SHA256:LaFXDQ5kJ4LcD/Es1CLb+6ve3k3017804ZdZMiXaUQg
Feb 13 20:24:11.907586 sshd[4411]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 20:24:11.911192 systemd-logind[1521]: New session 91 of user core.
Feb 13 20:24:11.922453 systemd[1]: Started session-91.scope - Session 91 of User core.
Feb 13 20:24:12.026837 sshd[4411]: pam_unix(sshd:session): session closed for user core
Feb 13 20:24:12.029444 systemd[1]: sshd@90-10.0.0.6:22-10.0.0.1:34736.service: Deactivated successfully.
Feb 13 20:24:12.031973 systemd-logind[1521]: Session 91 logged out. Waiting for processes to exit.
Feb 13 20:24:12.032393 systemd[1]: session-91.scope: Deactivated successfully.
Feb 13 20:24:12.033245 systemd-logind[1521]: Removed session 91.
Feb 13 20:24:16.039570 kubelet[2679]: E0213 20:24:16.039526 2679 kubelet.go:2900] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Feb 13 20:24:17.038431 systemd[1]: Started sshd@91-10.0.0.6:22-10.0.0.1:46814.service - OpenSSH per-connection server daemon (10.0.0.1:46814).
Feb 13 20:24:17.070489 sshd[4427]: Accepted publickey for core from 10.0.0.1 port 46814 ssh2: RSA SHA256:LaFXDQ5kJ4LcD/Es1CLb+6ve3k3017804ZdZMiXaUQg
Feb 13 20:24:17.071661 sshd[4427]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 20:24:17.074941 systemd-logind[1521]: New session 92 of user core.
Feb 13 20:24:17.081453 systemd[1]: Started session-92.scope - Session 92 of User core.
Feb 13 20:24:17.186227 sshd[4427]: pam_unix(sshd:session): session closed for user core
Feb 13 20:24:17.189332 systemd[1]: sshd@91-10.0.0.6:22-10.0.0.1:46814.service: Deactivated successfully.
Feb 13 20:24:17.191254 systemd[1]: session-92.scope: Deactivated successfully.
Feb 13 20:24:17.191261 systemd-logind[1521]: Session 92 logged out. Waiting for processes to exit.
Feb 13 20:24:17.192694 systemd-logind[1521]: Removed session 92.
Feb 13 20:24:21.040801 kubelet[2679]: E0213 20:24:21.040760 2679 kubelet.go:2900] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Feb 13 20:24:22.200431 systemd[1]: Started sshd@92-10.0.0.6:22-10.0.0.1:46816.service - OpenSSH per-connection server daemon (10.0.0.1:46816).
Feb 13 20:24:22.232458 sshd[4443]: Accepted publickey for core from 10.0.0.1 port 46816 ssh2: RSA SHA256:LaFXDQ5kJ4LcD/Es1CLb+6ve3k3017804ZdZMiXaUQg
Feb 13 20:24:22.233644 sshd[4443]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 20:24:22.237671 systemd-logind[1521]: New session 93 of user core.
Feb 13 20:24:22.243553 systemd[1]: Started session-93.scope - Session 93 of User core.
Feb 13 20:24:22.348838 sshd[4443]: pam_unix(sshd:session): session closed for user core
Feb 13 20:24:22.351942 systemd[1]: sshd@92-10.0.0.6:22-10.0.0.1:46816.service: Deactivated successfully.
Feb 13 20:24:22.353787 systemd-logind[1521]: Session 93 logged out. Waiting for processes to exit.
Feb 13 20:24:22.353870 systemd[1]: session-93.scope: Deactivated successfully.
Feb 13 20:24:22.355095 systemd-logind[1521]: Removed session 93.
Feb 13 20:24:24.905206 kubelet[2679]: E0213 20:24:24.905145 2679 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 13 20:24:24.905862 kubelet[2679]: E0213 20:24:24.905804 2679 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"install-cni-plugin\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/flannel/flannel-cni-plugin:v1.1.2\\\"\"" pod="kube-flannel/kube-flannel-ds-ftfqg" podUID="36c00cd4-9622-461d-ac3c-90892608fdc2"
Feb 13 20:24:26.042234 kubelet[2679]: E0213 20:24:26.042178 2679 kubelet.go:2900] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Feb 13 20:24:26.905962 kubelet[2679]: E0213 20:24:26.905892 2679 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 13 20:24:27.359434 systemd[1]: Started sshd@93-10.0.0.6:22-10.0.0.1:39680.service - OpenSSH per-connection server daemon (10.0.0.1:39680).
Feb 13 20:24:27.391442 sshd[4458]: Accepted publickey for core from 10.0.0.1 port 39680 ssh2: RSA SHA256:LaFXDQ5kJ4LcD/Es1CLb+6ve3k3017804ZdZMiXaUQg
Feb 13 20:24:27.392655 sshd[4458]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 20:24:27.396106 systemd-logind[1521]: New session 94 of user core.
Feb 13 20:24:27.404447 systemd[1]: Started session-94.scope - Session 94 of User core.
Feb 13 20:24:27.509001 sshd[4458]: pam_unix(sshd:session): session closed for user core
Feb 13 20:24:27.512200 systemd-logind[1521]: Session 94 logged out. Waiting for processes to exit.
Feb 13 20:24:27.512612 systemd[1]: sshd@93-10.0.0.6:22-10.0.0.1:39680.service: Deactivated successfully.
Feb 13 20:24:27.514461 systemd[1]: session-94.scope: Deactivated successfully.
Feb 13 20:24:27.515261 systemd-logind[1521]: Removed session 94.
Feb 13 20:24:31.043273 kubelet[2679]: E0213 20:24:31.043227 2679 kubelet.go:2900] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Feb 13 20:24:32.529446 systemd[1]: Started sshd@94-10.0.0.6:22-10.0.0.1:57176.service - OpenSSH per-connection server daemon (10.0.0.1:57176).
Feb 13 20:24:32.561301 sshd[4476]: Accepted publickey for core from 10.0.0.1 port 57176 ssh2: RSA SHA256:LaFXDQ5kJ4LcD/Es1CLb+6ve3k3017804ZdZMiXaUQg
Feb 13 20:24:32.562771 sshd[4476]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 20:24:32.566902 systemd-logind[1521]: New session 95 of user core.
Feb 13 20:24:32.577429 systemd[1]: Started session-95.scope - Session 95 of User core.
Feb 13 20:24:32.680992 sshd[4476]: pam_unix(sshd:session): session closed for user core
Feb 13 20:24:32.684348 systemd[1]: sshd@94-10.0.0.6:22-10.0.0.1:57176.service: Deactivated successfully.
Feb 13 20:24:32.686267 systemd-logind[1521]: Session 95 logged out. Waiting for processes to exit.
Feb 13 20:24:32.686310 systemd[1]: session-95.scope: Deactivated successfully.
Feb 13 20:24:32.687432 systemd-logind[1521]: Removed session 95.
Feb 13 20:24:36.044340 kubelet[2679]: E0213 20:24:36.044292 2679 kubelet.go:2900] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Feb 13 20:24:37.691426 systemd[1]: Started sshd@95-10.0.0.6:22-10.0.0.1:57188.service - OpenSSH per-connection server daemon (10.0.0.1:57188).
Feb 13 20:24:37.723288 sshd[4493]: Accepted publickey for core from 10.0.0.1 port 57188 ssh2: RSA SHA256:LaFXDQ5kJ4LcD/Es1CLb+6ve3k3017804ZdZMiXaUQg
Feb 13 20:24:37.724447 sshd[4493]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 20:24:37.728280 systemd-logind[1521]: New session 96 of user core.
Feb 13 20:24:37.732628 systemd[1]: Started session-96.scope - Session 96 of User core.
Feb 13 20:24:37.836109 sshd[4493]: pam_unix(sshd:session): session closed for user core
Feb 13 20:24:37.839252 systemd[1]: sshd@95-10.0.0.6:22-10.0.0.1:57188.service: Deactivated successfully.
Feb 13 20:24:37.841206 systemd-logind[1521]: Session 96 logged out. Waiting for processes to exit.
Feb 13 20:24:37.841285 systemd[1]: session-96.scope: Deactivated successfully.
Feb 13 20:24:37.842091 systemd-logind[1521]: Removed session 96.
Feb 13 20:24:37.905531 kubelet[2679]: E0213 20:24:37.905307 2679 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 13 20:24:37.906287 kubelet[2679]: E0213 20:24:37.906097 2679 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"install-cni-plugin\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/flannel/flannel-cni-plugin:v1.1.2\\\"\"" pod="kube-flannel/kube-flannel-ds-ftfqg" podUID="36c00cd4-9622-461d-ac3c-90892608fdc2"
Feb 13 20:24:41.045823 kubelet[2679]: E0213 20:24:41.045777 2679 kubelet.go:2900] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Feb 13 20:24:42.850442 systemd[1]: Started sshd@96-10.0.0.6:22-10.0.0.1:37396.service - OpenSSH per-connection server daemon (10.0.0.1:37396).
Feb 13 20:24:42.882646 sshd[4511]: Accepted publickey for core from 10.0.0.1 port 37396 ssh2: RSA SHA256:LaFXDQ5kJ4LcD/Es1CLb+6ve3k3017804ZdZMiXaUQg
Feb 13 20:24:42.883858 sshd[4511]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 20:24:42.888035 systemd-logind[1521]: New session 97 of user core.
Feb 13 20:24:42.895447 systemd[1]: Started session-97.scope - Session 97 of User core.
Feb 13 20:24:43.000261 sshd[4511]: pam_unix(sshd:session): session closed for user core
Feb 13 20:24:43.003404 systemd[1]: sshd@96-10.0.0.6:22-10.0.0.1:37396.service: Deactivated successfully.
Feb 13 20:24:43.005320 systemd[1]: session-97.scope: Deactivated successfully.
Feb 13 20:24:43.005348 systemd-logind[1521]: Session 97 logged out. Waiting for processes to exit.
Feb 13 20:24:43.006569 systemd-logind[1521]: Removed session 97.
Feb 13 20:24:46.046590 kubelet[2679]: E0213 20:24:46.046557 2679 kubelet.go:2900] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Feb 13 20:24:48.012438 systemd[1]: Started sshd@97-10.0.0.6:22-10.0.0.1:37402.service - OpenSSH per-connection server daemon (10.0.0.1:37402).
Feb 13 20:24:48.044081 sshd[4527]: Accepted publickey for core from 10.0.0.1 port 37402 ssh2: RSA SHA256:LaFXDQ5kJ4LcD/Es1CLb+6ve3k3017804ZdZMiXaUQg
Feb 13 20:24:48.045192 sshd[4527]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 20:24:48.049433 systemd-logind[1521]: New session 98 of user core.
Feb 13 20:24:48.061536 systemd[1]: Started session-98.scope - Session 98 of User core.
Feb 13 20:24:48.166416 sshd[4527]: pam_unix(sshd:session): session closed for user core
Feb 13 20:24:48.169178 systemd-logind[1521]: Session 98 logged out. Waiting for processes to exit.
Feb 13 20:24:48.169434 systemd[1]: sshd@97-10.0.0.6:22-10.0.0.1:37402.service: Deactivated successfully.
Feb 13 20:24:48.171776 systemd[1]: session-98.scope: Deactivated successfully.
Feb 13 20:24:48.172911 systemd-logind[1521]: Removed session 98.
Feb 13 20:24:49.906350 kubelet[2679]: E0213 20:24:49.906256 2679 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 13 20:24:50.904973 kubelet[2679]: E0213 20:24:50.904931 2679 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 13 20:24:50.905595 kubelet[2679]: E0213 20:24:50.905566 2679 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"install-cni-plugin\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/flannel/flannel-cni-plugin:v1.1.2\\\"\"" pod="kube-flannel/kube-flannel-ds-ftfqg" podUID="36c00cd4-9622-461d-ac3c-90892608fdc2"
Feb 13 20:24:51.047691 kubelet[2679]: E0213 20:24:51.047656 2679 kubelet.go:2900] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Feb 13 20:24:53.179423 systemd[1]: Started sshd@98-10.0.0.6:22-10.0.0.1:37224.service - OpenSSH per-connection server daemon (10.0.0.1:37224).
Feb 13 20:24:53.211527 sshd[4543]: Accepted publickey for core from 10.0.0.1 port 37224 ssh2: RSA SHA256:LaFXDQ5kJ4LcD/Es1CLb+6ve3k3017804ZdZMiXaUQg
Feb 13 20:24:53.212672 sshd[4543]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 20:24:53.216880 systemd-logind[1521]: New session 99 of user core.
Feb 13 20:24:53.223599 systemd[1]: Started session-99.scope - Session 99 of User core.
Feb 13 20:24:53.330462 sshd[4543]: pam_unix(sshd:session): session closed for user core
Feb 13 20:24:53.333629 systemd[1]: sshd@98-10.0.0.6:22-10.0.0.1:37224.service: Deactivated successfully.
Feb 13 20:24:53.335799 systemd-logind[1521]: Session 99 logged out. Waiting for processes to exit.
Feb 13 20:24:53.336245 systemd[1]: session-99.scope: Deactivated successfully.
Feb 13 20:24:53.337325 systemd-logind[1521]: Removed session 99.
Feb 13 20:24:56.048283 kubelet[2679]: E0213 20:24:56.048186 2679 kubelet.go:2900] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Feb 13 20:24:58.349427 systemd[1]: Started sshd@99-10.0.0.6:22-10.0.0.1:37240.service - OpenSSH per-connection server daemon (10.0.0.1:37240).
Feb 13 20:24:58.381446 sshd[4560]: Accepted publickey for core from 10.0.0.1 port 37240 ssh2: RSA SHA256:LaFXDQ5kJ4LcD/Es1CLb+6ve3k3017804ZdZMiXaUQg
Feb 13 20:24:58.382680 sshd[4560]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 20:24:58.386609 systemd-logind[1521]: New session 100 of user core.
Feb 13 20:24:58.398428 systemd[1]: Started session-100.scope - Session 100 of User core.
Feb 13 20:24:58.501703 sshd[4560]: pam_unix(sshd:session): session closed for user core
Feb 13 20:24:58.504251 systemd[1]: sshd@99-10.0.0.6:22-10.0.0.1:37240.service: Deactivated successfully.
Feb 13 20:24:58.506535 systemd-logind[1521]: Session 100 logged out. Waiting for processes to exit.
Feb 13 20:24:58.506706 systemd[1]: session-100.scope: Deactivated successfully.
Feb 13 20:24:58.507702 systemd-logind[1521]: Removed session 100.
Feb 13 20:25:01.049017 kubelet[2679]: E0213 20:25:01.048973 2679 kubelet.go:2900] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Feb 13 20:25:02.905455 kubelet[2679]: E0213 20:25:02.905410 2679 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 13 20:25:02.906077 kubelet[2679]: E0213 20:25:02.906034 2679 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"install-cni-plugin\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/flannel/flannel-cni-plugin:v1.1.2\\\"\"" pod="kube-flannel/kube-flannel-ds-ftfqg" podUID="36c00cd4-9622-461d-ac3c-90892608fdc2"
Feb 13 20:25:03.513420 systemd[1]: Started sshd@100-10.0.0.6:22-10.0.0.1:52250.service - OpenSSH per-connection server daemon (10.0.0.1:52250).
Feb 13 20:25:03.545512 sshd[4577]: Accepted publickey for core from 10.0.0.1 port 52250 ssh2: RSA SHA256:LaFXDQ5kJ4LcD/Es1CLb+6ve3k3017804ZdZMiXaUQg
Feb 13 20:25:03.546636 sshd[4577]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 20:25:03.550419 systemd-logind[1521]: New session 101 of user core.
Feb 13 20:25:03.564429 systemd[1]: Started session-101.scope - Session 101 of User core.
Feb 13 20:25:03.670896 sshd[4577]: pam_unix(sshd:session): session closed for user core
Feb 13 20:25:03.673911 systemd[1]: sshd@100-10.0.0.6:22-10.0.0.1:52250.service: Deactivated successfully.
Feb 13 20:25:03.675743 systemd[1]: session-101.scope: Deactivated successfully.
Feb 13 20:25:03.675769 systemd-logind[1521]: Session 101 logged out. Waiting for processes to exit.
Feb 13 20:25:03.677158 systemd-logind[1521]: Removed session 101.
Feb 13 20:25:06.050026 kubelet[2679]: E0213 20:25:06.049982 2679 kubelet.go:2900] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Feb 13 20:25:08.685422 systemd[1]: Started sshd@101-10.0.0.6:22-10.0.0.1:52254.service - OpenSSH per-connection server daemon (10.0.0.1:52254).
Feb 13 20:25:08.718198 sshd[4592]: Accepted publickey for core from 10.0.0.1 port 52254 ssh2: RSA SHA256:LaFXDQ5kJ4LcD/Es1CLb+6ve3k3017804ZdZMiXaUQg
Feb 13 20:25:08.719377 sshd[4592]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 20:25:08.722790 systemd-logind[1521]: New session 102 of user core.
Feb 13 20:25:08.733452 systemd[1]: Started session-102.scope - Session 102 of User core.
Feb 13 20:25:08.837603 sshd[4592]: pam_unix(sshd:session): session closed for user core
Feb 13 20:25:08.840884 systemd[1]: sshd@101-10.0.0.6:22-10.0.0.1:52254.service: Deactivated successfully.
Feb 13 20:25:08.842774 systemd-logind[1521]: Session 102 logged out. Waiting for processes to exit.
Feb 13 20:25:08.842816 systemd[1]: session-102.scope: Deactivated successfully.
Feb 13 20:25:08.844129 systemd-logind[1521]: Removed session 102.
Feb 13 20:25:11.051442 kubelet[2679]: E0213 20:25:11.051392 2679 kubelet.go:2900] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Feb 13 20:25:12.905978 kubelet[2679]: E0213 20:25:12.905927 2679 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 13 20:25:13.852423 systemd[1]: Started sshd@102-10.0.0.6:22-10.0.0.1:43090.service - OpenSSH per-connection server daemon (10.0.0.1:43090).
Feb 13 20:25:13.884398 sshd[4609]: Accepted publickey for core from 10.0.0.1 port 43090 ssh2: RSA SHA256:LaFXDQ5kJ4LcD/Es1CLb+6ve3k3017804ZdZMiXaUQg
Feb 13 20:25:13.885561 sshd[4609]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 20:25:13.889657 systemd-logind[1521]: New session 103 of user core.
Feb 13 20:25:13.900444 systemd[1]: Started session-103.scope - Session 103 of User core.
Feb 13 20:25:14.006200 sshd[4609]: pam_unix(sshd:session): session closed for user core
Feb 13 20:25:14.008795 systemd[1]: sshd@102-10.0.0.6:22-10.0.0.1:43090.service: Deactivated successfully.
Feb 13 20:25:14.011156 systemd-logind[1521]: Session 103 logged out. Waiting for processes to exit.
Feb 13 20:25:14.011324 systemd[1]: session-103.scope: Deactivated successfully.
Feb 13 20:25:14.012527 systemd-logind[1521]: Removed session 103.
Feb 13 20:25:16.052295 kubelet[2679]: E0213 20:25:16.052242 2679 kubelet.go:2900] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Feb 13 20:25:16.905041 kubelet[2679]: E0213 20:25:16.904928 2679 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 13 20:25:16.905795 kubelet[2679]: E0213 20:25:16.905520 2679 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"install-cni-plugin\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/flannel/flannel-cni-plugin:v1.1.2\\\"\"" pod="kube-flannel/kube-flannel-ds-ftfqg" podUID="36c00cd4-9622-461d-ac3c-90892608fdc2"
Feb 13 20:25:17.905190 kubelet[2679]: E0213 20:25:17.905097 2679 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 13 20:25:19.017424 systemd[1]: Started sshd@103-10.0.0.6:22-10.0.0.1:43098.service - OpenSSH per-connection server daemon (10.0.0.1:43098).
Feb 13 20:25:19.049005 sshd[4624]: Accepted publickey for core from 10.0.0.1 port 43098 ssh2: RSA SHA256:LaFXDQ5kJ4LcD/Es1CLb+6ve3k3017804ZdZMiXaUQg
Feb 13 20:25:19.050138 sshd[4624]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 20:25:19.054135 systemd-logind[1521]: New session 104 of user core.
Feb 13 20:25:19.062432 systemd[1]: Started session-104.scope - Session 104 of User core.
Feb 13 20:25:19.167089 sshd[4624]: pam_unix(sshd:session): session closed for user core
Feb 13 20:25:19.169558 systemd[1]: sshd@103-10.0.0.6:22-10.0.0.1:43098.service: Deactivated successfully.
Feb 13 20:25:19.172510 systemd-logind[1521]: Session 104 logged out. Waiting for processes to exit.
Feb 13 20:25:19.172937 systemd[1]: session-104.scope: Deactivated successfully.
Feb 13 20:25:19.174018 systemd-logind[1521]: Removed session 104.
Feb 13 20:25:21.053090 kubelet[2679]: E0213 20:25:21.053001 2679 kubelet.go:2900] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Feb 13 20:25:24.179438 systemd[1]: Started sshd@104-10.0.0.6:22-10.0.0.1:41052.service - OpenSSH per-connection server daemon (10.0.0.1:41052).
Feb 13 20:25:24.211579 sshd[4640]: Accepted publickey for core from 10.0.0.1 port 41052 ssh2: RSA SHA256:LaFXDQ5kJ4LcD/Es1CLb+6ve3k3017804ZdZMiXaUQg
Feb 13 20:25:24.212703 sshd[4640]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 20:25:24.216342 systemd-logind[1521]: New session 105 of user core.
Feb 13 20:25:24.222427 systemd[1]: Started session-105.scope - Session 105 of User core.
Feb 13 20:25:24.329135 sshd[4640]: pam_unix(sshd:session): session closed for user core
Feb 13 20:25:24.332811 systemd[1]: sshd@104-10.0.0.6:22-10.0.0.1:41052.service: Deactivated successfully.
Feb 13 20:25:24.334678 systemd-logind[1521]: Session 105 logged out. Waiting for processes to exit.
Feb 13 20:25:24.334745 systemd[1]: session-105.scope: Deactivated successfully.
Feb 13 20:25:24.336144 systemd-logind[1521]: Removed session 105.
Feb 13 20:25:26.054373 kubelet[2679]: E0213 20:25:26.054335 2679 kubelet.go:2900] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Feb 13 20:25:28.905768 kubelet[2679]: E0213 20:25:28.905725 2679 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 13 20:25:28.906377 kubelet[2679]: E0213 20:25:28.906335 2679 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"install-cni-plugin\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/flannel/flannel-cni-plugin:v1.1.2\\\"\"" pod="kube-flannel/kube-flannel-ds-ftfqg" podUID="36c00cd4-9622-461d-ac3c-90892608fdc2"
Feb 13 20:25:29.354424 systemd[1]: Started sshd@105-10.0.0.6:22-10.0.0.1:41054.service - OpenSSH per-connection server daemon (10.0.0.1:41054).
Feb 13 20:25:29.386518 sshd[4655]: Accepted publickey for core from 10.0.0.1 port 41054 ssh2: RSA SHA256:LaFXDQ5kJ4LcD/Es1CLb+6ve3k3017804ZdZMiXaUQg
Feb 13 20:25:29.387635 sshd[4655]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 20:25:29.391276 systemd-logind[1521]: New session 106 of user core.
Feb 13 20:25:29.400431 systemd[1]: Started session-106.scope - Session 106 of User core.
Feb 13 20:25:29.504890 sshd[4655]: pam_unix(sshd:session): session closed for user core
Feb 13 20:25:29.508267 systemd[1]: sshd@105-10.0.0.6:22-10.0.0.1:41054.service: Deactivated successfully.
Feb 13 20:25:29.510259 systemd-logind[1521]: Session 106 logged out. Waiting for processes to exit.
Feb 13 20:25:29.510265 systemd[1]: session-106.scope: Deactivated successfully.
Feb 13 20:25:29.511615 systemd-logind[1521]: Removed session 106.
Feb 13 20:25:31.055650 kubelet[2679]: E0213 20:25:31.055609 2679 kubelet.go:2900] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Feb 13 20:25:34.514430 systemd[1]: Started sshd@106-10.0.0.6:22-10.0.0.1:51946.service - OpenSSH per-connection server daemon (10.0.0.1:51946).
Feb 13 20:25:34.546670 sshd[4671]: Accepted publickey for core from 10.0.0.1 port 51946 ssh2: RSA SHA256:LaFXDQ5kJ4LcD/Es1CLb+6ve3k3017804ZdZMiXaUQg
Feb 13 20:25:34.547800 sshd[4671]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 20:25:34.551194 systemd-logind[1521]: New session 107 of user core.
Feb 13 20:25:34.560483 systemd[1]: Started session-107.scope - Session 107 of User core.
Feb 13 20:25:34.665470 sshd[4671]: pam_unix(sshd:session): session closed for user core
Feb 13 20:25:34.668795 systemd[1]: sshd@106-10.0.0.6:22-10.0.0.1:51946.service: Deactivated successfully.
Feb 13 20:25:34.670825 systemd[1]: session-107.scope: Deactivated successfully.
Feb 13 20:25:34.670835 systemd-logind[1521]: Session 107 logged out. Waiting for processes to exit.
Feb 13 20:25:34.671962 systemd-logind[1521]: Removed session 107.
Feb 13 20:25:36.056471 kubelet[2679]: E0213 20:25:36.056435 2679 kubelet.go:2900] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Feb 13 20:25:39.685425 systemd[1]: Started sshd@107-10.0.0.6:22-10.0.0.1:51952.service - OpenSSH per-connection server daemon (10.0.0.1:51952).
Feb 13 20:25:39.718055 sshd[4687]: Accepted publickey for core from 10.0.0.1 port 51952 ssh2: RSA SHA256:LaFXDQ5kJ4LcD/Es1CLb+6ve3k3017804ZdZMiXaUQg
Feb 13 20:25:39.719268 sshd[4687]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 20:25:39.725758 systemd-logind[1521]: New session 108 of user core.
Feb 13 20:25:39.732437 systemd[1]: Started session-108.scope - Session 108 of User core.
Feb 13 20:25:39.843410 sshd[4687]: pam_unix(sshd:session): session closed for user core
Feb 13 20:25:39.845841 systemd[1]: sshd@107-10.0.0.6:22-10.0.0.1:51952.service: Deactivated successfully.
Feb 13 20:25:39.850191 systemd[1]: session-108.scope: Deactivated successfully.
Feb 13 20:25:39.850572 systemd-logind[1521]: Session 108 logged out. Waiting for processes to exit.
Feb 13 20:25:39.851464 systemd-logind[1521]: Removed session 108.
Feb 13 20:25:41.057092 kubelet[2679]: E0213 20:25:41.057046 2679 kubelet.go:2900] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Feb 13 20:25:43.905820 kubelet[2679]: E0213 20:25:43.905472 2679 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 13 20:25:43.906476 kubelet[2679]: E0213 20:25:43.906263 2679 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"install-cni-plugin\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/flannel/flannel-cni-plugin:v1.1.2\\\"\"" pod="kube-flannel/kube-flannel-ds-ftfqg" podUID="36c00cd4-9622-461d-ac3c-90892608fdc2"
Feb 13 20:25:44.855441 systemd[1]: Started sshd@108-10.0.0.6:22-10.0.0.1:37852.service - OpenSSH per-connection server daemon (10.0.0.1:37852).
Feb 13 20:25:44.887699 sshd[4704]: Accepted publickey for core from 10.0.0.1 port 37852 ssh2: RSA SHA256:LaFXDQ5kJ4LcD/Es1CLb+6ve3k3017804ZdZMiXaUQg
Feb 13 20:25:44.888856 sshd[4704]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 20:25:44.892199 systemd-logind[1521]: New session 109 of user core.
Feb 13 20:25:44.898507 systemd[1]: Started session-109.scope - Session 109 of User core.
Feb 13 20:25:45.002435 sshd[4704]: pam_unix(sshd:session): session closed for user core
Feb 13 20:25:45.005022 systemd[1]: sshd@108-10.0.0.6:22-10.0.0.1:37852.service: Deactivated successfully.
Feb 13 20:25:45.007400 systemd-logind[1521]: Session 109 logged out. Waiting for processes to exit.
Feb 13 20:25:45.007558 systemd[1]: session-109.scope: Deactivated successfully.
Feb 13 20:25:45.008848 systemd-logind[1521]: Removed session 109.
Feb 13 20:25:46.057934 kubelet[2679]: E0213 20:25:46.057886 2679 kubelet.go:2900] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Feb 13 20:25:50.019526 systemd[1]: Started sshd@109-10.0.0.6:22-10.0.0.1:37862.service - OpenSSH per-connection server daemon (10.0.0.1:37862).
Feb 13 20:25:50.052034 sshd[4720]: Accepted publickey for core from 10.0.0.1 port 37862 ssh2: RSA SHA256:LaFXDQ5kJ4LcD/Es1CLb+6ve3k3017804ZdZMiXaUQg
Feb 13 20:25:50.053200 sshd[4720]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 20:25:50.056704 systemd-logind[1521]: New session 110 of user core.
Feb 13 20:25:50.069425 systemd[1]: Started session-110.scope - Session 110 of User core.
Feb 13 20:25:50.177731 sshd[4720]: pam_unix(sshd:session): session closed for user core
Feb 13 20:25:50.181149 systemd[1]: sshd@109-10.0.0.6:22-10.0.0.1:37862.service: Deactivated successfully.
Feb 13 20:25:50.182952 systemd-logind[1521]: Session 110 logged out. Waiting for processes to exit.
Feb 13 20:25:50.182988 systemd[1]: session-110.scope: Deactivated successfully.
Feb 13 20:25:50.184554 systemd-logind[1521]: Removed session 110.
Feb 13 20:25:51.059294 kubelet[2679]: E0213 20:25:51.059255 2679 kubelet.go:2900] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Feb 13 20:25:54.904730 kubelet[2679]: E0213 20:25:54.904687 2679 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 13 20:25:54.905341 kubelet[2679]: E0213 20:25:54.905295 2679 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"install-cni-plugin\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/flannel/flannel-cni-plugin:v1.1.2\\\"\"" pod="kube-flannel/kube-flannel-ds-ftfqg" podUID="36c00cd4-9622-461d-ac3c-90892608fdc2"
Feb 13 20:25:55.192532 systemd[1]: Started sshd@110-10.0.0.6:22-10.0.0.1:39450.service - OpenSSH per-connection server daemon (10.0.0.1:39450).
Feb 13 20:25:55.224465 sshd[4735]: Accepted publickey for core from 10.0.0.1 port 39450 ssh2: RSA SHA256:LaFXDQ5kJ4LcD/Es1CLb+6ve3k3017804ZdZMiXaUQg
Feb 13 20:25:55.225705 sshd[4735]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 20:25:55.229192 systemd-logind[1521]: New session 111 of user core.
Feb 13 20:25:55.241438 systemd[1]: Started session-111.scope - Session 111 of User core.
Feb 13 20:25:55.347617 sshd[4735]: pam_unix(sshd:session): session closed for user core Feb 13 20:25:55.350645 systemd[1]: sshd@110-10.0.0.6:22-10.0.0.1:39450.service: Deactivated successfully. Feb 13 20:25:55.353825 systemd[1]: session-111.scope: Deactivated successfully. Feb 13 20:25:55.354450 systemd-logind[1521]: Session 111 logged out. Waiting for processes to exit. Feb 13 20:25:55.355195 systemd-logind[1521]: Removed session 111. Feb 13 20:25:55.905918 kubelet[2679]: E0213 20:25:55.905872 2679 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 20:25:56.059908 kubelet[2679]: E0213 20:25:56.059871 2679 kubelet.go:2900] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Feb 13 20:26:00.362428 systemd[1]: Started sshd@111-10.0.0.6:22-10.0.0.1:39466.service - OpenSSH per-connection server daemon (10.0.0.1:39466). Feb 13 20:26:00.394659 sshd[4752]: Accepted publickey for core from 10.0.0.1 port 39466 ssh2: RSA SHA256:LaFXDQ5kJ4LcD/Es1CLb+6ve3k3017804ZdZMiXaUQg Feb 13 20:26:00.396073 sshd[4752]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 20:26:00.399626 systemd-logind[1521]: New session 112 of user core. Feb 13 20:26:00.408437 systemd[1]: Started session-112.scope - Session 112 of User core. Feb 13 20:26:00.514110 sshd[4752]: pam_unix(sshd:session): session closed for user core Feb 13 20:26:00.517286 systemd[1]: sshd@111-10.0.0.6:22-10.0.0.1:39466.service: Deactivated successfully. Feb 13 20:26:00.519265 systemd-logind[1521]: Session 112 logged out. Waiting for processes to exit. Feb 13 20:26:00.519420 systemd[1]: session-112.scope: Deactivated successfully. Feb 13 20:26:00.520823 systemd-logind[1521]: Removed session 112. Feb 13 20:26:01.061139 kubelet[2679]: E0213 20:26:01.061083 2679 kubelet.go:2900] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Feb 13 20:26:02.905398 kubelet[2679]: E0213 20:26:02.905358 2679 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 20:26:05.529545 systemd[1]: Started sshd@112-10.0.0.6:22-10.0.0.1:40246.service - OpenSSH per-connection server daemon (10.0.0.1:40246). Feb 13 20:26:05.561793 sshd[4767]: Accepted publickey for core from 10.0.0.1 port 40246 ssh2: RSA SHA256:LaFXDQ5kJ4LcD/Es1CLb+6ve3k3017804ZdZMiXaUQg Feb 13 20:26:05.562942 sshd[4767]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 20:26:05.566275 systemd-logind[1521]: New session 113 of user core. Feb 13 20:26:05.576436 systemd[1]: Started session-113.scope - Session 113 of User core. Feb 13 20:26:05.684574 sshd[4767]: pam_unix(sshd:session): session closed for user core Feb 13 20:26:05.687705 systemd[1]: sshd@112-10.0.0.6:22-10.0.0.1:40246.service: Deactivated successfully. Feb 13 20:26:05.690429 systemd-logind[1521]: Session 113 logged out. Waiting for processes to exit. Feb 13 20:26:05.690517 systemd[1]: session-113.scope: Deactivated successfully. Feb 13 20:26:05.691458 systemd-logind[1521]: Removed session 113. 
Feb 13 20:26:06.062594 kubelet[2679]: E0213 20:26:06.062552 2679 kubelet.go:2900] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Feb 13 20:26:06.905228 kubelet[2679]: E0213 20:26:06.905168 2679 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 20:26:06.908471 kubelet[2679]: E0213 20:26:06.908416 2679 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"install-cni-plugin\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/flannel/flannel-cni-plugin:v1.1.2\\\"\"" pod="kube-flannel/kube-flannel-ds-ftfqg" podUID="36c00cd4-9622-461d-ac3c-90892608fdc2" Feb 13 20:26:10.699439 systemd[1]: Started sshd@113-10.0.0.6:22-10.0.0.1:40250.service - OpenSSH per-connection server daemon (10.0.0.1:40250). Feb 13 20:26:10.731420 sshd[4782]: Accepted publickey for core from 10.0.0.1 port 40250 ssh2: RSA SHA256:LaFXDQ5kJ4LcD/Es1CLb+6ve3k3017804ZdZMiXaUQg Feb 13 20:26:10.732518 sshd[4782]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 20:26:10.736807 systemd-logind[1521]: New session 114 of user core. Feb 13 20:26:10.743530 systemd[1]: Started session-114.scope - Session 114 of User core. Feb 13 20:26:10.846820 sshd[4782]: pam_unix(sshd:session): session closed for user core Feb 13 20:26:10.849788 systemd[1]: sshd@113-10.0.0.6:22-10.0.0.1:40250.service: Deactivated successfully. Feb 13 20:26:10.851760 systemd-logind[1521]: Session 114 logged out. Waiting for processes to exit. Feb 13 20:26:10.851761 systemd[1]: session-114.scope: Deactivated successfully. Feb 13 20:26:10.852782 systemd-logind[1521]: Removed session 114. Feb 13 20:26:11.064052 kubelet[2679]: E0213 20:26:11.064001 2679 kubelet.go:2900] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Feb 13 20:26:15.861425 systemd[1]: Started sshd@114-10.0.0.6:22-10.0.0.1:57470.service - OpenSSH per-connection server daemon (10.0.0.1:57470). Feb 13 20:26:15.894309 sshd[4800]: Accepted publickey for core from 10.0.0.1 port 57470 ssh2: RSA SHA256:LaFXDQ5kJ4LcD/Es1CLb+6ve3k3017804ZdZMiXaUQg Feb 13 20:26:15.895450 sshd[4800]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 20:26:15.898754 systemd-logind[1521]: New session 115 of user core. Feb 13 20:26:15.906707 systemd[1]: Started session-115.scope - Session 115 of User core. Feb 13 20:26:16.010177 sshd[4800]: pam_unix(sshd:session): session closed for user core Feb 13 20:26:16.012684 systemd[1]: sshd@114-10.0.0.6:22-10.0.0.1:57470.service: Deactivated successfully. Feb 13 20:26:16.014969 systemd-logind[1521]: Session 115 logged out. Waiting for processes to exit. Feb 13 20:26:16.015136 systemd[1]: session-115.scope: Deactivated successfully. Feb 13 20:26:16.016576 systemd-logind[1521]: Removed session 115. 
Feb 13 20:26:16.065104 kubelet[2679]: E0213 20:26:16.065061 2679 kubelet.go:2900] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Feb 13 20:26:20.905002 kubelet[2679]: E0213 20:26:20.904950 2679 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 20:26:20.905566 kubelet[2679]: E0213 20:26:20.905492 2679 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"install-cni-plugin\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/flannel/flannel-cni-plugin:v1.1.2\\\"\"" pod="kube-flannel/kube-flannel-ds-ftfqg" podUID="36c00cd4-9622-461d-ac3c-90892608fdc2" Feb 13 20:26:21.024424 systemd[1]: Started sshd@115-10.0.0.6:22-10.0.0.1:57484.service - OpenSSH per-connection server daemon (10.0.0.1:57484). Feb 13 20:26:21.056268 sshd[4817]: Accepted publickey for core from 10.0.0.1 port 57484 ssh2: RSA SHA256:LaFXDQ5kJ4LcD/Es1CLb+6ve3k3017804ZdZMiXaUQg Feb 13 20:26:21.057461 sshd[4817]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 20:26:21.061269 systemd-logind[1521]: New session 116 of user core. Feb 13 20:26:21.066511 kubelet[2679]: E0213 20:26:21.066470 2679 kubelet.go:2900] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Feb 13 20:26:21.075463 systemd[1]: Started session-116.scope - Session 116 of User core. Feb 13 20:26:21.179544 sshd[4817]: pam_unix(sshd:session): session closed for user core Feb 13 20:26:21.182204 systemd[1]: sshd@115-10.0.0.6:22-10.0.0.1:57484.service: Deactivated successfully. Feb 13 20:26:21.184838 systemd-logind[1521]: Session 116 logged out. Waiting for processes to exit. Feb 13 20:26:21.185526 systemd[1]: session-116.scope: Deactivated successfully. Feb 13 20:26:21.186386 systemd-logind[1521]: Removed session 116. Feb 13 20:26:26.067265 kubelet[2679]: E0213 20:26:26.067193 2679 kubelet.go:2900] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Feb 13 20:26:26.190505 systemd[1]: Started sshd@116-10.0.0.6:22-10.0.0.1:47468.service - OpenSSH per-connection server daemon (10.0.0.1:47468). Feb 13 20:26:26.223949 sshd[4832]: Accepted publickey for core from 10.0.0.1 port 47468 ssh2: RSA SHA256:LaFXDQ5kJ4LcD/Es1CLb+6ve3k3017804ZdZMiXaUQg Feb 13 20:26:26.225318 sshd[4832]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 20:26:26.229028 systemd-logind[1521]: New session 117 of user core. Feb 13 20:26:26.240531 systemd[1]: Started session-117.scope - Session 117 of User core. Feb 13 20:26:26.356242 sshd[4832]: pam_unix(sshd:session): session closed for user core Feb 13 20:26:26.359580 systemd[1]: sshd@116-10.0.0.6:22-10.0.0.1:47468.service: Deactivated successfully. Feb 13 20:26:26.361457 systemd-logind[1521]: Session 117 logged out. Waiting for processes to exit. Feb 13 20:26:26.361553 systemd[1]: session-117.scope: Deactivated successfully. Feb 13 20:26:26.362800 systemd-logind[1521]: Removed session 117.