Feb 13 20:50:00.941280 kernel: Booting Linux on physical CPU 0x0000000000 [0x413fd0c1]
Feb 13 20:50:00.941304 kernel: Linux version 6.6.74-flatcar (build@pony-truck.infra.kinvolk.io) (aarch64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p17) 13.3.1 20240614, GNU ld (Gentoo 2.42 p3) 2.42.0) #1 SMP PREEMPT Thu Feb 13 18:13:29 -00 2025
Feb 13 20:50:00.941314 kernel: KASLR enabled
Feb 13 20:50:00.941321 kernel: efi: EFI v2.7 by EDK II
Feb 13 20:50:00.941327 kernel: efi: SMBIOS 3.0=0xdced0000 MEMATTR=0xdba86018 ACPI 2.0=0xd9710018 RNG=0xd971e498 MEMRESERVE=0xd9b43d18
Feb 13 20:50:00.941334 kernel: random: crng init done
Feb 13 20:50:00.941342 kernel: ACPI: Early table checksum verification disabled
Feb 13 20:50:00.941349 kernel: ACPI: RSDP 0x00000000D9710018 000024 (v02 BOCHS )
Feb 13 20:50:00.941356 kernel: ACPI: XSDT 0x00000000D971FE98 000064 (v01 BOCHS BXPC 00000001 01000013)
Feb 13 20:50:00.941365 kernel: ACPI: FACP 0x00000000D971FA98 000114 (v06 BOCHS BXPC 00000001 BXPC 00000001)
Feb 13 20:50:00.941372 kernel: ACPI: DSDT 0x00000000D9717518 0014A2 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Feb 13 20:50:00.941379 kernel: ACPI: APIC 0x00000000D971FC18 0001A8 (v04 BOCHS BXPC 00000001 BXPC 00000001)
Feb 13 20:50:00.941386 kernel: ACPI: PPTT 0x00000000D971D898 00009C (v02 BOCHS BXPC 00000001 BXPC 00000001)
Feb 13 20:50:00.941393 kernel: ACPI: GTDT 0x00000000D971E818 000060 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Feb 13 20:50:00.941402 kernel: ACPI: MCFG 0x00000000D971E918 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001)
Feb 13 20:50:00.941411 kernel: ACPI: SPCR 0x00000000D971FF98 000050 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Feb 13 20:50:00.941419 kernel: ACPI: DBG2 0x00000000D971E418 000057 (v00 BOCHS BXPC 00000001 BXPC 00000001)
Feb 13 20:50:00.941426 kernel: ACPI: IORT 0x00000000D971E718 000080 (v03 BOCHS BXPC 00000001 BXPC 00000001)
Feb 13 20:50:00.941434 kernel: ACPI: SPCR: console: pl011,mmio,0x9000000,9600
Feb 13 20:50:00.941441 kernel: NUMA: Failed to initialise from firmware
Feb 13 20:50:00.941449 kernel: NUMA: Faking a node at [mem 0x0000000040000000-0x00000000dcffffff]
Feb 13 20:50:00.941456 kernel: NUMA: NODE_DATA [mem 0xdc957800-0xdc95cfff]
Feb 13 20:50:00.941464 kernel: Zone ranges:
Feb 13 20:50:00.941471 kernel: DMA [mem 0x0000000040000000-0x00000000dcffffff]
Feb 13 20:50:00.941479 kernel: DMA32 empty
Feb 13 20:50:00.941507 kernel: Normal empty
Feb 13 20:50:00.941514 kernel: Movable zone start for each node
Feb 13 20:50:00.941520 kernel: Early memory node ranges
Feb 13 20:50:00.941527 kernel: node 0: [mem 0x0000000040000000-0x00000000d976ffff]
Feb 13 20:50:00.941533 kernel: node 0: [mem 0x00000000d9770000-0x00000000d9b3ffff]
Feb 13 20:50:00.941539 kernel: node 0: [mem 0x00000000d9b40000-0x00000000dce1ffff]
Feb 13 20:50:00.941545 kernel: node 0: [mem 0x00000000dce20000-0x00000000dceaffff]
Feb 13 20:50:00.941552 kernel: node 0: [mem 0x00000000dceb0000-0x00000000dcebffff]
Feb 13 20:50:00.941558 kernel: node 0: [mem 0x00000000dcec0000-0x00000000dcfdffff]
Feb 13 20:50:00.941564 kernel: node 0: [mem 0x00000000dcfe0000-0x00000000dcffffff]
Feb 13 20:50:00.941570 kernel: Initmem setup node 0 [mem 0x0000000040000000-0x00000000dcffffff]
Feb 13 20:50:00.941577 kernel: On node 0, zone DMA: 12288 pages in unavailable ranges
Feb 13 20:50:00.941585 kernel: psci: probing for conduit method from ACPI.
Feb 13 20:50:00.941591 kernel: psci: PSCIv1.1 detected in firmware.
Feb 13 20:50:00.941598 kernel: psci: Using standard PSCI v0.2 function IDs
Feb 13 20:50:00.941607 kernel: psci: Trusted OS migration not required
Feb 13 20:50:00.941614 kernel: psci: SMC Calling Convention v1.1
Feb 13 20:50:00.941621 kernel: smccc: KVM: hypervisor services detected (0x00000000 0x00000000 0x00000000 0x00000003)
Feb 13 20:50:00.941629 kernel: percpu: Embedded 31 pages/cpu s86696 r8192 d32088 u126976
Feb 13 20:50:00.941636 kernel: pcpu-alloc: s86696 r8192 d32088 u126976 alloc=31*4096
Feb 13 20:50:00.941643 kernel: pcpu-alloc: [0] 0 [0] 1 [0] 2 [0] 3
Feb 13 20:50:00.941650 kernel: Detected PIPT I-cache on CPU0
Feb 13 20:50:00.941656 kernel: CPU features: detected: GIC system register CPU interface
Feb 13 20:50:00.941663 kernel: CPU features: detected: Hardware dirty bit management
Feb 13 20:50:00.941670 kernel: CPU features: detected: Spectre-v4
Feb 13 20:50:00.941676 kernel: CPU features: detected: Spectre-BHB
Feb 13 20:50:00.941683 kernel: CPU features: kernel page table isolation forced ON by KASLR
Feb 13 20:50:00.941690 kernel: CPU features: detected: Kernel page table isolation (KPTI)
Feb 13 20:50:00.941698 kernel: CPU features: detected: ARM erratum 1418040
Feb 13 20:50:00.941705 kernel: CPU features: detected: SSBS not fully self-synchronizing
Feb 13 20:50:00.941712 kernel: alternatives: applying boot alternatives
Feb 13 20:50:00.941719 kernel: Kernel command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected acpi=force verity.usrhash=c15c751c06cfb933aa98417326b93d899c08a83ce060a940cd01082629c201a7
Feb 13 20:50:00.941726 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Feb 13 20:50:00.941733 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Feb 13 20:50:00.941740 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Feb 13 20:50:00.941747 kernel: Fallback order for Node 0: 0
Feb 13 20:50:00.941754 kernel: Built 1 zonelists, mobility grouping on. Total pages: 633024
Feb 13 20:50:00.941761 kernel: Policy zone: DMA
Feb 13 20:50:00.941767 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Feb 13 20:50:00.941775 kernel: software IO TLB: area num 4.
Feb 13 20:50:00.941782 kernel: software IO TLB: mapped [mem 0x00000000d2e00000-0x00000000d6e00000] (64MB)
Feb 13 20:50:00.941790 kernel: Memory: 2386528K/2572288K available (10240K kernel code, 2186K rwdata, 8096K rodata, 39360K init, 897K bss, 185760K reserved, 0K cma-reserved)
Feb 13 20:50:00.941797 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1
Feb 13 20:50:00.941804 kernel: rcu: Preemptible hierarchical RCU implementation.
Feb 13 20:50:00.941811 kernel: rcu: RCU event tracing is enabled.
Feb 13 20:50:00.941818 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4.
Feb 13 20:50:00.941825 kernel: Trampoline variant of Tasks RCU enabled.
Feb 13 20:50:00.941831 kernel: Tracing variant of Tasks RCU enabled.
Feb 13 20:50:00.941838 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Feb 13 20:50:00.941845 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4
Feb 13 20:50:00.941851 kernel: NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0
Feb 13 20:50:00.941860 kernel: GICv3: 256 SPIs implemented
Feb 13 20:50:00.941866 kernel: GICv3: 0 Extended SPIs implemented
Feb 13 20:50:00.941873 kernel: Root IRQ handler: gic_handle_irq
Feb 13 20:50:00.941879 kernel: GICv3: GICv3 features: 16 PPIs, DirectLPI
Feb 13 20:50:00.941886 kernel: GICv3: CPU0: found redistributor 0 region 0:0x00000000080a0000
Feb 13 20:50:00.941893 kernel: ITS [mem 0x08080000-0x0809ffff]
Feb 13 20:50:00.941900 kernel: ITS@0x0000000008080000: allocated 8192 Devices @400c0000 (indirect, esz 8, psz 64K, shr 1)
Feb 13 20:50:00.941906 kernel: ITS@0x0000000008080000: allocated 8192 Interrupt Collections @400d0000 (flat, esz 8, psz 64K, shr 1)
Feb 13 20:50:00.941913 kernel: GICv3: using LPI property table @0x00000000400f0000
Feb 13 20:50:00.941920 kernel: GICv3: CPU0: using allocated LPI pending table @0x0000000040100000
Feb 13 20:50:00.941927 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Feb 13 20:50:00.941935 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Feb 13 20:50:00.941942 kernel: arch_timer: cp15 timer(s) running at 25.00MHz (virt).
Feb 13 20:50:00.941948 kernel: clocksource: arch_sys_counter: mask: 0xffffffffffffff max_cycles: 0x5c40939b5, max_idle_ns: 440795202646 ns
Feb 13 20:50:00.941956 kernel: sched_clock: 56 bits at 25MHz, resolution 40ns, wraps every 4398046511100ns
Feb 13 20:50:00.941962 kernel: arm-pv: using stolen time PV
Feb 13 20:50:00.941969 kernel: Console: colour dummy device 80x25
Feb 13 20:50:00.941976 kernel: ACPI: Core revision 20230628
Feb 13 20:50:00.941983 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 50.00 BogoMIPS (lpj=25000)
Feb 13 20:50:00.941991 kernel: pid_max: default: 32768 minimum: 301
Feb 13 20:50:00.941998 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity
Feb 13 20:50:00.942006 kernel: landlock: Up and running.
Feb 13 20:50:00.942013 kernel: SELinux: Initializing.
Feb 13 20:50:00.942020 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Feb 13 20:50:00.942027 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Feb 13 20:50:00.942034 kernel: RCU Tasks: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Feb 13 20:50:00.942041 kernel: RCU Tasks Trace: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Feb 13 20:50:00.942048 kernel: rcu: Hierarchical SRCU implementation.
Feb 13 20:50:00.942055 kernel: rcu: Max phase no-delay instances is 400.
Feb 13 20:50:00.942062 kernel: Platform MSI: ITS@0x8080000 domain created
Feb 13 20:50:00.942070 kernel: PCI/MSI: ITS@0x8080000 domain created
Feb 13 20:50:00.942077 kernel: Remapping and enabling EFI services.
Feb 13 20:50:00.942084 kernel: smp: Bringing up secondary CPUs ...
Feb 13 20:50:00.942091 kernel: Detected PIPT I-cache on CPU1
Feb 13 20:50:00.942098 kernel: GICv3: CPU1: found redistributor 1 region 0:0x00000000080c0000
Feb 13 20:50:00.942105 kernel: GICv3: CPU1: using allocated LPI pending table @0x0000000040110000
Feb 13 20:50:00.942112 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Feb 13 20:50:00.942119 kernel: CPU1: Booted secondary processor 0x0000000001 [0x413fd0c1]
Feb 13 20:50:00.942126 kernel: Detected PIPT I-cache on CPU2
Feb 13 20:50:00.942133 kernel: GICv3: CPU2: found redistributor 2 region 0:0x00000000080e0000
Feb 13 20:50:00.942142 kernel: GICv3: CPU2: using allocated LPI pending table @0x0000000040120000
Feb 13 20:50:00.942149 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Feb 13 20:50:00.942160 kernel: CPU2: Booted secondary processor 0x0000000002 [0x413fd0c1]
Feb 13 20:50:00.942169 kernel: Detected PIPT I-cache on CPU3
Feb 13 20:50:00.942176 kernel: GICv3: CPU3: found redistributor 3 region 0:0x0000000008100000
Feb 13 20:50:00.942184 kernel: GICv3: CPU3: using allocated LPI pending table @0x0000000040130000
Feb 13 20:50:00.942191 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Feb 13 20:50:00.942198 kernel: CPU3: Booted secondary processor 0x0000000003 [0x413fd0c1]
Feb 13 20:50:00.942206 kernel: smp: Brought up 1 node, 4 CPUs
Feb 13 20:50:00.942215 kernel: SMP: Total of 4 processors activated.
Feb 13 20:50:00.942223 kernel: CPU features: detected: 32-bit EL0 Support
Feb 13 20:50:00.942230 kernel: CPU features: detected: Data cache clean to the PoU not required for I/D coherence
Feb 13 20:50:00.942255 kernel: CPU features: detected: Common not Private translations
Feb 13 20:50:00.942264 kernel: CPU features: detected: CRC32 instructions
Feb 13 20:50:00.942271 kernel: CPU features: detected: Enhanced Virtualization Traps
Feb 13 20:50:00.942278 kernel: CPU features: detected: RCpc load-acquire (LDAPR)
Feb 13 20:50:00.942286 kernel: CPU features: detected: LSE atomic instructions
Feb 13 20:50:00.942296 kernel: CPU features: detected: Privileged Access Never
Feb 13 20:50:00.942303 kernel: CPU features: detected: RAS Extension Support
Feb 13 20:50:00.942311 kernel: CPU features: detected: Speculative Store Bypassing Safe (SSBS)
Feb 13 20:50:00.942318 kernel: CPU: All CPU(s) started at EL1
Feb 13 20:50:00.942325 kernel: alternatives: applying system-wide alternatives
Feb 13 20:50:00.942333 kernel: devtmpfs: initialized
Feb 13 20:50:00.942341 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Feb 13 20:50:00.942348 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear)
Feb 13 20:50:00.942356 kernel: pinctrl core: initialized pinctrl subsystem
Feb 13 20:50:00.942364 kernel: SMBIOS 3.0.0 present.
Feb 13 20:50:00.942372 kernel: DMI: QEMU KVM Virtual Machine, BIOS edk2-20230524-3.fc38 05/24/2023
Feb 13 20:50:00.942379 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Feb 13 20:50:00.942386 kernel: DMA: preallocated 512 KiB GFP_KERNEL pool for atomic allocations
Feb 13 20:50:00.942394 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations
Feb 13 20:50:00.942401 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations
Feb 13 20:50:00.942409 kernel: audit: initializing netlink subsys (disabled)
Feb 13 20:50:00.942416 kernel: audit: type=2000 audit(0.024:1): state=initialized audit_enabled=0 res=1
Feb 13 20:50:00.942423 kernel: thermal_sys: Registered thermal governor 'step_wise'
Feb 13 20:50:00.942432 kernel: cpuidle: using governor menu
Feb 13 20:50:00.942441 kernel: hw-breakpoint: found 6 breakpoint and 4 watchpoint registers.
Feb 13 20:50:00.942448 kernel: ASID allocator initialised with 32768 entries
Feb 13 20:50:00.942456 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Feb 13 20:50:00.942463 kernel: Serial: AMBA PL011 UART driver
Feb 13 20:50:00.942471 kernel: Modules: 2G module region forced by RANDOMIZE_MODULE_REGION_FULL
Feb 13 20:50:00.942478 kernel: Modules: 0 pages in range for non-PLT usage
Feb 13 20:50:00.942492 kernel: Modules: 509040 pages in range for PLT usage
Feb 13 20:50:00.942500 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Feb 13 20:50:00.942510 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 1.00 GiB page
Feb 13 20:50:00.942525 kernel: HugeTLB: registered 32.0 MiB page size, pre-allocated 0 pages
Feb 13 20:50:00.942533 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 32.0 MiB page
Feb 13 20:50:00.942553 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Feb 13 20:50:00.942560 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 2.00 MiB page
Feb 13 20:50:00.942568 kernel: HugeTLB: registered 64.0 KiB page size, pre-allocated 0 pages
Feb 13 20:50:00.942576 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 64.0 KiB page
Feb 13 20:50:00.942583 kernel: ACPI: Added _OSI(Module Device)
Feb 13 20:50:00.942591 kernel: ACPI: Added _OSI(Processor Device)
Feb 13 20:50:00.942600 kernel: ACPI: Added _OSI(3.0 _SCP Extensions)
Feb 13 20:50:00.942607 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Feb 13 20:50:00.942614 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Feb 13 20:50:00.942622 kernel: ACPI: Interpreter enabled
Feb 13 20:50:00.942629 kernel: ACPI: Using GIC for interrupt routing
Feb 13 20:50:00.942636 kernel: ACPI: MCFG table detected, 1 entries
Feb 13 20:50:00.942644 kernel: ARMH0011:00: ttyAMA0 at MMIO 0x9000000 (irq = 12, base_baud = 0) is a SBSA
Feb 13 20:50:00.942651 kernel: printk: console [ttyAMA0] enabled
Feb 13 20:50:00.942659 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Feb 13 20:50:00.942800 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Feb 13 20:50:00.942875 kernel: acpi PNP0A08:00: _OSC: platform does not support [LTR]
Feb 13 20:50:00.942941 kernel: acpi PNP0A08:00: _OSC: OS now controls [PCIeHotplug PME AER PCIeCapability]
Feb 13 20:50:00.943005 kernel: acpi PNP0A08:00: ECAM area [mem 0x4010000000-0x401fffffff] reserved by PNP0C02:00
Feb 13 20:50:00.943080 kernel: acpi PNP0A08:00: ECAM at [mem 0x4010000000-0x401fffffff] for [bus 00-ff]
Feb 13 20:50:00.943090 kernel: ACPI: Remapped I/O 0x000000003eff0000 to [io 0x0000-0xffff window]
Feb 13 20:50:00.943097 kernel: PCI host bridge to bus 0000:00
Feb 13 20:50:00.943172 kernel: pci_bus 0000:00: root bus resource [mem 0x10000000-0x3efeffff window]
Feb 13 20:50:00.943256 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0xffff window]
Feb 13 20:50:00.943318 kernel: pci_bus 0000:00: root bus resource [mem 0x8000000000-0xffffffffff window]
Feb 13 20:50:00.943383 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Feb 13 20:50:00.943463 kernel: pci 0000:00:00.0: [1b36:0008] type 00 class 0x060000
Feb 13 20:50:00.943622 kernel: pci 0000:00:01.0: [1af4:1005] type 00 class 0x00ff00
Feb 13 20:50:00.943702 kernel: pci 0000:00:01.0: reg 0x10: [io 0x0000-0x001f]
Feb 13 20:50:00.943769 kernel: pci 0000:00:01.0: reg 0x14: [mem 0x10000000-0x10000fff]
Feb 13 20:50:00.943834 kernel: pci 0000:00:01.0: reg 0x20: [mem 0x8000000000-0x8000003fff 64bit pref]
Feb 13 20:50:00.943899 kernel: pci 0000:00:01.0: BAR 4: assigned [mem 0x8000000000-0x8000003fff 64bit pref]
Feb 13 20:50:00.943965 kernel: pci 0000:00:01.0: BAR 1: assigned [mem 0x10000000-0x10000fff]
Feb 13 20:50:00.944031 kernel: pci 0000:00:01.0: BAR 0: assigned [io 0x1000-0x101f]
Feb 13 20:50:00.944090 kernel: pci_bus 0000:00: resource 4 [mem 0x10000000-0x3efeffff window]
Feb 13 20:50:00.944146 kernel: pci_bus 0000:00: resource 5 [io 0x0000-0xffff window]
Feb 13 20:50:00.944212 kernel: pci_bus 0000:00: resource 6 [mem 0x8000000000-0xffffffffff window]
Feb 13 20:50:00.944221 kernel: ACPI: PCI: Interrupt link GSI0 configured for IRQ 35
Feb 13 20:50:00.944229 kernel: ACPI: PCI: Interrupt link GSI1 configured for IRQ 36
Feb 13 20:50:00.944244 kernel: ACPI: PCI: Interrupt link GSI2 configured for IRQ 37
Feb 13 20:50:00.944252 kernel: ACPI: PCI: Interrupt link GSI3 configured for IRQ 38
Feb 13 20:50:00.944259 kernel: iommu: Default domain type: Translated
Feb 13 20:50:00.944267 kernel: iommu: DMA domain TLB invalidation policy: strict mode
Feb 13 20:50:00.944274 kernel: efivars: Registered efivars operations
Feb 13 20:50:00.944285 kernel: vgaarb: loaded
Feb 13 20:50:00.944292 kernel: clocksource: Switched to clocksource arch_sys_counter
Feb 13 20:50:00.944300 kernel: VFS: Disk quotas dquot_6.6.0
Feb 13 20:50:00.944308 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Feb 13 20:50:00.944316 kernel: pnp: PnP ACPI init
Feb 13 20:50:00.944400 kernel: system 00:00: [mem 0x4010000000-0x401fffffff window] could not be reserved
Feb 13 20:50:00.944411 kernel: pnp: PnP ACPI: found 1 devices
Feb 13 20:50:00.944419 kernel: NET: Registered PF_INET protocol family
Feb 13 20:50:00.944429 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Feb 13 20:50:00.944436 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Feb 13 20:50:00.944444 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Feb 13 20:50:00.944452 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Feb 13 20:50:00.944459 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
Feb 13 20:50:00.944467 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Feb 13 20:50:00.944474 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Feb 13 20:50:00.944482 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Feb 13 20:50:00.944516 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Feb 13 20:50:00.944525 kernel: PCI: CLS 0 bytes, default 64
Feb 13 20:50:00.944533 kernel: kvm [1]: HYP mode not available
Feb 13 20:50:00.944540 kernel: Initialise system trusted keyrings
Feb 13 20:50:00.944548 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Feb 13 20:50:00.944557 kernel: Key type asymmetric registered
Feb 13 20:50:00.944564 kernel: Asymmetric key parser 'x509' registered
Feb 13 20:50:00.944572 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250)
Feb 13 20:50:00.944579 kernel: io scheduler mq-deadline registered
Feb 13 20:50:00.944586 kernel: io scheduler kyber registered
Feb 13 20:50:00.944595 kernel: io scheduler bfq registered
Feb 13 20:50:00.944603 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0C:00/input/input0
Feb 13 20:50:00.944610 kernel: ACPI: button: Power Button [PWRB]
Feb 13 20:50:00.944618 kernel: ACPI: \_SB_.PCI0.GSI1: Enabled at IRQ 36
Feb 13 20:50:00.944691 kernel: virtio-pci 0000:00:01.0: enabling device (0005 -> 0007)
Feb 13 20:50:00.944701 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Feb 13 20:50:00.944709 kernel: thunder_xcv, ver 1.0
Feb 13 20:50:00.944716 kernel: thunder_bgx, ver 1.0
Feb 13 20:50:00.944724 kernel: nicpf, ver 1.0
Feb 13 20:50:00.944733 kernel: nicvf, ver 1.0
Feb 13 20:50:00.944805 kernel: rtc-efi rtc-efi.0: registered as rtc0
Feb 13 20:50:00.944879 kernel: rtc-efi rtc-efi.0: setting system clock to 2025-02-13T20:50:00 UTC (1739479800)
Feb 13 20:50:00.944889 kernel: hid: raw HID events driver (C) Jiri Kosina
Feb 13 20:50:00.944897 kernel: hw perfevents: enabled with armv8_pmuv3_0 PMU driver, 7 counters available
Feb 13 20:50:00.944904 kernel: watchdog: Delayed init of the lockup detector failed: -19
Feb 13 20:50:00.944912 kernel: watchdog: Hard watchdog permanently disabled
Feb 13 20:50:00.944919 kernel: NET: Registered PF_INET6 protocol family
Feb 13 20:50:00.944928 kernel: Segment Routing with IPv6
Feb 13 20:50:00.944936 kernel: In-situ OAM (IOAM) with IPv6
Feb 13 20:50:00.944943 kernel: NET: Registered PF_PACKET protocol family
Feb 13 20:50:00.944951 kernel: Key type dns_resolver registered
Feb 13 20:50:00.944958 kernel: registered taskstats version 1
Feb 13 20:50:00.944965 kernel: Loading compiled-in X.509 certificates
Feb 13 20:50:00.944973 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.74-flatcar: 8bd805622262697b24b0fa7c407ae82c4289ceec'
Feb 13 20:50:00.944980 kernel: Key type .fscrypt registered
Feb 13 20:50:00.944987 kernel: Key type fscrypt-provisioning registered
Feb 13 20:50:00.944996 kernel: ima: No TPM chip found, activating TPM-bypass!
Feb 13 20:50:00.945004 kernel: ima: Allocated hash algorithm: sha1
Feb 13 20:50:00.945012 kernel: ima: No architecture policies found
Feb 13 20:50:00.945019 kernel: alg: No test for fips(ansi_cprng) (fips_ansi_cprng)
Feb 13 20:50:00.945026 kernel: clk: Disabling unused clocks
Feb 13 20:50:00.945034 kernel: Freeing unused kernel memory: 39360K
Feb 13 20:50:00.945041 kernel: Run /init as init process
Feb 13 20:50:00.945048 kernel: with arguments:
Feb 13 20:50:00.945055 kernel: /init
Feb 13 20:50:00.945064 kernel: with environment:
Feb 13 20:50:00.945071 kernel: HOME=/
Feb 13 20:50:00.945078 kernel: TERM=linux
Feb 13 20:50:00.945085 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
Feb 13 20:50:00.945095 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Feb 13 20:50:00.945104 systemd[1]: Detected virtualization kvm.
Feb 13 20:50:00.945112 systemd[1]: Detected architecture arm64.
Feb 13 20:50:00.945119 systemd[1]: Running in initrd.
Feb 13 20:50:00.945129 systemd[1]: No hostname configured, using default hostname.
Feb 13 20:50:00.945136 systemd[1]: Hostname set to <localhost>.
Feb 13 20:50:00.945144 systemd[1]: Initializing machine ID from VM UUID.
Feb 13 20:50:00.945152 systemd[1]: Queued start job for default target initrd.target.
Feb 13 20:50:00.945160 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Feb 13 20:50:00.945168 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Feb 13 20:50:00.945177 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Feb 13 20:50:00.945185 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Feb 13 20:50:00.945194 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Feb 13 20:50:00.945203 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Feb 13 20:50:00.945212 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Feb 13 20:50:00.945220 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Feb 13 20:50:00.945228 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Feb 13 20:50:00.945243 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Feb 13 20:50:00.945252 systemd[1]: Reached target paths.target - Path Units.
Feb 13 20:50:00.945261 systemd[1]: Reached target slices.target - Slice Units.
Feb 13 20:50:00.945269 systemd[1]: Reached target swap.target - Swaps.
Feb 13 20:50:00.945277 systemd[1]: Reached target timers.target - Timer Units.
Feb 13 20:50:00.945285 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Feb 13 20:50:00.945293 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Feb 13 20:50:00.945301 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Feb 13 20:50:00.945310 systemd[1]: Listening on systemd-journald.socket - Journal Socket.
Feb 13 20:50:00.945318 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Feb 13 20:50:00.945328 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Feb 13 20:50:00.945336 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Feb 13 20:50:00.945344 systemd[1]: Reached target sockets.target - Socket Units.
Feb 13 20:50:00.945352 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Feb 13 20:50:00.945360 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Feb 13 20:50:00.945368 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Feb 13 20:50:00.945376 systemd[1]: Starting systemd-fsck-usr.service...
Feb 13 20:50:00.945384 systemd[1]: Starting systemd-journald.service - Journal Service...
Feb 13 20:50:00.945392 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Feb 13 20:50:00.945402 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Feb 13 20:50:00.945410 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Feb 13 20:50:00.945418 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Feb 13 20:50:00.945426 systemd[1]: Finished systemd-fsck-usr.service.
Feb 13 20:50:00.945435 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Feb 13 20:50:00.945445 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Feb 13 20:50:00.945453 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Feb 13 20:50:00.945462 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Feb 13 20:50:00.945498 systemd-journald[237]: Collecting audit messages is disabled.
Feb 13 20:50:00.945521 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Feb 13 20:50:00.945530 systemd-journald[237]: Journal started
Feb 13 20:50:00.945552 systemd-journald[237]: Runtime Journal (/run/log/journal/b229b2329ed04bcb81002cfe56b6a41c) is 5.9M, max 47.3M, 41.4M free.
Feb 13 20:50:00.933083 systemd-modules-load[238]: Inserted module 'overlay'
Feb 13 20:50:00.947881 systemd[1]: Started systemd-journald.service - Journal Service.
Feb 13 20:50:00.951582 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Feb 13 20:50:00.952265 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Feb 13 20:50:00.954434 kernel: Bridge firewalling registered
Feb 13 20:50:00.952650 systemd-modules-load[238]: Inserted module 'br_netfilter'
Feb 13 20:50:00.955743 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Feb 13 20:50:00.959156 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Feb 13 20:50:00.960935 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Feb 13 20:50:00.963114 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Feb 13 20:50:00.965332 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Feb 13 20:50:00.967722 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Feb 13 20:50:00.976675 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Feb 13 20:50:00.978552 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Feb 13 20:50:00.986202 dracut-cmdline[276]: dracut-dracut-053
Feb 13 20:50:00.988576 dracut-cmdline[276]: Using kernel command line parameters: rd.driver.pre=btrfs BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected acpi=force verity.usrhash=c15c751c06cfb933aa98417326b93d899c08a83ce060a940cd01082629c201a7
Feb 13 20:50:01.006474 systemd-resolved[278]: Positive Trust Anchors:
Feb 13 20:50:01.006501 systemd-resolved[278]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Feb 13 20:50:01.006534 systemd-resolved[278]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Feb 13 20:50:01.011211 systemd-resolved[278]: Defaulting to hostname 'linux'.
Feb 13 20:50:01.016011 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Feb 13 20:50:01.017129 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Feb 13 20:50:01.055513 kernel: SCSI subsystem initialized
Feb 13 20:50:01.060504 kernel: Loading iSCSI transport class v2.0-870.
Feb 13 20:50:01.068543 kernel: iscsi: registered transport (tcp)
Feb 13 20:50:01.081502 kernel: iscsi: registered transport (qla4xxx)
Feb 13 20:50:01.081525 kernel: QLogic iSCSI HBA Driver
Feb 13 20:50:01.128342 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Feb 13 20:50:01.142656 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Feb 13 20:50:01.157663 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Feb 13 20:50:01.157707 kernel: device-mapper: uevent: version 1.0.3
Feb 13 20:50:01.158508 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com
Feb 13 20:50:01.206509 kernel: raid6: neonx8 gen() 15698 MB/s
Feb 13 20:50:01.223499 kernel: raid6: neonx4 gen() 15650 MB/s
Feb 13 20:50:01.240498 kernel: raid6: neonx2 gen() 13261 MB/s
Feb 13 20:50:01.257496 kernel: raid6: neonx1 gen() 10479 MB/s
Feb 13 20:50:01.274505 kernel: raid6: int64x8 gen() 6950 MB/s
Feb 13 20:50:01.291499 kernel: raid6: int64x4 gen() 7344 MB/s
Feb 13 20:50:01.308501 kernel: raid6: int64x2 gen() 6124 MB/s
Feb 13 20:50:01.325503 kernel: raid6: int64x1 gen() 5052 MB/s
Feb 13 20:50:01.325520 kernel: raid6: using algorithm neonx8 gen() 15698 MB/s
Feb 13 20:50:01.342508 kernel: raid6: .... xor() 11906 MB/s, rmw enabled
Feb 13 20:50:01.342526 kernel: raid6: using neon recovery algorithm
Feb 13 20:50:01.349859 kernel: xor: measuring software checksum speed
Feb 13 20:50:01.349873 kernel: 8regs : 19816 MB/sec
Feb 13 20:50:01.349882 kernel: 32regs : 19674 MB/sec
Feb 13 20:50:01.350797 kernel: arm64_neon : 26989 MB/sec
Feb 13 20:50:01.350810 kernel: xor: using function: arm64_neon (26989 MB/sec)
Feb 13 20:50:01.407508 kernel: Btrfs loaded, zoned=no, fsverity=no
Feb 13 20:50:01.418945 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Feb 13 20:50:01.431503 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Feb 13 20:50:01.442944 systemd-udevd[461]: Using default interface naming scheme 'v255'.
Feb 13 20:50:01.446245 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Feb 13 20:50:01.448657 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Feb 13 20:50:01.466760 dracut-pre-trigger[466]: rd.md=0: removing MD RAID activation
Feb 13 20:50:01.498984 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Feb 13 20:50:01.510660 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Feb 13 20:50:01.552133 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Feb 13 20:50:01.559651 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Feb 13 20:50:01.572812 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Feb 13 20:50:01.575639 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Feb 13 20:50:01.576757 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Feb 13 20:50:01.578352 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Feb 13 20:50:01.586652 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Feb 13 20:50:01.595853 kernel: virtio_blk virtio1: 1/0/0 default/read/poll queues
Feb 13 20:50:01.610631 kernel: virtio_blk virtio1: [vda] 19775488 512-byte logical blocks (10.1 GB/9.43 GiB)
Feb 13 20:50:01.610740 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Feb 13 20:50:01.610759 kernel: GPT:9289727 != 19775487
Feb 13 20:50:01.610769 kernel: GPT:Alternate GPT header not at the end of the disk.
Feb 13 20:50:01.610778 kernel: GPT:9289727 != 19775487
Feb 13 20:50:01.610789 kernel: GPT: Use GNU Parted to correct GPT errors.
Feb 13 20:50:01.610799 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Feb 13 20:50:01.597095 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Feb 13 20:50:01.616069 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Feb 13 20:50:01.616188 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Feb 13 20:50:01.619017 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Feb 13 20:50:01.620134 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Feb 13 20:50:01.620276 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Feb 13 20:50:01.622699 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Feb 13 20:50:01.630507 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/vda6 scanned by (udev-worker) (506)
Feb 13 20:50:01.632515 kernel: BTRFS: device fsid 4bb2b262-8ef2-48e3-80f4-24f9d7a85bf6 devid 1 transid 40 /dev/vda3 scanned by (udev-worker) (512)
Feb 13 20:50:01.632732 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Feb 13 20:50:01.644787 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM.
Feb 13 20:50:01.645986 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Feb 13 20:50:01.654680 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT.
Feb 13 20:50:01.658986 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Feb 13 20:50:01.662609 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A.
Feb 13 20:50:01.663477 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132.
Feb 13 20:50:01.677706 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Feb 13 20:50:01.679285 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Feb 13 20:50:01.683122 disk-uuid[551]: Primary Header is updated.
Feb 13 20:50:01.683122 disk-uuid[551]: Secondary Entries is updated.
Feb 13 20:50:01.683122 disk-uuid[551]: Secondary Header is updated.
Feb 13 20:50:01.686510 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Feb 13 20:50:01.698533 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Feb 13 20:50:01.702854 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Feb 13 20:50:02.698520 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Feb 13 20:50:02.699779 disk-uuid[552]: The operation has completed successfully.
Feb 13 20:50:02.723459 systemd[1]: disk-uuid.service: Deactivated successfully.
Feb 13 20:50:02.723569 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Feb 13 20:50:02.743692 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
Feb 13 20:50:02.746747 sh[575]: Success
Feb 13 20:50:02.760515 kernel: device-mapper: verity: sha256 using implementation "sha256-ce"
Feb 13 20:50:02.791802 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
Feb 13 20:50:02.805756 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
Feb 13 20:50:02.807789 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
Feb 13 20:50:02.818717 kernel: BTRFS info (device dm-0): first mount of filesystem 4bb2b262-8ef2-48e3-80f4-24f9d7a85bf6
Feb 13 20:50:02.818759 kernel: BTRFS info (device dm-0): using crc32c (crc32c-generic) checksum algorithm
Feb 13 20:50:02.818770 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead
Feb 13 20:50:02.818789 kernel: BTRFS info (device dm-0): disabling log replay at mount time
Feb 13 20:50:02.819503 kernel: BTRFS info (device dm-0): using free space tree
Feb 13 20:50:02.822470 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
Feb 13 20:50:02.823522 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Feb 13 20:50:02.828684 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Feb 13 20:50:02.829935 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Feb 13 20:50:02.838008 kernel: BTRFS info (device vda6): first mount of filesystem 896fb6d3-4143-43a6-a44b-ca1ce10817e1
Feb 13 20:50:02.838146 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm
Feb 13 20:50:02.838161 kernel: BTRFS info (device vda6): using free space tree
Feb 13 20:50:02.840511 kernel: BTRFS info (device vda6): auto enabling async discard
Feb 13 20:50:02.847592 systemd[1]: mnt-oem.mount: Deactivated successfully.
Feb 13 20:50:02.848951 kernel: BTRFS info (device vda6): last unmount of filesystem 896fb6d3-4143-43a6-a44b-ca1ce10817e1
Feb 13 20:50:02.856342 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Feb 13 20:50:02.860660 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Feb 13 20:50:02.924885 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Feb 13 20:50:02.934685 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Feb 13 20:50:02.971724 systemd-networkd[765]: lo: Link UP
Feb 13 20:50:02.971733 systemd-networkd[765]: lo: Gained carrier
Feb 13 20:50:02.972441 systemd-networkd[765]: Enumeration completed
Feb 13 20:50:02.973016 systemd[1]: Started systemd-networkd.service - Network Configuration.
Feb 13 20:50:02.973988 systemd-networkd[765]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Feb 13 20:50:02.973991 systemd-networkd[765]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Feb 13 20:50:02.974924 systemd-networkd[765]: eth0: Link UP
Feb 13 20:50:02.974927 systemd-networkd[765]: eth0: Gained carrier
Feb 13 20:50:02.974934 systemd-networkd[765]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Feb 13 20:50:02.975064 systemd[1]: Reached target network.target - Network.
Feb 13 20:50:02.982556 ignition[670]: Ignition 2.19.0
Feb 13 20:50:02.982563 ignition[670]: Stage: fetch-offline
Feb 13 20:50:02.982600 ignition[670]: no configs at "/usr/lib/ignition/base.d"
Feb 13 20:50:02.982608 ignition[670]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Feb 13 20:50:02.983091 ignition[670]: parsed url from cmdline: ""
Feb 13 20:50:02.983095 ignition[670]: no config URL provided
Feb 13 20:50:02.983101 ignition[670]: reading system config file "/usr/lib/ignition/user.ign"
Feb 13 20:50:02.983112 ignition[670]: no config at "/usr/lib/ignition/user.ign"
Feb 13 20:50:02.983136 ignition[670]: op(1): [started] loading QEMU firmware config module
Feb 13 20:50:02.983142 ignition[670]: op(1): executing: "modprobe" "qemu_fw_cfg"
Feb 13 20:50:02.995545 ignition[670]: op(1): [finished] loading QEMU firmware config module
Feb 13 20:50:02.997548 systemd-networkd[765]: eth0: DHCPv4 address 10.0.0.7/16, gateway 10.0.0.1 acquired from 10.0.0.1
Feb 13 20:50:03.017896 ignition[670]: parsing config with SHA512: ffece3318dd1c2db20b59d8eb848a9f5e9e8f9760cedb60ef71789a5034982a4919eb91478542192f8c72888f6a1ae0731e9d8003a7adc9744c7e12bc9a18d3b
Feb 13 20:50:03.021845 unknown[670]: fetched base config from "system"
Feb 13 20:50:03.021856 unknown[670]: fetched user config from "qemu"
Feb 13 20:50:03.022344 ignition[670]: fetch-offline: fetch-offline passed
Feb 13 20:50:03.022454 systemd-resolved[278]: Detected conflict on linux IN A 10.0.0.7
Feb 13 20:50:03.022412 ignition[670]: Ignition finished successfully
Feb 13 20:50:03.022461 systemd-resolved[278]: Hostname conflict, changing published hostname from 'linux' to 'linux5'.
Feb 13 20:50:03.023587 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Feb 13 20:50:03.025772 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json).
Feb 13 20:50:03.032668 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Feb 13 20:50:03.042794 ignition[771]: Ignition 2.19.0
Feb 13 20:50:03.042805 ignition[771]: Stage: kargs
Feb 13 20:50:03.042971 ignition[771]: no configs at "/usr/lib/ignition/base.d"
Feb 13 20:50:03.042980 ignition[771]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Feb 13 20:50:03.043865 ignition[771]: kargs: kargs passed
Feb 13 20:50:03.043911 ignition[771]: Ignition finished successfully
Feb 13 20:50:03.045864 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Feb 13 20:50:03.048042 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Feb 13 20:50:03.061835 ignition[779]: Ignition 2.19.0
Feb 13 20:50:03.061846 ignition[779]: Stage: disks
Feb 13 20:50:03.062020 ignition[779]: no configs at "/usr/lib/ignition/base.d"
Feb 13 20:50:03.062029 ignition[779]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Feb 13 20:50:03.062962 ignition[779]: disks: disks passed
Feb 13 20:50:03.063009 ignition[779]: Ignition finished successfully
Feb 13 20:50:03.066001 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Feb 13 20:50:03.067276 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Feb 13 20:50:03.069576 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Feb 13 20:50:03.070945 systemd[1]: Reached target local-fs.target - Local File Systems.
Feb 13 20:50:03.072284 systemd[1]: Reached target sysinit.target - System Initialization.
Feb 13 20:50:03.073796 systemd[1]: Reached target basic.target - Basic System.
Feb 13 20:50:03.084633 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Feb 13 20:50:03.094062 systemd-fsck[791]: ROOT: clean, 14/553520 files, 52654/553472 blocks
Feb 13 20:50:03.097601 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Feb 13 20:50:03.099360 systemd[1]: Mounting sysroot.mount - /sysroot...
Feb 13 20:50:03.142512 kernel: EXT4-fs (vda9): mounted filesystem 9957d679-c6c4-49f4-b1b2-c3c1f3ba5699 r/w with ordered data mode. Quota mode: none.
Feb 13 20:50:03.142603 systemd[1]: Mounted sysroot.mount - /sysroot.
Feb 13 20:50:03.143656 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Feb 13 20:50:03.153566 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Feb 13 20:50:03.155109 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Feb 13 20:50:03.156323 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met.
Feb 13 20:50:03.156362 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Feb 13 20:50:03.161372 kernel: BTRFS: device label OEM devid 1 transid 13 /dev/vda6 scanned by mount (799)
Feb 13 20:50:03.156384 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Feb 13 20:50:03.164235 kernel: BTRFS info (device vda6): first mount of filesystem 896fb6d3-4143-43a6-a44b-ca1ce10817e1
Feb 13 20:50:03.164258 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm
Feb 13 20:50:03.164269 kernel: BTRFS info (device vda6): using free space tree
Feb 13 20:50:03.162766 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Feb 13 20:50:03.166438 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Feb 13 20:50:03.167995 kernel: BTRFS info (device vda6): auto enabling async discard
Feb 13 20:50:03.169464 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Feb 13 20:50:03.213634 initrd-setup-root[823]: cut: /sysroot/etc/passwd: No such file or directory
Feb 13 20:50:03.217640 initrd-setup-root[830]: cut: /sysroot/etc/group: No such file or directory
Feb 13 20:50:03.221555 initrd-setup-root[837]: cut: /sysroot/etc/shadow: No such file or directory
Feb 13 20:50:03.224943 initrd-setup-root[844]: cut: /sysroot/etc/gshadow: No such file or directory
Feb 13 20:50:03.301126 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Feb 13 20:50:03.310619 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Feb 13 20:50:03.311994 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Feb 13 20:50:03.317501 kernel: BTRFS info (device vda6): last unmount of filesystem 896fb6d3-4143-43a6-a44b-ca1ce10817e1
Feb 13 20:50:03.335188 ignition[912]: INFO : Ignition 2.19.0
Feb 13 20:50:03.335188 ignition[912]: INFO : Stage: mount
Feb 13 20:50:03.336515 ignition[912]: INFO : no configs at "/usr/lib/ignition/base.d"
Feb 13 20:50:03.336515 ignition[912]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Feb 13 20:50:03.336515 ignition[912]: INFO : mount: mount passed
Feb 13 20:50:03.336515 ignition[912]: INFO : Ignition finished successfully
Feb 13 20:50:03.338276 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Feb 13 20:50:03.353612 systemd[1]: Starting ignition-files.service - Ignition (files)...
Feb 13 20:50:03.354548 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Feb 13 20:50:03.816605 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Feb 13 20:50:03.825666 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Feb 13 20:50:03.831300 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/vda6 scanned by mount (925)
Feb 13 20:50:03.831331 kernel: BTRFS info (device vda6): first mount of filesystem 896fb6d3-4143-43a6-a44b-ca1ce10817e1
Feb 13 20:50:03.831342 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm
Feb 13 20:50:03.831971 kernel: BTRFS info (device vda6): using free space tree
Feb 13 20:50:03.834504 kernel: BTRFS info (device vda6): auto enabling async discard
Feb 13 20:50:03.835348 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Feb 13 20:50:03.851052 ignition[942]: INFO : Ignition 2.19.0
Feb 13 20:50:03.851052 ignition[942]: INFO : Stage: files
Feb 13 20:50:03.852520 ignition[942]: INFO : no configs at "/usr/lib/ignition/base.d"
Feb 13 20:50:03.852520 ignition[942]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Feb 13 20:50:03.852520 ignition[942]: DEBUG : files: compiled without relabeling support, skipping
Feb 13 20:50:03.855254 ignition[942]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Feb 13 20:50:03.855254 ignition[942]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Feb 13 20:50:03.857547 ignition[942]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Feb 13 20:50:03.858595 ignition[942]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Feb 13 20:50:03.858595 ignition[942]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Feb 13 20:50:03.858087 unknown[942]: wrote ssh authorized keys file for user: core
Feb 13 20:50:03.861555 ignition[942]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/etc/flatcar-cgroupv1"
Feb 13 20:50:03.861555 ignition[942]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/etc/flatcar-cgroupv1"
Feb 13 20:50:03.861555 ignition[942]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-arm64.tar.gz"
Feb 13 20:50:03.861555 ignition[942]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://get.helm.sh/helm-v3.13.2-linux-arm64.tar.gz: attempt #1
Feb 13 20:50:04.252610 ignition[942]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK
Feb 13 20:50:04.618729 ignition[942]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-arm64.tar.gz"
Feb 13 20:50:04.618729 ignition[942]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/install.sh"
Feb 13 20:50:04.621631 ignition[942]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh"
Feb 13 20:50:04.621631 ignition[942]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nginx.yaml"
Feb 13 20:50:04.621631 ignition[942]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml"
Feb 13 20:50:04.621631 ignition[942]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Feb 13 20:50:04.621631 ignition[942]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Feb 13 20:50:04.621631 ignition[942]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Feb 13 20:50:04.621631 ignition[942]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Feb 13 20:50:04.621631 ignition[942]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf"
Feb 13 20:50:04.621631 ignition[942]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Feb 13 20:50:04.621631 ignition[942]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.30.1-arm64.raw"
Feb 13 20:50:04.621631 ignition[942]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.30.1-arm64.raw"
Feb 13 20:50:04.621631 ignition[942]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.30.1-arm64.raw"
Feb 13 20:50:04.621631 ignition[942]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.30.1-arm64.raw: attempt #1
Feb 13 20:50:04.684706 systemd-networkd[765]: eth0: Gained IPv6LL
Feb 13 20:50:04.948035 ignition[942]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET result: OK
Feb 13 20:50:05.244421 ignition[942]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.30.1-arm64.raw"
Feb 13 20:50:05.244421 ignition[942]: INFO : files: op(c): [started] processing unit "containerd.service"
Feb 13 20:50:05.247148 ignition[942]: INFO : files: op(c): op(d): [started] writing systemd drop-in "10-use-cgroupfs.conf" at "/sysroot/etc/systemd/system/containerd.service.d/10-use-cgroupfs.conf"
Feb 13 20:50:05.247148 ignition[942]: INFO : files: op(c): op(d): [finished] writing systemd drop-in "10-use-cgroupfs.conf" at "/sysroot/etc/systemd/system/containerd.service.d/10-use-cgroupfs.conf"
Feb 13 20:50:05.247148 ignition[942]: INFO : files: op(c): [finished] processing unit "containerd.service"
Feb 13 20:50:05.247148 ignition[942]: INFO : files: op(e): [started] processing unit "prepare-helm.service"
Feb 13 20:50:05.247148 ignition[942]: INFO : files: op(e): op(f): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Feb 13 20:50:05.247148 ignition[942]: INFO : files: op(e): op(f): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Feb 13 20:50:05.247148 ignition[942]: INFO : files: op(e): [finished] processing unit "prepare-helm.service"
Feb 13 20:50:05.247148 ignition[942]: INFO : files: op(10): [started] processing unit "coreos-metadata.service"
Feb 13 20:50:05.247148 ignition[942]: INFO : files: op(10): op(11): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Feb 13 20:50:05.247148 ignition[942]: INFO : files: op(10): op(11): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Feb 13 20:50:05.247148 ignition[942]: INFO : files: op(10): [finished] processing unit "coreos-metadata.service"
Feb 13 20:50:05.247148 ignition[942]: INFO : files: op(12): [started] setting preset to disabled for "coreos-metadata.service"
Feb 13 20:50:05.270495 ignition[942]: INFO : files: op(12): op(13): [started] removing enablement symlink(s) for "coreos-metadata.service"
Feb 13 20:50:05.274314 ignition[942]: INFO : files: op(12): op(13): [finished] removing enablement symlink(s) for "coreos-metadata.service"
Feb 13 20:50:05.276728 ignition[942]: INFO : files: op(12): [finished] setting preset to disabled for "coreos-metadata.service"
Feb 13 20:50:05.276728 ignition[942]: INFO : files: op(14): [started] setting preset to enabled for "prepare-helm.service"
Feb 13 20:50:05.276728 ignition[942]: INFO : files: op(14): [finished] setting preset to enabled for "prepare-helm.service"
Feb 13 20:50:05.276728 ignition[942]: INFO : files: createResultFile: createFiles: op(15): [started] writing file "/sysroot/etc/.ignition-result.json"
Feb 13 20:50:05.276728 ignition[942]: INFO : files: createResultFile: createFiles: op(15): [finished] writing file "/sysroot/etc/.ignition-result.json"
Feb 13 20:50:05.276728 ignition[942]: INFO : files: files passed
Feb 13 20:50:05.276728 ignition[942]: INFO : Ignition finished successfully
Feb 13 20:50:05.277178 systemd[1]: Finished ignition-files.service - Ignition (files).
Feb 13 20:50:05.286660 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Feb 13 20:50:05.289682 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Feb 13 20:50:05.290861 systemd[1]: ignition-quench.service: Deactivated successfully.
Feb 13 20:50:05.290946 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Feb 13 20:50:05.296683 initrd-setup-root-after-ignition[971]: grep: /sysroot/oem/oem-release: No such file or directory
Feb 13 20:50:05.298774 initrd-setup-root-after-ignition[973]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Feb 13 20:50:05.298774 initrd-setup-root-after-ignition[973]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Feb 13 20:50:05.301801 initrd-setup-root-after-ignition[977]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Feb 13 20:50:05.302236 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Feb 13 20:50:05.304536 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Feb 13 20:50:05.312666 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Feb 13 20:50:05.334281 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Feb 13 20:50:05.334426 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Feb 13 20:50:05.336466 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Feb 13 20:50:05.337981 systemd[1]: Reached target initrd.target - Initrd Default Target.
Feb 13 20:50:05.339428 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Feb 13 20:50:05.340285 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Feb 13 20:50:05.357730 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Feb 13 20:50:05.368677 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Feb 13 20:50:05.376752 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Feb 13 20:50:05.377821 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Feb 13 20:50:05.379518 systemd[1]: Stopped target timers.target - Timer Units.
Feb 13 20:50:05.381019 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Feb 13 20:50:05.381146 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Feb 13 20:50:05.383122 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Feb 13 20:50:05.384610 systemd[1]: Stopped target basic.target - Basic System.
Feb 13 20:50:05.385834 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Feb 13 20:50:05.387125 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Feb 13 20:50:05.388576 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Feb 13 20:50:05.390094 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Feb 13 20:50:05.391495 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Feb 13 20:50:05.393106 systemd[1]: Stopped target sysinit.target - System Initialization. Feb 13 20:50:05.394678 systemd[1]: Stopped target local-fs.target - Local File Systems. Feb 13 20:50:05.396116 systemd[1]: Stopped target swap.target - Swaps. Feb 13 20:50:05.397295 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Feb 13 20:50:05.397415 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Feb 13 20:50:05.399272 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Feb 13 20:50:05.400780 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Feb 13 20:50:05.402234 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Feb 13 20:50:05.406546 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Feb 13 20:50:05.407545 systemd[1]: dracut-initqueue.service: Deactivated successfully. Feb 13 20:50:05.407664 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Feb 13 20:50:05.410024 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Feb 13 20:50:05.410134 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Feb 13 20:50:05.411690 systemd[1]: Stopped target paths.target - Path Units. Feb 13 20:50:05.412949 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Feb 13 20:50:05.416532 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Feb 13 20:50:05.417464 systemd[1]: Stopped target slices.target - Slice Units. Feb 13 20:50:05.419024 systemd[1]: Stopped target sockets.target - Socket Units. Feb 13 20:50:05.420198 systemd[1]: iscsid.socket: Deactivated successfully. Feb 13 20:50:05.420304 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Feb 13 20:50:05.421425 systemd[1]: iscsiuio.socket: Deactivated successfully. Feb 13 20:50:05.421525 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Feb 13 20:50:05.422662 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Feb 13 20:50:05.422775 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Feb 13 20:50:05.424079 systemd[1]: ignition-files.service: Deactivated successfully. Feb 13 20:50:05.424178 systemd[1]: Stopped ignition-files.service - Ignition (files). Feb 13 20:50:05.433636 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Feb 13 20:50:05.434297 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Feb 13 20:50:05.434417 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Feb 13 20:50:05.437051 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Feb 13 20:50:05.437938 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Feb 13 20:50:05.438049 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Feb 13 20:50:05.439564 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Feb 13 20:50:05.439709 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. 
Feb 13 20:50:05.444362 ignition[998]: INFO : Ignition 2.19.0 Feb 13 20:50:05.444362 ignition[998]: INFO : Stage: umount Feb 13 20:50:05.445755 ignition[998]: INFO : no configs at "/usr/lib/ignition/base.d" Feb 13 20:50:05.445755 ignition[998]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Feb 13 20:50:05.446369 systemd[1]: initrd-cleanup.service: Deactivated successfully. Feb 13 20:50:05.448853 ignition[998]: INFO : umount: umount passed Feb 13 20:50:05.448853 ignition[998]: INFO : Ignition finished successfully Feb 13 20:50:05.446562 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Feb 13 20:50:05.450629 systemd[1]: sysroot-boot.mount: Deactivated successfully. Feb 13 20:50:05.451055 systemd[1]: ignition-mount.service: Deactivated successfully. Feb 13 20:50:05.451164 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Feb 13 20:50:05.453247 systemd[1]: Stopped target network.target - Network. Feb 13 20:50:05.454214 systemd[1]: ignition-disks.service: Deactivated successfully. Feb 13 20:50:05.454279 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Feb 13 20:50:05.455674 systemd[1]: ignition-kargs.service: Deactivated successfully. Feb 13 20:50:05.455713 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Feb 13 20:50:05.457213 systemd[1]: ignition-setup.service: Deactivated successfully. Feb 13 20:50:05.457253 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Feb 13 20:50:05.458471 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Feb 13 20:50:05.458520 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Feb 13 20:50:05.460045 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Feb 13 20:50:05.461316 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Feb 13 20:50:05.468797 systemd[1]: systemd-resolved.service: Deactivated successfully. Feb 13 20:50:05.468912 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Feb 13 20:50:05.469560 systemd-networkd[765]: eth0: DHCPv6 lease lost Feb 13 20:50:05.471236 systemd[1]: systemd-networkd.service: Deactivated successfully. Feb 13 20:50:05.471339 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Feb 13 20:50:05.473086 systemd[1]: systemd-networkd.socket: Deactivated successfully. Feb 13 20:50:05.473138 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Feb 13 20:50:05.483655 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Feb 13 20:50:05.484318 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Feb 13 20:50:05.484376 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Feb 13 20:50:05.485825 systemd[1]: systemd-sysctl.service: Deactivated successfully. Feb 13 20:50:05.485863 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Feb 13 20:50:05.487237 systemd[1]: systemd-modules-load.service: Deactivated successfully. Feb 13 20:50:05.487277 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Feb 13 20:50:05.488900 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Feb 13 20:50:05.488934 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Feb 13 20:50:05.490446 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Feb 13 20:50:05.499909 systemd[1]: network-cleanup.service: Deactivated successfully. 
Feb 13 20:50:05.500024 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Feb 13 20:50:05.511683 systemd[1]: systemd-udevd.service: Deactivated successfully. Feb 13 20:50:05.511839 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Feb 13 20:50:05.513861 systemd[1]: sysroot-boot.service: Deactivated successfully. Feb 13 20:50:05.514044 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Feb 13 20:50:05.515817 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Feb 13 20:50:05.515877 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Feb 13 20:50:05.517365 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Feb 13 20:50:05.517397 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Feb 13 20:50:05.518850 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Feb 13 20:50:05.518897 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Feb 13 20:50:05.521033 systemd[1]: dracut-cmdline.service: Deactivated successfully. Feb 13 20:50:05.521071 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Feb 13 20:50:05.523130 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Feb 13 20:50:05.523170 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Feb 13 20:50:05.525377 systemd[1]: initrd-setup-root.service: Deactivated successfully. Feb 13 20:50:05.525418 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Feb 13 20:50:05.536640 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Feb 13 20:50:05.537527 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Feb 13 20:50:05.537580 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Feb 13 20:50:05.539369 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Feb 13 20:50:05.539412 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Feb 13 20:50:05.541770 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Feb 13 20:50:05.542617 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Feb 13 20:50:05.543776 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Feb 13 20:50:05.545823 systemd[1]: Starting initrd-switch-root.service - Switch Root... Feb 13 20:50:05.555149 systemd[1]: Switching root. Feb 13 20:50:05.585217 systemd-journald[237]: Journal stopped Feb 13 20:50:06.284320 systemd-journald[237]: Received SIGTERM from PID 1 (systemd). Feb 13 20:50:06.284373 kernel: SELinux: policy capability network_peer_controls=1 Feb 13 20:50:06.284385 kernel: SELinux: policy capability open_perms=1 Feb 13 20:50:06.284396 kernel: SELinux: policy capability extended_socket_class=1 Feb 13 20:50:06.284405 kernel: SELinux: policy capability always_check_network=0 Feb 13 20:50:06.284419 kernel: SELinux: policy capability cgroup_seclabel=1 Feb 13 20:50:06.284430 kernel: SELinux: policy capability nnp_nosuid_transition=1 Feb 13 20:50:06.284442 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Feb 13 20:50:06.284454 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Feb 13 20:50:06.284464 kernel: audit: type=1403 audit(1739479805.774:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 Feb 13 20:50:06.284474 systemd[1]: Successfully loaded SELinux policy in 31.188ms. 
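A small cross-check on the audit record above: the value inside audit(1739479805.774:2) is a Unix epoch timestamp, so it can be converted and compared against the journal's wall-clock times around the SELinux policy load.

    # Convert the kernel audit timestamp; it lines up with the journal's
    # Feb 13 20:50:05.774 entries on either side of the switch-root.
    from datetime import datetime, timezone

    ts = datetime.fromtimestamp(1739479805.774, tz=timezone.utc)
    print(ts.isoformat())  # 2025-02-13T20:50:05.774000+00:00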
Feb 13 20:50:06.284512 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 9.232ms. Feb 13 20:50:06.284525 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Feb 13 20:50:06.284536 systemd[1]: Detected virtualization kvm. Feb 13 20:50:06.284547 systemd[1]: Detected architecture arm64. Feb 13 20:50:06.284557 systemd[1]: Detected first boot. Feb 13 20:50:06.284569 systemd[1]: Initializing machine ID from VM UUID. Feb 13 20:50:06.284580 zram_generator::config[1065]: No configuration found. Feb 13 20:50:06.284591 systemd[1]: Populated /etc with preset unit settings. Feb 13 20:50:06.284601 systemd[1]: Queued start job for default target multi-user.target. Feb 13 20:50:06.284615 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6. Feb 13 20:50:06.284626 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Feb 13 20:50:06.284637 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Feb 13 20:50:06.284647 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Feb 13 20:50:06.284659 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Feb 13 20:50:06.284670 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Feb 13 20:50:06.284680 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Feb 13 20:50:06.284691 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Feb 13 20:50:06.284707 systemd[1]: Created slice user.slice - User and Session Slice. Feb 13 20:50:06.284717 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Feb 13 20:50:06.284728 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Feb 13 20:50:06.284738 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Feb 13 20:50:06.284748 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Feb 13 20:50:06.284761 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. Feb 13 20:50:06.284771 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Feb 13 20:50:06.284782 systemd[1]: Expecting device dev-ttyAMA0.device - /dev/ttyAMA0... Feb 13 20:50:06.284792 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Feb 13 20:50:06.284803 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Feb 13 20:50:06.284813 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Feb 13 20:50:06.284825 systemd[1]: Reached target remote-fs.target - Remote File Systems. Feb 13 20:50:06.284836 systemd[1]: Reached target slices.target - Slice Units. Feb 13 20:50:06.284849 systemd[1]: Reached target swap.target - Swaps. Feb 13 20:50:06.284861 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Feb 13 20:50:06.284871 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. 
Feb 13 20:50:06.284881 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Feb 13 20:50:06.284892 systemd[1]: Listening on systemd-journald.socket - Journal Socket. Feb 13 20:50:06.284903 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Feb 13 20:50:06.284914 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Feb 13 20:50:06.284924 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Feb 13 20:50:06.284935 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Feb 13 20:50:06.284945 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Feb 13 20:50:06.284957 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Feb 13 20:50:06.284967 systemd[1]: Mounting media.mount - External Media Directory... Feb 13 20:50:06.284978 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Feb 13 20:50:06.284988 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Feb 13 20:50:06.284998 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... Feb 13 20:50:06.285009 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Feb 13 20:50:06.285020 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Feb 13 20:50:06.285031 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Feb 13 20:50:06.285043 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Feb 13 20:50:06.285054 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Feb 13 20:50:06.285064 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Feb 13 20:50:06.285075 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Feb 13 20:50:06.285085 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Feb 13 20:50:06.285096 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Feb 13 20:50:06.285107 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Feb 13 20:50:06.285117 systemd[1]: systemd-journald.service: unit configures an IP firewall, but the local system does not support BPF/cgroup firewalling. Feb 13 20:50:06.285130 systemd[1]: systemd-journald.service: (This warning is only shown for the first unit using IP firewalling.) Feb 13 20:50:06.285141 kernel: loop: module loaded Feb 13 20:50:06.285151 kernel: fuse: init (API version 7.39) Feb 13 20:50:06.285161 systemd[1]: Starting systemd-journald.service - Journal Service... Feb 13 20:50:06.285171 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Feb 13 20:50:06.285199 kernel: ACPI: bus type drm_connector registered Feb 13 20:50:06.285210 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Feb 13 20:50:06.285221 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Feb 13 20:50:06.285233 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Feb 13 20:50:06.285274 systemd-journald[1143]: Collecting audit messages is disabled. 
Feb 13 20:50:06.285299 systemd-journald[1143]: Journal started Feb 13 20:50:06.285320 systemd-journald[1143]: Runtime Journal (/run/log/journal/b229b2329ed04bcb81002cfe56b6a41c) is 5.9M, max 47.3M, 41.4M free. Feb 13 20:50:06.290359 systemd[1]: Started systemd-journald.service - Journal Service. Feb 13 20:50:06.290399 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Feb 13 20:50:06.291360 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Feb 13 20:50:06.292266 systemd[1]: Mounted media.mount - External Media Directory. Feb 13 20:50:06.293063 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Feb 13 20:50:06.293955 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Feb 13 20:50:06.294900 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Feb 13 20:50:06.295938 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Feb 13 20:50:06.297070 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Feb 13 20:50:06.298208 systemd[1]: modprobe@configfs.service: Deactivated successfully. Feb 13 20:50:06.298373 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Feb 13 20:50:06.299540 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Feb 13 20:50:06.299693 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Feb 13 20:50:06.300741 systemd[1]: modprobe@drm.service: Deactivated successfully. Feb 13 20:50:06.300897 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Feb 13 20:50:06.301942 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Feb 13 20:50:06.302090 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Feb 13 20:50:06.303295 systemd[1]: modprobe@fuse.service: Deactivated successfully. Feb 13 20:50:06.303447 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Feb 13 20:50:06.304589 systemd[1]: modprobe@loop.service: Deactivated successfully. Feb 13 20:50:06.304795 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Feb 13 20:50:06.306110 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Feb 13 20:50:06.307278 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Feb 13 20:50:06.308814 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Feb 13 20:50:06.319519 systemd[1]: Reached target network-pre.target - Preparation for Network. Feb 13 20:50:06.328608 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Feb 13 20:50:06.330476 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Feb 13 20:50:06.331301 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Feb 13 20:50:06.333674 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Feb 13 20:50:06.337274 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Feb 13 20:50:06.339648 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Feb 13 20:50:06.341623 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... 
Feb 13 20:50:06.342608 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Feb 13 20:50:06.344615 systemd-journald[1143]: Time spent on flushing to /var/log/journal/b229b2329ed04bcb81002cfe56b6a41c is 18.658ms for 844 entries. Feb 13 20:50:06.344615 systemd-journald[1143]: System Journal (/var/log/journal/b229b2329ed04bcb81002cfe56b6a41c) is 8.0M, max 195.6M, 187.6M free. Feb 13 20:50:06.377412 systemd-journald[1143]: Received client request to flush runtime journal. Feb 13 20:50:06.344697 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Feb 13 20:50:06.349683 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Feb 13 20:50:06.352501 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Feb 13 20:50:06.353678 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Feb 13 20:50:06.354679 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Feb 13 20:50:06.355918 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Feb 13 20:50:06.358460 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Feb 13 20:50:06.368695 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization... Feb 13 20:50:06.370199 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Feb 13 20:50:06.379435 systemd-tmpfiles[1196]: ACLs are not supported, ignoring. Feb 13 20:50:06.379449 systemd-tmpfiles[1196]: ACLs are not supported, ignoring. Feb 13 20:50:06.379723 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Feb 13 20:50:06.383823 udevadm[1203]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation.service, lvm2-activation-early.service not to pull it in. Feb 13 20:50:06.385063 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Feb 13 20:50:06.393705 systemd[1]: Starting systemd-sysusers.service - Create System Users... Feb 13 20:50:06.412039 systemd[1]: Finished systemd-sysusers.service - Create System Users. Feb 13 20:50:06.422619 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Feb 13 20:50:06.434161 systemd-tmpfiles[1217]: ACLs are not supported, ignoring. Feb 13 20:50:06.434184 systemd-tmpfiles[1217]: ACLs are not supported, ignoring. Feb 13 20:50:06.437911 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Feb 13 20:50:06.799968 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Feb 13 20:50:06.812678 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Feb 13 20:50:06.834558 systemd-udevd[1223]: Using default interface naming scheme 'v255'. Feb 13 20:50:06.849005 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Feb 13 20:50:06.857159 systemd[1]: Starting systemd-networkd.service - Network Configuration... Feb 13 20:50:06.876674 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Feb 13 20:50:06.880059 systemd[1]: Found device dev-ttyAMA0.device - /dev/ttyAMA0. 
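The journald flush statistics reported above are easy to sanity-check: 18.658 ms spent flushing 844 entries works out to roughly 22 microseconds per entry.

    # Back-of-envelope check on the flush figures from systemd-journald[1143].
    flush_ms, entries = 18.658, 844
    print(f"{flush_ms / entries * 1000:.1f} us per entry")  # 22.1 us per entry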
Feb 13 20:50:06.890551 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 40 scanned by (udev-worker) (1241) Feb 13 20:50:06.913627 systemd[1]: Started systemd-userdbd.service - User Database Manager. Feb 13 20:50:06.969086 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Feb 13 20:50:06.982557 systemd-networkd[1230]: lo: Link UP Feb 13 20:50:06.982566 systemd-networkd[1230]: lo: Gained carrier Feb 13 20:50:06.983289 systemd-networkd[1230]: Enumeration completed Feb 13 20:50:06.985683 systemd[1]: Started systemd-networkd.service - Network Configuration. Feb 13 20:50:06.986624 systemd-networkd[1230]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Feb 13 20:50:06.986635 systemd-networkd[1230]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Feb 13 20:50:06.987251 systemd-networkd[1230]: eth0: Link UP Feb 13 20:50:06.987263 systemd-networkd[1230]: eth0: Gained carrier Feb 13 20:50:06.987276 systemd-networkd[1230]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Feb 13 20:50:06.995633 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Feb 13 20:50:07.000971 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Feb 13 20:50:07.004285 systemd-networkd[1230]: eth0: DHCPv4 address 10.0.0.7/16, gateway 10.0.0.1 acquired from 10.0.0.1 Feb 13 20:50:07.005923 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization. Feb 13 20:50:07.008161 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes... Feb 13 20:50:07.025707 lvm[1261]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Feb 13 20:50:07.045378 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Feb 13 20:50:07.052868 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes. Feb 13 20:50:07.053935 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Feb 13 20:50:07.062714 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes... Feb 13 20:50:07.067274 lvm[1269]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Feb 13 20:50:07.100531 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes. Feb 13 20:50:07.101995 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Feb 13 20:50:07.102991 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Feb 13 20:50:07.103034 systemd[1]: Reached target local-fs.target - Local File Systems. Feb 13 20:50:07.103827 systemd[1]: Reached target machines.target - Containers. Feb 13 20:50:07.105683 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink). Feb 13 20:50:07.117646 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Feb 13 20:50:07.119718 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Feb 13 20:50:07.120600 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. 
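The eth0 configuration above is driven by a lowest-priority catch-all unit, zz-default.network, which matches any interface and requests DHCP (hence the note about a "potentially unpredictable interface name"). A rough sketch of the general shape of such a unit, written to the admin override location; the stock Flatcar file under /usr/lib/systemd/network may differ in detail:

    # Illustrative catch-all DHCP network unit, not a copy of the shipped file.
    from pathlib import Path

    unit = Path("/etc/systemd/network/zz-default.network")
    unit.write_text(
        "[Match]\n"
        # Match every interface; the zz- prefix sorts it last so more
        # specific units win.
        "Name=*\n"
        "\n"
        "[Network]\n"
        "DHCP=yes\n"
    )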
Feb 13 20:50:07.121549 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Feb 13 20:50:07.123390 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk... Feb 13 20:50:07.126656 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Feb 13 20:50:07.128282 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Feb 13 20:50:07.135246 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Feb 13 20:50:07.142514 kernel: loop0: detected capacity change from 0 to 114432 Feb 13 20:50:07.147420 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Feb 13 20:50:07.148411 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk. Feb 13 20:50:07.154512 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Feb 13 20:50:07.188605 kernel: loop1: detected capacity change from 0 to 114328 Feb 13 20:50:07.225549 kernel: loop2: detected capacity change from 0 to 194096 Feb 13 20:50:07.269524 kernel: loop3: detected capacity change from 0 to 114432 Feb 13 20:50:07.274529 kernel: loop4: detected capacity change from 0 to 114328 Feb 13 20:50:07.279520 kernel: loop5: detected capacity change from 0 to 194096 Feb 13 20:50:07.283168 (sd-merge)[1290]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes'. Feb 13 20:50:07.283647 (sd-merge)[1290]: Merged extensions into '/usr'. Feb 13 20:50:07.287295 systemd[1]: Reloading requested from client PID 1277 ('systemd-sysext') (unit systemd-sysext.service)... Feb 13 20:50:07.287310 systemd[1]: Reloading... Feb 13 20:50:07.332512 zram_generator::config[1319]: No configuration found. Feb 13 20:50:07.400059 ldconfig[1273]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Feb 13 20:50:07.440110 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Feb 13 20:50:07.483246 systemd[1]: Reloading finished in 195 ms. Feb 13 20:50:07.498295 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Feb 13 20:50:07.499689 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Feb 13 20:50:07.521684 systemd[1]: Starting ensure-sysext.service... Feb 13 20:50:07.526511 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Feb 13 20:50:07.532363 systemd[1]: Reloading requested from client PID 1360 ('systemctl') (unit ensure-sysext.service)... Feb 13 20:50:07.532379 systemd[1]: Reloading... Feb 13 20:50:07.548646 systemd-tmpfiles[1361]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Feb 13 20:50:07.548917 systemd-tmpfiles[1361]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Feb 13 20:50:07.550891 systemd-tmpfiles[1361]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Feb 13 20:50:07.551280 systemd-tmpfiles[1361]: ACLs are not supported, ignoring. Feb 13 20:50:07.551338 systemd-tmpfiles[1361]: ACLs are not supported, ignoring. Feb 13 20:50:07.554191 systemd-tmpfiles[1361]: Detected autofs mount point /boot during canonicalization of boot. 
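The (sd-merge) lines above show systemd-sysext locating three extension images and overlaying them onto /usr. The discovery side can be inspected after boot: images are .raw files, or symlinks to them, under /etc/extensions, exactly as the earlier Ignition stage set up for kubernetes.raw. A minimal sketch:

    # List extension images the way the earlier Ignition symlink suggests:
    # /etc/extensions/<name>.raw -> the actual image under /opt/extensions.
    from pathlib import Path

    for link in sorted(Path("/etc/extensions").glob("*.raw")):
        print(link.name, "->", link.resolve())
    # systemd-sysext then mounts each image and overlays it onto /usr and
    # /opt, which is what "Merged extensions into '/usr'" reports above.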
Feb 13 20:50:07.554203 systemd-tmpfiles[1361]: Skipping /boot Feb 13 20:50:07.575980 systemd-tmpfiles[1361]: Detected autofs mount point /boot during canonicalization of boot. Feb 13 20:50:07.575999 systemd-tmpfiles[1361]: Skipping /boot Feb 13 20:50:07.584943 zram_generator::config[1396]: No configuration found. Feb 13 20:50:07.671948 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Feb 13 20:50:07.714737 systemd[1]: Reloading finished in 182 ms. Feb 13 20:50:07.730284 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Feb 13 20:50:07.746999 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Feb 13 20:50:07.749444 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Feb 13 20:50:07.752001 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Feb 13 20:50:07.755392 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Feb 13 20:50:07.757757 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Feb 13 20:50:07.763841 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Feb 13 20:50:07.771748 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Feb 13 20:50:07.774726 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Feb 13 20:50:07.780162 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Feb 13 20:50:07.781778 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Feb 13 20:50:07.783334 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Feb 13 20:50:07.784852 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Feb 13 20:50:07.785005 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Feb 13 20:50:07.786581 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Feb 13 20:50:07.786734 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Feb 13 20:50:07.789353 systemd[1]: modprobe@loop.service: Deactivated successfully. Feb 13 20:50:07.789714 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Feb 13 20:50:07.799198 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Feb 13 20:50:07.804714 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Feb 13 20:50:07.806560 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Feb 13 20:50:07.809028 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Feb 13 20:50:07.813736 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Feb 13 20:50:07.816789 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Feb 13 20:50:07.817780 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Feb 13 20:50:07.820116 systemd[1]: Starting systemd-update-done.service - Update is Completed... Feb 13 20:50:07.825908 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. 
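The docker.socket warning repeated across both configuration reloads above names its own remedy: point the unit at /run/docker.sock instead of the legacy /var/run path. One way to apply that locally, sketched as a drop-in override (the drop-in name is illustrative):

    # Override the socket path suggested by the log's own warning. The empty
    # ListenStream= clears the inherited value before setting the new one,
    # which is how list-type systemd settings are reset in drop-ins.
    from pathlib import Path

    d = Path("/etc/systemd/system/docker.socket.d")
    d.mkdir(parents=True, exist_ok=True)
    (d / "10-run-path.conf").write_text(
        "[Socket]\n"
        "ListenStream=\n"
        "ListenStream=/run/docker.sock\n"
    )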
Feb 13 20:50:07.826069 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Feb 13 20:50:07.827576 systemd[1]: modprobe@drm.service: Deactivated successfully. Feb 13 20:50:07.827719 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Feb 13 20:50:07.829113 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Feb 13 20:50:07.829272 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Feb 13 20:50:07.831990 systemd[1]: Finished ensure-sysext.service. Feb 13 20:50:07.836319 systemd[1]: modprobe@loop.service: Deactivated successfully. Feb 13 20:50:07.839734 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Feb 13 20:50:07.842268 systemd[1]: Finished systemd-update-done.service - Update is Completed. Feb 13 20:50:07.844260 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Feb 13 20:50:07.844346 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Feb 13 20:50:07.856242 augenrules[1486]: No rules Feb 13 20:50:07.856766 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization... Feb 13 20:50:07.858249 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Feb 13 20:50:07.859847 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Feb 13 20:50:07.862324 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Feb 13 20:50:07.880896 systemd-resolved[1436]: Positive Trust Anchors: Feb 13 20:50:07.882656 systemd-resolved[1436]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Feb 13 20:50:07.882695 systemd-resolved[1436]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Feb 13 20:50:07.888690 systemd-resolved[1436]: Defaulting to hostname 'linux'. Feb 13 20:50:07.892263 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Feb 13 20:50:07.893402 systemd[1]: Reached target network.target - Network. Feb 13 20:50:07.894154 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Feb 13 20:50:07.906553 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization. Feb 13 20:50:07.907648 systemd-timesyncd[1485]: Contacted time server 10.0.0.1:123 (10.0.0.1). Feb 13 20:50:07.907704 systemd-timesyncd[1485]: Initial clock synchronization to Thu 2025-02-13 20:50:07.921965 UTC. Feb 13 20:50:07.908034 systemd[1]: Reached target sysinit.target - System Initialization. Feb 13 20:50:07.908972 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Feb 13 20:50:07.909895 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. 
Feb 13 20:50:07.910821 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Feb 13 20:50:07.911794 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Feb 13 20:50:07.911829 systemd[1]: Reached target paths.target - Path Units. Feb 13 20:50:07.912536 systemd[1]: Reached target time-set.target - System Time Set. Feb 13 20:50:07.913437 systemd[1]: Started logrotate.timer - Daily rotation of log files. Feb 13 20:50:07.914414 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Feb 13 20:50:07.915396 systemd[1]: Reached target timers.target - Timer Units. Feb 13 20:50:07.917126 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Feb 13 20:50:07.919458 systemd[1]: Starting docker.socket - Docker Socket for the API... Feb 13 20:50:07.921582 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Feb 13 20:50:07.929548 systemd[1]: Listening on docker.socket - Docker Socket for the API. Feb 13 20:50:07.930382 systemd[1]: Reached target sockets.target - Socket Units. Feb 13 20:50:07.931214 systemd[1]: Reached target basic.target - Basic System. Feb 13 20:50:07.932062 systemd[1]: System is tainted: cgroupsv1 Feb 13 20:50:07.932110 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Feb 13 20:50:07.932133 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Feb 13 20:50:07.933309 systemd[1]: Starting containerd.service - containerd container runtime... Feb 13 20:50:07.935234 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Feb 13 20:50:07.936996 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Feb 13 20:50:07.941661 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Feb 13 20:50:07.942470 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Feb 13 20:50:07.943527 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Feb 13 20:50:07.950752 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Feb 13 20:50:07.951641 jq[1501]: false Feb 13 20:50:07.953424 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Feb 13 20:50:07.958707 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Feb 13 20:50:07.964888 systemd[1]: Starting systemd-logind.service - User Login Management... Feb 13 20:50:07.966965 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Feb 13 20:50:07.970233 systemd[1]: Starting update-engine.service - Update Engine... 
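The "System is tainted: cgroupsv1" note above can be confirmed from userspace: on a unified (v2) hierarchy the cgroup mount root exposes a cgroup.controllers file, while a legacy v1 hierarchy does not.

    # Quick cgroup-hierarchy probe matching the taint message above.
    from pathlib import Path

    v2 = Path("/sys/fs/cgroup/cgroup.controllers").exists()
    print("cgroup v2 (unified)" if v2 else "cgroup v1 (legacy)")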
Feb 13 20:50:07.973882 extend-filesystems[1503]: Found loop3 Feb 13 20:50:07.973882 extend-filesystems[1503]: Found loop4 Feb 13 20:50:07.973882 extend-filesystems[1503]: Found loop5 Feb 13 20:50:07.973882 extend-filesystems[1503]: Found vda Feb 13 20:50:07.973882 extend-filesystems[1503]: Found vda1 Feb 13 20:50:07.973882 extend-filesystems[1503]: Found vda2 Feb 13 20:50:07.973882 extend-filesystems[1503]: Found vda3 Feb 13 20:50:07.973882 extend-filesystems[1503]: Found usr Feb 13 20:50:07.973882 extend-filesystems[1503]: Found vda4 Feb 13 20:50:07.973882 extend-filesystems[1503]: Found vda6 Feb 13 20:50:07.973882 extend-filesystems[1503]: Found vda7 Feb 13 20:50:07.973882 extend-filesystems[1503]: Found vda9 Feb 13 20:50:07.973882 extend-filesystems[1503]: Checking size of /dev/vda9 Feb 13 20:50:07.972791 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Feb 13 20:50:08.012441 extend-filesystems[1503]: Resized partition /dev/vda9 Feb 13 20:50:08.016211 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 1864699 blocks Feb 13 20:50:07.989860 dbus-daemon[1500]: [system] SELinux support is enabled Feb 13 20:50:07.979649 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Feb 13 20:50:08.023581 extend-filesystems[1533]: resize2fs 1.47.1 (20-May-2024) Feb 13 20:50:08.026601 update_engine[1518]: I20250213 20:50:08.025254 1518 main.cc:92] Flatcar Update Engine starting Feb 13 20:50:07.979865 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Feb 13 20:50:08.027011 jq[1522]: true Feb 13 20:50:07.980133 systemd[1]: motdgen.service: Deactivated successfully. Feb 13 20:50:07.984647 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Feb 13 20:50:07.991736 systemd[1]: Started dbus.service - D-Bus System Message Bus. Feb 13 20:50:08.027541 jq[1532]: true Feb 13 20:50:07.994915 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Feb 13 20:50:07.995147 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Feb 13 20:50:08.010379 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Feb 13 20:50:08.010409 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Feb 13 20:50:08.013586 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Feb 13 20:50:08.013606 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Feb 13 20:50:08.023659 (ntainerd)[1534]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Feb 13 20:50:08.031531 update_engine[1518]: I20250213 20:50:08.031298 1518 update_check_scheduler.cc:74] Next update check in 2m37s Feb 13 20:50:08.033684 systemd[1]: Started update-engine.service - Update Engine. Feb 13 20:50:08.037508 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 40 scanned by (udev-worker) (1241) Feb 13 20:50:08.037573 tar[1530]: linux-arm64/helm Feb 13 20:50:08.039131 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. 
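The EXT4 online resize initiated above, from 553472 to 1864699 blocks of 4 KiB, in human units:

    # Convert the kernel's block counts to GiB.
    GiB = 1024 ** 3
    for label, blocks in (("before", 553_472), ("after", 1_864_699)):
        print(f"{label}: {blocks * 4096 / GiB:.2f} GiB")
    # before: 2.11 GiB
    # after:  7.11 GiB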
Feb 13 20:50:08.040807 systemd[1]: Started locksmithd.service - Cluster reboot manager. Feb 13 20:50:08.062614 systemd-logind[1517]: Watching system buttons on /dev/input/event0 (Power Button) Feb 13 20:50:08.064199 systemd-logind[1517]: New seat seat0. Feb 13 20:50:08.066174 systemd[1]: Started systemd-logind.service - User Login Management. Feb 13 20:50:08.069504 kernel: EXT4-fs (vda9): resized filesystem to 1864699 Feb 13 20:50:08.076635 systemd-networkd[1230]: eth0: Gained IPv6LL Feb 13 20:50:08.080196 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Feb 13 20:50:08.082062 systemd[1]: Reached target network-online.target - Network is Online. Feb 13 20:50:08.084754 extend-filesystems[1533]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required Feb 13 20:50:08.084754 extend-filesystems[1533]: old_desc_blocks = 1, new_desc_blocks = 1 Feb 13 20:50:08.084754 extend-filesystems[1533]: The filesystem on /dev/vda9 is now 1864699 (4k) blocks long. Feb 13 20:50:08.090747 extend-filesystems[1503]: Resized filesystem in /dev/vda9 Feb 13 20:50:08.091784 systemd[1]: Starting coreos-metadata.service - QEMU metadata agent... Feb 13 20:50:08.091864 bash[1561]: Updated "/home/core/.ssh/authorized_keys" Feb 13 20:50:08.096809 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Feb 13 20:50:08.102785 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Feb 13 20:50:08.107101 systemd[1]: extend-filesystems.service: Deactivated successfully. Feb 13 20:50:08.107484 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Feb 13 20:50:08.109352 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Feb 13 20:50:08.119053 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met. Feb 13 20:50:08.135768 systemd[1]: coreos-metadata.service: Deactivated successfully. Feb 13 20:50:08.136022 systemd[1]: Finished coreos-metadata.service - QEMU metadata agent. Feb 13 20:50:08.137880 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Feb 13 20:50:08.143844 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Feb 13 20:50:08.160675 locksmithd[1560]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Feb 13 20:50:08.301748 containerd[1534]: time="2025-02-13T20:50:08.301014714Z" level=info msg="starting containerd" revision=174e0d1785eeda18dc2beba45e1d5a188771636b version=v1.7.21 Feb 13 20:50:08.331288 containerd[1534]: time="2025-02-13T20:50:08.331237260Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Feb 13 20:50:08.332697 containerd[1534]: time="2025-02-13T20:50:08.332661348Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.74-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Feb 13 20:50:08.332697 containerd[1534]: time="2025-02-13T20:50:08.332695579Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Feb 13 20:50:08.332787 containerd[1534]: time="2025-02-13T20:50:08.332712835Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." 
type=io.containerd.internal.v1 Feb 13 20:50:08.332881 containerd[1534]: time="2025-02-13T20:50:08.332859928Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1 Feb 13 20:50:08.332906 containerd[1534]: time="2025-02-13T20:50:08.332884991Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1 Feb 13 20:50:08.332976 containerd[1534]: time="2025-02-13T20:50:08.332941922Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1 Feb 13 20:50:08.332976 containerd[1534]: time="2025-02-13T20:50:08.332958617Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Feb 13 20:50:08.333176 containerd[1534]: time="2025-02-13T20:50:08.333154995Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Feb 13 20:50:08.333199 containerd[1534]: time="2025-02-13T20:50:08.333176214Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Feb 13 20:50:08.333199 containerd[1534]: time="2025-02-13T20:50:08.333189827Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1 Feb 13 20:50:08.333239 containerd[1534]: time="2025-02-13T20:50:08.333200516Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Feb 13 20:50:08.333285 containerd[1534]: time="2025-02-13T20:50:08.333268898Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Feb 13 20:50:08.333880 containerd[1534]: time="2025-02-13T20:50:08.333457789Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Feb 13 20:50:08.333880 containerd[1534]: time="2025-02-13T20:50:08.333613650Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Feb 13 20:50:08.333880 containerd[1534]: time="2025-02-13T20:50:08.333629424Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Feb 13 20:50:08.333880 containerd[1534]: time="2025-02-13T20:50:08.333703491Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 Feb 13 20:50:08.333880 containerd[1534]: time="2025-02-13T20:50:08.333741246Z" level=info msg="metadata content store policy set" policy=shared Feb 13 20:50:08.341694 containerd[1534]: time="2025-02-13T20:50:08.341653204Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Feb 13 20:50:08.342524 containerd[1534]: time="2025-02-13T20:50:08.341904632Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Feb 13 20:50:08.342524 containerd[1534]: time="2025-02-13T20:50:08.341932617Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." 
type=io.containerd.lease.v1 Feb 13 20:50:08.342524 containerd[1534]: time="2025-02-13T20:50:08.341947951Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1 Feb 13 20:50:08.342524 containerd[1534]: time="2025-02-13T20:50:08.341962404Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Feb 13 20:50:08.342524 containerd[1534]: time="2025-02-13T20:50:08.342118185Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Feb 13 20:50:08.342524 containerd[1534]: time="2025-02-13T20:50:08.342461816Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Feb 13 20:50:08.342702 containerd[1534]: time="2025-02-13T20:50:08.342611191Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2 Feb 13 20:50:08.342702 containerd[1534]: time="2025-02-13T20:50:08.342629248Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1 Feb 13 20:50:08.342702 containerd[1534]: time="2025-02-13T20:50:08.342645142Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1 Feb 13 20:50:08.342702 containerd[1534]: time="2025-02-13T20:50:08.342659395Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Feb 13 20:50:08.342702 containerd[1534]: time="2025-02-13T20:50:08.342674128Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Feb 13 20:50:08.342702 containerd[1534]: time="2025-02-13T20:50:08.342690983Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Feb 13 20:50:08.342702 containerd[1534]: time="2025-02-13T20:50:08.342705076Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Feb 13 20:50:08.342830 containerd[1534]: time="2025-02-13T20:50:08.342720370Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Feb 13 20:50:08.342830 containerd[1534]: time="2025-02-13T20:50:08.342735023Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Feb 13 20:50:08.342830 containerd[1534]: time="2025-02-13T20:50:08.342746794Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Feb 13 20:50:08.342830 containerd[1534]: time="2025-02-13T20:50:08.342758805Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Feb 13 20:50:08.342830 containerd[1534]: time="2025-02-13T20:50:08.342780825Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Feb 13 20:50:08.342830 containerd[1534]: time="2025-02-13T20:50:08.342795118Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Feb 13 20:50:08.342830 containerd[1534]: time="2025-02-13T20:50:08.342807609Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Feb 13 20:50:08.342830 containerd[1534]: time="2025-02-13T20:50:08.342819620Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." 
type=io.containerd.grpc.v1 Feb 13 20:50:08.342830 containerd[1534]: time="2025-02-13T20:50:08.342831591Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Feb 13 20:50:08.342981 containerd[1534]: time="2025-02-13T20:50:08.342845283Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Feb 13 20:50:08.342981 containerd[1534]: time="2025-02-13T20:50:08.342857894Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Feb 13 20:50:08.342981 containerd[1534]: time="2025-02-13T20:50:08.342870666Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Feb 13 20:50:08.342981 containerd[1534]: time="2025-02-13T20:50:08.342884118Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1 Feb 13 20:50:08.342981 containerd[1534]: time="2025-02-13T20:50:08.342898091Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1 Feb 13 20:50:08.342981 containerd[1534]: time="2025-02-13T20:50:08.342909901Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Feb 13 20:50:08.342981 containerd[1534]: time="2025-02-13T20:50:08.342921592Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1 Feb 13 20:50:08.342981 containerd[1534]: time="2025-02-13T20:50:08.342933243Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Feb 13 20:50:08.342981 containerd[1534]: time="2025-02-13T20:50:08.342949457Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1 Feb 13 20:50:08.342981 containerd[1534]: time="2025-02-13T20:50:08.342969596Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1 Feb 13 20:50:08.342981 containerd[1534]: time="2025-02-13T20:50:08.342981686Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Feb 13 20:50:08.343159 containerd[1534]: time="2025-02-13T20:50:08.342992576Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Feb 13 20:50:08.343159 containerd[1534]: time="2025-02-13T20:50:08.343150319Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Feb 13 20:50:08.343195 containerd[1534]: time="2025-02-13T20:50:08.343169256Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1 Feb 13 20:50:08.343195 containerd[1534]: time="2025-02-13T20:50:08.343180226Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Feb 13 20:50:08.343234 containerd[1534]: time="2025-02-13T20:50:08.343192157Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1 Feb 13 20:50:08.343234 containerd[1534]: time="2025-02-13T20:50:08.343201886Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Feb 13 20:50:08.343234 containerd[1534]: time="2025-02-13T20:50:08.343214817Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." 
type=io.containerd.nri.v1 Feb 13 20:50:08.343234 containerd[1534]: time="2025-02-13T20:50:08.343224866Z" level=info msg="NRI interface is disabled by configuration." Feb 13 20:50:08.343298 containerd[1534]: time="2025-02-13T20:50:08.343235316Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1 Feb 13 20:50:08.344363 containerd[1534]: time="2025-02-13T20:50:08.343602408Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:false] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:false SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Feb 13 20:50:08.344363 containerd[1534]: time="2025-02-13T20:50:08.343665865Z" level=info msg="Connect containerd service" Feb 13 20:50:08.344363 containerd[1534]: time="2025-02-13T20:50:08.343760191Z" level=info msg="using legacy CRI server" Feb 13 20:50:08.344363 containerd[1534]: time="2025-02-13T20:50:08.343767798Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Feb 13 20:50:08.344363 containerd[1534]: time="2025-02-13T20:50:08.343839142Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Feb 13 20:50:08.344588 
containerd[1534]: time="2025-02-13T20:50:08.344457822Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Feb 13 20:50:08.345521 containerd[1534]: time="2025-02-13T20:50:08.344858945Z" level=info msg="Start subscribing containerd event" Feb 13 20:50:08.345521 containerd[1534]: time="2025-02-13T20:50:08.344913034Z" level=info msg="Start recovering state" Feb 13 20:50:08.345521 containerd[1534]: time="2025-02-13T20:50:08.344975250Z" level=info msg="Start event monitor" Feb 13 20:50:08.345521 containerd[1534]: time="2025-02-13T20:50:08.344985139Z" level=info msg="Start snapshots syncer" Feb 13 20:50:08.345521 containerd[1534]: time="2025-02-13T20:50:08.344994147Z" level=info msg="Start cni network conf syncer for default" Feb 13 20:50:08.345521 containerd[1534]: time="2025-02-13T20:50:08.345001554Z" level=info msg="Start streaming server" Feb 13 20:50:08.345660 containerd[1534]: time="2025-02-13T20:50:08.345605060Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Feb 13 20:50:08.345660 containerd[1534]: time="2025-02-13T20:50:08.345644656Z" level=info msg=serving... address=/run/containerd/containerd.sock Feb 13 20:50:08.347064 containerd[1534]: time="2025-02-13T20:50:08.345693941Z" level=info msg="containerd successfully booted in 0.045612s" Feb 13 20:50:08.345839 systemd[1]: Started containerd.service - containerd container runtime. Feb 13 20:50:08.466603 tar[1530]: linux-arm64/LICENSE Feb 13 20:50:08.466603 tar[1530]: linux-arm64/README.md Feb 13 20:50:08.475649 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Feb 13 20:50:08.508385 sshd_keygen[1526]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Feb 13 20:50:08.526904 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Feb 13 20:50:08.548755 systemd[1]: Starting issuegen.service - Generate /run/issue... Feb 13 20:50:08.553597 systemd[1]: issuegen.service: Deactivated successfully. Feb 13 20:50:08.553835 systemd[1]: Finished issuegen.service - Generate /run/issue. Feb 13 20:50:08.556843 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Feb 13 20:50:08.570340 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Feb 13 20:50:08.572827 systemd[1]: Started getty@tty1.service - Getty on tty1. Feb 13 20:50:08.574678 systemd[1]: Started serial-getty@ttyAMA0.service - Serial Getty on ttyAMA0. Feb 13 20:50:08.575845 systemd[1]: Reached target getty.target - Login Prompts. Feb 13 20:50:08.715030 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Feb 13 20:50:08.716274 systemd[1]: Reached target multi-user.target - Multi-User System. Feb 13 20:50:08.719037 (kubelet)[1636]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Feb 13 20:50:08.722652 systemd[1]: Startup finished in 5.632s (kernel) + 2.979s (userspace) = 8.612s. 
Feb 13 20:50:09.192400 kubelet[1636]: E0213 20:50:09.192343 1636 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Feb 13 20:50:09.194828 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Feb 13 20:50:09.195053 systemd[1]: kubelet.service: Failed with result 'exit-code'. Feb 13 20:50:13.449338 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Feb 13 20:50:13.460720 systemd[1]: Started sshd@0-10.0.0.7:22-10.0.0.1:33154.service - OpenSSH per-connection server daemon (10.0.0.1:33154). Feb 13 20:50:13.515543 sshd[1651]: Accepted publickey for core from 10.0.0.1 port 33154 ssh2: RSA SHA256:cMrn/QNbOj3/muQN24sDQmRTV2Hbz1QD+PbuA/uipHg Feb 13 20:50:13.517209 sshd[1651]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 20:50:13.525508 systemd-logind[1517]: New session 1 of user core. Feb 13 20:50:13.526465 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Feb 13 20:50:13.537723 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Feb 13 20:50:13.547797 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Feb 13 20:50:13.550170 systemd[1]: Starting user@500.service - User Manager for UID 500... Feb 13 20:50:13.557152 (systemd)[1657]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Feb 13 20:50:13.628688 systemd[1657]: Queued start job for default target default.target. Feb 13 20:50:13.629059 systemd[1657]: Created slice app.slice - User Application Slice. Feb 13 20:50:13.629085 systemd[1657]: Reached target paths.target - Paths. Feb 13 20:50:13.629096 systemd[1657]: Reached target timers.target - Timers. Feb 13 20:50:13.643603 systemd[1657]: Starting dbus.socket - D-Bus User Message Bus Socket... Feb 13 20:50:13.649637 systemd[1657]: Listening on dbus.socket - D-Bus User Message Bus Socket. Feb 13 20:50:13.649706 systemd[1657]: Reached target sockets.target - Sockets. Feb 13 20:50:13.649719 systemd[1657]: Reached target basic.target - Basic System. Feb 13 20:50:13.649761 systemd[1657]: Reached target default.target - Main User Target. Feb 13 20:50:13.649786 systemd[1657]: Startup finished in 84ms. Feb 13 20:50:13.650095 systemd[1]: Started user@500.service - User Manager for UID 500. Feb 13 20:50:13.651449 systemd[1]: Started session-1.scope - Session 1 of User core. Feb 13 20:50:13.707226 systemd[1]: Started sshd@1-10.0.0.7:22-10.0.0.1:33156.service - OpenSSH per-connection server daemon (10.0.0.1:33156). Feb 13 20:50:13.740540 sshd[1669]: Accepted publickey for core from 10.0.0.1 port 33156 ssh2: RSA SHA256:cMrn/QNbOj3/muQN24sDQmRTV2Hbz1QD+PbuA/uipHg Feb 13 20:50:13.742205 sshd[1669]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 20:50:13.746449 systemd-logind[1517]: New session 2 of user core. Feb 13 20:50:13.757817 systemd[1]: Started session-2.scope - Session 2 of User core. Feb 13 20:50:13.810352 sshd[1669]: pam_unix(sshd:session): session closed for user core Feb 13 20:50:13.822826 systemd[1]: Started sshd@2-10.0.0.7:22-10.0.0.1:33166.service - OpenSSH per-connection server daemon (10.0.0.1:33166). 
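The kubelet exit above is the usual pre-bootstrap crash loop on a kubeadm-style node: the unit keeps restarting until something writes /var/lib/kubelet/config.yaml, which kubeadm init/join normally does, and the same error recurs below until the control plane is set up. A sketch of seeding a bare-bones KubeletConfiguration by hand, where every field value is an assumption for illustration (cgroupfs merely matches the SystemdCgroup:false seen in the containerd config dump earlier):

import pathlib

# Assumed minimal config; kubeadm generates the real one during init/join.
CONFIG = """\
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
cgroupDriver: cgroupfs
staticPodPath: /etc/kubernetes/manifests
"""

path = pathlib.Path("/var/lib/kubelet/config.yaml")
path.parent.mkdir(parents=True, exist_ok=True)
path.write_text(CONFIG)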
Feb 13 20:50:13.823290 systemd[1]: sshd@1-10.0.0.7:22-10.0.0.1:33156.service: Deactivated successfully. Feb 13 20:50:13.825241 systemd-logind[1517]: Session 2 logged out. Waiting for processes to exit. Feb 13 20:50:13.825675 systemd[1]: session-2.scope: Deactivated successfully. Feb 13 20:50:13.827824 systemd-logind[1517]: Removed session 2. Feb 13 20:50:13.855708 sshd[1674]: Accepted publickey for core from 10.0.0.1 port 33166 ssh2: RSA SHA256:cMrn/QNbOj3/muQN24sDQmRTV2Hbz1QD+PbuA/uipHg Feb 13 20:50:13.856935 sshd[1674]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 20:50:13.860852 systemd-logind[1517]: New session 3 of user core. Feb 13 20:50:13.868751 systemd[1]: Started session-3.scope - Session 3 of User core. Feb 13 20:50:13.917762 sshd[1674]: pam_unix(sshd:session): session closed for user core Feb 13 20:50:13.928771 systemd[1]: Started sshd@3-10.0.0.7:22-10.0.0.1:33176.service - OpenSSH per-connection server daemon (10.0.0.1:33176). Feb 13 20:50:13.929162 systemd[1]: sshd@2-10.0.0.7:22-10.0.0.1:33166.service: Deactivated successfully. Feb 13 20:50:13.931723 systemd[1]: session-3.scope: Deactivated successfully. Feb 13 20:50:13.932283 systemd-logind[1517]: Session 3 logged out. Waiting for processes to exit. Feb 13 20:50:13.933204 systemd-logind[1517]: Removed session 3. Feb 13 20:50:13.960885 sshd[1682]: Accepted publickey for core from 10.0.0.1 port 33176 ssh2: RSA SHA256:cMrn/QNbOj3/muQN24sDQmRTV2Hbz1QD+PbuA/uipHg Feb 13 20:50:13.962017 sshd[1682]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 20:50:13.965792 systemd-logind[1517]: New session 4 of user core. Feb 13 20:50:13.977744 systemd[1]: Started session-4.scope - Session 4 of User core. Feb 13 20:50:14.029796 sshd[1682]: pam_unix(sshd:session): session closed for user core Feb 13 20:50:14.038804 systemd[1]: Started sshd@4-10.0.0.7:22-10.0.0.1:33184.service - OpenSSH per-connection server daemon (10.0.0.1:33184). Feb 13 20:50:14.039239 systemd[1]: sshd@3-10.0.0.7:22-10.0.0.1:33176.service: Deactivated successfully. Feb 13 20:50:14.040698 systemd[1]: session-4.scope: Deactivated successfully. Feb 13 20:50:14.041728 systemd-logind[1517]: Session 4 logged out. Waiting for processes to exit. Feb 13 20:50:14.042953 systemd-logind[1517]: Removed session 4. Feb 13 20:50:14.071389 sshd[1690]: Accepted publickey for core from 10.0.0.1 port 33184 ssh2: RSA SHA256:cMrn/QNbOj3/muQN24sDQmRTV2Hbz1QD+PbuA/uipHg Feb 13 20:50:14.072595 sshd[1690]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 20:50:14.076509 systemd-logind[1517]: New session 5 of user core. Feb 13 20:50:14.092777 systemd[1]: Started session-5.scope - Session 5 of User core. Feb 13 20:50:14.162115 sudo[1697]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Feb 13 20:50:14.162430 sudo[1697]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Feb 13 20:50:14.514733 systemd[1]: Starting docker.service - Docker Application Container Engine... Feb 13 20:50:14.514986 (dockerd)[1716]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Feb 13 20:50:14.856712 dockerd[1716]: time="2025-02-13T20:50:14.856578625Z" level=info msg="Starting up" Feb 13 20:50:15.238174 dockerd[1716]: time="2025-02-13T20:50:15.238045630Z" level=info msg="Loading containers: start." 
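The sudo record above ("core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh") is the standard sudo audit format: the invoking user, then semicolon-separated key=value fields. A tiny parsing sketch over that one line, with the sample string copied from the record:

entry = "core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh"
user, _, rest = entry.partition(" : ")
fields = dict(part.strip().split("=", 1) for part in rest.split(" ; "))
print(user, fields)
# -> core {'PWD': '/home/core', 'USER': 'root', 'COMMAND': '/home/core/install.sh'}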
Feb 13 20:50:15.332537 kernel: Initializing XFRM netlink socket Feb 13 20:50:15.398187 systemd-networkd[1230]: docker0: Link UP Feb 13 20:50:15.419817 dockerd[1716]: time="2025-02-13T20:50:15.419772363Z" level=info msg="Loading containers: done." Feb 13 20:50:15.433756 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck889660635-merged.mount: Deactivated successfully. Feb 13 20:50:15.434003 dockerd[1716]: time="2025-02-13T20:50:15.433940726Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Feb 13 20:50:15.434067 dockerd[1716]: time="2025-02-13T20:50:15.434044705Z" level=info msg="Docker daemon" commit=061aa95809be396a6b5542618d8a34b02a21ff77 containerd-snapshotter=false storage-driver=overlay2 version=26.1.0 Feb 13 20:50:15.434167 dockerd[1716]: time="2025-02-13T20:50:15.434147285Z" level=info msg="Daemon has completed initialization" Feb 13 20:50:15.465407 dockerd[1716]: time="2025-02-13T20:50:15.465225749Z" level=info msg="API listen on /run/docker.sock" Feb 13 20:50:15.465669 systemd[1]: Started docker.service - Docker Application Container Engine. Feb 13 20:50:16.116790 containerd[1534]: time="2025-02-13T20:50:16.116742878Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.30.10\"" Feb 13 20:50:16.755671 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount559335481.mount: Deactivated successfully. Feb 13 20:50:18.007062 containerd[1534]: time="2025-02-13T20:50:18.006998239Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.30.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 20:50:18.008258 containerd[1534]: time="2025-02-13T20:50:18.008216698Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.30.10: active requests=0, bytes read=29865209" Feb 13 20:50:18.009119 containerd[1534]: time="2025-02-13T20:50:18.009056977Z" level=info msg="ImageCreate event name:\"sha256:deaeae5e8513d8c5921aee5b515f0fc2ac63b71dfe965318f71eb49468e74a4f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 20:50:18.011856 containerd[1534]: time="2025-02-13T20:50:18.011810724Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:63b2b4b4e9b5dcb5b1b6cec9d5f5f538791a40cd8cb273ef530e6d6535aa0b43\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 20:50:18.013214 containerd[1534]: time="2025-02-13T20:50:18.013160084Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.30.10\" with image id \"sha256:deaeae5e8513d8c5921aee5b515f0fc2ac63b71dfe965318f71eb49468e74a4f\", repo tag \"registry.k8s.io/kube-apiserver:v1.30.10\", repo digest \"registry.k8s.io/kube-apiserver@sha256:63b2b4b4e9b5dcb5b1b6cec9d5f5f538791a40cd8cb273ef530e6d6535aa0b43\", size \"29862007\" in 1.896372863s" Feb 13 20:50:18.013214 containerd[1534]: time="2025-02-13T20:50:18.013202585Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.30.10\" returns image reference \"sha256:deaeae5e8513d8c5921aee5b515f0fc2ac63b71dfe965318f71eb49468e74a4f\"" Feb 13 20:50:18.034141 containerd[1534]: time="2025-02-13T20:50:18.034032514Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.30.10\"" Feb 13 20:50:19.445295 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Feb 13 20:50:19.460759 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... 
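With "API listen on /run/docker.sock" logged above, the daemon is answering Engine API requests on its unix socket. A quick liveness check against that socket; GET /_ping is a stable Engine endpoint, and the sketch assumes the default socket path and root-level access:

import socket

# Speak plain HTTP/1.0 over the unix socket; a healthy daemon answers "OK".
s = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
s.connect("/run/docker.sock")
s.sendall(b"GET /_ping HTTP/1.0\r\nHost: docker\r\n\r\n")
print(s.recv(4096).decode())
s.close()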
Feb 13 20:50:19.553517 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Feb 13 20:50:19.557566 (kubelet)[1944]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Feb 13 20:50:19.596091 kubelet[1944]: E0213 20:50:19.596005 1944 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Feb 13 20:50:19.599032 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Feb 13 20:50:19.599212 systemd[1]: kubelet.service: Failed with result 'exit-code'. Feb 13 20:50:20.268119 containerd[1534]: time="2025-02-13T20:50:20.268061327Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.30.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 20:50:20.274643 containerd[1534]: time="2025-02-13T20:50:20.274603097Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.30.10: active requests=0, bytes read=26898596" Feb 13 20:50:20.276073 containerd[1534]: time="2025-02-13T20:50:20.275742653Z" level=info msg="ImageCreate event name:\"sha256:e31753dd49b05da8fcb7deb26f2a5942a6747a0e6d4492f3dc8544123b97a3a2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 20:50:20.278560 containerd[1534]: time="2025-02-13T20:50:20.278511048Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:99b3336343ea48be24f1e64774825e9f8d5170bd2ed482ff336548eb824f5f58\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 20:50:20.280201 containerd[1534]: time="2025-02-13T20:50:20.280156575Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.30.10\" with image id \"sha256:e31753dd49b05da8fcb7deb26f2a5942a6747a0e6d4492f3dc8544123b97a3a2\", repo tag \"registry.k8s.io/kube-controller-manager:v1.30.10\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:99b3336343ea48be24f1e64774825e9f8d5170bd2ed482ff336548eb824f5f58\", size \"28302323\" in 2.246080321s" Feb 13 20:50:20.280201 containerd[1534]: time="2025-02-13T20:50:20.280195231Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.30.10\" returns image reference \"sha256:e31753dd49b05da8fcb7deb26f2a5942a6747a0e6d4492f3dc8544123b97a3a2\"" Feb 13 20:50:20.299747 containerd[1534]: time="2025-02-13T20:50:20.299712015Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.30.10\"" Feb 13 20:50:21.468403 containerd[1534]: time="2025-02-13T20:50:21.468332666Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.30.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 20:50:21.468786 containerd[1534]: time="2025-02-13T20:50:21.468712375Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.30.10: active requests=0, bytes read=16164936" Feb 13 20:50:21.469688 containerd[1534]: time="2025-02-13T20:50:21.469647981Z" level=info msg="ImageCreate event name:\"sha256:ea60c047fad7c01bf50f1f0259a4aeea2cc4401850d5a95802cc1d07d9021eb4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 20:50:21.472568 containerd[1534]: time="2025-02-13T20:50:21.472526667Z" level=info msg="ImageCreate event 
name:\"registry.k8s.io/kube-scheduler@sha256:cf7eb256192f1f51093fe278c209a9368f0675eb61ed01b148af47d2f21c002d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 20:50:21.473698 containerd[1534]: time="2025-02-13T20:50:21.473653948Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.30.10\" with image id \"sha256:ea60c047fad7c01bf50f1f0259a4aeea2cc4401850d5a95802cc1d07d9021eb4\", repo tag \"registry.k8s.io/kube-scheduler:v1.30.10\", repo digest \"registry.k8s.io/kube-scheduler@sha256:cf7eb256192f1f51093fe278c209a9368f0675eb61ed01b148af47d2f21c002d\", size \"17568681\" in 1.173902877s" Feb 13 20:50:21.473698 containerd[1534]: time="2025-02-13T20:50:21.473695644Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.30.10\" returns image reference \"sha256:ea60c047fad7c01bf50f1f0259a4aeea2cc4401850d5a95802cc1d07d9021eb4\"" Feb 13 20:50:21.492195 containerd[1534]: time="2025-02-13T20:50:21.492156266Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.30.10\"" Feb 13 20:50:22.697471 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3649496610.mount: Deactivated successfully. Feb 13 20:50:22.895515 containerd[1534]: time="2025-02-13T20:50:22.895447229Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.30.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 20:50:22.896241 containerd[1534]: time="2025-02-13T20:50:22.896188901Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.30.10: active requests=0, bytes read=25663372" Feb 13 20:50:22.897715 containerd[1534]: time="2025-02-13T20:50:22.897675206Z" level=info msg="ImageCreate event name:\"sha256:fa8af75a6512774cc93242474a9841ace82a7d0646001149fc65d92a8bb0c00a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 20:50:22.899758 containerd[1534]: time="2025-02-13T20:50:22.899726878Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:d112e804e548fce28d9f1e3282c9ce54e374451e6a2c41b1ca9d7fca5d1fcc48\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 20:50:22.900548 containerd[1534]: time="2025-02-13T20:50:22.900480835Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.30.10\" with image id \"sha256:fa8af75a6512774cc93242474a9841ace82a7d0646001149fc65d92a8bb0c00a\", repo tag \"registry.k8s.io/kube-proxy:v1.30.10\", repo digest \"registry.k8s.io/kube-proxy@sha256:d112e804e548fce28d9f1e3282c9ce54e374451e6a2c41b1ca9d7fca5d1fcc48\", size \"25662389\" in 1.408281793s" Feb 13 20:50:22.900548 containerd[1534]: time="2025-02-13T20:50:22.900531053Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.30.10\" returns image reference \"sha256:fa8af75a6512774cc93242474a9841ace82a7d0646001149fc65d92a8bb0c00a\"" Feb 13 20:50:22.919122 containerd[1534]: time="2025-02-13T20:50:22.919084058Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\"" Feb 13 20:50:23.587519 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3279566487.mount: Deactivated successfully. 
Feb 13 20:50:24.711517 containerd[1534]: time="2025-02-13T20:50:24.711235035Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 20:50:24.712537 containerd[1534]: time="2025-02-13T20:50:24.712496401Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.1: active requests=0, bytes read=16485383" Feb 13 20:50:24.713514 containerd[1534]: time="2025-02-13T20:50:24.713461753Z" level=info msg="ImageCreate event name:\"sha256:2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 20:50:24.716436 containerd[1534]: time="2025-02-13T20:50:24.716394178Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 20:50:24.718202 containerd[1534]: time="2025-02-13T20:50:24.718165869Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.1\" with image id \"sha256:2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.1\", repo digest \"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\", size \"16482581\" in 1.799039796s" Feb 13 20:50:24.718242 containerd[1534]: time="2025-02-13T20:50:24.718200400Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\" returns image reference \"sha256:2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93\"" Feb 13 20:50:24.736502 containerd[1534]: time="2025-02-13T20:50:24.736453885Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\"" Feb 13 20:50:25.291997 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3256424120.mount: Deactivated successfully. 
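The mount unit names above ("var-lib-containerd-tmpmounts-containerd\x2dmount...") look garbled but are systemd's path escaping: '/' becomes '-', and a literal '-' inside a component becomes \x2d. A toy version of the escape, close enough to show where those sequences come from (the real rules also special-case ':' and a leading '.'):

def systemd_escape_path(path):
    # '/' -> '-', keep alphanumerics plus '_' and '.', hex-escape the rest.
    out = []
    for ch in path.strip("/"):
        if ch == "/":
            out.append("-")
        elif ch.isalnum() or ch in "_.":
            out.append(ch)
        else:
            out.append(f"\\x{ord(ch):02x}")
    return "".join(out)

print(systemd_escape_path("/var/lib/containerd/tmpmounts/containerd-mount123"))
# -> var-lib-containerd-tmpmounts-containerd\x2dmount123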
Feb 13 20:50:25.296585 containerd[1534]: time="2025-02-13T20:50:25.296511769Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 20:50:25.297354 containerd[1534]: time="2025-02-13T20:50:25.297314731Z" level=info msg="stop pulling image registry.k8s.io/pause:3.9: active requests=0, bytes read=268823" Feb 13 20:50:25.298593 containerd[1534]: time="2025-02-13T20:50:25.298544703Z" level=info msg="ImageCreate event name:\"sha256:829e9de338bd5fdd3f16f68f83a9fb288fbc8453e881e5d5cfd0f6f2ff72b43e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 20:50:25.300981 containerd[1534]: time="2025-02-13T20:50:25.300939387Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 20:50:25.301715 containerd[1534]: time="2025-02-13T20:50:25.301685812Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.9\" with image id \"sha256:829e9de338bd5fdd3f16f68f83a9fb288fbc8453e881e5d5cfd0f6f2ff72b43e\", repo tag \"registry.k8s.io/pause:3.9\", repo digest \"registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097\", size \"268051\" in 565.198918ms" Feb 13 20:50:25.301756 containerd[1534]: time="2025-02-13T20:50:25.301720103Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\" returns image reference \"sha256:829e9de338bd5fdd3f16f68f83a9fb288fbc8453e881e5d5cfd0f6f2ff72b43e\"" Feb 13 20:50:25.319210 containerd[1534]: time="2025-02-13T20:50:25.319169136Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.12-0\"" Feb 13 20:50:25.916824 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2446862310.mount: Deactivated successfully. Feb 13 20:50:28.979322 containerd[1534]: time="2025-02-13T20:50:28.979002933Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.12-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 20:50:28.980309 containerd[1534]: time="2025-02-13T20:50:28.980053755Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.12-0: active requests=0, bytes read=66191474" Feb 13 20:50:28.981046 containerd[1534]: time="2025-02-13T20:50:28.981005992Z" level=info msg="ImageCreate event name:\"sha256:014faa467e29798aeef733fe6d1a3b5e382688217b053ad23410e6cccd5d22fd\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 20:50:28.984264 containerd[1534]: time="2025-02-13T20:50:28.984217632Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:44a8e24dcbba3470ee1fee21d5e88d128c936e9b55d4bc51fbef8086f8ed123b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 20:50:28.986513 containerd[1534]: time="2025-02-13T20:50:28.986458070Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.12-0\" with image id \"sha256:014faa467e29798aeef733fe6d1a3b5e382688217b053ad23410e6cccd5d22fd\", repo tag \"registry.k8s.io/etcd:3.5.12-0\", repo digest \"registry.k8s.io/etcd@sha256:44a8e24dcbba3470ee1fee21d5e88d128c936e9b55d4bc51fbef8086f8ed123b\", size \"66189079\" in 3.667250002s" Feb 13 20:50:28.986513 containerd[1534]: time="2025-02-13T20:50:28.986505682Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.12-0\" returns image reference \"sha256:014faa467e29798aeef733fe6d1a3b5e382688217b053ad23410e6cccd5d22fd\"" Feb 13 20:50:29.849428 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. 
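The pull records in this stretch carry enough data for a rough registry-throughput estimate, pairing each Pulled line's byte count with its duration (the sizes are whatever containerd reports, so treat the result as an approximation):

# Numbers copied from the Pulled lines above.
pulls = {
    "kube-apiserver:v1.30.10": (29862007, 1.896372863),
    "kube-proxy:v1.30.10": (25662389, 1.408281793),
    "etcd:3.5.12-0": (66189079, 3.667250002),
}
for image, (size_bytes, seconds) in pulls.items():
    print(f"{image}: {size_bytes / seconds / (1 << 20):.1f} MiB/s")
# roughly 15-17 MiB/s across the three pulls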
Feb 13 20:50:29.858638 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Feb 13 20:50:30.067395 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Feb 13 20:50:30.071526 (kubelet)[2180]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Feb 13 20:50:30.108196 kubelet[2180]: E0213 20:50:30.108061 2180 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Feb 13 20:50:30.110647 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Feb 13 20:50:30.110817 systemd[1]: kubelet.service: Failed with result 'exit-code'. Feb 13 20:50:34.586033 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Feb 13 20:50:34.598736 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Feb 13 20:50:34.613747 systemd[1]: Reloading requested from client PID 2198 ('systemctl') (unit session-5.scope)... Feb 13 20:50:34.613763 systemd[1]: Reloading... Feb 13 20:50:34.672599 zram_generator::config[2240]: No configuration found. Feb 13 20:50:34.801943 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Feb 13 20:50:34.850056 systemd[1]: Reloading finished in 235 ms. Feb 13 20:50:34.892375 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Feb 13 20:50:34.892438 systemd[1]: kubelet.service: Failed with result 'signal'. Feb 13 20:50:34.892685 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Feb 13 20:50:34.894693 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Feb 13 20:50:34.984819 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Feb 13 20:50:34.989209 (kubelet)[2295]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Feb 13 20:50:35.024740 kubelet[2295]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Feb 13 20:50:35.024740 kubelet[2295]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Feb 13 20:50:35.024740 kubelet[2295]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
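The deprecation warnings above mean the flags still work but kubelet wants them in the config file instead. For the two that have config-file equivalents, the KubeletConfiguration v1beta1 field names are sketched below; the values are placeholders, though the volume plugin path matches the Flexvolume directory this kubelet probes later in the log:

# Flag -> (config file key, illustrative value).
flag_to_config = {
    "--container-runtime-endpoint": (
        "containerRuntimeEndpoint",
        "unix:///run/containerd/containerd.sock",
    ),
    "--volume-plugin-dir": (
        "volumePluginDir",
        "/opt/libexec/kubernetes/kubelet-plugins/volume/exec/",
    ),
}
for flag, (key, value) in flag_to_config.items():
    print(f"{flag} -> {key}: {value}")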
Feb 13 20:50:35.025130 kubelet[2295]: I0213 20:50:35.024831 2295 server.go:205] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Feb 13 20:50:36.134770 kubelet[2295]: I0213 20:50:36.134729 2295 server.go:484] "Kubelet version" kubeletVersion="v1.30.1" Feb 13 20:50:36.134770 kubelet[2295]: I0213 20:50:36.134758 2295 server.go:486] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Feb 13 20:50:36.135174 kubelet[2295]: I0213 20:50:36.134952 2295 server.go:927] "Client rotation is on, will bootstrap in background" Feb 13 20:50:36.166480 kubelet[2295]: E0213 20:50:36.166444 2295 certificate_manager.go:562] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post "https://10.0.0.7:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 10.0.0.7:6443: connect: connection refused Feb 13 20:50:36.166705 kubelet[2295]: I0213 20:50:36.166502 2295 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Feb 13 20:50:36.178539 kubelet[2295]: I0213 20:50:36.178505 2295 server.go:742] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Feb 13 20:50:36.179789 kubelet[2295]: I0213 20:50:36.179736 2295 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Feb 13 20:50:36.179941 kubelet[2295]: I0213 20:50:36.179779 2295 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"cgroupfs","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null} Feb 13 20:50:36.180024 kubelet[2295]: I0213 20:50:36.180008 2295 topology_manager.go:138] "Creating topology manager with none policy" Feb 13 20:50:36.180024 kubelet[2295]: I0213 20:50:36.180017 2295 container_manager_linux.go:301] "Creating device plugin manager" Feb 13 20:50:36.180291 kubelet[2295]: I0213 20:50:36.180267 2295 state_mem.go:36] "Initialized new in-memory state store" Feb 13 
20:50:36.181092 kubelet[2295]: I0213 20:50:36.181071 2295 kubelet.go:400] "Attempting to sync node with API server" Feb 13 20:50:36.181092 kubelet[2295]: I0213 20:50:36.181091 2295 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests" Feb 13 20:50:36.181395 kubelet[2295]: I0213 20:50:36.181385 2295 kubelet.go:312] "Adding apiserver pod source" Feb 13 20:50:36.181604 kubelet[2295]: I0213 20:50:36.181589 2295 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Feb 13 20:50:36.182095 kubelet[2295]: W0213 20:50:36.181857 2295 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.7:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.7:6443: connect: connection refused Feb 13 20:50:36.182095 kubelet[2295]: E0213 20:50:36.181913 2295 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://10.0.0.7:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.7:6443: connect: connection refused Feb 13 20:50:36.182095 kubelet[2295]: W0213 20:50:36.182029 2295 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.7:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.0.0.7:6443: connect: connection refused Feb 13 20:50:36.182095 kubelet[2295]: E0213 20:50:36.182069 2295 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.0.0.7:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.0.0.7:6443: connect: connection refused Feb 13 20:50:36.182656 kubelet[2295]: I0213 20:50:36.182638 2295 kuberuntime_manager.go:261] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Feb 13 20:50:36.183005 kubelet[2295]: I0213 20:50:36.182981 2295 kubelet.go:815] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Feb 13 20:50:36.183039 kubelet[2295]: W0213 20:50:36.183023 2295 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. 
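The reflector failures above are the expected bootstrap deadlock resolving itself: this kubelet both needs the API server at 10.0.0.7:6443 and is the thing that will start it, as a static pod from /etc/kubernetes/manifests, so the client simply retries until the socket opens. A watcher in the same spirit, with the endpoint taken from this log and certificate verification deliberately skipped since the CA may not be trusted yet:

import ssl, time, urllib.request

ctx = ssl.create_default_context()
ctx.check_hostname = False
ctx.verify_mode = ssl.CERT_NONE

# Poll /healthz until the API server's static pod comes up.
while True:
    try:
        with urllib.request.urlopen("https://10.0.0.7:6443/healthz",
                                    context=ctx, timeout=2) as resp:
            print("healthz:", resp.status)
            break
    except OSError as exc:  # URLError (incl. connection refused) is an OSError
        print("not ready:", exc)
        time.sleep(1)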
Feb 13 20:50:36.184364 kubelet[2295]: I0213 20:50:36.183770 2295 server.go:1264] "Started kubelet" Feb 13 20:50:36.184364 kubelet[2295]: I0213 20:50:36.183929 2295 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Feb 13 20:50:36.185137 kubelet[2295]: I0213 20:50:36.185109 2295 server.go:455] "Adding debug handlers to kubelet server" Feb 13 20:50:36.189305 kubelet[2295]: I0213 20:50:36.187031 2295 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Feb 13 20:50:36.189699 kubelet[2295]: I0213 20:50:36.189641 2295 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Feb 13 20:50:36.189862 kubelet[2295]: I0213 20:50:36.189843 2295 server.go:227] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Feb 13 20:50:36.190672 kubelet[2295]: E0213 20:50:36.190645 2295 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" Feb 13 20:50:36.190945 kubelet[2295]: I0213 20:50:36.190936 2295 volume_manager.go:291] "Starting Kubelet Volume Manager" Feb 13 20:50:36.191088 kubelet[2295]: I0213 20:50:36.191074 2295 desired_state_of_world_populator.go:149] "Desired state populator starts to run" Feb 13 20:50:36.193251 kubelet[2295]: E0213 20:50:36.189965 2295 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.7:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.7:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.1823dfb1e1afd662 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-02-13 20:50:36.183746146 +0000 UTC m=+1.191363150,LastTimestamp:2025-02-13 20:50:36.183746146 +0000 UTC m=+1.191363150,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Feb 13 20:50:36.193527 kubelet[2295]: W0213 20:50:36.193472 2295 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.7:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.7:6443: connect: connection refused Feb 13 20:50:36.193566 kubelet[2295]: E0213 20:50:36.193532 2295 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://10.0.0.7:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.7:6443: connect: connection refused Feb 13 20:50:36.193792 kubelet[2295]: I0213 20:50:36.193779 2295 reconciler.go:26] "Reconciler: start to sync state" Feb 13 20:50:36.194669 kubelet[2295]: E0213 20:50:36.194642 2295 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.7:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.7:6443: connect: connection refused" interval="200ms" Feb 13 20:50:36.199903 kubelet[2295]: I0213 20:50:36.199311 2295 factory.go:221] Registration of the systemd container factory successfully Feb 13 20:50:36.199903 kubelet[2295]: I0213 20:50:36.199390 2295 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix 
/var/run/crio/crio.sock: connect: no such file or directory Feb 13 20:50:36.200654 kubelet[2295]: I0213 20:50:36.200631 2295 factory.go:221] Registration of the containerd container factory successfully Feb 13 20:50:36.201835 kubelet[2295]: E0213 20:50:36.201814 2295 kubelet.go:1467] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Feb 13 20:50:36.210368 kubelet[2295]: I0213 20:50:36.210335 2295 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Feb 13 20:50:36.211309 kubelet[2295]: I0213 20:50:36.211289 2295 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Feb 13 20:50:36.211537 kubelet[2295]: I0213 20:50:36.211526 2295 status_manager.go:217] "Starting to sync pod status with apiserver" Feb 13 20:50:36.211607 kubelet[2295]: I0213 20:50:36.211598 2295 kubelet.go:2337] "Starting kubelet main sync loop" Feb 13 20:50:36.211701 kubelet[2295]: E0213 20:50:36.211686 2295 kubelet.go:2361] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Feb 13 20:50:36.212281 kubelet[2295]: W0213 20:50:36.212243 2295 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.7:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.7:6443: connect: connection refused Feb 13 20:50:36.212383 kubelet[2295]: E0213 20:50:36.212369 2295 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://10.0.0.7:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.7:6443: connect: connection refused Feb 13 20:50:36.218663 kubelet[2295]: I0213 20:50:36.218625 2295 cpu_manager.go:214] "Starting CPU manager" policy="none" Feb 13 20:50:36.218663 kubelet[2295]: I0213 20:50:36.218646 2295 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Feb 13 20:50:36.218663 kubelet[2295]: I0213 20:50:36.218665 2295 state_mem.go:36] "Initialized new in-memory state store" Feb 13 20:50:36.293172 kubelet[2295]: I0213 20:50:36.293128 2295 kubelet_node_status.go:73] "Attempting to register node" node="localhost" Feb 13 20:50:36.293584 kubelet[2295]: E0213 20:50:36.293536 2295 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.0.0.7:6443/api/v1/nodes\": dial tcp 10.0.0.7:6443: connect: connection refused" node="localhost" Feb 13 20:50:36.311850 kubelet[2295]: E0213 20:50:36.311799 2295 kubelet.go:2361] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Feb 13 20:50:36.327024 kubelet[2295]: I0213 20:50:36.326972 2295 policy_none.go:49] "None policy: Start" Feb 13 20:50:36.327627 kubelet[2295]: I0213 20:50:36.327607 2295 memory_manager.go:170] "Starting memorymanager" policy="None" Feb 13 20:50:36.327687 kubelet[2295]: I0213 20:50:36.327638 2295 state_mem.go:35] "Initializing new in-memory state store" Feb 13 20:50:36.332011 kubelet[2295]: I0213 20:50:36.331345 2295 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Feb 13 20:50:36.332011 kubelet[2295]: I0213 20:50:36.331531 2295 container_log_manager.go:186] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Feb 13 20:50:36.332011 kubelet[2295]: I0213 20:50:36.331623 2295 plugin_manager.go:118] 
"Starting Kubelet Plugin Manager" Feb 13 20:50:36.333132 kubelet[2295]: E0213 20:50:36.332858 2295 eviction_manager.go:282] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Feb 13 20:50:36.396544 kubelet[2295]: E0213 20:50:36.395540 2295 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.7:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.7:6443: connect: connection refused" interval="400ms" Feb 13 20:50:36.494853 kubelet[2295]: I0213 20:50:36.494817 2295 kubelet_node_status.go:73] "Attempting to register node" node="localhost" Feb 13 20:50:36.495184 kubelet[2295]: E0213 20:50:36.495134 2295 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.0.0.7:6443/api/v1/nodes\": dial tcp 10.0.0.7:6443: connect: connection refused" node="localhost" Feb 13 20:50:36.512379 kubelet[2295]: I0213 20:50:36.512298 2295 topology_manager.go:215] "Topology Admit Handler" podUID="6eb93ee6de7692767faad11320cd4af4" podNamespace="kube-system" podName="kube-apiserver-localhost" Feb 13 20:50:36.513347 kubelet[2295]: I0213 20:50:36.513317 2295 topology_manager.go:215] "Topology Admit Handler" podUID="dd3721fb1a67092819e35b40473f4063" podNamespace="kube-system" podName="kube-controller-manager-localhost" Feb 13 20:50:36.514103 kubelet[2295]: I0213 20:50:36.514051 2295 topology_manager.go:215] "Topology Admit Handler" podUID="8d610d6c43052dbc8df47eb68906a982" podNamespace="kube-system" podName="kube-scheduler-localhost" Feb 13 20:50:36.597317 kubelet[2295]: I0213 20:50:36.597280 2295 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/dd3721fb1a67092819e35b40473f4063-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"dd3721fb1a67092819e35b40473f4063\") " pod="kube-system/kube-controller-manager-localhost" Feb 13 20:50:36.597317 kubelet[2295]: I0213 20:50:36.597316 2295 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/dd3721fb1a67092819e35b40473f4063-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"dd3721fb1a67092819e35b40473f4063\") " pod="kube-system/kube-controller-manager-localhost" Feb 13 20:50:36.597446 kubelet[2295]: I0213 20:50:36.597336 2295 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/dd3721fb1a67092819e35b40473f4063-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"dd3721fb1a67092819e35b40473f4063\") " pod="kube-system/kube-controller-manager-localhost" Feb 13 20:50:36.597446 kubelet[2295]: I0213 20:50:36.597360 2295 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/8d610d6c43052dbc8df47eb68906a982-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"8d610d6c43052dbc8df47eb68906a982\") " pod="kube-system/kube-scheduler-localhost" Feb 13 20:50:36.597446 kubelet[2295]: I0213 20:50:36.597387 2295 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/6eb93ee6de7692767faad11320cd4af4-ca-certs\") pod \"kube-apiserver-localhost\" (UID: 
\"6eb93ee6de7692767faad11320cd4af4\") " pod="kube-system/kube-apiserver-localhost" Feb 13 20:50:36.597446 kubelet[2295]: I0213 20:50:36.597403 2295 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/6eb93ee6de7692767faad11320cd4af4-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"6eb93ee6de7692767faad11320cd4af4\") " pod="kube-system/kube-apiserver-localhost" Feb 13 20:50:36.597446 kubelet[2295]: I0213 20:50:36.597419 2295 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/6eb93ee6de7692767faad11320cd4af4-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"6eb93ee6de7692767faad11320cd4af4\") " pod="kube-system/kube-apiserver-localhost" Feb 13 20:50:36.597573 kubelet[2295]: I0213 20:50:36.597434 2295 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/dd3721fb1a67092819e35b40473f4063-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"dd3721fb1a67092819e35b40473f4063\") " pod="kube-system/kube-controller-manager-localhost" Feb 13 20:50:36.597573 kubelet[2295]: I0213 20:50:36.597452 2295 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/dd3721fb1a67092819e35b40473f4063-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"dd3721fb1a67092819e35b40473f4063\") " pod="kube-system/kube-controller-manager-localhost" Feb 13 20:50:36.796678 kubelet[2295]: E0213 20:50:36.796620 2295 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.7:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.7:6443: connect: connection refused" interval="800ms" Feb 13 20:50:36.816951 kubelet[2295]: E0213 20:50:36.816924 2295 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 20:50:36.817589 containerd[1534]: time="2025-02-13T20:50:36.817551424Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:6eb93ee6de7692767faad11320cd4af4,Namespace:kube-system,Attempt:0,}" Feb 13 20:50:36.818757 kubelet[2295]: E0213 20:50:36.818678 2295 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 20:50:36.819106 kubelet[2295]: E0213 20:50:36.818893 2295 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 20:50:36.819251 containerd[1534]: time="2025-02-13T20:50:36.819216671Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:8d610d6c43052dbc8df47eb68906a982,Namespace:kube-system,Attempt:0,}" Feb 13 20:50:36.819722 containerd[1534]: time="2025-02-13T20:50:36.819480910Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:dd3721fb1a67092819e35b40473f4063,Namespace:kube-system,Attempt:0,}" Feb 13 20:50:36.896940 kubelet[2295]: I0213 20:50:36.896912 2295 kubelet_node_status.go:73] "Attempting to register node" 
node="localhost" Feb 13 20:50:36.897254 kubelet[2295]: E0213 20:50:36.897233 2295 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.0.0.7:6443/api/v1/nodes\": dial tcp 10.0.0.7:6443: connect: connection refused" node="localhost" Feb 13 20:50:37.269737 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1336204324.mount: Deactivated successfully. Feb 13 20:50:37.281596 containerd[1534]: time="2025-02-13T20:50:37.281517769Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Feb 13 20:50:37.283401 containerd[1534]: time="2025-02-13T20:50:37.283375388Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=269175" Feb 13 20:50:37.284178 containerd[1534]: time="2025-02-13T20:50:37.284143695Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Feb 13 20:50:37.284950 containerd[1534]: time="2025-02-13T20:50:37.284927604Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Feb 13 20:50:37.285375 containerd[1534]: time="2025-02-13T20:50:37.285347903Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Feb 13 20:50:37.286002 containerd[1534]: time="2025-02-13T20:50:37.285827450Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Feb 13 20:50:37.286640 containerd[1534]: time="2025-02-13T20:50:37.286368085Z" level=info msg="ImageCreate event name:\"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Feb 13 20:50:37.292294 containerd[1534]: time="2025-02-13T20:50:37.291971506Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Feb 13 20:50:37.293531 containerd[1534]: time="2025-02-13T20:50:37.293347857Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 475.715982ms" Feb 13 20:50:37.294049 containerd[1534]: time="2025-02-13T20:50:37.294023152Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 474.744791ms" Feb 13 20:50:37.298869 containerd[1534]: time="2025-02-13T20:50:37.298843303Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest 
\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 479.256017ms" Feb 13 20:50:37.300209 kubelet[2295]: W0213 20:50:37.300153 2295 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.7:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.0.0.7:6443: connect: connection refused Feb 13 20:50:37.300517 kubelet[2295]: E0213 20:50:37.300216 2295 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.0.0.7:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.0.0.7:6443: connect: connection refused Feb 13 20:50:37.367305 kubelet[2295]: W0213 20:50:37.363395 2295 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.7:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.7:6443: connect: connection refused Feb 13 20:50:37.367305 kubelet[2295]: E0213 20:50:37.363439 2295 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://10.0.0.7:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.7:6443: connect: connection refused Feb 13 20:50:37.367704 kubelet[2295]: W0213 20:50:37.367595 2295 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.7:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.7:6443: connect: connection refused Feb 13 20:50:37.367704 kubelet[2295]: E0213 20:50:37.367646 2295 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://10.0.0.7:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.7:6443: connect: connection refused Feb 13 20:50:37.396747 kubelet[2295]: W0213 20:50:37.396663 2295 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.7:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.7:6443: connect: connection refused Feb 13 20:50:37.396747 kubelet[2295]: E0213 20:50:37.396724 2295 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://10.0.0.7:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.7:6443: connect: connection refused Feb 13 20:50:37.454933 containerd[1534]: time="2025-02-13T20:50:37.454724623Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 20:50:37.454933 containerd[1534]: time="2025-02-13T20:50:37.454778270Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 20:50:37.454933 containerd[1534]: time="2025-02-13T20:50:37.454789792Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 20:50:37.454933 containerd[1534]: time="2025-02-13T20:50:37.454165505Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 20:50:37.454933 containerd[1534]: time="2025-02-13T20:50:37.454819156Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 20:50:37.454933 containerd[1534]: time="2025-02-13T20:50:37.454840759Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 20:50:37.454933 containerd[1534]: time="2025-02-13T20:50:37.454887246Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 20:50:37.454933 containerd[1534]: time="2025-02-13T20:50:37.454928691Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 20:50:37.457678 containerd[1534]: time="2025-02-13T20:50:37.457573860Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 20:50:37.457678 containerd[1534]: time="2025-02-13T20:50:37.457641789Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 20:50:37.457678 containerd[1534]: time="2025-02-13T20:50:37.457653591Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 20:50:37.457857 containerd[1534]: time="2025-02-13T20:50:37.457798811Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 20:50:37.501561 containerd[1534]: time="2025-02-13T20:50:37.501467016Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:8d610d6c43052dbc8df47eb68906a982,Namespace:kube-system,Attempt:0,} returns sandbox id \"5db80a2781a9109e95ba1a93582054d0fc3a261c959d72b8c48a5ba3e949e11d\"" Feb 13 20:50:37.503518 kubelet[2295]: E0213 20:50:37.502396 2295 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 20:50:37.506830 containerd[1534]: time="2025-02-13T20:50:37.506787557Z" level=info msg="CreateContainer within sandbox \"5db80a2781a9109e95ba1a93582054d0fc3a261c959d72b8c48a5ba3e949e11d\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Feb 13 20:50:37.507911 containerd[1534]: time="2025-02-13T20:50:37.507882390Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:dd3721fb1a67092819e35b40473f4063,Namespace:kube-system,Attempt:0,} returns sandbox id \"c84b17ce868ec41ce3aec1af5025084db5a58ec8142311d3a0ab684b2d9fba68\"" Feb 13 20:50:37.508586 containerd[1534]: time="2025-02-13T20:50:37.508261282Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:6eb93ee6de7692767faad11320cd4af4,Namespace:kube-system,Attempt:0,} returns sandbox id \"f5308ac3cd11905c42b04089c8b5b2576e19b592860e40acd058ecbd4a9b0101\"" Feb 13 20:50:37.509112 kubelet[2295]: E0213 20:50:37.509083 2295 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 20:50:37.509112 kubelet[2295]: E0213 20:50:37.509108 2295 dns.go:153] "Nameserver limits exceeded" err="Nameserver 
limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 20:50:37.512018 containerd[1534]: time="2025-02-13T20:50:37.511926913Z" level=info msg="CreateContainer within sandbox \"f5308ac3cd11905c42b04089c8b5b2576e19b592860e40acd058ecbd4a9b0101\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Feb 13 20:50:37.513322 containerd[1534]: time="2025-02-13T20:50:37.513292063Z" level=info msg="CreateContainer within sandbox \"c84b17ce868ec41ce3aec1af5025084db5a58ec8142311d3a0ab684b2d9fba68\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Feb 13 20:50:37.526000 containerd[1534]: time="2025-02-13T20:50:37.525823089Z" level=info msg="CreateContainer within sandbox \"5db80a2781a9109e95ba1a93582054d0fc3a261c959d72b8c48a5ba3e949e11d\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"96bf449e5bbc4484ac7d4d9d4af399c5414c339b165be4e88b21ac900b21e8d8\"" Feb 13 20:50:37.527510 containerd[1534]: time="2025-02-13T20:50:37.527350942Z" level=info msg="StartContainer for \"96bf449e5bbc4484ac7d4d9d4af399c5414c339b165be4e88b21ac900b21e8d8\"" Feb 13 20:50:37.529782 containerd[1534]: time="2025-02-13T20:50:37.529629540Z" level=info msg="CreateContainer within sandbox \"f5308ac3cd11905c42b04089c8b5b2576e19b592860e40acd058ecbd4a9b0101\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"4118f400ebfccbe6d250b09f449965963395eda924c43006a523018fec259a1f\"" Feb 13 20:50:37.530200 containerd[1534]: time="2025-02-13T20:50:37.530170655Z" level=info msg="StartContainer for \"4118f400ebfccbe6d250b09f449965963395eda924c43006a523018fec259a1f\"" Feb 13 20:50:37.532905 containerd[1534]: time="2025-02-13T20:50:37.532850188Z" level=info msg="CreateContainer within sandbox \"c84b17ce868ec41ce3aec1af5025084db5a58ec8142311d3a0ab684b2d9fba68\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"88d789e7aeb28018986071539932a5b1b2da6008a1466a6cef2bf92374e8995e\"" Feb 13 20:50:37.533557 containerd[1534]: time="2025-02-13T20:50:37.533526843Z" level=info msg="StartContainer for \"88d789e7aeb28018986071539932a5b1b2da6008a1466a6cef2bf92374e8995e\"" Feb 13 20:50:37.597341 kubelet[2295]: E0213 20:50:37.597272 2295 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.7:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.7:6443: connect: connection refused" interval="1.6s" Feb 13 20:50:37.617075 containerd[1534]: time="2025-02-13T20:50:37.617043439Z" level=info msg="StartContainer for \"4118f400ebfccbe6d250b09f449965963395eda924c43006a523018fec259a1f\" returns successfully" Feb 13 20:50:37.617584 containerd[1534]: time="2025-02-13T20:50:37.617561232Z" level=info msg="StartContainer for \"96bf449e5bbc4484ac7d4d9d4af399c5414c339b165be4e88b21ac900b21e8d8\" returns successfully" Feb 13 20:50:37.626923 containerd[1534]: time="2025-02-13T20:50:37.626894372Z" level=info msg="StartContainer for \"88d789e7aeb28018986071539932a5b1b2da6008a1466a6cef2bf92374e8995e\" returns successfully" Feb 13 20:50:37.699142 kubelet[2295]: I0213 20:50:37.699083 2295 kubelet_node_status.go:73] "Attempting to register node" node="localhost" Feb 13 20:50:37.699678 kubelet[2295]: E0213 20:50:37.699651 2295 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.0.0.7:6443/api/v1/nodes\": dial tcp 10.0.0.7:6443: connect: connection refused" node="localhost" 
Feb 13 20:50:38.226159 kubelet[2295]: E0213 20:50:38.226110 2295 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 20:50:38.229801 kubelet[2295]: E0213 20:50:38.229726 2295 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 20:50:38.231124 kubelet[2295]: E0213 20:50:38.231102 2295 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 20:50:39.233179 kubelet[2295]: E0213 20:50:39.233122 2295 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 20:50:39.233674 kubelet[2295]: E0213 20:50:39.233645 2295 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 20:50:39.304967 kubelet[2295]: I0213 20:50:39.304901 2295 kubelet_node_status.go:73] "Attempting to register node" node="localhost" Feb 13 20:50:39.508648 kubelet[2295]: E0213 20:50:39.508523 2295 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"localhost\" not found" node="localhost" Feb 13 20:50:39.600345 kubelet[2295]: I0213 20:50:39.600303 2295 kubelet_node_status.go:76] "Successfully registered node" node="localhost" Feb 13 20:50:39.609960 kubelet[2295]: E0213 20:50:39.609871 2295 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" Feb 13 20:50:39.710033 kubelet[2295]: E0213 20:50:39.709977 2295 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" Feb 13 20:50:39.810837 kubelet[2295]: E0213 20:50:39.810715 2295 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" Feb 13 20:50:39.911276 kubelet[2295]: E0213 20:50:39.911236 2295 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" Feb 13 20:50:40.011891 kubelet[2295]: E0213 20:50:40.011824 2295 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" Feb 13 20:50:40.112644 kubelet[2295]: E0213 20:50:40.112528 2295 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" Feb 13 20:50:40.212984 kubelet[2295]: E0213 20:50:40.212946 2295 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" Feb 13 20:50:40.234709 kubelet[2295]: E0213 20:50:40.234680 2295 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 20:50:40.313551 kubelet[2295]: E0213 20:50:40.313512 2295 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" Feb 13 20:50:40.414280 kubelet[2295]: E0213 20:50:40.414147 2295 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" Feb 13 20:50:41.184606 kubelet[2295]: I0213 20:50:41.184573 2295 apiserver.go:52] "Watching 
apiserver" Feb 13 20:50:41.191739 kubelet[2295]: I0213 20:50:41.191686 2295 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world" Feb 13 20:50:41.442534 systemd[1]: Reloading requested from client PID 2572 ('systemctl') (unit session-5.scope)... Feb 13 20:50:41.442548 systemd[1]: Reloading... Feb 13 20:50:41.498525 zram_generator::config[2612]: No configuration found. Feb 13 20:50:41.591701 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Feb 13 20:50:41.648964 systemd[1]: Reloading finished in 206 ms. Feb 13 20:50:41.674361 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Feb 13 20:50:41.691509 systemd[1]: kubelet.service: Deactivated successfully. Feb 13 20:50:41.691833 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Feb 13 20:50:41.699876 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Feb 13 20:50:41.787205 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Feb 13 20:50:41.791647 (kubelet)[2663]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Feb 13 20:50:41.830994 kubelet[2663]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Feb 13 20:50:41.831455 kubelet[2663]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Feb 13 20:50:41.831455 kubelet[2663]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Feb 13 20:50:41.831455 kubelet[2663]: I0213 20:50:41.831407 2663 server.go:205] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Feb 13 20:50:41.836331 kubelet[2663]: I0213 20:50:41.836287 2663 server.go:484] "Kubelet version" kubeletVersion="v1.30.1" Feb 13 20:50:41.836331 kubelet[2663]: I0213 20:50:41.836320 2663 server.go:486] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Feb 13 20:50:41.836524 kubelet[2663]: I0213 20:50:41.836506 2663 server.go:927] "Client rotation is on, will bootstrap in background" Feb 13 20:50:41.838955 kubelet[2663]: I0213 20:50:41.837972 2663 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Feb 13 20:50:41.839430 kubelet[2663]: I0213 20:50:41.839401 2663 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Feb 13 20:50:41.845745 kubelet[2663]: I0213 20:50:41.845717 2663 server.go:742] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Feb 13 20:50:41.846814 kubelet[2663]: I0213 20:50:41.846097 2663 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Feb 13 20:50:41.846814 kubelet[2663]: I0213 20:50:41.846121 2663 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"cgroupfs","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null} Feb 13 20:50:41.846814 kubelet[2663]: I0213 20:50:41.846273 2663 topology_manager.go:138] "Creating topology manager with none policy" Feb 13 20:50:41.846814 kubelet[2663]: I0213 20:50:41.846281 2663 container_manager_linux.go:301] "Creating device plugin manager" Feb 13 20:50:41.846814 kubelet[2663]: I0213 20:50:41.846310 2663 state_mem.go:36] "Initialized new in-memory state store" Feb 13 20:50:41.847016 kubelet[2663]: I0213 20:50:41.846410 2663 kubelet.go:400] "Attempting to sync node with API server" Feb 13 20:50:41.847016 kubelet[2663]: I0213 20:50:41.846420 2663 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests" Feb 13 20:50:41.847016 kubelet[2663]: I0213 20:50:41.846444 2663 kubelet.go:312] "Adding apiserver pod source" Feb 13 20:50:41.847016 kubelet[2663]: I0213 20:50:41.846458 2663 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Feb 13 20:50:41.847603 kubelet[2663]: I0213 20:50:41.847569 2663 kuberuntime_manager.go:261] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Feb 13 20:50:41.848298 kubelet[2663]: I0213 20:50:41.847827 2663 kubelet.go:815] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Feb 13 20:50:41.848298 kubelet[2663]: I0213 20:50:41.848175 2663 server.go:1264] "Started kubelet" Feb 13 20:50:41.848793 kubelet[2663]: I0213 20:50:41.848738 2663 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Feb 13 20:50:41.848991 kubelet[2663]: I0213 20:50:41.848967 2663 server.go:227] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Feb 13 
20:50:41.849023 kubelet[2663]: I0213 20:50:41.849011 2663 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Feb 13 20:50:41.850514 kubelet[2663]: I0213 20:50:41.850459 2663 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Feb 13 20:50:41.862126 kubelet[2663]: I0213 20:50:41.862103 2663 server.go:455] "Adding debug handlers to kubelet server" Feb 13 20:50:41.865321 kubelet[2663]: I0213 20:50:41.865297 2663 volume_manager.go:291] "Starting Kubelet Volume Manager" Feb 13 20:50:41.865985 kubelet[2663]: I0213 20:50:41.865866 2663 desired_state_of_world_populator.go:149] "Desired state populator starts to run" Feb 13 20:50:41.866032 kubelet[2663]: I0213 20:50:41.866023 2663 reconciler.go:26] "Reconciler: start to sync state" Feb 13 20:50:41.871722 kubelet[2663]: I0213 20:50:41.871686 2663 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Feb 13 20:50:41.872788 kubelet[2663]: I0213 20:50:41.872758 2663 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Feb 13 20:50:41.872901 kubelet[2663]: I0213 20:50:41.872889 2663 status_manager.go:217] "Starting to sync pod status with apiserver" Feb 13 20:50:41.872966 kubelet[2663]: I0213 20:50:41.872957 2663 kubelet.go:2337] "Starting kubelet main sync loop" Feb 13 20:50:41.873062 kubelet[2663]: E0213 20:50:41.873044 2663 kubelet.go:2361] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Feb 13 20:50:41.873762 kubelet[2663]: I0213 20:50:41.873724 2663 factory.go:221] Registration of the systemd container factory successfully Feb 13 20:50:41.873980 kubelet[2663]: I0213 20:50:41.873824 2663 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Feb 13 20:50:41.875316 kubelet[2663]: I0213 20:50:41.874981 2663 factory.go:221] Registration of the containerd container factory successfully Feb 13 20:50:41.883746 kubelet[2663]: E0213 20:50:41.883714 2663 kubelet.go:1467] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Feb 13 20:50:41.919564 kubelet[2663]: I0213 20:50:41.919531 2663 cpu_manager.go:214] "Starting CPU manager" policy="none" Feb 13 20:50:41.919564 kubelet[2663]: I0213 20:50:41.919551 2663 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Feb 13 20:50:41.919564 kubelet[2663]: I0213 20:50:41.919573 2663 state_mem.go:36] "Initialized new in-memory state store" Feb 13 20:50:41.919747 kubelet[2663]: I0213 20:50:41.919726 2663 state_mem.go:88] "Updated default CPUSet" cpuSet="" Feb 13 20:50:41.919774 kubelet[2663]: I0213 20:50:41.919742 2663 state_mem.go:96] "Updated CPUSet assignments" assignments={} Feb 13 20:50:41.919774 kubelet[2663]: I0213 20:50:41.919761 2663 policy_none.go:49] "None policy: Start" Feb 13 20:50:41.920373 kubelet[2663]: I0213 20:50:41.920356 2663 memory_manager.go:170] "Starting memorymanager" policy="None" Feb 13 20:50:41.920416 kubelet[2663]: I0213 20:50:41.920379 2663 state_mem.go:35] "Initializing new in-memory state store" Feb 13 20:50:41.920601 kubelet[2663]: I0213 20:50:41.920571 2663 state_mem.go:75] "Updated machine memory state" Feb 13 20:50:41.921921 kubelet[2663]: I0213 20:50:41.921753 2663 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Feb 13 20:50:41.921992 kubelet[2663]: I0213 20:50:41.921932 2663 container_log_manager.go:186] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Feb 13 20:50:41.922183 kubelet[2663]: I0213 20:50:41.922034 2663 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Feb 13 20:50:41.969995 kubelet[2663]: I0213 20:50:41.969899 2663 kubelet_node_status.go:73] "Attempting to register node" node="localhost" Feb 13 20:50:41.974048 kubelet[2663]: I0213 20:50:41.973999 2663 topology_manager.go:215] "Topology Admit Handler" podUID="6eb93ee6de7692767faad11320cd4af4" podNamespace="kube-system" podName="kube-apiserver-localhost" Feb 13 20:50:41.974158 kubelet[2663]: I0213 20:50:41.974119 2663 topology_manager.go:215] "Topology Admit Handler" podUID="dd3721fb1a67092819e35b40473f4063" podNamespace="kube-system" podName="kube-controller-manager-localhost" Feb 13 20:50:41.974183 kubelet[2663]: I0213 20:50:41.974157 2663 topology_manager.go:215] "Topology Admit Handler" podUID="8d610d6c43052dbc8df47eb68906a982" podNamespace="kube-system" podName="kube-scheduler-localhost" Feb 13 20:50:42.113032 kubelet[2663]: I0213 20:50:42.112937 2663 kubelet_node_status.go:112] "Node was previously registered" node="localhost" Feb 13 20:50:42.113032 kubelet[2663]: I0213 20:50:42.113059 2663 kubelet_node_status.go:76] "Successfully registered node" node="localhost" Feb 13 20:50:42.167368 kubelet[2663]: I0213 20:50:42.167326 2663 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/dd3721fb1a67092819e35b40473f4063-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"dd3721fb1a67092819e35b40473f4063\") " pod="kube-system/kube-controller-manager-localhost" Feb 13 20:50:42.167368 kubelet[2663]: I0213 20:50:42.167368 2663 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/8d610d6c43052dbc8df47eb68906a982-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"8d610d6c43052dbc8df47eb68906a982\") " pod="kube-system/kube-scheduler-localhost" Feb 13 20:50:42.167610 
kubelet[2663]: I0213 20:50:42.167394 2663 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/6eb93ee6de7692767faad11320cd4af4-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"6eb93ee6de7692767faad11320cd4af4\") " pod="kube-system/kube-apiserver-localhost" Feb 13 20:50:42.167610 kubelet[2663]: I0213 20:50:42.167411 2663 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/6eb93ee6de7692767faad11320cd4af4-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"6eb93ee6de7692767faad11320cd4af4\") " pod="kube-system/kube-apiserver-localhost" Feb 13 20:50:42.167610 kubelet[2663]: I0213 20:50:42.167431 2663 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/6eb93ee6de7692767faad11320cd4af4-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"6eb93ee6de7692767faad11320cd4af4\") " pod="kube-system/kube-apiserver-localhost" Feb 13 20:50:42.167610 kubelet[2663]: I0213 20:50:42.167449 2663 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/dd3721fb1a67092819e35b40473f4063-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"dd3721fb1a67092819e35b40473f4063\") " pod="kube-system/kube-controller-manager-localhost" Feb 13 20:50:42.167610 kubelet[2663]: I0213 20:50:42.167466 2663 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/dd3721fb1a67092819e35b40473f4063-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"dd3721fb1a67092819e35b40473f4063\") " pod="kube-system/kube-controller-manager-localhost" Feb 13 20:50:42.167716 kubelet[2663]: I0213 20:50:42.167482 2663 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/dd3721fb1a67092819e35b40473f4063-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"dd3721fb1a67092819e35b40473f4063\") " pod="kube-system/kube-controller-manager-localhost" Feb 13 20:50:42.167716 kubelet[2663]: I0213 20:50:42.167518 2663 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/dd3721fb1a67092819e35b40473f4063-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"dd3721fb1a67092819e35b40473f4063\") " pod="kube-system/kube-controller-manager-localhost" Feb 13 20:50:42.412653 kubelet[2663]: E0213 20:50:42.412616 2663 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 20:50:42.413122 kubelet[2663]: E0213 20:50:42.413099 2663 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 20:50:42.414789 kubelet[2663]: E0213 20:50:42.414760 2663 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 20:50:42.847495 kubelet[2663]: I0213 
20:50:42.847457 2663 apiserver.go:52] "Watching apiserver" Feb 13 20:50:42.866273 kubelet[2663]: I0213 20:50:42.866234 2663 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world" Feb 13 20:50:42.894258 kubelet[2663]: E0213 20:50:42.894041 2663 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 20:50:42.894885 kubelet[2663]: E0213 20:50:42.894535 2663 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 20:50:42.897766 kubelet[2663]: E0213 20:50:42.897709 2663 kubelet.go:1928] "Failed creating a mirror pod for" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost" Feb 13 20:50:42.898204 kubelet[2663]: E0213 20:50:42.898164 2663 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 20:50:42.915024 kubelet[2663]: I0213 20:50:42.914954 2663 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=0.914938229 podStartE2EDuration="914.938229ms" podCreationTimestamp="2025-02-13 20:50:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-02-13 20:50:42.914663082 +0000 UTC m=+1.119900401" watchObservedRunningTime="2025-02-13 20:50:42.914938229 +0000 UTC m=+1.120175509" Feb 13 20:50:42.927161 kubelet[2663]: I0213 20:50:42.927106 2663 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=0.927083615 podStartE2EDuration="927.083615ms" podCreationTimestamp="2025-02-13 20:50:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-02-13 20:50:42.925718837 +0000 UTC m=+1.130956116" watchObservedRunningTime="2025-02-13 20:50:42.927083615 +0000 UTC m=+1.132320894" Feb 13 20:50:42.933810 kubelet[2663]: I0213 20:50:42.933746 2663 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=0.933735086 podStartE2EDuration="933.735086ms" podCreationTimestamp="2025-02-13 20:50:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-02-13 20:50:42.933725885 +0000 UTC m=+1.138963124" watchObservedRunningTime="2025-02-13 20:50:42.933735086 +0000 UTC m=+1.138972365" Feb 13 20:50:43.240860 sudo[1697]: pam_unix(sudo:session): session closed for user root Feb 13 20:50:43.244390 sshd[1690]: pam_unix(sshd:session): session closed for user core Feb 13 20:50:43.247387 systemd[1]: sshd@4-10.0.0.7:22-10.0.0.1:33184.service: Deactivated successfully. Feb 13 20:50:43.250682 systemd[1]: session-5.scope: Deactivated successfully. Feb 13 20:50:43.252986 systemd-logind[1517]: Session 5 logged out. Waiting for processes to exit. Feb 13 20:50:43.253841 systemd-logind[1517]: Removed session 5. 
Feb 13 20:50:43.892726 kubelet[2663]: E0213 20:50:43.892693 2663 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 20:50:44.893965 kubelet[2663]: E0213 20:50:44.893912 2663 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 20:50:48.196042 kubelet[2663]: E0213 20:50:48.196000 2663 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 20:50:48.902404 kubelet[2663]: E0213 20:50:48.902369 2663 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 20:50:49.530130 kubelet[2663]: E0213 20:50:49.530094 2663 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 20:50:49.818311 kubelet[2663]: E0213 20:50:49.818179 2663 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 20:50:49.903558 kubelet[2663]: E0213 20:50:49.903148 2663 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 20:50:49.903558 kubelet[2663]: E0213 20:50:49.903428 2663 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 20:50:50.904400 kubelet[2663]: E0213 20:50:50.904354 2663 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 20:50:53.110142 update_engine[1518]: I20250213 20:50:53.110050 1518 update_attempter.cc:509] Updating boot flags... Feb 13 20:50:53.135029 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 40 scanned by (udev-worker) (2735) Feb 13 20:50:53.168556 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 40 scanned by (udev-worker) (2732) Feb 13 20:50:56.121181 kubelet[2663]: I0213 20:50:56.121141 2663 kuberuntime_manager.go:1523] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Feb 13 20:50:56.121637 containerd[1534]: time="2025-02-13T20:50:56.121446640Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." 
Feb 13 20:50:56.121856 kubelet[2663]: I0213 20:50:56.121720 2663 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Feb 13 20:50:56.709327 kubelet[2663]: I0213 20:50:56.709287 2663 topology_manager.go:215] "Topology Admit Handler" podUID="e954a88b-152e-4aaa-8dd3-9688e225e566" podNamespace="kube-system" podName="kube-proxy-f976b" Feb 13 20:50:56.710025 kubelet[2663]: I0213 20:50:56.709603 2663 topology_manager.go:215] "Topology Admit Handler" podUID="cd690012-36a5-4d95-b540-563eafe34300" podNamespace="kube-flannel" podName="kube-flannel-ds-6mvg6" Feb 13 20:50:56.771573 kubelet[2663]: I0213 20:50:56.771523 2663 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/e954a88b-152e-4aaa-8dd3-9688e225e566-kube-proxy\") pod \"kube-proxy-f976b\" (UID: \"e954a88b-152e-4aaa-8dd3-9688e225e566\") " pod="kube-system/kube-proxy-f976b" Feb 13 20:50:56.771701 kubelet[2663]: I0213 20:50:56.771594 2663 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/e954a88b-152e-4aaa-8dd3-9688e225e566-xtables-lock\") pod \"kube-proxy-f976b\" (UID: \"e954a88b-152e-4aaa-8dd3-9688e225e566\") " pod="kube-system/kube-proxy-f976b" Feb 13 20:50:56.771701 kubelet[2663]: I0213 20:50:56.771623 2663 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s66fc\" (UniqueName: \"kubernetes.io/projected/e954a88b-152e-4aaa-8dd3-9688e225e566-kube-api-access-s66fc\") pod \"kube-proxy-f976b\" (UID: \"e954a88b-152e-4aaa-8dd3-9688e225e566\") " pod="kube-system/kube-proxy-f976b" Feb 13 20:50:56.771701 kubelet[2663]: I0213 20:50:56.771652 2663 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/e954a88b-152e-4aaa-8dd3-9688e225e566-lib-modules\") pod \"kube-proxy-f976b\" (UID: \"e954a88b-152e-4aaa-8dd3-9688e225e566\") " pod="kube-system/kube-proxy-f976b" Feb 13 20:50:56.771701 kubelet[2663]: I0213 20:50:56.771674 2663 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run\" (UniqueName: \"kubernetes.io/host-path/cd690012-36a5-4d95-b540-563eafe34300-run\") pod \"kube-flannel-ds-6mvg6\" (UID: \"cd690012-36a5-4d95-b540-563eafe34300\") " pod="kube-flannel/kube-flannel-ds-6mvg6" Feb 13 20:50:56.771823 kubelet[2663]: I0213 20:50:56.771709 2663 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zfhf4\" (UniqueName: \"kubernetes.io/projected/cd690012-36a5-4d95-b540-563eafe34300-kube-api-access-zfhf4\") pod \"kube-flannel-ds-6mvg6\" (UID: \"cd690012-36a5-4d95-b540-563eafe34300\") " pod="kube-flannel/kube-flannel-ds-6mvg6" Feb 13 20:50:56.771823 kubelet[2663]: I0213 20:50:56.771769 2663 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flannel-cfg\" (UniqueName: \"kubernetes.io/configmap/cd690012-36a5-4d95-b540-563eafe34300-flannel-cfg\") pod \"kube-flannel-ds-6mvg6\" (UID: \"cd690012-36a5-4d95-b540-563eafe34300\") " pod="kube-flannel/kube-flannel-ds-6mvg6" Feb 13 20:50:56.771823 kubelet[2663]: I0213 20:50:56.771798 2663 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-plugin\" (UniqueName: \"kubernetes.io/host-path/cd690012-36a5-4d95-b540-563eafe34300-cni-plugin\") 
pod \"kube-flannel-ds-6mvg6\" (UID: \"cd690012-36a5-4d95-b540-563eafe34300\") " pod="kube-flannel/kube-flannel-ds-6mvg6" Feb 13 20:50:56.771823 kubelet[2663]: I0213 20:50:56.771817 2663 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni\" (UniqueName: \"kubernetes.io/host-path/cd690012-36a5-4d95-b540-563eafe34300-cni\") pod \"kube-flannel-ds-6mvg6\" (UID: \"cd690012-36a5-4d95-b540-563eafe34300\") " pod="kube-flannel/kube-flannel-ds-6mvg6" Feb 13 20:50:56.771906 kubelet[2663]: I0213 20:50:56.771847 2663 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/cd690012-36a5-4d95-b540-563eafe34300-xtables-lock\") pod \"kube-flannel-ds-6mvg6\" (UID: \"cd690012-36a5-4d95-b540-563eafe34300\") " pod="kube-flannel/kube-flannel-ds-6mvg6" Feb 13 20:50:56.880743 kubelet[2663]: E0213 20:50:56.880646 2663 projected.go:294] Couldn't get configMap kube-system/kube-root-ca.crt: configmap "kube-root-ca.crt" not found Feb 13 20:50:56.880743 kubelet[2663]: E0213 20:50:56.880701 2663 projected.go:200] Error preparing data for projected volume kube-api-access-s66fc for pod kube-system/kube-proxy-f976b: configmap "kube-root-ca.crt" not found Feb 13 20:50:56.880886 kubelet[2663]: E0213 20:50:56.880759 2663 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/e954a88b-152e-4aaa-8dd3-9688e225e566-kube-api-access-s66fc podName:e954a88b-152e-4aaa-8dd3-9688e225e566 nodeName:}" failed. No retries permitted until 2025-02-13 20:50:57.380737403 +0000 UTC m=+15.585974682 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-s66fc" (UniqueName: "kubernetes.io/projected/e954a88b-152e-4aaa-8dd3-9688e225e566-kube-api-access-s66fc") pod "kube-proxy-f976b" (UID: "e954a88b-152e-4aaa-8dd3-9688e225e566") : configmap "kube-root-ca.crt" not found Feb 13 20:50:56.880963 kubelet[2663]: E0213 20:50:56.880656 2663 projected.go:294] Couldn't get configMap kube-flannel/kube-root-ca.crt: configmap "kube-root-ca.crt" not found Feb 13 20:50:56.880963 kubelet[2663]: E0213 20:50:56.880948 2663 projected.go:200] Error preparing data for projected volume kube-api-access-zfhf4 for pod kube-flannel/kube-flannel-ds-6mvg6: configmap "kube-root-ca.crt" not found Feb 13 20:50:56.881016 kubelet[2663]: E0213 20:50:56.880980 2663 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/cd690012-36a5-4d95-b540-563eafe34300-kube-api-access-zfhf4 podName:cd690012-36a5-4d95-b540-563eafe34300 nodeName:}" failed. No retries permitted until 2025-02-13 20:50:57.380970412 +0000 UTC m=+15.586207691 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "kube-api-access-zfhf4" (UniqueName: "kubernetes.io/projected/cd690012-36a5-4d95-b540-563eafe34300-kube-api-access-zfhf4") pod "kube-flannel-ds-6mvg6" (UID: "cd690012-36a5-4d95-b540-563eafe34300") : configmap "kube-root-ca.crt" not found Feb 13 20:50:57.613536 kubelet[2663]: E0213 20:50:57.613449 2663 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 20:50:57.614089 containerd[1534]: time="2025-02-13T20:50:57.614052939Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-flannel-ds-6mvg6,Uid:cd690012-36a5-4d95-b540-563eafe34300,Namespace:kube-flannel,Attempt:0,}" Feb 13 20:50:57.619145 kubelet[2663]: E0213 20:50:57.619111 2663 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 20:50:57.619762 containerd[1534]: time="2025-02-13T20:50:57.619551029Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-f976b,Uid:e954a88b-152e-4aaa-8dd3-9688e225e566,Namespace:kube-system,Attempt:0,}" Feb 13 20:50:57.641203 containerd[1534]: time="2025-02-13T20:50:57.641078574Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 20:50:57.641203 containerd[1534]: time="2025-02-13T20:50:57.641187939Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 20:50:57.641368 containerd[1534]: time="2025-02-13T20:50:57.641208219Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 20:50:57.641952 containerd[1534]: time="2025-02-13T20:50:57.641804842Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 20:50:57.642079 containerd[1534]: time="2025-02-13T20:50:57.642018210Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 20:50:57.642130 containerd[1534]: time="2025-02-13T20:50:57.642067932Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 20:50:57.642130 containerd[1534]: time="2025-02-13T20:50:57.642088813Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 20:50:57.642217 containerd[1534]: time="2025-02-13T20:50:57.642170736Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 20:50:57.686927 containerd[1534]: time="2025-02-13T20:50:57.686876850Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-f976b,Uid:e954a88b-152e-4aaa-8dd3-9688e225e566,Namespace:kube-system,Attempt:0,} returns sandbox id \"db0ebdde163bd3d023f1726de5b37ce0a72f1b75e864f5f0e776bdee578a9aff\"" Feb 13 20:50:57.687567 kubelet[2663]: E0213 20:50:57.687542 2663 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 20:50:57.690814 containerd[1534]: time="2025-02-13T20:50:57.690768599Z" level=info msg="CreateContainer within sandbox \"db0ebdde163bd3d023f1726de5b37ce0a72f1b75e864f5f0e776bdee578a9aff\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Feb 13 20:50:57.694233 containerd[1534]: time="2025-02-13T20:50:57.694198490Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-flannel-ds-6mvg6,Uid:cd690012-36a5-4d95-b540-563eafe34300,Namespace:kube-flannel,Attempt:0,} returns sandbox id \"889affa0127b1df2f229785344f403fd499b7e82cac42b5ad8eb6fc0848028e3\"" Feb 13 20:50:57.694927 kubelet[2663]: E0213 20:50:57.694873 2663 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 20:50:57.696357 containerd[1534]: time="2025-02-13T20:50:57.696326052Z" level=info msg="PullImage \"docker.io/flannel/flannel-cni-plugin:v1.1.2\"" Feb 13 20:50:57.706601 containerd[1534]: time="2025-02-13T20:50:57.706555044Z" level=info msg="CreateContainer within sandbox \"db0ebdde163bd3d023f1726de5b37ce0a72f1b75e864f5f0e776bdee578a9aff\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"bd0ee953328500ecd86c326b198aafc932c5e7061dccdbe6e9beb046fe93e385\"" Feb 13 20:50:57.707519 containerd[1534]: time="2025-02-13T20:50:57.707078904Z" level=info msg="StartContainer for \"bd0ee953328500ecd86c326b198aafc932c5e7061dccdbe6e9beb046fe93e385\"" Feb 13 20:50:57.760541 containerd[1534]: time="2025-02-13T20:50:57.760481911Z" level=info msg="StartContainer for \"bd0ee953328500ecd86c326b198aafc932c5e7061dccdbe6e9beb046fe93e385\" returns successfully" Feb 13 20:50:57.918298 kubelet[2663]: E0213 20:50:57.918165 2663 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 20:50:58.860065 containerd[1534]: time="2025-02-13T20:50:58.859987278Z" level=error msg="PullImage \"docker.io/flannel/flannel-cni-plugin:v1.1.2\" failed" error="failed to pull and unpack image \"docker.io/flannel/flannel-cni-plugin:v1.1.2\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/flannel/flannel-cni-plugin/manifests/sha256:bf4b62b131666d040f35a327d906ee5a3418280b68a88d9b9c7e828057210443: 429 Too Many Requests - Server message: toomanyrequests: You have reached your pull rate limit. 
You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit" Feb 13 20:50:58.860065 containerd[1534]: time="2025-02-13T20:50:58.860044840Z" level=info msg="stop pulling image docker.io/flannel/flannel-cni-plugin:v1.1.2: active requests=0, bytes read=11109" Feb 13 20:50:58.860612 kubelet[2663]: E0213 20:50:58.860245 2663 remote_image.go:180] "PullImage from image service failed" err="rpc error: code = Unknown desc = failed to pull and unpack image \"docker.io/flannel/flannel-cni-plugin:v1.1.2\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/flannel/flannel-cni-plugin/manifests/sha256:bf4b62b131666d040f35a327d906ee5a3418280b68a88d9b9c7e828057210443: 429 Too Many Requests - Server message: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit" image="docker.io/flannel/flannel-cni-plugin:v1.1.2" Feb 13 20:50:58.860612 kubelet[2663]: E0213 20:50:58.860323 2663 kuberuntime_image.go:55] "Failed to pull image" err="failed to pull and unpack image \"docker.io/flannel/flannel-cni-plugin:v1.1.2\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/flannel/flannel-cni-plugin/manifests/sha256:bf4b62b131666d040f35a327d906ee5a3418280b68a88d9b9c7e828057210443: 429 Too Many Requests - Server message: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit" image="docker.io/flannel/flannel-cni-plugin:v1.1.2" Feb 13 20:50:58.860902 kubelet[2663]: E0213 20:50:58.860544 2663 kuberuntime_manager.go:1256] init container &Container{Name:install-cni-plugin,Image:docker.io/flannel/flannel-cni-plugin:v1.1.2,Command:[cp],Args:[-f /flannel /opt/cni/bin/flannel],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:cni-plugin,ReadOnly:false,MountPath:/opt/cni/bin,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-zfhf4,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod kube-flannel-ds-6mvg6_kube-flannel(cd690012-36a5-4d95-b540-563eafe34300): ErrImagePull: failed to pull and unpack image "docker.io/flannel/flannel-cni-plugin:v1.1.2": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/flannel/flannel-cni-plugin/manifests/sha256:bf4b62b131666d040f35a327d906ee5a3418280b68a88d9b9c7e828057210443: 429 Too Many Requests - Server message: toomanyrequests: You have reached your pull rate limit. 
You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit Feb 13 20:50:58.860966 kubelet[2663]: E0213 20:50:58.860575 2663 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"install-cni-plugin\" with ErrImagePull: \"failed to pull and unpack image \\\"docker.io/flannel/flannel-cni-plugin:v1.1.2\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/flannel/flannel-cni-plugin/manifests/sha256:bf4b62b131666d040f35a327d906ee5a3418280b68a88d9b9c7e828057210443: 429 Too Many Requests - Server message: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit\"" pod="kube-flannel/kube-flannel-ds-6mvg6" podUID="cd690012-36a5-4d95-b540-563eafe34300" Feb 13 20:50:58.924016 kubelet[2663]: E0213 20:50:58.923933 2663 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 20:50:58.925881 kubelet[2663]: E0213 20:50:58.925813 2663 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"install-cni-plugin\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/flannel/flannel-cni-plugin:v1.1.2\\\"\"" pod="kube-flannel/kube-flannel-ds-6mvg6" podUID="cd690012-36a5-4d95-b540-563eafe34300" Feb 13 20:50:58.933841 kubelet[2663]: I0213 20:50:58.933612 2663 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-f976b" podStartSLOduration=2.9335958030000002 podStartE2EDuration="2.933595803s" podCreationTimestamp="2025-02-13 20:50:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-02-13 20:50:57.927565195 +0000 UTC m=+16.132802474" watchObservedRunningTime="2025-02-13 20:50:58.933595803 +0000 UTC m=+17.138833082" Feb 13 20:51:08.084749 systemd[1]: Started sshd@5-10.0.0.7:22-10.0.0.1:40128.service - OpenSSH per-connection server daemon (10.0.0.1:40128). Feb 13 20:51:08.119411 sshd[2982]: Accepted publickey for core from 10.0.0.1 port 40128 ssh2: RSA SHA256:cMrn/QNbOj3/muQN24sDQmRTV2Hbz1QD+PbuA/uipHg Feb 13 20:51:08.120743 sshd[2982]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 20:51:08.124692 systemd-logind[1517]: New session 6 of user core. Feb 13 20:51:08.139738 systemd[1]: Started session-6.scope - Session 6 of User core. Feb 13 20:51:08.252939 sshd[2982]: pam_unix(sshd:session): session closed for user core Feb 13 20:51:08.256282 systemd[1]: sshd@5-10.0.0.7:22-10.0.0.1:40128.service: Deactivated successfully. Feb 13 20:51:08.258638 systemd-logind[1517]: Session 6 logged out. Waiting for processes to exit. Feb 13 20:51:08.258695 systemd[1]: session-6.scope: Deactivated successfully. Feb 13 20:51:08.260399 systemd-logind[1517]: Removed session 6. Feb 13 20:51:13.271724 systemd[1]: Started sshd@6-10.0.0.7:22-10.0.0.1:35246.service - OpenSSH per-connection server daemon (10.0.0.1:35246). Feb 13 20:51:13.304595 sshd[2999]: Accepted publickey for core from 10.0.0.1 port 35246 ssh2: RSA SHA256:cMrn/QNbOj3/muQN24sDQmRTV2Hbz1QD+PbuA/uipHg Feb 13 20:51:13.305829 sshd[2999]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 20:51:13.309227 systemd-logind[1517]: New session 7 of user core. 
Feb 13 20:51:13.320721 systemd[1]: Started session-7.scope - Session 7 of User core. Feb 13 20:51:13.433142 sshd[2999]: pam_unix(sshd:session): session closed for user core Feb 13 20:51:13.435840 systemd-logind[1517]: Session 7 logged out. Waiting for processes to exit. Feb 13 20:51:13.437521 systemd[1]: sshd@6-10.0.0.7:22-10.0.0.1:35246.service: Deactivated successfully. Feb 13 20:51:13.439611 systemd[1]: session-7.scope: Deactivated successfully. Feb 13 20:51:13.441451 systemd-logind[1517]: Removed session 7. Feb 13 20:51:14.874348 kubelet[2663]: E0213 20:51:14.874280 2663 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 20:51:14.875675 containerd[1534]: time="2025-02-13T20:51:14.875246110Z" level=info msg="PullImage \"docker.io/flannel/flannel-cni-plugin:v1.1.2\"" Feb 13 20:51:15.985162 containerd[1534]: time="2025-02-13T20:51:15.985109532Z" level=error msg="PullImage \"docker.io/flannel/flannel-cni-plugin:v1.1.2\" failed" error="failed to pull and unpack image \"docker.io/flannel/flannel-cni-plugin:v1.1.2\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/flannel/flannel-cni-plugin/manifests/sha256:bf4b62b131666d040f35a327d906ee5a3418280b68a88d9b9c7e828057210443: 429 Too Many Requests - Server message: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit" Feb 13 20:51:15.985703 containerd[1534]: time="2025-02-13T20:51:15.985169453Z" level=info msg="stop pulling image docker.io/flannel/flannel-cni-plugin:v1.1.2: active requests=0, bytes read=11110" Feb 13 20:51:15.985742 kubelet[2663]: E0213 20:51:15.985295 2663 remote_image.go:180] "PullImage from image service failed" err="rpc error: code = Unknown desc = failed to pull and unpack image \"docker.io/flannel/flannel-cni-plugin:v1.1.2\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/flannel/flannel-cni-plugin/manifests/sha256:bf4b62b131666d040f35a327d906ee5a3418280b68a88d9b9c7e828057210443: 429 Too Many Requests - Server message: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit" image="docker.io/flannel/flannel-cni-plugin:v1.1.2" Feb 13 20:51:15.985742 kubelet[2663]: E0213 20:51:15.985339 2663 kuberuntime_image.go:55] "Failed to pull image" err="failed to pull and unpack image \"docker.io/flannel/flannel-cni-plugin:v1.1.2\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/flannel/flannel-cni-plugin/manifests/sha256:bf4b62b131666d040f35a327d906ee5a3418280b68a88d9b9c7e828057210443: 429 Too Many Requests - Server message: toomanyrequests: You have reached your pull rate limit. 
You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit" image="docker.io/flannel/flannel-cni-plugin:v1.1.2" Feb 13 20:51:15.985994 kubelet[2663]: E0213 20:51:15.985424 2663 kuberuntime_manager.go:1256] init container &Container{Name:install-cni-plugin,Image:docker.io/flannel/flannel-cni-plugin:v1.1.2,Command:[cp],Args:[-f /flannel /opt/cni/bin/flannel],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:cni-plugin,ReadOnly:false,MountPath:/opt/cni/bin,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-zfhf4,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod kube-flannel-ds-6mvg6_kube-flannel(cd690012-36a5-4d95-b540-563eafe34300): ErrImagePull: failed to pull and unpack image "docker.io/flannel/flannel-cni-plugin:v1.1.2": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/flannel/flannel-cni-plugin/manifests/sha256:bf4b62b131666d040f35a327d906ee5a3418280b68a88d9b9c7e828057210443: 429 Too Many Requests - Server message: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit Feb 13 20:51:15.986052 kubelet[2663]: E0213 20:51:15.985451 2663 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"install-cni-plugin\" with ErrImagePull: \"failed to pull and unpack image \\\"docker.io/flannel/flannel-cni-plugin:v1.1.2\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/flannel/flannel-cni-plugin/manifests/sha256:bf4b62b131666d040f35a327d906ee5a3418280b68a88d9b9c7e828057210443: 429 Too Many Requests - Server message: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit\"" pod="kube-flannel/kube-flannel-ds-6mvg6" podUID="cd690012-36a5-4d95-b540-563eafe34300" Feb 13 20:51:18.447701 systemd[1]: Started sshd@7-10.0.0.7:22-10.0.0.1:35254.service - OpenSSH per-connection server daemon (10.0.0.1:35254). Feb 13 20:51:18.480478 sshd[3015]: Accepted publickey for core from 10.0.0.1 port 35254 ssh2: RSA SHA256:cMrn/QNbOj3/muQN24sDQmRTV2Hbz1QD+PbuA/uipHg Feb 13 20:51:18.481743 sshd[3015]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 20:51:18.485578 systemd-logind[1517]: New session 8 of user core. Feb 13 20:51:18.492738 systemd[1]: Started session-8.scope - Session 8 of User core. Feb 13 20:51:18.597935 sshd[3015]: pam_unix(sshd:session): session closed for user core Feb 13 20:51:18.601323 systemd[1]: sshd@7-10.0.0.7:22-10.0.0.1:35254.service: Deactivated successfully. Feb 13 20:51:18.603336 systemd-logind[1517]: Session 8 logged out. Waiting for processes to exit. Feb 13 20:51:18.603608 systemd[1]: session-8.scope: Deactivated successfully. 
Feb 13 20:51:18.604944 systemd-logind[1517]: Removed session 8.
Feb 13 20:51:23.615710 systemd[1]: Started sshd@8-10.0.0.7:22-10.0.0.1:49516.service - OpenSSH per-connection server daemon (10.0.0.1:49516).
Feb 13 20:51:23.647840 sshd[3032]: Accepted publickey for core from 10.0.0.1 port 49516 ssh2: RSA SHA256:cMrn/QNbOj3/muQN24sDQmRTV2Hbz1QD+PbuA/uipHg
Feb 13 20:51:23.649055 sshd[3032]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 20:51:23.652841 systemd-logind[1517]: New session 9 of user core.
Feb 13 20:51:23.661733 systemd[1]: Started session-9.scope - Session 9 of User core.
Feb 13 20:51:23.767713 sshd[3032]: pam_unix(sshd:session): session closed for user core
Feb 13 20:51:23.771104 systemd[1]: sshd@8-10.0.0.7:22-10.0.0.1:49516.service: Deactivated successfully.
Feb 13 20:51:23.772889 systemd-logind[1517]: Session 9 logged out. Waiting for processes to exit.
Feb 13 20:51:23.772972 systemd[1]: session-9.scope: Deactivated successfully.
Feb 13 20:51:23.774233 systemd-logind[1517]: Removed session 9.
Feb 13 20:51:28.777723 systemd[1]: Started sshd@9-10.0.0.7:22-10.0.0.1:49528.service - OpenSSH per-connection server daemon (10.0.0.1:49528).
Feb 13 20:51:28.811739 sshd[3050]: Accepted publickey for core from 10.0.0.1 port 49528 ssh2: RSA SHA256:cMrn/QNbOj3/muQN24sDQmRTV2Hbz1QD+PbuA/uipHg
Feb 13 20:51:28.812886 sshd[3050]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 20:51:28.817026 systemd-logind[1517]: New session 10 of user core.
Feb 13 20:51:28.828884 systemd[1]: Started session-10.scope - Session 10 of User core.
Feb 13 20:51:28.874125 kubelet[2663]: E0213 20:51:28.874051 2663 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 13 20:51:28.875692 kubelet[2663]: E0213 20:51:28.875572 2663 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"install-cni-plugin\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/flannel/flannel-cni-plugin:v1.1.2\\\"\"" pod="kube-flannel/kube-flannel-ds-6mvg6" podUID="cd690012-36a5-4d95-b540-563eafe34300"
Feb 13 20:51:28.937037 sshd[3050]: pam_unix(sshd:session): session closed for user core
Feb 13 20:51:28.940447 systemd[1]: sshd@9-10.0.0.7:22-10.0.0.1:49528.service: Deactivated successfully.
Feb 13 20:51:28.943343 systemd-logind[1517]: Session 10 logged out. Waiting for processes to exit.
Feb 13 20:51:28.944714 systemd[1]: session-10.scope: Deactivated successfully.
Feb 13 20:51:28.945924 systemd-logind[1517]: Removed session 10.
Feb 13 20:51:33.951722 systemd[1]: Started sshd@10-10.0.0.7:22-10.0.0.1:41470.service - OpenSSH per-connection server daemon (10.0.0.1:41470).
Feb 13 20:51:33.983917 sshd[3066]: Accepted publickey for core from 10.0.0.1 port 41470 ssh2: RSA SHA256:cMrn/QNbOj3/muQN24sDQmRTV2Hbz1QD+PbuA/uipHg
Feb 13 20:51:33.985206 sshd[3066]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 20:51:33.988797 systemd-logind[1517]: New session 11 of user core.
Feb 13 20:51:33.998806 systemd[1]: Started session-11.scope - Session 11 of User core.
Feb 13 20:51:34.105295 sshd[3066]: pam_unix(sshd:session): session closed for user core
Feb 13 20:51:34.108642 systemd[1]: sshd@10-10.0.0.7:22-10.0.0.1:41470.service: Deactivated successfully.
Feb 13 20:51:34.110525 systemd-logind[1517]: Session 11 logged out. Waiting for processes to exit.
Feb 13 20:51:34.110585 systemd[1]: session-11.scope: Deactivated successfully.
Feb 13 20:51:34.111408 systemd-logind[1517]: Removed session 11.
Feb 13 20:51:39.114739 systemd[1]: Started sshd@11-10.0.0.7:22-10.0.0.1:41474.service - OpenSSH per-connection server daemon (10.0.0.1:41474).
Feb 13 20:51:39.146684 sshd[3082]: Accepted publickey for core from 10.0.0.1 port 41474 ssh2: RSA SHA256:cMrn/QNbOj3/muQN24sDQmRTV2Hbz1QD+PbuA/uipHg
Feb 13 20:51:39.147895 sshd[3082]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 20:51:39.151837 systemd-logind[1517]: New session 12 of user core.
Feb 13 20:51:39.163791 systemd[1]: Started session-12.scope - Session 12 of User core.
Feb 13 20:51:39.267162 sshd[3082]: pam_unix(sshd:session): session closed for user core
Feb 13 20:51:39.269591 systemd[1]: sshd@11-10.0.0.7:22-10.0.0.1:41474.service: Deactivated successfully.
Feb 13 20:51:39.272308 systemd[1]: session-12.scope: Deactivated successfully.
Feb 13 20:51:39.272439 systemd-logind[1517]: Session 12 logged out. Waiting for processes to exit.
Feb 13 20:51:39.273978 systemd-logind[1517]: Removed session 12.
Feb 13 20:51:42.873676 kubelet[2663]: E0213 20:51:42.873632 2663 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 13 20:51:42.874837 containerd[1534]: time="2025-02-13T20:51:42.874740477Z" level=info msg="PullImage \"docker.io/flannel/flannel-cni-plugin:v1.1.2\""
Feb 13 20:51:43.990174 containerd[1534]: time="2025-02-13T20:51:43.990125486Z" level=error msg="PullImage \"docker.io/flannel/flannel-cni-plugin:v1.1.2\" failed" error="failed to pull and unpack image \"docker.io/flannel/flannel-cni-plugin:v1.1.2\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/flannel/flannel-cni-plugin/manifests/sha256:bf4b62b131666d040f35a327d906ee5a3418280b68a88d9b9c7e828057210443: 429 Too Many Requests - Server message: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit"
Feb 13 20:51:43.990579 containerd[1534]: time="2025-02-13T20:51:43.990210327Z" level=info msg="stop pulling image docker.io/flannel/flannel-cni-plugin:v1.1.2: active requests=0, bytes read=11109"
Feb 13 20:51:43.990615 kubelet[2663]: E0213 20:51:43.990308 2663 remote_image.go:180] "PullImage from image service failed" err="rpc error: code = Unknown desc = failed to pull and unpack image \"docker.io/flannel/flannel-cni-plugin:v1.1.2\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/flannel/flannel-cni-plugin/manifests/sha256:bf4b62b131666d040f35a327d906ee5a3418280b68a88d9b9c7e828057210443: 429 Too Many Requests - Server message: toomanyrequests: You have reached your pull rate limit.
You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit" image="docker.io/flannel/flannel-cni-plugin:v1.1.2"
Feb 13 20:51:43.990615 kubelet[2663]: E0213 20:51:43.990360 2663 kuberuntime_image.go:55] "Failed to pull image" err="failed to pull and unpack image \"docker.io/flannel/flannel-cni-plugin:v1.1.2\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/flannel/flannel-cni-plugin/manifests/sha256:bf4b62b131666d040f35a327d906ee5a3418280b68a88d9b9c7e828057210443: 429 Too Many Requests - Server message: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit" image="docker.io/flannel/flannel-cni-plugin:v1.1.2"
Feb 13 20:51:43.991882 kubelet[2663]: E0213 20:51:43.990437 2663 kuberuntime_manager.go:1256] init container &Container{Name:install-cni-plugin,Image:docker.io/flannel/flannel-cni-plugin:v1.1.2,Command:[cp],Args:[-f /flannel /opt/cni/bin/flannel],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:cni-plugin,ReadOnly:false,MountPath:/opt/cni/bin,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-zfhf4,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod kube-flannel-ds-6mvg6_kube-flannel(cd690012-36a5-4d95-b540-563eafe34300): ErrImagePull: failed to pull and unpack image "docker.io/flannel/flannel-cni-plugin:v1.1.2": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/flannel/flannel-cni-plugin/manifests/sha256:bf4b62b131666d040f35a327d906ee5a3418280b68a88d9b9c7e828057210443: 429 Too Many Requests - Server message: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit
Feb 13 20:51:43.991959 kubelet[2663]: E0213 20:51:43.990465 2663 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"install-cni-plugin\" with ErrImagePull: \"failed to pull and unpack image \\\"docker.io/flannel/flannel-cni-plugin:v1.1.2\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/flannel/flannel-cni-plugin/manifests/sha256:bf4b62b131666d040f35a327d906ee5a3418280b68a88d9b9c7e828057210443: 429 Too Many Requests - Server message: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit\"" pod="kube-flannel/kube-flannel-ds-6mvg6" podUID="cd690012-36a5-4d95-b540-563eafe34300"
Feb 13 20:51:44.280084 systemd[1]: Started sshd@12-10.0.0.7:22-10.0.0.1:43794.service - OpenSSH per-connection server daemon (10.0.0.1:43794).
Feb 13 20:51:44.311871 sshd[3101]: Accepted publickey for core from 10.0.0.1 port 43794 ssh2: RSA SHA256:cMrn/QNbOj3/muQN24sDQmRTV2Hbz1QD+PbuA/uipHg
Feb 13 20:51:44.312986 sshd[3101]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 20:51:44.317620 systemd-logind[1517]: New session 13 of user core.
Feb 13 20:51:44.328814 systemd[1]: Started session-13.scope - Session 13 of User core.
Feb 13 20:51:44.435685 sshd[3101]: pam_unix(sshd:session): session closed for user core
Feb 13 20:51:44.438840 systemd[1]: sshd@12-10.0.0.7:22-10.0.0.1:43794.service: Deactivated successfully.
Feb 13 20:51:44.441685 systemd[1]: session-13.scope: Deactivated successfully.
Feb 13 20:51:44.441694 systemd-logind[1517]: Session 13 logged out. Waiting for processes to exit.
Feb 13 20:51:44.442617 systemd-logind[1517]: Removed session 13.
Feb 13 20:51:49.446711 systemd[1]: Started sshd@13-10.0.0.7:22-10.0.0.1:43796.service - OpenSSH per-connection server daemon (10.0.0.1:43796).
Feb 13 20:51:49.478841 sshd[3117]: Accepted publickey for core from 10.0.0.1 port 43796 ssh2: RSA SHA256:cMrn/QNbOj3/muQN24sDQmRTV2Hbz1QD+PbuA/uipHg
Feb 13 20:51:49.480081 sshd[3117]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 20:51:49.483713 systemd-logind[1517]: New session 14 of user core.
Feb 13 20:51:49.489791 systemd[1]: Started session-14.scope - Session 14 of User core.
Feb 13 20:51:49.594682 sshd[3117]: pam_unix(sshd:session): session closed for user core
Feb 13 20:51:49.597685 systemd[1]: sshd@13-10.0.0.7:22-10.0.0.1:43796.service: Deactivated successfully.
Feb 13 20:51:49.599398 systemd-logind[1517]: Session 14 logged out. Waiting for processes to exit.
Feb 13 20:51:49.599455 systemd[1]: session-14.scope: Deactivated successfully.
Feb 13 20:51:49.600515 systemd-logind[1517]: Removed session 14.
Feb 13 20:51:51.874227 kubelet[2663]: E0213 20:51:51.874135 2663 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 13 20:51:54.604712 systemd[1]: Started sshd@14-10.0.0.7:22-10.0.0.1:35198.service - OpenSSH per-connection server daemon (10.0.0.1:35198).
Feb 13 20:51:54.637418 sshd[3133]: Accepted publickey for core from 10.0.0.1 port 35198 ssh2: RSA SHA256:cMrn/QNbOj3/muQN24sDQmRTV2Hbz1QD+PbuA/uipHg
Feb 13 20:51:54.638544 sshd[3133]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 20:51:54.642128 systemd-logind[1517]: New session 15 of user core.
Feb 13 20:51:54.651735 systemd[1]: Started session-15.scope - Session 15 of User core.
Feb 13 20:51:54.765247 sshd[3133]: pam_unix(sshd:session): session closed for user core
Feb 13 20:51:54.769030 systemd[1]: sshd@14-10.0.0.7:22-10.0.0.1:35198.service: Deactivated successfully.
Feb 13 20:51:54.772137 systemd[1]: session-15.scope: Deactivated successfully.
Feb 13 20:51:54.772866 systemd-logind[1517]: Session 15 logged out. Waiting for processes to exit.
Feb 13 20:51:54.773642 systemd-logind[1517]: Removed session 15.
Feb 13 20:51:54.874563 kubelet[2663]: E0213 20:51:54.874401 2663 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 13 20:51:58.874181 kubelet[2663]: E0213 20:51:58.874133 2663 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 13 20:51:58.874955 kubelet[2663]: E0213 20:51:58.874878 2663 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"install-cni-plugin\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/flannel/flannel-cni-plugin:v1.1.2\\\"\"" pod="kube-flannel/kube-flannel-ds-6mvg6" podUID="cd690012-36a5-4d95-b540-563eafe34300"
Feb 13 20:51:59.780708 systemd[1]: Started sshd@15-10.0.0.7:22-10.0.0.1:35202.service - OpenSSH per-connection server daemon (10.0.0.1:35202).
Feb 13 20:51:59.814045 sshd[3151]: Accepted publickey for core from 10.0.0.1 port 35202 ssh2: RSA SHA256:cMrn/QNbOj3/muQN24sDQmRTV2Hbz1QD+PbuA/uipHg
Feb 13 20:51:59.815177 sshd[3151]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 20:51:59.819032 systemd-logind[1517]: New session 16 of user core.
Feb 13 20:51:59.832290 systemd[1]: Started session-16.scope - Session 16 of User core.
Feb 13 20:51:59.936051 sshd[3151]: pam_unix(sshd:session): session closed for user core
Feb 13 20:51:59.938942 systemd[1]: sshd@15-10.0.0.7:22-10.0.0.1:35202.service: Deactivated successfully.
Feb 13 20:51:59.941320 systemd-logind[1517]: Session 16 logged out. Waiting for processes to exit.
Feb 13 20:51:59.941906 systemd[1]: session-16.scope: Deactivated successfully.
Feb 13 20:51:59.943133 systemd-logind[1517]: Removed session 16.
Feb 13 20:52:04.950725 systemd[1]: Started sshd@16-10.0.0.7:22-10.0.0.1:52002.service - OpenSSH per-connection server daemon (10.0.0.1:52002).
Feb 13 20:52:04.982983 sshd[3167]: Accepted publickey for core from 10.0.0.1 port 52002 ssh2: RSA SHA256:cMrn/QNbOj3/muQN24sDQmRTV2Hbz1QD+PbuA/uipHg
Feb 13 20:52:04.984178 sshd[3167]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 20:52:04.987599 systemd-logind[1517]: New session 17 of user core.
Feb 13 20:52:04.994702 systemd[1]: Started session-17.scope - Session 17 of User core.
Feb 13 20:52:05.099511 sshd[3167]: pam_unix(sshd:session): session closed for user core
Feb 13 20:52:05.102659 systemd[1]: sshd@16-10.0.0.7:22-10.0.0.1:52002.service: Deactivated successfully.
Feb 13 20:52:05.105203 systemd-logind[1517]: Session 17 logged out. Waiting for processes to exit.
Feb 13 20:52:05.105302 systemd[1]: session-17.scope: Deactivated successfully.
Feb 13 20:52:05.106323 systemd-logind[1517]: Removed session 17.
Feb 13 20:52:09.874563 kubelet[2663]: E0213 20:52:09.874477 2663 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 13 20:52:09.876897 kubelet[2663]: E0213 20:52:09.876206 2663 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"install-cni-plugin\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/flannel/flannel-cni-plugin:v1.1.2\\\"\"" pod="kube-flannel/kube-flannel-ds-6mvg6" podUID="cd690012-36a5-4d95-b540-563eafe34300"
Feb 13 20:52:10.110724 systemd[1]: Started sshd@17-10.0.0.7:22-10.0.0.1:52008.service - OpenSSH per-connection server daemon (10.0.0.1:52008).
Feb 13 20:52:10.142934 sshd[3184]: Accepted publickey for core from 10.0.0.1 port 52008 ssh2: RSA SHA256:cMrn/QNbOj3/muQN24sDQmRTV2Hbz1QD+PbuA/uipHg
Feb 13 20:52:10.144137 sshd[3184]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 20:52:10.147569 systemd-logind[1517]: New session 18 of user core.
Feb 13 20:52:10.153728 systemd[1]: Started session-18.scope - Session 18 of User core.
Feb 13 20:52:10.257444 sshd[3184]: pam_unix(sshd:session): session closed for user core
Feb 13 20:52:10.260172 systemd[1]: sshd@17-10.0.0.7:22-10.0.0.1:52008.service: Deactivated successfully.
Feb 13 20:52:10.262820 systemd[1]: session-18.scope: Deactivated successfully.
Feb 13 20:52:10.263356 systemd-logind[1517]: Session 18 logged out. Waiting for processes to exit.
Feb 13 20:52:10.265214 systemd-logind[1517]: Removed session 18.
Feb 13 20:52:10.874278 kubelet[2663]: E0213 20:52:10.874188 2663 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 13 20:52:15.273707 systemd[1]: Started sshd@18-10.0.0.7:22-10.0.0.1:48700.service - OpenSSH per-connection server daemon (10.0.0.1:48700).
Feb 13 20:52:15.305397 sshd[3201]: Accepted publickey for core from 10.0.0.1 port 48700 ssh2: RSA SHA256:cMrn/QNbOj3/muQN24sDQmRTV2Hbz1QD+PbuA/uipHg
Feb 13 20:52:15.306514 sshd[3201]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 20:52:15.309862 systemd-logind[1517]: New session 19 of user core.
Feb 13 20:52:15.320699 systemd[1]: Started session-19.scope - Session 19 of User core.
Feb 13 20:52:15.423161 sshd[3201]: pam_unix(sshd:session): session closed for user core
Feb 13 20:52:15.426062 systemd[1]: sshd@18-10.0.0.7:22-10.0.0.1:48700.service: Deactivated successfully.
Feb 13 20:52:15.427941 systemd-logind[1517]: Session 19 logged out. Waiting for processes to exit.
Feb 13 20:52:15.428020 systemd[1]: session-19.scope: Deactivated successfully.
Feb 13 20:52:15.429646 systemd-logind[1517]: Removed session 19.
Feb 13 20:52:15.874849 kubelet[2663]: E0213 20:52:15.874463 2663 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 13 20:52:20.436710 systemd[1]: Started sshd@19-10.0.0.7:22-10.0.0.1:48704.service - OpenSSH per-connection server daemon (10.0.0.1:48704).
Feb 13 20:52:20.471780 sshd[3217]: Accepted publickey for core from 10.0.0.1 port 48704 ssh2: RSA SHA256:cMrn/QNbOj3/muQN24sDQmRTV2Hbz1QD+PbuA/uipHg
Feb 13 20:52:20.472909 sshd[3217]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 20:52:20.476396 systemd-logind[1517]: New session 20 of user core.
Feb 13 20:52:20.486710 systemd[1]: Started session-20.scope - Session 20 of User core.
Feb 13 20:52:20.593871 sshd[3217]: pam_unix(sshd:session): session closed for user core
Feb 13 20:52:20.596757 systemd[1]: sshd@19-10.0.0.7:22-10.0.0.1:48704.service: Deactivated successfully.
Feb 13 20:52:20.598660 systemd[1]: session-20.scope: Deactivated successfully.
Feb 13 20:52:20.599229 systemd-logind[1517]: Session 20 logged out. Waiting for processes to exit.
Feb 13 20:52:20.600098 systemd-logind[1517]: Removed session 20.
Feb 13 20:52:20.873559 kubelet[2663]: E0213 20:52:20.873477 2663 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 13 20:52:20.874037 kubelet[2663]: E0213 20:52:20.874001 2663 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"install-cni-plugin\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/flannel/flannel-cni-plugin:v1.1.2\\\"\"" pod="kube-flannel/kube-flannel-ds-6mvg6" podUID="cd690012-36a5-4d95-b540-563eafe34300"
Feb 13 20:52:25.608700 systemd[1]: Started sshd@20-10.0.0.7:22-10.0.0.1:60952.service - OpenSSH per-connection server daemon (10.0.0.1:60952).
Feb 13 20:52:25.640694 sshd[3233]: Accepted publickey for core from 10.0.0.1 port 60952 ssh2: RSA SHA256:cMrn/QNbOj3/muQN24sDQmRTV2Hbz1QD+PbuA/uipHg
Feb 13 20:52:25.641832 sshd[3233]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 20:52:25.645544 systemd-logind[1517]: New session 21 of user core.
Feb 13 20:52:25.655715 systemd[1]: Started session-21.scope - Session 21 of User core.
Feb 13 20:52:25.758850 sshd[3233]: pam_unix(sshd:session): session closed for user core
Feb 13 20:52:25.762238 systemd[1]: sshd@20-10.0.0.7:22-10.0.0.1:60952.service: Deactivated successfully.
Feb 13 20:52:25.763975 systemd-logind[1517]: Session 21 logged out. Waiting for processes to exit.
Feb 13 20:52:25.764043 systemd[1]: session-21.scope: Deactivated successfully.
Feb 13 20:52:25.765042 systemd-logind[1517]: Removed session 21.
Feb 13 20:52:30.776756 systemd[1]: Started sshd@21-10.0.0.7:22-10.0.0.1:60958.service - OpenSSH per-connection server daemon (10.0.0.1:60958).
Feb 13 20:52:30.809566 sshd[3251]: Accepted publickey for core from 10.0.0.1 port 60958 ssh2: RSA SHA256:cMrn/QNbOj3/muQN24sDQmRTV2Hbz1QD+PbuA/uipHg
Feb 13 20:52:30.810780 sshd[3251]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 20:52:30.814921 systemd-logind[1517]: New session 22 of user core.
Feb 13 20:52:30.822831 systemd[1]: Started session-22.scope - Session 22 of User core.
Feb 13 20:52:30.933046 sshd[3251]: pam_unix(sshd:session): session closed for user core
Feb 13 20:52:30.938721 systemd[1]: sshd@21-10.0.0.7:22-10.0.0.1:60958.service: Deactivated successfully.
Feb 13 20:52:30.940975 systemd[1]: session-22.scope: Deactivated successfully.
Feb 13 20:52:30.941635 systemd-logind[1517]: Session 22 logged out. Waiting for processes to exit.
Feb 13 20:52:30.942533 systemd-logind[1517]: Removed session 22.
Feb 13 20:52:34.874429 kubelet[2663]: E0213 20:52:34.874233 2663 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 13 20:52:34.875205 containerd[1534]: time="2025-02-13T20:52:34.875152013Z" level=info msg="PullImage \"docker.io/flannel/flannel-cni-plugin:v1.1.2\""
Feb 13 20:52:35.946936 systemd[1]: Started sshd@22-10.0.0.7:22-10.0.0.1:55860.service - OpenSSH per-connection server daemon (10.0.0.1:55860).
Feb 13 20:52:35.979676 sshd[3267]: Accepted publickey for core from 10.0.0.1 port 55860 ssh2: RSA SHA256:cMrn/QNbOj3/muQN24sDQmRTV2Hbz1QD+PbuA/uipHg
Feb 13 20:52:35.980806 sshd[3267]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 20:52:35.984342 systemd-logind[1517]: New session 23 of user core.
Feb 13 20:52:35.990765 containerd[1534]: time="2025-02-13T20:52:35.990714599Z" level=error msg="PullImage \"docker.io/flannel/flannel-cni-plugin:v1.1.2\" failed" error="failed to pull and unpack image \"docker.io/flannel/flannel-cni-plugin:v1.1.2\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/flannel/flannel-cni-plugin/manifests/sha256:bf4b62b131666d040f35a327d906ee5a3418280b68a88d9b9c7e828057210443: 429 Too Many Requests - Server message: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit"
Feb 13 20:52:35.992133 containerd[1534]: time="2025-02-13T20:52:35.990737719Z" level=info msg="stop pulling image docker.io/flannel/flannel-cni-plugin:v1.1.2: active requests=0, bytes read=11110"
Feb 13 20:52:35.991740 systemd[1]: Started session-23.scope - Session 23 of User core.
Feb 13 20:52:35.992230 kubelet[2663]: E0213 20:52:35.990937 2663 remote_image.go:180] "PullImage from image service failed" err="rpc error: code = Unknown desc = failed to pull and unpack image \"docker.io/flannel/flannel-cni-plugin:v1.1.2\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/flannel/flannel-cni-plugin/manifests/sha256:bf4b62b131666d040f35a327d906ee5a3418280b68a88d9b9c7e828057210443: 429 Too Many Requests - Server message: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit" image="docker.io/flannel/flannel-cni-plugin:v1.1.2"
Feb 13 20:52:35.992230 kubelet[2663]: E0213 20:52:35.990997 2663 kuberuntime_image.go:55] "Failed to pull image" err="failed to pull and unpack image \"docker.io/flannel/flannel-cni-plugin:v1.1.2\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/flannel/flannel-cni-plugin/manifests/sha256:bf4b62b131666d040f35a327d906ee5a3418280b68a88d9b9c7e828057210443: 429 Too Many Requests - Server message: toomanyrequests: You have reached your pull rate limit.
You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit" image="docker.io/flannel/flannel-cni-plugin:v1.1.2"
Feb 13 20:52:35.992510 kubelet[2663]: E0213 20:52:35.991078 2663 kuberuntime_manager.go:1256] init container &Container{Name:install-cni-plugin,Image:docker.io/flannel/flannel-cni-plugin:v1.1.2,Command:[cp],Args:[-f /flannel /opt/cni/bin/flannel],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:cni-plugin,ReadOnly:false,MountPath:/opt/cni/bin,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-zfhf4,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod kube-flannel-ds-6mvg6_kube-flannel(cd690012-36a5-4d95-b540-563eafe34300): ErrImagePull: failed to pull and unpack image "docker.io/flannel/flannel-cni-plugin:v1.1.2": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/flannel/flannel-cni-plugin/manifests/sha256:bf4b62b131666d040f35a327d906ee5a3418280b68a88d9b9c7e828057210443: 429 Too Many Requests - Server message: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit
Feb 13 20:52:35.992592 kubelet[2663]: E0213 20:52:35.991110 2663 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"install-cni-plugin\" with ErrImagePull: \"failed to pull and unpack image \\\"docker.io/flannel/flannel-cni-plugin:v1.1.2\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/flannel/flannel-cni-plugin/manifests/sha256:bf4b62b131666d040f35a327d906ee5a3418280b68a88d9b9c7e828057210443: 429 Too Many Requests - Server message: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit\"" pod="kube-flannel/kube-flannel-ds-6mvg6" podUID="cd690012-36a5-4d95-b540-563eafe34300"
Feb 13 20:52:36.101781 sshd[3267]: pam_unix(sshd:session): session closed for user core
Feb 13 20:52:36.104926 systemd[1]: sshd@22-10.0.0.7:22-10.0.0.1:55860.service: Deactivated successfully.
Feb 13 20:52:36.108409 systemd[1]: session-23.scope: Deactivated successfully.
Feb 13 20:52:36.109090 systemd-logind[1517]: Session 23 logged out. Waiting for processes to exit.
Feb 13 20:52:36.109860 systemd-logind[1517]: Removed session 23.
Feb 13 20:52:41.114721 systemd[1]: Started sshd@23-10.0.0.7:22-10.0.0.1:55870.service - OpenSSH per-connection server daemon (10.0.0.1:55870).
Feb 13 20:52:41.146854 sshd[3284]: Accepted publickey for core from 10.0.0.1 port 55870 ssh2: RSA SHA256:cMrn/QNbOj3/muQN24sDQmRTV2Hbz1QD+PbuA/uipHg
Feb 13 20:52:41.147985 sshd[3284]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 20:52:41.151436 systemd-logind[1517]: New session 24 of user core.
Feb 13 20:52:41.157711 systemd[1]: Started session-24.scope - Session 24 of User core.
Feb 13 20:52:41.263978 sshd[3284]: pam_unix(sshd:session): session closed for user core
Feb 13 20:52:41.266929 systemd-logind[1517]: Session 24 logged out. Waiting for processes to exit.
Feb 13 20:52:41.267076 systemd[1]: sshd@23-10.0.0.7:22-10.0.0.1:55870.service: Deactivated successfully.
Feb 13 20:52:41.269336 systemd[1]: session-24.scope: Deactivated successfully.
Feb 13 20:52:41.270111 systemd-logind[1517]: Removed session 24.
Feb 13 20:52:41.915412 kubelet[2663]: E0213 20:52:41.915383 2663 kubelet_node_status.go:456] "Node not becoming ready in time after startup"
Feb 13 20:52:41.946365 kubelet[2663]: E0213 20:52:41.946328 2663 kubelet.go:2900] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Feb 13 20:52:45.114505 update_engine[1518]: I20250213 20:52:45.114435 1518 prefs.cc:52] certificate-report-to-send-update not present in /var/lib/update_engine/prefs
Feb 13 20:52:45.114505 update_engine[1518]: I20250213 20:52:45.114513 1518 prefs.cc:52] certificate-report-to-send-download not present in /var/lib/update_engine/prefs
Feb 13 20:52:45.114935 update_engine[1518]: I20250213 20:52:45.114753 1518 prefs.cc:52] aleph-version not present in /var/lib/update_engine/prefs
Feb 13 20:52:45.115531 update_engine[1518]: I20250213 20:52:45.115108 1518 omaha_request_params.cc:62] Current group set to lts
Feb 13 20:52:45.115531 update_engine[1518]: I20250213 20:52:45.115200 1518 update_attempter.cc:499] Already updated boot flags. Skipping.
Feb 13 20:52:45.115531 update_engine[1518]: I20250213 20:52:45.115211 1518 update_attempter.cc:643] Scheduling an action processor start.
Feb 13 20:52:45.115531 update_engine[1518]: I20250213 20:52:45.115226 1518 action_processor.cc:36] ActionProcessor::StartProcessing: OmahaRequestAction
Feb 13 20:52:45.115531 update_engine[1518]: I20250213 20:52:45.115254 1518 prefs.cc:52] previous-version not present in /var/lib/update_engine/prefs
Feb 13 20:52:45.115531 update_engine[1518]: I20250213 20:52:45.115300 1518 omaha_request_action.cc:271] Posting an Omaha request to disabled
Feb 13 20:52:45.115531 update_engine[1518]: I20250213 20:52:45.115307 1518 omaha_request_action.cc:272] Request:
Feb 13 20:52:45.115531 update_engine[1518]:
Feb 13 20:52:45.115531 update_engine[1518]:
Feb 13 20:52:45.115531 update_engine[1518]:
Feb 13 20:52:45.115531 update_engine[1518]:
Feb 13 20:52:45.115531 update_engine[1518]:
Feb 13 20:52:45.115531 update_engine[1518]:
Feb 13 20:52:45.115531 update_engine[1518]:
Feb 13 20:52:45.115531 update_engine[1518]:
Feb 13 20:52:45.115531 update_engine[1518]: I20250213 20:52:45.115314 1518 libcurl_http_fetcher.cc:47] Starting/Resuming transfer
Feb 13 20:52:45.115879 locksmithd[1560]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_CHECKING_FOR_UPDATE" NewVersion=0.0.0 NewSize=0
Feb 13 20:52:45.116378 update_engine[1518]: I20250213 20:52:45.116338 1518 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP
Feb 13 20:52:45.116628 update_engine[1518]: I20250213 20:52:45.116595 1518 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds.
Feb 13 20:52:45.127609 update_engine[1518]: E20250213 20:52:45.127564 1518 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled
Feb 13 20:52:45.127671 update_engine[1518]: I20250213 20:52:45.127632 1518 libcurl_http_fetcher.cc:283] No HTTP response, retry 1
Feb 13 20:52:46.277720 systemd[1]: Started sshd@24-10.0.0.7:22-10.0.0.1:33978.service - OpenSSH per-connection server daemon (10.0.0.1:33978).
Feb 13 20:52:46.310297 sshd[3303]: Accepted publickey for core from 10.0.0.1 port 33978 ssh2: RSA SHA256:cMrn/QNbOj3/muQN24sDQmRTV2Hbz1QD+PbuA/uipHg
Feb 13 20:52:46.311391 sshd[3303]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 20:52:46.314847 systemd-logind[1517]: New session 25 of user core.
Feb 13 20:52:46.321690 systemd[1]: Started session-25.scope - Session 25 of User core.
Feb 13 20:52:46.426997 sshd[3303]: pam_unix(sshd:session): session closed for user core
Feb 13 20:52:46.430636 systemd[1]: sshd@24-10.0.0.7:22-10.0.0.1:33978.service: Deactivated successfully.
Feb 13 20:52:46.432430 systemd-logind[1517]: Session 25 logged out. Waiting for processes to exit.
Feb 13 20:52:46.432554 systemd[1]: session-25.scope: Deactivated successfully.
Feb 13 20:52:46.433347 systemd-logind[1517]: Removed session 25.
Feb 13 20:52:46.874143 kubelet[2663]: E0213 20:52:46.874114 2663 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 13 20:52:46.874957 kubelet[2663]: E0213 20:52:46.874913 2663 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"install-cni-plugin\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/flannel/flannel-cni-plugin:v1.1.2\\\"\"" pod="kube-flannel/kube-flannel-ds-6mvg6" podUID="cd690012-36a5-4d95-b540-563eafe34300"
Feb 13 20:52:46.947743 kubelet[2663]: E0213 20:52:46.947686 2663 kubelet.go:2900] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Feb 13 20:52:51.437702 systemd[1]: Started sshd@25-10.0.0.7:22-10.0.0.1:33994.service - OpenSSH per-connection server daemon (10.0.0.1:33994).
Feb 13 20:52:51.469939 sshd[3320]: Accepted publickey for core from 10.0.0.1 port 33994 ssh2: RSA SHA256:cMrn/QNbOj3/muQN24sDQmRTV2Hbz1QD+PbuA/uipHg
Feb 13 20:52:51.471081 sshd[3320]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 20:52:51.475029 systemd-logind[1517]: New session 26 of user core.
Feb 13 20:52:51.483795 systemd[1]: Started session-26.scope - Session 26 of User core.
Feb 13 20:52:51.589750 sshd[3320]: pam_unix(sshd:session): session closed for user core
Feb 13 20:52:51.592692 systemd[1]: sshd@25-10.0.0.7:22-10.0.0.1:33994.service: Deactivated successfully.
Feb 13 20:52:51.595094 systemd[1]: session-26.scope: Deactivated successfully.
Feb 13 20:52:51.595660 systemd-logind[1517]: Session 26 logged out. Waiting for processes to exit.
Feb 13 20:52:51.596406 systemd-logind[1517]: Removed session 26.
Feb 13 20:52:51.948414 kubelet[2663]: E0213 20:52:51.948383 2663 kubelet.go:2900] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Feb 13 20:52:55.113258 update_engine[1518]: I20250213 20:52:55.113102 1518 libcurl_http_fetcher.cc:47] Starting/Resuming transfer
Feb 13 20:52:55.113714 update_engine[1518]: I20250213 20:52:55.113396 1518 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP
Feb 13 20:52:55.113714 update_engine[1518]: I20250213 20:52:55.113590 1518 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds.
Feb 13 20:52:55.154046 update_engine[1518]: E20250213 20:52:55.153987 1518 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled
Feb 13 20:52:55.154130 update_engine[1518]: I20250213 20:52:55.154064 1518 libcurl_http_fetcher.cc:283] No HTTP response, retry 2
Feb 13 20:52:56.604783 systemd[1]: Started sshd@26-10.0.0.7:22-10.0.0.1:47132.service - OpenSSH per-connection server daemon (10.0.0.1:47132).
Feb 13 20:52:56.636796 sshd[3337]: Accepted publickey for core from 10.0.0.1 port 47132 ssh2: RSA SHA256:cMrn/QNbOj3/muQN24sDQmRTV2Hbz1QD+PbuA/uipHg
Feb 13 20:52:56.638290 sshd[3337]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 20:52:56.642132 systemd-logind[1517]: New session 27 of user core.
Feb 13 20:52:56.649707 systemd[1]: Started session-27.scope - Session 27 of User core.
Feb 13 20:52:56.756158 sshd[3337]: pam_unix(sshd:session): session closed for user core
Feb 13 20:52:56.759334 systemd[1]: sshd@26-10.0.0.7:22-10.0.0.1:47132.service: Deactivated successfully.
Feb 13 20:52:56.761356 systemd[1]: session-27.scope: Deactivated successfully.
Feb 13 20:52:56.761713 systemd-logind[1517]: Session 27 logged out. Waiting for processes to exit.
Feb 13 20:52:56.762715 systemd-logind[1517]: Removed session 27.
Feb 13 20:52:56.950455 kubelet[2663]: E0213 20:52:56.950035 2663 kubelet.go:2900] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Feb 13 20:52:57.874057 kubelet[2663]: E0213 20:52:57.874008 2663 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 13 20:52:57.875232 kubelet[2663]: E0213 20:52:57.875161 2663 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"install-cni-plugin\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/flannel/flannel-cni-plugin:v1.1.2\\\"\"" pod="kube-flannel/kube-flannel-ds-6mvg6" podUID="cd690012-36a5-4d95-b540-563eafe34300"
Feb 13 20:53:01.766721 systemd[1]: Started sshd@27-10.0.0.7:22-10.0.0.1:47136.service - OpenSSH per-connection server daemon (10.0.0.1:47136).
Feb 13 20:53:01.798564 sshd[3356]: Accepted publickey for core from 10.0.0.1 port 47136 ssh2: RSA SHA256:cMrn/QNbOj3/muQN24sDQmRTV2Hbz1QD+PbuA/uipHg
Feb 13 20:53:01.799728 sshd[3356]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 20:53:01.804852 systemd-logind[1517]: New session 28 of user core.
Feb 13 20:53:01.824000 systemd[1]: Started session-28.scope - Session 28 of User core.
Feb 13 20:53:01.928471 sshd[3356]: pam_unix(sshd:session): session closed for user core
Feb 13 20:53:01.931470 systemd[1]: sshd@27-10.0.0.7:22-10.0.0.1:47136.service: Deactivated successfully.
Feb 13 20:53:01.933586 systemd-logind[1517]: Session 28 logged out. Waiting for processes to exit.
Feb 13 20:53:01.933660 systemd[1]: session-28.scope: Deactivated successfully.
Feb 13 20:53:01.934478 systemd-logind[1517]: Removed session 28.
Feb 13 20:53:01.950924 kubelet[2663]: E0213 20:53:01.950884 2663 kubelet.go:2900] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Feb 13 20:53:05.113657 update_engine[1518]: I20250213 20:53:05.113571 1518 libcurl_http_fetcher.cc:47] Starting/Resuming transfer
Feb 13 20:53:05.114093 update_engine[1518]: I20250213 20:53:05.113865 1518 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP
Feb 13 20:53:05.114093 update_engine[1518]: I20250213 20:53:05.114022 1518 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds.
Feb 13 20:53:05.122404 update_engine[1518]: E20250213 20:53:05.122370 1518 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled
Feb 13 20:53:05.122469 update_engine[1518]: I20250213 20:53:05.122443 1518 libcurl_http_fetcher.cc:283] No HTTP response, retry 3
Feb 13 20:53:06.939719 systemd[1]: Started sshd@28-10.0.0.7:22-10.0.0.1:39904.service - OpenSSH per-connection server daemon (10.0.0.1:39904).
Feb 13 20:53:06.952376 kubelet[2663]: E0213 20:53:06.952328 2663 kubelet.go:2900] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Feb 13 20:53:06.971969 sshd[3373]: Accepted publickey for core from 10.0.0.1 port 39904 ssh2: RSA SHA256:cMrn/QNbOj3/muQN24sDQmRTV2Hbz1QD+PbuA/uipHg
Feb 13 20:53:06.973153 sshd[3373]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 20:53:06.977032 systemd-logind[1517]: New session 29 of user core.
Feb 13 20:53:06.987723 systemd[1]: Started session-29.scope - Session 29 of User core.
Feb 13 20:53:07.092553 sshd[3373]: pam_unix(sshd:session): session closed for user core
Feb 13 20:53:07.095648 systemd[1]: sshd@28-10.0.0.7:22-10.0.0.1:39904.service: Deactivated successfully.
Feb 13 20:53:07.097632 systemd-logind[1517]: Session 29 logged out. Waiting for processes to exit.
Feb 13 20:53:07.097709 systemd[1]: session-29.scope: Deactivated successfully.
Feb 13 20:53:07.098756 systemd-logind[1517]: Removed session 29.
Feb 13 20:53:07.874849 kubelet[2663]: E0213 20:53:07.874757 2663 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 13 20:53:10.873936 kubelet[2663]: E0213 20:53:10.873886 2663 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 13 20:53:10.874563 kubelet[2663]: E0213 20:53:10.874531 2663 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"install-cni-plugin\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/flannel/flannel-cni-plugin:v1.1.2\\\"\"" pod="kube-flannel/kube-flannel-ds-6mvg6" podUID="cd690012-36a5-4d95-b540-563eafe34300"
Feb 13 20:53:11.953169 kubelet[2663]: E0213 20:53:11.953122 2663 kubelet.go:2900] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Feb 13 20:53:12.101745 systemd[1]: Started sshd@29-10.0.0.7:22-10.0.0.1:39918.service - OpenSSH per-connection server daemon (10.0.0.1:39918).
Feb 13 20:53:12.133895 sshd[3390]: Accepted publickey for core from 10.0.0.1 port 39918 ssh2: RSA SHA256:cMrn/QNbOj3/muQN24sDQmRTV2Hbz1QD+PbuA/uipHg
Feb 13 20:53:12.135029 sshd[3390]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 20:53:12.138364 systemd-logind[1517]: New session 30 of user core.
Feb 13 20:53:12.155843 systemd[1]: Started session-30.scope - Session 30 of User core.
Feb 13 20:53:12.261374 sshd[3390]: pam_unix(sshd:session): session closed for user core
Feb 13 20:53:12.264852 systemd[1]: sshd@29-10.0.0.7:22-10.0.0.1:39918.service: Deactivated successfully.
Feb 13 20:53:12.266973 systemd[1]: session-30.scope: Deactivated successfully.
Feb 13 20:53:12.267503 systemd-logind[1517]: Session 30 logged out. Waiting for processes to exit.
Feb 13 20:53:12.268277 systemd-logind[1517]: Removed session 30.
Feb 13 20:53:12.874792 kubelet[2663]: E0213 20:53:12.874711 2663 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 13 20:53:15.113373 update_engine[1518]: I20250213 20:53:15.113282 1518 libcurl_http_fetcher.cc:47] Starting/Resuming transfer
Feb 13 20:53:15.113767 update_engine[1518]: I20250213 20:53:15.113610 1518 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP
Feb 13 20:53:15.113794 update_engine[1518]: I20250213 20:53:15.113770 1518 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds.
Feb 13 20:53:15.142562 update_engine[1518]: E20250213 20:53:15.142517 1518 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled
Feb 13 20:53:15.142655 update_engine[1518]: I20250213 20:53:15.142577 1518 libcurl_http_fetcher.cc:297] Transfer resulted in an error (0), 0 bytes downloaded
Feb 13 20:53:15.142655 update_engine[1518]: I20250213 20:53:15.142587 1518 omaha_request_action.cc:617] Omaha request response:
Feb 13 20:53:15.142699 update_engine[1518]: E20250213 20:53:15.142659 1518 omaha_request_action.cc:636] Omaha request network transfer failed.
Feb 13 20:53:15.142699 update_engine[1518]: I20250213 20:53:15.142678 1518 action_processor.cc:68] ActionProcessor::ActionComplete: OmahaRequestAction action failed. Aborting processing.
Feb 13 20:53:15.142699 update_engine[1518]: I20250213 20:53:15.142683 1518 action_processor.cc:73] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction
Feb 13 20:53:15.142699 update_engine[1518]: I20250213 20:53:15.142688 1518 update_attempter.cc:306] Processing Done.
Feb 13 20:53:15.142779 update_engine[1518]: E20250213 20:53:15.142702 1518 update_attempter.cc:619] Update failed.
Feb 13 20:53:15.142779 update_engine[1518]: I20250213 20:53:15.142707 1518 utils.cc:600] Converting error code 2000 to kActionCodeOmahaErrorInHTTPResponse
Feb 13 20:53:15.142779 update_engine[1518]: I20250213 20:53:15.142711 1518 payload_state.cc:97] Updating payload state for error code: 37 (kActionCodeOmahaErrorInHTTPResponse)
Feb 13 20:53:15.142779 update_engine[1518]: I20250213 20:53:15.142717 1518 payload_state.cc:103] Ignoring failures until we get a valid Omaha response.
Feb 13 20:53:15.142859 update_engine[1518]: I20250213 20:53:15.142783 1518 action_processor.cc:36] ActionProcessor::StartProcessing: OmahaRequestAction
Feb 13 20:53:15.142859 update_engine[1518]: I20250213 20:53:15.142804 1518 omaha_request_action.cc:271] Posting an Omaha request to disabled
Feb 13 20:53:15.142859 update_engine[1518]: I20250213 20:53:15.142810 1518 omaha_request_action.cc:272] Request:
Feb 13 20:53:15.142859 update_engine[1518]:
Feb 13 20:53:15.142859 update_engine[1518]:
Feb 13 20:53:15.142859 update_engine[1518]:
Feb 13 20:53:15.142859 update_engine[1518]:
Feb 13 20:53:15.142859 update_engine[1518]:
Feb 13 20:53:15.142859 update_engine[1518]:
Feb 13 20:53:15.142859 update_engine[1518]: I20250213 20:53:15.142815 1518 libcurl_http_fetcher.cc:47] Starting/Resuming transfer
Feb 13 20:53:15.143040 update_engine[1518]: I20250213 20:53:15.142987 1518 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP
Feb 13 20:53:15.143178 update_engine[1518]: I20250213 20:53:15.143138 1518 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds.
Feb 13 20:53:15.144912 locksmithd[1560]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_REPORTING_ERROR_EVENT" NewVersion=0.0.0 NewSize=0
Feb 13 20:53:15.170443 update_engine[1518]: E20250213 20:53:15.170394 1518 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled
Feb 13 20:53:15.170509 update_engine[1518]: I20250213 20:53:15.170470 1518 libcurl_http_fetcher.cc:297] Transfer resulted in an error (0), 0 bytes downloaded
Feb 13 20:53:15.170549 update_engine[1518]: I20250213 20:53:15.170511 1518 omaha_request_action.cc:617] Omaha request response:
Feb 13 20:53:15.170549 update_engine[1518]: I20250213 20:53:15.170526 1518 action_processor.cc:65] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction
Feb 13 20:53:15.170549 update_engine[1518]: I20250213 20:53:15.170535 1518 action_processor.cc:73] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction
Feb 13 20:53:15.170549 update_engine[1518]: I20250213 20:53:15.170543 1518 update_attempter.cc:306] Processing Done.
Feb 13 20:53:15.170628 update_engine[1518]: I20250213 20:53:15.170553 1518 update_attempter.cc:310] Error event sent.
Feb 13 20:53:15.170628 update_engine[1518]: I20250213 20:53:15.170567 1518 update_check_scheduler.cc:74] Next update check in 49m26s
Feb 13 20:53:15.170825 locksmithd[1560]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_IDLE" NewVersion=0.0.0 NewSize=0
Feb 13 20:53:16.873842 kubelet[2663]: E0213 20:53:16.873800 2663 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 13 20:53:16.954077 kubelet[2663]: E0213 20:53:16.954041 2663 kubelet.go:2900] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Feb 13 20:53:17.271707 systemd[1]: Started sshd@30-10.0.0.7:22-10.0.0.1:42130.service - OpenSSH per-connection server daemon (10.0.0.1:42130).
Feb 13 20:53:17.304838 sshd[3408]: Accepted publickey for core from 10.0.0.1 port 42130 ssh2: RSA SHA256:cMrn/QNbOj3/muQN24sDQmRTV2Hbz1QD+PbuA/uipHg
Feb 13 20:53:17.306051 sshd[3408]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 20:53:17.309694 systemd-logind[1517]: New session 31 of user core.
Feb 13 20:53:17.316693 systemd[1]: Started session-31.scope - Session 31 of User core.
Feb 13 20:53:17.423813 sshd[3408]: pam_unix(sshd:session): session closed for user core
Feb 13 20:53:17.426475 systemd[1]: sshd@30-10.0.0.7:22-10.0.0.1:42130.service: Deactivated successfully.
Feb 13 20:53:17.429122 systemd[1]: session-31.scope: Deactivated successfully.
Feb 13 20:53:17.429459 systemd-logind[1517]: Session 31 logged out. Waiting for processes to exit.
Feb 13 20:53:17.430414 systemd-logind[1517]: Removed session 31.
Feb 13 20:53:21.955475 kubelet[2663]: E0213 20:53:21.955423 2663 kubelet.go:2900] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Feb 13 20:53:22.435735 systemd[1]: Started sshd@31-10.0.0.7:22-10.0.0.1:42138.service - OpenSSH per-connection server daemon (10.0.0.1:42138).
Feb 13 20:53:22.467818 sshd[3424]: Accepted publickey for core from 10.0.0.1 port 42138 ssh2: RSA SHA256:cMrn/QNbOj3/muQN24sDQmRTV2Hbz1QD+PbuA/uipHg
Feb 13 20:53:22.468947 sshd[3424]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 20:53:22.472558 systemd-logind[1517]: New session 32 of user core.
Feb 13 20:53:22.482837 systemd[1]: Started session-32.scope - Session 32 of User core.
Feb 13 20:53:22.588181 sshd[3424]: pam_unix(sshd:session): session closed for user core
Feb 13 20:53:22.591020 systemd[1]: sshd@31-10.0.0.7:22-10.0.0.1:42138.service: Deactivated successfully.
Feb 13 20:53:22.592888 systemd-logind[1517]: Session 32 logged out. Waiting for processes to exit.
Feb 13 20:53:22.592954 systemd[1]: session-32.scope: Deactivated successfully.
Feb 13 20:53:22.593775 systemd-logind[1517]: Removed session 32.
Feb 13 20:53:23.874661 kubelet[2663]: E0213 20:53:23.874616 2663 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 13 20:53:23.875572 kubelet[2663]: E0213 20:53:23.875347 2663 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"install-cni-plugin\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/flannel/flannel-cni-plugin:v1.1.2\\\"\"" pod="kube-flannel/kube-flannel-ds-6mvg6" podUID="cd690012-36a5-4d95-b540-563eafe34300"
Feb 13 20:53:26.957085 kubelet[2663]: E0213 20:53:26.957041 2663 kubelet.go:2900] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Feb 13 20:53:27.603809 systemd[1]: Started sshd@32-10.0.0.7:22-10.0.0.1:52684.service - OpenSSH per-connection server daemon (10.0.0.1:52684).
Feb 13 20:53:27.635928 sshd[3441]: Accepted publickey for core from 10.0.0.1 port 52684 ssh2: RSA SHA256:cMrn/QNbOj3/muQN24sDQmRTV2Hbz1QD+PbuA/uipHg
Feb 13 20:53:27.637055 sshd[3441]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 20:53:27.641012 systemd-logind[1517]: New session 33 of user core.
Feb 13 20:53:27.651813 systemd[1]: Started session-33.scope - Session 33 of User core.
Feb 13 20:53:27.755942 sshd[3441]: pam_unix(sshd:session): session closed for user core
Feb 13 20:53:27.758524 systemd[1]: sshd@32-10.0.0.7:22-10.0.0.1:52684.service: Deactivated successfully.
Feb 13 20:53:27.761198 systemd[1]: session-33.scope: Deactivated successfully.
Feb 13 20:53:27.761735 systemd-logind[1517]: Session 33 logged out. Waiting for processes to exit.
Feb 13 20:53:27.762595 systemd-logind[1517]: Removed session 33.
Feb 13 20:53:29.874453 kubelet[2663]: E0213 20:53:29.874398 2663 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 13 20:53:31.958014 kubelet[2663]: E0213 20:53:31.957958 2663 kubelet.go:2900] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Feb 13 20:53:32.767829 systemd[1]: Started sshd@33-10.0.0.7:22-10.0.0.1:54978.service - OpenSSH per-connection server daemon (10.0.0.1:54978).
Feb 13 20:53:32.799532 sshd[3459]: Accepted publickey for core from 10.0.0.1 port 54978 ssh2: RSA SHA256:cMrn/QNbOj3/muQN24sDQmRTV2Hbz1QD+PbuA/uipHg
Feb 13 20:53:32.800685 sshd[3459]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 20:53:32.804185 systemd-logind[1517]: New session 34 of user core.
Feb 13 20:53:32.818719 systemd[1]: Started session-34.scope - Session 34 of User core.
Feb 13 20:53:32.922176 sshd[3459]: pam_unix(sshd:session): session closed for user core
Feb 13 20:53:32.926063 systemd[1]: sshd@33-10.0.0.7:22-10.0.0.1:54978.service: Deactivated successfully.
Feb 13 20:53:32.927921 systemd[1]: session-34.scope: Deactivated successfully.
Feb 13 20:53:32.928249 systemd-logind[1517]: Session 34 logged out. Waiting for processes to exit.
Feb 13 20:53:32.929010 systemd-logind[1517]: Removed session 34.
Feb 13 20:53:36.874210 kubelet[2663]: E0213 20:53:36.873989 2663 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 13 20:53:36.874780 kubelet[2663]: E0213 20:53:36.874688 2663 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"install-cni-plugin\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/flannel/flannel-cni-plugin:v1.1.2\\\"\"" pod="kube-flannel/kube-flannel-ds-6mvg6" podUID="cd690012-36a5-4d95-b540-563eafe34300"
Feb 13 20:53:36.958567 kubelet[2663]: E0213 20:53:36.958528 2663 kubelet.go:2900] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Feb 13 20:53:37.939795 systemd[1]: Started sshd@34-10.0.0.7:22-10.0.0.1:54990.service - OpenSSH per-connection server daemon (10.0.0.1:54990).
Feb 13 20:53:37.971842 sshd[3476]: Accepted publickey for core from 10.0.0.1 port 54990 ssh2: RSA SHA256:cMrn/QNbOj3/muQN24sDQmRTV2Hbz1QD+PbuA/uipHg
Feb 13 20:53:37.972963 sshd[3476]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 20:53:37.976921 systemd-logind[1517]: New session 35 of user core.
Feb 13 20:53:37.983831 systemd[1]: Started session-35.scope - Session 35 of User core.
Feb 13 20:53:38.087525 sshd[3476]: pam_unix(sshd:session): session closed for user core
Feb 13 20:53:38.090755 systemd[1]: sshd@34-10.0.0.7:22-10.0.0.1:54990.service: Deactivated successfully.
Feb 13 20:53:38.092561 systemd-logind[1517]: Session 35 logged out. Waiting for processes to exit.
Feb 13 20:53:38.092602 systemd[1]: session-35.scope: Deactivated successfully.
Feb 13 20:53:38.093307 systemd-logind[1517]: Removed session 35.
Feb 13 20:53:41.959843 kubelet[2663]: E0213 20:53:41.959808 2663 kubelet.go:2900] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Feb 13 20:53:43.097720 systemd[1]: Started sshd@35-10.0.0.7:22-10.0.0.1:55060.service - OpenSSH per-connection server daemon (10.0.0.1:55060).
Feb 13 20:53:43.130565 sshd[3495]: Accepted publickey for core from 10.0.0.1 port 55060 ssh2: RSA SHA256:cMrn/QNbOj3/muQN24sDQmRTV2Hbz1QD+PbuA/uipHg
Feb 13 20:53:43.131134 sshd[3495]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 20:53:43.134683 systemd-logind[1517]: New session 36 of user core.
Feb 13 20:53:43.145727 systemd[1]: Started session-36.scope - Session 36 of User core.
Feb 13 20:53:43.251658 sshd[3495]: pam_unix(sshd:session): session closed for user core
Feb 13 20:53:43.254468 systemd-logind[1517]: Session 36 logged out. Waiting for processes to exit.
Feb 13 20:53:43.254647 systemd[1]: sshd@35-10.0.0.7:22-10.0.0.1:55060.service: Deactivated successfully.
Feb 13 20:53:43.257067 systemd[1]: session-36.scope: Deactivated successfully.
Feb 13 20:53:43.257993 systemd-logind[1517]: Removed session 36.
Feb 13 20:53:46.960942 kubelet[2663]: E0213 20:53:46.960906 2663 kubelet.go:2900] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Feb 13 20:53:48.264713 systemd[1]: Started sshd@36-10.0.0.7:22-10.0.0.1:55068.service - OpenSSH per-connection server daemon (10.0.0.1:55068).
Feb 13 20:53:48.296923 sshd[3512]: Accepted publickey for core from 10.0.0.1 port 55068 ssh2: RSA SHA256:cMrn/QNbOj3/muQN24sDQmRTV2Hbz1QD+PbuA/uipHg
Feb 13 20:53:48.298040 sshd[3512]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 20:53:48.301674 systemd-logind[1517]: New session 37 of user core.
Feb 13 20:53:48.313713 systemd[1]: Started session-37.scope - Session 37 of User core.
Feb 13 20:53:48.423741 sshd[3512]: pam_unix(sshd:session): session closed for user core
Feb 13 20:53:48.426358 systemd[1]: sshd@36-10.0.0.7:22-10.0.0.1:55068.service: Deactivated successfully.
Feb 13 20:53:48.429088 systemd-logind[1517]: Session 37 logged out. Waiting for processes to exit.
Feb 13 20:53:48.429197 systemd[1]: session-37.scope: Deactivated successfully.
Feb 13 20:53:48.430309 systemd-logind[1517]: Removed session 37.
Feb 13 20:53:51.874463 kubelet[2663]: E0213 20:53:51.874104 2663 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 13 20:53:51.875109 kubelet[2663]: E0213 20:53:51.875062 2663 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"install-cni-plugin\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/flannel/flannel-cni-plugin:v1.1.2\\\"\"" pod="kube-flannel/kube-flannel-ds-6mvg6" podUID="cd690012-36a5-4d95-b540-563eafe34300"
Feb 13 20:53:51.961982 kubelet[2663]: E0213 20:53:51.961948 2663 kubelet.go:2900] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Feb 13 20:53:53.432709 systemd[1]: Started sshd@37-10.0.0.7:22-10.0.0.1:60512.service - OpenSSH per-connection server daemon (10.0.0.1:60512).
Feb 13 20:53:53.464695 sshd[3529]: Accepted publickey for core from 10.0.0.1 port 60512 ssh2: RSA SHA256:cMrn/QNbOj3/muQN24sDQmRTV2Hbz1QD+PbuA/uipHg
Feb 13 20:53:53.465810 sshd[3529]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 20:53:53.469552 systemd-logind[1517]: New session 38 of user core.
Feb 13 20:53:53.479740 systemd[1]: Started session-38.scope - Session 38 of User core.
Feb 13 20:53:53.586126 sshd[3529]: pam_unix(sshd:session): session closed for user core
Feb 13 20:53:53.589128 systemd[1]: sshd@37-10.0.0.7:22-10.0.0.1:60512.service: Deactivated successfully.
Feb 13 20:53:53.591042 systemd-logind[1517]: Session 38 logged out. Waiting for processes to exit.
Feb 13 20:53:53.591138 systemd[1]: session-38.scope: Deactivated successfully.
Feb 13 20:53:53.592278 systemd-logind[1517]: Removed session 38.
Feb 13 20:53:56.963481 kubelet[2663]: E0213 20:53:56.963439 2663 kubelet.go:2900] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Feb 13 20:53:58.594712 systemd[1]: Started sshd@38-10.0.0.7:22-10.0.0.1:60522.service - OpenSSH per-connection server daemon (10.0.0.1:60522).
Feb 13 20:53:58.626732 sshd[3547]: Accepted publickey for core from 10.0.0.1 port 60522 ssh2: RSA SHA256:cMrn/QNbOj3/muQN24sDQmRTV2Hbz1QD+PbuA/uipHg
Feb 13 20:53:58.627865 sshd[3547]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 20:53:58.631642 systemd-logind[1517]: New session 39 of user core.
Feb 13 20:53:58.641793 systemd[1]: Started session-39.scope - Session 39 of User core.
Feb 13 20:53:58.752155 sshd[3547]: pam_unix(sshd:session): session closed for user core
Feb 13 20:53:58.759798 systemd-logind[1517]: Session 39 logged out. Waiting for processes to exit.
Feb 13 20:53:58.759912 systemd[1]: sshd@38-10.0.0.7:22-10.0.0.1:60522.service: Deactivated successfully.
Feb 13 20:53:58.762279 systemd[1]: session-39.scope: Deactivated successfully.
Feb 13 20:53:58.763194 systemd-logind[1517]: Removed session 39.
Feb 13 20:54:01.964107 kubelet[2663]: E0213 20:54:01.964063 2663 kubelet.go:2900] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Feb 13 20:54:03.762700 systemd[1]: Started sshd@39-10.0.0.7:22-10.0.0.1:44170.service - OpenSSH per-connection server daemon (10.0.0.1:44170).
Feb 13 20:54:03.794822 sshd[3564]: Accepted publickey for core from 10.0.0.1 port 44170 ssh2: RSA SHA256:cMrn/QNbOj3/muQN24sDQmRTV2Hbz1QD+PbuA/uipHg
Feb 13 20:54:03.795950 sshd[3564]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 20:54:03.799993 systemd-logind[1517]: New session 40 of user core.
Feb 13 20:54:03.813709 systemd[1]: Started session-40.scope - Session 40 of User core.
Feb 13 20:54:03.919758 sshd[3564]: pam_unix(sshd:session): session closed for user core
Feb 13 20:54:03.922835 systemd[1]: sshd@39-10.0.0.7:22-10.0.0.1:44170.service: Deactivated successfully.
Feb 13 20:54:03.924866 systemd-logind[1517]: Session 40 logged out. Waiting for processes to exit.
Feb 13 20:54:03.924962 systemd[1]: session-40.scope: Deactivated successfully.
Feb 13 20:54:03.926555 systemd-logind[1517]: Removed session 40.
Feb 13 20:54:04.873851 kubelet[2663]: E0213 20:54:04.873816 2663 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 13 20:54:04.874840 containerd[1534]: time="2025-02-13T20:54:04.874806783Z" level=info msg="PullImage \"docker.io/flannel/flannel-cni-plugin:v1.1.2\""
Feb 13 20:54:05.984501 containerd[1534]: time="2025-02-13T20:54:05.984434299Z" level=error msg="PullImage \"docker.io/flannel/flannel-cni-plugin:v1.1.2\" failed" error="failed to pull and unpack image \"docker.io/flannel/flannel-cni-plugin:v1.1.2\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/flannel/flannel-cni-plugin/manifests/sha256:bf4b62b131666d040f35a327d906ee5a3418280b68a88d9b9c7e828057210443: 429 Too Many Requests - Server message: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit"
Feb 13 20:54:05.984918 containerd[1534]: time="2025-02-13T20:54:05.984515699Z" level=info msg="stop pulling image docker.io/flannel/flannel-cni-plugin:v1.1.2: active requests=0, bytes read=11110"
Feb 13 20:54:05.984949 kubelet[2663]: E0213 20:54:05.984641 2663 remote_image.go:180] "PullImage from image service failed" err="rpc error: code = Unknown desc = failed to pull and unpack image \"docker.io/flannel/flannel-cni-plugin:v1.1.2\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/flannel/flannel-cni-plugin/manifests/sha256:bf4b62b131666d040f35a327d906ee5a3418280b68a88d9b9c7e828057210443: 429 Too Many Requests - Server message: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit" image="docker.io/flannel/flannel-cni-plugin:v1.1.2"
Feb 13 20:54:05.984949 kubelet[2663]: E0213 20:54:05.984681 2663 kuberuntime_image.go:55] "Failed to pull image" err="failed to pull and unpack image \"docker.io/flannel/flannel-cni-plugin:v1.1.2\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/flannel/flannel-cni-plugin/manifests/sha256:bf4b62b131666d040f35a327d906ee5a3418280b68a88d9b9c7e828057210443: 429 Too Many Requests - Server message: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit" image="docker.io/flannel/flannel-cni-plugin:v1.1.2"
Feb 13 20:54:05.985191 kubelet[2663]: E0213 20:54:05.984759 2663 kuberuntime_manager.go:1256] init container &Container{Name:install-cni-plugin,Image:docker.io/flannel/flannel-cni-plugin:v1.1.2,Command:[cp],Args:[-f /flannel /opt/cni/bin/flannel],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:cni-plugin,ReadOnly:false,MountPath:/opt/cni/bin,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-zfhf4,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod kube-flannel-ds-6mvg6_kube-flannel(cd690012-36a5-4d95-b540-563eafe34300): ErrImagePull: failed to pull and unpack image "docker.io/flannel/flannel-cni-plugin:v1.1.2": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/flannel/flannel-cni-plugin/manifests/sha256:bf4b62b131666d040f35a327d906ee5a3418280b68a88d9b9c7e828057210443: 429 Too Many Requests - Server message: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit
Feb 13 20:54:05.985248 kubelet[2663]: E0213 20:54:05.984786 2663 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"install-cni-plugin\" with ErrImagePull: \"failed to pull and unpack image \\\"docker.io/flannel/flannel-cni-plugin:v1.1.2\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/flannel/flannel-cni-plugin/manifests/sha256:bf4b62b131666d040f35a327d906ee5a3418280b68a88d9b9c7e828057210443: 429 Too Many Requests - Server message: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit\"" pod="kube-flannel/kube-flannel-ds-6mvg6" podUID="cd690012-36a5-4d95-b540-563eafe34300"
Feb 13 20:54:06.964877 kubelet[2663]: E0213 20:54:06.964836 2663 kubelet.go:2900] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Feb 13 20:54:08.929819 systemd[1]: Started sshd@40-10.0.0.7:22-10.0.0.1:44178.service - OpenSSH per-connection server daemon (10.0.0.1:44178).
Feb 13 20:54:08.962632 sshd[3580]: Accepted publickey for core from 10.0.0.1 port 44178 ssh2: RSA SHA256:cMrn/QNbOj3/muQN24sDQmRTV2Hbz1QD+PbuA/uipHg
Feb 13 20:54:08.963817 sshd[3580]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 20:54:08.968009 systemd-logind[1517]: New session 41 of user core.
Feb 13 20:54:08.982713 systemd[1]: Started session-41.scope - Session 41 of User core.
Feb 13 20:54:09.088355 sshd[3580]: pam_unix(sshd:session): session closed for user core
Feb 13 20:54:09.098931 systemd[1]: Started sshd@41-10.0.0.7:22-10.0.0.1:44186.service - OpenSSH per-connection server daemon (10.0.0.1:44186).
Feb 13 20:54:09.100015 systemd[1]: sshd@40-10.0.0.7:22-10.0.0.1:44178.service: Deactivated successfully.
Feb 13 20:54:09.101800 systemd[1]: session-41.scope: Deactivated successfully.
Feb 13 20:54:09.103695 systemd-logind[1517]: Session 41 logged out. Waiting for processes to exit.
Feb 13 20:54:09.104805 systemd-logind[1517]: Removed session 41.
Feb 13 20:54:09.131069 sshd[3593]: Accepted publickey for core from 10.0.0.1 port 44186 ssh2: RSA SHA256:cMrn/QNbOj3/muQN24sDQmRTV2Hbz1QD+PbuA/uipHg
Feb 13 20:54:09.132188 sshd[3593]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 20:54:09.135848 systemd-logind[1517]: New session 42 of user core.
Feb 13 20:54:09.146711 systemd[1]: Started session-42.scope - Session 42 of User core.
Feb 13 20:54:09.287281 sshd[3593]: pam_unix(sshd:session): session closed for user core
Feb 13 20:54:09.295816 systemd[1]: Started sshd@42-10.0.0.7:22-10.0.0.1:44202.service - OpenSSH per-connection server daemon (10.0.0.1:44202).
Feb 13 20:54:09.296657 systemd[1]: sshd@41-10.0.0.7:22-10.0.0.1:44186.service: Deactivated successfully.
Feb 13 20:54:09.310816 systemd[1]: session-42.scope: Deactivated successfully.
Feb 13 20:54:09.311922 systemd-logind[1517]: Session 42 logged out. Waiting for processes to exit.
Feb 13 20:54:09.313103 systemd-logind[1517]: Removed session 42.
Feb 13 20:54:09.344003 sshd[3609]: Accepted publickey for core from 10.0.0.1 port 44202 ssh2: RSA SHA256:cMrn/QNbOj3/muQN24sDQmRTV2Hbz1QD+PbuA/uipHg
Feb 13 20:54:09.345214 sshd[3609]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 20:54:09.349364 systemd-logind[1517]: New session 43 of user core.
Feb 13 20:54:09.356811 systemd[1]: Started session-43.scope - Session 43 of User core.
Feb 13 20:54:09.467723 sshd[3609]: pam_unix(sshd:session): session closed for user core
Feb 13 20:54:09.470827 systemd[1]: sshd@42-10.0.0.7:22-10.0.0.1:44202.service: Deactivated successfully.
Feb 13 20:54:09.472754 systemd-logind[1517]: Session 43 logged out. Waiting for processes to exit.
Feb 13 20:54:09.472833 systemd[1]: session-43.scope: Deactivated successfully.
Feb 13 20:54:09.474283 systemd-logind[1517]: Removed session 43.
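
The pull failure above is the root cause of the stuck kube-flannel pod: registry-1.docker.io answers 429 Too Many Requests (Docker Hub's anonymous pull rate limit), containerd aborts the pull, and kubelet parks the init container in ImagePullBackOff. The retry delay grows exponentially, which is why the pull attempts in this log are minutes apart. A rough sketch of that retry shape, assuming an initial 10 s delay doubling to a 5 min cap (these values match kubelet's defaults as far as I know; the function is illustrative, not kubelet source):

    # pull_backoff.py -- illustrative exponential backoff on a rate-limited pull.
    import time

    def pull_with_backoff(pull, initial=10.0, cap=300.0, attempts=8):
        """Call pull() until it succeeds; on failure, sleep and double the delay."""
        delay = initial
        for _ in range(attempts):
            try:
                return pull()
            except RuntimeError as err:  # stand-in for a 429 "toomanyrequests" failure
                print(f"pull failed ({err}); backing off {delay:.0f}s")
                time.sleep(delay)
                delay = min(delay * 2.0, cap)
        raise RuntimeError("giving up: still rate limited")

The practical fixes are outside the log itself: authenticate the node's pulls against Docker Hub, or serve the flannel-cni-plugin image from a mirror or private registry so the anonymous limit never applies.
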
Feb 13 20:54:11.966196 kubelet[2663]: E0213 20:54:11.966143 2663 kubelet.go:2900] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Feb 13 20:54:14.483787 systemd[1]: Started sshd@43-10.0.0.7:22-10.0.0.1:38598.service - OpenSSH per-connection server daemon (10.0.0.1:38598).
Feb 13 20:54:14.515800 sshd[3628]: Accepted publickey for core from 10.0.0.1 port 38598 ssh2: RSA SHA256:cMrn/QNbOj3/muQN24sDQmRTV2Hbz1QD+PbuA/uipHg
Feb 13 20:54:14.516953 sshd[3628]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 20:54:14.522137 systemd-logind[1517]: New session 44 of user core.
Feb 13 20:54:14.539721 systemd[1]: Started session-44.scope - Session 44 of User core.
Feb 13 20:54:14.646821 sshd[3628]: pam_unix(sshd:session): session closed for user core
Feb 13 20:54:14.649994 systemd[1]: sshd@43-10.0.0.7:22-10.0.0.1:38598.service: Deactivated successfully.
Feb 13 20:54:14.651837 systemd-logind[1517]: Session 44 logged out. Waiting for processes to exit.
Feb 13 20:54:14.651914 systemd[1]: session-44.scope: Deactivated successfully.
Feb 13 20:54:14.653593 systemd-logind[1517]: Removed session 44.
Feb 13 20:54:14.874888 kubelet[2663]: E0213 20:54:14.874728 2663 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 13 20:54:14.875327 kubelet[2663]: E0213 20:54:14.875173 2663 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 13 20:54:16.873992 kubelet[2663]: E0213 20:54:16.873948 2663 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 13 20:54:16.875312 kubelet[2663]: E0213 20:54:16.875274 2663 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"install-cni-plugin\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/flannel/flannel-cni-plugin:v1.1.2\\\"\"" pod="kube-flannel/kube-flannel-ds-6mvg6" podUID="cd690012-36a5-4d95-b540-563eafe34300"
Feb 13 20:54:16.967376 kubelet[2663]: E0213 20:54:16.967323 2663 kubelet.go:2900] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Feb 13 20:54:19.667711 systemd[1]: Started sshd@44-10.0.0.7:22-10.0.0.1:38604.service - OpenSSH per-connection server daemon (10.0.0.1:38604).
Feb 13 20:54:19.699608 sshd[3643]: Accepted publickey for core from 10.0.0.1 port 38604 ssh2: RSA SHA256:cMrn/QNbOj3/muQN24sDQmRTV2Hbz1QD+PbuA/uipHg
Feb 13 20:54:19.702557 sshd[3643]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 20:54:19.707668 systemd-logind[1517]: New session 45 of user core.
Feb 13 20:54:19.722773 systemd[1]: Started session-45.scope - Session 45 of User core.
Feb 13 20:54:19.832627 sshd[3643]: pam_unix(sshd:session): session closed for user core
Feb 13 20:54:19.835800 systemd[1]: sshd@44-10.0.0.7:22-10.0.0.1:38604.service: Deactivated successfully.
Feb 13 20:54:19.837713 systemd-logind[1517]: Session 45 logged out. Waiting for processes to exit.
Feb 13 20:54:19.837795 systemd[1]: session-45.scope: Deactivated successfully.
Feb 13 20:54:19.838564 systemd-logind[1517]: Removed session 45.
Feb 13 20:54:21.968585 kubelet[2663]: E0213 20:54:21.968537 2663 kubelet.go:2900] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Feb 13 20:54:24.844874 systemd[1]: Started sshd@45-10.0.0.7:22-10.0.0.1:40582.service - OpenSSH per-connection server daemon (10.0.0.1:40582).
Feb 13 20:54:24.876821 sshd[3658]: Accepted publickey for core from 10.0.0.1 port 40582 ssh2: RSA SHA256:cMrn/QNbOj3/muQN24sDQmRTV2Hbz1QD+PbuA/uipHg
Feb 13 20:54:24.878002 sshd[3658]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 20:54:24.881362 systemd-logind[1517]: New session 46 of user core.
Feb 13 20:54:24.890826 systemd[1]: Started session-46.scope - Session 46 of User core.
Feb 13 20:54:24.998040 sshd[3658]: pam_unix(sshd:session): session closed for user core
Feb 13 20:54:25.001225 systemd[1]: sshd@45-10.0.0.7:22-10.0.0.1:40582.service: Deactivated successfully.
Feb 13 20:54:25.003848 systemd[1]: session-46.scope: Deactivated successfully.
Feb 13 20:54:25.004051 systemd-logind[1517]: Session 46 logged out. Waiting for processes to exit.
Feb 13 20:54:25.005174 systemd-logind[1517]: Removed session 46.
Feb 13 20:54:26.969632 kubelet[2663]: E0213 20:54:26.969577 2663 kubelet.go:2900] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Feb 13 20:54:30.016694 systemd[1]: Started sshd@46-10.0.0.7:22-10.0.0.1:40596.service - OpenSSH per-connection server daemon (10.0.0.1:40596).
Feb 13 20:54:30.048921 sshd[3675]: Accepted publickey for core from 10.0.0.1 port 40596 ssh2: RSA SHA256:cMrn/QNbOj3/muQN24sDQmRTV2Hbz1QD+PbuA/uipHg
Feb 13 20:54:30.050065 sshd[3675]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 20:54:30.053350 systemd-logind[1517]: New session 47 of user core.
Feb 13 20:54:30.063704 systemd[1]: Started session-47.scope - Session 47 of User core.
Feb 13 20:54:30.169838 sshd[3675]: pam_unix(sshd:session): session closed for user core
Feb 13 20:54:30.172510 systemd[1]: sshd@46-10.0.0.7:22-10.0.0.1:40596.service: Deactivated successfully.
Feb 13 20:54:30.175123 systemd-logind[1517]: Session 47 logged out. Waiting for processes to exit.
Feb 13 20:54:30.175483 systemd[1]: session-47.scope: Deactivated successfully.
Feb 13 20:54:30.176311 systemd-logind[1517]: Removed session 47.
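
The kubelet.go:2900 "network not ready" entries repeat every few seconds for the same underlying reason: the runtime reports NetworkReady=false until a CNI config and plugin binary are present, and the install-cni-plugin init container that would copy the flannel binary into /opt/cni/bin (its spec, with Command:[cp] Args:[-f /flannel /opt/cni/bin/flannel], appears in the kuberuntime_manager.go dump above) never starts because of the failed image pull. A small sketch of that readiness condition, assuming the stock paths /etc/cni/net.d and /opt/cni/bin (illustrative, not the runtime's actual implementation):

    # cni_ready.py -- illustrative check of the two paths a CNI-backed runtime consults.
    import os

    CONF_DIR = "/etc/cni/net.d"  # network config, e.g. 10-flannel.conflist
    BIN_DIR = "/opt/cni/bin"     # plugin binaries, e.g. flannel

    def cni_ready():
        confs = [f for f in os.listdir(CONF_DIR)
                 if f.endswith((".conf", ".conflist", ".json"))] if os.path.isdir(CONF_DIR) else []
        plugins = os.listdir(BIN_DIR) if os.path.isdir(BIN_DIR) else []
        return bool(confs) and bool(plugins)

    if __name__ == "__main__":
        print("NetworkReady=" + str(cni_ready()).lower())
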
Feb 13 20:54:31.874526 kubelet[2663]: E0213 20:54:31.873971 2663 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 13 20:54:31.875177 kubelet[2663]: E0213 20:54:31.874966 2663 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"install-cni-plugin\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/flannel/flannel-cni-plugin:v1.1.2\\\"\"" pod="kube-flannel/kube-flannel-ds-6mvg6" podUID="cd690012-36a5-4d95-b540-563eafe34300"
Feb 13 20:54:31.970913 kubelet[2663]: E0213 20:54:31.970876 2663 kubelet.go:2900] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Feb 13 20:54:35.193746 systemd[1]: Started sshd@47-10.0.0.7:22-10.0.0.1:51546.service - OpenSSH per-connection server daemon (10.0.0.1:51546).
Feb 13 20:54:35.226222 sshd[3691]: Accepted publickey for core from 10.0.0.1 port 51546 ssh2: RSA SHA256:cMrn/QNbOj3/muQN24sDQmRTV2Hbz1QD+PbuA/uipHg
Feb 13 20:54:35.227416 sshd[3691]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 20:54:35.231313 systemd-logind[1517]: New session 48 of user core.
Feb 13 20:54:35.238731 systemd[1]: Started session-48.scope - Session 48 of User core.
Feb 13 20:54:35.344052 sshd[3691]: pam_unix(sshd:session): session closed for user core
Feb 13 20:54:35.346534 systemd[1]: sshd@47-10.0.0.7:22-10.0.0.1:51546.service: Deactivated successfully.
Feb 13 20:54:35.349381 systemd-logind[1517]: Session 48 logged out. Waiting for processes to exit.
Feb 13 20:54:35.349463 systemd[1]: session-48.scope: Deactivated successfully.
Feb 13 20:54:35.351012 systemd-logind[1517]: Removed session 48.
Feb 13 20:54:36.874281 kubelet[2663]: E0213 20:54:36.874188 2663 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 13 20:54:36.874758 kubelet[2663]: E0213 20:54:36.874345 2663 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 13 20:54:36.972572 kubelet[2663]: E0213 20:54:36.972532 2663 kubelet.go:2900] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Feb 13 20:54:40.360774 systemd[1]: Started sshd@48-10.0.0.7:22-10.0.0.1:51556.service - OpenSSH per-connection server daemon (10.0.0.1:51556).
Feb 13 20:54:40.393050 sshd[3706]: Accepted publickey for core from 10.0.0.1 port 51556 ssh2: RSA SHA256:cMrn/QNbOj3/muQN24sDQmRTV2Hbz1QD+PbuA/uipHg
Feb 13 20:54:40.394187 sshd[3706]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 20:54:40.397735 systemd-logind[1517]: New session 49 of user core.
Feb 13 20:54:40.411826 systemd[1]: Started session-49.scope - Session 49 of User core.
Feb 13 20:54:40.519313 sshd[3706]: pam_unix(sshd:session): session closed for user core
Feb 13 20:54:40.522538 systemd[1]: sshd@48-10.0.0.7:22-10.0.0.1:51556.service: Deactivated successfully.
Feb 13 20:54:40.524507 systemd-logind[1517]: Session 49 logged out. Waiting for processes to exit.
Feb 13 20:54:40.525045 systemd[1]: session-49.scope: Deactivated successfully.
Feb 13 20:54:40.525843 systemd-logind[1517]: Removed session 49.
Feb 13 20:54:41.973133 kubelet[2663]: E0213 20:54:41.973089 2663 kubelet.go:2900] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Feb 13 20:54:44.873560 kubelet[2663]: E0213 20:54:44.873475 2663 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 13 20:54:44.874154 kubelet[2663]: E0213 20:54:44.874115 2663 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"install-cni-plugin\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/flannel/flannel-cni-plugin:v1.1.2\\\"\"" pod="kube-flannel/kube-flannel-ds-6mvg6" podUID="cd690012-36a5-4d95-b540-563eafe34300"
Feb 13 20:54:45.530717 systemd[1]: Started sshd@49-10.0.0.7:22-10.0.0.1:44560.service - OpenSSH per-connection server daemon (10.0.0.1:44560).
Feb 13 20:54:45.562889 sshd[3723]: Accepted publickey for core from 10.0.0.1 port 44560 ssh2: RSA SHA256:cMrn/QNbOj3/muQN24sDQmRTV2Hbz1QD+PbuA/uipHg
Feb 13 20:54:45.564059 sshd[3723]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 20:54:45.567759 systemd-logind[1517]: New session 50 of user core.
Feb 13 20:54:45.578927 systemd[1]: Started session-50.scope - Session 50 of User core.
Feb 13 20:54:45.684992 sshd[3723]: pam_unix(sshd:session): session closed for user core
Feb 13 20:54:45.688166 systemd[1]: sshd@49-10.0.0.7:22-10.0.0.1:44560.service: Deactivated successfully.
Feb 13 20:54:45.690052 systemd-logind[1517]: Session 50 logged out. Waiting for processes to exit.
Feb 13 20:54:45.690430 systemd[1]: session-50.scope: Deactivated successfully.
Feb 13 20:54:45.691433 systemd-logind[1517]: Removed session 50.
Feb 13 20:54:46.974811 kubelet[2663]: E0213 20:54:46.974745 2663 kubelet.go:2900] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Feb 13 20:54:50.696711 systemd[1]: Started sshd@50-10.0.0.7:22-10.0.0.1:44576.service - OpenSSH per-connection server daemon (10.0.0.1:44576).
Feb 13 20:54:50.728618 sshd[3739]: Accepted publickey for core from 10.0.0.1 port 44576 ssh2: RSA SHA256:cMrn/QNbOj3/muQN24sDQmRTV2Hbz1QD+PbuA/uipHg
Feb 13 20:54:50.729781 sshd[3739]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 20:54:50.733629 systemd-logind[1517]: New session 51 of user core.
Feb 13 20:54:50.740735 systemd[1]: Started session-51.scope - Session 51 of User core.
Feb 13 20:54:50.849856 sshd[3739]: pam_unix(sshd:session): session closed for user core
Feb 13 20:54:50.853200 systemd[1]: sshd@50-10.0.0.7:22-10.0.0.1:44576.service: Deactivated successfully.
Feb 13 20:54:50.854994 systemd-logind[1517]: Session 51 logged out. Waiting for processes to exit.
Feb 13 20:54:50.855070 systemd[1]: session-51.scope: Deactivated successfully.
Feb 13 20:54:50.855847 systemd-logind[1517]: Removed session 51.
Feb 13 20:54:51.975923 kubelet[2663]: E0213 20:54:51.975872 2663 kubelet.go:2900] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Feb 13 20:54:55.864707 systemd[1]: Started sshd@51-10.0.0.7:22-10.0.0.1:42058.service - OpenSSH per-connection server daemon (10.0.0.1:42058).
Feb 13 20:54:55.896750 sshd[3755]: Accepted publickey for core from 10.0.0.1 port 42058 ssh2: RSA SHA256:cMrn/QNbOj3/muQN24sDQmRTV2Hbz1QD+PbuA/uipHg
Feb 13 20:54:55.897931 sshd[3755]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 20:54:55.901424 systemd-logind[1517]: New session 52 of user core.
Feb 13 20:54:55.911691 systemd[1]: Started session-52.scope - Session 52 of User core.
Feb 13 20:54:56.018746 sshd[3755]: pam_unix(sshd:session): session closed for user core
Feb 13 20:54:56.022092 systemd[1]: sshd@51-10.0.0.7:22-10.0.0.1:42058.service: Deactivated successfully.
Feb 13 20:54:56.023983 systemd-logind[1517]: Session 52 logged out. Waiting for processes to exit.
Feb 13 20:54:56.024082 systemd[1]: session-52.scope: Deactivated successfully.
Feb 13 20:54:56.024915 systemd-logind[1517]: Removed session 52.
Feb 13 20:54:56.976573 kubelet[2663]: E0213 20:54:56.976511 2663 kubelet.go:2900] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Feb 13 20:54:59.874084 kubelet[2663]: E0213 20:54:59.873735 2663 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 13 20:54:59.874717 kubelet[2663]: E0213 20:54:59.874679 2663 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"install-cni-plugin\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/flannel/flannel-cni-plugin:v1.1.2\\\"\"" pod="kube-flannel/kube-flannel-ds-6mvg6" podUID="cd690012-36a5-4d95-b540-563eafe34300"
Feb 13 20:55:01.031709 systemd[1]: Started sshd@52-10.0.0.7:22-10.0.0.1:42072.service - OpenSSH per-connection server daemon (10.0.0.1:42072).
Feb 13 20:55:01.063717 sshd[3772]: Accepted publickey for core from 10.0.0.1 port 42072 ssh2: RSA SHA256:cMrn/QNbOj3/muQN24sDQmRTV2Hbz1QD+PbuA/uipHg
Feb 13 20:55:01.064967 sshd[3772]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 20:55:01.068228 systemd-logind[1517]: New session 53 of user core.
Feb 13 20:55:01.077807 systemd[1]: Started session-53.scope - Session 53 of User core.
Feb 13 20:55:01.182200 sshd[3772]: pam_unix(sshd:session): session closed for user core
Feb 13 20:55:01.185624 systemd[1]: sshd@52-10.0.0.7:22-10.0.0.1:42072.service: Deactivated successfully.
Feb 13 20:55:01.187511 systemd-logind[1517]: Session 53 logged out. Waiting for processes to exit.
Feb 13 20:55:01.187522 systemd[1]: session-53.scope: Deactivated successfully.
Feb 13 20:55:01.188687 systemd-logind[1517]: Removed session 53.
Feb 13 20:55:01.977296 kubelet[2663]: E0213 20:55:01.977261 2663 kubelet.go:2900] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Feb 13 20:55:06.194709 systemd[1]: Started sshd@53-10.0.0.7:22-10.0.0.1:33704.service - OpenSSH per-connection server daemon (10.0.0.1:33704).
Feb 13 20:55:06.226557 sshd[3787]: Accepted publickey for core from 10.0.0.1 port 33704 ssh2: RSA SHA256:cMrn/QNbOj3/muQN24sDQmRTV2Hbz1QD+PbuA/uipHg
Feb 13 20:55:06.227714 sshd[3787]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 20:55:06.231538 systemd-logind[1517]: New session 54 of user core.
Feb 13 20:55:06.243728 systemd[1]: Started session-54.scope - Session 54 of User core.
Feb 13 20:55:06.350112 sshd[3787]: pam_unix(sshd:session): session closed for user core
Feb 13 20:55:06.354846 systemd[1]: sshd@53-10.0.0.7:22-10.0.0.1:33704.service: Deactivated successfully.
Feb 13 20:55:06.356653 systemd-logind[1517]: Session 54 logged out. Waiting for processes to exit.
Feb 13 20:55:06.356720 systemd[1]: session-54.scope: Deactivated successfully.
Feb 13 20:55:06.357911 systemd-logind[1517]: Removed session 54.
Feb 13 20:55:06.978236 kubelet[2663]: E0213 20:55:06.978187 2663 kubelet.go:2900] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Feb 13 20:55:11.373708 systemd[1]: Started sshd@54-10.0.0.7:22-10.0.0.1:33718.service - OpenSSH per-connection server daemon (10.0.0.1:33718).
Feb 13 20:55:11.405779 sshd[3804]: Accepted publickey for core from 10.0.0.1 port 33718 ssh2: RSA SHA256:cMrn/QNbOj3/muQN24sDQmRTV2Hbz1QD+PbuA/uipHg
Feb 13 20:55:11.406992 sshd[3804]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 20:55:11.410846 systemd-logind[1517]: New session 55 of user core.
Feb 13 20:55:11.419783 systemd[1]: Started session-55.scope - Session 55 of User core.
Feb 13 20:55:11.525794 sshd[3804]: pam_unix(sshd:session): session closed for user core
Feb 13 20:55:11.529015 systemd[1]: sshd@54-10.0.0.7:22-10.0.0.1:33718.service: Deactivated successfully.
Feb 13 20:55:11.530883 systemd[1]: session-55.scope: Deactivated successfully.
Feb 13 20:55:11.530914 systemd-logind[1517]: Session 55 logged out. Waiting for processes to exit.
Feb 13 20:55:11.532174 systemd-logind[1517]: Removed session 55.
Feb 13 20:55:11.978996 kubelet[2663]: E0213 20:55:11.978957 2663 kubelet.go:2900] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Feb 13 20:55:14.873529 kubelet[2663]: E0213 20:55:14.873395 2663 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 13 20:55:14.874250 kubelet[2663]: E0213 20:55:14.874039 2663 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"install-cni-plugin\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/flannel/flannel-cni-plugin:v1.1.2\\\"\"" pod="kube-flannel/kube-flannel-ds-6mvg6" podUID="cd690012-36a5-4d95-b540-563eafe34300"
Feb 13 20:55:16.546785 systemd[1]: Started sshd@55-10.0.0.7:22-10.0.0.1:56974.service - OpenSSH per-connection server daemon (10.0.0.1:56974).
Feb 13 20:55:16.579301 sshd[3819]: Accepted publickey for core from 10.0.0.1 port 56974 ssh2: RSA SHA256:cMrn/QNbOj3/muQN24sDQmRTV2Hbz1QD+PbuA/uipHg
Feb 13 20:55:16.580482 sshd[3819]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 20:55:16.584077 systemd-logind[1517]: New session 56 of user core.
Feb 13 20:55:16.592817 systemd[1]: Started session-56.scope - Session 56 of User core.
Feb 13 20:55:16.698647 sshd[3819]: pam_unix(sshd:session): session closed for user core
Feb 13 20:55:16.702065 systemd[1]: sshd@55-10.0.0.7:22-10.0.0.1:56974.service: Deactivated successfully.
Feb 13 20:55:16.703902 systemd-logind[1517]: Session 56 logged out. Waiting for processes to exit.
Feb 13 20:55:16.704363 systemd[1]: session-56.scope: Deactivated successfully.
Feb 13 20:55:16.705560 systemd-logind[1517]: Removed session 56.
Feb 13 20:55:16.980638 kubelet[2663]: E0213 20:55:16.980590 2663 kubelet.go:2900] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Feb 13 20:55:21.709756 systemd[1]: Started sshd@56-10.0.0.7:22-10.0.0.1:56990.service - OpenSSH per-connection server daemon (10.0.0.1:56990).
Feb 13 20:55:21.741375 sshd[3835]: Accepted publickey for core from 10.0.0.1 port 56990 ssh2: RSA SHA256:cMrn/QNbOj3/muQN24sDQmRTV2Hbz1QD+PbuA/uipHg
Feb 13 20:55:21.742475 sshd[3835]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 20:55:21.746245 systemd-logind[1517]: New session 57 of user core.
Feb 13 20:55:21.756705 systemd[1]: Started session-57.scope - Session 57 of User core.
Feb 13 20:55:21.862133 sshd[3835]: pam_unix(sshd:session): session closed for user core
Feb 13 20:55:21.864499 systemd[1]: sshd@56-10.0.0.7:22-10.0.0.1:56990.service: Deactivated successfully.
Feb 13 20:55:21.867250 systemd-logind[1517]: Session 57 logged out. Waiting for processes to exit.
Feb 13 20:55:21.867361 systemd[1]: session-57.scope: Deactivated successfully.
Feb 13 20:55:21.868192 systemd-logind[1517]: Removed session 57.
Feb 13 20:55:21.981903 kubelet[2663]: E0213 20:55:21.981861 2663 kubelet.go:2900] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Feb 13 20:55:26.876706 systemd[1]: Started sshd@57-10.0.0.7:22-10.0.0.1:48254.service - OpenSSH per-connection server daemon (10.0.0.1:48254).
Feb 13 20:55:26.908664 sshd[3851]: Accepted publickey for core from 10.0.0.1 port 48254 ssh2: RSA SHA256:cMrn/QNbOj3/muQN24sDQmRTV2Hbz1QD+PbuA/uipHg
Feb 13 20:55:26.909810 sshd[3851]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 20:55:26.913553 systemd-logind[1517]: New session 58 of user core.
Feb 13 20:55:26.919794 systemd[1]: Started session-58.scope - Session 58 of User core.
Feb 13 20:55:26.982883 kubelet[2663]: E0213 20:55:26.982826 2663 kubelet.go:2900] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Feb 13 20:55:27.028463 sshd[3851]: pam_unix(sshd:session): session closed for user core
Feb 13 20:55:27.030881 systemd[1]: sshd@57-10.0.0.7:22-10.0.0.1:48254.service: Deactivated successfully.
Feb 13 20:55:27.033350 systemd-logind[1517]: Session 58 logged out. Waiting for processes to exit.
Feb 13 20:55:27.033526 systemd[1]: session-58.scope: Deactivated successfully.
Feb 13 20:55:27.035011 systemd-logind[1517]: Removed session 58.
Feb 13 20:55:27.874278 kubelet[2663]: E0213 20:55:27.874247 2663 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 13 20:55:27.874880 kubelet[2663]: E0213 20:55:27.874854 2663 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"install-cni-plugin\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/flannel/flannel-cni-plugin:v1.1.2\\\"\"" pod="kube-flannel/kube-flannel-ds-6mvg6" podUID="cd690012-36a5-4d95-b540-563eafe34300"
Feb 13 20:55:31.983546 kubelet[2663]: E0213 20:55:31.983509 2663 kubelet.go:2900] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Feb 13 20:55:32.044694 systemd[1]: Started sshd@58-10.0.0.7:22-10.0.0.1:48258.service - OpenSSH per-connection server daemon (10.0.0.1:48258).
Feb 13 20:55:32.076764 sshd[3868]: Accepted publickey for core from 10.0.0.1 port 48258 ssh2: RSA SHA256:cMrn/QNbOj3/muQN24sDQmRTV2Hbz1QD+PbuA/uipHg
Feb 13 20:55:32.077936 sshd[3868]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 20:55:32.081541 systemd-logind[1517]: New session 59 of user core.
Feb 13 20:55:32.093814 systemd[1]: Started session-59.scope - Session 59 of User core.
Feb 13 20:55:32.198519 sshd[3868]: pam_unix(sshd:session): session closed for user core
Feb 13 20:55:32.201632 systemd[1]: sshd@58-10.0.0.7:22-10.0.0.1:48258.service: Deactivated successfully.
Feb 13 20:55:32.203367 systemd-logind[1517]: Session 59 logged out. Waiting for processes to exit.
Feb 13 20:55:32.203413 systemd[1]: session-59.scope: Deactivated successfully.
Feb 13 20:55:32.204389 systemd-logind[1517]: Removed session 59.
Feb 13 20:55:36.874055 kubelet[2663]: E0213 20:55:36.874012 2663 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 13 20:55:36.984540 kubelet[2663]: E0213 20:55:36.984477 2663 kubelet.go:2900] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Feb 13 20:55:37.209697 systemd[1]: Started sshd@59-10.0.0.7:22-10.0.0.1:57614.service - OpenSSH per-connection server daemon (10.0.0.1:57614).
Feb 13 20:55:37.241620 sshd[3884]: Accepted publickey for core from 10.0.0.1 port 57614 ssh2: RSA SHA256:cMrn/QNbOj3/muQN24sDQmRTV2Hbz1QD+PbuA/uipHg
Feb 13 20:55:37.242730 sshd[3884]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 20:55:37.245932 systemd-logind[1517]: New session 60 of user core.
Feb 13 20:55:37.258690 systemd[1]: Started session-60.scope - Session 60 of User core.
Feb 13 20:55:37.363188 sshd[3884]: pam_unix(sshd:session): session closed for user core
Feb 13 20:55:37.366168 systemd[1]: sshd@59-10.0.0.7:22-10.0.0.1:57614.service: Deactivated successfully.
Feb 13 20:55:37.368000 systemd[1]: session-60.scope: Deactivated successfully.
Feb 13 20:55:37.368010 systemd-logind[1517]: Session 60 logged out. Waiting for processes to exit.
Feb 13 20:55:37.369552 systemd-logind[1517]: Removed session 60.
Feb 13 20:55:38.874312 kubelet[2663]: E0213 20:55:38.874189 2663 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 13 20:55:38.874933 kubelet[2663]: E0213 20:55:38.874894 2663 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"install-cni-plugin\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/flannel/flannel-cni-plugin:v1.1.2\\\"\"" pod="kube-flannel/kube-flannel-ds-6mvg6" podUID="cd690012-36a5-4d95-b540-563eafe34300"
Feb 13 20:55:41.985913 kubelet[2663]: E0213 20:55:41.985881 2663 kubelet.go:2900] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Feb 13 20:55:42.377696 systemd[1]: Started sshd@60-10.0.0.7:22-10.0.0.1:57616.service - OpenSSH per-connection server daemon (10.0.0.1:57616).
Feb 13 20:55:42.409623 sshd[3902]: Accepted publickey for core from 10.0.0.1 port 57616 ssh2: RSA SHA256:cMrn/QNbOj3/muQN24sDQmRTV2Hbz1QD+PbuA/uipHg
Feb 13 20:55:42.410787 sshd[3902]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 20:55:42.415987 systemd-logind[1517]: New session 61 of user core.
Feb 13 20:55:42.426787 systemd[1]: Started session-61.scope - Session 61 of User core.
Feb 13 20:55:42.532212 sshd[3902]: pam_unix(sshd:session): session closed for user core
Feb 13 20:55:42.535847 systemd-logind[1517]: Session 61 logged out. Waiting for processes to exit.
Feb 13 20:55:42.536052 systemd[1]: sshd@60-10.0.0.7:22-10.0.0.1:57616.service: Deactivated successfully.
Feb 13 20:55:42.538599 systemd[1]: session-61.scope: Deactivated successfully.
Feb 13 20:55:42.539476 systemd-logind[1517]: Removed session 61.
Feb 13 20:55:42.874685 kubelet[2663]: E0213 20:55:42.874627 2663 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 13 20:55:46.987760 kubelet[2663]: E0213 20:55:46.987726 2663 kubelet.go:2900] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Feb 13 20:55:47.542702 systemd[1]: Started sshd@61-10.0.0.7:22-10.0.0.1:45824.service - OpenSSH per-connection server daemon (10.0.0.1:45824).
Feb 13 20:55:47.574650 sshd[3918]: Accepted publickey for core from 10.0.0.1 port 45824 ssh2: RSA SHA256:cMrn/QNbOj3/muQN24sDQmRTV2Hbz1QD+PbuA/uipHg
Feb 13 20:55:47.576126 sshd[3918]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 20:55:47.579571 systemd-logind[1517]: New session 62 of user core.
Feb 13 20:55:47.590823 systemd[1]: Started session-62.scope - Session 62 of User core.
Feb 13 20:55:47.697888 sshd[3918]: pam_unix(sshd:session): session closed for user core
Feb 13 20:55:47.700909 systemd[1]: sshd@61-10.0.0.7:22-10.0.0.1:45824.service: Deactivated successfully.
Feb 13 20:55:47.702720 systemd-logind[1517]: Session 62 logged out. Waiting for processes to exit.
Feb 13 20:55:47.702725 systemd[1]: session-62.scope: Deactivated successfully.
Feb 13 20:55:47.703844 systemd-logind[1517]: Removed session 62.
Feb 13 20:55:49.873867 kubelet[2663]: E0213 20:55:49.873824 2663 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 13 20:55:49.874247 kubelet[2663]: E0213 20:55:49.873899 2663 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 13 20:55:49.874635 kubelet[2663]: E0213 20:55:49.874573 2663 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"install-cni-plugin\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/flannel/flannel-cni-plugin:v1.1.2\\\"\"" pod="kube-flannel/kube-flannel-ds-6mvg6" podUID="cd690012-36a5-4d95-b540-563eafe34300"
Feb 13 20:55:51.988695 kubelet[2663]: E0213 20:55:51.988650 2663 kubelet.go:2900] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Feb 13 20:55:52.709746 systemd[1]: Started sshd@62-10.0.0.7:22-10.0.0.1:37498.service - OpenSSH per-connection server daemon (10.0.0.1:37498).
Feb 13 20:55:52.741707 sshd[3935]: Accepted publickey for core from 10.0.0.1 port 37498 ssh2: RSA SHA256:cMrn/QNbOj3/muQN24sDQmRTV2Hbz1QD+PbuA/uipHg
Feb 13 20:55:52.742829 sshd[3935]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 20:55:52.748561 systemd-logind[1517]: New session 63 of user core.
Feb 13 20:55:52.755713 systemd[1]: Started session-63.scope - Session 63 of User core.
Feb 13 20:55:52.860606 sshd[3935]: pam_unix(sshd:session): session closed for user core
Feb 13 20:55:52.863901 systemd[1]: sshd@62-10.0.0.7:22-10.0.0.1:37498.service: Deactivated successfully.
Feb 13 20:55:52.865880 systemd[1]: session-63.scope: Deactivated successfully.
Feb 13 20:55:52.865891 systemd-logind[1517]: Session 63 logged out. Waiting for processes to exit.
Feb 13 20:55:52.867679 systemd-logind[1517]: Removed session 63.
Feb 13 20:55:56.990417 kubelet[2663]: E0213 20:55:56.990327 2663 kubelet.go:2900] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Feb 13 20:55:57.877991 systemd[1]: Started sshd@63-10.0.0.7:22-10.0.0.1:37502.service - OpenSSH per-connection server daemon (10.0.0.1:37502).
Feb 13 20:55:57.910464 sshd[3951]: Accepted publickey for core from 10.0.0.1 port 37502 ssh2: RSA SHA256:cMrn/QNbOj3/muQN24sDQmRTV2Hbz1QD+PbuA/uipHg
Feb 13 20:55:57.911593 sshd[3951]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 20:55:57.915079 systemd-logind[1517]: New session 64 of user core.
Feb 13 20:55:57.923708 systemd[1]: Started session-64.scope - Session 64 of User core.
Feb 13 20:55:58.028062 sshd[3951]: pam_unix(sshd:session): session closed for user core
Feb 13 20:55:58.031563 systemd[1]: sshd@63-10.0.0.7:22-10.0.0.1:37502.service: Deactivated successfully.
Feb 13 20:55:58.033516 systemd-logind[1517]: Session 64 logged out. Waiting for processes to exit.
Feb 13 20:55:58.033542 systemd[1]: session-64.scope: Deactivated successfully.
Feb 13 20:55:58.034636 systemd-logind[1517]: Removed session 64.
Feb 13 20:56:01.874701 kubelet[2663]: E0213 20:56:01.874650 2663 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 13 20:56:01.874701 kubelet[2663]: E0213 20:56:01.874652 2663 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 13 20:56:01.875791 kubelet[2663]: E0213 20:56:01.875437 2663 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"install-cni-plugin\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/flannel/flannel-cni-plugin:v1.1.2\\\"\"" pod="kube-flannel/kube-flannel-ds-6mvg6" podUID="cd690012-36a5-4d95-b540-563eafe34300"
Feb 13 20:56:01.991209 kubelet[2663]: E0213 20:56:01.991113 2663 kubelet.go:2900] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Feb 13 20:56:03.037813 systemd[1]: Started sshd@64-10.0.0.7:22-10.0.0.1:33070.service - OpenSSH per-connection server daemon (10.0.0.1:33070).
Feb 13 20:56:03.069882 sshd[3968]: Accepted publickey for core from 10.0.0.1 port 33070 ssh2: RSA SHA256:cMrn/QNbOj3/muQN24sDQmRTV2Hbz1QD+PbuA/uipHg
Feb 13 20:56:03.071061 sshd[3968]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 20:56:03.074715 systemd-logind[1517]: New session 65 of user core.
Feb 13 20:56:03.086717 systemd[1]: Started session-65.scope - Session 65 of User core.
Feb 13 20:56:03.194229 sshd[3968]: pam_unix(sshd:session): session closed for user core
Feb 13 20:56:03.196804 systemd[1]: sshd@64-10.0.0.7:22-10.0.0.1:33070.service: Deactivated successfully.
Feb 13 20:56:03.199422 systemd-logind[1517]: Session 65 logged out. Waiting for processes to exit.
Feb 13 20:56:03.199476 systemd[1]: session-65.scope: Deactivated successfully.
Feb 13 20:56:03.201094 systemd-logind[1517]: Removed session 65.
Feb 13 20:56:06.992391 kubelet[2663]: E0213 20:56:06.992308 2663 kubelet.go:2900] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Feb 13 20:56:08.210721 systemd[1]: Started sshd@65-10.0.0.7:22-10.0.0.1:33072.service - OpenSSH per-connection server daemon (10.0.0.1:33072).
Feb 13 20:56:08.242860 sshd[3984]: Accepted publickey for core from 10.0.0.1 port 33072 ssh2: RSA SHA256:cMrn/QNbOj3/muQN24sDQmRTV2Hbz1QD+PbuA/uipHg
Feb 13 20:56:08.244070 sshd[3984]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 20:56:08.247956 systemd-logind[1517]: New session 66 of user core.
Feb 13 20:56:08.259723 systemd[1]: Started session-66.scope - Session 66 of User core.
Feb 13 20:56:08.365178 sshd[3984]: pam_unix(sshd:session): session closed for user core
Feb 13 20:56:08.368349 systemd[1]: sshd@65-10.0.0.7:22-10.0.0.1:33072.service: Deactivated successfully.
Feb 13 20:56:08.370377 systemd[1]: session-66.scope: Deactivated successfully.
Feb 13 20:56:08.371044 systemd-logind[1517]: Session 66 logged out. Waiting for processes to exit.
Feb 13 20:56:08.371882 systemd-logind[1517]: Removed session 66.
Feb 13 20:56:11.993888 kubelet[2663]: E0213 20:56:11.993847 2663 kubelet.go:2900] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Feb 13 20:56:12.873909 kubelet[2663]: E0213 20:56:12.873782 2663 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 13 20:56:12.874458 kubelet[2663]: E0213 20:56:12.874421 2663 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"install-cni-plugin\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/flannel/flannel-cni-plugin:v1.1.2\\\"\"" pod="kube-flannel/kube-flannel-ds-6mvg6" podUID="cd690012-36a5-4d95-b540-563eafe34300"
Feb 13 20:56:13.374701 systemd[1]: Started sshd@66-10.0.0.7:22-10.0.0.1:40624.service - OpenSSH per-connection server daemon (10.0.0.1:40624).
Feb 13 20:56:13.406667 sshd[3999]: Accepted publickey for core from 10.0.0.1 port 40624 ssh2: RSA SHA256:cMrn/QNbOj3/muQN24sDQmRTV2Hbz1QD+PbuA/uipHg
Feb 13 20:56:13.408140 sshd[3999]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 20:56:13.411449 systemd-logind[1517]: New session 67 of user core.
Feb 13 20:56:13.420749 systemd[1]: Started session-67.scope - Session 67 of User core.
Feb 13 20:56:13.526987 sshd[3999]: pam_unix(sshd:session): session closed for user core
Feb 13 20:56:13.530276 systemd[1]: sshd@66-10.0.0.7:22-10.0.0.1:40624.service: Deactivated successfully.
Feb 13 20:56:13.532247 systemd-logind[1517]: Session 67 logged out. Waiting for processes to exit.
Feb 13 20:56:13.532335 systemd[1]: session-67.scope: Deactivated successfully.
Feb 13 20:56:13.533571 systemd-logind[1517]: Removed session 67.
Feb 13 20:56:16.994958 kubelet[2663]: E0213 20:56:16.994892 2663 kubelet.go:2900] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Feb 13 20:56:18.535693 systemd[1]: Started sshd@67-10.0.0.7:22-10.0.0.1:40640.service - OpenSSH per-connection server daemon (10.0.0.1:40640).
Feb 13 20:56:18.567789 sshd[4015]: Accepted publickey for core from 10.0.0.1 port 40640 ssh2: RSA SHA256:cMrn/QNbOj3/muQN24sDQmRTV2Hbz1QD+PbuA/uipHg
Feb 13 20:56:18.568928 sshd[4015]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 20:56:18.572548 systemd-logind[1517]: New session 68 of user core.
Feb 13 20:56:18.588762 systemd[1]: Started session-68.scope - Session 68 of User core.
Feb 13 20:56:18.696557 sshd[4015]: pam_unix(sshd:session): session closed for user core
Feb 13 20:56:18.699616 systemd[1]: sshd@67-10.0.0.7:22-10.0.0.1:40640.service: Deactivated successfully.
Feb 13 20:56:18.701541 systemd[1]: session-68.scope: Deactivated successfully.
Feb 13 20:56:18.701884 systemd-logind[1517]: Session 68 logged out. Waiting for processes to exit.
Feb 13 20:56:18.702828 systemd-logind[1517]: Removed session 68.
Feb 13 20:56:21.995657 kubelet[2663]: E0213 20:56:21.995624 2663 kubelet.go:2900] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Feb 13 20:56:23.712775 systemd[1]: Started sshd@68-10.0.0.7:22-10.0.0.1:33444.service - OpenSSH per-connection server daemon (10.0.0.1:33444).
Feb 13 20:56:23.744469 sshd[4031]: Accepted publickey for core from 10.0.0.1 port 33444 ssh2: RSA SHA256:cMrn/QNbOj3/muQN24sDQmRTV2Hbz1QD+PbuA/uipHg
Feb 13 20:56:23.745692 sshd[4031]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 20:56:23.749919 systemd-logind[1517]: New session 69 of user core.
Feb 13 20:56:23.761716 systemd[1]: Started session-69.scope - Session 69 of User core.
Feb 13 20:56:23.867369 sshd[4031]: pam_unix(sshd:session): session closed for user core
Feb 13 20:56:23.870528 systemd[1]: sshd@68-10.0.0.7:22-10.0.0.1:33444.service: Deactivated successfully.
Feb 13 20:56:23.872416 systemd-logind[1517]: Session 69 logged out. Waiting for processes to exit.
Feb 13 20:56:23.872476 systemd[1]: session-69.scope: Deactivated successfully.
Feb 13 20:56:23.873778 systemd-logind[1517]: Removed session 69.
Feb 13 20:56:24.873776 kubelet[2663]: E0213 20:56:24.873734 2663 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 13 20:56:24.874357 kubelet[2663]: E0213 20:56:24.874313 2663 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"install-cni-plugin\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/flannel/flannel-cni-plugin:v1.1.2\\\"\"" pod="kube-flannel/kube-flannel-ds-6mvg6" podUID="cd690012-36a5-4d95-b540-563eafe34300"
Feb 13 20:56:26.996872 kubelet[2663]: E0213 20:56:26.996825 2663 kubelet.go:2900] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Feb 13 20:56:28.881709 systemd[1]: Started sshd@69-10.0.0.7:22-10.0.0.1:33446.service - OpenSSH per-connection server daemon (10.0.0.1:33446).
Feb 13 20:56:28.915554 sshd[4049]: Accepted publickey for core from 10.0.0.1 port 33446 ssh2: RSA SHA256:cMrn/QNbOj3/muQN24sDQmRTV2Hbz1QD+PbuA/uipHg
Feb 13 20:56:28.916662 sshd[4049]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 20:56:28.920024 systemd-logind[1517]: New session 70 of user core.
Feb 13 20:56:28.929788 systemd[1]: Started session-70.scope - Session 70 of User core.
Feb 13 20:56:29.036702 sshd[4049]: pam_unix(sshd:session): session closed for user core
Feb 13 20:56:29.040161 systemd[1]: sshd@69-10.0.0.7:22-10.0.0.1:33446.service: Deactivated successfully.
Feb 13 20:56:29.042071 systemd-logind[1517]: Session 70 logged out. Waiting for processes to exit.
Feb 13 20:56:29.042094 systemd[1]: session-70.scope: Deactivated successfully.
Feb 13 20:56:29.043540 systemd-logind[1517]: Removed session 70.
Feb 13 20:56:31.997482 kubelet[2663]: E0213 20:56:31.997437 2663 kubelet.go:2900] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Feb 13 20:56:34.048707 systemd[1]: Started sshd@70-10.0.0.7:22-10.0.0.1:46350.service - OpenSSH per-connection server daemon (10.0.0.1:46350).
Feb 13 20:56:34.080657 sshd[4064]: Accepted publickey for core from 10.0.0.1 port 46350 ssh2: RSA SHA256:cMrn/QNbOj3/muQN24sDQmRTV2Hbz1QD+PbuA/uipHg
Feb 13 20:56:34.081837 sshd[4064]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 20:56:34.085701 systemd-logind[1517]: New session 71 of user core.
Feb 13 20:56:34.100805 systemd[1]: Started session-71.scope - Session 71 of User core.
Feb 13 20:56:34.207803 sshd[4064]: pam_unix(sshd:session): session closed for user core
Feb 13 20:56:34.210242 systemd[1]: sshd@70-10.0.0.7:22-10.0.0.1:46350.service: Deactivated successfully.
Feb 13 20:56:34.212796 systemd[1]: session-71.scope: Deactivated successfully.
Feb 13 20:56:34.213134 systemd-logind[1517]: Session 71 logged out. Waiting for processes to exit.
Feb 13 20:56:34.214789 systemd-logind[1517]: Removed session 71.
Feb 13 20:56:36.999040 kubelet[2663]: E0213 20:56:36.999001 2663 kubelet.go:2900] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Feb 13 20:56:39.219934 systemd[1]: Started sshd@71-10.0.0.7:22-10.0.0.1:46360.service - OpenSSH per-connection server daemon (10.0.0.1:46360).
Feb 13 20:56:39.251829 sshd[4079]: Accepted publickey for core from 10.0.0.1 port 46360 ssh2: RSA SHA256:cMrn/QNbOj3/muQN24sDQmRTV2Hbz1QD+PbuA/uipHg
Feb 13 20:56:39.252975 sshd[4079]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 20:56:39.256535 systemd-logind[1517]: New session 72 of user core.
Feb 13 20:56:39.268713 systemd[1]: Started session-72.scope - Session 72 of User core.
Feb 13 20:56:39.373779 sshd[4079]: pam_unix(sshd:session): session closed for user core
Feb 13 20:56:39.377192 systemd[1]: sshd@71-10.0.0.7:22-10.0.0.1:46360.service: Deactivated successfully.
Feb 13 20:56:39.379225 systemd[1]: session-72.scope: Deactivated successfully.
Feb 13 20:56:39.379398 systemd-logind[1517]: Session 72 logged out. Waiting for processes to exit.
Feb 13 20:56:39.380645 systemd-logind[1517]: Removed session 72.
Feb 13 20:56:39.873832 kubelet[2663]: E0213 20:56:39.873798 2663 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 13 20:56:39.874860 kubelet[2663]: E0213 20:56:39.874820 2663 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"install-cni-plugin\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/flannel/flannel-cni-plugin:v1.1.2\\\"\"" pod="kube-flannel/kube-flannel-ds-6mvg6" podUID="cd690012-36a5-4d95-b540-563eafe34300"
Feb 13 20:56:42.000344 kubelet[2663]: E0213 20:56:42.000306 2663 kubelet.go:2900] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Feb 13 20:56:44.385717 systemd[1]: Started sshd@72-10.0.0.7:22-10.0.0.1:46534.service - OpenSSH per-connection server daemon (10.0.0.1:46534).
Feb 13 20:56:44.417643 sshd[4096]: Accepted publickey for core from 10.0.0.1 port 46534 ssh2: RSA SHA256:cMrn/QNbOj3/muQN24sDQmRTV2Hbz1QD+PbuA/uipHg
Feb 13 20:56:44.419063 sshd[4096]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 20:56:44.422697 systemd-logind[1517]: New session 73 of user core.
Feb 13 20:56:44.432716 systemd[1]: Started session-73.scope - Session 73 of User core.
Feb 13 20:56:44.538744 sshd[4096]: pam_unix(sshd:session): session closed for user core
Feb 13 20:56:44.541130 systemd[1]: sshd@72-10.0.0.7:22-10.0.0.1:46534.service: Deactivated successfully.
Feb 13 20:56:44.543668 systemd-logind[1517]: Session 73 logged out. Waiting for processes to exit.
Feb 13 20:56:44.543843 systemd[1]: session-73.scope: Deactivated successfully.
Feb 13 20:56:44.544862 systemd-logind[1517]: Removed session 73.
Feb 13 20:56:47.001861 kubelet[2663]: E0213 20:56:47.001816 2663 kubelet.go:2900] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Feb 13 20:56:48.874285 kubelet[2663]: E0213 20:56:48.874246 2663 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 13 20:56:49.548717 systemd[1]: Started sshd@73-10.0.0.7:22-10.0.0.1:46542.service - OpenSSH per-connection server daemon (10.0.0.1:46542).
Feb 13 20:56:49.580799 sshd[4111]: Accepted publickey for core from 10.0.0.1 port 46542 ssh2: RSA SHA256:cMrn/QNbOj3/muQN24sDQmRTV2Hbz1QD+PbuA/uipHg
Feb 13 20:56:49.581967 sshd[4111]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 20:56:49.585263 systemd-logind[1517]: New session 74 of user core.
Feb 13 20:56:49.593800 systemd[1]: Started session-74.scope - Session 74 of User core.
Feb 13 20:56:49.698788 sshd[4111]: pam_unix(sshd:session): session closed for user core
Feb 13 20:56:49.701750 systemd[1]: sshd@73-10.0.0.7:22-10.0.0.1:46542.service: Deactivated successfully.
Feb 13 20:56:49.703880 systemd-logind[1517]: Session 74 logged out. Waiting for processes to exit.
Feb 13 20:56:49.703969 systemd[1]: session-74.scope: Deactivated successfully.
Feb 13 20:56:49.705920 systemd-logind[1517]: Removed session 74.
Feb 13 20:56:52.003128 kubelet[2663]: E0213 20:56:52.003085 2663 kubelet.go:2900] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Feb 13 20:56:53.874476 kubelet[2663]: E0213 20:56:53.874418 2663 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 13 20:56:53.875828 containerd[1534]: time="2025-02-13T20:56:53.875791789Z" level=info msg="PullImage \"docker.io/flannel/flannel-cni-plugin:v1.1.2\""
Feb 13 20:56:54.708705 systemd[1]: Started sshd@74-10.0.0.7:22-10.0.0.1:41102.service - OpenSSH per-connection server daemon (10.0.0.1:41102).
Feb 13 20:56:54.740821 sshd[4127]: Accepted publickey for core from 10.0.0.1 port 41102 ssh2: RSA SHA256:cMrn/QNbOj3/muQN24sDQmRTV2Hbz1QD+PbuA/uipHg
Feb 13 20:56:54.741976 sshd[4127]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 20:56:54.745546 systemd-logind[1517]: New session 75 of user core.
Feb 13 20:56:54.751739 systemd[1]: Started session-75.scope - Session 75 of User core.
Feb 13 20:56:54.858213 sshd[4127]: pam_unix(sshd:session): session closed for user core
Feb 13 20:56:54.860652 systemd[1]: sshd@74-10.0.0.7:22-10.0.0.1:41102.service: Deactivated successfully.
Feb 13 20:56:54.863048 systemd-logind[1517]: Session 75 logged out. Waiting for processes to exit.
Feb 13 20:56:54.863192 systemd[1]: session-75.scope: Deactivated successfully.
Feb 13 20:56:54.864238 systemd-logind[1517]: Removed session 75.
Feb 13 20:56:54.986556 containerd[1534]: time="2025-02-13T20:56:54.986501059Z" level=error msg="PullImage \"docker.io/flannel/flannel-cni-plugin:v1.1.2\" failed" error="failed to pull and unpack image \"docker.io/flannel/flannel-cni-plugin:v1.1.2\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/flannel/flannel-cni-plugin/manifests/sha256:bf4b62b131666d040f35a327d906ee5a3418280b68a88d9b9c7e828057210443: 429 Too Many Requests - Server message: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit"
Feb 13 20:56:54.987000 containerd[1534]: time="2025-02-13T20:56:54.986509739Z" level=info msg="stop pulling image docker.io/flannel/flannel-cni-plugin:v1.1.2: active requests=0, bytes read=11110"
Feb 13 20:56:54.987029 kubelet[2663]: E0213 20:56:54.986747 2663 remote_image.go:180] "PullImage from image service failed" err="rpc error: code = Unknown desc = failed to pull and unpack image \"docker.io/flannel/flannel-cni-plugin:v1.1.2\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/flannel/flannel-cni-plugin/manifests/sha256:bf4b62b131666d040f35a327d906ee5a3418280b68a88d9b9c7e828057210443: 429 Too Many Requests - Server message: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit" image="docker.io/flannel/flannel-cni-plugin:v1.1.2"
Feb 13 20:56:54.987029 kubelet[2663]: E0213 20:56:54.986795 2663 kuberuntime_image.go:55] "Failed to pull image" err="failed to pull and unpack image \"docker.io/flannel/flannel-cni-plugin:v1.1.2\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/flannel/flannel-cni-plugin/manifests/sha256:bf4b62b131666d040f35a327d906ee5a3418280b68a88d9b9c7e828057210443: 429 Too Many Requests - Server message: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit" image="docker.io/flannel/flannel-cni-plugin:v1.1.2"
Feb 13 20:56:54.987261 kubelet[2663]: E0213 20:56:54.986894 2663 kuberuntime_manager.go:1256] init container &Container{Name:install-cni-plugin,Image:docker.io/flannel/flannel-cni-plugin:v1.1.2,Command:[cp],Args:[-f /flannel /opt/cni/bin/flannel],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:cni-plugin,ReadOnly:false,MountPath:/opt/cni/bin,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-zfhf4,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod kube-flannel-ds-6mvg6_kube-flannel(cd690012-36a5-4d95-b540-563eafe34300): ErrImagePull: failed to pull and unpack image "docker.io/flannel/flannel-cni-plugin:v1.1.2": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/flannel/flannel-cni-plugin/manifests/sha256:bf4b62b131666d040f35a327d906ee5a3418280b68a88d9b9c7e828057210443: 429 Too Many Requests - Server message: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit
Feb 13 20:56:54.987320 kubelet[2663]: E0213 20:56:54.986926 2663 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"install-cni-plugin\" with ErrImagePull: \"failed to pull and unpack image \\\"docker.io/flannel/flannel-cni-plugin:v1.1.2\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/flannel/flannel-cni-plugin/manifests/sha256:bf4b62b131666d040f35a327d906ee5a3418280b68a88d9b9c7e828057210443: 429 Too Many Requests - Server message: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit\"" pod="kube-flannel/kube-flannel-ds-6mvg6" podUID="cd690012-36a5-4d95-b540-563eafe34300"
Feb 13 20:56:57.003984 kubelet[2663]: E0213 20:56:57.003927 2663 kubelet.go:2900] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Feb 13 20:56:59.876721 systemd[1]: Started sshd@75-10.0.0.7:22-10.0.0.1:41118.service - OpenSSH per-connection server daemon (10.0.0.1:41118).
Feb 13 20:56:59.908788 sshd[4144]: Accepted publickey for core from 10.0.0.1 port 41118 ssh2: RSA SHA256:cMrn/QNbOj3/muQN24sDQmRTV2Hbz1QD+PbuA/uipHg
Feb 13 20:56:59.909957 sshd[4144]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 20:56:59.913248 systemd-logind[1517]: New session 76 of user core.
Feb 13 20:56:59.929765 systemd[1]: Started session-76.scope - Session 76 of User core.
Feb 13 20:57:00.036832 sshd[4144]: pam_unix(sshd:session): session closed for user core
Feb 13 20:57:00.039928 systemd[1]: sshd@75-10.0.0.7:22-10.0.0.1:41118.service: Deactivated successfully.
Feb 13 20:57:00.041809 systemd-logind[1517]: Session 76 logged out. Waiting for processes to exit.
Feb 13 20:57:00.041875 systemd[1]: session-76.scope: Deactivated successfully.
Feb 13 20:57:00.043025 systemd-logind[1517]: Removed session 76.
Feb 13 20:57:02.005162 kubelet[2663]: E0213 20:57:02.005129 2663 kubelet.go:2900] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Feb 13 20:57:03.874819 kubelet[2663]: E0213 20:57:03.874778 2663 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 13 20:57:05.044711 systemd[1]: Started sshd@76-10.0.0.7:22-10.0.0.1:43398.service - OpenSSH per-connection server daemon (10.0.0.1:43398).
Feb 13 20:57:05.076992 sshd[4160]: Accepted publickey for core from 10.0.0.1 port 43398 ssh2: RSA SHA256:cMrn/QNbOj3/muQN24sDQmRTV2Hbz1QD+PbuA/uipHg
Feb 13 20:57:05.078196 sshd[4160]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 20:57:05.081517 systemd-logind[1517]: New session 77 of user core.
Feb 13 20:57:05.092724 systemd[1]: Started session-77.scope - Session 77 of User core.
Feb 13 20:57:05.199783 sshd[4160]: pam_unix(sshd:session): session closed for user core
Feb 13 20:57:05.203180 systemd[1]: sshd@76-10.0.0.7:22-10.0.0.1:43398.service: Deactivated successfully.
Feb 13 20:57:05.205224 systemd-logind[1517]: Session 77 logged out. Waiting for processes to exit.
Feb 13 20:57:05.205254 systemd[1]: session-77.scope: Deactivated successfully.
Feb 13 20:57:05.207098 systemd-logind[1517]: Removed session 77.
Feb 13 20:57:06.873508 kubelet[2663]: E0213 20:57:06.873460 2663 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 13 20:57:06.874128 kubelet[2663]: E0213 20:57:06.874089 2663 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"install-cni-plugin\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/flannel/flannel-cni-plugin:v1.1.2\\\"\"" pod="kube-flannel/kube-flannel-ds-6mvg6" podUID="cd690012-36a5-4d95-b540-563eafe34300"
Feb 13 20:57:07.006876 kubelet[2663]: E0213 20:57:07.006841 2663 kubelet.go:2900] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Feb 13 20:57:08.873918 kubelet[2663]: E0213 20:57:08.873852 2663 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 13 20:57:10.216726 systemd[1]: Started sshd@77-10.0.0.7:22-10.0.0.1:43414.service - OpenSSH per-connection server daemon (10.0.0.1:43414).
Feb 13 20:57:10.248943 sshd[4176]: Accepted publickey for core from 10.0.0.1 port 43414 ssh2: RSA SHA256:cMrn/QNbOj3/muQN24sDQmRTV2Hbz1QD+PbuA/uipHg
Feb 13 20:57:10.250111 sshd[4176]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 20:57:10.253349 systemd-logind[1517]: New session 78 of user core.
Feb 13 20:57:10.271703 systemd[1]: Started session-78.scope - Session 78 of User core.
Feb 13 20:57:10.378915 sshd[4176]: pam_unix(sshd:session): session closed for user core
Feb 13 20:57:10.389790 systemd[1]: Started sshd@78-10.0.0.7:22-10.0.0.1:43420.service - OpenSSH per-connection server daemon (10.0.0.1:43420).
Feb 13 20:57:10.390142 systemd[1]: sshd@77-10.0.0.7:22-10.0.0.1:43414.service: Deactivated successfully.
Feb 13 20:57:10.392906 systemd[1]: session-78.scope: Deactivated successfully.
Feb 13 20:57:10.392931 systemd-logind[1517]: Session 78 logged out. Waiting for processes to exit.
Feb 13 20:57:10.394271 systemd-logind[1517]: Removed session 78.
Feb 13 20:57:10.422653 sshd[4189]: Accepted publickey for core from 10.0.0.1 port 43420 ssh2: RSA SHA256:cMrn/QNbOj3/muQN24sDQmRTV2Hbz1QD+PbuA/uipHg
Feb 13 20:57:10.424243 sshd[4189]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 20:57:10.427967 systemd-logind[1517]: New session 79 of user core.
Feb 13 20:57:10.445796 systemd[1]: Started session-79.scope - Session 79 of User core.
Feb 13 20:57:10.611237 sshd[4189]: pam_unix(sshd:session): session closed for user core
Feb 13 20:57:10.617702 systemd[1]: Started sshd@79-10.0.0.7:22-10.0.0.1:43428.service - OpenSSH per-connection server daemon (10.0.0.1:43428).
Feb 13 20:57:10.618075 systemd[1]: sshd@78-10.0.0.7:22-10.0.0.1:43420.service: Deactivated successfully.
Feb 13 20:57:10.620583 systemd[1]: session-79.scope: Deactivated successfully.
Feb 13 20:57:10.620913 systemd-logind[1517]: Session 79 logged out. Waiting for processes to exit.
Feb 13 20:57:10.622173 systemd-logind[1517]: Removed session 79.
Feb 13 20:57:10.650202 sshd[4203]: Accepted publickey for core from 10.0.0.1 port 43428 ssh2: RSA SHA256:cMrn/QNbOj3/muQN24sDQmRTV2Hbz1QD+PbuA/uipHg
Feb 13 20:57:10.651407 sshd[4203]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 20:57:10.655458 systemd-logind[1517]: New session 80 of user core.
Feb 13 20:57:10.663738 systemd[1]: Started session-80.scope - Session 80 of User core.
Feb 13 20:57:11.784831 sshd[4203]: pam_unix(sshd:session): session closed for user core
Feb 13 20:57:11.796903 systemd[1]: Started sshd@80-10.0.0.7:22-10.0.0.1:43444.service - OpenSSH per-connection server daemon (10.0.0.1:43444).
Feb 13 20:57:11.797770 systemd[1]: sshd@79-10.0.0.7:22-10.0.0.1:43428.service: Deactivated successfully.
Feb 13 20:57:11.800747 systemd[1]: session-80.scope: Deactivated successfully.
Feb 13 20:57:11.802961 systemd-logind[1517]: Session 80 logged out. Waiting for processes to exit.
Feb 13 20:57:11.804750 systemd-logind[1517]: Removed session 80.
Feb 13 20:57:11.833963 sshd[4223]: Accepted publickey for core from 10.0.0.1 port 43444 ssh2: RSA SHA256:cMrn/QNbOj3/muQN24sDQmRTV2Hbz1QD+PbuA/uipHg
Feb 13 20:57:11.835363 sshd[4223]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 20:57:11.839533 systemd-logind[1517]: New session 81 of user core.
Feb 13 20:57:11.847888 systemd[1]: Started session-81.scope - Session 81 of User core.
Feb 13 20:57:12.008306 kubelet[2663]: E0213 20:57:12.008262 2663 kubelet.go:2900] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Feb 13 20:57:12.062616 sshd[4223]: pam_unix(sshd:session): session closed for user core
Feb 13 20:57:12.070971 systemd[1]: Started sshd@81-10.0.0.7:22-10.0.0.1:43458.service - OpenSSH per-connection server daemon (10.0.0.1:43458).
Feb 13 20:57:12.072282 systemd[1]: sshd@80-10.0.0.7:22-10.0.0.1:43444.service: Deactivated successfully.
Feb 13 20:57:12.074718 systemd[1]: session-81.scope: Deactivated successfully.
Feb 13 20:57:12.075240 systemd-logind[1517]: Session 81 logged out. Waiting for processes to exit.
Feb 13 20:57:12.076729 systemd-logind[1517]: Removed session 81.
Feb 13 20:57:12.103393 sshd[4239]: Accepted publickey for core from 10.0.0.1 port 43458 ssh2: RSA SHA256:cMrn/QNbOj3/muQN24sDQmRTV2Hbz1QD+PbuA/uipHg
Feb 13 20:57:12.104708 sshd[4239]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 20:57:12.108610 systemd-logind[1517]: New session 82 of user core.
Feb 13 20:57:12.119783 systemd[1]: Started session-82.scope - Session 82 of User core.
Feb 13 20:57:12.222963 sshd[4239]: pam_unix(sshd:session): session closed for user core
Feb 13 20:57:12.226157 systemd[1]: sshd@81-10.0.0.7:22-10.0.0.1:43458.service: Deactivated successfully.
Feb 13 20:57:12.228329 systemd[1]: session-82.scope: Deactivated successfully.
Feb 13 20:57:12.228440 systemd-logind[1517]: Session 82 logged out. Waiting for processes to exit.
Feb 13 20:57:12.229316 systemd-logind[1517]: Removed session 82.
Feb 13 20:57:17.012646 kubelet[2663]: E0213 20:57:17.012592 2663 kubelet.go:2900] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Feb 13 20:57:17.234730 systemd[1]: Started sshd@82-10.0.0.7:22-10.0.0.1:50674.service - OpenSSH per-connection server daemon (10.0.0.1:50674).
Feb 13 20:57:17.267051 sshd[4258]: Accepted publickey for core from 10.0.0.1 port 50674 ssh2: RSA SHA256:cMrn/QNbOj3/muQN24sDQmRTV2Hbz1QD+PbuA/uipHg
Feb 13 20:57:17.268262 sshd[4258]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 20:57:17.271962 systemd-logind[1517]: New session 83 of user core.
Feb 13 20:57:17.282842 systemd[1]: Started session-83.scope - Session 83 of User core.
Feb 13 20:57:17.385538 sshd[4258]: pam_unix(sshd:session): session closed for user core
Feb 13 20:57:17.388882 systemd[1]: sshd@82-10.0.0.7:22-10.0.0.1:50674.service: Deactivated successfully.
Feb 13 20:57:17.390835 systemd-logind[1517]: Session 83 logged out. Waiting for processes to exit.
Feb 13 20:57:17.390946 systemd[1]: session-83.scope: Deactivated successfully.
Feb 13 20:57:17.391865 systemd-logind[1517]: Removed session 83.
Feb 13 20:57:18.874167 kubelet[2663]: E0213 20:57:18.874128 2663 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 13 20:57:19.874713 kubelet[2663]: E0213 20:57:19.874553 2663 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 13 20:57:19.875238 kubelet[2663]: E0213 20:57:19.875201 2663 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"install-cni-plugin\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/flannel/flannel-cni-plugin:v1.1.2\\\"\"" pod="kube-flannel/kube-flannel-ds-6mvg6" podUID="cd690012-36a5-4d95-b540-563eafe34300"
Feb 13 20:57:22.013664 kubelet[2663]: E0213 20:57:22.013620 2663 kubelet.go:2900] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Feb 13 20:57:22.404731 systemd[1]: Started sshd@83-10.0.0.7:22-10.0.0.1:50682.service - OpenSSH per-connection server daemon (10.0.0.1:50682).
Feb 13 20:57:22.437727 sshd[4275]: Accepted publickey for core from 10.0.0.1 port 50682 ssh2: RSA SHA256:cMrn/QNbOj3/muQN24sDQmRTV2Hbz1QD+PbuA/uipHg
Feb 13 20:57:22.438947 sshd[4275]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 20:57:22.442232 systemd-logind[1517]: New session 84 of user core.
Feb 13 20:57:22.456779 systemd[1]: Started session-84.scope - Session 84 of User core.
Feb 13 20:57:22.560621 sshd[4275]: pam_unix(sshd:session): session closed for user core
Feb 13 20:57:22.563726 systemd[1]: sshd@83-10.0.0.7:22-10.0.0.1:50682.service: Deactivated successfully.
Feb 13 20:57:22.565728 systemd-logind[1517]: Session 84 logged out. Waiting for processes to exit.
Feb 13 20:57:22.565987 systemd[1]: session-84.scope: Deactivated successfully.
Feb 13 20:57:22.566758 systemd-logind[1517]: Removed session 84.
Feb 13 20:57:27.014582 kubelet[2663]: E0213 20:57:27.014532 2663 kubelet.go:2900] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Feb 13 20:57:27.576858 systemd[1]: Started sshd@84-10.0.0.7:22-10.0.0.1:42770.service - OpenSSH per-connection server daemon (10.0.0.1:42770).
Feb 13 20:57:27.608643 sshd[4291]: Accepted publickey for core from 10.0.0.1 port 42770 ssh2: RSA SHA256:cMrn/QNbOj3/muQN24sDQmRTV2Hbz1QD+PbuA/uipHg
Feb 13 20:57:27.609809 sshd[4291]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 20:57:27.612974 systemd-logind[1517]: New session 85 of user core.
Feb 13 20:57:27.619697 systemd[1]: Started session-85.scope - Session 85 of User core.
Feb 13 20:57:27.720697 sshd[4291]: pam_unix(sshd:session): session closed for user core
Feb 13 20:57:27.723091 systemd[1]: sshd@84-10.0.0.7:22-10.0.0.1:42770.service: Deactivated successfully.
Feb 13 20:57:27.725418 systemd-logind[1517]: Session 85 logged out. Waiting for processes to exit.
Feb 13 20:57:27.725608 systemd[1]: session-85.scope: Deactivated successfully.
Feb 13 20:57:27.726697 systemd-logind[1517]: Removed session 85.
Feb 13 20:57:31.874189 kubelet[2663]: E0213 20:57:31.874143 2663 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 13 20:57:31.874888 kubelet[2663]: E0213 20:57:31.874732 2663 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"install-cni-plugin\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/flannel/flannel-cni-plugin:v1.1.2\\\"\"" pod="kube-flannel/kube-flannel-ds-6mvg6" podUID="cd690012-36a5-4d95-b540-563eafe34300"
Feb 13 20:57:32.016004 kubelet[2663]: E0213 20:57:32.015971 2663 kubelet.go:2900] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Feb 13 20:57:32.731701 systemd[1]: Started sshd@85-10.0.0.7:22-10.0.0.1:44308.service - OpenSSH per-connection server daemon (10.0.0.1:44308).
Feb 13 20:57:32.763222 sshd[4309]: Accepted publickey for core from 10.0.0.1 port 44308 ssh2: RSA SHA256:cMrn/QNbOj3/muQN24sDQmRTV2Hbz1QD+PbuA/uipHg
Feb 13 20:57:32.764391 sshd[4309]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 20:57:32.767792 systemd-logind[1517]: New session 86 of user core.
Feb 13 20:57:32.776716 systemd[1]: Started session-86.scope - Session 86 of User core.
Feb 13 20:57:32.879379 sshd[4309]: pam_unix(sshd:session): session closed for user core
Feb 13 20:57:32.881643 systemd[1]: sshd@85-10.0.0.7:22-10.0.0.1:44308.service: Deactivated successfully.
Feb 13 20:57:32.884307 systemd[1]: session-86.scope: Deactivated successfully.
Feb 13 20:57:32.884676 systemd-logind[1517]: Session 86 logged out. Waiting for processes to exit.
Feb 13 20:57:32.885684 systemd-logind[1517]: Removed session 86.
Feb 13 20:57:37.016686 kubelet[2663]: E0213 20:57:37.016584 2663 kubelet.go:2900] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Feb 13 20:57:37.888701 systemd[1]: Started sshd@86-10.0.0.7:22-10.0.0.1:44322.service - OpenSSH per-connection server daemon (10.0.0.1:44322).
Feb 13 20:57:37.920843 sshd[4325]: Accepted publickey for core from 10.0.0.1 port 44322 ssh2: RSA SHA256:cMrn/QNbOj3/muQN24sDQmRTV2Hbz1QD+PbuA/uipHg
Feb 13 20:57:37.922071 sshd[4325]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 20:57:37.925457 systemd-logind[1517]: New session 87 of user core.
Feb 13 20:57:37.939707 systemd[1]: Started session-87.scope - Session 87 of User core.
Feb 13 20:57:38.039983 sshd[4325]: pam_unix(sshd:session): session closed for user core
Feb 13 20:57:38.043197 systemd[1]: sshd@86-10.0.0.7:22-10.0.0.1:44322.service: Deactivated successfully.
Feb 13 20:57:38.045879 systemd[1]: session-87.scope: Deactivated successfully.
Feb 13 20:57:38.046847 systemd-logind[1517]: Session 87 logged out. Waiting for processes to exit.
Feb 13 20:57:38.047888 systemd-logind[1517]: Removed session 87.
Feb 13 20:57:42.017841 kubelet[2663]: E0213 20:57:42.017802 2663 kubelet.go:2900] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Feb 13 20:57:43.055811 systemd[1]: Started sshd@87-10.0.0.7:22-10.0.0.1:53962.service - OpenSSH per-connection server daemon (10.0.0.1:53962).
Feb 13 20:57:43.088016 sshd[4343]: Accepted publickey for core from 10.0.0.1 port 53962 ssh2: RSA SHA256:cMrn/QNbOj3/muQN24sDQmRTV2Hbz1QD+PbuA/uipHg
Feb 13 20:57:43.089158 sshd[4343]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 20:57:43.092955 systemd-logind[1517]: New session 88 of user core.
Feb 13 20:57:43.099745 systemd[1]: Started session-88.scope - Session 88 of User core.
Feb 13 20:57:43.200706 sshd[4343]: pam_unix(sshd:session): session closed for user core
Feb 13 20:57:43.203799 systemd[1]: sshd@87-10.0.0.7:22-10.0.0.1:53962.service: Deactivated successfully.
Feb 13 20:57:43.205691 systemd[1]: session-88.scope: Deactivated successfully.
Feb 13 20:57:43.205700 systemd-logind[1517]: Session 88 logged out. Waiting for processes to exit.
Feb 13 20:57:43.207111 systemd-logind[1517]: Removed session 88.
Feb 13 20:57:46.874483 kubelet[2663]: E0213 20:57:46.874451 2663 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 13 20:57:46.875220 kubelet[2663]: E0213 20:57:46.875196 2663 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"install-cni-plugin\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/flannel/flannel-cni-plugin:v1.1.2\\\"\"" pod="kube-flannel/kube-flannel-ds-6mvg6" podUID="cd690012-36a5-4d95-b540-563eafe34300"
Feb 13 20:57:47.019018 kubelet[2663]: E0213 20:57:47.018985 2663 kubelet.go:2900] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Feb 13 20:57:48.210700 systemd[1]: Started sshd@88-10.0.0.7:22-10.0.0.1:53972.service - OpenSSH per-connection server daemon (10.0.0.1:53972).
Feb 13 20:57:48.242742 sshd[4359]: Accepted publickey for core from 10.0.0.1 port 53972 ssh2: RSA SHA256:cMrn/QNbOj3/muQN24sDQmRTV2Hbz1QD+PbuA/uipHg
Feb 13 20:57:48.243854 sshd[4359]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 20:57:48.247559 systemd-logind[1517]: New session 89 of user core.
Feb 13 20:57:48.254717 systemd[1]: Started session-89.scope - Session 89 of User core.
Feb 13 20:57:48.359749 sshd[4359]: pam_unix(sshd:session): session closed for user core
Feb 13 20:57:48.362735 systemd[1]: sshd@88-10.0.0.7:22-10.0.0.1:53972.service: Deactivated successfully.
Feb 13 20:57:48.365027 systemd[1]: session-89.scope: Deactivated successfully.
Feb 13 20:57:48.365667 systemd-logind[1517]: Session 89 logged out. Waiting for processes to exit.
Feb 13 20:57:48.366397 systemd-logind[1517]: Removed session 89.
Feb 13 20:57:52.019987 kubelet[2663]: E0213 20:57:52.019932 2663 kubelet.go:2900] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Feb 13 20:57:53.366704 systemd[1]: Started sshd@89-10.0.0.7:22-10.0.0.1:57844.service - OpenSSH per-connection server daemon (10.0.0.1:57844).
Feb 13 20:57:53.399127 sshd[4375]: Accepted publickey for core from 10.0.0.1 port 57844 ssh2: RSA SHA256:cMrn/QNbOj3/muQN24sDQmRTV2Hbz1QD+PbuA/uipHg
Feb 13 20:57:53.400232 sshd[4375]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 20:57:53.404051 systemd-logind[1517]: New session 90 of user core.
Feb 13 20:57:53.409709 systemd[1]: Started session-90.scope - Session 90 of User core.
Feb 13 20:57:53.513851 sshd[4375]: pam_unix(sshd:session): session closed for user core
Feb 13 20:57:53.516906 systemd[1]: sshd@89-10.0.0.7:22-10.0.0.1:57844.service: Deactivated successfully.
Feb 13 20:57:53.518850 systemd-logind[1517]: Session 90 logged out. Waiting for processes to exit.
Feb 13 20:57:53.518875 systemd[1]: session-90.scope: Deactivated successfully.
Feb 13 20:57:53.520143 systemd-logind[1517]: Removed session 90.
Feb 13 20:57:57.020915 kubelet[2663]: E0213 20:57:57.020874 2663 kubelet.go:2900] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Feb 13 20:57:57.874517 kubelet[2663]: E0213 20:57:57.874436 2663 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 13 20:57:57.875227 kubelet[2663]: E0213 20:57:57.875112 2663 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"install-cni-plugin\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/flannel/flannel-cni-plugin:v1.1.2\\\"\"" pod="kube-flannel/kube-flannel-ds-6mvg6" podUID="cd690012-36a5-4d95-b540-563eafe34300"
Feb 13 20:57:58.527701 systemd[1]: Started sshd@90-10.0.0.7:22-10.0.0.1:57858.service - OpenSSH per-connection server daemon (10.0.0.1:57858).
Feb 13 20:57:58.559990 sshd[4392]: Accepted publickey for core from 10.0.0.1 port 57858 ssh2: RSA SHA256:cMrn/QNbOj3/muQN24sDQmRTV2Hbz1QD+PbuA/uipHg
Feb 13 20:57:58.561102 sshd[4392]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 20:57:58.564482 systemd-logind[1517]: New session 91 of user core.
Feb 13 20:57:58.570780 systemd[1]: Started session-91.scope - Session 91 of User core.
Feb 13 20:57:58.674300 sshd[4392]: pam_unix(sshd:session): session closed for user core
Feb 13 20:57:58.677422 systemd[1]: sshd@90-10.0.0.7:22-10.0.0.1:57858.service: Deactivated successfully.
Feb 13 20:57:58.679308 systemd-logind[1517]: Session 91 logged out. Waiting for processes to exit.
Feb 13 20:57:58.679316 systemd[1]: session-91.scope: Deactivated successfully.
Feb 13 20:57:58.680409 systemd-logind[1517]: Removed session 91.
Feb 13 20:57:58.874051 kubelet[2663]: E0213 20:57:58.873869 2663 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 13 20:58:02.022082 kubelet[2663]: E0213 20:58:02.022050 2663 kubelet.go:2900] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Feb 13 20:58:03.688743 systemd[1]: Started sshd@91-10.0.0.7:22-10.0.0.1:39330.service - OpenSSH per-connection server daemon (10.0.0.1:39330).
Feb 13 20:58:03.720775 sshd[4409]: Accepted publickey for core from 10.0.0.1 port 39330 ssh2: RSA SHA256:cMrn/QNbOj3/muQN24sDQmRTV2Hbz1QD+PbuA/uipHg
Feb 13 20:58:03.721924 sshd[4409]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 20:58:03.725444 systemd-logind[1517]: New session 92 of user core.
Feb 13 20:58:03.730724 systemd[1]: Started session-92.scope - Session 92 of User core.
Feb 13 20:58:03.834826 sshd[4409]: pam_unix(sshd:session): session closed for user core
Feb 13 20:58:03.837206 systemd[1]: sshd@91-10.0.0.7:22-10.0.0.1:39330.service: Deactivated successfully.
Feb 13 20:58:03.840096 systemd-logind[1517]: Session 92 logged out. Waiting for processes to exit.
Feb 13 20:58:03.840170 systemd[1]: session-92.scope: Deactivated successfully.
Feb 13 20:58:03.841358 systemd-logind[1517]: Removed session 92.
Feb 13 20:58:07.023616 kubelet[2663]: E0213 20:58:07.023583 2663 kubelet.go:2900] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Feb 13 20:58:08.856728 systemd[1]: Started sshd@92-10.0.0.7:22-10.0.0.1:39332.service - OpenSSH per-connection server daemon (10.0.0.1:39332).
Feb 13 20:58:08.889476 sshd[4425]: Accepted publickey for core from 10.0.0.1 port 39332 ssh2: RSA SHA256:cMrn/QNbOj3/muQN24sDQmRTV2Hbz1QD+PbuA/uipHg
Feb 13 20:58:08.890788 sshd[4425]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 20:58:08.894568 systemd-logind[1517]: New session 93 of user core.
Feb 13 20:58:08.906697 systemd[1]: Started session-93.scope - Session 93 of User core.
Feb 13 20:58:09.010657 sshd[4425]: pam_unix(sshd:session): session closed for user core
Feb 13 20:58:09.013736 systemd[1]: sshd@92-10.0.0.7:22-10.0.0.1:39332.service: Deactivated successfully.
Feb 13 20:58:09.015559 systemd-logind[1517]: Session 93 logged out. Waiting for processes to exit.
Feb 13 20:58:09.015634 systemd[1]: session-93.scope: Deactivated successfully.
Feb 13 20:58:09.016402 systemd-logind[1517]: Removed session 93.
Feb 13 20:58:10.873947 kubelet[2663]: E0213 20:58:10.873902 2663 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 13 20:58:10.874758 kubelet[2663]: E0213 20:58:10.874727 2663 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"install-cni-plugin\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/flannel/flannel-cni-plugin:v1.1.2\\\"\"" pod="kube-flannel/kube-flannel-ds-6mvg6" podUID="cd690012-36a5-4d95-b540-563eafe34300"
Feb 13 20:58:12.025033 kubelet[2663]: E0213 20:58:12.025001 2663 kubelet.go:2900] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Feb 13 20:58:14.025782 systemd[1]: Started sshd@93-10.0.0.7:22-10.0.0.1:51260.service - OpenSSH per-connection server daemon (10.0.0.1:51260).
Feb 13 20:58:14.057590 sshd[4440]: Accepted publickey for core from 10.0.0.1 port 51260 ssh2: RSA SHA256:cMrn/QNbOj3/muQN24sDQmRTV2Hbz1QD+PbuA/uipHg
Feb 13 20:58:14.058791 sshd[4440]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 20:58:14.062195 systemd-logind[1517]: New session 94 of user core.
Feb 13 20:58:14.075769 systemd[1]: Started session-94.scope - Session 94 of User core.
Feb 13 20:58:14.179731 sshd[4440]: pam_unix(sshd:session): session closed for user core
Feb 13 20:58:14.183221 systemd[1]: sshd@93-10.0.0.7:22-10.0.0.1:51260.service: Deactivated successfully.
Feb 13 20:58:14.184812 systemd-logind[1517]: Session 94 logged out. Waiting for processes to exit.
Feb 13 20:58:14.184890 systemd[1]: session-94.scope: Deactivated successfully.
Feb 13 20:58:14.185664 systemd-logind[1517]: Removed session 94.
Feb 13 20:58:15.874827 kubelet[2663]: E0213 20:58:15.874785 2663 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 13 20:58:17.026252 kubelet[2663]: E0213 20:58:17.026213 2663 kubelet.go:2900] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Feb 13 20:58:19.198704 systemd[1]: Started sshd@94-10.0.0.7:22-10.0.0.1:51262.service - OpenSSH per-connection server daemon (10.0.0.1:51262).
Feb 13 20:58:19.231045 sshd[4455]: Accepted publickey for core from 10.0.0.1 port 51262 ssh2: RSA SHA256:cMrn/QNbOj3/muQN24sDQmRTV2Hbz1QD+PbuA/uipHg
Feb 13 20:58:19.232326 sshd[4455]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 20:58:19.235700 systemd-logind[1517]: New session 95 of user core.
Feb 13 20:58:19.244766 systemd[1]: Started session-95.scope - Session 95 of User core.
Feb 13 20:58:19.350508 sshd[4455]: pam_unix(sshd:session): session closed for user core
Feb 13 20:58:19.353471 systemd[1]: sshd@94-10.0.0.7:22-10.0.0.1:51262.service: Deactivated successfully.
Feb 13 20:58:19.355336 systemd-logind[1517]: Session 95 logged out. Waiting for processes to exit.
Feb 13 20:58:19.355393 systemd[1]: session-95.scope: Deactivated successfully.
Feb 13 20:58:19.357134 systemd-logind[1517]: Removed session 95.
Feb 13 20:58:21.873762 kubelet[2663]: E0213 20:58:21.873724 2663 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 13 20:58:22.027079 kubelet[2663]: E0213 20:58:22.027031 2663 kubelet.go:2900] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Feb 13 20:58:22.874276 kubelet[2663]: E0213 20:58:22.874240 2663 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 13 20:58:22.874970 kubelet[2663]: E0213 20:58:22.874947 2663 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"install-cni-plugin\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/flannel/flannel-cni-plugin:v1.1.2\\\"\"" pod="kube-flannel/kube-flannel-ds-6mvg6" podUID="cd690012-36a5-4d95-b540-563eafe34300"
Feb 13 20:58:24.362819 systemd[1]: Started sshd@95-10.0.0.7:22-10.0.0.1:47652.service - OpenSSH per-connection server daemon (10.0.0.1:47652).
Feb 13 20:58:24.395075 sshd[4471]: Accepted publickey for core from 10.0.0.1 port 47652 ssh2: RSA SHA256:cMrn/QNbOj3/muQN24sDQmRTV2Hbz1QD+PbuA/uipHg
Feb 13 20:58:24.396211 sshd[4471]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 20:58:24.400098 systemd-logind[1517]: New session 96 of user core.
Feb 13 20:58:24.407719 systemd[1]: Started session-96.scope - Session 96 of User core.
Feb 13 20:58:24.512375 sshd[4471]: pam_unix(sshd:session): session closed for user core
Feb 13 20:58:24.515371 systemd[1]: sshd@95-10.0.0.7:22-10.0.0.1:47652.service: Deactivated successfully.
Feb 13 20:58:24.517264 systemd-logind[1517]: Session 96 logged out. Waiting for processes to exit.
Feb 13 20:58:24.517687 systemd[1]: session-96.scope: Deactivated successfully.
Feb 13 20:58:24.518444 systemd-logind[1517]: Removed session 96.
Feb 13 20:58:27.027797 kubelet[2663]: E0213 20:58:27.027750 2663 kubelet.go:2900] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Feb 13 20:58:29.523704 systemd[1]: Started sshd@96-10.0.0.7:22-10.0.0.1:47656.service - OpenSSH per-connection server daemon (10.0.0.1:47656).
Feb 13 20:58:29.555965 sshd[4489]: Accepted publickey for core from 10.0.0.1 port 47656 ssh2: RSA SHA256:cMrn/QNbOj3/muQN24sDQmRTV2Hbz1QD+PbuA/uipHg
Feb 13 20:58:29.557075 sshd[4489]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 20:58:29.561172 systemd-logind[1517]: New session 97 of user core.
Feb 13 20:58:29.572729 systemd[1]: Started session-97.scope - Session 97 of User core.
Feb 13 20:58:29.694784 sshd[4489]: pam_unix(sshd:session): session closed for user core
Feb 13 20:58:29.697944 systemd-logind[1517]: Session 97 logged out. Waiting for processes to exit.
Feb 13 20:58:29.698969 systemd[1]: sshd@96-10.0.0.7:22-10.0.0.1:47656.service: Deactivated successfully.
Feb 13 20:58:29.701705 systemd[1]: session-97.scope: Deactivated successfully.
Feb 13 20:58:29.702944 systemd-logind[1517]: Removed session 97.
Feb 13 20:58:32.028884 kubelet[2663]: E0213 20:58:32.028821 2663 kubelet.go:2900] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Feb 13 20:58:34.710829 systemd[1]: Started sshd@97-10.0.0.7:22-10.0.0.1:55222.service - OpenSSH per-connection server daemon (10.0.0.1:55222).
Feb 13 20:58:34.742988 sshd[4505]: Accepted publickey for core from 10.0.0.1 port 55222 ssh2: RSA SHA256:cMrn/QNbOj3/muQN24sDQmRTV2Hbz1QD+PbuA/uipHg
Feb 13 20:58:34.744113 sshd[4505]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 20:58:34.747909 systemd-logind[1517]: New session 98 of user core.
Feb 13 20:58:34.757710 systemd[1]: Started session-98.scope - Session 98 of User core.
Feb 13 20:58:34.862416 sshd[4505]: pam_unix(sshd:session): session closed for user core
Feb 13 20:58:34.864722 systemd[1]: sshd@97-10.0.0.7:22-10.0.0.1:55222.service: Deactivated successfully.
Feb 13 20:58:34.867044 systemd-logind[1517]: Session 98 logged out. Waiting for processes to exit.
Feb 13 20:58:34.867207 systemd[1]: session-98.scope: Deactivated successfully.
Feb 13 20:58:34.868169 systemd-logind[1517]: Removed session 98.
Feb 13 20:58:37.029806 kubelet[2663]: E0213 20:58:37.029772 2663 kubelet.go:2900] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Feb 13 20:58:37.873883 kubelet[2663]: E0213 20:58:37.873846 2663 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 13 20:58:37.874454 kubelet[2663]: E0213 20:58:37.874425 2663 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"install-cni-plugin\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/flannel/flannel-cni-plugin:v1.1.2\\\"\"" pod="kube-flannel/kube-flannel-ds-6mvg6" podUID="cd690012-36a5-4d95-b540-563eafe34300"
Feb 13 20:58:39.876721 systemd[1]: Started sshd@98-10.0.0.7:22-10.0.0.1:55236.service - OpenSSH per-connection server daemon (10.0.0.1:55236).
Feb 13 20:58:39.908563 sshd[4521]: Accepted publickey for core from 10.0.0.1 port 55236 ssh2: RSA SHA256:cMrn/QNbOj3/muQN24sDQmRTV2Hbz1QD+PbuA/uipHg
Feb 13 20:58:39.909677 sshd[4521]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 20:58:39.913416 systemd-logind[1517]: New session 99 of user core.
Feb 13 20:58:39.918718 systemd[1]: Started session-99.scope - Session 99 of User core.
Feb 13 20:58:40.024535 sshd[4521]: pam_unix(sshd:session): session closed for user core
Feb 13 20:58:40.026973 systemd[1]: sshd@98-10.0.0.7:22-10.0.0.1:55236.service: Deactivated successfully.
Feb 13 20:58:40.029344 systemd-logind[1517]: Session 99 logged out. Waiting for processes to exit.
Feb 13 20:58:40.029537 systemd[1]: session-99.scope: Deactivated successfully.
Feb 13 20:58:40.030677 systemd-logind[1517]: Removed session 99.
Feb 13 20:58:42.030881 kubelet[2663]: E0213 20:58:42.030829 2663 kubelet.go:2900] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Feb 13 20:58:45.038927 systemd[1]: Started sshd@99-10.0.0.7:22-10.0.0.1:42438.service - OpenSSH per-connection server daemon (10.0.0.1:42438).
Feb 13 20:58:45.070460 sshd[4539]: Accepted publickey for core from 10.0.0.1 port 42438 ssh2: RSA SHA256:cMrn/QNbOj3/muQN24sDQmRTV2Hbz1QD+PbuA/uipHg
Feb 13 20:58:45.071638 sshd[4539]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 20:58:45.074844 systemd-logind[1517]: New session 100 of user core.
Feb 13 20:58:45.085700 systemd[1]: Started session-100.scope - Session 100 of User core.
Feb 13 20:58:45.190076 sshd[4539]: pam_unix(sshd:session): session closed for user core
Feb 13 20:58:45.192462 systemd[1]: sshd@99-10.0.0.7:22-10.0.0.1:42438.service: Deactivated successfully.
Feb 13 20:58:45.194902 systemd-logind[1517]: Session 100 logged out. Waiting for processes to exit.
Feb 13 20:58:45.195009 systemd[1]: session-100.scope: Deactivated successfully.
Feb 13 20:58:45.196781 systemd-logind[1517]: Removed session 100.
Feb 13 20:58:45.873828 kubelet[2663]: E0213 20:58:45.873794 2663 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 13 20:58:47.031613 kubelet[2663]: E0213 20:58:47.031576 2663 kubelet.go:2900] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Feb 13 20:58:48.874067 kubelet[2663]: E0213 20:58:48.874026 2663 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 13 20:58:48.874786 kubelet[2663]: E0213 20:58:48.874755 2663 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"install-cni-plugin\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/flannel/flannel-cni-plugin:v1.1.2\\\"\"" pod="kube-flannel/kube-flannel-ds-6mvg6" podUID="cd690012-36a5-4d95-b540-563eafe34300"
Feb 13 20:58:50.199834 systemd[1]: Started sshd@100-10.0.0.7:22-10.0.0.1:42442.service - OpenSSH per-connection server daemon (10.0.0.1:42442).
Feb 13 20:58:50.231679 sshd[4554]: Accepted publickey for core from 10.0.0.1 port 42442 ssh2: RSA SHA256:cMrn/QNbOj3/muQN24sDQmRTV2Hbz1QD+PbuA/uipHg
Feb 13 20:58:50.232786 sshd[4554]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 20:58:50.236669 systemd-logind[1517]: New session 101 of user core.
Feb 13 20:58:50.253721 systemd[1]: Started session-101.scope - Session 101 of User core.
Feb 13 20:58:50.358167 sshd[4554]: pam_unix(sshd:session): session closed for user core
Feb 13 20:58:50.361540 systemd[1]: sshd@100-10.0.0.7:22-10.0.0.1:42442.service: Deactivated successfully.
Feb 13 20:58:50.363596 systemd-logind[1517]: Session 101 logged out. Waiting for processes to exit.
Feb 13 20:58:50.363784 systemd[1]: session-101.scope: Deactivated successfully.
Feb 13 20:58:50.365031 systemd-logind[1517]: Removed session 101.
Feb 13 20:58:52.032801 kubelet[2663]: E0213 20:58:52.032670 2663 kubelet.go:2900] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Feb 13 20:58:55.368699 systemd[1]: Started sshd@101-10.0.0.7:22-10.0.0.1:46356.service - OpenSSH per-connection server daemon (10.0.0.1:46356).
Feb 13 20:58:55.400851 sshd[4569]: Accepted publickey for core from 10.0.0.1 port 46356 ssh2: RSA SHA256:cMrn/QNbOj3/muQN24sDQmRTV2Hbz1QD+PbuA/uipHg
Feb 13 20:58:55.401959 sshd[4569]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 20:58:55.405468 systemd-logind[1517]: New session 102 of user core.
Feb 13 20:58:55.413760 systemd[1]: Started session-102.scope - Session 102 of User core.
Feb 13 20:58:55.518707 sshd[4569]: pam_unix(sshd:session): session closed for user core
Feb 13 20:58:55.521594 systemd-logind[1517]: Session 102 logged out. Waiting for processes to exit.
Feb 13 20:58:55.521620 systemd[1]: sshd@101-10.0.0.7:22-10.0.0.1:46356.service: Deactivated successfully.
Feb 13 20:58:55.524158 systemd[1]: session-102.scope: Deactivated successfully.
Feb 13 20:58:55.525200 systemd-logind[1517]: Removed session 102.
Feb 13 20:58:57.034074 kubelet[2663]: E0213 20:58:57.034041 2663 kubelet.go:2900] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Feb 13 20:59:00.529827 systemd[1]: Started sshd@102-10.0.0.7:22-10.0.0.1:46358.service - OpenSSH per-connection server daemon (10.0.0.1:46358).
Feb 13 20:59:00.562000 sshd[4586]: Accepted publickey for core from 10.0.0.1 port 46358 ssh2: RSA SHA256:cMrn/QNbOj3/muQN24sDQmRTV2Hbz1QD+PbuA/uipHg
Feb 13 20:59:00.563092 sshd[4586]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 20:59:00.566510 systemd-logind[1517]: New session 103 of user core.
Feb 13 20:59:00.581690 systemd[1]: Started session-103.scope - Session 103 of User core.
Feb 13 20:59:00.686678 sshd[4586]: pam_unix(sshd:session): session closed for user core
Feb 13 20:59:00.689836 systemd[1]: sshd@102-10.0.0.7:22-10.0.0.1:46358.service: Deactivated successfully.
Feb 13 20:59:00.691703 systemd-logind[1517]: Session 103 logged out. Waiting for processes to exit.
Feb 13 20:59:00.691778 systemd[1]: session-103.scope: Deactivated successfully.
Feb 13 20:59:00.693394 systemd-logind[1517]: Removed session 103.
Feb 13 20:59:02.035514 kubelet[2663]: E0213 20:59:02.035446 2663 kubelet.go:2900] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Feb 13 20:59:02.873920 kubelet[2663]: E0213 20:59:02.873883 2663 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 13 20:59:02.874595 kubelet[2663]: E0213 20:59:02.874552 2663 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"install-cni-plugin\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/flannel/flannel-cni-plugin:v1.1.2\\\"\"" pod="kube-flannel/kube-flannel-ds-6mvg6" podUID="cd690012-36a5-4d95-b540-563eafe34300"
Feb 13 20:59:05.700828 systemd[1]: Started sshd@103-10.0.0.7:22-10.0.0.1:38082.service - OpenSSH per-connection server daemon (10.0.0.1:38082).
Feb 13 20:59:05.732833 sshd[4601]: Accepted publickey for core from 10.0.0.1 port 38082 ssh2: RSA SHA256:cMrn/QNbOj3/muQN24sDQmRTV2Hbz1QD+PbuA/uipHg
Feb 13 20:59:05.734022 sshd[4601]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 20:59:05.737654 systemd-logind[1517]: New session 104 of user core.
Feb 13 20:59:05.749823 systemd[1]: Started session-104.scope - Session 104 of User core.
Feb 13 20:59:05.855271 sshd[4601]: pam_unix(sshd:session): session closed for user core
Feb 13 20:59:05.858048 systemd[1]: sshd@103-10.0.0.7:22-10.0.0.1:38082.service: Deactivated successfully.
Feb 13 20:59:05.860653 systemd-logind[1517]: Session 104 logged out. Waiting for processes to exit.
Feb 13 20:59:05.861393 systemd[1]: session-104.scope: Deactivated successfully.
Feb 13 20:59:05.862292 systemd-logind[1517]: Removed session 104.
Feb 13 20:59:07.036904 kubelet[2663]: E0213 20:59:07.036813 2663 kubelet.go:2900] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Feb 13 20:59:10.865704 systemd[1]: Started sshd@104-10.0.0.7:22-10.0.0.1:38094.service - OpenSSH per-connection server daemon (10.0.0.1:38094).
Feb 13 20:59:10.897582 sshd[4617]: Accepted publickey for core from 10.0.0.1 port 38094 ssh2: RSA SHA256:cMrn/QNbOj3/muQN24sDQmRTV2Hbz1QD+PbuA/uipHg
Feb 13 20:59:10.898717 sshd[4617]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 20:59:10.902071 systemd-logind[1517]: New session 105 of user core.
Feb 13 20:59:10.908689 systemd[1]: Started session-105.scope - Session 105 of User core.
Feb 13 20:59:11.014847 sshd[4617]: pam_unix(sshd:session): session closed for user core
Feb 13 20:59:11.017820 systemd[1]: sshd@104-10.0.0.7:22-10.0.0.1:38094.service: Deactivated successfully.
Feb 13 20:59:11.019734 systemd-logind[1517]: Session 105 logged out. Waiting for processes to exit.
Feb 13 20:59:11.019816 systemd[1]: session-105.scope: Deactivated successfully.
Feb 13 20:59:11.020852 systemd-logind[1517]: Removed session 105.
Feb 13 20:59:12.038158 kubelet[2663]: E0213 20:59:12.038109 2663 kubelet.go:2900] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Feb 13 20:59:13.875293 kubelet[2663]: E0213 20:59:13.875239 2663 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 13 20:59:15.874524 kubelet[2663]: E0213 20:59:15.874227 2663 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 13 20:59:15.875080 kubelet[2663]: E0213 20:59:15.874904 2663 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"install-cni-plugin\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/flannel/flannel-cni-plugin:v1.1.2\\\"\"" pod="kube-flannel/kube-flannel-ds-6mvg6" podUID="cd690012-36a5-4d95-b540-563eafe34300"
Feb 13 20:59:16.029714 systemd[1]: Started sshd@105-10.0.0.7:22-10.0.0.1:33370.service - OpenSSH per-connection server daemon (10.0.0.1:33370).
Feb 13 20:59:16.061630 sshd[4632]: Accepted publickey for core from 10.0.0.1 port 33370 ssh2: RSA SHA256:cMrn/QNbOj3/muQN24sDQmRTV2Hbz1QD+PbuA/uipHg
Feb 13 20:59:16.062809 sshd[4632]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 20:59:16.066729 systemd-logind[1517]: New session 106 of user core.
Feb 13 20:59:16.072761 systemd[1]: Started session-106.scope - Session 106 of User core.
Feb 13 20:59:16.176246 sshd[4632]: pam_unix(sshd:session): session closed for user core
Feb 13 20:59:16.180105 systemd[1]: sshd@105-10.0.0.7:22-10.0.0.1:33370.service: Deactivated successfully.
Feb 13 20:59:16.181905 systemd-logind[1517]: Session 106 logged out. Waiting for processes to exit.
Feb 13 20:59:16.181990 systemd[1]: session-106.scope: Deactivated successfully.
Feb 13 20:59:16.182808 systemd-logind[1517]: Removed session 106.
Feb 13 20:59:17.039259 kubelet[2663]: E0213 20:59:17.039210 2663 kubelet.go:2900] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Feb 13 20:59:21.191791 systemd[1]: Started sshd@106-10.0.0.7:22-10.0.0.1:33384.service - OpenSSH per-connection server daemon (10.0.0.1:33384).
Feb 13 20:59:21.223719 sshd[4648]: Accepted publickey for core from 10.0.0.1 port 33384 ssh2: RSA SHA256:cMrn/QNbOj3/muQN24sDQmRTV2Hbz1QD+PbuA/uipHg
Feb 13 20:59:21.224804 sshd[4648]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 20:59:21.228186 systemd-logind[1517]: New session 107 of user core.
Feb 13 20:59:21.239703 systemd[1]: Started session-107.scope - Session 107 of User core.
Feb 13 20:59:21.344681 sshd[4648]: pam_unix(sshd:session): session closed for user core
Feb 13 20:59:21.348236 systemd[1]: sshd@106-10.0.0.7:22-10.0.0.1:33384.service: Deactivated successfully.
Feb 13 20:59:21.349981 systemd-logind[1517]: Session 107 logged out. Waiting for processes to exit.
Feb 13 20:59:21.350043 systemd[1]: session-107.scope: Deactivated successfully.
Feb 13 20:59:21.351010 systemd-logind[1517]: Removed session 107.
Feb 13 20:59:22.040411 kubelet[2663]: E0213 20:59:22.040349 2663 kubelet.go:2900] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Feb 13 20:59:26.358735 systemd[1]: Started sshd@107-10.0.0.7:22-10.0.0.1:51422.service - OpenSSH per-connection server daemon (10.0.0.1:51422).
Feb 13 20:59:26.390835 sshd[4664]: Accepted publickey for core from 10.0.0.1 port 51422 ssh2: RSA SHA256:cMrn/QNbOj3/muQN24sDQmRTV2Hbz1QD+PbuA/uipHg
Feb 13 20:59:26.391959 sshd[4664]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 20:59:26.395295 systemd-logind[1517]: New session 108 of user core.
Feb 13 20:59:26.403787 systemd[1]: Started session-108.scope - Session 108 of User core.
Feb 13 20:59:26.507137 sshd[4664]: pam_unix(sshd:session): session closed for user core
Feb 13 20:59:26.509950 systemd[1]: sshd@107-10.0.0.7:22-10.0.0.1:51422.service: Deactivated successfully.
Feb 13 20:59:26.511862 systemd-logind[1517]: Session 108 logged out. Waiting for processes to exit.
Feb 13 20:59:26.511956 systemd[1]: session-108.scope: Deactivated successfully.
Feb 13 20:59:26.513119 systemd-logind[1517]: Removed session 108.
Feb 13 20:59:27.041114 kubelet[2663]: E0213 20:59:27.041062 2663 kubelet.go:2900] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Feb 13 20:59:30.873959 kubelet[2663]: E0213 20:59:30.873915 2663 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 13 20:59:30.874599 kubelet[2663]: E0213 20:59:30.874565 2663 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"install-cni-plugin\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/flannel/flannel-cni-plugin:v1.1.2\\\"\"" pod="kube-flannel/kube-flannel-ds-6mvg6" podUID="cd690012-36a5-4d95-b540-563eafe34300"
Feb 13 20:59:31.521697 systemd[1]: Started sshd@108-10.0.0.7:22-10.0.0.1:51426.service - OpenSSH per-connection server daemon (10.0.0.1:51426).
Feb 13 20:59:31.554134 sshd[4681]: Accepted publickey for core from 10.0.0.1 port 51426 ssh2: RSA SHA256:cMrn/QNbOj3/muQN24sDQmRTV2Hbz1QD+PbuA/uipHg
Feb 13 20:59:31.555321 sshd[4681]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 20:59:31.559120 systemd-logind[1517]: New session 109 of user core.
Feb 13 20:59:31.572713 systemd[1]: Started session-109.scope - Session 109 of User core.
Feb 13 20:59:31.676058 sshd[4681]: pam_unix(sshd:session): session closed for user core
Feb 13 20:59:31.678537 systemd[1]: sshd@108-10.0.0.7:22-10.0.0.1:51426.service: Deactivated successfully.
Feb 13 20:59:31.681096 systemd-logind[1517]: Session 109 logged out. Waiting for processes to exit.
Feb 13 20:59:31.681891 systemd[1]: session-109.scope: Deactivated successfully.
Feb 13 20:59:31.682895 systemd-logind[1517]: Removed session 109.
Feb 13 20:59:32.042352 kubelet[2663]: E0213 20:59:32.042311 2663 kubelet.go:2900] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Feb 13 20:59:36.690708 systemd[1]: Started sshd@109-10.0.0.7:22-10.0.0.1:42194.service - OpenSSH per-connection server daemon (10.0.0.1:42194).
Feb 13 20:59:36.723161 sshd[4697]: Accepted publickey for core from 10.0.0.1 port 42194 ssh2: RSA SHA256:cMrn/QNbOj3/muQN24sDQmRTV2Hbz1QD+PbuA/uipHg
Feb 13 20:59:36.724272 sshd[4697]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 20:59:36.727461 systemd-logind[1517]: New session 110 of user core.
Feb 13 20:59:36.735716 systemd[1]: Started session-110.scope - Session 110 of User core.
Feb 13 20:59:36.839383 sshd[4697]: pam_unix(sshd:session): session closed for user core
Feb 13 20:59:36.842704 systemd[1]: sshd@109-10.0.0.7:22-10.0.0.1:42194.service: Deactivated successfully.
Feb 13 20:59:36.844698 systemd[1]: session-110.scope: Deactivated successfully.
Feb 13 20:59:36.844704 systemd-logind[1517]: Session 110 logged out. Waiting for processes to exit.
Feb 13 20:59:36.846150 systemd-logind[1517]: Removed session 110.
Feb 13 20:59:37.043172 kubelet[2663]: E0213 20:59:37.043128 2663 kubelet.go:2900] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Feb 13 20:59:39.873873 kubelet[2663]: E0213 20:59:39.873830 2663 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 13 20:59:41.855724 systemd[1]: Started sshd@110-10.0.0.7:22-10.0.0.1:42196.service - OpenSSH per-connection server daemon (10.0.0.1:42196).
Feb 13 20:59:41.888149 sshd[4713]: Accepted publickey for core from 10.0.0.1 port 42196 ssh2: RSA SHA256:cMrn/QNbOj3/muQN24sDQmRTV2Hbz1QD+PbuA/uipHg
Feb 13 20:59:41.889353 sshd[4713]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 20:59:41.893126 systemd-logind[1517]: New session 111 of user core.
Feb 13 20:59:41.896726 systemd[1]: Started session-111.scope - Session 111 of User core.
Feb 13 20:59:42.002826 sshd[4713]: pam_unix(sshd:session): session closed for user core
Feb 13 20:59:42.005478 systemd-logind[1517]: Session 111 logged out. Waiting for processes to exit.
Feb 13 20:59:42.007013 systemd[1]: sshd@110-10.0.0.7:22-10.0.0.1:42196.service: Deactivated successfully.
Feb 13 20:59:42.008849 systemd[1]: session-111.scope: Deactivated successfully.
Feb 13 20:59:42.009939 systemd-logind[1517]: Removed session 111.
Feb 13 20:59:42.043953 kubelet[2663]: E0213 20:59:42.043915 2663 kubelet.go:2900] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Feb 13 20:59:42.873894 kubelet[2663]: E0213 20:59:42.873790 2663 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 13 20:59:42.874224 kubelet[2663]: E0213 20:59:42.874202 2663 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 13 20:59:42.874430 kubelet[2663]: E0213 20:59:42.874363 2663 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"install-cni-plugin\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/flannel/flannel-cni-plugin:v1.1.2\\\"\"" pod="kube-flannel/kube-flannel-ds-6mvg6" podUID="cd690012-36a5-4d95-b540-563eafe34300"
Feb 13 20:59:46.874398 kubelet[2663]: E0213 20:59:46.874354 2663 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 13 20:59:47.012715 systemd[1]: Started sshd@111-10.0.0.7:22-10.0.0.1:39164.service - OpenSSH per-connection server daemon (10.0.0.1:39164).
Feb 13 20:59:47.044955 sshd[4731]: Accepted publickey for core from 10.0.0.1 port 39164 ssh2: RSA SHA256:cMrn/QNbOj3/muQN24sDQmRTV2Hbz1QD+PbuA/uipHg
Feb 13 20:59:47.045584 kubelet[2663]: E0213 20:59:47.045546 2663 kubelet.go:2900] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Feb 13 20:59:47.046649 sshd[4731]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 20:59:47.051087 systemd-logind[1517]: New session 112 of user core.
Feb 13 20:59:47.064721 systemd[1]: Started session-112.scope - Session 112 of User core.
Feb 13 20:59:47.168501 sshd[4731]: pam_unix(sshd:session): session closed for user core
Feb 13 20:59:47.170969 systemd[1]: sshd@111-10.0.0.7:22-10.0.0.1:39164.service: Deactivated successfully.
Feb 13 20:59:47.173813 systemd[1]: session-112.scope: Deactivated successfully.
Feb 13 20:59:47.174174 systemd-logind[1517]: Session 112 logged out. Waiting for processes to exit.
Feb 13 20:59:47.175274 systemd-logind[1517]: Removed session 112.
Feb 13 20:59:52.046110 kubelet[2663]: E0213 20:59:52.046069 2663 kubelet.go:2900] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Feb 13 20:59:52.183741 systemd[1]: Started sshd@112-10.0.0.7:22-10.0.0.1:39166.service - OpenSSH per-connection server daemon (10.0.0.1:39166).
Feb 13 20:59:52.215663 sshd[4746]: Accepted publickey for core from 10.0.0.1 port 39166 ssh2: RSA SHA256:cMrn/QNbOj3/muQN24sDQmRTV2Hbz1QD+PbuA/uipHg
Feb 13 20:59:52.216886 sshd[4746]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 20:59:52.220282 systemd-logind[1517]: New session 113 of user core.
Feb 13 20:59:52.227788 systemd[1]: Started session-113.scope - Session 113 of User core.
Feb 13 20:59:52.331375 sshd[4746]: pam_unix(sshd:session): session closed for user core
Feb 13 20:59:52.334765 systemd[1]: sshd@112-10.0.0.7:22-10.0.0.1:39166.service: Deactivated successfully.
Feb 13 20:59:52.336639 systemd-logind[1517]: Session 113 logged out. Waiting for processes to exit.
Feb 13 20:59:52.336682 systemd[1]: session-113.scope: Deactivated successfully.
Feb 13 20:59:52.337688 systemd-logind[1517]: Removed session 113.
Feb 13 20:59:55.874550 kubelet[2663]: E0213 20:59:55.874421 2663 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 13 20:59:55.875470 kubelet[2663]: E0213 20:59:55.875360 2663 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"install-cni-plugin\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/flannel/flannel-cni-plugin:v1.1.2\\\"\"" pod="kube-flannel/kube-flannel-ds-6mvg6" podUID="cd690012-36a5-4d95-b540-563eafe34300"
Feb 13 20:59:57.047329 kubelet[2663]: E0213 20:59:57.047279 2663 kubelet.go:2900] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Feb 13 20:59:57.342699 systemd[1]: Started sshd@113-10.0.0.7:22-10.0.0.1:45326.service - OpenSSH per-connection server daemon (10.0.0.1:45326).
Feb 13 20:59:57.374670 sshd[4762]: Accepted publickey for core from 10.0.0.1 port 45326 ssh2: RSA SHA256:cMrn/QNbOj3/muQN24sDQmRTV2Hbz1QD+PbuA/uipHg
Feb 13 20:59:57.375791 sshd[4762]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 20:59:57.379091 systemd-logind[1517]: New session 114 of user core.
Feb 13 20:59:57.390722 systemd[1]: Started session-114.scope - Session 114 of User core.
Feb 13 20:59:57.494253 sshd[4762]: pam_unix(sshd:session): session closed for user core
Feb 13 20:59:57.497328 systemd[1]: sshd@113-10.0.0.7:22-10.0.0.1:45326.service: Deactivated successfully.
Feb 13 20:59:57.499212 systemd-logind[1517]: Session 114 logged out. Waiting for processes to exit.
Feb 13 20:59:57.499321 systemd[1]: session-114.scope: Deactivated successfully.
Feb 13 20:59:57.500137 systemd-logind[1517]: Removed session 114.
Feb 13 21:00:02.048661 kubelet[2663]: E0213 21:00:02.048613 2663 kubelet.go:2900] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Feb 13 21:00:02.511710 systemd[1]: Started sshd@114-10.0.0.7:22-10.0.0.1:34072.service - OpenSSH per-connection server daemon (10.0.0.1:34072).
Feb 13 21:00:02.543574 sshd[4780]: Accepted publickey for core from 10.0.0.1 port 34072 ssh2: RSA SHA256:cMrn/QNbOj3/muQN24sDQmRTV2Hbz1QD+PbuA/uipHg
Feb 13 21:00:02.544691 sshd[4780]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 21:00:02.548511 systemd-logind[1517]: New session 115 of user core.
Feb 13 21:00:02.559014 systemd[1]: Started session-115.scope - Session 115 of User core.
Feb 13 21:00:02.663858 sshd[4780]: pam_unix(sshd:session): session closed for user core
Feb 13 21:00:02.666861 systemd[1]: sshd@114-10.0.0.7:22-10.0.0.1:34072.service: Deactivated successfully.
Feb 13 21:00:02.668849 systemd-logind[1517]: Session 115 logged out. Waiting for processes to exit.
Feb 13 21:00:02.668888 systemd[1]: session-115.scope: Deactivated successfully.
Feb 13 21:00:02.670363 systemd-logind[1517]: Removed session 115.
Feb 13 21:00:07.049478 kubelet[2663]: E0213 21:00:07.049441 2663 kubelet.go:2900] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Feb 13 21:00:07.674702 systemd[1]: Started sshd@115-10.0.0.7:22-10.0.0.1:34088.service - OpenSSH per-connection server daemon (10.0.0.1:34088).
Feb 13 21:00:07.706702 sshd[4796]: Accepted publickey for core from 10.0.0.1 port 34088 ssh2: RSA SHA256:cMrn/QNbOj3/muQN24sDQmRTV2Hbz1QD+PbuA/uipHg
Feb 13 21:00:07.707846 sshd[4796]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 21:00:07.711444 systemd-logind[1517]: New session 116 of user core.
Feb 13 21:00:07.717795 systemd[1]: Started session-116.scope - Session 116 of User core.
Feb 13 21:00:07.822960 sshd[4796]: pam_unix(sshd:session): session closed for user core
Feb 13 21:00:07.826099 systemd[1]: sshd@115-10.0.0.7:22-10.0.0.1:34088.service: Deactivated successfully.
Feb 13 21:00:07.827985 systemd-logind[1517]: Session 116 logged out. Waiting for processes to exit.
Feb 13 21:00:07.828011 systemd[1]: session-116.scope: Deactivated successfully.
Feb 13 21:00:07.829187 systemd-logind[1517]: Removed session 116.
Feb 13 21:00:07.873699 kubelet[2663]: E0213 21:00:07.873671 2663 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 13 21:00:07.874416 kubelet[2663]: E0213 21:00:07.874296 2663 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"install-cni-plugin\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/flannel/flannel-cni-plugin:v1.1.2\\\"\"" pod="kube-flannel/kube-flannel-ds-6mvg6" podUID="cd690012-36a5-4d95-b540-563eafe34300"
Feb 13 21:00:12.050961 kubelet[2663]: E0213 21:00:12.050860 2663 kubelet.go:2900] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Feb 13 21:00:12.839730 systemd[1]: Started sshd@116-10.0.0.7:22-10.0.0.1:51460.service - OpenSSH per-connection server daemon (10.0.0.1:51460).
Feb 13 21:00:12.871905 sshd[4811]: Accepted publickey for core from 10.0.0.1 port 51460 ssh2: RSA SHA256:cMrn/QNbOj3/muQN24sDQmRTV2Hbz1QD+PbuA/uipHg
Feb 13 21:00:12.873026 sshd[4811]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 21:00:12.876546 systemd-logind[1517]: New session 117 of user core.
Feb 13 21:00:12.883729 systemd[1]: Started session-117.scope - Session 117 of User core.
Feb 13 21:00:12.987791 sshd[4811]: pam_unix(sshd:session): session closed for user core
Feb 13 21:00:12.990183 systemd[1]: sshd@116-10.0.0.7:22-10.0.0.1:51460.service: Deactivated successfully.
Feb 13 21:00:12.992634 systemd-logind[1517]: Session 117 logged out. Waiting for processes to exit.
Feb 13 21:00:12.993036 systemd[1]: session-117.scope: Deactivated successfully.
Feb 13 21:00:12.994406 systemd-logind[1517]: Removed session 117.