Feb 13 20:16:24.915380 kernel: Booting Linux on physical CPU 0x0000000000 [0x413fd0c1] Feb 13 20:16:24.915402 kernel: Linux version 6.6.74-flatcar (build@pony-truck.infra.kinvolk.io) (aarch64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p17) 13.3.1 20240614, GNU ld (Gentoo 2.42 p3) 2.42.0) #1 SMP PREEMPT Thu Feb 13 18:13:29 -00 2025 Feb 13 20:16:24.915412 kernel: KASLR enabled Feb 13 20:16:24.915418 kernel: efi: EFI v2.7 by EDK II Feb 13 20:16:24.915424 kernel: efi: SMBIOS 3.0=0xdced0000 MEMATTR=0xdba86018 ACPI 2.0=0xd9710018 RNG=0xd971e498 MEMRESERVE=0xd9b43d18 Feb 13 20:16:24.915430 kernel: random: crng init done Feb 13 20:16:24.915437 kernel: ACPI: Early table checksum verification disabled Feb 13 20:16:24.915444 kernel: ACPI: RSDP 0x00000000D9710018 000024 (v02 BOCHS ) Feb 13 20:16:24.915450 kernel: ACPI: XSDT 0x00000000D971FE98 000064 (v01 BOCHS BXPC 00000001 01000013) Feb 13 20:16:24.915458 kernel: ACPI: FACP 0x00000000D971FA98 000114 (v06 BOCHS BXPC 00000001 BXPC 00000001) Feb 13 20:16:24.915464 kernel: ACPI: DSDT 0x00000000D9717518 0014A2 (v02 BOCHS BXPC 00000001 BXPC 00000001) Feb 13 20:16:24.915471 kernel: ACPI: APIC 0x00000000D971FC18 0001A8 (v04 BOCHS BXPC 00000001 BXPC 00000001) Feb 13 20:16:24.915477 kernel: ACPI: PPTT 0x00000000D971D898 00009C (v02 BOCHS BXPC 00000001 BXPC 00000001) Feb 13 20:16:24.915483 kernel: ACPI: GTDT 0x00000000D971E818 000060 (v02 BOCHS BXPC 00000001 BXPC 00000001) Feb 13 20:16:24.915491 kernel: ACPI: MCFG 0x00000000D971E918 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001) Feb 13 20:16:24.915500 kernel: ACPI: SPCR 0x00000000D971FF98 000050 (v02 BOCHS BXPC 00000001 BXPC 00000001) Feb 13 20:16:24.915511 kernel: ACPI: DBG2 0x00000000D971E418 000057 (v00 BOCHS BXPC 00000001 BXPC 00000001) Feb 13 20:16:24.915519 kernel: ACPI: IORT 0x00000000D971E718 000080 (v03 BOCHS BXPC 00000001 BXPC 00000001) Feb 13 20:16:24.915526 kernel: ACPI: SPCR: console: pl011,mmio,0x9000000,9600 Feb 13 20:16:24.915532 kernel: NUMA: Failed to initialise from firmware Feb 13 20:16:24.915539 kernel: NUMA: Faking a node at [mem 0x0000000040000000-0x00000000dcffffff] Feb 13 20:16:24.915545 kernel: NUMA: NODE_DATA [mem 0xdc957800-0xdc95cfff] Feb 13 20:16:24.915552 kernel: Zone ranges: Feb 13 20:16:24.915559 kernel: DMA [mem 0x0000000040000000-0x00000000dcffffff] Feb 13 20:16:24.915565 kernel: DMA32 empty Feb 13 20:16:24.915573 kernel: Normal empty Feb 13 20:16:24.915580 kernel: Movable zone start for each node Feb 13 20:16:24.915586 kernel: Early memory node ranges Feb 13 20:16:24.915593 kernel: node 0: [mem 0x0000000040000000-0x00000000d976ffff] Feb 13 20:16:24.915599 kernel: node 0: [mem 0x00000000d9770000-0x00000000d9b3ffff] Feb 13 20:16:24.915606 kernel: node 0: [mem 0x00000000d9b40000-0x00000000dce1ffff] Feb 13 20:16:24.915612 kernel: node 0: [mem 0x00000000dce20000-0x00000000dceaffff] Feb 13 20:16:24.915619 kernel: node 0: [mem 0x00000000dceb0000-0x00000000dcebffff] Feb 13 20:16:24.915626 kernel: node 0: [mem 0x00000000dcec0000-0x00000000dcfdffff] Feb 13 20:16:24.915632 kernel: node 0: [mem 0x00000000dcfe0000-0x00000000dcffffff] Feb 13 20:16:24.915639 kernel: Initmem setup node 0 [mem 0x0000000040000000-0x00000000dcffffff] Feb 13 20:16:24.915645 kernel: On node 0, zone DMA: 12288 pages in unavailable ranges Feb 13 20:16:24.915653 kernel: psci: probing for conduit method from ACPI. Feb 13 20:16:24.915660 kernel: psci: PSCIv1.1 detected in firmware. 
Feb 13 20:16:24.915667 kernel: psci: Using standard PSCI v0.2 function IDs Feb 13 20:16:24.915676 kernel: psci: Trusted OS migration not required Feb 13 20:16:24.915683 kernel: psci: SMC Calling Convention v1.1 Feb 13 20:16:24.915690 kernel: smccc: KVM: hypervisor services detected (0x00000000 0x00000000 0x00000000 0x00000003) Feb 13 20:16:24.915699 kernel: percpu: Embedded 31 pages/cpu s86696 r8192 d32088 u126976 Feb 13 20:16:24.915706 kernel: pcpu-alloc: s86696 r8192 d32088 u126976 alloc=31*4096 Feb 13 20:16:24.915723 kernel: pcpu-alloc: [0] 0 [0] 1 [0] 2 [0] 3 Feb 13 20:16:24.915730 kernel: Detected PIPT I-cache on CPU0 Feb 13 20:16:24.915737 kernel: CPU features: detected: GIC system register CPU interface Feb 13 20:16:24.915744 kernel: CPU features: detected: Hardware dirty bit management Feb 13 20:16:24.915751 kernel: CPU features: detected: Spectre-v4 Feb 13 20:16:24.915758 kernel: CPU features: detected: Spectre-BHB Feb 13 20:16:24.915765 kernel: CPU features: kernel page table isolation forced ON by KASLR Feb 13 20:16:24.915772 kernel: CPU features: detected: Kernel page table isolation (KPTI) Feb 13 20:16:24.915781 kernel: CPU features: detected: ARM erratum 1418040 Feb 13 20:16:24.915788 kernel: CPU features: detected: SSBS not fully self-synchronizing Feb 13 20:16:24.915795 kernel: alternatives: applying boot alternatives Feb 13 20:16:24.915803 kernel: Kernel command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected acpi=force verity.usrhash=c15c751c06cfb933aa98417326b93d899c08a83ce060a940cd01082629c201a7 Feb 13 20:16:24.915810 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space. Feb 13 20:16:24.915817 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear) Feb 13 20:16:24.915824 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear) Feb 13 20:16:24.915831 kernel: Fallback order for Node 0: 0 Feb 13 20:16:24.915838 kernel: Built 1 zonelists, mobility grouping on. Total pages: 633024 Feb 13 20:16:24.915845 kernel: Policy zone: DMA Feb 13 20:16:24.915852 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off Feb 13 20:16:24.915860 kernel: software IO TLB: area num 4. Feb 13 20:16:24.915867 kernel: software IO TLB: mapped [mem 0x00000000d2e00000-0x00000000d6e00000] (64MB) Feb 13 20:16:24.915875 kernel: Memory: 2386528K/2572288K available (10240K kernel code, 2186K rwdata, 8096K rodata, 39360K init, 897K bss, 185760K reserved, 0K cma-reserved) Feb 13 20:16:24.915882 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1 Feb 13 20:16:24.915889 kernel: rcu: Preemptible hierarchical RCU implementation. Feb 13 20:16:24.915896 kernel: rcu: RCU event tracing is enabled. Feb 13 20:16:24.915903 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4. Feb 13 20:16:24.915910 kernel: Trampoline variant of Tasks RCU enabled. Feb 13 20:16:24.915917 kernel: Tracing variant of Tasks RCU enabled. Feb 13 20:16:24.915924 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. 
Feb 13 20:16:24.915931 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4 Feb 13 20:16:24.915938 kernel: NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0 Feb 13 20:16:24.915946 kernel: GICv3: 256 SPIs implemented Feb 13 20:16:24.915953 kernel: GICv3: 0 Extended SPIs implemented Feb 13 20:16:24.915960 kernel: Root IRQ handler: gic_handle_irq Feb 13 20:16:24.915967 kernel: GICv3: GICv3 features: 16 PPIs, DirectLPI Feb 13 20:16:24.915974 kernel: GICv3: CPU0: found redistributor 0 region 0:0x00000000080a0000 Feb 13 20:16:24.915981 kernel: ITS [mem 0x08080000-0x0809ffff] Feb 13 20:16:24.915988 kernel: ITS@0x0000000008080000: allocated 8192 Devices @400c0000 (indirect, esz 8, psz 64K, shr 1) Feb 13 20:16:24.915995 kernel: ITS@0x0000000008080000: allocated 8192 Interrupt Collections @400d0000 (flat, esz 8, psz 64K, shr 1) Feb 13 20:16:24.916002 kernel: GICv3: using LPI property table @0x00000000400f0000 Feb 13 20:16:24.916009 kernel: GICv3: CPU0: using allocated LPI pending table @0x0000000040100000 Feb 13 20:16:24.916016 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention. Feb 13 20:16:24.916024 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Feb 13 20:16:24.916031 kernel: arch_timer: cp15 timer(s) running at 25.00MHz (virt). Feb 13 20:16:24.916039 kernel: clocksource: arch_sys_counter: mask: 0xffffffffffffff max_cycles: 0x5c40939b5, max_idle_ns: 440795202646 ns Feb 13 20:16:24.916046 kernel: sched_clock: 56 bits at 25MHz, resolution 40ns, wraps every 4398046511100ns Feb 13 20:16:24.916053 kernel: arm-pv: using stolen time PV Feb 13 20:16:24.916060 kernel: Console: colour dummy device 80x25 Feb 13 20:16:24.916067 kernel: ACPI: Core revision 20230628 Feb 13 20:16:24.916074 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 50.00 BogoMIPS (lpj=25000) Feb 13 20:16:24.916082 kernel: pid_max: default: 32768 minimum: 301 Feb 13 20:16:24.916089 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity Feb 13 20:16:24.916097 kernel: landlock: Up and running. Feb 13 20:16:24.916104 kernel: SELinux: Initializing. Feb 13 20:16:24.916112 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Feb 13 20:16:24.916119 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Feb 13 20:16:24.916126 kernel: RCU Tasks: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4. Feb 13 20:16:24.916133 kernel: RCU Tasks Trace: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4. Feb 13 20:16:24.916141 kernel: rcu: Hierarchical SRCU implementation. Feb 13 20:16:24.916148 kernel: rcu: Max phase no-delay instances is 400. Feb 13 20:16:24.916155 kernel: Platform MSI: ITS@0x8080000 domain created Feb 13 20:16:24.916163 kernel: PCI/MSI: ITS@0x8080000 domain created Feb 13 20:16:24.916170 kernel: Remapping and enabling EFI services. Feb 13 20:16:24.916177 kernel: smp: Bringing up secondary CPUs ... 
Feb 13 20:16:24.916184 kernel: Detected PIPT I-cache on CPU1 Feb 13 20:16:24.916192 kernel: GICv3: CPU1: found redistributor 1 region 0:0x00000000080c0000 Feb 13 20:16:24.916199 kernel: GICv3: CPU1: using allocated LPI pending table @0x0000000040110000 Feb 13 20:16:24.916206 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Feb 13 20:16:24.916213 kernel: CPU1: Booted secondary processor 0x0000000001 [0x413fd0c1] Feb 13 20:16:24.916221 kernel: Detected PIPT I-cache on CPU2 Feb 13 20:16:24.916228 kernel: GICv3: CPU2: found redistributor 2 region 0:0x00000000080e0000 Feb 13 20:16:24.916237 kernel: GICv3: CPU2: using allocated LPI pending table @0x0000000040120000 Feb 13 20:16:24.916244 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Feb 13 20:16:24.916255 kernel: CPU2: Booted secondary processor 0x0000000002 [0x413fd0c1] Feb 13 20:16:24.916264 kernel: Detected PIPT I-cache on CPU3 Feb 13 20:16:24.916273 kernel: GICv3: CPU3: found redistributor 3 region 0:0x0000000008100000 Feb 13 20:16:24.916282 kernel: GICv3: CPU3: using allocated LPI pending table @0x0000000040130000 Feb 13 20:16:24.916289 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Feb 13 20:16:24.916297 kernel: CPU3: Booted secondary processor 0x0000000003 [0x413fd0c1] Feb 13 20:16:24.916304 kernel: smp: Brought up 1 node, 4 CPUs Feb 13 20:16:24.916318 kernel: SMP: Total of 4 processors activated. Feb 13 20:16:24.916327 kernel: CPU features: detected: 32-bit EL0 Support Feb 13 20:16:24.916334 kernel: CPU features: detected: Data cache clean to the PoU not required for I/D coherence Feb 13 20:16:24.916342 kernel: CPU features: detected: Common not Private translations Feb 13 20:16:24.916350 kernel: CPU features: detected: CRC32 instructions Feb 13 20:16:24.916357 kernel: CPU features: detected: Enhanced Virtualization Traps Feb 13 20:16:24.916365 kernel: CPU features: detected: RCpc load-acquire (LDAPR) Feb 13 20:16:24.916372 kernel: CPU features: detected: LSE atomic instructions Feb 13 20:16:24.916381 kernel: CPU features: detected: Privileged Access Never Feb 13 20:16:24.916389 kernel: CPU features: detected: RAS Extension Support Feb 13 20:16:24.916396 kernel: CPU features: detected: Speculative Store Bypassing Safe (SSBS) Feb 13 20:16:24.916404 kernel: CPU: All CPU(s) started at EL1 Feb 13 20:16:24.916411 kernel: alternatives: applying system-wide alternatives Feb 13 20:16:24.916418 kernel: devtmpfs: initialized Feb 13 20:16:24.916426 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns Feb 13 20:16:24.916434 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear) Feb 13 20:16:24.916441 kernel: pinctrl core: initialized pinctrl subsystem Feb 13 20:16:24.916451 kernel: SMBIOS 3.0.0 present. 
Feb 13 20:16:24.916458 kernel: DMI: QEMU KVM Virtual Machine, BIOS edk2-20230524-3.fc38 05/24/2023 Feb 13 20:16:24.916466 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family Feb 13 20:16:24.916473 kernel: DMA: preallocated 512 KiB GFP_KERNEL pool for atomic allocations Feb 13 20:16:24.916481 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations Feb 13 20:16:24.916489 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations Feb 13 20:16:24.916496 kernel: audit: initializing netlink subsys (disabled) Feb 13 20:16:24.916504 kernel: audit: type=2000 audit(0.024:1): state=initialized audit_enabled=0 res=1 Feb 13 20:16:24.916511 kernel: thermal_sys: Registered thermal governor 'step_wise' Feb 13 20:16:24.916520 kernel: cpuidle: using governor menu Feb 13 20:16:24.916528 kernel: hw-breakpoint: found 6 breakpoint and 4 watchpoint registers. Feb 13 20:16:24.916535 kernel: ASID allocator initialised with 32768 entries Feb 13 20:16:24.916543 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 Feb 13 20:16:24.916550 kernel: Serial: AMBA PL011 UART driver Feb 13 20:16:24.916557 kernel: Modules: 2G module region forced by RANDOMIZE_MODULE_REGION_FULL Feb 13 20:16:24.916565 kernel: Modules: 0 pages in range for non-PLT usage Feb 13 20:16:24.916572 kernel: Modules: 509040 pages in range for PLT usage Feb 13 20:16:24.916580 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages Feb 13 20:16:24.916589 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 1.00 GiB page Feb 13 20:16:24.916596 kernel: HugeTLB: registered 32.0 MiB page size, pre-allocated 0 pages Feb 13 20:16:24.916604 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 32.0 MiB page Feb 13 20:16:24.916611 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages Feb 13 20:16:24.916619 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 2.00 MiB page Feb 13 20:16:24.916626 kernel: HugeTLB: registered 64.0 KiB page size, pre-allocated 0 pages Feb 13 20:16:24.916634 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 64.0 KiB page Feb 13 20:16:24.916641 kernel: ACPI: Added _OSI(Module Device) Feb 13 20:16:24.916648 kernel: ACPI: Added _OSI(Processor Device) Feb 13 20:16:24.916657 kernel: ACPI: Added _OSI(3.0 _SCP Extensions) Feb 13 20:16:24.916665 kernel: ACPI: Added _OSI(Processor Aggregator Device) Feb 13 20:16:24.916672 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded Feb 13 20:16:24.916679 kernel: ACPI: Interpreter enabled Feb 13 20:16:24.916687 kernel: ACPI: Using GIC for interrupt routing Feb 13 20:16:24.916694 kernel: ACPI: MCFG table detected, 1 entries Feb 13 20:16:24.916702 kernel: ARMH0011:00: ttyAMA0 at MMIO 0x9000000 (irq = 12, base_baud = 0) is a SBSA Feb 13 20:16:24.916713 kernel: printk: console [ttyAMA0] enabled Feb 13 20:16:24.916721 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff]) Feb 13 20:16:24.916879 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3] Feb 13 20:16:24.916959 kernel: acpi PNP0A08:00: _OSC: platform does not support [LTR] Feb 13 20:16:24.917025 kernel: acpi PNP0A08:00: _OSC: OS now controls [PCIeHotplug PME AER PCIeCapability] Feb 13 20:16:24.917089 kernel: acpi PNP0A08:00: ECAM area [mem 0x4010000000-0x401fffffff] reserved by PNP0C02:00 Feb 13 20:16:24.917153 kernel: acpi PNP0A08:00: ECAM at [mem 0x4010000000-0x401fffffff] for [bus 00-ff] Feb 13 20:16:24.917163 kernel: ACPI: Remapped I/O 0x000000003eff0000 to [io 0x0000-0xffff window]
Feb 13 20:16:24.917171 kernel: PCI host bridge to bus 0000:00 Feb 13 20:16:24.917246 kernel: pci_bus 0000:00: root bus resource [mem 0x10000000-0x3efeffff window] Feb 13 20:16:24.917306 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0xffff window] Feb 13 20:16:24.917377 kernel: pci_bus 0000:00: root bus resource [mem 0x8000000000-0xffffffffff window] Feb 13 20:16:24.917437 kernel: pci_bus 0000:00: root bus resource [bus 00-ff] Feb 13 20:16:24.917517 kernel: pci 0000:00:00.0: [1b36:0008] type 00 class 0x060000 Feb 13 20:16:24.917597 kernel: pci 0000:00:01.0: [1af4:1005] type 00 class 0x00ff00 Feb 13 20:16:24.917680 kernel: pci 0000:00:01.0: reg 0x10: [io 0x0000-0x001f] Feb 13 20:16:24.917763 kernel: pci 0000:00:01.0: reg 0x14: [mem 0x10000000-0x10000fff] Feb 13 20:16:24.917832 kernel: pci 0000:00:01.0: reg 0x20: [mem 0x8000000000-0x8000003fff 64bit pref] Feb 13 20:16:24.917899 kernel: pci 0000:00:01.0: BAR 4: assigned [mem 0x8000000000-0x8000003fff 64bit pref] Feb 13 20:16:24.917981 kernel: pci 0000:00:01.0: BAR 1: assigned [mem 0x10000000-0x10000fff] Feb 13 20:16:24.918049 kernel: pci 0000:00:01.0: BAR 0: assigned [io 0x1000-0x101f] Feb 13 20:16:24.918109 kernel: pci_bus 0000:00: resource 4 [mem 0x10000000-0x3efeffff window] Feb 13 20:16:24.918170 kernel: pci_bus 0000:00: resource 5 [io 0x0000-0xffff window] Feb 13 20:16:24.918227 kernel: pci_bus 0000:00: resource 6 [mem 0x8000000000-0xffffffffff window] Feb 13 20:16:24.918237 kernel: ACPI: PCI: Interrupt link GSI0 configured for IRQ 35 Feb 13 20:16:24.918245 kernel: ACPI: PCI: Interrupt link GSI1 configured for IRQ 36 Feb 13 20:16:24.918252 kernel: ACPI: PCI: Interrupt link GSI2 configured for IRQ 37 Feb 13 20:16:24.918259 kernel: ACPI: PCI: Interrupt link GSI3 configured for IRQ 38 Feb 13 20:16:24.918267 kernel: iommu: Default domain type: Translated Feb 13 20:16:24.918274 kernel: iommu: DMA domain TLB invalidation policy: strict mode Feb 13 20:16:24.918281 kernel: efivars: Registered efivars operations Feb 13 20:16:24.918290 kernel: vgaarb: loaded Feb 13 20:16:24.918297 kernel: clocksource: Switched to clocksource arch_sys_counter Feb 13 20:16:24.918305 kernel: VFS: Disk quotas dquot_6.6.0 Feb 13 20:16:24.918319 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) Feb 13 20:16:24.918327 kernel: pnp: PnP ACPI init Feb 13 20:16:24.918401 kernel: system 00:00: [mem 0x4010000000-0x401fffffff window] could not be reserved Feb 13 20:16:24.918412 kernel: pnp: PnP ACPI: found 1 devices Feb 13 20:16:24.918420 kernel: NET: Registered PF_INET protocol family Feb 13 20:16:24.918429 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear) Feb 13 20:16:24.918437 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear) Feb 13 20:16:24.918444 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) Feb 13 20:16:24.918452 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear) Feb 13 20:16:24.918459 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear) Feb 13 20:16:24.918467 kernel: TCP: Hash tables configured (established 32768 bind 32768) Feb 13 20:16:24.918474 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear) Feb 13 20:16:24.918481 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear) Feb 13 20:16:24.918489 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family Feb 13 20:16:24.918498 kernel: PCI: CLS 0 bytes, default 64
Feb 13 20:16:24.918505 kernel: kvm [1]: HYP mode not available Feb 13 20:16:24.918512 kernel: Initialise system trusted keyrings Feb 13 20:16:24.918520 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0 Feb 13 20:16:24.918527 kernel: Key type asymmetric registered Feb 13 20:16:24.918534 kernel: Asymmetric key parser 'x509' registered Feb 13 20:16:24.918541 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250) Feb 13 20:16:24.918549 kernel: io scheduler mq-deadline registered Feb 13 20:16:24.918556 kernel: io scheduler kyber registered Feb 13 20:16:24.918564 kernel: io scheduler bfq registered Feb 13 20:16:24.918572 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0C:00/input/input0 Feb 13 20:16:24.918579 kernel: ACPI: button: Power Button [PWRB] Feb 13 20:16:24.918587 kernel: ACPI: \_SB_.PCI0.GSI1: Enabled at IRQ 36 Feb 13 20:16:24.918653 kernel: virtio-pci 0000:00:01.0: enabling device (0005 -> 0007) Feb 13 20:16:24.918663 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Feb 13 20:16:24.918675 kernel: thunder_xcv, ver 1.0 Feb 13 20:16:24.918682 kernel: thunder_bgx, ver 1.0 Feb 13 20:16:24.918690 kernel: nicpf, ver 1.0 Feb 13 20:16:24.918699 kernel: nicvf, ver 1.0 Feb 13 20:16:24.918866 kernel: rtc-efi rtc-efi.0: registered as rtc0 Feb 13 20:16:24.918947 kernel: rtc-efi rtc-efi.0: setting system clock to 2025-02-13T20:16:24 UTC (1739477784) Feb 13 20:16:24.918958 kernel: hid: raw HID events driver (C) Jiri Kosina Feb 13 20:16:24.918965 kernel: hw perfevents: enabled with armv8_pmuv3_0 PMU driver, 7 counters available Feb 13 20:16:24.918973 kernel: watchdog: Delayed init of the lockup detector failed: -19 Feb 13 20:16:24.918980 kernel: watchdog: Hard watchdog permanently disabled Feb 13 20:16:24.918988 kernel: NET: Registered PF_INET6 protocol family Feb 13 20:16:24.918999 kernel: Segment Routing with IPv6 Feb 13 20:16:24.919006 kernel: In-situ OAM (IOAM) with IPv6 Feb 13 20:16:24.919013 kernel: NET: Registered PF_PACKET protocol family Feb 13 20:16:24.919020 kernel: Key type dns_resolver registered Feb 13 20:16:24.919027 kernel: registered taskstats version 1 Feb 13 20:16:24.919035 kernel: Loading compiled-in X.509 certificates Feb 13 20:16:24.919042 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.74-flatcar: 8bd805622262697b24b0fa7c407ae82c4289ceec' Feb 13 20:16:24.919049 kernel: Key type .fscrypt registered Feb 13 20:16:24.919056 kernel: Key type fscrypt-provisioning registered Feb 13 20:16:24.919066 kernel: ima: No TPM chip found, activating TPM-bypass!
Feb 13 20:16:24.919073 kernel: ima: Allocated hash algorithm: sha1 Feb 13 20:16:24.919080 kernel: ima: No architecture policies found Feb 13 20:16:24.919087 kernel: alg: No test for fips(ansi_cprng) (fips_ansi_cprng) Feb 13 20:16:24.919094 kernel: clk: Disabling unused clocks Feb 13 20:16:24.919102 kernel: Freeing unused kernel memory: 39360K Feb 13 20:16:24.919109 kernel: Run /init as init process Feb 13 20:16:24.919116 kernel: with arguments: Feb 13 20:16:24.919123 kernel: /init Feb 13 20:16:24.919132 kernel: with environment: Feb 13 20:16:24.919139 kernel: HOME=/ Feb 13 20:16:24.919146 kernel: TERM=linux Feb 13 20:16:24.919153 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a Feb 13 20:16:24.919162 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Feb 13 20:16:24.919186 systemd[1]: Detected virtualization kvm. Feb 13 20:16:24.919194 systemd[1]: Detected architecture arm64. Feb 13 20:16:24.919201 systemd[1]: Running in initrd. Feb 13 20:16:24.919210 systemd[1]: No hostname configured, using default hostname. Feb 13 20:16:24.919218 systemd[1]: Hostname set to . Feb 13 20:16:24.919226 systemd[1]: Initializing machine ID from VM UUID. Feb 13 20:16:24.919234 systemd[1]: Queued start job for default target initrd.target. Feb 13 20:16:24.919242 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Feb 13 20:16:24.919250 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Feb 13 20:16:24.919258 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... Feb 13 20:16:24.919266 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Feb 13 20:16:24.919275 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... Feb 13 20:16:24.919283 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... Feb 13 20:16:24.919292 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132... Feb 13 20:16:24.919301 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr... Feb 13 20:16:24.919309 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Feb 13 20:16:24.919325 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Feb 13 20:16:24.919336 systemd[1]: Reached target paths.target - Path Units. Feb 13 20:16:24.919343 systemd[1]: Reached target slices.target - Slice Units. Feb 13 20:16:24.919351 systemd[1]: Reached target swap.target - Swaps. Feb 13 20:16:24.919359 systemd[1]: Reached target timers.target - Timer Units. Feb 13 20:16:24.919367 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. Feb 13 20:16:24.919374 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. Feb 13 20:16:24.919382 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Feb 13 20:16:24.919390 systemd[1]: Listening on systemd-journald.socket - Journal Socket. Feb 13 20:16:24.919398 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. 
Feb 13 20:16:24.919407 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Feb 13 20:16:24.919415 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Feb 13 20:16:24.919423 systemd[1]: Reached target sockets.target - Socket Units. Feb 13 20:16:24.919431 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... Feb 13 20:16:24.919438 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Feb 13 20:16:24.919446 systemd[1]: Finished network-cleanup.service - Network Cleanup. Feb 13 20:16:24.919454 systemd[1]: Starting systemd-fsck-usr.service... Feb 13 20:16:24.919461 systemd[1]: Starting systemd-journald.service - Journal Service... Feb 13 20:16:24.919469 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Feb 13 20:16:24.919478 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Feb 13 20:16:24.919486 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. Feb 13 20:16:24.919494 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Feb 13 20:16:24.919502 systemd[1]: Finished systemd-fsck-usr.service. Feb 13 20:16:24.919529 systemd-journald[239]: Collecting audit messages is disabled. Feb 13 20:16:24.919549 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Feb 13 20:16:24.919558 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Feb 13 20:16:24.919566 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Feb 13 20:16:24.919575 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Feb 13 20:16:24.919584 systemd-journald[239]: Journal started Feb 13 20:16:24.919602 systemd-journald[239]: Runtime Journal (/run/log/journal/7b203b0716ec4aae8568b9502adc2695) is 5.9M, max 47.3M, 41.4M free. Feb 13 20:16:24.901648 systemd-modules-load[240]: Inserted module 'overlay' Feb 13 20:16:24.921921 kernel: Bridge firewalling registered Feb 13 20:16:24.921939 systemd[1]: Started systemd-journald.service - Journal Service. Feb 13 20:16:24.920748 systemd-modules-load[240]: Inserted module 'br_netfilter' Feb 13 20:16:24.923007 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Feb 13 20:16:24.924492 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Feb 13 20:16:24.927641 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Feb 13 20:16:24.929137 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Feb 13 20:16:24.932844 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Feb 13 20:16:24.939209 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Feb 13 20:16:24.944258 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Feb 13 20:16:24.945620 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Feb 13 20:16:24.948771 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Feb 13 20:16:24.959851 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... Feb 13 20:16:24.961924 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... 
Feb 13 20:16:24.970394 dracut-cmdline[277]: dracut-dracut-053 Feb 13 20:16:24.972891 dracut-cmdline[277]: Using kernel command line parameters: rd.driver.pre=btrfs BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected acpi=force verity.usrhash=c15c751c06cfb933aa98417326b93d899c08a83ce060a940cd01082629c201a7 Feb 13 20:16:24.987253 systemd-resolved[279]: Positive Trust Anchors: Feb 13 20:16:24.987270 systemd-resolved[279]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Feb 13 20:16:24.987302 systemd-resolved[279]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Feb 13 20:16:24.991949 systemd-resolved[279]: Defaulting to hostname 'linux'. Feb 13 20:16:24.995191 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Feb 13 20:16:24.996057 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Feb 13 20:16:25.041740 kernel: SCSI subsystem initialized Feb 13 20:16:25.047729 kernel: Loading iSCSI transport class v2.0-870. Feb 13 20:16:25.056761 kernel: iscsi: registered transport (tcp) Feb 13 20:16:25.072779 kernel: iscsi: registered transport (qla4xxx) Feb 13 20:16:25.072816 kernel: QLogic iSCSI HBA Driver Feb 13 20:16:25.122291 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. Feb 13 20:16:25.131904 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... Feb 13 20:16:25.148164 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. Feb 13 20:16:25.148207 kernel: device-mapper: uevent: version 1.0.3 Feb 13 20:16:25.148959 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com Feb 13 20:16:25.195725 kernel: raid6: neonx8 gen() 15742 MB/s Feb 13 20:16:25.212718 kernel: raid6: neonx4 gen() 15621 MB/s Feb 13 20:16:25.229718 kernel: raid6: neonx2 gen() 13139 MB/s Feb 13 20:16:25.246725 kernel: raid6: neonx1 gen() 10460 MB/s Feb 13 20:16:25.263720 kernel: raid6: int64x8 gen() 6912 MB/s Feb 13 20:16:25.280722 kernel: raid6: int64x4 gen() 7291 MB/s Feb 13 20:16:25.297730 kernel: raid6: int64x2 gen() 6115 MB/s Feb 13 20:16:25.314720 kernel: raid6: int64x1 gen() 5041 MB/s Feb 13 20:16:25.314744 kernel: raid6: using algorithm neonx8 gen() 15742 MB/s Feb 13 20:16:25.331719 kernel: raid6: .... xor() 11886 MB/s, rmw enabled Feb 13 20:16:25.331736 kernel: raid6: using neon recovery algorithm Feb 13 20:16:25.337036 kernel: xor: measuring software checksum speed Feb 13 20:16:25.337054 kernel: 8regs : 19788 MB/sec Feb 13 20:16:25.337063 kernel: 32regs : 19664 MB/sec Feb 13 20:16:25.337978 kernel: arm64_neon : 26795 MB/sec Feb 13 20:16:25.338001 kernel: xor: using function: arm64_neon (26795 MB/sec) Feb 13 20:16:25.387733 kernel: Btrfs loaded, zoned=no, fsverity=no Feb 13 20:16:25.397864 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. 
Feb 13 20:16:25.415905 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Feb 13 20:16:25.427045 systemd-udevd[462]: Using default interface naming scheme 'v255'. Feb 13 20:16:25.430155 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Feb 13 20:16:25.433510 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... Feb 13 20:16:25.446859 dracut-pre-trigger[470]: rd.md=0: removing MD RAID activation Feb 13 20:16:25.471520 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. Feb 13 20:16:25.489867 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Feb 13 20:16:25.528446 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Feb 13 20:16:25.536846 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... Feb 13 20:16:25.547288 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Feb 13 20:16:25.548657 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. Feb 13 20:16:25.549944 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Feb 13 20:16:25.551630 systemd[1]: Reached target remote-fs.target - Remote File Systems. Feb 13 20:16:25.559845 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Feb 13 20:16:25.574081 kernel: virtio_blk virtio1: 1/0/0 default/read/poll queues Feb 13 20:16:25.582163 kernel: virtio_blk virtio1: [vda] 19775488 512-byte logical blocks (10.1 GB/9.43 GiB) Feb 13 20:16:25.582257 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. Feb 13 20:16:25.582269 kernel: GPT:9289727 != 19775487 Feb 13 20:16:25.582279 kernel: GPT:Alternate GPT header not at the end of the disk. Feb 13 20:16:25.582288 kernel: GPT:9289727 != 19775487 Feb 13 20:16:25.582297 kernel: GPT: Use GNU Parted to correct GPT errors. Feb 13 20:16:25.582306 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Feb 13 20:16:25.576118 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Feb 13 20:16:25.576220 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Feb 13 20:16:25.579829 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Feb 13 20:16:25.582651 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Feb 13 20:16:25.582787 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Feb 13 20:16:25.583626 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Feb 13 20:16:25.592017 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Feb 13 20:16:25.593389 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Feb 13 20:16:25.598756 kernel: BTRFS: device fsid 4bb2b262-8ef2-48e3-80f4-24f9d7a85bf6 devid 1 transid 40 /dev/vda3 scanned by (udev-worker) (506) Feb 13 20:16:25.600752 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/vda6 scanned by (udev-worker) (523) Feb 13 20:16:25.605868 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Feb 13 20:16:25.615107 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM. Feb 13 20:16:25.619354 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT. 
Feb 13 20:16:25.622868 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A. Feb 13 20:16:25.623719 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132. Feb 13 20:16:25.628718 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Feb 13 20:16:25.638834 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Feb 13 20:16:25.640272 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Feb 13 20:16:25.645431 disk-uuid[551]: Primary Header is updated. Feb 13 20:16:25.645431 disk-uuid[551]: Secondary Entries is updated. Feb 13 20:16:25.645431 disk-uuid[551]: Secondary Header is updated. Feb 13 20:16:25.647991 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Feb 13 20:16:25.665172 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Feb 13 20:16:26.662733 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Feb 13 20:16:26.662788 disk-uuid[552]: The operation has completed successfully. Feb 13 20:16:26.682539 systemd[1]: disk-uuid.service: Deactivated successfully. Feb 13 20:16:26.683440 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Feb 13 20:16:26.705898 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... Feb 13 20:16:26.708574 sh[573]: Success Feb 13 20:16:26.719775 kernel: device-mapper: verity: sha256 using implementation "sha256-ce" Feb 13 20:16:26.747655 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. Feb 13 20:16:26.761855 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... Feb 13 20:16:26.763690 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. Feb 13 20:16:26.773448 kernel: BTRFS info (device dm-0): first mount of filesystem 4bb2b262-8ef2-48e3-80f4-24f9d7a85bf6 Feb 13 20:16:26.773494 kernel: BTRFS info (device dm-0): using crc32c (crc32c-generic) checksum algorithm Feb 13 20:16:26.773515 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead Feb 13 20:16:26.773535 kernel: BTRFS info (device dm-0): disabling log replay at mount time Feb 13 20:16:26.774717 kernel: BTRFS info (device dm-0): using free space tree Feb 13 20:16:26.777539 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. Feb 13 20:16:26.778591 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Feb 13 20:16:26.792902 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Feb 13 20:16:26.794149 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... Feb 13 20:16:26.800552 kernel: BTRFS info (device vda6): first mount of filesystem 896fb6d3-4143-43a6-a44b-ca1ce10817e1 Feb 13 20:16:26.800589 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm Feb 13 20:16:26.800600 kernel: BTRFS info (device vda6): using free space tree Feb 13 20:16:26.802736 kernel: BTRFS info (device vda6): auto enabling async discard Feb 13 20:16:26.810025 systemd[1]: mnt-oem.mount: Deactivated successfully. Feb 13 20:16:26.810781 kernel: BTRFS info (device vda6): last unmount of filesystem 896fb6d3-4143-43a6-a44b-ca1ce10817e1 Feb 13 20:16:26.816139 systemd[1]: Finished ignition-setup.service - Ignition (setup). 
Feb 13 20:16:26.823854 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... Feb 13 20:16:26.885570 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Feb 13 20:16:26.897914 systemd[1]: Starting systemd-networkd.service - Network Configuration... Feb 13 20:16:26.923650 systemd-networkd[762]: lo: Link UP Feb 13 20:16:26.923659 systemd-networkd[762]: lo: Gained carrier Feb 13 20:16:26.924313 systemd-networkd[762]: Enumeration completed Feb 13 20:16:26.924410 systemd[1]: Started systemd-networkd.service - Network Configuration. Feb 13 20:16:26.924785 systemd-networkd[762]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Feb 13 20:16:26.924787 systemd-networkd[762]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Feb 13 20:16:26.928141 ignition[664]: Ignition 2.19.0 Feb 13 20:16:26.925602 systemd[1]: Reached target network.target - Network. Feb 13 20:16:26.928147 ignition[664]: Stage: fetch-offline Feb 13 20:16:26.925607 systemd-networkd[762]: eth0: Link UP Feb 13 20:16:26.928182 ignition[664]: no configs at "/usr/lib/ignition/base.d" Feb 13 20:16:26.925610 systemd-networkd[762]: eth0: Gained carrier Feb 13 20:16:26.928190 ignition[664]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Feb 13 20:16:26.925616 systemd-networkd[762]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Feb 13 20:16:26.928350 ignition[664]: parsed url from cmdline: "" Feb 13 20:16:26.928353 ignition[664]: no config URL provided Feb 13 20:16:26.928357 ignition[664]: reading system config file "/usr/lib/ignition/user.ign" Feb 13 20:16:26.928364 ignition[664]: no config at "/usr/lib/ignition/user.ign" Feb 13 20:16:26.928386 ignition[664]: op(1): [started] loading QEMU firmware config module Feb 13 20:16:26.928392 ignition[664]: op(1): executing: "modprobe" "qemu_fw_cfg" Feb 13 20:16:26.935520 ignition[664]: op(1): [finished] loading QEMU firmware config module Feb 13 20:16:26.954760 systemd-networkd[762]: eth0: DHCPv4 address 10.0.0.8/16, gateway 10.0.0.1 acquired from 10.0.0.1 Feb 13 20:16:26.959750 ignition[664]: parsing config with SHA512: 1eca5e7299eab6574a237bbf013edce607999fa3859e6bad02ce65c95f927041b4df43c5e5d28d715b5e96ab8ff38b60b3149db411d8b22117256501cfeeae9a Feb 13 20:16:26.963660 unknown[664]: fetched base config from "system" Feb 13 20:16:26.963671 unknown[664]: fetched user config from "qemu" Feb 13 20:16:26.964167 ignition[664]: fetch-offline: fetch-offline passed Feb 13 20:16:26.964232 ignition[664]: Ignition finished successfully Feb 13 20:16:26.965846 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). Feb 13 20:16:26.967185 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json). Feb 13 20:16:26.979857 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... Feb 13 20:16:26.989810 ignition[773]: Ignition 2.19.0 Feb 13 20:16:26.989819 ignition[773]: Stage: kargs Feb 13 20:16:26.989965 ignition[773]: no configs at "/usr/lib/ignition/base.d" Feb 13 20:16:26.989974 ignition[773]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Feb 13 20:16:26.990802 ignition[773]: kargs: kargs passed Feb 13 20:16:26.992648 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). 
Feb 13 20:16:26.990841 ignition[773]: Ignition finished successfully Feb 13 20:16:26.994254 systemd[1]: Starting ignition-disks.service - Ignition (disks)... Feb 13 20:16:27.006598 ignition[781]: Ignition 2.19.0 Feb 13 20:16:27.006607 ignition[781]: Stage: disks Feb 13 20:16:27.006782 ignition[781]: no configs at "/usr/lib/ignition/base.d" Feb 13 20:16:27.006791 ignition[781]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Feb 13 20:16:27.009758 systemd[1]: Finished ignition-disks.service - Ignition (disks). Feb 13 20:16:27.007603 ignition[781]: disks: disks passed Feb 13 20:16:27.010601 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Feb 13 20:16:27.007643 ignition[781]: Ignition finished successfully Feb 13 20:16:27.011953 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Feb 13 20:16:27.013166 systemd[1]: Reached target local-fs.target - Local File Systems. Feb 13 20:16:27.014472 systemd[1]: Reached target sysinit.target - System Initialization. Feb 13 20:16:27.015622 systemd[1]: Reached target basic.target - Basic System. Feb 13 20:16:27.026871 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... Feb 13 20:16:27.035764 systemd-fsck[792]: ROOT: clean, 14/553520 files, 52654/553472 blocks Feb 13 20:16:27.039640 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Feb 13 20:16:27.042732 systemd[1]: Mounting sysroot.mount - /sysroot... Feb 13 20:16:27.086733 kernel: EXT4-fs (vda9): mounted filesystem 9957d679-c6c4-49f4-b1b2-c3c1f3ba5699 r/w with ordered data mode. Quota mode: none. Feb 13 20:16:27.086748 systemd[1]: Mounted sysroot.mount - /sysroot. Feb 13 20:16:27.087672 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Feb 13 20:16:27.099790 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Feb 13 20:16:27.101126 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Feb 13 20:16:27.102124 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met. Feb 13 20:16:27.102191 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Feb 13 20:16:27.102217 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Feb 13 20:16:27.107805 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Feb 13 20:16:27.109428 kernel: BTRFS: device label OEM devid 1 transid 13 /dev/vda6 scanned by mount (800) Feb 13 20:16:27.109446 kernel: BTRFS info (device vda6): first mount of filesystem 896fb6d3-4143-43a6-a44b-ca1ce10817e1 Feb 13 20:16:27.109402 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... Feb 13 20:16:27.112991 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm Feb 13 20:16:27.113008 kernel: BTRFS info (device vda6): using free space tree Feb 13 20:16:27.114730 kernel: BTRFS info (device vda6): auto enabling async discard Feb 13 20:16:27.115262 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. 
Feb 13 20:16:27.149877 initrd-setup-root[824]: cut: /sysroot/etc/passwd: No such file or directory Feb 13 20:16:27.153659 initrd-setup-root[831]: cut: /sysroot/etc/group: No such file or directory Feb 13 20:16:27.157445 initrd-setup-root[838]: cut: /sysroot/etc/shadow: No such file or directory Feb 13 20:16:27.160866 initrd-setup-root[845]: cut: /sysroot/etc/gshadow: No such file or directory Feb 13 20:16:27.226587 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Feb 13 20:16:27.235830 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Feb 13 20:16:27.238001 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Feb 13 20:16:27.242723 kernel: BTRFS info (device vda6): last unmount of filesystem 896fb6d3-4143-43a6-a44b-ca1ce10817e1 Feb 13 20:16:27.255734 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. Feb 13 20:16:27.257545 ignition[914]: INFO : Ignition 2.19.0 Feb 13 20:16:27.257545 ignition[914]: INFO : Stage: mount Feb 13 20:16:27.258670 ignition[914]: INFO : no configs at "/usr/lib/ignition/base.d" Feb 13 20:16:27.258670 ignition[914]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Feb 13 20:16:27.260188 ignition[914]: INFO : mount: mount passed Feb 13 20:16:27.260188 ignition[914]: INFO : Ignition finished successfully Feb 13 20:16:27.260300 systemd[1]: Finished ignition-mount.service - Ignition (mount). Feb 13 20:16:27.268833 systemd[1]: Starting ignition-files.service - Ignition (files)... Feb 13 20:16:27.772648 systemd[1]: sysroot-oem.mount: Deactivated successfully. Feb 13 20:16:27.788882 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Feb 13 20:16:27.794386 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/vda6 scanned by mount (926) Feb 13 20:16:27.794412 kernel: BTRFS info (device vda6): first mount of filesystem 896fb6d3-4143-43a6-a44b-ca1ce10817e1 Feb 13 20:16:27.794423 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm Feb 13 20:16:27.795101 kernel: BTRFS info (device vda6): using free space tree Feb 13 20:16:27.797735 kernel: BTRFS info (device vda6): auto enabling async discard Feb 13 20:16:27.798441 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. 
Feb 13 20:16:27.813275 ignition[943]: INFO : Ignition 2.19.0 Feb 13 20:16:27.813275 ignition[943]: INFO : Stage: files Feb 13 20:16:27.814465 ignition[943]: INFO : no configs at "/usr/lib/ignition/base.d" Feb 13 20:16:27.814465 ignition[943]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Feb 13 20:16:27.814465 ignition[943]: DEBUG : files: compiled without relabeling support, skipping Feb 13 20:16:27.816981 ignition[943]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Feb 13 20:16:27.816981 ignition[943]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Feb 13 20:16:27.816981 ignition[943]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Feb 13 20:16:27.816981 ignition[943]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Feb 13 20:16:27.820822 ignition[943]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Feb 13 20:16:27.820822 ignition[943]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.0-linux-arm64.tar.gz" Feb 13 20:16:27.820822 ignition[943]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.0-linux-arm64.tar.gz: attempt #1 Feb 13 20:16:27.817038 unknown[943]: wrote ssh authorized keys file for user: core Feb 13 20:16:27.950502 ignition[943]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK Feb 13 20:16:28.106429 ignition[943]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.0-linux-arm64.tar.gz" Feb 13 20:16:28.106429 ignition[943]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh" Feb 13 20:16:28.109346 ignition[943]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh" Feb 13 20:16:28.109346 ignition[943]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml" Feb 13 20:16:28.109346 ignition[943]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml" Feb 13 20:16:28.109346 ignition[943]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Feb 13 20:16:28.109346 ignition[943]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Feb 13 20:16:28.109346 ignition[943]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Feb 13 20:16:28.109346 ignition[943]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Feb 13 20:16:28.109346 ignition[943]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf" Feb 13 20:16:28.109346 ignition[943]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf" Feb 13 20:16:28.109346 ignition[943]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.0-arm64.raw"
Feb 13 20:16:28.109346 ignition[943]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.0-arm64.raw" Feb 13 20:16:28.109346 ignition[943]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.0-arm64.raw" Feb 13 20:16:28.109346 ignition[943]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.32.0-arm64.raw: attempt #1 Feb 13 20:16:28.324569 ignition[943]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK Feb 13 20:16:28.567276 ignition[943]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.0-arm64.raw" Feb 13 20:16:28.567276 ignition[943]: INFO : files: op(b): [started] processing unit "prepare-helm.service" Feb 13 20:16:28.569946 ignition[943]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Feb 13 20:16:28.569946 ignition[943]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Feb 13 20:16:28.569946 ignition[943]: INFO : files: op(b): [finished] processing unit "prepare-helm.service" Feb 13 20:16:28.569946 ignition[943]: INFO : files: op(d): [started] processing unit "coreos-metadata.service" Feb 13 20:16:28.569946 ignition[943]: INFO : files: op(d): op(e): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Feb 13 20:16:28.569946 ignition[943]: INFO : files: op(d): op(e): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Feb 13 20:16:28.569946 ignition[943]: INFO : files: op(d): [finished] processing unit "coreos-metadata.service" Feb 13 20:16:28.569946 ignition[943]: INFO : files: op(f): [started] setting preset to disabled for "coreos-metadata.service" Feb 13 20:16:28.597020 ignition[943]: INFO : files: op(f): op(10): [started] removing enablement symlink(s) for "coreos-metadata.service" Feb 13 20:16:28.600823 ignition[943]: INFO : files: op(f): op(10): [finished] removing enablement symlink(s) for "coreos-metadata.service" Feb 13 20:16:28.601941 ignition[943]: INFO : files: op(f): [finished] setting preset to disabled for "coreos-metadata.service" Feb 13 20:16:28.601941 ignition[943]: INFO : files: op(11): [started] setting preset to enabled for "prepare-helm.service" Feb 13 20:16:28.601941 ignition[943]: INFO : files: op(11): [finished] setting preset to enabled for "prepare-helm.service" Feb 13 20:16:28.601941 ignition[943]: INFO : files: createResultFile: createFiles: op(12): [started] writing file "/sysroot/etc/.ignition-result.json" Feb 13 20:16:28.601941 ignition[943]: INFO : files: createResultFile: createFiles: op(12): [finished] writing file "/sysroot/etc/.ignition-result.json" Feb 13 20:16:28.601941 ignition[943]: INFO : files: files passed Feb 13 20:16:28.601941 ignition[943]: INFO : Ignition finished successfully Feb 13 20:16:28.603404 systemd[1]: Finished ignition-files.service - Ignition (files). Feb 13 20:16:28.624906 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Feb 13 20:16:28.626875 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Feb 13 20:16:28.629238 systemd[1]: ignition-quench.service: Deactivated successfully.
Feb 13 20:16:28.629910 systemd[1]: Finished ignition-quench.service - Ignition (record completion). Feb 13 20:16:28.633284 initrd-setup-root-after-ignition[972]: grep: /sysroot/oem/oem-release: No such file or directory Feb 13 20:16:28.636700 initrd-setup-root-after-ignition[974]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Feb 13 20:16:28.636700 initrd-setup-root-after-ignition[974]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Feb 13 20:16:28.638822 initrd-setup-root-after-ignition[978]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Feb 13 20:16:28.639688 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Feb 13 20:16:28.640860 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Feb 13 20:16:28.651858 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Feb 13 20:16:28.669993 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Feb 13 20:16:28.670094 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Feb 13 20:16:28.671746 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Feb 13 20:16:28.674506 systemd[1]: Reached target initrd.target - Initrd Default Target. Feb 13 20:16:28.675291 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Feb 13 20:16:28.677770 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Feb 13 20:16:28.690829 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Feb 13 20:16:28.700892 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Feb 13 20:16:28.708588 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Feb 13 20:16:28.709553 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Feb 13 20:16:28.711058 systemd[1]: Stopped target timers.target - Timer Units. Feb 13 20:16:28.712359 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Feb 13 20:16:28.712471 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Feb 13 20:16:28.714318 systemd[1]: Stopped target initrd.target - Initrd Default Target. Feb 13 20:16:28.715786 systemd[1]: Stopped target basic.target - Basic System. Feb 13 20:16:28.717009 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Feb 13 20:16:28.718252 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Feb 13 20:16:28.719616 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Feb 13 20:16:28.721059 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Feb 13 20:16:28.722397 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Feb 13 20:16:28.723786 systemd[1]: Stopped target sysinit.target - System Initialization. Feb 13 20:16:28.725254 systemd[1]: Stopped target local-fs.target - Local File Systems. Feb 13 20:16:28.726481 systemd[1]: Stopped target swap.target - Swaps. Feb 13 20:16:28.727577 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Feb 13 20:16:28.727690 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Feb 13 20:16:28.729396 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. 
Feb 13 20:16:28.730784 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Feb 13 20:16:28.732207 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Feb 13 20:16:28.733605 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Feb 13 20:16:28.734553 systemd[1]: dracut-initqueue.service: Deactivated successfully. Feb 13 20:16:28.734660 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Feb 13 20:16:28.736838 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Feb 13 20:16:28.736950 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Feb 13 20:16:28.738390 systemd[1]: Stopped target paths.target - Path Units. Feb 13 20:16:28.739486 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Feb 13 20:16:28.745737 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Feb 13 20:16:28.746670 systemd[1]: Stopped target slices.target - Slice Units. Feb 13 20:16:28.748226 systemd[1]: Stopped target sockets.target - Socket Units. Feb 13 20:16:28.749428 systemd[1]: iscsid.socket: Deactivated successfully. Feb 13 20:16:28.749517 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Feb 13 20:16:28.750651 systemd[1]: iscsiuio.socket: Deactivated successfully. Feb 13 20:16:28.750748 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Feb 13 20:16:28.751896 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Feb 13 20:16:28.752000 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Feb 13 20:16:28.753326 systemd[1]: ignition-files.service: Deactivated successfully. Feb 13 20:16:28.753422 systemd[1]: Stopped ignition-files.service - Ignition (files). Feb 13 20:16:28.766887 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Feb 13 20:16:28.767547 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Feb 13 20:16:28.767666 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Feb 13 20:16:28.769876 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Feb 13 20:16:28.771009 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Feb 13 20:16:28.771126 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Feb 13 20:16:28.777192 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Feb 13 20:16:28.779873 ignition[998]: INFO : Ignition 2.19.0 Feb 13 20:16:28.779873 ignition[998]: INFO : Stage: umount Feb 13 20:16:28.779873 ignition[998]: INFO : no configs at "/usr/lib/ignition/base.d" Feb 13 20:16:28.779873 ignition[998]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Feb 13 20:16:28.779873 ignition[998]: INFO : umount: umount passed Feb 13 20:16:28.779873 ignition[998]: INFO : Ignition finished successfully Feb 13 20:16:28.777289 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Feb 13 20:16:28.783206 systemd[1]: ignition-mount.service: Deactivated successfully. Feb 13 20:16:28.784151 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Feb 13 20:16:28.786250 systemd[1]: sysroot-boot.mount: Deactivated successfully. Feb 13 20:16:28.786761 systemd[1]: initrd-cleanup.service: Deactivated successfully. Feb 13 20:16:28.786839 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. 
Feb 13 20:16:28.789290 systemd[1]: Stopped target network.target - Network. Feb 13 20:16:28.790022 systemd[1]: ignition-disks.service: Deactivated successfully. Feb 13 20:16:28.790092 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Feb 13 20:16:28.790977 systemd[1]: ignition-kargs.service: Deactivated successfully. Feb 13 20:16:28.791018 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Feb 13 20:16:28.792218 systemd[1]: ignition-setup.service: Deactivated successfully. Feb 13 20:16:28.792254 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Feb 13 20:16:28.793670 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Feb 13 20:16:28.793722 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Feb 13 20:16:28.795312 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Feb 13 20:16:28.796498 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Feb 13 20:16:28.801752 systemd-networkd[762]: eth0: DHCPv6 lease lost Feb 13 20:16:28.803553 systemd[1]: systemd-networkd.service: Deactivated successfully. Feb 13 20:16:28.803659 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Feb 13 20:16:28.804855 systemd[1]: systemd-networkd.socket: Deactivated successfully. Feb 13 20:16:28.804883 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Feb 13 20:16:28.817839 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Feb 13 20:16:28.818489 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Feb 13 20:16:28.818546 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Feb 13 20:16:28.821642 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Feb 13 20:16:28.823588 systemd[1]: systemd-resolved.service: Deactivated successfully. Feb 13 20:16:28.823682 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Feb 13 20:16:28.827228 systemd[1]: systemd-sysctl.service: Deactivated successfully. Feb 13 20:16:28.827291 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Feb 13 20:16:28.829018 systemd[1]: systemd-modules-load.service: Deactivated successfully. Feb 13 20:16:28.829064 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Feb 13 20:16:28.830656 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Feb 13 20:16:28.830699 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Feb 13 20:16:28.833646 systemd[1]: systemd-udevd.service: Deactivated successfully. Feb 13 20:16:28.834243 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Feb 13 20:16:28.835867 systemd[1]: network-cleanup.service: Deactivated successfully. Feb 13 20:16:28.835949 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Feb 13 20:16:28.838204 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Feb 13 20:16:28.838256 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Feb 13 20:16:28.839865 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Feb 13 20:16:28.839900 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Feb 13 20:16:28.842091 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Feb 13 20:16:28.842144 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. 
Feb 13 20:16:28.844414 systemd[1]: dracut-cmdline.service: Deactivated successfully. Feb 13 20:16:28.844459 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Feb 13 20:16:28.846744 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Feb 13 20:16:28.846789 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Feb 13 20:16:28.859841 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Feb 13 20:16:28.860862 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Feb 13 20:16:28.860918 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Feb 13 20:16:28.862784 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Feb 13 20:16:28.862828 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Feb 13 20:16:28.864759 systemd[1]: sysroot-boot.service: Deactivated successfully. Feb 13 20:16:28.866732 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Feb 13 20:16:28.868444 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Feb 13 20:16:28.868524 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Feb 13 20:16:28.870537 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Feb 13 20:16:28.872293 systemd[1]: initrd-setup-root.service: Deactivated successfully. Feb 13 20:16:28.872367 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Feb 13 20:16:28.881928 systemd[1]: Starting initrd-switch-root.service - Switch Root... Feb 13 20:16:28.887677 systemd[1]: Switching root. Feb 13 20:16:28.922632 systemd-journald[239]: Journal stopped Feb 13 20:16:29.604909 systemd-journald[239]: Received SIGTERM from PID 1 (systemd). Feb 13 20:16:29.604962 kernel: SELinux: policy capability network_peer_controls=1 Feb 13 20:16:29.604976 kernel: SELinux: policy capability open_perms=1 Feb 13 20:16:29.604985 kernel: SELinux: policy capability extended_socket_class=1 Feb 13 20:16:29.604996 kernel: SELinux: policy capability always_check_network=0 Feb 13 20:16:29.605005 kernel: SELinux: policy capability cgroup_seclabel=1 Feb 13 20:16:29.605018 kernel: SELinux: policy capability nnp_nosuid_transition=1 Feb 13 20:16:29.605030 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Feb 13 20:16:29.605039 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Feb 13 20:16:29.605052 kernel: audit: type=1403 audit(1739477789.068:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 Feb 13 20:16:29.605063 systemd[1]: Successfully loaded SELinux policy in 29.791ms. Feb 13 20:16:29.605079 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 9.325ms. Feb 13 20:16:29.605091 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Feb 13 20:16:29.605103 systemd[1]: Detected virtualization kvm. Feb 13 20:16:29.605113 systemd[1]: Detected architecture arm64. Feb 13 20:16:29.605123 systemd[1]: Detected first boot. Feb 13 20:16:29.605136 systemd[1]: Initializing machine ID from VM UUID. Feb 13 20:16:29.605146 zram_generator::config[1044]: No configuration found. Feb 13 20:16:29.605157 systemd[1]: Populated /etc with preset unit settings. 
Feb 13 20:16:29.605167 systemd[1]: initrd-switch-root.service: Deactivated successfully. Feb 13 20:16:29.605182 systemd[1]: Stopped initrd-switch-root.service - Switch Root. Feb 13 20:16:29.605192 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Feb 13 20:16:29.605204 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Feb 13 20:16:29.605214 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Feb 13 20:16:29.605227 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Feb 13 20:16:29.605237 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Feb 13 20:16:29.605248 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Feb 13 20:16:29.605258 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Feb 13 20:16:29.605269 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Feb 13 20:16:29.605279 systemd[1]: Created slice user.slice - User and Session Slice. Feb 13 20:16:29.605289 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Feb 13 20:16:29.605309 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Feb 13 20:16:29.605322 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Feb 13 20:16:29.605335 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Feb 13 20:16:29.605347 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. Feb 13 20:16:29.605358 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Feb 13 20:16:29.605369 systemd[1]: Expecting device dev-ttyAMA0.device - /dev/ttyAMA0... Feb 13 20:16:29.605380 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Feb 13 20:16:29.605390 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. Feb 13 20:16:29.605401 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. Feb 13 20:16:29.605411 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. Feb 13 20:16:29.605423 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Feb 13 20:16:29.605434 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Feb 13 20:16:29.605445 systemd[1]: Reached target remote-fs.target - Remote File Systems. Feb 13 20:16:29.605455 systemd[1]: Reached target slices.target - Slice Units. Feb 13 20:16:29.605466 systemd[1]: Reached target swap.target - Swaps. Feb 13 20:16:29.605477 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Feb 13 20:16:29.605487 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Feb 13 20:16:29.605498 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Feb 13 20:16:29.605509 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Feb 13 20:16:29.605522 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Feb 13 20:16:29.605533 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Feb 13 20:16:29.605544 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... 
Feb 13 20:16:29.605554 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Feb 13 20:16:29.605565 systemd[1]: Mounting media.mount - External Media Directory... Feb 13 20:16:29.605575 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Feb 13 20:16:29.605586 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Feb 13 20:16:29.605597 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... Feb 13 20:16:29.605610 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Feb 13 20:16:29.605621 systemd[1]: Reached target machines.target - Containers. Feb 13 20:16:29.605631 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Feb 13 20:16:29.605642 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Feb 13 20:16:29.605653 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Feb 13 20:16:29.605664 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Feb 13 20:16:29.605674 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Feb 13 20:16:29.605685 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Feb 13 20:16:29.605696 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Feb 13 20:16:29.605716 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Feb 13 20:16:29.605730 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Feb 13 20:16:29.605741 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Feb 13 20:16:29.605753 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Feb 13 20:16:29.605767 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. Feb 13 20:16:29.605778 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Feb 13 20:16:29.605789 systemd[1]: Stopped systemd-fsck-usr.service. Feb 13 20:16:29.605800 systemd[1]: Starting systemd-journald.service - Journal Service... Feb 13 20:16:29.605813 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Feb 13 20:16:29.605824 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Feb 13 20:16:29.605835 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Feb 13 20:16:29.605846 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Feb 13 20:16:29.605856 systemd[1]: verity-setup.service: Deactivated successfully. Feb 13 20:16:29.605866 systemd[1]: Stopped verity-setup.service. Feb 13 20:16:29.605877 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Feb 13 20:16:29.605887 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Feb 13 20:16:29.605898 systemd[1]: Mounted media.mount - External Media Directory. Feb 13 20:16:29.605910 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Feb 13 20:16:29.605921 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. 
Feb 13 20:16:29.605931 kernel: loop: module loaded Feb 13 20:16:29.605941 kernel: fuse: init (API version 7.39) Feb 13 20:16:29.605951 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Feb 13 20:16:29.605963 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Feb 13 20:16:29.605976 systemd[1]: modprobe@configfs.service: Deactivated successfully. Feb 13 20:16:29.605987 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Feb 13 20:16:29.605997 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Feb 13 20:16:29.606008 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Feb 13 20:16:29.606018 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Feb 13 20:16:29.606029 kernel: ACPI: bus type drm_connector registered Feb 13 20:16:29.606038 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Feb 13 20:16:29.606049 systemd[1]: modprobe@drm.service: Deactivated successfully. Feb 13 20:16:29.606079 systemd-journald[1104]: Collecting audit messages is disabled. Feb 13 20:16:29.606100 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Feb 13 20:16:29.606111 systemd[1]: modprobe@fuse.service: Deactivated successfully. Feb 13 20:16:29.606122 systemd-journald[1104]: Journal started Feb 13 20:16:29.606142 systemd-journald[1104]: Runtime Journal (/run/log/journal/7b203b0716ec4aae8568b9502adc2695) is 5.9M, max 47.3M, 41.4M free. Feb 13 20:16:29.421093 systemd[1]: Queued start job for default target multi-user.target. Feb 13 20:16:29.439740 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6. Feb 13 20:16:29.440113 systemd[1]: systemd-journald.service: Deactivated successfully. Feb 13 20:16:29.606983 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Feb 13 20:16:29.609175 systemd[1]: Started systemd-journald.service - Journal Service. Feb 13 20:16:29.609974 systemd[1]: modprobe@loop.service: Deactivated successfully. Feb 13 20:16:29.610114 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Feb 13 20:16:29.611197 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Feb 13 20:16:29.612243 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Feb 13 20:16:29.613383 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Feb 13 20:16:29.623935 systemd[1]: Reached target network-pre.target - Preparation for Network. Feb 13 20:16:29.629821 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Feb 13 20:16:29.635483 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Feb 13 20:16:29.636496 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Feb 13 20:16:29.636528 systemd[1]: Reached target local-fs.target - Local File Systems. Feb 13 20:16:29.638226 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink). Feb 13 20:16:29.659911 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Feb 13 20:16:29.661891 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Feb 13 20:16:29.662695 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. 
Feb 13 20:16:29.664126 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Feb 13 20:16:29.666360 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Feb 13 20:16:29.667211 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Feb 13 20:16:29.672864 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Feb 13 20:16:29.673969 systemd-journald[1104]: Time spent on flushing to /var/log/journal/7b203b0716ec4aae8568b9502adc2695 is 30.002ms for 848 entries. Feb 13 20:16:29.673969 systemd-journald[1104]: System Journal (/var/log/journal/7b203b0716ec4aae8568b9502adc2695) is 8.0M, max 195.6M, 187.6M free. Feb 13 20:16:29.720648 systemd-journald[1104]: Received client request to flush runtime journal. Feb 13 20:16:29.720695 kernel: loop0: detected capacity change from 0 to 114328 Feb 13 20:16:29.720745 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Feb 13 20:16:29.674072 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Feb 13 20:16:29.675702 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Feb 13 20:16:29.678003 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Feb 13 20:16:29.680559 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Feb 13 20:16:29.683221 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Feb 13 20:16:29.686550 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Feb 13 20:16:29.689361 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Feb 13 20:16:29.690731 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Feb 13 20:16:29.692008 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Feb 13 20:16:29.697067 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Feb 13 20:16:29.704884 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk... Feb 13 20:16:29.709984 systemd[1]: Starting systemd-sysusers.service - Create System Users... Feb 13 20:16:29.712666 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization... Feb 13 20:16:29.715342 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Feb 13 20:16:29.721701 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Feb 13 20:16:29.735314 udevadm[1167]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation.service, lvm2-activation-early.service not to pull it in. Feb 13 20:16:29.748025 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Feb 13 20:16:29.748632 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk. Feb 13 20:16:29.757790 kernel: loop1: detected capacity change from 0 to 201592 Feb 13 20:16:29.760244 systemd[1]: Finished systemd-sysusers.service - Create System Users. Feb 13 20:16:29.769894 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Feb 13 20:16:29.781736 kernel: loop2: detected capacity change from 0 to 114432 Feb 13 20:16:29.788630 systemd-tmpfiles[1175]: ACLs are not supported, ignoring. 
Feb 13 20:16:29.788646 systemd-tmpfiles[1175]: ACLs are not supported, ignoring. Feb 13 20:16:29.793674 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Feb 13 20:16:29.816729 kernel: loop3: detected capacity change from 0 to 114328 Feb 13 20:16:29.821747 kernel: loop4: detected capacity change from 0 to 201592 Feb 13 20:16:29.826726 kernel: loop5: detected capacity change from 0 to 114432 Feb 13 20:16:29.830487 (sd-merge)[1180]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes'. Feb 13 20:16:29.830883 (sd-merge)[1180]: Merged extensions into '/usr'. Feb 13 20:16:29.835848 systemd[1]: Reloading requested from client PID 1154 ('systemd-sysext') (unit systemd-sysext.service)... Feb 13 20:16:29.835865 systemd[1]: Reloading... Feb 13 20:16:29.882813 zram_generator::config[1205]: No configuration found. Feb 13 20:16:29.964782 ldconfig[1149]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Feb 13 20:16:29.976527 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Feb 13 20:16:30.012364 systemd[1]: Reloading finished in 176 ms. Feb 13 20:16:30.038610 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Feb 13 20:16:30.039801 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Feb 13 20:16:30.051943 systemd[1]: Starting ensure-sysext.service... Feb 13 20:16:30.053604 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Feb 13 20:16:30.064143 systemd[1]: Reloading requested from client PID 1240 ('systemctl') (unit ensure-sysext.service)... Feb 13 20:16:30.064168 systemd[1]: Reloading... Feb 13 20:16:30.078523 systemd-tmpfiles[1241]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Feb 13 20:16:30.078795 systemd-tmpfiles[1241]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Feb 13 20:16:30.079437 systemd-tmpfiles[1241]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Feb 13 20:16:30.079644 systemd-tmpfiles[1241]: ACLs are not supported, ignoring. Feb 13 20:16:30.079694 systemd-tmpfiles[1241]: ACLs are not supported, ignoring. Feb 13 20:16:30.082315 systemd-tmpfiles[1241]: Detected autofs mount point /boot during canonicalization of boot. Feb 13 20:16:30.082412 systemd-tmpfiles[1241]: Skipping /boot Feb 13 20:16:30.090502 systemd-tmpfiles[1241]: Detected autofs mount point /boot during canonicalization of boot. Feb 13 20:16:30.091919 systemd-tmpfiles[1241]: Skipping /boot Feb 13 20:16:30.104735 zram_generator::config[1269]: No configuration found. Feb 13 20:16:30.193215 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Feb 13 20:16:30.228766 systemd[1]: Reloading finished in 164 ms. Feb 13 20:16:30.245749 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Feb 13 20:16:30.254122 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Feb 13 20:16:30.261379 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... 
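At this point systemd-sysext has merged the 'containerd-flatcar', 'docker-flatcar' and 'kubernetes' extension images into /usr and the manager has reloaded. On a running host the merge state can be inspected with systemd-sysext; the small wrapper below is only a convenience sketch (the "status" and "refresh" verbs are standard systemd-sysext subcommands, everything else is illustrative and not part of the boot flow).

import subprocess

# Illustrative helper: show which system extensions are currently merged,
# and re-merge after dropping a new *.raw image into /etc/extensions.
# "status" is read-only; "refresh" needs root.
def sysext_status() -> str:
    return subprocess.run(
        ["systemd-sysext", "status"], capture_output=True, text=True, check=True
    ).stdout

def sysext_refresh() -> None:
    # Unmerges and re-merges all extension images (e.g. the kubernetes
    # sysext written by Ignition above) without a reboot.
    subprocess.run(["systemd-sysext", "refresh"], check=True)

if __name__ == "__main__":
    print(sysext_status())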
Feb 13 20:16:30.263654 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Feb 13 20:16:30.265602 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Feb 13 20:16:30.269037 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Feb 13 20:16:30.274904 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Feb 13 20:16:30.276791 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Feb 13 20:16:30.279574 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Feb 13 20:16:30.280975 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Feb 13 20:16:30.286864 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Feb 13 20:16:30.293102 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Feb 13 20:16:30.295375 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Feb 13 20:16:30.296127 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Feb 13 20:16:30.296574 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Feb 13 20:16:30.298494 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Feb 13 20:16:30.299464 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Feb 13 20:16:30.301174 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Feb 13 20:16:30.302877 systemd[1]: modprobe@loop.service: Deactivated successfully. Feb 13 20:16:30.303081 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Feb 13 20:16:30.304098 systemd-udevd[1312]: Using default interface naming scheme 'v255'. Feb 13 20:16:30.311836 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Feb 13 20:16:30.323028 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Feb 13 20:16:30.324952 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Feb 13 20:16:30.327068 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Feb 13 20:16:30.328483 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Feb 13 20:16:30.329622 systemd[1]: Starting systemd-update-done.service - Update is Completed... Feb 13 20:16:30.333989 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Feb 13 20:16:30.337376 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Feb 13 20:16:30.338889 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Feb 13 20:16:30.340368 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Feb 13 20:16:30.341804 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Feb 13 20:16:30.341937 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Feb 13 20:16:30.354637 systemd[1]: Finished ensure-sysext.service. Feb 13 20:16:30.358189 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Feb 13 20:16:30.358359 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. 
Feb 13 20:16:30.361211 augenrules[1357]: No rules Feb 13 20:16:30.360099 systemd[1]: modprobe@loop.service: Deactivated successfully. Feb 13 20:16:30.360220 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Feb 13 20:16:30.362146 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Feb 13 20:16:30.364131 systemd[1]: Finished systemd-update-done.service - Update is Completed. Feb 13 20:16:30.372515 systemd[1]: Condition check resulted in dev-ttyAMA0.device - /dev/ttyAMA0 being skipped. Feb 13 20:16:30.373724 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Feb 13 20:16:30.382959 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Feb 13 20:16:30.383734 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 40 scanned by (udev-worker) (1348) Feb 13 20:16:30.385994 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Feb 13 20:16:30.388918 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Feb 13 20:16:30.391765 systemd[1]: Starting systemd-networkd.service - Network Configuration... Feb 13 20:16:30.392813 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Feb 13 20:16:30.403871 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization... Feb 13 20:16:30.404698 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Feb 13 20:16:30.405103 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Feb 13 20:16:30.405253 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Feb 13 20:16:30.406348 systemd[1]: modprobe@drm.service: Deactivated successfully. Feb 13 20:16:30.406468 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Feb 13 20:16:30.407410 systemd[1]: Started systemd-userdbd.service - User Database Manager. Feb 13 20:16:30.418099 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Feb 13 20:16:30.432426 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Feb 13 20:16:30.440917 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Feb 13 20:16:30.473688 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization. Feb 13 20:16:30.475246 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Feb 13 20:16:30.476575 systemd[1]: Reached target time-set.target - System Time Set. Feb 13 20:16:30.480632 systemd-networkd[1375]: lo: Link UP Feb 13 20:16:30.480638 systemd-networkd[1375]: lo: Gained carrier Feb 13 20:16:30.481309 systemd-networkd[1375]: Enumeration completed Feb 13 20:16:30.481420 systemd[1]: Started systemd-networkd.service - Network Configuration. Feb 13 20:16:30.483091 systemd-networkd[1375]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Feb 13 20:16:30.483101 systemd-networkd[1375]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. 
Feb 13 20:16:30.483774 systemd-networkd[1375]: eth0: Link UP Feb 13 20:16:30.483781 systemd-networkd[1375]: eth0: Gained carrier Feb 13 20:16:30.483793 systemd-networkd[1375]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Feb 13 20:16:30.488914 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Feb 13 20:16:30.494391 systemd-resolved[1308]: Positive Trust Anchors: Feb 13 20:16:30.494615 systemd-resolved[1308]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Feb 13 20:16:30.494692 systemd-resolved[1308]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Feb 13 20:16:30.495678 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Feb 13 20:16:30.496814 systemd-networkd[1375]: eth0: DHCPv4 address 10.0.0.8/16, gateway 10.0.0.1 acquired from 10.0.0.1 Feb 13 20:16:30.497345 systemd-timesyncd[1376]: Network configuration changed, trying to establish connection. Feb 13 20:16:30.498331 systemd-timesyncd[1376]: Contacted time server 10.0.0.1:123 (10.0.0.1). Feb 13 20:16:30.498380 systemd-timesyncd[1376]: Initial clock synchronization to Thu 2025-02-13 20:16:30.510454 UTC. Feb 13 20:16:30.500789 systemd-resolved[1308]: Defaulting to hostname 'linux'. Feb 13 20:16:30.502269 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Feb 13 20:16:30.503166 systemd[1]: Reached target network.target - Network. Feb 13 20:16:30.503857 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Feb 13 20:16:30.516262 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization. Feb 13 20:16:30.525955 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes... Feb 13 20:16:30.543754 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Feb 13 20:16:30.553008 lvm[1400]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Feb 13 20:16:30.585801 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes. Feb 13 20:16:30.587261 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Feb 13 20:16:30.589869 systemd[1]: Reached target sysinit.target - System Initialization. Feb 13 20:16:30.590702 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Feb 13 20:16:30.591607 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Feb 13 20:16:30.592847 systemd[1]: Started logrotate.timer - Daily rotation of log files. Feb 13 20:16:30.593700 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Feb 13 20:16:30.594594 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. 
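systemd-networkd brings eth0 up with a DHCPv4 lease of 10.0.0.8/16 via gateway 10.0.0.1, and systemd-timesyncd then synchronizes against 10.0.0.1:123. As a quick sanity check of those values, a short standard-library Python sketch (addresses taken from the log; nothing here is part of the boot flow):

import ipaddress

# Values recorded by systemd-networkd above.
lease = ipaddress.ip_interface("10.0.0.8/16")
gateway = ipaddress.ip_address("10.0.0.1")

print(lease.network)                    # 10.0.0.0/16
print(gateway in lease.network)         # True: the gateway is on-link
print(lease.network.num_addresses - 2)  # usable host addresses in a /16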
Feb 13 20:16:30.595510 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Feb 13 20:16:30.595543 systemd[1]: Reached target paths.target - Path Units. Feb 13 20:16:30.596210 systemd[1]: Reached target timers.target - Timer Units. Feb 13 20:16:30.597644 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Feb 13 20:16:30.599800 systemd[1]: Starting docker.socket - Docker Socket for the API... Feb 13 20:16:30.613724 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Feb 13 20:16:30.615699 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes... Feb 13 20:16:30.617026 systemd[1]: Listening on docker.socket - Docker Socket for the API. Feb 13 20:16:30.617908 systemd[1]: Reached target sockets.target - Socket Units. Feb 13 20:16:30.618574 systemd[1]: Reached target basic.target - Basic System. Feb 13 20:16:30.619325 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Feb 13 20:16:30.619362 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Feb 13 20:16:30.620271 systemd[1]: Starting containerd.service - containerd container runtime... Feb 13 20:16:30.621991 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Feb 13 20:16:30.623657 lvm[1409]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Feb 13 20:16:30.624876 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Feb 13 20:16:30.628875 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Feb 13 20:16:30.629921 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Feb 13 20:16:30.631885 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Feb 13 20:16:30.634496 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Feb 13 20:16:30.637939 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Feb 13 20:16:30.640498 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Feb 13 20:16:30.647894 systemd[1]: Starting systemd-logind.service - User Login Management... Feb 13 20:16:30.650228 jq[1412]: false Feb 13 20:16:30.652873 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Feb 13 20:16:30.653348 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Feb 13 20:16:30.654454 extend-filesystems[1413]: Found loop3 Feb 13 20:16:30.654929 systemd[1]: Starting update-engine.service - Update Engine... 
Feb 13 20:16:30.655504 extend-filesystems[1413]: Found loop4 Feb 13 20:16:30.656687 extend-filesystems[1413]: Found loop5 Feb 13 20:16:30.656687 extend-filesystems[1413]: Found vda Feb 13 20:16:30.656687 extend-filesystems[1413]: Found vda1 Feb 13 20:16:30.656687 extend-filesystems[1413]: Found vda2 Feb 13 20:16:30.656687 extend-filesystems[1413]: Found vda3 Feb 13 20:16:30.656687 extend-filesystems[1413]: Found usr Feb 13 20:16:30.656687 extend-filesystems[1413]: Found vda4 Feb 13 20:16:30.656687 extend-filesystems[1413]: Found vda6 Feb 13 20:16:30.656687 extend-filesystems[1413]: Found vda7 Feb 13 20:16:30.656687 extend-filesystems[1413]: Found vda9 Feb 13 20:16:30.656687 extend-filesystems[1413]: Checking size of /dev/vda9 Feb 13 20:16:30.658531 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Feb 13 20:16:30.660696 dbus-daemon[1411]: [system] SELinux support is enabled Feb 13 20:16:30.662410 systemd[1]: Started dbus.service - D-Bus System Message Bus. Feb 13 20:16:30.665735 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes. Feb 13 20:16:30.671127 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Feb 13 20:16:30.671278 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Feb 13 20:16:30.671534 systemd[1]: motdgen.service: Deactivated successfully. Feb 13 20:16:30.671661 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Feb 13 20:16:30.674424 jq[1430]: true Feb 13 20:16:30.675309 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Feb 13 20:16:30.675502 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Feb 13 20:16:30.679705 extend-filesystems[1413]: Resized partition /dev/vda9 Feb 13 20:16:30.682253 extend-filesystems[1437]: resize2fs 1.47.1 (20-May-2024) Feb 13 20:16:30.686853 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 1864699 blocks Feb 13 20:16:30.701156 (ntainerd)[1438]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Feb 13 20:16:30.702255 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 40 scanned by (udev-worker) (1338) Feb 13 20:16:30.714639 systemd-logind[1421]: Watching system buttons on /dev/input/event0 (Power Button) Feb 13 20:16:30.715125 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Feb 13 20:16:30.715158 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Feb 13 20:16:30.715501 systemd-logind[1421]: New seat seat0. Feb 13 20:16:30.717022 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Feb 13 20:16:30.717039 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Feb 13 20:16:30.723073 kernel: EXT4-fs (vda9): resized filesystem to 1864699 Feb 13 20:16:30.719368 systemd[1]: Started systemd-logind.service - User Login Management. 
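The root filesystem on /dev/vda9 is grown online from 553472 to 1864699 4 KiB blocks. A tiny worked calculation of what that means in GiB, using only the block counts reported by the kernel and resize2fs above:

# ext4 block counts reported above, with a 4 KiB block size.
BLOCK = 4096
old_blocks, new_blocks = 553_472, 1_864_699

old_gib = old_blocks * BLOCK / 2**30
new_gib = new_blocks * BLOCK / 2**30
print(f"before: {old_gib:.2f} GiB, after: {new_gib:.2f} GiB, "
      f"grown by {new_gib - old_gib:.2f} GiB")
# before: 2.11 GiB, after: 7.11 GiB, grown by 5.00 GiB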
Feb 13 20:16:30.736306 tar[1433]: linux-arm64/LICENSE Feb 13 20:16:30.736542 jq[1436]: true Feb 13 20:16:30.736884 tar[1433]: linux-arm64/helm Feb 13 20:16:30.737287 extend-filesystems[1437]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required Feb 13 20:16:30.737287 extend-filesystems[1437]: old_desc_blocks = 1, new_desc_blocks = 1 Feb 13 20:16:30.737287 extend-filesystems[1437]: The filesystem on /dev/vda9 is now 1864699 (4k) blocks long. Feb 13 20:16:30.740304 extend-filesystems[1413]: Resized filesystem in /dev/vda9 Feb 13 20:16:30.740447 systemd[1]: extend-filesystems.service: Deactivated successfully. Feb 13 20:16:30.740640 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Feb 13 20:16:30.751778 update_engine[1428]: I20250213 20:16:30.751560 1428 main.cc:92] Flatcar Update Engine starting Feb 13 20:16:30.753781 systemd[1]: Started update-engine.service - Update Engine. Feb 13 20:16:30.754009 update_engine[1428]: I20250213 20:16:30.753808 1428 update_check_scheduler.cc:74] Next update check in 6m50s Feb 13 20:16:30.762007 systemd[1]: Started locksmithd.service - Cluster reboot manager. Feb 13 20:16:30.813702 bash[1466]: Updated "/home/core/.ssh/authorized_keys" Feb 13 20:16:30.815530 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Feb 13 20:16:30.818318 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met. Feb 13 20:16:30.828952 locksmithd[1463]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Feb 13 20:16:30.915288 containerd[1438]: time="2025-02-13T20:16:30.915089680Z" level=info msg="starting containerd" revision=174e0d1785eeda18dc2beba45e1d5a188771636b version=v1.7.21 Feb 13 20:16:30.946043 containerd[1438]: time="2025-02-13T20:16:30.945994520Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Feb 13 20:16:30.948438 containerd[1438]: time="2025-02-13T20:16:30.947393760Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.74-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Feb 13 20:16:30.948438 containerd[1438]: time="2025-02-13T20:16:30.947425400Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Feb 13 20:16:30.948438 containerd[1438]: time="2025-02-13T20:16:30.947442600Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Feb 13 20:16:30.948438 containerd[1438]: time="2025-02-13T20:16:30.947594040Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1 Feb 13 20:16:30.948438 containerd[1438]: time="2025-02-13T20:16:30.947611120Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1 Feb 13 20:16:30.948438 containerd[1438]: time="2025-02-13T20:16:30.947659360Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1 Feb 13 20:16:30.948438 containerd[1438]: time="2025-02-13T20:16:30.947670880Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." 
type=io.containerd.snapshotter.v1 Feb 13 20:16:30.948438 containerd[1438]: time="2025-02-13T20:16:30.947836120Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Feb 13 20:16:30.948438 containerd[1438]: time="2025-02-13T20:16:30.947851560Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Feb 13 20:16:30.948438 containerd[1438]: time="2025-02-13T20:16:30.947863680Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1 Feb 13 20:16:30.948438 containerd[1438]: time="2025-02-13T20:16:30.947872800Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Feb 13 20:16:30.948668 containerd[1438]: time="2025-02-13T20:16:30.947958280Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Feb 13 20:16:30.948668 containerd[1438]: time="2025-02-13T20:16:30.948145440Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Feb 13 20:16:30.948668 containerd[1438]: time="2025-02-13T20:16:30.948237360Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Feb 13 20:16:30.948668 containerd[1438]: time="2025-02-13T20:16:30.948250920Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Feb 13 20:16:30.948668 containerd[1438]: time="2025-02-13T20:16:30.948338400Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 Feb 13 20:16:30.948668 containerd[1438]: time="2025-02-13T20:16:30.948377120Z" level=info msg="metadata content store policy set" policy=shared Feb 13 20:16:30.952995 containerd[1438]: time="2025-02-13T20:16:30.952956960Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Feb 13 20:16:30.953060 containerd[1438]: time="2025-02-13T20:16:30.953017520Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Feb 13 20:16:30.953060 containerd[1438]: time="2025-02-13T20:16:30.953034920Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1 Feb 13 20:16:30.953060 containerd[1438]: time="2025-02-13T20:16:30.953049600Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1 Feb 13 20:16:30.953127 containerd[1438]: time="2025-02-13T20:16:30.953063400Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Feb 13 20:16:30.953231 containerd[1438]: time="2025-02-13T20:16:30.953210400Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Feb 13 20:16:30.954187 containerd[1438]: time="2025-02-13T20:16:30.953479360Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." 
type=io.containerd.runtime.v2 Feb 13 20:16:30.954187 containerd[1438]: time="2025-02-13T20:16:30.953618600Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2 Feb 13 20:16:30.954187 containerd[1438]: time="2025-02-13T20:16:30.953635160Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1 Feb 13 20:16:30.954187 containerd[1438]: time="2025-02-13T20:16:30.953646680Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1 Feb 13 20:16:30.954187 containerd[1438]: time="2025-02-13T20:16:30.953659600Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Feb 13 20:16:30.954187 containerd[1438]: time="2025-02-13T20:16:30.953671560Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Feb 13 20:16:30.954187 containerd[1438]: time="2025-02-13T20:16:30.953683240Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Feb 13 20:16:30.954187 containerd[1438]: time="2025-02-13T20:16:30.953695560Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Feb 13 20:16:30.954187 containerd[1438]: time="2025-02-13T20:16:30.953725440Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Feb 13 20:16:30.954187 containerd[1438]: time="2025-02-13T20:16:30.953738560Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Feb 13 20:16:30.954187 containerd[1438]: time="2025-02-13T20:16:30.953749960Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Feb 13 20:16:30.954187 containerd[1438]: time="2025-02-13T20:16:30.953760560Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Feb 13 20:16:30.954187 containerd[1438]: time="2025-02-13T20:16:30.953779760Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Feb 13 20:16:30.954187 containerd[1438]: time="2025-02-13T20:16:30.953792840Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Feb 13 20:16:30.954470 containerd[1438]: time="2025-02-13T20:16:30.953804160Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Feb 13 20:16:30.954470 containerd[1438]: time="2025-02-13T20:16:30.953815800Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Feb 13 20:16:30.954470 containerd[1438]: time="2025-02-13T20:16:30.953829720Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Feb 13 20:16:30.954470 containerd[1438]: time="2025-02-13T20:16:30.953844520Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Feb 13 20:16:30.954470 containerd[1438]: time="2025-02-13T20:16:30.953856200Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Feb 13 20:16:30.954470 containerd[1438]: time="2025-02-13T20:16:30.953868080Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." 
type=io.containerd.grpc.v1 Feb 13 20:16:30.954470 containerd[1438]: time="2025-02-13T20:16:30.953880640Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1 Feb 13 20:16:30.954470 containerd[1438]: time="2025-02-13T20:16:30.953894640Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1 Feb 13 20:16:30.954470 containerd[1438]: time="2025-02-13T20:16:30.953905480Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Feb 13 20:16:30.954470 containerd[1438]: time="2025-02-13T20:16:30.953916640Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1 Feb 13 20:16:30.954470 containerd[1438]: time="2025-02-13T20:16:30.953928160Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Feb 13 20:16:30.954470 containerd[1438]: time="2025-02-13T20:16:30.953948040Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1 Feb 13 20:16:30.954470 containerd[1438]: time="2025-02-13T20:16:30.953967400Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1 Feb 13 20:16:30.954470 containerd[1438]: time="2025-02-13T20:16:30.953979200Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Feb 13 20:16:30.954470 containerd[1438]: time="2025-02-13T20:16:30.953992160Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Feb 13 20:16:30.954726 containerd[1438]: time="2025-02-13T20:16:30.954101920Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Feb 13 20:16:30.954726 containerd[1438]: time="2025-02-13T20:16:30.954117640Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1 Feb 13 20:16:30.954726 containerd[1438]: time="2025-02-13T20:16:30.954131080Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Feb 13 20:16:30.954726 containerd[1438]: time="2025-02-13T20:16:30.954142440Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1 Feb 13 20:16:30.954726 containerd[1438]: time="2025-02-13T20:16:30.954152400Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Feb 13 20:16:30.954840 containerd[1438]: time="2025-02-13T20:16:30.954167360Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1 Feb 13 20:16:30.954891 containerd[1438]: time="2025-02-13T20:16:30.954877760Z" level=info msg="NRI interface is disabled by configuration." Feb 13 20:16:30.954963 containerd[1438]: time="2025-02-13T20:16:30.954950000Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." 
type=io.containerd.grpc.v1 Feb 13 20:16:30.955552 containerd[1438]: time="2025-02-13T20:16:30.955488520Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Feb 13 20:16:30.955732 containerd[1438]: time="2025-02-13T20:16:30.955693040Z" level=info msg="Connect containerd service" Feb 13 20:16:30.955817 containerd[1438]: time="2025-02-13T20:16:30.955803240Z" level=info msg="using legacy CRI server" Feb 13 20:16:30.955864 containerd[1438]: time="2025-02-13T20:16:30.955852560Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Feb 13 20:16:30.956025 containerd[1438]: time="2025-02-13T20:16:30.956007160Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Feb 13 20:16:30.956740 containerd[1438]: time="2025-02-13T20:16:30.956691160Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Feb 13 20:16:30.957013 
containerd[1438]: time="2025-02-13T20:16:30.956984600Z" level=info msg="Start subscribing containerd event" Feb 13 20:16:30.957092 containerd[1438]: time="2025-02-13T20:16:30.957079080Z" level=info msg="Start recovering state" Feb 13 20:16:30.957609 containerd[1438]: time="2025-02-13T20:16:30.957590000Z" level=info msg="Start event monitor" Feb 13 20:16:30.957679 containerd[1438]: time="2025-02-13T20:16:30.957667000Z" level=info msg="Start snapshots syncer" Feb 13 20:16:30.957773 containerd[1438]: time="2025-02-13T20:16:30.957757360Z" level=info msg="Start cni network conf syncer for default" Feb 13 20:16:30.957824 containerd[1438]: time="2025-02-13T20:16:30.957813520Z" level=info msg="Start streaming server" Feb 13 20:16:30.958745 containerd[1438]: time="2025-02-13T20:16:30.957516800Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Feb 13 20:16:30.958745 containerd[1438]: time="2025-02-13T20:16:30.958041200Z" level=info msg=serving... address=/run/containerd/containerd.sock Feb 13 20:16:30.958745 containerd[1438]: time="2025-02-13T20:16:30.958092960Z" level=info msg="containerd successfully booted in 0.043806s" Feb 13 20:16:30.958261 systemd[1]: Started containerd.service - containerd container runtime. Feb 13 20:16:31.111373 tar[1433]: linux-arm64/README.md Feb 13 20:16:31.129759 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Feb 13 20:16:31.334552 sshd_keygen[1427]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Feb 13 20:16:31.353692 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Feb 13 20:16:31.363039 systemd[1]: Starting issuegen.service - Generate /run/issue... Feb 13 20:16:31.368805 systemd[1]: issuegen.service: Deactivated successfully. Feb 13 20:16:31.368969 systemd[1]: Finished issuegen.service - Generate /run/issue. Feb 13 20:16:31.372858 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Feb 13 20:16:31.383535 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Feb 13 20:16:31.386052 systemd[1]: Started getty@tty1.service - Getty on tty1. Feb 13 20:16:31.387844 systemd[1]: Started serial-getty@ttyAMA0.service - Serial Getty on ttyAMA0. Feb 13 20:16:31.388947 systemd[1]: Reached target getty.target - Login Prompts. Feb 13 20:16:31.584902 systemd-networkd[1375]: eth0: Gained IPv6LL Feb 13 20:16:31.589752 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Feb 13 20:16:31.591204 systemd[1]: Reached target network-online.target - Network is Online. Feb 13 20:16:31.599956 systemd[1]: Starting coreos-metadata.service - QEMU metadata agent... Feb 13 20:16:31.602155 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Feb 13 20:16:31.603981 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Feb 13 20:16:31.618463 systemd[1]: coreos-metadata.service: Deactivated successfully. Feb 13 20:16:31.618924 systemd[1]: Finished coreos-metadata.service - QEMU metadata agent. Feb 13 20:16:31.621382 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Feb 13 20:16:31.622298 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Feb 13 20:16:32.119823 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Feb 13 20:16:32.121307 systemd[1]: Reached target multi-user.target - Multi-User System. 
Feb 13 20:16:32.123154 (kubelet)[1523]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Feb 13 20:16:32.126880 systemd[1]: Startup finished in 545ms (kernel) + 4.357s (initrd) + 3.094s (userspace) = 7.997s. Feb 13 20:16:32.546996 kubelet[1523]: E0213 20:16:32.546874 1523 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Feb 13 20:16:32.549261 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Feb 13 20:16:32.549407 systemd[1]: kubelet.service: Failed with result 'exit-code'. Feb 13 20:16:37.456362 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Feb 13 20:16:37.457426 systemd[1]: Started sshd@0-10.0.0.8:22-10.0.0.1:33202.service - OpenSSH per-connection server daemon (10.0.0.1:33202). Feb 13 20:16:37.508503 sshd[1537]: Accepted publickey for core from 10.0.0.1 port 33202 ssh2: RSA SHA256:LaFXDQ5kJ4LcD/Es1CLb+6ve3k3017804ZdZMiXaUQg Feb 13 20:16:37.510058 sshd[1537]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 20:16:37.526114 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Feb 13 20:16:37.535930 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Feb 13 20:16:37.537320 systemd-logind[1421]: New session 1 of user core. Feb 13 20:16:37.544694 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Feb 13 20:16:37.546760 systemd[1]: Starting user@500.service - User Manager for UID 500... Feb 13 20:16:37.553118 (systemd)[1541]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Feb 13 20:16:37.622931 systemd[1541]: Queued start job for default target default.target. Feb 13 20:16:37.635592 systemd[1541]: Created slice app.slice - User Application Slice. Feb 13 20:16:37.635633 systemd[1541]: Reached target paths.target - Paths. Feb 13 20:16:37.635645 systemd[1541]: Reached target timers.target - Timers. Feb 13 20:16:37.636856 systemd[1541]: Starting dbus.socket - D-Bus User Message Bus Socket... Feb 13 20:16:37.646360 systemd[1541]: Listening on dbus.socket - D-Bus User Message Bus Socket. Feb 13 20:16:37.646417 systemd[1541]: Reached target sockets.target - Sockets. Feb 13 20:16:37.646428 systemd[1541]: Reached target basic.target - Basic System. Feb 13 20:16:37.646463 systemd[1541]: Reached target default.target - Main User Target. Feb 13 20:16:37.646487 systemd[1541]: Startup finished in 88ms. Feb 13 20:16:37.646779 systemd[1]: Started user@500.service - User Manager for UID 500. Feb 13 20:16:37.648090 systemd[1]: Started session-1.scope - Session 1 of User core. Feb 13 20:16:37.711407 systemd[1]: Started sshd@1-10.0.0.8:22-10.0.0.1:33206.service - OpenSSH per-connection server daemon (10.0.0.1:33206). Feb 13 20:16:37.757122 sshd[1552]: Accepted publickey for core from 10.0.0.1 port 33206 ssh2: RSA SHA256:LaFXDQ5kJ4LcD/Es1CLb+6ve3k3017804ZdZMiXaUQg Feb 13 20:16:37.758286 sshd[1552]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 20:16:37.762356 systemd-logind[1421]: New session 2 of user core. Feb 13 20:16:37.771854 systemd[1]: Started session-2.scope - Session 2 of User core. 
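The kubelet exit above is the usual pre-bootstrap failure on a kubeadm-style node: /var/lib/kubelet/config.yaml is only written when the cluster is initialized or the node joins, so until then every start of kubelet.service dies with this error and systemd schedules another attempt. The file written at bootstrap is a KubeletConfiguration; a minimal sketch of that shape (values are illustrative assumptions, not taken from this host) is:

    apiVersion: kubelet.config.k8s.io/v1beta1
    kind: KubeletConfiguration
    cgroupDriver: systemd            # consistent with SystemdCgroup:true in the containerd runc options above
    staticPodPath: /etc/kubernetes/manifests
    clusterDomain: cluster.local
    clusterDNS:
      - 10.96.0.10                   # assumed cluster DNS service IP
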
Feb 13 20:16:37.823513 sshd[1552]: pam_unix(sshd:session): session closed for user core Feb 13 20:16:37.832958 systemd[1]: sshd@1-10.0.0.8:22-10.0.0.1:33206.service: Deactivated successfully. Feb 13 20:16:37.834395 systemd[1]: session-2.scope: Deactivated successfully. Feb 13 20:16:37.836770 systemd-logind[1421]: Session 2 logged out. Waiting for processes to exit. Feb 13 20:16:37.837890 systemd[1]: Started sshd@2-10.0.0.8:22-10.0.0.1:33208.service - OpenSSH per-connection server daemon (10.0.0.1:33208). Feb 13 20:16:37.840089 systemd-logind[1421]: Removed session 2. Feb 13 20:16:37.872908 sshd[1559]: Accepted publickey for core from 10.0.0.1 port 33208 ssh2: RSA SHA256:LaFXDQ5kJ4LcD/Es1CLb+6ve3k3017804ZdZMiXaUQg Feb 13 20:16:37.874104 sshd[1559]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 20:16:37.877485 systemd-logind[1421]: New session 3 of user core. Feb 13 20:16:37.891862 systemd[1]: Started session-3.scope - Session 3 of User core. Feb 13 20:16:37.939564 sshd[1559]: pam_unix(sshd:session): session closed for user core Feb 13 20:16:37.952571 systemd[1]: sshd@2-10.0.0.8:22-10.0.0.1:33208.service: Deactivated successfully. Feb 13 20:16:37.954312 systemd[1]: session-3.scope: Deactivated successfully. Feb 13 20:16:37.956370 systemd-logind[1421]: Session 3 logged out. Waiting for processes to exit. Feb 13 20:16:37.957257 systemd[1]: Started sshd@3-10.0.0.8:22-10.0.0.1:33220.service - OpenSSH per-connection server daemon (10.0.0.1:33220). Feb 13 20:16:37.958191 systemd-logind[1421]: Removed session 3. Feb 13 20:16:37.992635 sshd[1566]: Accepted publickey for core from 10.0.0.1 port 33220 ssh2: RSA SHA256:LaFXDQ5kJ4LcD/Es1CLb+6ve3k3017804ZdZMiXaUQg Feb 13 20:16:37.993820 sshd[1566]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 20:16:38.000863 systemd-logind[1421]: New session 4 of user core. Feb 13 20:16:38.006861 systemd[1]: Started session-4.scope - Session 4 of User core. Feb 13 20:16:38.059483 sshd[1566]: pam_unix(sshd:session): session closed for user core Feb 13 20:16:38.069155 systemd[1]: sshd@3-10.0.0.8:22-10.0.0.1:33220.service: Deactivated successfully. Feb 13 20:16:38.072089 systemd[1]: session-4.scope: Deactivated successfully. Feb 13 20:16:38.073275 systemd-logind[1421]: Session 4 logged out. Waiting for processes to exit. Feb 13 20:16:38.081047 systemd[1]: Started sshd@4-10.0.0.8:22-10.0.0.1:33226.service - OpenSSH per-connection server daemon (10.0.0.1:33226). Feb 13 20:16:38.081931 systemd-logind[1421]: Removed session 4. Feb 13 20:16:38.112232 sshd[1573]: Accepted publickey for core from 10.0.0.1 port 33226 ssh2: RSA SHA256:LaFXDQ5kJ4LcD/Es1CLb+6ve3k3017804ZdZMiXaUQg Feb 13 20:16:38.113422 sshd[1573]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 20:16:38.117334 systemd-logind[1421]: New session 5 of user core. Feb 13 20:16:38.130846 systemd[1]: Started session-5.scope - Session 5 of User core. Feb 13 20:16:38.191411 sudo[1576]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Feb 13 20:16:38.191692 sudo[1576]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Feb 13 20:16:38.498944 systemd[1]: Starting docker.service - Docker Application Container Engine... 
Feb 13 20:16:38.499124 (dockerd)[1594]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Feb 13 20:16:38.755909 dockerd[1594]: time="2025-02-13T20:16:38.755787873Z" level=info msg="Starting up" Feb 13 20:16:38.863811 dockerd[1594]: time="2025-02-13T20:16:38.863760022Z" level=info msg="Loading containers: start." Feb 13 20:16:38.947746 kernel: Initializing XFRM netlink socket Feb 13 20:16:39.016816 systemd-networkd[1375]: docker0: Link UP Feb 13 20:16:39.036015 dockerd[1594]: time="2025-02-13T20:16:39.035962789Z" level=info msg="Loading containers: done." Feb 13 20:16:39.056553 dockerd[1594]: time="2025-02-13T20:16:39.056501309Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Feb 13 20:16:39.056743 dockerd[1594]: time="2025-02-13T20:16:39.056607420Z" level=info msg="Docker daemon" commit=061aa95809be396a6b5542618d8a34b02a21ff77 containerd-snapshotter=false storage-driver=overlay2 version=26.1.0 Feb 13 20:16:39.056743 dockerd[1594]: time="2025-02-13T20:16:39.056728056Z" level=info msg="Daemon has completed initialization" Feb 13 20:16:39.085318 dockerd[1594]: time="2025-02-13T20:16:39.084799845Z" level=info msg="API listen on /run/docker.sock" Feb 13 20:16:39.085504 systemd[1]: Started docker.service - Docker Application Container Engine. Feb 13 20:16:39.532988 containerd[1438]: time="2025-02-13T20:16:39.532700260Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.2\"" Feb 13 20:16:40.178304 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1666568681.mount: Deactivated successfully. 
Feb 13 20:16:42.109616 containerd[1438]: time="2025-02-13T20:16:42.109568587Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.32.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 20:16:42.111067 containerd[1438]: time="2025-02-13T20:16:42.111027819Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.32.2: active requests=0, bytes read=26218238" Feb 13 20:16:42.113723 containerd[1438]: time="2025-02-13T20:16:42.112272114Z" level=info msg="ImageCreate event name:\"sha256:6417e1437b6d9a789e1ca789695a574e1df00a632bdbfbcae9695c9a7d500e32\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 20:16:42.114699 containerd[1438]: time="2025-02-13T20:16:42.114665118Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:c47449f3e751588ea0cb74e325e0f83db335a415f4f4c7fb147375dd6c84757f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 20:16:42.116208 containerd[1438]: time="2025-02-13T20:16:42.116164802Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.32.2\" with image id \"sha256:6417e1437b6d9a789e1ca789695a574e1df00a632bdbfbcae9695c9a7d500e32\", repo tag \"registry.k8s.io/kube-apiserver:v1.32.2\", repo digest \"registry.k8s.io/kube-apiserver@sha256:c47449f3e751588ea0cb74e325e0f83db335a415f4f4c7fb147375dd6c84757f\", size \"26215036\" in 2.583404485s" Feb 13 20:16:42.116208 containerd[1438]: time="2025-02-13T20:16:42.116204772Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.2\" returns image reference \"sha256:6417e1437b6d9a789e1ca789695a574e1df00a632bdbfbcae9695c9a7d500e32\"" Feb 13 20:16:42.116874 containerd[1438]: time="2025-02-13T20:16:42.116836462Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.2\"" Feb 13 20:16:42.799787 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Feb 13 20:16:42.812875 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Feb 13 20:16:42.913253 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Feb 13 20:16:42.917131 (kubelet)[1806]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Feb 13 20:16:42.952959 kubelet[1806]: E0213 20:16:42.952900 1806 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Feb 13 20:16:42.955981 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Feb 13 20:16:42.956138 systemd[1]: kubelet.service: Failed with result 'exit-code'. 
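Each completed pull above records three names for the same image: the repo tag, the repo digest the tag resolved to, and the sha256 image id. A short Go sketch against the containerd client performs the same tag-to-digest resolution in the k8s.io namespace (socket path taken from the CRI config dump above; this is an illustrative sketch, not part of the boot flow):

    package main

    import (
        "context"
        "fmt"
        "log"

        "github.com/containerd/containerd"
        "github.com/containerd/containerd/namespaces"
    )

    func main() {
        // Connect to the socket the CRI plugin reported: /run/containerd/containerd.sock
        client, err := containerd.New("/run/containerd/containerd.sock")
        if err != nil {
            log.Fatal(err)
        }
        defer client.Close()

        // Kubernetes-managed images live in the "k8s.io" namespace.
        ctx := namespaces.WithNamespace(context.Background(), "k8s.io")

        // Pull by tag; the returned descriptor carries the digest the tag resolves to
        // (the "repo digest" seen in the log lines above).
        img, err := client.Pull(ctx, "registry.k8s.io/kube-apiserver:v1.32.2", containerd.WithPullUnpack)
        if err != nil {
            log.Fatal(err)
        }
        fmt.Println(img.Name(), img.Target().Digest)
    }
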
Feb 13 20:16:44.129175 containerd[1438]: time="2025-02-13T20:16:44.129120420Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.32.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 20:16:44.129547 containerd[1438]: time="2025-02-13T20:16:44.129500636Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.32.2: active requests=0, bytes read=22528147" Feb 13 20:16:44.130413 containerd[1438]: time="2025-02-13T20:16:44.130375897Z" level=info msg="ImageCreate event name:\"sha256:3c9285acfd2ff7915bb451cc40ac060366ac519f3fef00c455f5aca0e0346c4d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 20:16:44.134060 containerd[1438]: time="2025-02-13T20:16:44.134022057Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:399aa50f4d1361c59dc458e634506d02de32613d03a9a614a21058741162ef90\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 20:16:44.135102 containerd[1438]: time="2025-02-13T20:16:44.135064481Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.32.2\" with image id \"sha256:3c9285acfd2ff7915bb451cc40ac060366ac519f3fef00c455f5aca0e0346c4d\", repo tag \"registry.k8s.io/kube-controller-manager:v1.32.2\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:399aa50f4d1361c59dc458e634506d02de32613d03a9a614a21058741162ef90\", size \"23968941\" in 2.017762693s" Feb 13 20:16:44.135142 containerd[1438]: time="2025-02-13T20:16:44.135102290Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.2\" returns image reference \"sha256:3c9285acfd2ff7915bb451cc40ac060366ac519f3fef00c455f5aca0e0346c4d\"" Feb 13 20:16:44.135615 containerd[1438]: time="2025-02-13T20:16:44.135537840Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.2\"" Feb 13 20:16:45.741347 containerd[1438]: time="2025-02-13T20:16:45.741277546Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.32.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 20:16:45.742399 containerd[1438]: time="2025-02-13T20:16:45.742339045Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.32.2: active requests=0, bytes read=17480802" Feb 13 20:16:45.743160 containerd[1438]: time="2025-02-13T20:16:45.743123037Z" level=info msg="ImageCreate event name:\"sha256:82dfa03f692fb5d84f66c17d6ee9126b081182152b25d28ea456d89b7d5d8911\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 20:16:45.746060 containerd[1438]: time="2025-02-13T20:16:45.746023307Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:45710d74cfd5aa10a001d0cf81747b77c28617444ffee0503d12f1dcd7450f76\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 20:16:45.747256 containerd[1438]: time="2025-02-13T20:16:45.747222480Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.32.2\" with image id \"sha256:82dfa03f692fb5d84f66c17d6ee9126b081182152b25d28ea456d89b7d5d8911\", repo tag \"registry.k8s.io/kube-scheduler:v1.32.2\", repo digest \"registry.k8s.io/kube-scheduler@sha256:45710d74cfd5aa10a001d0cf81747b77c28617444ffee0503d12f1dcd7450f76\", size \"18921614\" in 1.611532802s" Feb 13 20:16:45.747306 containerd[1438]: time="2025-02-13T20:16:45.747258009Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.2\" returns image reference \"sha256:82dfa03f692fb5d84f66c17d6ee9126b081182152b25d28ea456d89b7d5d8911\"" Feb 13 20:16:45.747884 
containerd[1438]: time="2025-02-13T20:16:45.747834710Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.2\"" Feb 13 20:16:46.788782 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount66167839.mount: Deactivated successfully. Feb 13 20:16:47.129307 containerd[1438]: time="2025-02-13T20:16:47.129196260Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.32.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 20:16:47.130311 containerd[1438]: time="2025-02-13T20:16:47.130270027Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.32.2: active requests=0, bytes read=27363384" Feb 13 20:16:47.131270 containerd[1438]: time="2025-02-13T20:16:47.131219885Z" level=info msg="ImageCreate event name:\"sha256:e5aac5df76d9b8dc899ab8c4db25a7648e7fb25cafe7a155066247883c78f062\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 20:16:47.133371 containerd[1438]: time="2025-02-13T20:16:47.133307564Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:83c025f0faa6799fab6645102a98138e39a9a7db2be3bc792c79d72659b1805d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 20:16:47.134587 containerd[1438]: time="2025-02-13T20:16:47.134432022Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.32.2\" with image id \"sha256:e5aac5df76d9b8dc899ab8c4db25a7648e7fb25cafe7a155066247883c78f062\", repo tag \"registry.k8s.io/kube-proxy:v1.32.2\", repo digest \"registry.k8s.io/kube-proxy@sha256:83c025f0faa6799fab6645102a98138e39a9a7db2be3bc792c79d72659b1805d\", size \"27362401\" in 1.38654666s" Feb 13 20:16:47.134587 containerd[1438]: time="2025-02-13T20:16:47.134468991Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.2\" returns image reference \"sha256:e5aac5df76d9b8dc899ab8c4db25a7648e7fb25cafe7a155066247883c78f062\"" Feb 13 20:16:47.135089 containerd[1438]: time="2025-02-13T20:16:47.135047884Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\"" Feb 13 20:16:47.769302 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3112784987.mount: Deactivated successfully. 
Feb 13 20:16:49.007376 containerd[1438]: time="2025-02-13T20:16:49.007224190Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 20:16:49.007807 containerd[1438]: time="2025-02-13T20:16:49.007706014Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.3: active requests=0, bytes read=16951624" Feb 13 20:16:49.008694 containerd[1438]: time="2025-02-13T20:16:49.008643976Z" level=info msg="ImageCreate event name:\"sha256:2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 20:16:49.011980 containerd[1438]: time="2025-02-13T20:16:49.011939046Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 20:16:49.014992 containerd[1438]: time="2025-02-13T20:16:49.014906605Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.3\" with image id \"sha256:2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.3\", repo digest \"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\", size \"16948420\" in 1.879823194s" Feb 13 20:16:49.014992 containerd[1438]: time="2025-02-13T20:16:49.014943653Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\" returns image reference \"sha256:2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4\"" Feb 13 20:16:49.015654 containerd[1438]: time="2025-02-13T20:16:49.015488291Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" Feb 13 20:16:49.464318 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount160621892.mount: Deactivated successfully. 
Feb 13 20:16:49.468892 containerd[1438]: time="2025-02-13T20:16:49.468851747Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 20:16:49.469437 containerd[1438]: time="2025-02-13T20:16:49.469415829Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=268705" Feb 13 20:16:49.470145 containerd[1438]: time="2025-02-13T20:16:49.470116500Z" level=info msg="ImageCreate event name:\"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 20:16:49.472237 containerd[1438]: time="2025-02-13T20:16:49.472203029Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 20:16:49.473059 containerd[1438]: time="2025-02-13T20:16:49.473027447Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"267933\" in 457.51415ms" Feb 13 20:16:49.473133 containerd[1438]: time="2025-02-13T20:16:49.473060334Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\"" Feb 13 20:16:49.473515 containerd[1438]: time="2025-02-13T20:16:49.473438216Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\"" Feb 13 20:16:50.080557 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1577815254.mount: Deactivated successfully. Feb 13 20:16:53.206411 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Feb 13 20:16:53.222127 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Feb 13 20:16:53.316175 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
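One detail worth noting in the pulls above: the CRI plugin's configuration (dumped earlier) pins SandboxImage to registry.k8s.io/pause:3.8, while the control-plane image set being pre-pulled here includes registry.k8s.io/pause:3.10. Aligning the two is normally done in containerd's own config rather than through the deprecated --pod-infra-container-image kubelet flag; a sketch of the relevant TOML, assuming the default /etc/containerd/config.toml and the version 2 layout used by containerd 1.7:

    version = 2

    [plugins."io.containerd.grpc.v1.cri"]
      sandbox_image = "registry.k8s.io/pause:3.10"
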
Feb 13 20:16:53.319793 (kubelet)[1950]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Feb 13 20:16:53.339756 containerd[1438]: time="2025-02-13T20:16:53.339484025Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.16-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 20:16:53.340700 containerd[1438]: time="2025-02-13T20:16:53.340240249Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.16-0: active requests=0, bytes read=67812431" Feb 13 20:16:53.341561 containerd[1438]: time="2025-02-13T20:16:53.341530574Z" level=info msg="ImageCreate event name:\"sha256:7fc9d4aa817aa6a3e549f3cd49d1f7b496407be979fc36dd5f356d59ce8c3a82\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 20:16:53.345741 containerd[1438]: time="2025-02-13T20:16:53.345240718Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 20:16:53.346692 containerd[1438]: time="2025-02-13T20:16:53.346652426Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.16-0\" with image id \"sha256:7fc9d4aa817aa6a3e549f3cd49d1f7b496407be979fc36dd5f356d59ce8c3a82\", repo tag \"registry.k8s.io/etcd:3.5.16-0\", repo digest \"registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5\", size \"67941650\" in 3.873186484s" Feb 13 20:16:53.346692 containerd[1438]: time="2025-02-13T20:16:53.346690073Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\" returns image reference \"sha256:7fc9d4aa817aa6a3e549f3cd49d1f7b496407be979fc36dd5f356d59ce8c3a82\"" Feb 13 20:16:53.360815 kubelet[1950]: E0213 20:16:53.360759 1950 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Feb 13 20:16:53.365149 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Feb 13 20:16:53.365310 systemd[1]: kubelet.service: Failed with result 'exit-code'. Feb 13 20:16:58.898626 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Feb 13 20:16:58.907922 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Feb 13 20:16:58.933054 systemd[1]: Reloading requested from client PID 1987 ('systemctl') (unit session-5.scope)... Feb 13 20:16:58.933069 systemd[1]: Reloading... Feb 13 20:16:58.999751 zram_generator::config[2029]: No configuration found. Feb 13 20:16:59.215306 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Feb 13 20:16:59.266809 systemd[1]: Reloading finished in 333 ms. Feb 13 20:16:59.305412 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Feb 13 20:16:59.308068 systemd[1]: kubelet.service: Deactivated successfully. Feb 13 20:16:59.308265 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Feb 13 20:16:59.309666 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Feb 13 20:16:59.407075 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
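The reload warning above means /usr/lib/systemd/system/docker.socket still lists the legacy /var/run/docker.sock path on line 6; systemd rewrites it to /run/docker.sock at load time, so nothing is broken. To quiet the warning without editing the vendor unit, the usual shape is a drop-in that resets and re-declares the listen path (drop-in path and file name are an assumption):

    # /etc/systemd/system/docker.socket.d/10-run-path.conf (hypothetical)
    [Socket]
    ListenStream=
    ListenStream=/run/docker.sock
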
Feb 13 20:16:59.410635 (kubelet)[2073]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Feb 13 20:16:59.442480 kubelet[2073]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Feb 13 20:16:59.442480 kubelet[2073]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Feb 13 20:16:59.442480 kubelet[2073]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Feb 13 20:16:59.442832 kubelet[2073]: I0213 20:16:59.442534 2073 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Feb 13 20:17:00.018545 kubelet[2073]: I0213 20:17:00.018498 2073 server.go:520] "Kubelet version" kubeletVersion="v1.32.0" Feb 13 20:17:00.018545 kubelet[2073]: I0213 20:17:00.018537 2073 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Feb 13 20:17:00.018985 kubelet[2073]: I0213 20:17:00.018969 2073 server.go:954] "Client rotation is on, will bootstrap in background" Feb 13 20:17:00.063093 kubelet[2073]: E0213 20:17:00.063046 2073 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://10.0.0.8:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.8:6443: connect: connection refused" logger="UnhandledError" Feb 13 20:17:00.064075 kubelet[2073]: I0213 20:17:00.064037 2073 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Feb 13 20:17:00.072719 kubelet[2073]: E0213 20:17:00.072663 2073 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Feb 13 20:17:00.072719 kubelet[2073]: I0213 20:17:00.072703 2073 server.go:1421] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Feb 13 20:17:00.075258 kubelet[2073]: I0213 20:17:00.075225 2073 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Feb 13 20:17:00.075466 kubelet[2073]: I0213 20:17:00.075430 2073 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Feb 13 20:17:00.075627 kubelet[2073]: I0213 20:17:00.075455 2073 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Feb 13 20:17:00.075702 kubelet[2073]: I0213 20:17:00.075693 2073 topology_manager.go:138] "Creating topology manager with none policy" Feb 13 20:17:00.075746 kubelet[2073]: I0213 20:17:00.075703 2073 container_manager_linux.go:304] "Creating device plugin manager" Feb 13 20:17:00.075956 kubelet[2073]: I0213 20:17:00.075930 2073 state_mem.go:36] "Initialized new in-memory state store" Feb 13 20:17:00.078335 kubelet[2073]: I0213 20:17:00.078308 2073 kubelet.go:446] "Attempting to sync node with API server" Feb 13 20:17:00.078335 kubelet[2073]: I0213 20:17:00.078334 2073 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests" Feb 13 20:17:00.078452 kubelet[2073]: I0213 20:17:00.078430 2073 kubelet.go:352] "Adding apiserver pod source" Feb 13 20:17:00.078477 kubelet[2073]: I0213 20:17:00.078450 2073 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Feb 13 20:17:00.084533 kubelet[2073]: W0213 20:17:00.084419 2073 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.8:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.8:6443: connect: connection refused Feb 13 20:17:00.084533 kubelet[2073]: E0213 20:17:00.084479 2073 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.0.0.8:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.8:6443: connect: connection refused" logger="UnhandledError" Feb 13 20:17:00.084875 kubelet[2073]: W0213 20:17:00.084813 2073 reflector.go:569] 
k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.8:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.0.0.8:6443: connect: connection refused Feb 13 20:17:00.084875 kubelet[2073]: E0213 20:17:00.084852 2073 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.0.0.8:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.8:6443: connect: connection refused" logger="UnhandledError" Feb 13 20:17:00.086245 kubelet[2073]: I0213 20:17:00.086217 2073 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Feb 13 20:17:00.088435 kubelet[2073]: I0213 20:17:00.088169 2073 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Feb 13 20:17:00.088435 kubelet[2073]: W0213 20:17:00.088298 2073 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Feb 13 20:17:00.089652 kubelet[2073]: I0213 20:17:00.089626 2073 watchdog_linux.go:99] "Systemd watchdog is not enabled" Feb 13 20:17:00.089887 kubelet[2073]: I0213 20:17:00.089763 2073 server.go:1287] "Started kubelet" Feb 13 20:17:00.090075 kubelet[2073]: I0213 20:17:00.090037 2073 server.go:169] "Starting to listen" address="0.0.0.0" port=10250 Feb 13 20:17:00.090336 kubelet[2073]: I0213 20:17:00.090242 2073 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Feb 13 20:17:00.091232 kubelet[2073]: I0213 20:17:00.090705 2073 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Feb 13 20:17:00.091232 kubelet[2073]: I0213 20:17:00.090947 2073 server.go:490] "Adding debug handlers to kubelet server" Feb 13 20:17:00.095450 kubelet[2073]: E0213 20:17:00.092561 2073 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.8:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.8:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.1823dddc791e981c default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-02-13 20:17:00.089735196 +0000 UTC m=+0.676246656,LastTimestamp:2025-02-13 20:17:00.089735196 +0000 UTC m=+0.676246656,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Feb 13 20:17:00.096222 kubelet[2073]: I0213 20:17:00.096200 2073 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Feb 13 20:17:00.096470 kubelet[2073]: I0213 20:17:00.096457 2073 volume_manager.go:297] "Starting Kubelet Volume Manager" Feb 13 20:17:00.097228 kubelet[2073]: W0213 20:17:00.097184 2073 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.8:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.8:6443: connect: connection refused Feb 13 20:17:00.097288 kubelet[2073]: E0213 20:17:00.097237 2073 reflector.go:166] "Unhandled Error" 
err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.0.0.8:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.8:6443: connect: connection refused" logger="UnhandledError" Feb 13 20:17:00.097609 kubelet[2073]: E0213 20:17:00.097300 2073 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.8:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.8:6443: connect: connection refused" interval="200ms" Feb 13 20:17:00.097825 kubelet[2073]: E0213 20:17:00.097581 2073 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"localhost\" not found" Feb 13 20:17:00.098062 kubelet[2073]: I0213 20:17:00.098024 2073 factory.go:221] Registration of the systemd container factory successfully Feb 13 20:17:00.098180 kubelet[2073]: I0213 20:17:00.098157 2073 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Feb 13 20:17:00.098224 kubelet[2073]: E0213 20:17:00.098181 2073 kubelet.go:1561] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Feb 13 20:17:00.098691 kubelet[2073]: I0213 20:17:00.096558 2073 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Feb 13 20:17:00.098771 kubelet[2073]: I0213 20:17:00.098750 2073 desired_state_of_world_populator.go:149] "Desired state populator starts to run" Feb 13 20:17:00.098960 kubelet[2073]: I0213 20:17:00.098805 2073 reconciler.go:26] "Reconciler: start to sync state" Feb 13 20:17:00.099105 kubelet[2073]: I0213 20:17:00.099084 2073 factory.go:221] Registration of the containerd container factory successfully Feb 13 20:17:00.110014 kubelet[2073]: I0213 20:17:00.109982 2073 cpu_manager.go:221] "Starting CPU manager" policy="none" Feb 13 20:17:00.110014 kubelet[2073]: I0213 20:17:00.110005 2073 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Feb 13 20:17:00.110133 kubelet[2073]: I0213 20:17:00.110022 2073 state_mem.go:36] "Initialized new in-memory state store" Feb 13 20:17:00.110489 kubelet[2073]: I0213 20:17:00.110445 2073 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Feb 13 20:17:00.111657 kubelet[2073]: I0213 20:17:00.111628 2073 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Feb 13 20:17:00.111657 kubelet[2073]: I0213 20:17:00.111650 2073 status_manager.go:227] "Starting to sync pod status with apiserver" Feb 13 20:17:00.111766 kubelet[2073]: I0213 20:17:00.111668 2073 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." 
Feb 13 20:17:00.111766 kubelet[2073]: I0213 20:17:00.111677 2073 kubelet.go:2388] "Starting kubelet main sync loop" Feb 13 20:17:00.111766 kubelet[2073]: E0213 20:17:00.111727 2073 kubelet.go:2412] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Feb 13 20:17:00.178016 kubelet[2073]: I0213 20:17:00.177963 2073 policy_none.go:49] "None policy: Start" Feb 13 20:17:00.178016 kubelet[2073]: I0213 20:17:00.177994 2073 memory_manager.go:186] "Starting memorymanager" policy="None" Feb 13 20:17:00.178016 kubelet[2073]: I0213 20:17:00.178008 2073 state_mem.go:35] "Initializing new in-memory state store" Feb 13 20:17:00.178389 kubelet[2073]: W0213 20:17:00.178335 2073 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.8:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.8:6443: connect: connection refused Feb 13 20:17:00.178435 kubelet[2073]: E0213 20:17:00.178403 2073 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.0.0.8:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.8:6443: connect: connection refused" logger="UnhandledError" Feb 13 20:17:00.183790 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Feb 13 20:17:00.196193 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Feb 13 20:17:00.198007 kubelet[2073]: E0213 20:17:00.197986 2073 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"localhost\" not found" Feb 13 20:17:00.199376 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. Feb 13 20:17:00.209390 kubelet[2073]: I0213 20:17:00.209353 2073 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Feb 13 20:17:00.209764 kubelet[2073]: I0213 20:17:00.209542 2073 eviction_manager.go:189] "Eviction manager: starting control loop" Feb 13 20:17:00.209829 kubelet[2073]: I0213 20:17:00.209791 2073 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Feb 13 20:17:00.210037 kubelet[2073]: I0213 20:17:00.210015 2073 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Feb 13 20:17:00.210728 kubelet[2073]: E0213 20:17:00.210684 2073 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Feb 13 20:17:00.210779 kubelet[2073]: E0213 20:17:00.210740 2073 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Feb 13 20:17:00.220248 systemd[1]: Created slice kubepods-burstable-pod1427cd21992819afae4f0d646f10fcdc.slice - libcontainer container kubepods-burstable-pod1427cd21992819afae4f0d646f10fcdc.slice. 
Feb 13 20:17:00.240501 kubelet[2073]: E0213 20:17:00.240256 2073 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Feb 13 20:17:00.241809 systemd[1]: Created slice kubepods-burstable-podc72911152bbceda2f57fd8d59261e015.slice - libcontainer container kubepods-burstable-podc72911152bbceda2f57fd8d59261e015.slice. Feb 13 20:17:00.254767 kubelet[2073]: E0213 20:17:00.254728 2073 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Feb 13 20:17:00.257077 systemd[1]: Created slice kubepods-burstable-pod95ef9ac46cd4dbaadc63cb713310ae59.slice - libcontainer container kubepods-burstable-pod95ef9ac46cd4dbaadc63cb713310ae59.slice. Feb 13 20:17:00.258675 kubelet[2073]: E0213 20:17:00.258641 2073 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Feb 13 20:17:00.298141 kubelet[2073]: E0213 20:17:00.298028 2073 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.8:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.8:6443: connect: connection refused" interval="400ms" Feb 13 20:17:00.299290 kubelet[2073]: I0213 20:17:00.299221 2073 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/1427cd21992819afae4f0d646f10fcdc-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"1427cd21992819afae4f0d646f10fcdc\") " pod="kube-system/kube-apiserver-localhost" Feb 13 20:17:00.311536 kubelet[2073]: I0213 20:17:00.311517 2073 kubelet_node_status.go:76] "Attempting to register node" node="localhost" Feb 13 20:17:00.312131 kubelet[2073]: E0213 20:17:00.312106 2073 kubelet_node_status.go:108] "Unable to register node with API server" err="Post \"https://10.0.0.8:6443/api/v1/nodes\": dial tcp 10.0.0.8:6443: connect: connection refused" node="localhost" Feb 13 20:17:00.399436 kubelet[2073]: I0213 20:17:00.399408 2073 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/95ef9ac46cd4dbaadc63cb713310ae59-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"95ef9ac46cd4dbaadc63cb713310ae59\") " pod="kube-system/kube-scheduler-localhost" Feb 13 20:17:00.399623 kubelet[2073]: I0213 20:17:00.399582 2073 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/c72911152bbceda2f57fd8d59261e015-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"c72911152bbceda2f57fd8d59261e015\") " pod="kube-system/kube-controller-manager-localhost" Feb 13 20:17:00.399766 kubelet[2073]: I0213 20:17:00.399750 2073 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/c72911152bbceda2f57fd8d59261e015-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"c72911152bbceda2f57fd8d59261e015\") " pod="kube-system/kube-controller-manager-localhost" Feb 13 20:17:00.399879 kubelet[2073]: I0213 20:17:00.399860 2073 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: 
\"kubernetes.io/host-path/c72911152bbceda2f57fd8d59261e015-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"c72911152bbceda2f57fd8d59261e015\") " pod="kube-system/kube-controller-manager-localhost" Feb 13 20:17:00.400100 kubelet[2073]: I0213 20:17:00.400027 2073 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/c72911152bbceda2f57fd8d59261e015-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"c72911152bbceda2f57fd8d59261e015\") " pod="kube-system/kube-controller-manager-localhost" Feb 13 20:17:00.400100 kubelet[2073]: I0213 20:17:00.400053 2073 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/c72911152bbceda2f57fd8d59261e015-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"c72911152bbceda2f57fd8d59261e015\") " pod="kube-system/kube-controller-manager-localhost" Feb 13 20:17:00.400277 kubelet[2073]: I0213 20:17:00.400072 2073 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/1427cd21992819afae4f0d646f10fcdc-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"1427cd21992819afae4f0d646f10fcdc\") " pod="kube-system/kube-apiserver-localhost" Feb 13 20:17:00.400277 kubelet[2073]: I0213 20:17:00.400221 2073 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/1427cd21992819afae4f0d646f10fcdc-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"1427cd21992819afae4f0d646f10fcdc\") " pod="kube-system/kube-apiserver-localhost" Feb 13 20:17:00.514158 kubelet[2073]: I0213 20:17:00.514087 2073 kubelet_node_status.go:76] "Attempting to register node" node="localhost" Feb 13 20:17:00.514519 kubelet[2073]: E0213 20:17:00.514473 2073 kubelet_node_status.go:108] "Unable to register node with API server" err="Post \"https://10.0.0.8:6443/api/v1/nodes\": dial tcp 10.0.0.8:6443: connect: connection refused" node="localhost" Feb 13 20:17:00.540894 kubelet[2073]: E0213 20:17:00.540861 2073 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 20:17:00.541454 containerd[1438]: time="2025-02-13T20:17:00.541418323Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:1427cd21992819afae4f0d646f10fcdc,Namespace:kube-system,Attempt:0,}" Feb 13 20:17:00.555861 kubelet[2073]: E0213 20:17:00.555704 2073 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 20:17:00.556551 containerd[1438]: time="2025-02-13T20:17:00.556523299Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:c72911152bbceda2f57fd8d59261e015,Namespace:kube-system,Attempt:0,}" Feb 13 20:17:00.559850 kubelet[2073]: E0213 20:17:00.559826 2073 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 20:17:00.560259 containerd[1438]: time="2025-02-13T20:17:00.560224181Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:95ef9ac46cd4dbaadc63cb713310ae59,Namespace:kube-system,Attempt:0,}" Feb 13 20:17:00.698798 kubelet[2073]: E0213 20:17:00.698747 2073 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.8:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.8:6443: connect: connection refused" interval="800ms" Feb 13 20:17:00.915894 kubelet[2073]: I0213 20:17:00.915855 2073 kubelet_node_status.go:76] "Attempting to register node" node="localhost" Feb 13 20:17:00.916287 kubelet[2073]: E0213 20:17:00.916247 2073 kubelet_node_status.go:108] "Unable to register node with API server" err="Post \"https://10.0.0.8:6443/api/v1/nodes\": dial tcp 10.0.0.8:6443: connect: connection refused" node="localhost" Feb 13 20:17:00.992829 kubelet[2073]: W0213 20:17:00.992767 2073 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.8:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.8:6443: connect: connection refused Feb 13 20:17:00.992970 kubelet[2073]: E0213 20:17:00.992838 2073 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.0.0.8:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.8:6443: connect: connection refused" logger="UnhandledError" Feb 13 20:17:01.001232 kubelet[2073]: W0213 20:17:01.001165 2073 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.8:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.8:6443: connect: connection refused Feb 13 20:17:01.001232 kubelet[2073]: E0213 20:17:01.001229 2073 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.0.0.8:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.8:6443: connect: connection refused" logger="UnhandledError" Feb 13 20:17:01.002260 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4268381823.mount: Deactivated successfully. 
Feb 13 20:17:01.007169 containerd[1438]: time="2025-02-13T20:17:01.007122631Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Feb 13 20:17:01.008310 containerd[1438]: time="2025-02-13T20:17:01.008241995Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Feb 13 20:17:01.009561 containerd[1438]: time="2025-02-13T20:17:01.008855046Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Feb 13 20:17:01.009561 containerd[1438]: time="2025-02-13T20:17:01.009157210Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Feb 13 20:17:01.009561 containerd[1438]: time="2025-02-13T20:17:01.009473257Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Feb 13 20:17:01.010039 containerd[1438]: time="2025-02-13T20:17:01.010013256Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=269175" Feb 13 20:17:01.010666 containerd[1438]: time="2025-02-13T20:17:01.010643389Z" level=info msg="ImageCreate event name:\"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Feb 13 20:17:01.015045 containerd[1438]: time="2025-02-13T20:17:01.015010192Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Feb 13 20:17:01.016158 containerd[1438]: time="2025-02-13T20:17:01.016001098Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 455.709106ms" Feb 13 20:17:01.016754 containerd[1438]: time="2025-02-13T20:17:01.016725884Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 460.144017ms" Feb 13 20:17:01.018617 containerd[1438]: time="2025-02-13T20:17:01.018583358Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 477.087784ms" Feb 13 20:17:01.044102 kubelet[2073]: W0213 20:17:01.043996 2073 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.8:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.0.0.8:6443: connect: connection refused Feb 13 
20:17:01.044102 kubelet[2073]: E0213 20:17:01.044058 2073 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.0.0.8:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.8:6443: connect: connection refused" logger="UnhandledError" Feb 13 20:17:01.171572 containerd[1438]: time="2025-02-13T20:17:01.171387656Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 20:17:01.171572 containerd[1438]: time="2025-02-13T20:17:01.171443304Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 20:17:01.171572 containerd[1438]: time="2025-02-13T20:17:01.171458746Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 20:17:01.171572 containerd[1438]: time="2025-02-13T20:17:01.171419860Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 20:17:01.171572 containerd[1438]: time="2025-02-13T20:17:01.171474188Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 20:17:01.171572 containerd[1438]: time="2025-02-13T20:17:01.171490111Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 20:17:01.171572 containerd[1438]: time="2025-02-13T20:17:01.171530957Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 20:17:01.172182 containerd[1438]: time="2025-02-13T20:17:01.171577924Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 20:17:01.174829 containerd[1438]: time="2025-02-13T20:17:01.174493713Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 20:17:01.174829 containerd[1438]: time="2025-02-13T20:17:01.174561523Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 20:17:01.174829 containerd[1438]: time="2025-02-13T20:17:01.174573205Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 20:17:01.174829 containerd[1438]: time="2025-02-13T20:17:01.174651216Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 20:17:01.197867 systemd[1]: Started cri-containerd-b31a30cec1e1b101be47352bf5503d7a786a132ae212f483bc282cc9c319fec9.scope - libcontainer container b31a30cec1e1b101be47352bf5503d7a786a132ae212f483bc282cc9c319fec9. Feb 13 20:17:01.202143 systemd[1]: Started cri-containerd-14e4cd89262f39d24ee1a5528b05d5837d27820d88e0c47c3300febd2cbac380.scope - libcontainer container 14e4cd89262f39d24ee1a5528b05d5837d27820d88e0c47c3300febd2cbac380. Feb 13 20:17:01.203791 systemd[1]: Started cri-containerd-bef011e107bc90912f4b2243fe7567ca2ad24b26648675b77915cefe9137e573.scope - libcontainer container bef011e107bc90912f4b2243fe7567ca2ad24b26648675b77915cefe9137e573. 
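Every failure in this stretch is the same underlying condition: the kubelet's reflectors, node registration, and lease controller are all dialing https://10.0.0.8:6443 before the kube-apiserver static pod has come up, so each attempt ends in "connection refused" and the retry interval doubles (800ms here, then 1.6s and 3.2s further down). A minimal probe in the same spirit, with the address taken from the log and the doubling schedule and cap as illustrative assumptions rather than kubelet code:

```go
// apiprobe.go: waits for the API server endpoint that the kubelet entries
// above keep failing to reach ("dial tcp 10.0.0.8:6443: connect: connection
// refused"). The retry interval doubles, loosely following the 800ms -> 1.6s
// -> 3.2s progression visible in the lease-controller messages; the cap is
// an assumption for illustration.
package main

import (
	"fmt"
	"net"
	"time"
)

func main() {
	const addr = "10.0.0.8:6443" // address taken from the log above
	interval := 800 * time.Millisecond

	for attempt := 1; ; attempt++ {
		conn, err := net.DialTimeout("tcp", addr, 2*time.Second)
		if err == nil {
			conn.Close()
			fmt.Printf("attempt %d: %s is accepting connections\n", attempt, addr)
			return
		}
		fmt.Printf("attempt %d: %v; retrying in %s\n", attempt, err, interval)
		time.Sleep(interval)
		if interval < 7*time.Second { // assumed cap; the log only shows up to 3.2s
			interval *= 2
		}
	}
}
```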
Feb 13 20:17:01.230470 containerd[1438]: time="2025-02-13T20:17:01.230426588Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:95ef9ac46cd4dbaadc63cb713310ae59,Namespace:kube-system,Attempt:0,} returns sandbox id \"b31a30cec1e1b101be47352bf5503d7a786a132ae212f483bc282cc9c319fec9\"" Feb 13 20:17:01.231691 kubelet[2073]: E0213 20:17:01.231640 2073 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 20:17:01.236509 containerd[1438]: time="2025-02-13T20:17:01.236047216Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:c72911152bbceda2f57fd8d59261e015,Namespace:kube-system,Attempt:0,} returns sandbox id \"14e4cd89262f39d24ee1a5528b05d5837d27820d88e0c47c3300febd2cbac380\"" Feb 13 20:17:01.237207 kubelet[2073]: E0213 20:17:01.237179 2073 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 20:17:01.237560 containerd[1438]: time="2025-02-13T20:17:01.236976032Z" level=info msg="CreateContainer within sandbox \"b31a30cec1e1b101be47352bf5503d7a786a132ae212f483bc282cc9c319fec9\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Feb 13 20:17:01.239987 containerd[1438]: time="2025-02-13T20:17:01.239955791Z" level=info msg="CreateContainer within sandbox \"14e4cd89262f39d24ee1a5528b05d5837d27820d88e0c47c3300febd2cbac380\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Feb 13 20:17:01.240046 containerd[1438]: time="2025-02-13T20:17:01.240014360Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:1427cd21992819afae4f0d646f10fcdc,Namespace:kube-system,Attempt:0,} returns sandbox id \"bef011e107bc90912f4b2243fe7567ca2ad24b26648675b77915cefe9137e573\"" Feb 13 20:17:01.240490 kubelet[2073]: E0213 20:17:01.240471 2073 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 20:17:01.241808 containerd[1438]: time="2025-02-13T20:17:01.241774099Z" level=info msg="CreateContainer within sandbox \"bef011e107bc90912f4b2243fe7567ca2ad24b26648675b77915cefe9137e573\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Feb 13 20:17:01.253835 containerd[1438]: time="2025-02-13T20:17:01.253762824Z" level=info msg="CreateContainer within sandbox \"bef011e107bc90912f4b2243fe7567ca2ad24b26648675b77915cefe9137e573\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"aa4c496ecfa68521fec7a5041baed726218306e4c96461ec3e9ed7b64c45119c\"" Feb 13 20:17:01.254571 containerd[1438]: time="2025-02-13T20:17:01.254533657Z" level=info msg="StartContainer for \"aa4c496ecfa68521fec7a5041baed726218306e4c96461ec3e9ed7b64c45119c\"" Feb 13 20:17:01.259834 containerd[1438]: time="2025-02-13T20:17:01.259783110Z" level=info msg="CreateContainer within sandbox \"14e4cd89262f39d24ee1a5528b05d5837d27820d88e0c47c3300febd2cbac380\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"f5a508f47751d05f1880089a2dadd36a59009aa33d57fb8c1fa7e71a3be738d8\"" Feb 13 20:17:01.260254 containerd[1438]: time="2025-02-13T20:17:01.260230656Z" level=info msg="StartContainer for \"f5a508f47751d05f1880089a2dadd36a59009aa33d57fb8c1fa7e71a3be738d8\"" Feb 13 
20:17:01.261522 containerd[1438]: time="2025-02-13T20:17:01.261489962Z" level=info msg="CreateContainer within sandbox \"b31a30cec1e1b101be47352bf5503d7a786a132ae212f483bc282cc9c319fec9\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"36ffc1ad72ec68c0f95df59df38fa6a5b599ae0e0049320e07f55fedc9da66c0\"" Feb 13 20:17:01.261942 containerd[1438]: time="2025-02-13T20:17:01.261913464Z" level=info msg="StartContainer for \"36ffc1ad72ec68c0f95df59df38fa6a5b599ae0e0049320e07f55fedc9da66c0\"" Feb 13 20:17:01.282873 systemd[1]: Started cri-containerd-aa4c496ecfa68521fec7a5041baed726218306e4c96461ec3e9ed7b64c45119c.scope - libcontainer container aa4c496ecfa68521fec7a5041baed726218306e4c96461ec3e9ed7b64c45119c. Feb 13 20:17:01.293881 systemd[1]: Started cri-containerd-36ffc1ad72ec68c0f95df59df38fa6a5b599ae0e0049320e07f55fedc9da66c0.scope - libcontainer container 36ffc1ad72ec68c0f95df59df38fa6a5b599ae0e0049320e07f55fedc9da66c0. Feb 13 20:17:01.295765 systemd[1]: Started cri-containerd-f5a508f47751d05f1880089a2dadd36a59009aa33d57fb8c1fa7e71a3be738d8.scope - libcontainer container f5a508f47751d05f1880089a2dadd36a59009aa33d57fb8c1fa7e71a3be738d8. Feb 13 20:17:01.323801 containerd[1438]: time="2025-02-13T20:17:01.323686639Z" level=info msg="StartContainer for \"aa4c496ecfa68521fec7a5041baed726218306e4c96461ec3e9ed7b64c45119c\" returns successfully" Feb 13 20:17:01.351290 containerd[1438]: time="2025-02-13T20:17:01.351246577Z" level=info msg="StartContainer for \"36ffc1ad72ec68c0f95df59df38fa6a5b599ae0e0049320e07f55fedc9da66c0\" returns successfully" Feb 13 20:17:01.351290 containerd[1438]: time="2025-02-13T20:17:01.351246497Z" level=info msg="StartContainer for \"f5a508f47751d05f1880089a2dadd36a59009aa33d57fb8c1fa7e71a3be738d8\" returns successfully" Feb 13 20:17:01.499520 kubelet[2073]: E0213 20:17:01.499399 2073 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.8:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.8:6443: connect: connection refused" interval="1.6s" Feb 13 20:17:01.503481 kubelet[2073]: W0213 20:17:01.503257 2073 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.8:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.8:6443: connect: connection refused Feb 13 20:17:01.507043 kubelet[2073]: E0213 20:17:01.504996 2073 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.0.0.8:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.8:6443: connect: connection refused" logger="UnhandledError" Feb 13 20:17:01.717376 kubelet[2073]: I0213 20:17:01.717352 2073 kubelet_node_status.go:76] "Attempting to register node" node="localhost" Feb 13 20:17:02.135695 kubelet[2073]: E0213 20:17:02.135478 2073 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Feb 13 20:17:02.135695 kubelet[2073]: E0213 20:17:02.135601 2073 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 20:17:02.137530 kubelet[2073]: E0213 20:17:02.137506 2073 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from 
the cluster" err="node \"localhost\" not found" node="localhost" Feb 13 20:17:02.137879 kubelet[2073]: E0213 20:17:02.137821 2073 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 20:17:02.138961 kubelet[2073]: E0213 20:17:02.138789 2073 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Feb 13 20:17:02.138961 kubelet[2073]: E0213 20:17:02.138892 2073 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 20:17:03.077856 kubelet[2073]: I0213 20:17:03.077806 2073 kubelet_node_status.go:79] "Successfully registered node" node="localhost" Feb 13 20:17:03.077856 kubelet[2073]: E0213 20:17:03.077850 2073 kubelet_node_status.go:549] "Error updating node status, will retry" err="error getting node \"localhost\": node \"localhost\" not found" Feb 13 20:17:03.086828 kubelet[2073]: E0213 20:17:03.086794 2073 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"localhost\" not found" Feb 13 20:17:03.141183 kubelet[2073]: E0213 20:17:03.140659 2073 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Feb 13 20:17:03.141183 kubelet[2073]: E0213 20:17:03.140795 2073 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 20:17:03.141183 kubelet[2073]: E0213 20:17:03.141007 2073 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Feb 13 20:17:03.141183 kubelet[2073]: E0213 20:17:03.141084 2073 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 20:17:03.156628 kubelet[2073]: E0213 20:17:03.156598 2073 controller.go:145] "Failed to ensure lease exists, will retry" err="namespaces \"kube-node-lease\" not found" interval="3.2s" Feb 13 20:17:03.187761 kubelet[2073]: E0213 20:17:03.187720 2073 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"localhost\" not found" Feb 13 20:17:03.288722 kubelet[2073]: E0213 20:17:03.288667 2073 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"localhost\" not found" Feb 13 20:17:03.389819 kubelet[2073]: E0213 20:17:03.389393 2073 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"localhost\" not found" Feb 13 20:17:03.490254 kubelet[2073]: E0213 20:17:03.490208 2073 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"localhost\" not found" Feb 13 20:17:03.590347 kubelet[2073]: E0213 20:17:03.590297 2073 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"localhost\" not found" Feb 13 20:17:03.691477 kubelet[2073]: E0213 20:17:03.691422 2073 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"localhost\" not found" Feb 13 20:17:03.791699 kubelet[2073]: E0213 20:17:03.791668 2073 kubelet_node_status.go:467] "Error getting the current node from lister" err="node 
\"localhost\" not found" Feb 13 20:17:03.892529 kubelet[2073]: E0213 20:17:03.892489 2073 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"localhost\" not found" Feb 13 20:17:03.993846 kubelet[2073]: E0213 20:17:03.993230 2073 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"localhost\" not found" Feb 13 20:17:04.089376 kubelet[2073]: I0213 20:17:04.089333 2073 apiserver.go:52] "Watching apiserver" Feb 13 20:17:04.099201 kubelet[2073]: I0213 20:17:04.099171 2073 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world" Feb 13 20:17:04.099290 kubelet[2073]: I0213 20:17:04.099211 2073 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Feb 13 20:17:04.110944 kubelet[2073]: I0213 20:17:04.110909 2073 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Feb 13 20:17:04.115181 kubelet[2073]: I0213 20:17:04.115151 2073 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" Feb 13 20:17:04.118869 kubelet[2073]: E0213 20:17:04.118846 2073 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 20:17:04.141098 kubelet[2073]: I0213 20:17:04.141066 2073 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Feb 13 20:17:04.141197 kubelet[2073]: I0213 20:17:04.141178 2073 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Feb 13 20:17:04.145202 kubelet[2073]: E0213 20:17:04.145173 2073 kubelet.go:3202] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" already exists" pod="kube-system/kube-scheduler-localhost" Feb 13 20:17:04.145333 kubelet[2073]: E0213 20:17:04.145314 2073 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 20:17:04.147019 kubelet[2073]: E0213 20:17:04.145841 2073 kubelet.go:3202] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost" Feb 13 20:17:04.147019 kubelet[2073]: E0213 20:17:04.145963 2073 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 20:17:04.858649 systemd[1]: Reloading requested from client PID 2355 ('systemctl') (unit session-5.scope)... Feb 13 20:17:04.858665 systemd[1]: Reloading... Feb 13 20:17:04.919894 zram_generator::config[2394]: No configuration found. Feb 13 20:17:05.004405 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Feb 13 20:17:05.068297 systemd[1]: Reloading finished in 209 ms. Feb 13 20:17:05.100748 kubelet[2073]: I0213 20:17:05.100697 2073 dynamic_cafile_content.go:175] "Shutting down controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Feb 13 20:17:05.100857 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Feb 13 20:17:05.113867 systemd[1]: kubelet.service: Deactivated successfully. 
Feb 13 20:17:05.114089 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Feb 13 20:17:05.114138 systemd[1]: kubelet.service: Consumed 1.036s CPU time, 124.9M memory peak, 0B memory swap peak. Feb 13 20:17:05.119001 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Feb 13 20:17:05.215556 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Feb 13 20:17:05.220437 (kubelet)[2436]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Feb 13 20:17:05.255613 kubelet[2436]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Feb 13 20:17:05.255613 kubelet[2436]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Feb 13 20:17:05.255613 kubelet[2436]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Feb 13 20:17:05.255613 kubelet[2436]: I0213 20:17:05.255158 2436 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Feb 13 20:17:05.267587 kubelet[2436]: I0213 20:17:05.266139 2436 server.go:520] "Kubelet version" kubeletVersion="v1.32.0" Feb 13 20:17:05.267587 kubelet[2436]: I0213 20:17:05.266167 2436 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Feb 13 20:17:05.267587 kubelet[2436]: I0213 20:17:05.266413 2436 server.go:954] "Client rotation is on, will bootstrap in background" Feb 13 20:17:05.267761 kubelet[2436]: I0213 20:17:05.267744 2436 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Feb 13 20:17:05.271846 kubelet[2436]: I0213 20:17:05.271821 2436 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Feb 13 20:17:05.276887 kubelet[2436]: E0213 20:17:05.276839 2436 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Feb 13 20:17:05.276887 kubelet[2436]: I0213 20:17:05.276884 2436 server.go:1421] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Feb 13 20:17:05.279828 kubelet[2436]: I0213 20:17:05.279794 2436 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Feb 13 20:17:05.280032 kubelet[2436]: I0213 20:17:05.279987 2436 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Feb 13 20:17:05.280289 kubelet[2436]: I0213 20:17:05.280021 2436 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Feb 13 20:17:05.280372 kubelet[2436]: I0213 20:17:05.280294 2436 topology_manager.go:138] "Creating topology manager with none policy" Feb 13 20:17:05.280372 kubelet[2436]: I0213 20:17:05.280304 2436 container_manager_linux.go:304] "Creating device plugin manager" Feb 13 20:17:05.280372 kubelet[2436]: I0213 20:17:05.280349 2436 state_mem.go:36] "Initialized new in-memory state store" Feb 13 20:17:05.280512 kubelet[2436]: I0213 20:17:05.280495 2436 kubelet.go:446] "Attempting to sync node with API server" Feb 13 20:17:05.280541 kubelet[2436]: I0213 20:17:05.280514 2436 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests" Feb 13 20:17:05.280541 kubelet[2436]: I0213 20:17:05.280531 2436 kubelet.go:352] "Adding apiserver pod source" Feb 13 20:17:05.280541 kubelet[2436]: I0213 20:17:05.280540 2436 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Feb 13 20:17:05.282874 kubelet[2436]: I0213 20:17:05.282100 2436 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Feb 13 20:17:05.282874 kubelet[2436]: I0213 20:17:05.282624 2436 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Feb 13 20:17:05.283134 kubelet[2436]: I0213 20:17:05.283109 2436 watchdog_linux.go:99] "Systemd watchdog is not enabled" Feb 13 20:17:05.283172 kubelet[2436]: I0213 20:17:05.283149 2436 server.go:1287] "Started kubelet" Feb 13 20:17:05.284845 kubelet[2436]: I0213 20:17:05.284770 2436 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Feb 13 20:17:05.284937 kubelet[2436]: I0213 
20:17:05.284914 2436 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Feb 13 20:17:05.285335 kubelet[2436]: I0213 20:17:05.285298 2436 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Feb 13 20:17:05.285418 kubelet[2436]: I0213 20:17:05.285378 2436 server.go:169] "Starting to listen" address="0.0.0.0" port=10250 Feb 13 20:17:05.291345 kubelet[2436]: I0213 20:17:05.291304 2436 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Feb 13 20:17:05.296107 kubelet[2436]: I0213 20:17:05.296058 2436 volume_manager.go:297] "Starting Kubelet Volume Manager" Feb 13 20:17:05.296481 kubelet[2436]: I0213 20:17:05.296218 2436 desired_state_of_world_populator.go:149] "Desired state populator starts to run" Feb 13 20:17:05.296481 kubelet[2436]: I0213 20:17:05.296345 2436 reconciler.go:26] "Reconciler: start to sync state" Feb 13 20:17:05.296481 kubelet[2436]: E0213 20:17:05.296370 2436 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"localhost\" not found" Feb 13 20:17:05.297195 kubelet[2436]: I0213 20:17:05.297175 2436 server.go:490] "Adding debug handlers to kubelet server" Feb 13 20:17:05.305735 kubelet[2436]: I0213 20:17:05.302804 2436 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Feb 13 20:17:05.307172 kubelet[2436]: I0213 20:17:05.307133 2436 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Feb 13 20:17:05.308793 kubelet[2436]: I0213 20:17:05.308077 2436 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Feb 13 20:17:05.308793 kubelet[2436]: I0213 20:17:05.308097 2436 status_manager.go:227] "Starting to sync pod status with apiserver" Feb 13 20:17:05.308793 kubelet[2436]: I0213 20:17:05.308115 2436 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." Feb 13 20:17:05.308793 kubelet[2436]: I0213 20:17:05.308121 2436 kubelet.go:2388] "Starting kubelet main sync loop" Feb 13 20:17:05.308793 kubelet[2436]: E0213 20:17:05.308159 2436 kubelet.go:2412] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Feb 13 20:17:05.311935 kubelet[2436]: I0213 20:17:05.311586 2436 factory.go:221] Registration of the containerd container factory successfully Feb 13 20:17:05.311935 kubelet[2436]: I0213 20:17:05.311609 2436 factory.go:221] Registration of the systemd container factory successfully Feb 13 20:17:05.318492 kubelet[2436]: E0213 20:17:05.318467 2436 kubelet.go:1561] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Feb 13 20:17:05.341787 kubelet[2436]: I0213 20:17:05.341765 2436 cpu_manager.go:221] "Starting CPU manager" policy="none" Feb 13 20:17:05.341787 kubelet[2436]: I0213 20:17:05.341783 2436 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Feb 13 20:17:05.341890 kubelet[2436]: I0213 20:17:05.341801 2436 state_mem.go:36] "Initialized new in-memory state store" Feb 13 20:17:05.341954 kubelet[2436]: I0213 20:17:05.341938 2436 state_mem.go:88] "Updated default CPUSet" cpuSet="" Feb 13 20:17:05.341983 kubelet[2436]: I0213 20:17:05.341953 2436 state_mem.go:96] "Updated CPUSet assignments" assignments={} Feb 13 20:17:05.341983 kubelet[2436]: I0213 20:17:05.341970 2436 policy_none.go:49] "None policy: Start" Feb 13 20:17:05.341983 kubelet[2436]: I0213 20:17:05.341978 2436 memory_manager.go:186] "Starting memorymanager" policy="None" Feb 13 20:17:05.342043 kubelet[2436]: I0213 20:17:05.341986 2436 state_mem.go:35] "Initializing new in-memory state store" Feb 13 20:17:05.342082 kubelet[2436]: I0213 20:17:05.342072 2436 state_mem.go:75] "Updated machine memory state" Feb 13 20:17:05.345377 kubelet[2436]: I0213 20:17:05.345360 2436 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Feb 13 20:17:05.345526 kubelet[2436]: I0213 20:17:05.345504 2436 eviction_manager.go:189] "Eviction manager: starting control loop" Feb 13 20:17:05.345557 kubelet[2436]: I0213 20:17:05.345515 2436 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Feb 13 20:17:05.345795 kubelet[2436]: I0213 20:17:05.345699 2436 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Feb 13 20:17:05.347032 kubelet[2436]: E0213 20:17:05.346924 2436 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." 
err="no imagefs label for configured runtime" Feb 13 20:17:05.408802 kubelet[2436]: I0213 20:17:05.408781 2436 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Feb 13 20:17:05.409111 kubelet[2436]: I0213 20:17:05.408802 2436 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" Feb 13 20:17:05.409111 kubelet[2436]: I0213 20:17:05.408824 2436 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Feb 13 20:17:05.414355 kubelet[2436]: E0213 20:17:05.414315 2436 kubelet.go:3202] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost" Feb 13 20:17:05.414686 kubelet[2436]: E0213 20:17:05.414666 2436 kubelet.go:3202] "Failed creating a mirror pod" err="pods \"kube-controller-manager-localhost\" already exists" pod="kube-system/kube-controller-manager-localhost" Feb 13 20:17:05.414777 kubelet[2436]: E0213 20:17:05.414670 2436 kubelet.go:3202] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" already exists" pod="kube-system/kube-scheduler-localhost" Feb 13 20:17:05.449224 kubelet[2436]: I0213 20:17:05.449189 2436 kubelet_node_status.go:76] "Attempting to register node" node="localhost" Feb 13 20:17:05.454983 kubelet[2436]: I0213 20:17:05.454942 2436 kubelet_node_status.go:125] "Node was previously registered" node="localhost" Feb 13 20:17:05.455081 kubelet[2436]: I0213 20:17:05.455068 2436 kubelet_node_status.go:79] "Successfully registered node" node="localhost" Feb 13 20:17:05.497656 kubelet[2436]: I0213 20:17:05.497595 2436 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/1427cd21992819afae4f0d646f10fcdc-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"1427cd21992819afae4f0d646f10fcdc\") " pod="kube-system/kube-apiserver-localhost" Feb 13 20:17:05.497656 kubelet[2436]: I0213 20:17:05.497649 2436 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/1427cd21992819afae4f0d646f10fcdc-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"1427cd21992819afae4f0d646f10fcdc\") " pod="kube-system/kube-apiserver-localhost" Feb 13 20:17:05.497656 kubelet[2436]: I0213 20:17:05.497668 2436 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/c72911152bbceda2f57fd8d59261e015-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"c72911152bbceda2f57fd8d59261e015\") " pod="kube-system/kube-controller-manager-localhost" Feb 13 20:17:05.497905 kubelet[2436]: I0213 20:17:05.497685 2436 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/1427cd21992819afae4f0d646f10fcdc-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"1427cd21992819afae4f0d646f10fcdc\") " pod="kube-system/kube-apiserver-localhost" Feb 13 20:17:05.497905 kubelet[2436]: I0213 20:17:05.497724 2436 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/c72911152bbceda2f57fd8d59261e015-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"c72911152bbceda2f57fd8d59261e015\") " 
pod="kube-system/kube-controller-manager-localhost" Feb 13 20:17:05.497905 kubelet[2436]: I0213 20:17:05.497741 2436 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/c72911152bbceda2f57fd8d59261e015-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"c72911152bbceda2f57fd8d59261e015\") " pod="kube-system/kube-controller-manager-localhost" Feb 13 20:17:05.497905 kubelet[2436]: I0213 20:17:05.497758 2436 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/c72911152bbceda2f57fd8d59261e015-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"c72911152bbceda2f57fd8d59261e015\") " pod="kube-system/kube-controller-manager-localhost" Feb 13 20:17:05.497905 kubelet[2436]: I0213 20:17:05.497774 2436 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/c72911152bbceda2f57fd8d59261e015-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"c72911152bbceda2f57fd8d59261e015\") " pod="kube-system/kube-controller-manager-localhost" Feb 13 20:17:05.498013 kubelet[2436]: I0213 20:17:05.497790 2436 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/95ef9ac46cd4dbaadc63cb713310ae59-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"95ef9ac46cd4dbaadc63cb713310ae59\") " pod="kube-system/kube-scheduler-localhost" Feb 13 20:17:05.714965 kubelet[2436]: E0213 20:17:05.714859 2436 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 20:17:05.714965 kubelet[2436]: E0213 20:17:05.714881 2436 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 20:17:05.715074 kubelet[2436]: E0213 20:17:05.714988 2436 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 20:17:06.283021 kubelet[2436]: I0213 20:17:06.281761 2436 apiserver.go:52] "Watching apiserver" Feb 13 20:17:06.296608 kubelet[2436]: I0213 20:17:06.296554 2436 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world" Feb 13 20:17:06.329977 kubelet[2436]: I0213 20:17:06.329943 2436 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Feb 13 20:17:06.330451 kubelet[2436]: I0213 20:17:06.329991 2436 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" Feb 13 20:17:06.330722 kubelet[2436]: I0213 20:17:06.330110 2436 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Feb 13 20:17:06.337812 kubelet[2436]: E0213 20:17:06.337311 2436 kubelet.go:3202] "Failed creating a mirror pod" err="pods \"kube-controller-manager-localhost\" already exists" pod="kube-system/kube-controller-manager-localhost" Feb 13 20:17:06.337812 kubelet[2436]: E0213 20:17:06.337338 2436 kubelet.go:3202] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" already exists" 
pod="kube-system/kube-apiserver-localhost" Feb 13 20:17:06.337812 kubelet[2436]: E0213 20:17:06.337471 2436 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 20:17:06.337812 kubelet[2436]: E0213 20:17:06.337518 2436 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 20:17:06.340811 kubelet[2436]: E0213 20:17:06.340784 2436 kubelet.go:3202] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" already exists" pod="kube-system/kube-scheduler-localhost" Feb 13 20:17:06.341013 kubelet[2436]: E0213 20:17:06.340998 2436 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 20:17:06.359478 kubelet[2436]: I0213 20:17:06.359113 2436 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=2.359094048 podStartE2EDuration="2.359094048s" podCreationTimestamp="2025-02-13 20:17:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-02-13 20:17:06.352035201 +0000 UTC m=+1.127682410" watchObservedRunningTime="2025-02-13 20:17:06.359094048 +0000 UTC m=+1.134741257" Feb 13 20:17:06.359478 kubelet[2436]: I0213 20:17:06.359224 2436 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=2.359219184 podStartE2EDuration="2.359219184s" podCreationTimestamp="2025-02-13 20:17:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-02-13 20:17:06.359083526 +0000 UTC m=+1.134730735" watchObservedRunningTime="2025-02-13 20:17:06.359219184 +0000 UTC m=+1.134866353" Feb 13 20:17:06.394037 kubelet[2436]: I0213 20:17:06.393878 2436 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=2.393862416 podStartE2EDuration="2.393862416s" podCreationTimestamp="2025-02-13 20:17:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-02-13 20:17:06.393548776 +0000 UTC m=+1.169195985" watchObservedRunningTime="2025-02-13 20:17:06.393862416 +0000 UTC m=+1.169509585" Feb 13 20:17:06.585459 sudo[1576]: pam_unix(sudo:session): session closed for user root Feb 13 20:17:06.587545 sshd[1573]: pam_unix(sshd:session): session closed for user core Feb 13 20:17:06.591841 systemd[1]: sshd@4-10.0.0.8:22-10.0.0.1:33226.service: Deactivated successfully. Feb 13 20:17:06.594308 systemd[1]: session-5.scope: Deactivated successfully. Feb 13 20:17:06.594757 systemd[1]: session-5.scope: Consumed 6.875s CPU time, 155.2M memory peak, 0B memory swap peak. Feb 13 20:17:06.596303 systemd-logind[1421]: Session 5 logged out. Waiting for processes to exit. Feb 13 20:17:06.597426 systemd-logind[1421]: Removed session 5. 
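The pod_startup_latency_tracker entries earlier in this stretch report podStartSLOduration for the three control-plane pods, and the figures line up exactly with watchObservedRunningTime minus podCreationTimestamp: for kube-apiserver-localhost, 20:17:06.359094048 minus 20:17:04 gives the reported 2.359094048s. The same arithmetic, with both timestamps copied from that entry:

```go
// podstart.go: reproduces the podStartSLOduration figure from the
// pod_startup_latency_tracker entry above. Both timestamps are copied
// verbatim from the kube-apiserver-localhost line; the difference matches
// the reported 2.359094048s.
package main

import (
	"fmt"
	"time"
)

func main() {
	const layout = "2006-01-02 15:04:05 -0700 MST"

	created, err := time.Parse(layout, "2025-02-13 20:17:04 +0000 UTC")
	if err != nil {
		panic(err)
	}
	// watchObservedRunningTime from the same log entry (fractional seconds
	// are accepted by time.Parse even though the layout omits them).
	running, err := time.Parse(layout, "2025-02-13 20:17:06.359094048 +0000 UTC")
	if err != nil {
		panic(err)
	}

	fmt.Printf("pod startup duration: %v\n", running.Sub(created)) // 2.359094048s
}
```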
Feb 13 20:17:07.331012 kubelet[2436]: E0213 20:17:07.330704 2436 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 20:17:07.331863 kubelet[2436]: E0213 20:17:07.331774 2436 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 20:17:07.331863 kubelet[2436]: E0213 20:17:07.331805 2436 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 20:17:09.866993 kubelet[2436]: E0213 20:17:09.866924 2436 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 20:17:12.619469 kubelet[2436]: I0213 20:17:12.619389 2436 kuberuntime_manager.go:1702] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Feb 13 20:17:12.619847 containerd[1438]: time="2025-02-13T20:17:12.619755495Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Feb 13 20:17:12.620031 kubelet[2436]: I0213 20:17:12.619971 2436 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Feb 13 20:17:13.320175 systemd[1]: Created slice kubepods-besteffort-pod109dd675_a3d5_4f1d_be78_84b9d8a38545.slice - libcontainer container kubepods-besteffort-pod109dd675_a3d5_4f1d_be78_84b9d8a38545.slice. Feb 13 20:17:13.351861 kubelet[2436]: I0213 20:17:13.351828 2436 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run\" (UniqueName: \"kubernetes.io/host-path/71c6d29d-8b7d-4b8f-92a3-710fe670a99c-run\") pod \"kube-flannel-ds-l2mj5\" (UID: \"71c6d29d-8b7d-4b8f-92a3-710fe670a99c\") " pod="kube-flannel/kube-flannel-ds-l2mj5" Feb 13 20:17:13.352146 kubelet[2436]: I0213 20:17:13.352010 2436 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pvd57\" (UniqueName: \"kubernetes.io/projected/71c6d29d-8b7d-4b8f-92a3-710fe670a99c-kube-api-access-pvd57\") pod \"kube-flannel-ds-l2mj5\" (UID: \"71c6d29d-8b7d-4b8f-92a3-710fe670a99c\") " pod="kube-flannel/kube-flannel-ds-l2mj5" Feb 13 20:17:13.352146 kubelet[2436]: I0213 20:17:13.352039 2436 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/109dd675-a3d5-4f1d-be78-84b9d8a38545-kube-proxy\") pod \"kube-proxy-bcklv\" (UID: \"109dd675-a3d5-4f1d-be78-84b9d8a38545\") " pod="kube-system/kube-proxy-bcklv" Feb 13 20:17:13.352146 kubelet[2436]: I0213 20:17:13.352074 2436 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/109dd675-a3d5-4f1d-be78-84b9d8a38545-xtables-lock\") pod \"kube-proxy-bcklv\" (UID: \"109dd675-a3d5-4f1d-be78-84b9d8a38545\") " pod="kube-system/kube-proxy-bcklv" Feb 13 20:17:13.352146 kubelet[2436]: I0213 20:17:13.352093 2436 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni\" (UniqueName: \"kubernetes.io/host-path/71c6d29d-8b7d-4b8f-92a3-710fe670a99c-cni\") pod \"kube-flannel-ds-l2mj5\" (UID: \"71c6d29d-8b7d-4b8f-92a3-710fe670a99c\") " 
pod="kube-flannel/kube-flannel-ds-l2mj5" Feb 13 20:17:13.352146 kubelet[2436]: I0213 20:17:13.352107 2436 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/71c6d29d-8b7d-4b8f-92a3-710fe670a99c-xtables-lock\") pod \"kube-flannel-ds-l2mj5\" (UID: \"71c6d29d-8b7d-4b8f-92a3-710fe670a99c\") " pod="kube-flannel/kube-flannel-ds-l2mj5" Feb 13 20:17:13.352300 kubelet[2436]: I0213 20:17:13.352124 2436 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/109dd675-a3d5-4f1d-be78-84b9d8a38545-lib-modules\") pod \"kube-proxy-bcklv\" (UID: \"109dd675-a3d5-4f1d-be78-84b9d8a38545\") " pod="kube-system/kube-proxy-bcklv" Feb 13 20:17:13.352496 kubelet[2436]: I0213 20:17:13.352373 2436 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9468t\" (UniqueName: \"kubernetes.io/projected/109dd675-a3d5-4f1d-be78-84b9d8a38545-kube-api-access-9468t\") pod \"kube-proxy-bcklv\" (UID: \"109dd675-a3d5-4f1d-be78-84b9d8a38545\") " pod="kube-system/kube-proxy-bcklv" Feb 13 20:17:13.352496 kubelet[2436]: I0213 20:17:13.352412 2436 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-plugin\" (UniqueName: \"kubernetes.io/host-path/71c6d29d-8b7d-4b8f-92a3-710fe670a99c-cni-plugin\") pod \"kube-flannel-ds-l2mj5\" (UID: \"71c6d29d-8b7d-4b8f-92a3-710fe670a99c\") " pod="kube-flannel/kube-flannel-ds-l2mj5" Feb 13 20:17:13.352496 kubelet[2436]: I0213 20:17:13.352444 2436 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flannel-cfg\" (UniqueName: \"kubernetes.io/configmap/71c6d29d-8b7d-4b8f-92a3-710fe670a99c-flannel-cfg\") pod \"kube-flannel-ds-l2mj5\" (UID: \"71c6d29d-8b7d-4b8f-92a3-710fe670a99c\") " pod="kube-flannel/kube-flannel-ds-l2mj5" Feb 13 20:17:13.353407 systemd[1]: Created slice kubepods-burstable-pod71c6d29d_8b7d_4b8f_92a3_710fe670a99c.slice - libcontainer container kubepods-burstable-pod71c6d29d_8b7d_4b8f_92a3_710fe670a99c.slice. Feb 13 20:17:13.583786 kubelet[2436]: E0213 20:17:13.581857 2436 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 20:17:13.650791 kubelet[2436]: E0213 20:17:13.650754 2436 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 20:17:13.651740 containerd[1438]: time="2025-02-13T20:17:13.651322187Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-bcklv,Uid:109dd675-a3d5-4f1d-be78-84b9d8a38545,Namespace:kube-system,Attempt:0,}" Feb 13 20:17:13.655754 kubelet[2436]: E0213 20:17:13.655730 2436 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 20:17:13.656172 containerd[1438]: time="2025-02-13T20:17:13.656136512Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-flannel-ds-l2mj5,Uid:71c6d29d-8b7d-4b8f-92a3-710fe670a99c,Namespace:kube-flannel,Attempt:0,}" Feb 13 20:17:13.718214 containerd[1438]: time="2025-02-13T20:17:13.718097704Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 20:17:13.718214 containerd[1438]: time="2025-02-13T20:17:13.718175512Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 20:17:13.718402 containerd[1438]: time="2025-02-13T20:17:13.718186713Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 20:17:13.718402 containerd[1438]: time="2025-02-13T20:17:13.718293844Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 20:17:13.719441 containerd[1438]: time="2025-02-13T20:17:13.719372033Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 20:17:13.719525 containerd[1438]: time="2025-02-13T20:17:13.719427598Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 20:17:13.719525 containerd[1438]: time="2025-02-13T20:17:13.719445520Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 20:17:13.719604 containerd[1438]: time="2025-02-13T20:17:13.719526968Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 20:17:13.738856 systemd[1]: Started cri-containerd-4457d28791c7e24ab9fe07ae3637f34dd4ebaf67be5be263d2dd1a9e44f816f3.scope - libcontainer container 4457d28791c7e24ab9fe07ae3637f34dd4ebaf67be5be263d2dd1a9e44f816f3. Feb 13 20:17:13.741477 systemd[1]: Started cri-containerd-770f0f217698d1518c2f5259636017a15b4051593c3ec556e09d749923ae5cd6.scope - libcontainer container 770f0f217698d1518c2f5259636017a15b4051593c3ec556e09d749923ae5cd6. 
Feb 13 20:17:13.761968 containerd[1438]: time="2025-02-13T20:17:13.761877948Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-bcklv,Uid:109dd675-a3d5-4f1d-be78-84b9d8a38545,Namespace:kube-system,Attempt:0,} returns sandbox id \"4457d28791c7e24ab9fe07ae3637f34dd4ebaf67be5be263d2dd1a9e44f816f3\"" Feb 13 20:17:13.762739 kubelet[2436]: E0213 20:17:13.762672 2436 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 20:17:13.766907 containerd[1438]: time="2025-02-13T20:17:13.766820165Z" level=info msg="CreateContainer within sandbox \"4457d28791c7e24ab9fe07ae3637f34dd4ebaf67be5be263d2dd1a9e44f816f3\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Feb 13 20:17:13.775701 containerd[1438]: time="2025-02-13T20:17:13.775604289Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-flannel-ds-l2mj5,Uid:71c6d29d-8b7d-4b8f-92a3-710fe670a99c,Namespace:kube-flannel,Attempt:0,} returns sandbox id \"770f0f217698d1518c2f5259636017a15b4051593c3ec556e09d749923ae5cd6\"" Feb 13 20:17:13.776367 kubelet[2436]: E0213 20:17:13.776256 2436 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 20:17:13.779144 containerd[1438]: time="2025-02-13T20:17:13.779073598Z" level=info msg="PullImage \"docker.io/flannel/flannel-cni-plugin:v1.1.2\"" Feb 13 20:17:13.783274 containerd[1438]: time="2025-02-13T20:17:13.783235857Z" level=info msg="CreateContainer within sandbox \"4457d28791c7e24ab9fe07ae3637f34dd4ebaf67be5be263d2dd1a9e44f816f3\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"64efa431cdc7ef856a655d3c7e5f6de5c75fb6ba03967f227fdef48d576fee49\"" Feb 13 20:17:13.784351 containerd[1438]: time="2025-02-13T20:17:13.784319686Z" level=info msg="StartContainer for \"64efa431cdc7ef856a655d3c7e5f6de5c75fb6ba03967f227fdef48d576fee49\"" Feb 13 20:17:13.809890 systemd[1]: Started cri-containerd-64efa431cdc7ef856a655d3c7e5f6de5c75fb6ba03967f227fdef48d576fee49.scope - libcontainer container 64efa431cdc7ef856a655d3c7e5f6de5c75fb6ba03967f227fdef48d576fee49. 
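The dns.go:153 "Nameserver limits exceeded" warnings that recur throughout this log, including just above, mean the node's resolv.conf lists more nameservers than the kubelet will pass through to pods; only the first few survive, which is why the applied line is exactly "1.1.1.1 1.0.0.1 8.8.8.8". A small sketch of that truncation rule, assuming the conventional /etc/resolv.conf path and a cap of three inferred from the warning text:

```go
// resolvcheck.go: illustrates the nameserver cap behind the recurring
// "Nameserver limits exceeded" warnings. It keeps only the first few
// nameserver entries from a resolv.conf and reports which ones would be
// omitted. The path and the limit of three are assumptions for
// illustration, not kubelet source code.
package main

import (
	"bufio"
	"fmt"
	"os"
	"strings"
)

const maxNameservers = 3 // cap inferred from the three servers the log reports as applied

func main() {
	path := "/etc/resolv.conf"
	if len(os.Args) > 1 {
		path = os.Args[1]
	}
	f, err := os.Open(path)
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	defer f.Close()

	var servers []string
	sc := bufio.NewScanner(f)
	for sc.Scan() {
		fields := strings.Fields(sc.Text())
		if len(fields) >= 2 && fields[0] == "nameserver" {
			servers = append(servers, fields[1])
		}
	}
	if err := sc.Err(); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}

	if len(servers) <= maxNameservers {
		fmt.Printf("%d nameservers, within the limit: %s\n", len(servers), strings.Join(servers, " "))
		return
	}
	fmt.Printf("limit exceeded: keeping %s, omitting %s\n",
		strings.Join(servers[:maxNameservers], " "),
		strings.Join(servers[maxNameservers:], " "))
}
```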
Feb 13 20:17:13.835168 containerd[1438]: time="2025-02-13T20:17:13.835046509Z" level=info msg="StartContainer for \"64efa431cdc7ef856a655d3c7e5f6de5c75fb6ba03967f227fdef48d576fee49\" returns successfully" Feb 13 20:17:14.343584 kubelet[2436]: E0213 20:17:14.343164 2436 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 20:17:14.344431 kubelet[2436]: E0213 20:17:14.343761 2436 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 20:17:14.352270 kubelet[2436]: I0213 20:17:14.352213 2436 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-bcklv" podStartSLOduration=1.3521997049999999 podStartE2EDuration="1.352199705s" podCreationTimestamp="2025-02-13 20:17:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-02-13 20:17:14.352029449 +0000 UTC m=+9.127676658" watchObservedRunningTime="2025-02-13 20:17:14.352199705 +0000 UTC m=+9.127846914" Feb 13 20:17:14.888643 containerd[1438]: time="2025-02-13T20:17:14.888575814Z" level=error msg="PullImage \"docker.io/flannel/flannel-cni-plugin:v1.1.2\" failed" error="failed to pull and unpack image \"docker.io/flannel/flannel-cni-plugin:v1.1.2\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/flannel/flannel-cni-plugin/manifests/sha256:bf4b62b131666d040f35a327d906ee5a3418280b68a88d9b9c7e828057210443: 429 Too Many Requests - Server message: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit" Feb 13 20:17:14.888643 containerd[1438]: time="2025-02-13T20:17:14.888621459Z" level=info msg="stop pulling image docker.io/flannel/flannel-cni-plugin:v1.1.2: active requests=0, bytes read=11110" Feb 13 20:17:14.889478 kubelet[2436]: E0213 20:17:14.888786 2436 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = failed to pull and unpack image \"docker.io/flannel/flannel-cni-plugin:v1.1.2\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/flannel/flannel-cni-plugin/manifests/sha256:bf4b62b131666d040f35a327d906ee5a3418280b68a88d9b9c7e828057210443: 429 Too Many Requests - Server message: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit" image="docker.io/flannel/flannel-cni-plugin:v1.1.2" Feb 13 20:17:14.889478 kubelet[2436]: E0213 20:17:14.888839 2436 kuberuntime_image.go:55] "Failed to pull image" err="failed to pull and unpack image \"docker.io/flannel/flannel-cni-plugin:v1.1.2\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/flannel/flannel-cni-plugin/manifests/sha256:bf4b62b131666d040f35a327d906ee5a3418280b68a88d9b9c7e828057210443: 429 Too Many Requests - Server message: toomanyrequests: You have reached your pull rate limit. 
You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit" image="docker.io/flannel/flannel-cni-plugin:v1.1.2" Feb 13 20:17:14.889754 kubelet[2436]: E0213 20:17:14.889015 2436 kuberuntime_manager.go:1341] "Unhandled Error" err="init container &Container{Name:install-cni-plugin,Image:docker.io/flannel/flannel-cni-plugin:v1.1.2,Command:[cp],Args:[-f /flannel /opt/cni/bin/flannel],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:cni-plugin,ReadOnly:false,MountPath:/opt/cni/bin,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-pvd57,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod kube-flannel-ds-l2mj5_kube-flannel(71c6d29d-8b7d-4b8f-92a3-710fe670a99c): ErrImagePull: failed to pull and unpack image \"docker.io/flannel/flannel-cni-plugin:v1.1.2\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/flannel/flannel-cni-plugin/manifests/sha256:bf4b62b131666d040f35a327d906ee5a3418280b68a88d9b9c7e828057210443: 429 Too Many Requests - Server message: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit" logger="UnhandledError" Feb 13 20:17:14.891069 kubelet[2436]: E0213 20:17:14.891037 2436 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"install-cni-plugin\" with ErrImagePull: \"failed to pull and unpack image \\\"docker.io/flannel/flannel-cni-plugin:v1.1.2\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/flannel/flannel-cni-plugin/manifests/sha256:bf4b62b131666d040f35a327d906ee5a3418280b68a88d9b9c7e828057210443: 429 Too Many Requests - Server message: toomanyrequests: You have reached your pull rate limit. 
You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit\"" pod="kube-flannel/kube-flannel-ds-l2mj5" podUID="71c6d29d-8b7d-4b8f-92a3-710fe670a99c" Feb 13 20:17:15.344776 kubelet[2436]: E0213 20:17:15.344612 2436 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 20:17:15.345537 kubelet[2436]: E0213 20:17:15.345476 2436 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"install-cni-plugin\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/flannel/flannel-cni-plugin:v1.1.2\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/flannel/flannel-cni-plugin:v1.1.2\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/flannel/flannel-cni-plugin/manifests/sha256:bf4b62b131666d040f35a327d906ee5a3418280b68a88d9b9c7e828057210443: 429 Too Many Requests - Server message: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit\"" pod="kube-flannel/kube-flannel-ds-l2mj5" podUID="71c6d29d-8b7d-4b8f-92a3-710fe670a99c" Feb 13 20:17:16.132726 update_engine[1428]: I20250213 20:17:16.132625 1428 update_attempter.cc:509] Updating boot flags... Feb 13 20:17:16.153784 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 40 scanned by (udev-worker) (2760) Feb 13 20:17:16.168860 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 40 scanned by (udev-worker) (2762) Feb 13 20:17:16.174021 kubelet[2436]: E0213 20:17:16.173645 2436 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 20:17:16.206966 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 40 scanned by (udev-worker) (2762) Feb 13 20:17:19.876169 kubelet[2436]: E0213 20:17:19.876123 2436 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 20:17:28.309384 kubelet[2436]: E0213 20:17:28.309328 2436 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 20:17:28.311025 containerd[1438]: time="2025-02-13T20:17:28.310997371Z" level=info msg="PullImage \"docker.io/flannel/flannel-cni-plugin:v1.1.2\"" Feb 13 20:17:29.432227 containerd[1438]: time="2025-02-13T20:17:29.432129713Z" level=error msg="PullImage \"docker.io/flannel/flannel-cni-plugin:v1.1.2\" failed" error="failed to pull and unpack image \"docker.io/flannel/flannel-cni-plugin:v1.1.2\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/flannel/flannel-cni-plugin/manifests/sha256:bf4b62b131666d040f35a327d906ee5a3418280b68a88d9b9c7e828057210443: 429 Too Many Requests - Server message: toomanyrequests: You have reached your pull rate limit. 
You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit" Feb 13 20:17:29.432227 containerd[1438]: time="2025-02-13T20:17:29.432204397Z" level=info msg="stop pulling image docker.io/flannel/flannel-cni-plugin:v1.1.2: active requests=0, bytes read=11109" Feb 13 20:17:29.433883 kubelet[2436]: E0213 20:17:29.432385 2436 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = failed to pull and unpack image \"docker.io/flannel/flannel-cni-plugin:v1.1.2\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/flannel/flannel-cni-plugin/manifests/sha256:bf4b62b131666d040f35a327d906ee5a3418280b68a88d9b9c7e828057210443: 429 Too Many Requests - Server message: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit" image="docker.io/flannel/flannel-cni-plugin:v1.1.2" Feb 13 20:17:29.433883 kubelet[2436]: E0213 20:17:29.432438 2436 kuberuntime_image.go:55] "Failed to pull image" err="failed to pull and unpack image \"docker.io/flannel/flannel-cni-plugin:v1.1.2\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/flannel/flannel-cni-plugin/manifests/sha256:bf4b62b131666d040f35a327d906ee5a3418280b68a88d9b9c7e828057210443: 429 Too Many Requests - Server message: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit" image="docker.io/flannel/flannel-cni-plugin:v1.1.2" Feb 13 20:17:29.434160 kubelet[2436]: E0213 20:17:29.432543 2436 kuberuntime_manager.go:1341] "Unhandled Error" err="init container &Container{Name:install-cni-plugin,Image:docker.io/flannel/flannel-cni-plugin:v1.1.2,Command:[cp],Args:[-f /flannel /opt/cni/bin/flannel],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:cni-plugin,ReadOnly:false,MountPath:/opt/cni/bin,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-pvd57,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod kube-flannel-ds-l2mj5_kube-flannel(71c6d29d-8b7d-4b8f-92a3-710fe670a99c): ErrImagePull: failed to pull and unpack image \"docker.io/flannel/flannel-cni-plugin:v1.1.2\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/flannel/flannel-cni-plugin/manifests/sha256:bf4b62b131666d040f35a327d906ee5a3418280b68a88d9b9c7e828057210443: 429 Too Many Requests - Server message: toomanyrequests: You have reached your pull rate limit. 
You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit" logger="UnhandledError" Feb 13 20:17:29.434212 kubelet[2436]: E0213 20:17:29.433782 2436 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"install-cni-plugin\" with ErrImagePull: \"failed to pull and unpack image \\\"docker.io/flannel/flannel-cni-plugin:v1.1.2\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/flannel/flannel-cni-plugin/manifests/sha256:bf4b62b131666d040f35a327d906ee5a3418280b68a88d9b9c7e828057210443: 429 Too Many Requests - Server message: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit\"" pod="kube-flannel/kube-flannel-ds-l2mj5" podUID="71c6d29d-8b7d-4b8f-92a3-710fe670a99c" Feb 13 20:17:30.430304 systemd[1]: Started sshd@5-10.0.0.8:22-10.0.0.1:53636.service - OpenSSH per-connection server daemon (10.0.0.1:53636). Feb 13 20:17:30.466134 sshd[2771]: Accepted publickey for core from 10.0.0.1 port 53636 ssh2: RSA SHA256:LaFXDQ5kJ4LcD/Es1CLb+6ve3k3017804ZdZMiXaUQg Feb 13 20:17:30.467548 sshd[2771]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 20:17:30.471437 systemd-logind[1421]: New session 6 of user core. Feb 13 20:17:30.479922 systemd[1]: Started session-6.scope - Session 6 of User core. Feb 13 20:17:30.596072 sshd[2771]: pam_unix(sshd:session): session closed for user core Feb 13 20:17:30.598536 systemd[1]: sshd@5-10.0.0.8:22-10.0.0.1:53636.service: Deactivated successfully. Feb 13 20:17:30.601135 systemd[1]: session-6.scope: Deactivated successfully. Feb 13 20:17:30.602258 systemd-logind[1421]: Session 6 logged out. Waiting for processes to exit. Feb 13 20:17:30.603745 systemd-logind[1421]: Removed session 6. Feb 13 20:17:35.607736 systemd[1]: Started sshd@6-10.0.0.8:22-10.0.0.1:53928.service - OpenSSH per-connection server daemon (10.0.0.1:53928). Feb 13 20:17:35.642588 sshd[2790]: Accepted publickey for core from 10.0.0.1 port 53928 ssh2: RSA SHA256:LaFXDQ5kJ4LcD/Es1CLb+6ve3k3017804ZdZMiXaUQg Feb 13 20:17:35.643777 sshd[2790]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 20:17:35.647746 systemd-logind[1421]: New session 7 of user core. Feb 13 20:17:35.658889 systemd[1]: Started session-7.scope - Session 7 of User core. Feb 13 20:17:35.763131 sshd[2790]: pam_unix(sshd:session): session closed for user core Feb 13 20:17:35.766138 systemd[1]: sshd@6-10.0.0.8:22-10.0.0.1:53928.service: Deactivated successfully. Feb 13 20:17:35.768185 systemd[1]: session-7.scope: Deactivated successfully. Feb 13 20:17:35.768754 systemd-logind[1421]: Session 7 logged out. Waiting for processes to exit. Feb 13 20:17:35.769617 systemd-logind[1421]: Removed session 7. Feb 13 20:17:40.774091 systemd[1]: Started sshd@7-10.0.0.8:22-10.0.0.1:53938.service - OpenSSH per-connection server daemon (10.0.0.1:53938). Feb 13 20:17:40.813922 sshd[2806]: Accepted publickey for core from 10.0.0.1 port 53938 ssh2: RSA SHA256:LaFXDQ5kJ4LcD/Es1CLb+6ve3k3017804ZdZMiXaUQg Feb 13 20:17:40.815047 sshd[2806]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 20:17:40.818630 systemd-logind[1421]: New session 8 of user core. Feb 13 20:17:40.825991 systemd[1]: Started session-8.scope - Session 8 of User core. 
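The 429 responses above come from Docker Hub's pull rate limit: the flannel-cni-plugin image is pulled anonymously from docker.io, and once the anonymous quota for this client is exhausted every manifest fetch fails with "toomanyrequests" until the window resets. The remaining quota can be inspected with the flow Docker documents for this purpose (a token from auth.docker.io, then a HEAD request against the ratelimitpreview manifest); the endpoint and header names below are taken from that documentation and may change, so treat this as a diagnostic sketch rather than a stable API.

# Query Docker Hub's anonymous pull-rate quota using only the standard library.
# The token endpoint, manifest URL and header names follow Docker's published
# rate-limit documentation and are assumptions if that documentation changes.
import json
import urllib.request

TOKEN_URL = ("https://auth.docker.io/token"
             "?service=registry.docker.io"
             "&scope=repository:ratelimitpreview/test:pull")
MANIFEST_URL = "https://registry-1.docker.io/v2/ratelimitpreview/test/manifests/latest"

def check_pull_rate_limit():
    with urllib.request.urlopen(TOKEN_URL) as resp:
        token = json.load(resp)["token"]
    req = urllib.request.Request(MANIFEST_URL, method="HEAD")
    req.add_header("Authorization", "Bearer " + token)
    with urllib.request.urlopen(req) as resp:
        # e.g. "100;w=21600" means 100 pulls per 6-hour window for this client
        return (resp.headers.get("ratelimit-limit"),
                resp.headers.get("ratelimit-remaining"))

if __name__ == "__main__":
    limit, remaining = check_pull_rate_limit()
    print("limit=%s remaining=%s" % (limit, remaining))

Authenticating the pulls (for example an imagePullSecret on the flannel DaemonSet, or a containerd registry mirror) raises or avoids the limit, which is what the server message itself suggests.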
Feb 13 20:17:40.936949 sshd[2806]: pam_unix(sshd:session): session closed for user core Feb 13 20:17:40.939259 systemd[1]: session-8.scope: Deactivated successfully. Feb 13 20:17:40.940426 systemd[1]: sshd@7-10.0.0.8:22-10.0.0.1:53938.service: Deactivated successfully. Feb 13 20:17:40.942855 systemd-logind[1421]: Session 8 logged out. Waiting for processes to exit. Feb 13 20:17:40.943605 systemd-logind[1421]: Removed session 8. Feb 13 20:17:42.309460 kubelet[2436]: E0213 20:17:42.309303 2436 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 20:17:42.310474 kubelet[2436]: E0213 20:17:42.310235 2436 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"install-cni-plugin\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/flannel/flannel-cni-plugin:v1.1.2\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/flannel/flannel-cni-plugin:v1.1.2\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/flannel/flannel-cni-plugin/manifests/sha256:bf4b62b131666d040f35a327d906ee5a3418280b68a88d9b9c7e828057210443: 429 Too Many Requests - Server message: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit\"" pod="kube-flannel/kube-flannel-ds-l2mj5" podUID="71c6d29d-8b7d-4b8f-92a3-710fe670a99c" Feb 13 20:17:45.947243 systemd[1]: Started sshd@8-10.0.0.8:22-10.0.0.1:38908.service - OpenSSH per-connection server daemon (10.0.0.1:38908). Feb 13 20:17:45.983356 sshd[2825]: Accepted publickey for core from 10.0.0.1 port 38908 ssh2: RSA SHA256:LaFXDQ5kJ4LcD/Es1CLb+6ve3k3017804ZdZMiXaUQg Feb 13 20:17:45.984802 sshd[2825]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 20:17:45.988146 systemd-logind[1421]: New session 9 of user core. Feb 13 20:17:45.997916 systemd[1]: Started session-9.scope - Session 9 of User core. Feb 13 20:17:46.107043 sshd[2825]: pam_unix(sshd:session): session closed for user core Feb 13 20:17:46.109958 systemd[1]: sshd@8-10.0.0.8:22-10.0.0.1:38908.service: Deactivated successfully. Feb 13 20:17:46.111555 systemd[1]: session-9.scope: Deactivated successfully. Feb 13 20:17:46.112132 systemd-logind[1421]: Session 9 logged out. Waiting for processes to exit. Feb 13 20:17:46.112960 systemd-logind[1421]: Removed session 9. Feb 13 20:17:51.117191 systemd[1]: Started sshd@9-10.0.0.8:22-10.0.0.1:38918.service - OpenSSH per-connection server daemon (10.0.0.1:38918). Feb 13 20:17:51.152513 sshd[2841]: Accepted publickey for core from 10.0.0.1 port 38918 ssh2: RSA SHA256:LaFXDQ5kJ4LcD/Es1CLb+6ve3k3017804ZdZMiXaUQg Feb 13 20:17:51.153816 sshd[2841]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 20:17:51.157352 systemd-logind[1421]: New session 10 of user core. Feb 13 20:17:51.171843 systemd[1]: Started session-10.scope - Session 10 of User core. Feb 13 20:17:51.276228 sshd[2841]: pam_unix(sshd:session): session closed for user core Feb 13 20:17:51.279499 systemd[1]: sshd@9-10.0.0.8:22-10.0.0.1:38918.service: Deactivated successfully. Feb 13 20:17:51.281099 systemd[1]: session-10.scope: Deactivated successfully. Feb 13 20:17:51.282355 systemd-logind[1421]: Session 10 logged out. Waiting for processes to exit. Feb 13 20:17:51.283484 systemd-logind[1421]: Removed session 10. 
Feb 13 20:17:56.287206 systemd[1]: Started sshd@10-10.0.0.8:22-10.0.0.1:51024.service - OpenSSH per-connection server daemon (10.0.0.1:51024). Feb 13 20:17:56.309172 kubelet[2436]: E0213 20:17:56.309138 2436 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 20:17:56.311394 containerd[1438]: time="2025-02-13T20:17:56.311083000Z" level=info msg="PullImage \"docker.io/flannel/flannel-cni-plugin:v1.1.2\"" Feb 13 20:17:56.322117 sshd[2857]: Accepted publickey for core from 10.0.0.1 port 51024 ssh2: RSA SHA256:LaFXDQ5kJ4LcD/Es1CLb+6ve3k3017804ZdZMiXaUQg Feb 13 20:17:56.323413 sshd[2857]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 20:17:56.327136 systemd-logind[1421]: New session 11 of user core. Feb 13 20:17:56.334872 systemd[1]: Started session-11.scope - Session 11 of User core. Feb 13 20:17:56.444642 sshd[2857]: pam_unix(sshd:session): session closed for user core Feb 13 20:17:56.447767 systemd[1]: sshd@10-10.0.0.8:22-10.0.0.1:51024.service: Deactivated successfully. Feb 13 20:17:56.449431 systemd[1]: session-11.scope: Deactivated successfully. Feb 13 20:17:56.450087 systemd-logind[1421]: Session 11 logged out. Waiting for processes to exit. Feb 13 20:17:56.450994 systemd-logind[1421]: Removed session 11. Feb 13 20:17:57.471851 containerd[1438]: time="2025-02-13T20:17:57.471791321Z" level=error msg="PullImage \"docker.io/flannel/flannel-cni-plugin:v1.1.2\" failed" error="failed to pull and unpack image \"docker.io/flannel/flannel-cni-plugin:v1.1.2\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/flannel/flannel-cni-plugin/manifests/sha256:bf4b62b131666d040f35a327d906ee5a3418280b68a88d9b9c7e828057210443: 429 Too Many Requests - Server message: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit" Feb 13 20:17:57.472245 containerd[1438]: time="2025-02-13T20:17:57.471928484Z" level=info msg="stop pulling image docker.io/flannel/flannel-cni-plugin:v1.1.2: active requests=0, bytes read=11110" Feb 13 20:17:57.472277 kubelet[2436]: E0213 20:17:57.471975 2436 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = failed to pull and unpack image \"docker.io/flannel/flannel-cni-plugin:v1.1.2\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/flannel/flannel-cni-plugin/manifests/sha256:bf4b62b131666d040f35a327d906ee5a3418280b68a88d9b9c7e828057210443: 429 Too Many Requests - Server message: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit" image="docker.io/flannel/flannel-cni-plugin:v1.1.2" Feb 13 20:17:57.472277 kubelet[2436]: E0213 20:17:57.472028 2436 kuberuntime_image.go:55] "Failed to pull image" err="failed to pull and unpack image \"docker.io/flannel/flannel-cni-plugin:v1.1.2\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/flannel/flannel-cni-plugin/manifests/sha256:bf4b62b131666d040f35a327d906ee5a3418280b68a88d9b9c7e828057210443: 429 Too Many Requests - Server message: toomanyrequests: You have reached your pull rate limit. 
You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit" image="docker.io/flannel/flannel-cni-plugin:v1.1.2" Feb 13 20:17:57.472528 kubelet[2436]: E0213 20:17:57.472111 2436 kuberuntime_manager.go:1341] "Unhandled Error" err="init container &Container{Name:install-cni-plugin,Image:docker.io/flannel/flannel-cni-plugin:v1.1.2,Command:[cp],Args:[-f /flannel /opt/cni/bin/flannel],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:cni-plugin,ReadOnly:false,MountPath:/opt/cni/bin,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-pvd57,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod kube-flannel-ds-l2mj5_kube-flannel(71c6d29d-8b7d-4b8f-92a3-710fe670a99c): ErrImagePull: failed to pull and unpack image \"docker.io/flannel/flannel-cni-plugin:v1.1.2\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/flannel/flannel-cni-plugin/manifests/sha256:bf4b62b131666d040f35a327d906ee5a3418280b68a88d9b9c7e828057210443: 429 Too Many Requests - Server message: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit" logger="UnhandledError" Feb 13 20:17:57.473300 kubelet[2436]: E0213 20:17:57.473271 2436 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"install-cni-plugin\" with ErrImagePull: \"failed to pull and unpack image \\\"docker.io/flannel/flannel-cni-plugin:v1.1.2\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/flannel/flannel-cni-plugin/manifests/sha256:bf4b62b131666d040f35a327d906ee5a3418280b68a88d9b9c7e828057210443: 429 Too Many Requests - Server message: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit\"" pod="kube-flannel/kube-flannel-ds-l2mj5" podUID="71c6d29d-8b7d-4b8f-92a3-710fe670a99c" Feb 13 20:18:01.455415 systemd[1]: Started sshd@11-10.0.0.8:22-10.0.0.1:51036.service - OpenSSH per-connection server daemon (10.0.0.1:51036). Feb 13 20:18:01.490302 sshd[2873]: Accepted publickey for core from 10.0.0.1 port 51036 ssh2: RSA SHA256:LaFXDQ5kJ4LcD/Es1CLb+6ve3k3017804ZdZMiXaUQg Feb 13 20:18:01.491495 sshd[2873]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 20:18:01.495419 systemd-logind[1421]: New session 12 of user core. Feb 13 20:18:01.507866 systemd[1]: Started session-12.scope - Session 12 of User core. Feb 13 20:18:01.613669 sshd[2873]: pam_unix(sshd:session): session closed for user core Feb 13 20:18:01.617415 systemd[1]: sshd@11-10.0.0.8:22-10.0.0.1:51036.service: Deactivated successfully. Feb 13 20:18:01.620220 systemd[1]: session-12.scope: Deactivated successfully. 
Feb 13 20:18:01.620827 systemd-logind[1421]: Session 12 logged out. Waiting for processes to exit. Feb 13 20:18:01.621639 systemd-logind[1421]: Removed session 12. Feb 13 20:18:06.624244 systemd[1]: Started sshd@12-10.0.0.8:22-10.0.0.1:41394.service - OpenSSH per-connection server daemon (10.0.0.1:41394). Feb 13 20:18:06.660442 sshd[2890]: Accepted publickey for core from 10.0.0.1 port 41394 ssh2: RSA SHA256:LaFXDQ5kJ4LcD/Es1CLb+6ve3k3017804ZdZMiXaUQg Feb 13 20:18:06.661736 sshd[2890]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 20:18:06.665942 systemd-logind[1421]: New session 13 of user core. Feb 13 20:18:06.680885 systemd[1]: Started session-13.scope - Session 13 of User core. Feb 13 20:18:06.787282 sshd[2890]: pam_unix(sshd:session): session closed for user core Feb 13 20:18:06.791169 systemd[1]: sshd@12-10.0.0.8:22-10.0.0.1:41394.service: Deactivated successfully. Feb 13 20:18:06.794145 systemd[1]: session-13.scope: Deactivated successfully. Feb 13 20:18:06.794722 systemd-logind[1421]: Session 13 logged out. Waiting for processes to exit. Feb 13 20:18:06.795637 systemd-logind[1421]: Removed session 13. Feb 13 20:18:09.309406 kubelet[2436]: E0213 20:18:09.309354 2436 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 20:18:09.314406 kubelet[2436]: E0213 20:18:09.314358 2436 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"install-cni-plugin\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/flannel/flannel-cni-plugin:v1.1.2\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/flannel/flannel-cni-plugin:v1.1.2\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/flannel/flannel-cni-plugin/manifests/sha256:bf4b62b131666d040f35a327d906ee5a3418280b68a88d9b9c7e828057210443: 429 Too Many Requests - Server message: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit\"" pod="kube-flannel/kube-flannel-ds-l2mj5" podUID="71c6d29d-8b7d-4b8f-92a3-710fe670a99c" Feb 13 20:18:11.797244 systemd[1]: Started sshd@13-10.0.0.8:22-10.0.0.1:41398.service - OpenSSH per-connection server daemon (10.0.0.1:41398). Feb 13 20:18:11.832104 sshd[2906]: Accepted publickey for core from 10.0.0.1 port 41398 ssh2: RSA SHA256:LaFXDQ5kJ4LcD/Es1CLb+6ve3k3017804ZdZMiXaUQg Feb 13 20:18:11.833273 sshd[2906]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 20:18:11.836999 systemd-logind[1421]: New session 14 of user core. Feb 13 20:18:11.853868 systemd[1]: Started session-14.scope - Session 14 of User core. Feb 13 20:18:11.961520 sshd[2906]: pam_unix(sshd:session): session closed for user core Feb 13 20:18:11.964567 systemd[1]: sshd@13-10.0.0.8:22-10.0.0.1:41398.service: Deactivated successfully. Feb 13 20:18:11.967342 systemd[1]: session-14.scope: Deactivated successfully. Feb 13 20:18:11.968105 systemd-logind[1421]: Session 14 logged out. Waiting for processes to exit. Feb 13 20:18:11.969294 systemd-logind[1421]: Removed session 14. Feb 13 20:18:16.975163 systemd[1]: Started sshd@14-10.0.0.8:22-10.0.0.1:58870.service - OpenSSH per-connection server daemon (10.0.0.1:58870). 
Feb 13 20:18:17.011826 sshd[2924]: Accepted publickey for core from 10.0.0.1 port 58870 ssh2: RSA SHA256:LaFXDQ5kJ4LcD/Es1CLb+6ve3k3017804ZdZMiXaUQg Feb 13 20:18:17.013086 sshd[2924]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 20:18:17.016757 systemd-logind[1421]: New session 15 of user core. Feb 13 20:18:17.031935 systemd[1]: Started session-15.scope - Session 15 of User core. Feb 13 20:18:17.134950 sshd[2924]: pam_unix(sshd:session): session closed for user core Feb 13 20:18:17.138496 systemd[1]: sshd@14-10.0.0.8:22-10.0.0.1:58870.service: Deactivated successfully. Feb 13 20:18:17.140202 systemd[1]: session-15.scope: Deactivated successfully. Feb 13 20:18:17.140761 systemd-logind[1421]: Session 15 logged out. Waiting for processes to exit. Feb 13 20:18:17.141507 systemd-logind[1421]: Removed session 15. Feb 13 20:18:22.147168 systemd[1]: Started sshd@15-10.0.0.8:22-10.0.0.1:58880.service - OpenSSH per-connection server daemon (10.0.0.1:58880). Feb 13 20:18:22.182721 sshd[2939]: Accepted publickey for core from 10.0.0.1 port 58880 ssh2: RSA SHA256:LaFXDQ5kJ4LcD/Es1CLb+6ve3k3017804ZdZMiXaUQg Feb 13 20:18:22.183904 sshd[2939]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 20:18:22.187498 systemd-logind[1421]: New session 16 of user core. Feb 13 20:18:22.194929 systemd[1]: Started session-16.scope - Session 16 of User core. Feb 13 20:18:22.297174 sshd[2939]: pam_unix(sshd:session): session closed for user core Feb 13 20:18:22.300858 systemd[1]: sshd@15-10.0.0.8:22-10.0.0.1:58880.service: Deactivated successfully. Feb 13 20:18:22.303291 systemd[1]: session-16.scope: Deactivated successfully. Feb 13 20:18:22.304255 systemd-logind[1421]: Session 16 logged out. Waiting for processes to exit. Feb 13 20:18:22.305105 systemd-logind[1421]: Removed session 16. Feb 13 20:18:23.309139 kubelet[2436]: E0213 20:18:23.308961 2436 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 20:18:23.309983 kubelet[2436]: E0213 20:18:23.309905 2436 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"install-cni-plugin\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/flannel/flannel-cni-plugin:v1.1.2\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/flannel/flannel-cni-plugin:v1.1.2\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/flannel/flannel-cni-plugin/manifests/sha256:bf4b62b131666d040f35a327d906ee5a3418280b68a88d9b9c7e828057210443: 429 Too Many Requests - Server message: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit\"" pod="kube-flannel/kube-flannel-ds-l2mj5" podUID="71c6d29d-8b7d-4b8f-92a3-710fe670a99c" Feb 13 20:18:27.308085 systemd[1]: Started sshd@16-10.0.0.8:22-10.0.0.1:42146.service - OpenSSH per-connection server daemon (10.0.0.1:42146). Feb 13 20:18:27.343493 sshd[2954]: Accepted publickey for core from 10.0.0.1 port 42146 ssh2: RSA SHA256:LaFXDQ5kJ4LcD/Es1CLb+6ve3k3017804ZdZMiXaUQg Feb 13 20:18:27.344633 sshd[2954]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 20:18:27.348758 systemd-logind[1421]: New session 17 of user core. Feb 13 20:18:27.357862 systemd[1]: Started session-17.scope - Session 17 of User core. 
Feb 13 20:18:27.463504 sshd[2954]: pam_unix(sshd:session): session closed for user core Feb 13 20:18:27.466073 systemd[1]: sshd@16-10.0.0.8:22-10.0.0.1:42146.service: Deactivated successfully. Feb 13 20:18:27.467668 systemd[1]: session-17.scope: Deactivated successfully. Feb 13 20:18:27.468999 systemd-logind[1421]: Session 17 logged out. Waiting for processes to exit. Feb 13 20:18:27.470198 systemd-logind[1421]: Removed session 17. Feb 13 20:18:32.309263 kubelet[2436]: E0213 20:18:32.309177 2436 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 20:18:32.474175 systemd[1]: Started sshd@17-10.0.0.8:22-10.0.0.1:33820.service - OpenSSH per-connection server daemon (10.0.0.1:33820). Feb 13 20:18:32.509263 sshd[2969]: Accepted publickey for core from 10.0.0.1 port 33820 ssh2: RSA SHA256:LaFXDQ5kJ4LcD/Es1CLb+6ve3k3017804ZdZMiXaUQg Feb 13 20:18:32.510439 sshd[2969]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 20:18:32.514614 systemd-logind[1421]: New session 18 of user core. Feb 13 20:18:32.520879 systemd[1]: Started session-18.scope - Session 18 of User core. Feb 13 20:18:32.624388 sshd[2969]: pam_unix(sshd:session): session closed for user core Feb 13 20:18:32.627579 systemd[1]: sshd@17-10.0.0.8:22-10.0.0.1:33820.service: Deactivated successfully. Feb 13 20:18:32.629309 systemd[1]: session-18.scope: Deactivated successfully. Feb 13 20:18:32.629997 systemd-logind[1421]: Session 18 logged out. Waiting for processes to exit. Feb 13 20:18:32.631053 systemd-logind[1421]: Removed session 18. Feb 13 20:18:37.309006 kubelet[2436]: E0213 20:18:37.308973 2436 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 20:18:37.637205 systemd[1]: Started sshd@18-10.0.0.8:22-10.0.0.1:33836.service - OpenSSH per-connection server daemon (10.0.0.1:33836). Feb 13 20:18:37.672089 sshd[2984]: Accepted publickey for core from 10.0.0.1 port 33836 ssh2: RSA SHA256:LaFXDQ5kJ4LcD/Es1CLb+6ve3k3017804ZdZMiXaUQg Feb 13 20:18:37.673271 sshd[2984]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 20:18:37.676896 systemd-logind[1421]: New session 19 of user core. Feb 13 20:18:37.690857 systemd[1]: Started session-19.scope - Session 19 of User core. Feb 13 20:18:37.796748 sshd[2984]: pam_unix(sshd:session): session closed for user core Feb 13 20:18:37.799934 systemd[1]: sshd@18-10.0.0.8:22-10.0.0.1:33836.service: Deactivated successfully. Feb 13 20:18:37.801529 systemd[1]: session-19.scope: Deactivated successfully. Feb 13 20:18:37.802129 systemd-logind[1421]: Session 19 logged out. Waiting for processes to exit. Feb 13 20:18:37.802994 systemd-logind[1421]: Removed session 19. 
Feb 13 20:18:38.308845 kubelet[2436]: E0213 20:18:38.308811 2436 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 20:18:38.309905 containerd[1438]: time="2025-02-13T20:18:38.309699743Z" level=info msg="PullImage \"docker.io/flannel/flannel-cni-plugin:v1.1.2\"" Feb 13 20:18:39.419973 containerd[1438]: time="2025-02-13T20:18:39.419904772Z" level=error msg="PullImage \"docker.io/flannel/flannel-cni-plugin:v1.1.2\" failed" error="failed to pull and unpack image \"docker.io/flannel/flannel-cni-plugin:v1.1.2\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/flannel/flannel-cni-plugin/manifests/sha256:bf4b62b131666d040f35a327d906ee5a3418280b68a88d9b9c7e828057210443: 429 Too Many Requests - Server message: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit" Feb 13 20:18:39.420322 containerd[1438]: time="2025-02-13T20:18:39.419994173Z" level=info msg="stop pulling image docker.io/flannel/flannel-cni-plugin:v1.1.2: active requests=0, bytes read=11110" Feb 13 20:18:39.420367 kubelet[2436]: E0213 20:18:39.420117 2436 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = failed to pull and unpack image \"docker.io/flannel/flannel-cni-plugin:v1.1.2\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/flannel/flannel-cni-plugin/manifests/sha256:bf4b62b131666d040f35a327d906ee5a3418280b68a88d9b9c7e828057210443: 429 Too Many Requests - Server message: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit" image="docker.io/flannel/flannel-cni-plugin:v1.1.2" Feb 13 20:18:39.420367 kubelet[2436]: E0213 20:18:39.420198 2436 kuberuntime_image.go:55] "Failed to pull image" err="failed to pull and unpack image \"docker.io/flannel/flannel-cni-plugin:v1.1.2\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/flannel/flannel-cni-plugin/manifests/sha256:bf4b62b131666d040f35a327d906ee5a3418280b68a88d9b9c7e828057210443: 429 Too Many Requests - Server message: toomanyrequests: You have reached your pull rate limit. 
You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit" image="docker.io/flannel/flannel-cni-plugin:v1.1.2" Feb 13 20:18:39.420598 kubelet[2436]: E0213 20:18:39.420327 2436 kuberuntime_manager.go:1341] "Unhandled Error" err="init container &Container{Name:install-cni-plugin,Image:docker.io/flannel/flannel-cni-plugin:v1.1.2,Command:[cp],Args:[-f /flannel /opt/cni/bin/flannel],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:cni-plugin,ReadOnly:false,MountPath:/opt/cni/bin,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-pvd57,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod kube-flannel-ds-l2mj5_kube-flannel(71c6d29d-8b7d-4b8f-92a3-710fe670a99c): ErrImagePull: failed to pull and unpack image \"docker.io/flannel/flannel-cni-plugin:v1.1.2\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/flannel/flannel-cni-plugin/manifests/sha256:bf4b62b131666d040f35a327d906ee5a3418280b68a88d9b9c7e828057210443: 429 Too Many Requests - Server message: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit" logger="UnhandledError" Feb 13 20:18:39.421509 kubelet[2436]: E0213 20:18:39.421456 2436 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"install-cni-plugin\" with ErrImagePull: \"failed to pull and unpack image \\\"docker.io/flannel/flannel-cni-plugin:v1.1.2\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/flannel/flannel-cni-plugin/manifests/sha256:bf4b62b131666d040f35a327d906ee5a3418280b68a88d9b9c7e828057210443: 429 Too Many Requests - Server message: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit\"" pod="kube-flannel/kube-flannel-ds-l2mj5" podUID="71c6d29d-8b7d-4b8f-92a3-710fe670a99c" Feb 13 20:18:41.309582 kubelet[2436]: E0213 20:18:41.309546 2436 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 20:18:42.810268 systemd[1]: Started sshd@19-10.0.0.8:22-10.0.0.1:42710.service - OpenSSH per-connection server daemon (10.0.0.1:42710). Feb 13 20:18:42.845238 sshd[2999]: Accepted publickey for core from 10.0.0.1 port 42710 ssh2: RSA SHA256:LaFXDQ5kJ4LcD/Es1CLb+6ve3k3017804ZdZMiXaUQg Feb 13 20:18:42.846378 sshd[2999]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 20:18:42.850403 systemd-logind[1421]: New session 20 of user core. Feb 13 20:18:42.858928 systemd[1]: Started session-20.scope - Session 20 of User core. 
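The PullImage attempts in this log (20:17:13, 20:17:28, 20:17:56, 20:18:38) are spaced progressively further apart because kubelet backs off failed image pulls, roughly doubling the delay after each consecutive failure. The sketch below only models that schedule; the 10-second initial delay and 300-second cap are assumed kubelet defaults, and the figures it prints are illustrative rather than taken from this node.

# Illustrative model of an image-pull back-off: the wait roughly doubles
# after each consecutive failure and is capped. Initial delay and cap are
# assumed kubelet defaults, used only to show why the PullImage attempts
# above grow further apart while the image keeps failing to pull.
def backoff_delays(initial=10.0, cap=300.0):
    delay = initial
    while True:
        yield delay
        delay = min(delay * 2, cap)

if __name__ == "__main__":
    delays = backoff_delays()
    elapsed = 0.0
    for attempt in range(1, 8):
        wait = next(delays)
        elapsed += wait
        print("attempt %d: wait %5.0fs (~%5.0fs after first failure)"
              % (attempt, wait, elapsed))

The "Back-off pulling image ... ImagePullBackOff" lines that repeat every ten to fifteen seconds between pulls are sync-loop status reports for the pod, not new registry requests.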
Feb 13 20:18:42.964940 sshd[2999]: pam_unix(sshd:session): session closed for user core Feb 13 20:18:42.967337 systemd[1]: session-20.scope: Deactivated successfully. Feb 13 20:18:42.968560 systemd[1]: sshd@19-10.0.0.8:22-10.0.0.1:42710.service: Deactivated successfully. Feb 13 20:18:42.970473 systemd-logind[1421]: Session 20 logged out. Waiting for processes to exit. Feb 13 20:18:42.971342 systemd-logind[1421]: Removed session 20. Feb 13 20:18:44.308852 kubelet[2436]: E0213 20:18:44.308800 2436 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 20:18:47.975188 systemd[1]: Started sshd@20-10.0.0.8:22-10.0.0.1:42718.service - OpenSSH per-connection server daemon (10.0.0.1:42718). Feb 13 20:18:48.010653 sshd[3017]: Accepted publickey for core from 10.0.0.1 port 42718 ssh2: RSA SHA256:LaFXDQ5kJ4LcD/Es1CLb+6ve3k3017804ZdZMiXaUQg Feb 13 20:18:48.011866 sshd[3017]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 20:18:48.015677 systemd-logind[1421]: New session 21 of user core. Feb 13 20:18:48.026858 systemd[1]: Started session-21.scope - Session 21 of User core. Feb 13 20:18:48.131462 sshd[3017]: pam_unix(sshd:session): session closed for user core Feb 13 20:18:48.133965 systemd[1]: sshd@20-10.0.0.8:22-10.0.0.1:42718.service: Deactivated successfully. Feb 13 20:18:48.135561 systemd[1]: session-21.scope: Deactivated successfully. Feb 13 20:18:48.136794 systemd-logind[1421]: Session 21 logged out. Waiting for processes to exit. Feb 13 20:18:48.137863 systemd-logind[1421]: Removed session 21. Feb 13 20:18:53.143227 systemd[1]: Started sshd@21-10.0.0.8:22-10.0.0.1:41590.service - OpenSSH per-connection server daemon (10.0.0.1:41590). Feb 13 20:18:53.178466 sshd[3033]: Accepted publickey for core from 10.0.0.1 port 41590 ssh2: RSA SHA256:LaFXDQ5kJ4LcD/Es1CLb+6ve3k3017804ZdZMiXaUQg Feb 13 20:18:53.179637 sshd[3033]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 20:18:53.183332 systemd-logind[1421]: New session 22 of user core. Feb 13 20:18:53.192957 systemd[1]: Started session-22.scope - Session 22 of User core. Feb 13 20:18:53.297470 sshd[3033]: pam_unix(sshd:session): session closed for user core Feb 13 20:18:53.300939 systemd[1]: sshd@21-10.0.0.8:22-10.0.0.1:41590.service: Deactivated successfully. Feb 13 20:18:53.302566 systemd[1]: session-22.scope: Deactivated successfully. Feb 13 20:18:53.303202 systemd-logind[1421]: Session 22 logged out. Waiting for processes to exit. Feb 13 20:18:53.303982 systemd-logind[1421]: Removed session 22. 
Feb 13 20:18:54.309439 kubelet[2436]: E0213 20:18:54.309386 2436 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 20:18:54.311453 kubelet[2436]: E0213 20:18:54.311413 2436 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"install-cni-plugin\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/flannel/flannel-cni-plugin:v1.1.2\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/flannel/flannel-cni-plugin:v1.1.2\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/flannel/flannel-cni-plugin/manifests/sha256:bf4b62b131666d040f35a327d906ee5a3418280b68a88d9b9c7e828057210443: 429 Too Many Requests - Server message: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit\"" pod="kube-flannel/kube-flannel-ds-l2mj5" podUID="71c6d29d-8b7d-4b8f-92a3-710fe670a99c" Feb 13 20:18:58.308384 systemd[1]: Started sshd@22-10.0.0.8:22-10.0.0.1:41606.service - OpenSSH per-connection server daemon (10.0.0.1:41606). Feb 13 20:18:58.343865 sshd[3048]: Accepted publickey for core from 10.0.0.1 port 41606 ssh2: RSA SHA256:LaFXDQ5kJ4LcD/Es1CLb+6ve3k3017804ZdZMiXaUQg Feb 13 20:18:58.345118 sshd[3048]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 20:18:58.348547 systemd-logind[1421]: New session 23 of user core. Feb 13 20:18:58.354859 systemd[1]: Started session-23.scope - Session 23 of User core. Feb 13 20:18:58.463825 sshd[3048]: pam_unix(sshd:session): session closed for user core Feb 13 20:18:58.466778 systemd[1]: sshd@22-10.0.0.8:22-10.0.0.1:41606.service: Deactivated successfully. Feb 13 20:18:58.468355 systemd[1]: session-23.scope: Deactivated successfully. Feb 13 20:18:58.469623 systemd-logind[1421]: Session 23 logged out. Waiting for processes to exit. Feb 13 20:18:58.470644 systemd-logind[1421]: Removed session 23. Feb 13 20:19:03.474387 systemd[1]: Started sshd@23-10.0.0.8:22-10.0.0.1:41156.service - OpenSSH per-connection server daemon (10.0.0.1:41156). Feb 13 20:19:03.510747 sshd[3064]: Accepted publickey for core from 10.0.0.1 port 41156 ssh2: RSA SHA256:LaFXDQ5kJ4LcD/Es1CLb+6ve3k3017804ZdZMiXaUQg Feb 13 20:19:03.511910 sshd[3064]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 20:19:03.515903 systemd-logind[1421]: New session 24 of user core. Feb 13 20:19:03.533869 systemd[1]: Started session-24.scope - Session 24 of User core. Feb 13 20:19:03.640979 sshd[3064]: pam_unix(sshd:session): session closed for user core Feb 13 20:19:03.643569 systemd[1]: sshd@23-10.0.0.8:22-10.0.0.1:41156.service: Deactivated successfully. Feb 13 20:19:03.646028 systemd[1]: session-24.scope: Deactivated successfully. Feb 13 20:19:03.647557 systemd-logind[1421]: Session 24 logged out. Waiting for processes to exit. Feb 13 20:19:03.648439 systemd-logind[1421]: Removed session 24. 
Feb 13 20:19:05.309524 kubelet[2436]: E0213 20:19:05.309462 2436 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 20:19:05.310853 kubelet[2436]: E0213 20:19:05.310813 2436 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"install-cni-plugin\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/flannel/flannel-cni-plugin:v1.1.2\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/flannel/flannel-cni-plugin:v1.1.2\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/flannel/flannel-cni-plugin/manifests/sha256:bf4b62b131666d040f35a327d906ee5a3418280b68a88d9b9c7e828057210443: 429 Too Many Requests - Server message: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit\"" pod="kube-flannel/kube-flannel-ds-l2mj5" podUID="71c6d29d-8b7d-4b8f-92a3-710fe670a99c" Feb 13 20:19:05.379464 kubelet[2436]: E0213 20:19:05.379431 2436 kubelet_node_status.go:461] "Node not becoming ready in time after startup" Feb 13 20:19:08.651197 systemd[1]: Started sshd@24-10.0.0.8:22-10.0.0.1:41158.service - OpenSSH per-connection server daemon (10.0.0.1:41158). Feb 13 20:19:08.686253 sshd[3081]: Accepted publickey for core from 10.0.0.1 port 41158 ssh2: RSA SHA256:LaFXDQ5kJ4LcD/Es1CLb+6ve3k3017804ZdZMiXaUQg Feb 13 20:19:08.687444 sshd[3081]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 20:19:08.690950 systemd-logind[1421]: New session 25 of user core. Feb 13 20:19:08.700851 systemd[1]: Started session-25.scope - Session 25 of User core. Feb 13 20:19:08.804821 sshd[3081]: pam_unix(sshd:session): session closed for user core Feb 13 20:19:08.808319 systemd[1]: sshd@24-10.0.0.8:22-10.0.0.1:41158.service: Deactivated successfully. Feb 13 20:19:08.809867 systemd[1]: session-25.scope: Deactivated successfully. Feb 13 20:19:08.810432 systemd-logind[1421]: Session 25 logged out. Waiting for processes to exit. Feb 13 20:19:08.811242 systemd-logind[1421]: Removed session 25. Feb 13 20:19:10.374571 kubelet[2436]: E0213 20:19:10.374527 2436 kubelet.go:3008] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Feb 13 20:19:13.815198 systemd[1]: Started sshd@25-10.0.0.8:22-10.0.0.1:36494.service - OpenSSH per-connection server daemon (10.0.0.1:36494). Feb 13 20:19:13.850726 sshd[3097]: Accepted publickey for core from 10.0.0.1 port 36494 ssh2: RSA SHA256:LaFXDQ5kJ4LcD/Es1CLb+6ve3k3017804ZdZMiXaUQg Feb 13 20:19:13.852034 sshd[3097]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 20:19:13.855321 systemd-logind[1421]: New session 26 of user core. Feb 13 20:19:13.861926 systemd[1]: Started session-26.scope - Session 26 of User core. Feb 13 20:19:13.966965 sshd[3097]: pam_unix(sshd:session): session closed for user core Feb 13 20:19:13.970064 systemd[1]: sshd@25-10.0.0.8:22-10.0.0.1:36494.service: Deactivated successfully. Feb 13 20:19:13.971613 systemd[1]: session-26.scope: Deactivated successfully. Feb 13 20:19:13.972307 systemd-logind[1421]: Session 26 logged out. Waiting for processes to exit. Feb 13 20:19:13.973173 systemd-logind[1421]: Removed session 26. 
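From 20:19:05 onward the failure becomes visible at node level: because the install-cni-plugin init container never ran, no CNI plugin binary was copied onto the host and flannel never wrote a network config, so kubelet keeps reporting "cni plugin not initialized" and the node does not become Ready. A host-side check of the directories the runtime consults is sketched below; /opt/cni/bin and /etc/cni/net.d are the conventional default paths and may differ if containerd's CNI section was customized.

# Sketch of a host-side check for the "cni plugin not initialized" state:
# the container runtime expects plugin binaries in /opt/cni/bin and at least
# one network config in /etc/cni/net.d. These are the conventional default
# paths and are an assumption if containerd is configured differently.
import os

CNI_BIN_DIR = "/opt/cni/bin"
CNI_CONF_DIR = "/etc/cni/net.d"

def cni_status():
    binaries = sorted(os.listdir(CNI_BIN_DIR)) if os.path.isdir(CNI_BIN_DIR) else []
    configs = sorted(
        f for f in (os.listdir(CNI_CONF_DIR) if os.path.isdir(CNI_CONF_DIR) else [])
        if f.endswith((".conf", ".conflist", ".json"))
    )
    return {"plugins": binaries,
            "configs": configs,
            "network_ready": bool(binaries) and bool(configs)}

if __name__ == "__main__":
    status = cni_status()
    print(status)
    if not status["network_ready"]:
        print("NetworkReady=false: no CNI plugin/config present; here the "
              "flannel init container was supposed to provide them.")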
Feb 13 20:19:15.375546 kubelet[2436]: E0213 20:19:15.375491 2436 kubelet.go:3008] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Feb 13 20:19:16.309512 kubelet[2436]: E0213 20:19:16.309422 2436 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 20:19:16.310058 kubelet[2436]: E0213 20:19:16.310018 2436 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"install-cni-plugin\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/flannel/flannel-cni-plugin:v1.1.2\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/flannel/flannel-cni-plugin:v1.1.2\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/flannel/flannel-cni-plugin/manifests/sha256:bf4b62b131666d040f35a327d906ee5a3418280b68a88d9b9c7e828057210443: 429 Too Many Requests - Server message: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit\"" pod="kube-flannel/kube-flannel-ds-l2mj5" podUID="71c6d29d-8b7d-4b8f-92a3-710fe670a99c" Feb 13 20:19:18.977180 systemd[1]: Started sshd@26-10.0.0.8:22-10.0.0.1:36496.service - OpenSSH per-connection server daemon (10.0.0.1:36496). Feb 13 20:19:19.013702 sshd[3114]: Accepted publickey for core from 10.0.0.1 port 36496 ssh2: RSA SHA256:LaFXDQ5kJ4LcD/Es1CLb+6ve3k3017804ZdZMiXaUQg Feb 13 20:19:19.014860 sshd[3114]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 20:19:19.018380 systemd-logind[1421]: New session 27 of user core. Feb 13 20:19:19.024912 systemd[1]: Started session-27.scope - Session 27 of User core. Feb 13 20:19:19.127762 sshd[3114]: pam_unix(sshd:session): session closed for user core Feb 13 20:19:19.130999 systemd[1]: sshd@26-10.0.0.8:22-10.0.0.1:36496.service: Deactivated successfully. Feb 13 20:19:19.132544 systemd[1]: session-27.scope: Deactivated successfully. Feb 13 20:19:19.133156 systemd-logind[1421]: Session 27 logged out. Waiting for processes to exit. Feb 13 20:19:19.134125 systemd-logind[1421]: Removed session 27. Feb 13 20:19:20.376780 kubelet[2436]: E0213 20:19:20.376676 2436 kubelet.go:3008] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Feb 13 20:19:24.139247 systemd[1]: Started sshd@27-10.0.0.8:22-10.0.0.1:36306.service - OpenSSH per-connection server daemon (10.0.0.1:36306). Feb 13 20:19:24.174422 sshd[3129]: Accepted publickey for core from 10.0.0.1 port 36306 ssh2: RSA SHA256:LaFXDQ5kJ4LcD/Es1CLb+6ve3k3017804ZdZMiXaUQg Feb 13 20:19:24.176372 sshd[3129]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 20:19:24.180605 systemd-logind[1421]: New session 28 of user core. Feb 13 20:19:24.193872 systemd[1]: Started session-28.scope - Session 28 of User core. Feb 13 20:19:24.298736 sshd[3129]: pam_unix(sshd:session): session closed for user core Feb 13 20:19:24.301812 systemd[1]: sshd@27-10.0.0.8:22-10.0.0.1:36306.service: Deactivated successfully. Feb 13 20:19:24.304062 systemd[1]: session-28.scope: Deactivated successfully. Feb 13 20:19:24.304562 systemd-logind[1421]: Session 28 logged out. Waiting for processes to exit. 
Feb 13 20:19:24.305369 systemd-logind[1421]: Removed session 28. Feb 13 20:19:25.378218 kubelet[2436]: E0213 20:19:25.378177 2436 kubelet.go:3008] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Feb 13 20:19:27.311233 kubelet[2436]: E0213 20:19:27.311193 2436 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 20:19:27.311838 kubelet[2436]: E0213 20:19:27.311798 2436 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"install-cni-plugin\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/flannel/flannel-cni-plugin:v1.1.2\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/flannel/flannel-cni-plugin:v1.1.2\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/flannel/flannel-cni-plugin/manifests/sha256:bf4b62b131666d040f35a327d906ee5a3418280b68a88d9b9c7e828057210443: 429 Too Many Requests - Server message: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit\"" pod="kube-flannel/kube-flannel-ds-l2mj5" podUID="71c6d29d-8b7d-4b8f-92a3-710fe670a99c" Feb 13 20:19:29.314163 systemd[1]: Started sshd@28-10.0.0.8:22-10.0.0.1:36312.service - OpenSSH per-connection server daemon (10.0.0.1:36312). Feb 13 20:19:29.349172 sshd[3145]: Accepted publickey for core from 10.0.0.1 port 36312 ssh2: RSA SHA256:LaFXDQ5kJ4LcD/Es1CLb+6ve3k3017804ZdZMiXaUQg Feb 13 20:19:29.350359 sshd[3145]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 20:19:29.353830 systemd-logind[1421]: New session 29 of user core. Feb 13 20:19:29.363925 systemd[1]: Started session-29.scope - Session 29 of User core. Feb 13 20:19:29.473556 sshd[3145]: pam_unix(sshd:session): session closed for user core Feb 13 20:19:29.477539 systemd[1]: sshd@28-10.0.0.8:22-10.0.0.1:36312.service: Deactivated successfully. Feb 13 20:19:29.479329 systemd[1]: session-29.scope: Deactivated successfully. Feb 13 20:19:29.481197 systemd-logind[1421]: Session 29 logged out. Waiting for processes to exit. Feb 13 20:19:29.482095 systemd-logind[1421]: Removed session 29. Feb 13 20:19:30.379006 kubelet[2436]: E0213 20:19:30.378951 2436 kubelet.go:3008] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Feb 13 20:19:34.487277 systemd[1]: Started sshd@29-10.0.0.8:22-10.0.0.1:43846.service - OpenSSH per-connection server daemon (10.0.0.1:43846). Feb 13 20:19:34.522689 sshd[3160]: Accepted publickey for core from 10.0.0.1 port 43846 ssh2: RSA SHA256:LaFXDQ5kJ4LcD/Es1CLb+6ve3k3017804ZdZMiXaUQg Feb 13 20:19:34.523953 sshd[3160]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 20:19:34.527319 systemd-logind[1421]: New session 30 of user core. Feb 13 20:19:34.536877 systemd[1]: Started session-30.scope - Session 30 of User core. Feb 13 20:19:34.639952 sshd[3160]: pam_unix(sshd:session): session closed for user core Feb 13 20:19:34.643101 systemd[1]: sshd@29-10.0.0.8:22-10.0.0.1:43846.service: Deactivated successfully. Feb 13 20:19:34.646144 systemd[1]: session-30.scope: Deactivated successfully. 
Feb 13 20:19:34.646697 systemd-logind[1421]: Session 30 logged out. Waiting for processes to exit. Feb 13 20:19:34.647409 systemd-logind[1421]: Removed session 30. Feb 13 20:19:35.379943 kubelet[2436]: E0213 20:19:35.379896 2436 kubelet.go:3008] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Feb 13 20:19:38.309332 kubelet[2436]: E0213 20:19:38.309288 2436 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 20:19:38.310109 kubelet[2436]: E0213 20:19:38.310073 2436 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"install-cni-plugin\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/flannel/flannel-cni-plugin:v1.1.2\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/flannel/flannel-cni-plugin:v1.1.2\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/flannel/flannel-cni-plugin/manifests/sha256:bf4b62b131666d040f35a327d906ee5a3418280b68a88d9b9c7e828057210443: 429 Too Many Requests - Server message: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit\"" pod="kube-flannel/kube-flannel-ds-l2mj5" podUID="71c6d29d-8b7d-4b8f-92a3-710fe670a99c" Feb 13 20:19:39.650264 systemd[1]: Started sshd@30-10.0.0.8:22-10.0.0.1:43854.service - OpenSSH per-connection server daemon (10.0.0.1:43854). Feb 13 20:19:39.685259 sshd[3175]: Accepted publickey for core from 10.0.0.1 port 43854 ssh2: RSA SHA256:LaFXDQ5kJ4LcD/Es1CLb+6ve3k3017804ZdZMiXaUQg Feb 13 20:19:39.686532 sshd[3175]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 20:19:39.690427 systemd-logind[1421]: New session 31 of user core. Feb 13 20:19:39.703858 systemd[1]: Started session-31.scope - Session 31 of User core. Feb 13 20:19:39.810756 sshd[3175]: pam_unix(sshd:session): session closed for user core Feb 13 20:19:39.814168 systemd[1]: sshd@30-10.0.0.8:22-10.0.0.1:43854.service: Deactivated successfully. Feb 13 20:19:39.815903 systemd[1]: session-31.scope: Deactivated successfully. Feb 13 20:19:39.816668 systemd-logind[1421]: Session 31 logged out. Waiting for processes to exit. Feb 13 20:19:39.817429 systemd-logind[1421]: Removed session 31. Feb 13 20:19:40.381282 kubelet[2436]: E0213 20:19:40.381244 2436 kubelet.go:3008] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Feb 13 20:19:44.308992 kubelet[2436]: E0213 20:19:44.308955 2436 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 20:19:44.821211 systemd[1]: Started sshd@31-10.0.0.8:22-10.0.0.1:54228.service - OpenSSH per-connection server daemon (10.0.0.1:54228). Feb 13 20:19:44.856876 sshd[3193]: Accepted publickey for core from 10.0.0.1 port 54228 ssh2: RSA SHA256:LaFXDQ5kJ4LcD/Es1CLb+6ve3k3017804ZdZMiXaUQg Feb 13 20:19:44.858101 sshd[3193]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 20:19:44.861505 systemd-logind[1421]: New session 32 of user core. 
Feb 13 20:19:44.879903 systemd[1]: Started session-32.scope - Session 32 of User core. Feb 13 20:19:44.985253 sshd[3193]: pam_unix(sshd:session): session closed for user core Feb 13 20:19:44.988195 systemd[1]: sshd@31-10.0.0.8:22-10.0.0.1:54228.service: Deactivated successfully. Feb 13 20:19:44.991095 systemd[1]: session-32.scope: Deactivated successfully. Feb 13 20:19:44.991650 systemd-logind[1421]: Session 32 logged out. Waiting for processes to exit. Feb 13 20:19:44.992491 systemd-logind[1421]: Removed session 32. Feb 13 20:19:45.382875 kubelet[2436]: E0213 20:19:45.382828 2436 kubelet.go:3008] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Feb 13 20:19:49.999196 systemd[1]: Started sshd@32-10.0.0.8:22-10.0.0.1:54238.service - OpenSSH per-connection server daemon (10.0.0.1:54238). Feb 13 20:19:50.034436 sshd[3209]: Accepted publickey for core from 10.0.0.1 port 54238 ssh2: RSA SHA256:LaFXDQ5kJ4LcD/Es1CLb+6ve3k3017804ZdZMiXaUQg Feb 13 20:19:50.035616 sshd[3209]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 20:19:50.038870 systemd-logind[1421]: New session 33 of user core. Feb 13 20:19:50.051861 systemd[1]: Started session-33.scope - Session 33 of User core. Feb 13 20:19:50.155663 sshd[3209]: pam_unix(sshd:session): session closed for user core Feb 13 20:19:50.158896 systemd[1]: sshd@32-10.0.0.8:22-10.0.0.1:54238.service: Deactivated successfully. Feb 13 20:19:50.161260 systemd[1]: session-33.scope: Deactivated successfully. Feb 13 20:19:50.161992 systemd-logind[1421]: Session 33 logged out. Waiting for processes to exit. Feb 13 20:19:50.162992 systemd-logind[1421]: Removed session 33. Feb 13 20:19:50.383733 kubelet[2436]: E0213 20:19:50.383596 2436 kubelet.go:3008] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Feb 13 20:19:52.309664 kubelet[2436]: E0213 20:19:52.309583 2436 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 20:19:52.310389 kubelet[2436]: E0213 20:19:52.310329 2436 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"install-cni-plugin\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/flannel/flannel-cni-plugin:v1.1.2\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/flannel/flannel-cni-plugin:v1.1.2\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/flannel/flannel-cni-plugin/manifests/sha256:bf4b62b131666d040f35a327d906ee5a3418280b68a88d9b9c7e828057210443: 429 Too Many Requests - Server message: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit\"" pod="kube-flannel/kube-flannel-ds-l2mj5" podUID="71c6d29d-8b7d-4b8f-92a3-710fe670a99c" Feb 13 20:19:55.166253 systemd[1]: Started sshd@33-10.0.0.8:22-10.0.0.1:48208.service - OpenSSH per-connection server daemon (10.0.0.1:48208). 
Feb 13 20:19:55.201331 sshd[3225]: Accepted publickey for core from 10.0.0.1 port 48208 ssh2: RSA SHA256:LaFXDQ5kJ4LcD/Es1CLb+6ve3k3017804ZdZMiXaUQg Feb 13 20:19:55.202484 sshd[3225]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 20:19:55.206144 systemd-logind[1421]: New session 34 of user core. Feb 13 20:19:55.215848 systemd[1]: Started session-34.scope - Session 34 of User core. Feb 13 20:19:55.321555 sshd[3225]: pam_unix(sshd:session): session closed for user core Feb 13 20:19:55.325076 systemd[1]: sshd@33-10.0.0.8:22-10.0.0.1:48208.service: Deactivated successfully. Feb 13 20:19:55.326853 systemd[1]: session-34.scope: Deactivated successfully. Feb 13 20:19:55.327401 systemd-logind[1421]: Session 34 logged out. Waiting for processes to exit. Feb 13 20:19:55.328324 systemd-logind[1421]: Removed session 34. Feb 13 20:19:55.384295 kubelet[2436]: E0213 20:19:55.384260 2436 kubelet.go:3008] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Feb 13 20:19:57.309290 kubelet[2436]: E0213 20:19:57.309253 2436 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 20:19:58.309029 kubelet[2436]: E0213 20:19:58.308985 2436 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 20:20:00.332338 systemd[1]: Started sshd@34-10.0.0.8:22-10.0.0.1:48210.service - OpenSSH per-connection server daemon (10.0.0.1:48210). Feb 13 20:20:00.367437 sshd[3241]: Accepted publickey for core from 10.0.0.1 port 48210 ssh2: RSA SHA256:LaFXDQ5kJ4LcD/Es1CLb+6ve3k3017804ZdZMiXaUQg Feb 13 20:20:00.368565 sshd[3241]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 20:20:00.372308 systemd-logind[1421]: New session 35 of user core. Feb 13 20:20:00.381952 systemd[1]: Started session-35.scope - Session 35 of User core. Feb 13 20:20:00.385108 kubelet[2436]: E0213 20:20:00.385070 2436 kubelet.go:3008] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Feb 13 20:20:00.488118 sshd[3241]: pam_unix(sshd:session): session closed for user core Feb 13 20:20:00.491148 systemd[1]: sshd@34-10.0.0.8:22-10.0.0.1:48210.service: Deactivated successfully. Feb 13 20:20:00.493007 systemd[1]: session-35.scope: Deactivated successfully. Feb 13 20:20:00.493621 systemd-logind[1421]: Session 35 logged out. Waiting for processes to exit. Feb 13 20:20:00.494544 systemd-logind[1421]: Removed session 35. 
Feb 13 20:20:03.310381 kubelet[2436]: E0213 20:20:03.310331 2436 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 20:20:03.311039 containerd[1438]: time="2025-02-13T20:20:03.310992298Z" level=info msg="PullImage \"docker.io/flannel/flannel-cni-plugin:v1.1.2\"" Feb 13 20:20:04.308738 kubelet[2436]: E0213 20:20:04.308646 2436 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 20:20:04.425658 containerd[1438]: time="2025-02-13T20:20:04.425577892Z" level=error msg="PullImage \"docker.io/flannel/flannel-cni-plugin:v1.1.2\" failed" error="failed to pull and unpack image \"docker.io/flannel/flannel-cni-plugin:v1.1.2\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/flannel/flannel-cni-plugin/manifests/sha256:bf4b62b131666d040f35a327d906ee5a3418280b68a88d9b9c7e828057210443: 429 Too Many Requests - Server message: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit" Feb 13 20:20:04.425658 containerd[1438]: time="2025-02-13T20:20:04.425618372Z" level=info msg="stop pulling image docker.io/flannel/flannel-cni-plugin:v1.1.2: active requests=0, bytes read=11110" Feb 13 20:20:04.426044 kubelet[2436]: E0213 20:20:04.425803 2436 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = failed to pull and unpack image \"docker.io/flannel/flannel-cni-plugin:v1.1.2\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/flannel/flannel-cni-plugin/manifests/sha256:bf4b62b131666d040f35a327d906ee5a3418280b68a88d9b9c7e828057210443: 429 Too Many Requests - Server message: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit" image="docker.io/flannel/flannel-cni-plugin:v1.1.2" Feb 13 20:20:04.426044 kubelet[2436]: E0213 20:20:04.425848 2436 kuberuntime_image.go:55] "Failed to pull image" err="failed to pull and unpack image \"docker.io/flannel/flannel-cni-plugin:v1.1.2\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/flannel/flannel-cni-plugin/manifests/sha256:bf4b62b131666d040f35a327d906ee5a3418280b68a88d9b9c7e828057210443: 429 Too Many Requests - Server message: toomanyrequests: You have reached your pull rate limit. 
You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit" image="docker.io/flannel/flannel-cni-plugin:v1.1.2" Feb 13 20:20:04.426259 kubelet[2436]: E0213 20:20:04.425945 2436 kuberuntime_manager.go:1341] "Unhandled Error" err="init container &Container{Name:install-cni-plugin,Image:docker.io/flannel/flannel-cni-plugin:v1.1.2,Command:[cp],Args:[-f /flannel /opt/cni/bin/flannel],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:cni-plugin,ReadOnly:false,MountPath:/opt/cni/bin,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-pvd57,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod kube-flannel-ds-l2mj5_kube-flannel(71c6d29d-8b7d-4b8f-92a3-710fe670a99c): ErrImagePull: failed to pull and unpack image \"docker.io/flannel/flannel-cni-plugin:v1.1.2\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/flannel/flannel-cni-plugin/manifests/sha256:bf4b62b131666d040f35a327d906ee5a3418280b68a88d9b9c7e828057210443: 429 Too Many Requests - Server message: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit" logger="UnhandledError" Feb 13 20:20:04.427416 kubelet[2436]: E0213 20:20:04.427245 2436 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"install-cni-plugin\" with ErrImagePull: \"failed to pull and unpack image \\\"docker.io/flannel/flannel-cni-plugin:v1.1.2\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/flannel/flannel-cni-plugin/manifests/sha256:bf4b62b131666d040f35a327d906ee5a3418280b68a88d9b9c7e828057210443: 429 Too Many Requests - Server message: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit\"" pod="kube-flannel/kube-flannel-ds-l2mj5" podUID="71c6d29d-8b7d-4b8f-92a3-710fe670a99c" Feb 13 20:20:05.386368 kubelet[2436]: E0213 20:20:05.386334 2436 kubelet.go:3008] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Feb 13 20:20:05.502035 systemd[1]: Started sshd@35-10.0.0.8:22-10.0.0.1:45980.service - OpenSSH per-connection server daemon (10.0.0.1:45980). Feb 13 20:20:05.537100 sshd[3259]: Accepted publickey for core from 10.0.0.1 port 45980 ssh2: RSA SHA256:LaFXDQ5kJ4LcD/Es1CLb+6ve3k3017804ZdZMiXaUQg Feb 13 20:20:05.538582 sshd[3259]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 20:20:05.542155 systemd-logind[1421]: New session 36 of user core. Feb 13 20:20:05.552852 systemd[1]: Started session-36.scope - Session 36 of User core. 
Feb 13 20:20:05.655144 sshd[3259]: pam_unix(sshd:session): session closed for user core Feb 13 20:20:05.658108 systemd[1]: sshd@35-10.0.0.8:22-10.0.0.1:45980.service: Deactivated successfully. Feb 13 20:20:05.660519 systemd[1]: session-36.scope: Deactivated successfully. Feb 13 20:20:05.661449 systemd-logind[1421]: Session 36 logged out. Waiting for processes to exit. Feb 13 20:20:05.662427 systemd-logind[1421]: Removed session 36. Feb 13 20:20:10.387043 kubelet[2436]: E0213 20:20:10.386980 2436 kubelet.go:3008] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Feb 13 20:20:10.669440 systemd[1]: Started sshd@36-10.0.0.8:22-10.0.0.1:45994.service - OpenSSH per-connection server daemon (10.0.0.1:45994). Feb 13 20:20:10.704284 sshd[3274]: Accepted publickey for core from 10.0.0.1 port 45994 ssh2: RSA SHA256:LaFXDQ5kJ4LcD/Es1CLb+6ve3k3017804ZdZMiXaUQg Feb 13 20:20:10.705412 sshd[3274]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 20:20:10.708824 systemd-logind[1421]: New session 37 of user core. Feb 13 20:20:10.715845 systemd[1]: Started session-37.scope - Session 37 of User core. Feb 13 20:20:10.819814 sshd[3274]: pam_unix(sshd:session): session closed for user core Feb 13 20:20:10.823104 systemd[1]: sshd@36-10.0.0.8:22-10.0.0.1:45994.service: Deactivated successfully. Feb 13 20:20:10.825320 systemd[1]: session-37.scope: Deactivated successfully. Feb 13 20:20:10.825989 systemd-logind[1421]: Session 37 logged out. Waiting for processes to exit. Feb 13 20:20:10.826744 systemd-logind[1421]: Removed session 37. Feb 13 20:20:15.388561 kubelet[2436]: E0213 20:20:15.388512 2436 kubelet.go:3008] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Feb 13 20:20:15.846942 systemd[1]: Started sshd@37-10.0.0.8:22-10.0.0.1:55384.service - OpenSSH per-connection server daemon (10.0.0.1:55384). Feb 13 20:20:15.877852 sshd[3291]: Accepted publickey for core from 10.0.0.1 port 55384 ssh2: RSA SHA256:LaFXDQ5kJ4LcD/Es1CLb+6ve3k3017804ZdZMiXaUQg Feb 13 20:20:15.879063 sshd[3291]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 20:20:15.884250 systemd-logind[1421]: New session 38 of user core. Feb 13 20:20:15.898858 systemd[1]: Started session-38.scope - Session 38 of User core. Feb 13 20:20:16.006045 sshd[3291]: pam_unix(sshd:session): session closed for user core Feb 13 20:20:16.009070 systemd[1]: sshd@37-10.0.0.8:22-10.0.0.1:55384.service: Deactivated successfully. Feb 13 20:20:16.010626 systemd[1]: session-38.scope: Deactivated successfully. Feb 13 20:20:16.011199 systemd-logind[1421]: Session 38 logged out. Waiting for processes to exit. Feb 13 20:20:16.011990 systemd-logind[1421]: Removed session 38. 
Feb 13 20:20:17.309827 kubelet[2436]: E0213 20:20:17.309618 2436 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 20:20:17.310254 kubelet[2436]: E0213 20:20:17.310202 2436 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"install-cni-plugin\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/flannel/flannel-cni-plugin:v1.1.2\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/flannel/flannel-cni-plugin:v1.1.2\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/flannel/flannel-cni-plugin/manifests/sha256:bf4b62b131666d040f35a327d906ee5a3418280b68a88d9b9c7e828057210443: 429 Too Many Requests - Server message: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit\"" pod="kube-flannel/kube-flannel-ds-l2mj5" podUID="71c6d29d-8b7d-4b8f-92a3-710fe670a99c" Feb 13 20:20:20.389343 kubelet[2436]: E0213 20:20:20.389285 2436 kubelet.go:3008] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Feb 13 20:20:21.016201 systemd[1]: Started sshd@38-10.0.0.8:22-10.0.0.1:55398.service - OpenSSH per-connection server daemon (10.0.0.1:55398). Feb 13 20:20:21.052226 sshd[3306]: Accepted publickey for core from 10.0.0.1 port 55398 ssh2: RSA SHA256:LaFXDQ5kJ4LcD/Es1CLb+6ve3k3017804ZdZMiXaUQg Feb 13 20:20:21.053384 sshd[3306]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 20:20:21.056881 systemd-logind[1421]: New session 39 of user core. Feb 13 20:20:21.064870 systemd[1]: Started session-39.scope - Session 39 of User core. Feb 13 20:20:21.174128 sshd[3306]: pam_unix(sshd:session): session closed for user core Feb 13 20:20:21.177760 systemd[1]: sshd@38-10.0.0.8:22-10.0.0.1:55398.service: Deactivated successfully. Feb 13 20:20:21.179254 systemd[1]: session-39.scope: Deactivated successfully. Feb 13 20:20:21.181158 systemd-logind[1421]: Session 39 logged out. Waiting for processes to exit. Feb 13 20:20:21.181992 systemd-logind[1421]: Removed session 39. Feb 13 20:20:25.390397 kubelet[2436]: E0213 20:20:25.390352 2436 kubelet.go:3008] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Feb 13 20:20:26.185322 systemd[1]: Started sshd@39-10.0.0.8:22-10.0.0.1:33382.service - OpenSSH per-connection server daemon (10.0.0.1:33382). Feb 13 20:20:26.220627 sshd[3321]: Accepted publickey for core from 10.0.0.1 port 33382 ssh2: RSA SHA256:LaFXDQ5kJ4LcD/Es1CLb+6ve3k3017804ZdZMiXaUQg Feb 13 20:20:26.221883 sshd[3321]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 20:20:26.225793 systemd-logind[1421]: New session 40 of user core. Feb 13 20:20:26.232862 systemd[1]: Started session-40.scope - Session 40 of User core. Feb 13 20:20:26.342442 sshd[3321]: pam_unix(sshd:session): session closed for user core Feb 13 20:20:26.345519 systemd[1]: sshd@39-10.0.0.8:22-10.0.0.1:33382.service: Deactivated successfully. Feb 13 20:20:26.347165 systemd[1]: session-40.scope: Deactivated successfully. Feb 13 20:20:26.348388 systemd-logind[1421]: Session 40 logged out. Waiting for processes to exit. 
Feb 13 20:20:26.349318 systemd-logind[1421]: Removed session 40. Feb 13 20:20:30.391300 kubelet[2436]: E0213 20:20:30.391262 2436 kubelet.go:3008] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Feb 13 20:20:31.308913 kubelet[2436]: E0213 20:20:31.308821 2436 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 20:20:31.309662 kubelet[2436]: E0213 20:20:31.309610 2436 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"install-cni-plugin\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/flannel/flannel-cni-plugin:v1.1.2\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/flannel/flannel-cni-plugin:v1.1.2\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/flannel/flannel-cni-plugin/manifests/sha256:bf4b62b131666d040f35a327d906ee5a3418280b68a88d9b9c7e828057210443: 429 Too Many Requests - Server message: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit\"" pod="kube-flannel/kube-flannel-ds-l2mj5" podUID="71c6d29d-8b7d-4b8f-92a3-710fe670a99c" Feb 13 20:20:31.353502 systemd[1]: Started sshd@40-10.0.0.8:22-10.0.0.1:33386.service - OpenSSH per-connection server daemon (10.0.0.1:33386). Feb 13 20:20:31.389458 sshd[3337]: Accepted publickey for core from 10.0.0.1 port 33386 ssh2: RSA SHA256:LaFXDQ5kJ4LcD/Es1CLb+6ve3k3017804ZdZMiXaUQg Feb 13 20:20:31.390797 sshd[3337]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 20:20:31.394455 systemd-logind[1421]: New session 41 of user core. Feb 13 20:20:31.398846 systemd[1]: Started session-41.scope - Session 41 of User core. Feb 13 20:20:31.506127 sshd[3337]: pam_unix(sshd:session): session closed for user core Feb 13 20:20:31.518217 systemd[1]: sshd@40-10.0.0.8:22-10.0.0.1:33386.service: Deactivated successfully. Feb 13 20:20:31.520018 systemd[1]: session-41.scope: Deactivated successfully. Feb 13 20:20:31.521359 systemd-logind[1421]: Session 41 logged out. Waiting for processes to exit. Feb 13 20:20:31.529947 systemd[1]: Started sshd@41-10.0.0.8:22-10.0.0.1:33402.service - OpenSSH per-connection server daemon (10.0.0.1:33402). Feb 13 20:20:31.530728 systemd-logind[1421]: Removed session 41. Feb 13 20:20:31.561551 sshd[3352]: Accepted publickey for core from 10.0.0.1 port 33402 ssh2: RSA SHA256:LaFXDQ5kJ4LcD/Es1CLb+6ve3k3017804ZdZMiXaUQg Feb 13 20:20:31.562660 sshd[3352]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 20:20:31.566132 systemd-logind[1421]: New session 42 of user core. Feb 13 20:20:31.575833 systemd[1]: Started session-42.scope - Session 42 of User core. Feb 13 20:20:31.717781 sshd[3352]: pam_unix(sshd:session): session closed for user core Feb 13 20:20:31.729055 systemd[1]: sshd@41-10.0.0.8:22-10.0.0.1:33402.service: Deactivated successfully. Feb 13 20:20:31.731428 systemd[1]: session-42.scope: Deactivated successfully. Feb 13 20:20:31.733721 systemd-logind[1421]: Session 42 logged out. Waiting for processes to exit. Feb 13 20:20:31.744972 systemd[1]: Started sshd@42-10.0.0.8:22-10.0.0.1:33418.service - OpenSSH per-connection server daemon (10.0.0.1:33418). 
Feb 13 20:20:31.745872 systemd-logind[1421]: Removed session 42. Feb 13 20:20:31.777138 sshd[3365]: Accepted publickey for core from 10.0.0.1 port 33418 ssh2: RSA SHA256:LaFXDQ5kJ4LcD/Es1CLb+6ve3k3017804ZdZMiXaUQg Feb 13 20:20:31.778474 sshd[3365]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 20:20:31.782470 systemd-logind[1421]: New session 43 of user core. Feb 13 20:20:31.794870 systemd[1]: Started session-43.scope - Session 43 of User core. Feb 13 20:20:31.902914 sshd[3365]: pam_unix(sshd:session): session closed for user core Feb 13 20:20:31.906257 systemd[1]: sshd@42-10.0.0.8:22-10.0.0.1:33418.service: Deactivated successfully. Feb 13 20:20:31.908537 systemd[1]: session-43.scope: Deactivated successfully. Feb 13 20:20:31.909192 systemd-logind[1421]: Session 43 logged out. Waiting for processes to exit. Feb 13 20:20:31.910015 systemd-logind[1421]: Removed session 43. Feb 13 20:20:35.392504 kubelet[2436]: E0213 20:20:35.392450 2436 kubelet.go:3008] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Feb 13 20:20:36.919405 systemd[1]: Started sshd@43-10.0.0.8:22-10.0.0.1:57688.service - OpenSSH per-connection server daemon (10.0.0.1:57688). Feb 13 20:20:36.954358 sshd[3380]: Accepted publickey for core from 10.0.0.1 port 57688 ssh2: RSA SHA256:LaFXDQ5kJ4LcD/Es1CLb+6ve3k3017804ZdZMiXaUQg Feb 13 20:20:36.955513 sshd[3380]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 20:20:36.958705 systemd-logind[1421]: New session 44 of user core. Feb 13 20:20:36.973866 systemd[1]: Started session-44.scope - Session 44 of User core. Feb 13 20:20:37.083819 sshd[3380]: pam_unix(sshd:session): session closed for user core Feb 13 20:20:37.086838 systemd[1]: sshd@43-10.0.0.8:22-10.0.0.1:57688.service: Deactivated successfully. Feb 13 20:20:37.089463 systemd[1]: session-44.scope: Deactivated successfully. Feb 13 20:20:37.090322 systemd-logind[1421]: Session 44 logged out. Waiting for processes to exit. Feb 13 20:20:37.091175 systemd-logind[1421]: Removed session 44. Feb 13 20:20:40.393698 kubelet[2436]: E0213 20:20:40.393659 2436 kubelet.go:3008] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Feb 13 20:20:42.098309 systemd[1]: Started sshd@44-10.0.0.8:22-10.0.0.1:57692.service - OpenSSH per-connection server daemon (10.0.0.1:57692). Feb 13 20:20:42.134445 sshd[3395]: Accepted publickey for core from 10.0.0.1 port 57692 ssh2: RSA SHA256:LaFXDQ5kJ4LcD/Es1CLb+6ve3k3017804ZdZMiXaUQg Feb 13 20:20:42.135600 sshd[3395]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 20:20:42.139390 systemd-logind[1421]: New session 45 of user core. Feb 13 20:20:42.154864 systemd[1]: Started session-45.scope - Session 45 of User core. Feb 13 20:20:42.260855 sshd[3395]: pam_unix(sshd:session): session closed for user core Feb 13 20:20:42.263349 systemd[1]: sshd@44-10.0.0.8:22-10.0.0.1:57692.service: Deactivated successfully. Feb 13 20:20:42.264929 systemd[1]: session-45.scope: Deactivated successfully. Feb 13 20:20:42.266177 systemd-logind[1421]: Session 45 logged out. Waiting for processes to exit. Feb 13 20:20:42.266941 systemd-logind[1421]: Removed session 45. 
Feb 13 20:20:44.309198 kubelet[2436]: E0213 20:20:44.309152 2436 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 20:20:44.309886 kubelet[2436]: E0213 20:20:44.309843 2436 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"install-cni-plugin\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/flannel/flannel-cni-plugin:v1.1.2\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/flannel/flannel-cni-plugin:v1.1.2\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/flannel/flannel-cni-plugin/manifests/sha256:bf4b62b131666d040f35a327d906ee5a3418280b68a88d9b9c7e828057210443: 429 Too Many Requests - Server message: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit\"" pod="kube-flannel/kube-flannel-ds-l2mj5" podUID="71c6d29d-8b7d-4b8f-92a3-710fe670a99c" Feb 13 20:20:45.395108 kubelet[2436]: E0213 20:20:45.395073 2436 kubelet.go:3008] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Feb 13 20:20:47.287974 systemd[1]: Started sshd@45-10.0.0.8:22-10.0.0.1:55468.service - OpenSSH per-connection server daemon (10.0.0.1:55468). Feb 13 20:20:47.320859 sshd[3413]: Accepted publickey for core from 10.0.0.1 port 55468 ssh2: RSA SHA256:LaFXDQ5kJ4LcD/Es1CLb+6ve3k3017804ZdZMiXaUQg Feb 13 20:20:47.322123 sshd[3413]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 20:20:47.325517 systemd-logind[1421]: New session 46 of user core. Feb 13 20:20:47.341855 systemd[1]: Started session-46.scope - Session 46 of User core. Feb 13 20:20:47.449406 sshd[3413]: pam_unix(sshd:session): session closed for user core Feb 13 20:20:47.452531 systemd[1]: sshd@45-10.0.0.8:22-10.0.0.1:55468.service: Deactivated successfully. Feb 13 20:20:47.454194 systemd[1]: session-46.scope: Deactivated successfully. Feb 13 20:20:47.455697 systemd-logind[1421]: Session 46 logged out. Waiting for processes to exit. Feb 13 20:20:47.456885 systemd-logind[1421]: Removed session 46. Feb 13 20:20:50.396692 kubelet[2436]: E0213 20:20:50.396638 2436 kubelet.go:3008] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Feb 13 20:20:52.459792 systemd[1]: Started sshd@46-10.0.0.8:22-10.0.0.1:36146.service - OpenSSH per-connection server daemon (10.0.0.1:36146). Feb 13 20:20:52.495238 sshd[3428]: Accepted publickey for core from 10.0.0.1 port 36146 ssh2: RSA SHA256:LaFXDQ5kJ4LcD/Es1CLb+6ve3k3017804ZdZMiXaUQg Feb 13 20:20:52.496407 sshd[3428]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 20:20:52.500294 systemd-logind[1421]: New session 47 of user core. Feb 13 20:20:52.509842 systemd[1]: Started session-47.scope - Session 47 of User core. Feb 13 20:20:52.616321 sshd[3428]: pam_unix(sshd:session): session closed for user core Feb 13 20:20:52.619546 systemd[1]: sshd@46-10.0.0.8:22-10.0.0.1:36146.service: Deactivated successfully. Feb 13 20:20:52.621193 systemd[1]: session-47.scope: Deactivated successfully. Feb 13 20:20:52.621774 systemd-logind[1421]: Session 47 logged out. Waiting for processes to exit. 
Feb 13 20:20:52.622463 systemd-logind[1421]: Removed session 47. Feb 13 20:20:55.397941 kubelet[2436]: E0213 20:20:55.397892 2436 kubelet.go:3008] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Feb 13 20:20:56.309489 kubelet[2436]: E0213 20:20:56.309297 2436 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 20:20:56.310001 kubelet[2436]: E0213 20:20:56.309964 2436 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"install-cni-plugin\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/flannel/flannel-cni-plugin:v1.1.2\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/flannel/flannel-cni-plugin:v1.1.2\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/flannel/flannel-cni-plugin/manifests/sha256:bf4b62b131666d040f35a327d906ee5a3418280b68a88d9b9c7e828057210443: 429 Too Many Requests - Server message: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit\"" pod="kube-flannel/kube-flannel-ds-l2mj5" podUID="71c6d29d-8b7d-4b8f-92a3-710fe670a99c" Feb 13 20:20:57.627137 systemd[1]: Started sshd@47-10.0.0.8:22-10.0.0.1:36152.service - OpenSSH per-connection server daemon (10.0.0.1:36152). Feb 13 20:20:57.662216 sshd[3442]: Accepted publickey for core from 10.0.0.1 port 36152 ssh2: RSA SHA256:LaFXDQ5kJ4LcD/Es1CLb+6ve3k3017804ZdZMiXaUQg Feb 13 20:20:57.663399 sshd[3442]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 20:20:57.667261 systemd-logind[1421]: New session 48 of user core. Feb 13 20:20:57.676912 systemd[1]: Started session-48.scope - Session 48 of User core. Feb 13 20:20:57.783524 sshd[3442]: pam_unix(sshd:session): session closed for user core Feb 13 20:20:57.786611 systemd[1]: sshd@47-10.0.0.8:22-10.0.0.1:36152.service: Deactivated successfully. Feb 13 20:20:57.788833 systemd[1]: session-48.scope: Deactivated successfully. Feb 13 20:20:57.789377 systemd-logind[1421]: Session 48 logged out. Waiting for processes to exit. Feb 13 20:20:57.790135 systemd-logind[1421]: Removed session 48. Feb 13 20:21:00.399334 kubelet[2436]: E0213 20:21:00.399128 2436 kubelet.go:3008] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Feb 13 20:21:01.309291 kubelet[2436]: E0213 20:21:01.309219 2436 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 20:21:02.794272 systemd[1]: Started sshd@48-10.0.0.8:22-10.0.0.1:33984.service - OpenSSH per-connection server daemon (10.0.0.1:33984). Feb 13 20:21:02.829602 sshd[3456]: Accepted publickey for core from 10.0.0.1 port 33984 ssh2: RSA SHA256:LaFXDQ5kJ4LcD/Es1CLb+6ve3k3017804ZdZMiXaUQg Feb 13 20:21:02.830859 sshd[3456]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 20:21:02.834538 systemd-logind[1421]: New session 49 of user core. Feb 13 20:21:02.846838 systemd[1]: Started session-49.scope - Session 49 of User core. 
Feb 13 20:21:02.953023 sshd[3456]: pam_unix(sshd:session): session closed for user core Feb 13 20:21:02.956238 systemd[1]: sshd@48-10.0.0.8:22-10.0.0.1:33984.service: Deactivated successfully. Feb 13 20:21:02.957838 systemd[1]: session-49.scope: Deactivated successfully. Feb 13 20:21:02.958402 systemd-logind[1421]: Session 49 logged out. Waiting for processes to exit. Feb 13 20:21:02.959237 systemd-logind[1421]: Removed session 49. Feb 13 20:21:05.399733 kubelet[2436]: E0213 20:21:05.399678 2436 kubelet.go:3008] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Feb 13 20:21:07.310281 kubelet[2436]: E0213 20:21:07.310114 2436 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 20:21:07.310718 kubelet[2436]: E0213 20:21:07.310330 2436 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 20:21:07.311135 kubelet[2436]: E0213 20:21:07.311033 2436 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"install-cni-plugin\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/flannel/flannel-cni-plugin:v1.1.2\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/flannel/flannel-cni-plugin:v1.1.2\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/flannel/flannel-cni-plugin/manifests/sha256:bf4b62b131666d040f35a327d906ee5a3418280b68a88d9b9c7e828057210443: 429 Too Many Requests - Server message: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit\"" pod="kube-flannel/kube-flannel-ds-l2mj5" podUID="71c6d29d-8b7d-4b8f-92a3-710fe670a99c" Feb 13 20:21:07.963282 systemd[1]: Started sshd@49-10.0.0.8:22-10.0.0.1:33994.service - OpenSSH per-connection server daemon (10.0.0.1:33994). Feb 13 20:21:07.998194 sshd[3472]: Accepted publickey for core from 10.0.0.1 port 33994 ssh2: RSA SHA256:LaFXDQ5kJ4LcD/Es1CLb+6ve3k3017804ZdZMiXaUQg Feb 13 20:21:07.999439 sshd[3472]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 20:21:08.003368 systemd-logind[1421]: New session 50 of user core. Feb 13 20:21:08.013859 systemd[1]: Started session-50.scope - Session 50 of User core. Feb 13 20:21:08.125929 sshd[3472]: pam_unix(sshd:session): session closed for user core Feb 13 20:21:08.129185 systemd[1]: sshd@49-10.0.0.8:22-10.0.0.1:33994.service: Deactivated successfully. Feb 13 20:21:08.131020 systemd[1]: session-50.scope: Deactivated successfully. Feb 13 20:21:08.131652 systemd-logind[1421]: Session 50 logged out. Waiting for processes to exit. Feb 13 20:21:08.132374 systemd-logind[1421]: Removed session 50. Feb 13 20:21:10.400602 kubelet[2436]: E0213 20:21:10.400561 2436 kubelet.go:3008] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Feb 13 20:21:13.136211 systemd[1]: Started sshd@50-10.0.0.8:22-10.0.0.1:53874.service - OpenSSH per-connection server daemon (10.0.0.1:53874). 
Feb 13 20:21:13.171246 sshd[3488]: Accepted publickey for core from 10.0.0.1 port 53874 ssh2: RSA SHA256:LaFXDQ5kJ4LcD/Es1CLb+6ve3k3017804ZdZMiXaUQg Feb 13 20:21:13.172417 sshd[3488]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 20:21:13.176296 systemd-logind[1421]: New session 51 of user core. Feb 13 20:21:13.186864 systemd[1]: Started session-51.scope - Session 51 of User core. Feb 13 20:21:13.295568 sshd[3488]: pam_unix(sshd:session): session closed for user core Feb 13 20:21:13.299077 systemd[1]: sshd@50-10.0.0.8:22-10.0.0.1:53874.service: Deactivated successfully. Feb 13 20:21:13.300678 systemd[1]: session-51.scope: Deactivated successfully. Feb 13 20:21:13.301494 systemd-logind[1421]: Session 51 logged out. Waiting for processes to exit. Feb 13 20:21:13.302267 systemd-logind[1421]: Removed session 51. Feb 13 20:21:15.401666 kubelet[2436]: E0213 20:21:15.401628 2436 kubelet.go:3008] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Feb 13 20:21:18.305262 systemd[1]: Started sshd@51-10.0.0.8:22-10.0.0.1:53880.service - OpenSSH per-connection server daemon (10.0.0.1:53880). Feb 13 20:21:18.340474 sshd[3506]: Accepted publickey for core from 10.0.0.1 port 53880 ssh2: RSA SHA256:LaFXDQ5kJ4LcD/Es1CLb+6ve3k3017804ZdZMiXaUQg Feb 13 20:21:18.341652 sshd[3506]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 20:21:18.345216 systemd-logind[1421]: New session 52 of user core. Feb 13 20:21:18.355931 systemd[1]: Started session-52.scope - Session 52 of User core. Feb 13 20:21:18.461583 sshd[3506]: pam_unix(sshd:session): session closed for user core Feb 13 20:21:18.464212 systemd[1]: sshd@51-10.0.0.8:22-10.0.0.1:53880.service: Deactivated successfully. Feb 13 20:21:18.465983 systemd[1]: session-52.scope: Deactivated successfully. Feb 13 20:21:18.467151 systemd-logind[1421]: Session 52 logged out. Waiting for processes to exit. Feb 13 20:21:18.468498 systemd-logind[1421]: Removed session 52. Feb 13 20:21:19.309241 kubelet[2436]: E0213 20:21:19.309047 2436 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 20:21:19.309775 kubelet[2436]: E0213 20:21:19.309729 2436 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"install-cni-plugin\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/flannel/flannel-cni-plugin:v1.1.2\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/flannel/flannel-cni-plugin:v1.1.2\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/flannel/flannel-cni-plugin/manifests/sha256:bf4b62b131666d040f35a327d906ee5a3418280b68a88d9b9c7e828057210443: 429 Too Many Requests - Server message: toomanyrequests: You have reached your pull rate limit. 
You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit\"" pod="kube-flannel/kube-flannel-ds-l2mj5" podUID="71c6d29d-8b7d-4b8f-92a3-710fe670a99c" Feb 13 20:21:20.403250 kubelet[2436]: E0213 20:21:20.403196 2436 kubelet.go:3008] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Feb 13 20:21:23.473393 systemd[1]: Started sshd@52-10.0.0.8:22-10.0.0.1:44044.service - OpenSSH per-connection server daemon (10.0.0.1:44044). Feb 13 20:21:23.508341 sshd[3520]: Accepted publickey for core from 10.0.0.1 port 44044 ssh2: RSA SHA256:LaFXDQ5kJ4LcD/Es1CLb+6ve3k3017804ZdZMiXaUQg Feb 13 20:21:23.509539 sshd[3520]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 20:21:23.512966 systemd-logind[1421]: New session 53 of user core. Feb 13 20:21:23.524852 systemd[1]: Started session-53.scope - Session 53 of User core. Feb 13 20:21:23.631069 sshd[3520]: pam_unix(sshd:session): session closed for user core Feb 13 20:21:23.633479 systemd[1]: sshd@52-10.0.0.8:22-10.0.0.1:44044.service: Deactivated successfully. Feb 13 20:21:23.635171 systemd[1]: session-53.scope: Deactivated successfully. Feb 13 20:21:23.636492 systemd-logind[1421]: Session 53 logged out. Waiting for processes to exit. Feb 13 20:21:23.637535 systemd-logind[1421]: Removed session 53. Feb 13 20:21:25.311410 kubelet[2436]: E0213 20:21:25.311363 2436 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 20:21:25.404204 kubelet[2436]: E0213 20:21:25.404159 2436 kubelet.go:3008] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Feb 13 20:21:27.309766 kubelet[2436]: E0213 20:21:27.309526 2436 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 20:21:28.645487 systemd[1]: Started sshd@53-10.0.0.8:22-10.0.0.1:44054.service - OpenSSH per-connection server daemon (10.0.0.1:44054). Feb 13 20:21:28.681322 sshd[3535]: Accepted publickey for core from 10.0.0.1 port 44054 ssh2: RSA SHA256:LaFXDQ5kJ4LcD/Es1CLb+6ve3k3017804ZdZMiXaUQg Feb 13 20:21:28.682519 sshd[3535]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 20:21:28.685797 systemd-logind[1421]: New session 54 of user core. Feb 13 20:21:28.699882 systemd[1]: Started session-54.scope - Session 54 of User core. Feb 13 20:21:28.806703 sshd[3535]: pam_unix(sshd:session): session closed for user core Feb 13 20:21:28.809819 systemd[1]: sshd@53-10.0.0.8:22-10.0.0.1:44054.service: Deactivated successfully. Feb 13 20:21:28.812207 systemd[1]: session-54.scope: Deactivated successfully. Feb 13 20:21:28.813015 systemd-logind[1421]: Session 54 logged out. Waiting for processes to exit. Feb 13 20:21:28.813926 systemd-logind[1421]: Removed session 54. 
Feb 13 20:21:30.405820 kubelet[2436]: E0213 20:21:30.405775 2436 kubelet.go:3008] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Feb 13 20:21:32.309394 kubelet[2436]: E0213 20:21:32.309352 2436 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 20:21:32.310053 kubelet[2436]: E0213 20:21:32.309939 2436 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"install-cni-plugin\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/flannel/flannel-cni-plugin:v1.1.2\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/flannel/flannel-cni-plugin:v1.1.2\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/flannel/flannel-cni-plugin/manifests/sha256:bf4b62b131666d040f35a327d906ee5a3418280b68a88d9b9c7e828057210443: 429 Too Many Requests - Server message: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit\"" pod="kube-flannel/kube-flannel-ds-l2mj5" podUID="71c6d29d-8b7d-4b8f-92a3-710fe670a99c" Feb 13 20:21:33.817211 systemd[1]: Started sshd@54-10.0.0.8:22-10.0.0.1:43354.service - OpenSSH per-connection server daemon (10.0.0.1:43354). Feb 13 20:21:33.852097 sshd[3550]: Accepted publickey for core from 10.0.0.1 port 43354 ssh2: RSA SHA256:LaFXDQ5kJ4LcD/Es1CLb+6ve3k3017804ZdZMiXaUQg Feb 13 20:21:33.853302 sshd[3550]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 20:21:33.856700 systemd-logind[1421]: New session 55 of user core. Feb 13 20:21:33.863855 systemd[1]: Started session-55.scope - Session 55 of User core. Feb 13 20:21:33.970694 sshd[3550]: pam_unix(sshd:session): session closed for user core Feb 13 20:21:33.973951 systemd[1]: sshd@54-10.0.0.8:22-10.0.0.1:43354.service: Deactivated successfully. Feb 13 20:21:33.975906 systemd[1]: session-55.scope: Deactivated successfully. Feb 13 20:21:33.976454 systemd-logind[1421]: Session 55 logged out. Waiting for processes to exit. Feb 13 20:21:33.977255 systemd-logind[1421]: Removed session 55. Feb 13 20:21:35.407004 kubelet[2436]: E0213 20:21:35.406948 2436 kubelet.go:3008] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Feb 13 20:21:38.985330 systemd[1]: Started sshd@55-10.0.0.8:22-10.0.0.1:43364.service - OpenSSH per-connection server daemon (10.0.0.1:43364). Feb 13 20:21:39.020315 sshd[3565]: Accepted publickey for core from 10.0.0.1 port 43364 ssh2: RSA SHA256:LaFXDQ5kJ4LcD/Es1CLb+6ve3k3017804ZdZMiXaUQg Feb 13 20:21:39.021555 sshd[3565]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 20:21:39.024934 systemd-logind[1421]: New session 56 of user core. Feb 13 20:21:39.036837 systemd[1]: Started session-56.scope - Session 56 of User core. Feb 13 20:21:39.143507 sshd[3565]: pam_unix(sshd:session): session closed for user core Feb 13 20:21:39.146702 systemd[1]: sshd@55-10.0.0.8:22-10.0.0.1:43364.service: Deactivated successfully. Feb 13 20:21:39.148428 systemd[1]: session-56.scope: Deactivated successfully. Feb 13 20:21:39.149522 systemd-logind[1421]: Session 56 logged out. Waiting for processes to exit. 
Feb 13 20:21:39.150289 systemd-logind[1421]: Removed session 56. Feb 13 20:21:40.408092 kubelet[2436]: E0213 20:21:40.408052 2436 kubelet.go:3008] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Feb 13 20:21:44.154121 systemd[1]: Started sshd@56-10.0.0.8:22-10.0.0.1:48592.service - OpenSSH per-connection server daemon (10.0.0.1:48592). Feb 13 20:21:44.189567 sshd[3582]: Accepted publickey for core from 10.0.0.1 port 48592 ssh2: RSA SHA256:LaFXDQ5kJ4LcD/Es1CLb+6ve3k3017804ZdZMiXaUQg Feb 13 20:21:44.190682 sshd[3582]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 20:21:44.194036 systemd-logind[1421]: New session 57 of user core. Feb 13 20:21:44.209841 systemd[1]: Started session-57.scope - Session 57 of User core. Feb 13 20:21:44.309916 kubelet[2436]: E0213 20:21:44.309873 2436 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 20:21:44.310729 kubelet[2436]: E0213 20:21:44.310681 2436 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"install-cni-plugin\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/flannel/flannel-cni-plugin:v1.1.2\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/flannel/flannel-cni-plugin:v1.1.2\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/flannel/flannel-cni-plugin/manifests/sha256:bf4b62b131666d040f35a327d906ee5a3418280b68a88d9b9c7e828057210443: 429 Too Many Requests - Server message: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit\"" pod="kube-flannel/kube-flannel-ds-l2mj5" podUID="71c6d29d-8b7d-4b8f-92a3-710fe670a99c" Feb 13 20:21:44.317241 sshd[3582]: pam_unix(sshd:session): session closed for user core Feb 13 20:21:44.320413 systemd[1]: sshd@56-10.0.0.8:22-10.0.0.1:48592.service: Deactivated successfully. Feb 13 20:21:44.322088 systemd[1]: session-57.scope: Deactivated successfully. Feb 13 20:21:44.323222 systemd-logind[1421]: Session 57 logged out. Waiting for processes to exit. Feb 13 20:21:44.324049 systemd-logind[1421]: Removed session 57. Feb 13 20:21:45.409681 kubelet[2436]: E0213 20:21:45.409623 2436 kubelet.go:3008] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Feb 13 20:21:49.330177 systemd[1]: Started sshd@57-10.0.0.8:22-10.0.0.1:48604.service - OpenSSH per-connection server daemon (10.0.0.1:48604). Feb 13 20:21:49.365224 sshd[3597]: Accepted publickey for core from 10.0.0.1 port 48604 ssh2: RSA SHA256:LaFXDQ5kJ4LcD/Es1CLb+6ve3k3017804ZdZMiXaUQg Feb 13 20:21:49.366363 sshd[3597]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 20:21:49.369687 systemd-logind[1421]: New session 58 of user core. Feb 13 20:21:49.379850 systemd[1]: Started session-58.scope - Session 58 of User core. Feb 13 20:21:49.485146 sshd[3597]: pam_unix(sshd:session): session closed for user core Feb 13 20:21:49.488273 systemd[1]: sshd@57-10.0.0.8:22-10.0.0.1:48604.service: Deactivated successfully. Feb 13 20:21:49.491080 systemd[1]: session-58.scope: Deactivated successfully. 
Feb 13 20:21:49.491817 systemd-logind[1421]: Session 58 logged out. Waiting for processes to exit. Feb 13 20:21:49.492649 systemd-logind[1421]: Removed session 58. Feb 13 20:21:50.410662 kubelet[2436]: E0213 20:21:50.410623 2436 kubelet.go:3008] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Feb 13 20:21:54.496266 systemd[1]: Started sshd@58-10.0.0.8:22-10.0.0.1:53372.service - OpenSSH per-connection server daemon (10.0.0.1:53372). Feb 13 20:21:54.531457 sshd[3611]: Accepted publickey for core from 10.0.0.1 port 53372 ssh2: RSA SHA256:LaFXDQ5kJ4LcD/Es1CLb+6ve3k3017804ZdZMiXaUQg Feb 13 20:21:54.532660 sshd[3611]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 20:21:54.536057 systemd-logind[1421]: New session 59 of user core. Feb 13 20:21:54.545861 systemd[1]: Started session-59.scope - Session 59 of User core. Feb 13 20:21:54.651236 sshd[3611]: pam_unix(sshd:session): session closed for user core Feb 13 20:21:54.654383 systemd[1]: sshd@58-10.0.0.8:22-10.0.0.1:53372.service: Deactivated successfully. Feb 13 20:21:54.655947 systemd[1]: session-59.scope: Deactivated successfully. Feb 13 20:21:54.656466 systemd-logind[1421]: Session 59 logged out. Waiting for processes to exit. Feb 13 20:21:54.657249 systemd-logind[1421]: Removed session 59. Feb 13 20:21:55.411998 kubelet[2436]: E0213 20:21:55.411949 2436 kubelet.go:3008] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Feb 13 20:21:58.309242 kubelet[2436]: E0213 20:21:58.309061 2436 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 20:21:58.309681 kubelet[2436]: E0213 20:21:58.309635 2436 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"install-cni-plugin\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/flannel/flannel-cni-plugin:v1.1.2\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/flannel/flannel-cni-plugin:v1.1.2\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/flannel/flannel-cni-plugin/manifests/sha256:bf4b62b131666d040f35a327d906ee5a3418280b68a88d9b9c7e828057210443: 429 Too Many Requests - Server message: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit\"" pod="kube-flannel/kube-flannel-ds-l2mj5" podUID="71c6d29d-8b7d-4b8f-92a3-710fe670a99c" Feb 13 20:21:59.663134 systemd[1]: Started sshd@59-10.0.0.8:22-10.0.0.1:53380.service - OpenSSH per-connection server daemon (10.0.0.1:53380). Feb 13 20:21:59.698025 sshd[3626]: Accepted publickey for core from 10.0.0.1 port 53380 ssh2: RSA SHA256:LaFXDQ5kJ4LcD/Es1CLb+6ve3k3017804ZdZMiXaUQg Feb 13 20:21:59.699133 sshd[3626]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 20:21:59.702774 systemd-logind[1421]: New session 60 of user core. Feb 13 20:21:59.709843 systemd[1]: Started session-60.scope - Session 60 of User core. Feb 13 20:21:59.817029 sshd[3626]: pam_unix(sshd:session): session closed for user core Feb 13 20:21:59.820259 systemd[1]: sshd@59-10.0.0.8:22-10.0.0.1:53380.service: Deactivated successfully. 
Feb 13 20:21:59.823221 systemd[1]: session-60.scope: Deactivated successfully. Feb 13 20:21:59.823849 systemd-logind[1421]: Session 60 logged out. Waiting for processes to exit. Feb 13 20:21:59.824523 systemd-logind[1421]: Removed session 60. Feb 13 20:22:00.413350 kubelet[2436]: E0213 20:22:00.413312 2436 kubelet.go:3008] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Feb 13 20:22:04.827407 systemd[1]: Started sshd@60-10.0.0.8:22-10.0.0.1:47534.service - OpenSSH per-connection server daemon (10.0.0.1:47534). Feb 13 20:22:04.863200 sshd[3642]: Accepted publickey for core from 10.0.0.1 port 47534 ssh2: RSA SHA256:LaFXDQ5kJ4LcD/Es1CLb+6ve3k3017804ZdZMiXaUQg Feb 13 20:22:04.864393 sshd[3642]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 20:22:04.867761 systemd-logind[1421]: New session 61 of user core. Feb 13 20:22:04.882857 systemd[1]: Started session-61.scope - Session 61 of User core. Feb 13 20:22:04.989805 sshd[3642]: pam_unix(sshd:session): session closed for user core Feb 13 20:22:04.992971 systemd[1]: sshd@60-10.0.0.8:22-10.0.0.1:47534.service: Deactivated successfully. Feb 13 20:22:04.995362 systemd[1]: session-61.scope: Deactivated successfully. Feb 13 20:22:04.996227 systemd-logind[1421]: Session 61 logged out. Waiting for processes to exit. Feb 13 20:22:04.997143 systemd-logind[1421]: Removed session 61. Feb 13 20:22:05.414203 kubelet[2436]: E0213 20:22:05.414171 2436 kubelet.go:3008] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Feb 13 20:22:10.000214 systemd[1]: Started sshd@61-10.0.0.8:22-10.0.0.1:47542.service - OpenSSH per-connection server daemon (10.0.0.1:47542). Feb 13 20:22:10.035463 sshd[3660]: Accepted publickey for core from 10.0.0.1 port 47542 ssh2: RSA SHA256:LaFXDQ5kJ4LcD/Es1CLb+6ve3k3017804ZdZMiXaUQg Feb 13 20:22:10.036633 sshd[3660]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 20:22:10.039975 systemd-logind[1421]: New session 62 of user core. Feb 13 20:22:10.051860 systemd[1]: Started session-62.scope - Session 62 of User core. Feb 13 20:22:10.160041 sshd[3660]: pam_unix(sshd:session): session closed for user core Feb 13 20:22:10.163188 systemd[1]: sshd@61-10.0.0.8:22-10.0.0.1:47542.service: Deactivated successfully. Feb 13 20:22:10.165537 systemd[1]: session-62.scope: Deactivated successfully. Feb 13 20:22:10.166264 systemd-logind[1421]: Session 62 logged out. Waiting for processes to exit. Feb 13 20:22:10.167083 systemd-logind[1421]: Removed session 62. 
Feb 13 20:22:10.309169 kubelet[2436]: E0213 20:22:10.308990 2436 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 20:22:10.309866 kubelet[2436]: E0213 20:22:10.309576 2436 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"install-cni-plugin\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/flannel/flannel-cni-plugin:v1.1.2\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/flannel/flannel-cni-plugin:v1.1.2\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/flannel/flannel-cni-plugin/manifests/sha256:bf4b62b131666d040f35a327d906ee5a3418280b68a88d9b9c7e828057210443: 429 Too Many Requests - Server message: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit\"" pod="kube-flannel/kube-flannel-ds-l2mj5" podUID="71c6d29d-8b7d-4b8f-92a3-710fe670a99c" Feb 13 20:22:10.415197 kubelet[2436]: E0213 20:22:10.415156 2436 kubelet.go:3008] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Feb 13 20:22:15.170433 systemd[1]: Started sshd@62-10.0.0.8:22-10.0.0.1:34442.service - OpenSSH per-connection server daemon (10.0.0.1:34442). Feb 13 20:22:15.205467 sshd[3677]: Accepted publickey for core from 10.0.0.1 port 34442 ssh2: RSA SHA256:LaFXDQ5kJ4LcD/Es1CLb+6ve3k3017804ZdZMiXaUQg Feb 13 20:22:15.206635 sshd[3677]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 20:22:15.210376 systemd-logind[1421]: New session 63 of user core. Feb 13 20:22:15.219850 systemd[1]: Started session-63.scope - Session 63 of User core. Feb 13 20:22:15.310087 kubelet[2436]: E0213 20:22:15.310058 2436 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 20:22:15.328802 sshd[3677]: pam_unix(sshd:session): session closed for user core Feb 13 20:22:15.331952 systemd[1]: sshd@62-10.0.0.8:22-10.0.0.1:34442.service: Deactivated successfully. Feb 13 20:22:15.333584 systemd[1]: session-63.scope: Deactivated successfully. Feb 13 20:22:15.334156 systemd-logind[1421]: Session 63 logged out. Waiting for processes to exit. Feb 13 20:22:15.334927 systemd-logind[1421]: Removed session 63. Feb 13 20:22:15.416518 kubelet[2436]: E0213 20:22:15.416469 2436 kubelet.go:3008] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Feb 13 20:22:20.344230 systemd[1]: Started sshd@63-10.0.0.8:22-10.0.0.1:34452.service - OpenSSH per-connection server daemon (10.0.0.1:34452). Feb 13 20:22:20.379628 sshd[3691]: Accepted publickey for core from 10.0.0.1 port 34452 ssh2: RSA SHA256:LaFXDQ5kJ4LcD/Es1CLb+6ve3k3017804ZdZMiXaUQg Feb 13 20:22:20.380813 sshd[3691]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 20:22:20.384670 systemd-logind[1421]: New session 64 of user core. Feb 13 20:22:20.392911 systemd[1]: Started session-64.scope - Session 64 of User core. 
Feb 13 20:22:20.417511 kubelet[2436]: E0213 20:22:20.417408 2436 kubelet.go:3008] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Feb 13 20:22:20.500080 sshd[3691]: pam_unix(sshd:session): session closed for user core Feb 13 20:22:20.503276 systemd[1]: sshd@63-10.0.0.8:22-10.0.0.1:34452.service: Deactivated successfully. Feb 13 20:22:20.505009 systemd[1]: session-64.scope: Deactivated successfully. Feb 13 20:22:20.506571 systemd-logind[1421]: Session 64 logged out. Waiting for processes to exit. Feb 13 20:22:20.507537 systemd-logind[1421]: Removed session 64. Feb 13 20:22:22.309211 kubelet[2436]: E0213 20:22:22.309162 2436 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 20:22:22.309975 kubelet[2436]: E0213 20:22:22.309901 2436 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"install-cni-plugin\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/flannel/flannel-cni-plugin:v1.1.2\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/flannel/flannel-cni-plugin:v1.1.2\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/flannel/flannel-cni-plugin/manifests/sha256:bf4b62b131666d040f35a327d906ee5a3418280b68a88d9b9c7e828057210443: 429 Too Many Requests - Server message: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit\"" pod="kube-flannel/kube-flannel-ds-l2mj5" podUID="71c6d29d-8b7d-4b8f-92a3-710fe670a99c" Feb 13 20:22:25.418470 kubelet[2436]: E0213 20:22:25.418422 2436 kubelet.go:3008] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Feb 13 20:22:25.510188 systemd[1]: Started sshd@64-10.0.0.8:22-10.0.0.1:48010.service - OpenSSH per-connection server daemon (10.0.0.1:48010). Feb 13 20:22:25.545136 sshd[3706]: Accepted publickey for core from 10.0.0.1 port 48010 ssh2: RSA SHA256:LaFXDQ5kJ4LcD/Es1CLb+6ve3k3017804ZdZMiXaUQg Feb 13 20:22:25.546382 sshd[3706]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 20:22:25.549775 systemd-logind[1421]: New session 65 of user core. Feb 13 20:22:25.560846 systemd[1]: Started session-65.scope - Session 65 of User core. Feb 13 20:22:25.665805 sshd[3706]: pam_unix(sshd:session): session closed for user core Feb 13 20:22:25.669254 systemd[1]: sshd@64-10.0.0.8:22-10.0.0.1:48010.service: Deactivated successfully. Feb 13 20:22:25.671364 systemd[1]: session-65.scope: Deactivated successfully. Feb 13 20:22:25.673115 systemd-logind[1421]: Session 65 logged out. Waiting for processes to exit. Feb 13 20:22:25.674056 systemd-logind[1421]: Removed session 65. 
Feb 13 20:22:29.308992 kubelet[2436]: E0213 20:22:29.308960 2436 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 20:22:30.309013 kubelet[2436]: E0213 20:22:30.308966 2436 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 20:22:30.309374 kubelet[2436]: E0213 20:22:30.309063 2436 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 20:22:30.419182 kubelet[2436]: E0213 20:22:30.419142 2436 kubelet.go:3008] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Feb 13 20:22:30.676176 systemd[1]: Started sshd@65-10.0.0.8:22-10.0.0.1:48014.service - OpenSSH per-connection server daemon (10.0.0.1:48014). Feb 13 20:22:30.711155 sshd[3720]: Accepted publickey for core from 10.0.0.1 port 48014 ssh2: RSA SHA256:LaFXDQ5kJ4LcD/Es1CLb+6ve3k3017804ZdZMiXaUQg Feb 13 20:22:30.712343 sshd[3720]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 20:22:30.716613 systemd-logind[1421]: New session 66 of user core. Feb 13 20:22:30.724836 systemd[1]: Started session-66.scope - Session 66 of User core. Feb 13 20:22:30.833429 sshd[3720]: pam_unix(sshd:session): session closed for user core Feb 13 20:22:30.836515 systemd[1]: sshd@65-10.0.0.8:22-10.0.0.1:48014.service: Deactivated successfully. Feb 13 20:22:30.838126 systemd[1]: session-66.scope: Deactivated successfully. Feb 13 20:22:30.838689 systemd-logind[1421]: Session 66 logged out. Waiting for processes to exit. Feb 13 20:22:30.839632 systemd-logind[1421]: Removed session 66. Feb 13 20:22:33.309457 kubelet[2436]: E0213 20:22:33.309420 2436 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 20:22:33.310613 kubelet[2436]: E0213 20:22:33.310363 2436 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"install-cni-plugin\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/flannel/flannel-cni-plugin:v1.1.2\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/flannel/flannel-cni-plugin:v1.1.2\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/flannel/flannel-cni-plugin/manifests/sha256:bf4b62b131666d040f35a327d906ee5a3418280b68a88d9b9c7e828057210443: 429 Too Many Requests - Server message: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit\"" pod="kube-flannel/kube-flannel-ds-l2mj5" podUID="71c6d29d-8b7d-4b8f-92a3-710fe670a99c" Feb 13 20:22:35.419719 kubelet[2436]: E0213 20:22:35.419675 2436 kubelet.go:3008] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Feb 13 20:22:35.843930 systemd[1]: Started sshd@66-10.0.0.8:22-10.0.0.1:57122.service - OpenSSH per-connection server daemon (10.0.0.1:57122). 
Feb 13 20:22:35.879051 sshd[3735]: Accepted publickey for core from 10.0.0.1 port 57122 ssh2: RSA SHA256:LaFXDQ5kJ4LcD/Es1CLb+6ve3k3017804ZdZMiXaUQg Feb 13 20:22:35.880207 sshd[3735]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 20:22:35.883780 systemd-logind[1421]: New session 67 of user core. Feb 13 20:22:35.893842 systemd[1]: Started session-67.scope - Session 67 of User core. Feb 13 20:22:36.001614 sshd[3735]: pam_unix(sshd:session): session closed for user core Feb 13 20:22:36.004977 systemd[1]: sshd@66-10.0.0.8:22-10.0.0.1:57122.service: Deactivated successfully. Feb 13 20:22:36.006613 systemd[1]: session-67.scope: Deactivated successfully. Feb 13 20:22:36.007812 systemd-logind[1421]: Session 67 logged out. Waiting for processes to exit. Feb 13 20:22:36.008572 systemd-logind[1421]: Removed session 67. Feb 13 20:22:40.420853 kubelet[2436]: E0213 20:22:40.420811 2436 kubelet.go:3008] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Feb 13 20:22:41.012187 systemd[1]: Started sshd@67-10.0.0.8:22-10.0.0.1:57138.service - OpenSSH per-connection server daemon (10.0.0.1:57138). Feb 13 20:22:41.048530 sshd[3749]: Accepted publickey for core from 10.0.0.1 port 57138 ssh2: RSA SHA256:LaFXDQ5kJ4LcD/Es1CLb+6ve3k3017804ZdZMiXaUQg Feb 13 20:22:41.049752 sshd[3749]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 20:22:41.054751 systemd-logind[1421]: New session 68 of user core. Feb 13 20:22:41.068846 systemd[1]: Started session-68.scope - Session 68 of User core. Feb 13 20:22:41.177347 sshd[3749]: pam_unix(sshd:session): session closed for user core Feb 13 20:22:41.180897 systemd[1]: sshd@67-10.0.0.8:22-10.0.0.1:57138.service: Deactivated successfully. Feb 13 20:22:41.183313 systemd[1]: session-68.scope: Deactivated successfully. Feb 13 20:22:41.184297 systemd-logind[1421]: Session 68 logged out. Waiting for processes to exit. Feb 13 20:22:41.185149 systemd-logind[1421]: Removed session 68. Feb 13 20:22:45.309996 kubelet[2436]: E0213 20:22:45.309628 2436 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 20:22:45.311245 containerd[1438]: time="2025-02-13T20:22:45.311200401Z" level=info msg="PullImage \"docker.io/flannel/flannel-cni-plugin:v1.1.2\"" Feb 13 20:22:45.421429 kubelet[2436]: E0213 20:22:45.421396 2436 kubelet.go:3008] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Feb 13 20:22:46.192170 systemd[1]: Started sshd@68-10.0.0.8:22-10.0.0.1:56828.service - OpenSSH per-connection server daemon (10.0.0.1:56828). Feb 13 20:22:46.227522 sshd[3766]: Accepted publickey for core from 10.0.0.1 port 56828 ssh2: RSA SHA256:LaFXDQ5kJ4LcD/Es1CLb+6ve3k3017804ZdZMiXaUQg Feb 13 20:22:46.228742 sshd[3766]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 20:22:46.232226 systemd-logind[1421]: New session 69 of user core. Feb 13 20:22:46.243916 systemd[1]: Started session-69.scope - Session 69 of User core. Feb 13 20:22:46.353146 sshd[3766]: pam_unix(sshd:session): session closed for user core Feb 13 20:22:46.356379 systemd[1]: sshd@68-10.0.0.8:22-10.0.0.1:56828.service: Deactivated successfully. 
Feb 13 20:22:46.357989 systemd[1]: session-69.scope: Deactivated successfully. Feb 13 20:22:46.358603 systemd-logind[1421]: Session 69 logged out. Waiting for processes to exit. Feb 13 20:22:46.359325 systemd-logind[1421]: Removed session 69. Feb 13 20:22:46.430088 containerd[1438]: time="2025-02-13T20:22:46.430035150Z" level=error msg="PullImage \"docker.io/flannel/flannel-cni-plugin:v1.1.2\" failed" error="failed to pull and unpack image \"docker.io/flannel/flannel-cni-plugin:v1.1.2\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/flannel/flannel-cni-plugin/manifests/sha256:bf4b62b131666d040f35a327d906ee5a3418280b68a88d9b9c7e828057210443: 429 Too Many Requests - Server message: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit" Feb 13 20:22:46.430395 containerd[1438]: time="2025-02-13T20:22:46.430139990Z" level=info msg="stop pulling image docker.io/flannel/flannel-cni-plugin:v1.1.2: active requests=0, bytes read=11110" Feb 13 20:22:46.430426 kubelet[2436]: E0213 20:22:46.430264 2436 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = failed to pull and unpack image \"docker.io/flannel/flannel-cni-plugin:v1.1.2\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/flannel/flannel-cni-plugin/manifests/sha256:bf4b62b131666d040f35a327d906ee5a3418280b68a88d9b9c7e828057210443: 429 Too Many Requests - Server message: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit" image="docker.io/flannel/flannel-cni-plugin:v1.1.2" Feb 13 20:22:46.430426 kubelet[2436]: E0213 20:22:46.430311 2436 kuberuntime_image.go:55] "Failed to pull image" err="failed to pull and unpack image \"docker.io/flannel/flannel-cni-plugin:v1.1.2\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/flannel/flannel-cni-plugin/manifests/sha256:bf4b62b131666d040f35a327d906ee5a3418280b68a88d9b9c7e828057210443: 429 Too Many Requests - Server message: toomanyrequests: You have reached your pull rate limit. 
You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit" image="docker.io/flannel/flannel-cni-plugin:v1.1.2" Feb 13 20:22:46.430759 kubelet[2436]: E0213 20:22:46.430407 2436 kuberuntime_manager.go:1341] "Unhandled Error" err="init container &Container{Name:install-cni-plugin,Image:docker.io/flannel/flannel-cni-plugin:v1.1.2,Command:[cp],Args:[-f /flannel /opt/cni/bin/flannel],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:cni-plugin,ReadOnly:false,MountPath:/opt/cni/bin,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-pvd57,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod kube-flannel-ds-l2mj5_kube-flannel(71c6d29d-8b7d-4b8f-92a3-710fe670a99c): ErrImagePull: failed to pull and unpack image \"docker.io/flannel/flannel-cni-plugin:v1.1.2\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/flannel/flannel-cni-plugin/manifests/sha256:bf4b62b131666d040f35a327d906ee5a3418280b68a88d9b9c7e828057210443: 429 Too Many Requests - Server message: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit" logger="UnhandledError" Feb 13 20:22:46.431628 kubelet[2436]: E0213 20:22:46.431573 2436 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"install-cni-plugin\" with ErrImagePull: \"failed to pull and unpack image \\\"docker.io/flannel/flannel-cni-plugin:v1.1.2\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/flannel/flannel-cni-plugin/manifests/sha256:bf4b62b131666d040f35a327d906ee5a3418280b68a88d9b9c7e828057210443: 429 Too Many Requests - Server message: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit\"" pod="kube-flannel/kube-flannel-ds-l2mj5" podUID="71c6d29d-8b7d-4b8f-92a3-710fe670a99c" Feb 13 20:22:50.422174 kubelet[2436]: E0213 20:22:50.422114 2436 kubelet.go:3008] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Feb 13 20:22:51.363403 systemd[1]: Started sshd@69-10.0.0.8:22-10.0.0.1:56834.service - OpenSSH per-connection server daemon (10.0.0.1:56834). Feb 13 20:22:51.403719 sshd[3782]: Accepted publickey for core from 10.0.0.1 port 56834 ssh2: RSA SHA256:LaFXDQ5kJ4LcD/Es1CLb+6ve3k3017804ZdZMiXaUQg Feb 13 20:22:51.404913 sshd[3782]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 20:22:51.408762 systemd-logind[1421]: New session 70 of user core. Feb 13 20:22:51.417917 systemd[1]: Started session-70.scope - Session 70 of User core. 
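The failed pulls above are HTTP 429 responses from Docker Hub's anonymous pull rate limit, not a networking problem. The sketch below queries the current limit and remaining quota using Docker's documented rate-limit preview image; treat the exact URLs and header names as assumptions to verify against current Docker Hub documentation rather than facts taken from this log:

```python
# Hypothetical check of the anonymous Docker Hub pull limit behind the
# 429 "toomanyrequests" errors above. Endpoints follow Docker's documented
# rate-limit check against the ratelimitpreview/test image; treat as a sketch.
import json
import urllib.request

TOKEN_URL = ("https://auth.docker.io/token"
             "?service=registry.docker.io"
             "&scope=repository:ratelimitpreview/test:pull")
MANIFEST_URL = "https://registry-1.docker.io/v2/ratelimitpreview/test/manifests/latest"

def check_pull_limit() -> None:
    with urllib.request.urlopen(TOKEN_URL) as resp:
        token = json.load(resp)["token"]
    req = urllib.request.Request(
        MANIFEST_URL, method="HEAD",
        headers={"Authorization": f"Bearer {token}"},
    )
    with urllib.request.urlopen(req) as resp:
        # Values typically look like "100;w=21600" (pulls per 6-hour window).
        print("ratelimit-limit:    ", resp.headers.get("ratelimit-limit"))
        print("ratelimit-remaining:", resp.headers.get("ratelimit-remaining"))
        print("ratelimit-source:   ", resp.headers.get("docker-ratelimit-source"))

if __name__ == "__main__":
    check_pull_limit()
```

Authenticated pulls (for example via imagePullSecrets) or mirroring the image to a registry you control are the usual ways to break this loop.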
Feb 13 20:22:51.523784 sshd[3782]: pam_unix(sshd:session): session closed for user core Feb 13 20:22:51.527073 systemd[1]: sshd@69-10.0.0.8:22-10.0.0.1:56834.service: Deactivated successfully. Feb 13 20:22:51.529379 systemd[1]: session-70.scope: Deactivated successfully. Feb 13 20:22:51.530087 systemd-logind[1421]: Session 70 logged out. Waiting for processes to exit. Feb 13 20:22:51.531317 systemd-logind[1421]: Removed session 70. Feb 13 20:22:55.423018 kubelet[2436]: E0213 20:22:55.422897 2436 kubelet.go:3008] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Feb 13 20:22:56.539266 systemd[1]: Started sshd@70-10.0.0.8:22-10.0.0.1:38924.service - OpenSSH per-connection server daemon (10.0.0.1:38924). Feb 13 20:22:56.575059 sshd[3797]: Accepted publickey for core from 10.0.0.1 port 38924 ssh2: RSA SHA256:LaFXDQ5kJ4LcD/Es1CLb+6ve3k3017804ZdZMiXaUQg Feb 13 20:22:56.576298 sshd[3797]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 20:22:56.580187 systemd-logind[1421]: New session 71 of user core. Feb 13 20:22:56.593850 systemd[1]: Started session-71.scope - Session 71 of User core. Feb 13 20:22:56.704403 sshd[3797]: pam_unix(sshd:session): session closed for user core Feb 13 20:22:56.707468 systemd[1]: sshd@70-10.0.0.8:22-10.0.0.1:38924.service: Deactivated successfully. Feb 13 20:22:56.709796 systemd[1]: session-71.scope: Deactivated successfully. Feb 13 20:22:56.710647 systemd-logind[1421]: Session 71 logged out. Waiting for processes to exit. Feb 13 20:22:56.711609 systemd-logind[1421]: Removed session 71. Feb 13 20:22:57.309095 kubelet[2436]: E0213 20:22:57.308912 2436 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 20:22:57.309725 kubelet[2436]: E0213 20:22:57.309661 2436 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"install-cni-plugin\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/flannel/flannel-cni-plugin:v1.1.2\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/flannel/flannel-cni-plugin:v1.1.2\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/flannel/flannel-cni-plugin/manifests/sha256:bf4b62b131666d040f35a327d906ee5a3418280b68a88d9b9c7e828057210443: 429 Too Many Requests - Server message: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit\"" pod="kube-flannel/kube-flannel-ds-l2mj5" podUID="71c6d29d-8b7d-4b8f-92a3-710fe670a99c" Feb 13 20:23:00.424556 kubelet[2436]: E0213 20:23:00.424511 2436 kubelet.go:3008] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Feb 13 20:23:01.719421 systemd[1]: Started sshd@71-10.0.0.8:22-10.0.0.1:38930.service - OpenSSH per-connection server daemon (10.0.0.1:38930). Feb 13 20:23:01.755238 sshd[3811]: Accepted publickey for core from 10.0.0.1 port 38930 ssh2: RSA SHA256:LaFXDQ5kJ4LcD/Es1CLb+6ve3k3017804ZdZMiXaUQg Feb 13 20:23:01.756446 sshd[3811]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 20:23:01.760487 systemd-logind[1421]: New session 72 of user core. 
Feb 13 20:23:01.769866 systemd[1]: Started session-72.scope - Session 72 of User core. Feb 13 20:23:01.877453 sshd[3811]: pam_unix(sshd:session): session closed for user core Feb 13 20:23:01.880041 systemd[1]: sshd@71-10.0.0.8:22-10.0.0.1:38930.service: Deactivated successfully. Feb 13 20:23:01.881679 systemd[1]: session-72.scope: Deactivated successfully. Feb 13 20:23:01.883159 systemd-logind[1421]: Session 72 logged out. Waiting for processes to exit. Feb 13 20:23:01.884144 systemd-logind[1421]: Removed session 72. Feb 13 20:23:05.425375 kubelet[2436]: E0213 20:23:05.425336 2436 kubelet.go:3008] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Feb 13 20:23:06.888328 systemd[1]: Started sshd@72-10.0.0.8:22-10.0.0.1:47696.service - OpenSSH per-connection server daemon (10.0.0.1:47696). Feb 13 20:23:06.923752 sshd[3829]: Accepted publickey for core from 10.0.0.1 port 47696 ssh2: RSA SHA256:LaFXDQ5kJ4LcD/Es1CLb+6ve3k3017804ZdZMiXaUQg Feb 13 20:23:06.925043 sshd[3829]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 20:23:06.928764 systemd-logind[1421]: New session 73 of user core. Feb 13 20:23:06.935840 systemd[1]: Started session-73.scope - Session 73 of User core. Feb 13 20:23:07.043180 sshd[3829]: pam_unix(sshd:session): session closed for user core Feb 13 20:23:07.046914 systemd[1]: sshd@72-10.0.0.8:22-10.0.0.1:47696.service: Deactivated successfully. Feb 13 20:23:07.049082 systemd[1]: session-73.scope: Deactivated successfully. Feb 13 20:23:07.049744 systemd-logind[1421]: Session 73 logged out. Waiting for processes to exit. Feb 13 20:23:07.051031 systemd-logind[1421]: Removed session 73. Feb 13 20:23:10.426453 kubelet[2436]: E0213 20:23:10.426419 2436 kubelet.go:3008] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Feb 13 20:23:11.308959 kubelet[2436]: E0213 20:23:11.308869 2436 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 20:23:11.309819 kubelet[2436]: E0213 20:23:11.309675 2436 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"install-cni-plugin\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/flannel/flannel-cni-plugin:v1.1.2\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/flannel/flannel-cni-plugin:v1.1.2\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/flannel/flannel-cni-plugin/manifests/sha256:bf4b62b131666d040f35a327d906ee5a3418280b68a88d9b9c7e828057210443: 429 Too Many Requests - Server message: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit\"" pod="kube-flannel/kube-flannel-ds-l2mj5" podUID="71c6d29d-8b7d-4b8f-92a3-710fe670a99c" Feb 13 20:23:12.054815 systemd[1]: Started sshd@73-10.0.0.8:22-10.0.0.1:47702.service - OpenSSH per-connection server daemon (10.0.0.1:47702). 
Feb 13 20:23:12.089936 sshd[3843]: Accepted publickey for core from 10.0.0.1 port 47702 ssh2: RSA SHA256:LaFXDQ5kJ4LcD/Es1CLb+6ve3k3017804ZdZMiXaUQg Feb 13 20:23:12.091158 sshd[3843]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 20:23:12.094696 systemd-logind[1421]: New session 74 of user core. Feb 13 20:23:12.102847 systemd[1]: Started session-74.scope - Session 74 of User core. Feb 13 20:23:12.210257 sshd[3843]: pam_unix(sshd:session): session closed for user core Feb 13 20:23:12.213344 systemd[1]: sshd@73-10.0.0.8:22-10.0.0.1:47702.service: Deactivated successfully. Feb 13 20:23:12.215886 systemd[1]: session-74.scope: Deactivated successfully. Feb 13 20:23:12.216848 systemd-logind[1421]: Session 74 logged out. Waiting for processes to exit. Feb 13 20:23:12.217725 systemd-logind[1421]: Removed session 74. Feb 13 20:23:15.427596 kubelet[2436]: E0213 20:23:15.427510 2436 kubelet.go:3008] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Feb 13 20:23:17.233941 systemd[1]: Started sshd@74-10.0.0.8:22-10.0.0.1:42916.service - OpenSSH per-connection server daemon (10.0.0.1:42916). Feb 13 20:23:17.265551 sshd[3859]: Accepted publickey for core from 10.0.0.1 port 42916 ssh2: RSA SHA256:LaFXDQ5kJ4LcD/Es1CLb+6ve3k3017804ZdZMiXaUQg Feb 13 20:23:17.266782 sshd[3859]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 20:23:17.270057 systemd-logind[1421]: New session 75 of user core. Feb 13 20:23:17.279855 systemd[1]: Started session-75.scope - Session 75 of User core. Feb 13 20:23:17.388078 sshd[3859]: pam_unix(sshd:session): session closed for user core Feb 13 20:23:17.392107 systemd[1]: sshd@74-10.0.0.8:22-10.0.0.1:42916.service: Deactivated successfully. Feb 13 20:23:17.393855 systemd[1]: session-75.scope: Deactivated successfully. Feb 13 20:23:17.394455 systemd-logind[1421]: Session 75 logged out. Waiting for processes to exit. Feb 13 20:23:17.395213 systemd-logind[1421]: Removed session 75. Feb 13 20:23:20.429055 kubelet[2436]: E0213 20:23:20.429009 2436 kubelet.go:3008] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Feb 13 20:23:21.133751 update_engine[1428]: I20250213 20:23:21.133648 1428 prefs.cc:52] certificate-report-to-send-update not present in /var/lib/update_engine/prefs Feb 13 20:23:21.133751 update_engine[1428]: I20250213 20:23:21.133740 1428 prefs.cc:52] certificate-report-to-send-download not present in /var/lib/update_engine/prefs Feb 13 20:23:21.134119 update_engine[1428]: I20250213 20:23:21.134006 1428 prefs.cc:52] aleph-version not present in /var/lib/update_engine/prefs Feb 13 20:23:21.134388 update_engine[1428]: I20250213 20:23:21.134348 1428 omaha_request_params.cc:62] Current group set to lts Feb 13 20:23:21.134703 update_engine[1428]: I20250213 20:23:21.134442 1428 update_attempter.cc:499] Already updated boot flags. Skipping. Feb 13 20:23:21.134703 update_engine[1428]: I20250213 20:23:21.134454 1428 update_attempter.cc:643] Scheduling an action processor start. 
Feb 13 20:23:21.134703 update_engine[1428]: I20250213 20:23:21.134469 1428 action_processor.cc:36] ActionProcessor::StartProcessing: OmahaRequestAction Feb 13 20:23:21.134703 update_engine[1428]: I20250213 20:23:21.134496 1428 prefs.cc:52] previous-version not present in /var/lib/update_engine/prefs Feb 13 20:23:21.134703 update_engine[1428]: I20250213 20:23:21.134539 1428 omaha_request_action.cc:271] Posting an Omaha request to disabled Feb 13 20:23:21.134703 update_engine[1428]: I20250213 20:23:21.134546 1428 omaha_request_action.cc:272] Request: Feb 13 20:23:21.134703 update_engine[1428]: Feb 13 20:23:21.134703 update_engine[1428]: Feb 13 20:23:21.134703 update_engine[1428]: Feb 13 20:23:21.134703 update_engine[1428]: Feb 13 20:23:21.134703 update_engine[1428]: Feb 13 20:23:21.134703 update_engine[1428]: Feb 13 20:23:21.134703 update_engine[1428]: Feb 13 20:23:21.134703 update_engine[1428]: Feb 13 20:23:21.134703 update_engine[1428]: I20250213 20:23:21.134552 1428 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Feb 13 20:23:21.135016 locksmithd[1463]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_CHECKING_FOR_UPDATE" NewVersion=0.0.0 NewSize=0 Feb 13 20:23:21.135586 update_engine[1428]: I20250213 20:23:21.135546 1428 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Feb 13 20:23:21.135824 update_engine[1428]: I20250213 20:23:21.135794 1428 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. Feb 13 20:23:21.139651 update_engine[1428]: E20250213 20:23:21.139613 1428 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Feb 13 20:23:21.139692 update_engine[1428]: I20250213 20:23:21.139678 1428 libcurl_http_fetcher.cc:283] No HTTP response, retry 1 Feb 13 20:23:22.399234 systemd[1]: Started sshd@75-10.0.0.8:22-10.0.0.1:42928.service - OpenSSH per-connection server daemon (10.0.0.1:42928). Feb 13 20:23:22.434491 sshd[3873]: Accepted publickey for core from 10.0.0.1 port 42928 ssh2: RSA SHA256:LaFXDQ5kJ4LcD/Es1CLb+6ve3k3017804ZdZMiXaUQg Feb 13 20:23:22.435665 sshd[3873]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 20:23:22.439204 systemd-logind[1421]: New session 76 of user core. Feb 13 20:23:22.448845 systemd[1]: Started session-76.scope - Session 76 of User core. Feb 13 20:23:22.556662 sshd[3873]: pam_unix(sshd:session): session closed for user core Feb 13 20:23:22.559764 systemd[1]: sshd@75-10.0.0.8:22-10.0.0.1:42928.service: Deactivated successfully. Feb 13 20:23:22.562065 systemd[1]: session-76.scope: Deactivated successfully. Feb 13 20:23:22.562761 systemd-logind[1421]: Session 76 logged out. Waiting for processes to exit. Feb 13 20:23:22.563530 systemd-logind[1421]: Removed session 76. 
Feb 13 20:23:24.308865 kubelet[2436]: E0213 20:23:24.308818 2436 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 20:23:24.309477 kubelet[2436]: E0213 20:23:24.309451 2436 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"install-cni-plugin\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/flannel/flannel-cni-plugin:v1.1.2\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/flannel/flannel-cni-plugin:v1.1.2\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/flannel/flannel-cni-plugin/manifests/sha256:bf4b62b131666d040f35a327d906ee5a3418280b68a88d9b9c7e828057210443: 429 Too Many Requests - Server message: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit\"" pod="kube-flannel/kube-flannel-ds-l2mj5" podUID="71c6d29d-8b7d-4b8f-92a3-710fe670a99c" Feb 13 20:23:25.430728 kubelet[2436]: E0213 20:23:25.430678 2436 kubelet.go:3008] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Feb 13 20:23:27.567250 systemd[1]: Started sshd@76-10.0.0.8:22-10.0.0.1:53802.service - OpenSSH per-connection server daemon (10.0.0.1:53802). Feb 13 20:23:27.602101 sshd[3889]: Accepted publickey for core from 10.0.0.1 port 53802 ssh2: RSA SHA256:LaFXDQ5kJ4LcD/Es1CLb+6ve3k3017804ZdZMiXaUQg Feb 13 20:23:27.603262 sshd[3889]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 20:23:27.606543 systemd-logind[1421]: New session 77 of user core. Feb 13 20:23:27.616851 systemd[1]: Started session-77.scope - Session 77 of User core. Feb 13 20:23:27.724254 sshd[3889]: pam_unix(sshd:session): session closed for user core Feb 13 20:23:27.727518 systemd[1]: sshd@76-10.0.0.8:22-10.0.0.1:53802.service: Deactivated successfully. Feb 13 20:23:27.729105 systemd[1]: session-77.scope: Deactivated successfully. Feb 13 20:23:27.730262 systemd-logind[1421]: Session 77 logged out. Waiting for processes to exit. Feb 13 20:23:27.731146 systemd-logind[1421]: Removed session 77. Feb 13 20:23:29.309799 kubelet[2436]: E0213 20:23:29.309521 2436 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 20:23:30.432002 kubelet[2436]: E0213 20:23:30.431961 2436 kubelet.go:3008] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Feb 13 20:23:31.133871 update_engine[1428]: I20250213 20:23:31.133791 1428 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Feb 13 20:23:31.134184 update_engine[1428]: I20250213 20:23:31.134050 1428 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Feb 13 20:23:31.134278 update_engine[1428]: I20250213 20:23:31.134208 1428 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. 
Feb 13 20:23:31.140728 update_engine[1428]: E20250213 20:23:31.140661 1428 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Feb 13 20:23:31.140797 update_engine[1428]: I20250213 20:23:31.140738 1428 libcurl_http_fetcher.cc:283] No HTTP response, retry 2 Feb 13 20:23:32.735210 systemd[1]: Started sshd@77-10.0.0.8:22-10.0.0.1:50544.service - OpenSSH per-connection server daemon (10.0.0.1:50544). Feb 13 20:23:32.770240 sshd[3904]: Accepted publickey for core from 10.0.0.1 port 50544 ssh2: RSA SHA256:LaFXDQ5kJ4LcD/Es1CLb+6ve3k3017804ZdZMiXaUQg Feb 13 20:23:32.771396 sshd[3904]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 20:23:32.775189 systemd-logind[1421]: New session 78 of user core. Feb 13 20:23:32.784857 systemd[1]: Started session-78.scope - Session 78 of User core. Feb 13 20:23:32.892184 sshd[3904]: pam_unix(sshd:session): session closed for user core Feb 13 20:23:32.902099 systemd[1]: sshd@77-10.0.0.8:22-10.0.0.1:50544.service: Deactivated successfully. Feb 13 20:23:32.903548 systemd[1]: session-78.scope: Deactivated successfully. Feb 13 20:23:32.904850 systemd-logind[1421]: Session 78 logged out. Waiting for processes to exit. Feb 13 20:23:32.913976 systemd[1]: Started sshd@78-10.0.0.8:22-10.0.0.1:50550.service - OpenSSH per-connection server daemon (10.0.0.1:50550). Feb 13 20:23:32.915078 systemd-logind[1421]: Removed session 78. Feb 13 20:23:32.946050 sshd[3919]: Accepted publickey for core from 10.0.0.1 port 50550 ssh2: RSA SHA256:LaFXDQ5kJ4LcD/Es1CLb+6ve3k3017804ZdZMiXaUQg Feb 13 20:23:32.947334 sshd[3919]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 20:23:32.950793 systemd-logind[1421]: New session 79 of user core. Feb 13 20:23:32.957893 systemd[1]: Started session-79.scope - Session 79 of User core. Feb 13 20:23:33.130878 sshd[3919]: pam_unix(sshd:session): session closed for user core Feb 13 20:23:33.137001 systemd[1]: sshd@78-10.0.0.8:22-10.0.0.1:50550.service: Deactivated successfully. Feb 13 20:23:33.138316 systemd[1]: session-79.scope: Deactivated successfully. Feb 13 20:23:33.139504 systemd-logind[1421]: Session 79 logged out. Waiting for processes to exit. Feb 13 20:23:33.140681 systemd[1]: Started sshd@79-10.0.0.8:22-10.0.0.1:50562.service - OpenSSH per-connection server daemon (10.0.0.1:50562). Feb 13 20:23:33.141387 systemd-logind[1421]: Removed session 79. Feb 13 20:23:33.175888 sshd[3932]: Accepted publickey for core from 10.0.0.1 port 50562 ssh2: RSA SHA256:LaFXDQ5kJ4LcD/Es1CLb+6ve3k3017804ZdZMiXaUQg Feb 13 20:23:33.177501 sshd[3932]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 20:23:33.180802 systemd-logind[1421]: New session 80 of user core. Feb 13 20:23:33.186911 systemd[1]: Started session-80.scope - Session 80 of User core. Feb 13 20:23:33.309163 kubelet[2436]: E0213 20:23:33.308822 2436 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 20:23:33.764432 sshd[3932]: pam_unix(sshd:session): session closed for user core Feb 13 20:23:33.772746 systemd[1]: sshd@79-10.0.0.8:22-10.0.0.1:50562.service: Deactivated successfully. Feb 13 20:23:33.774842 systemd[1]: session-80.scope: Deactivated successfully. Feb 13 20:23:33.776404 systemd-logind[1421]: Session 80 logged out. Waiting for processes to exit. 
Feb 13 20:23:33.786977 systemd[1]: Started sshd@80-10.0.0.8:22-10.0.0.1:50572.service - OpenSSH per-connection server daemon (10.0.0.1:50572). Feb 13 20:23:33.787956 systemd-logind[1421]: Removed session 80. Feb 13 20:23:33.819615 sshd[3953]: Accepted publickey for core from 10.0.0.1 port 50572 ssh2: RSA SHA256:LaFXDQ5kJ4LcD/Es1CLb+6ve3k3017804ZdZMiXaUQg Feb 13 20:23:33.820877 sshd[3953]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 20:23:33.824375 systemd-logind[1421]: New session 81 of user core. Feb 13 20:23:33.832836 systemd[1]: Started session-81.scope - Session 81 of User core. Feb 13 20:23:34.043871 sshd[3953]: pam_unix(sshd:session): session closed for user core Feb 13 20:23:34.054581 systemd[1]: sshd@80-10.0.0.8:22-10.0.0.1:50572.service: Deactivated successfully. Feb 13 20:23:34.056080 systemd[1]: session-81.scope: Deactivated successfully. Feb 13 20:23:34.057271 systemd-logind[1421]: Session 81 logged out. Waiting for processes to exit. Feb 13 20:23:34.058404 systemd[1]: Started sshd@81-10.0.0.8:22-10.0.0.1:50582.service - OpenSSH per-connection server daemon (10.0.0.1:50582). Feb 13 20:23:34.059268 systemd-logind[1421]: Removed session 81. Feb 13 20:23:34.093209 sshd[3965]: Accepted publickey for core from 10.0.0.1 port 50582 ssh2: RSA SHA256:LaFXDQ5kJ4LcD/Es1CLb+6ve3k3017804ZdZMiXaUQg Feb 13 20:23:34.094366 sshd[3965]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 20:23:34.097765 systemd-logind[1421]: New session 82 of user core. Feb 13 20:23:34.108835 systemd[1]: Started session-82.scope - Session 82 of User core. Feb 13 20:23:34.216153 sshd[3965]: pam_unix(sshd:session): session closed for user core Feb 13 20:23:34.219389 systemd[1]: sshd@81-10.0.0.8:22-10.0.0.1:50582.service: Deactivated successfully. Feb 13 20:23:34.221262 systemd[1]: session-82.scope: Deactivated successfully. Feb 13 20:23:34.221947 systemd-logind[1421]: Session 82 logged out. Waiting for processes to exit. Feb 13 20:23:34.222807 systemd-logind[1421]: Removed session 82. Feb 13 20:23:35.433154 kubelet[2436]: E0213 20:23:35.433100 2436 kubelet.go:3008] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Feb 13 20:23:38.309188 kubelet[2436]: E0213 20:23:38.309138 2436 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 20:23:38.310120 kubelet[2436]: E0213 20:23:38.309919 2436 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"install-cni-plugin\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/flannel/flannel-cni-plugin:v1.1.2\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/flannel/flannel-cni-plugin:v1.1.2\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/flannel/flannel-cni-plugin/manifests/sha256:bf4b62b131666d040f35a327d906ee5a3418280b68a88d9b9c7e828057210443: 429 Too Many Requests - Server message: toomanyrequests: You have reached your pull rate limit. 
You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit\"" pod="kube-flannel/kube-flannel-ds-l2mj5" podUID="71c6d29d-8b7d-4b8f-92a3-710fe670a99c" Feb 13 20:23:39.226247 systemd[1]: Started sshd@82-10.0.0.8:22-10.0.0.1:50590.service - OpenSSH per-connection server daemon (10.0.0.1:50590). Feb 13 20:23:39.261519 sshd[3979]: Accepted publickey for core from 10.0.0.1 port 50590 ssh2: RSA SHA256:LaFXDQ5kJ4LcD/Es1CLb+6ve3k3017804ZdZMiXaUQg Feb 13 20:23:39.262644 sshd[3979]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 20:23:39.267815 systemd-logind[1421]: New session 83 of user core. Feb 13 20:23:39.276943 systemd[1]: Started session-83.scope - Session 83 of User core. Feb 13 20:23:39.309559 kubelet[2436]: E0213 20:23:39.309218 2436 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 20:23:39.381202 sshd[3979]: pam_unix(sshd:session): session closed for user core Feb 13 20:23:39.384342 systemd[1]: sshd@82-10.0.0.8:22-10.0.0.1:50590.service: Deactivated successfully. Feb 13 20:23:39.386101 systemd[1]: session-83.scope: Deactivated successfully. Feb 13 20:23:39.386642 systemd-logind[1421]: Session 83 logged out. Waiting for processes to exit. Feb 13 20:23:39.387480 systemd-logind[1421]: Removed session 83. Feb 13 20:23:40.434238 kubelet[2436]: E0213 20:23:40.434198 2436 kubelet.go:3008] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Feb 13 20:23:41.131969 update_engine[1428]: I20250213 20:23:41.131881 1428 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Feb 13 20:23:41.132310 update_engine[1428]: I20250213 20:23:41.132191 1428 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Feb 13 20:23:41.132389 update_engine[1428]: I20250213 20:23:41.132352 1428 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. Feb 13 20:23:41.136035 update_engine[1428]: E20250213 20:23:41.135998 1428 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Feb 13 20:23:41.136080 update_engine[1428]: I20250213 20:23:41.136050 1428 libcurl_http_fetcher.cc:283] No HTTP response, retry 3 Feb 13 20:23:44.392211 systemd[1]: Started sshd@83-10.0.0.8:22-10.0.0.1:37474.service - OpenSSH per-connection server daemon (10.0.0.1:37474). Feb 13 20:23:44.427428 sshd[3998]: Accepted publickey for core from 10.0.0.1 port 37474 ssh2: RSA SHA256:LaFXDQ5kJ4LcD/Es1CLb+6ve3k3017804ZdZMiXaUQg Feb 13 20:23:44.428634 sshd[3998]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 20:23:44.432584 systemd-logind[1421]: New session 84 of user core. Feb 13 20:23:44.447952 systemd[1]: Started session-84.scope - Session 84 of User core. Feb 13 20:23:44.554779 sshd[3998]: pam_unix(sshd:session): session closed for user core Feb 13 20:23:44.558035 systemd[1]: sshd@83-10.0.0.8:22-10.0.0.1:37474.service: Deactivated successfully. Feb 13 20:23:44.560069 systemd[1]: session-84.scope: Deactivated successfully. Feb 13 20:23:44.560717 systemd-logind[1421]: Session 84 logged out. Waiting for processes to exit. Feb 13 20:23:44.561684 systemd-logind[1421]: Removed session 84. 
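Between real pull attempts, kubelet just logs the ImagePullBackOff message on every pod sync, as seen above. For a rough sense of the retry cadence, the sketch below assumes the commonly cited kubelet defaults of a 10-second initial delay, doubled per failure and capped at five minutes; the constants are assumptions, not values read from this log:

```python
# Back-of-the-envelope view of an exponential image pull back-off.
# INITIAL_DELAY_S and MAX_DELAY_S are assumed defaults, not taken from the log.
INITIAL_DELAY_S = 10
MAX_DELAY_S = 300  # 5 minutes

def backoff_schedule(failures: int) -> list[int]:
    delays, delay = [], INITIAL_DELAY_S
    for _ in range(failures):
        delays.append(delay)
        delay = min(delay * 2, MAX_DELAY_S)
    return delays

if __name__ == "__main__":
    sched = backoff_schedule(8)
    print("delays between pull attempts (s):", sched)  # 10, 20, 40, ... then 300 repeated
    print("total wait across those attempts (s):", sum(sched))
```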
Feb 13 20:23:45.435603 kubelet[2436]: E0213 20:23:45.435550 2436 kubelet.go:3008] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Feb 13 20:23:49.309558 kubelet[2436]: E0213 20:23:49.309519 2436 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 20:23:49.565271 systemd[1]: Started sshd@84-10.0.0.8:22-10.0.0.1:37476.service - OpenSSH per-connection server daemon (10.0.0.1:37476). Feb 13 20:23:49.600703 sshd[4012]: Accepted publickey for core from 10.0.0.1 port 37476 ssh2: RSA SHA256:LaFXDQ5kJ4LcD/Es1CLb+6ve3k3017804ZdZMiXaUQg Feb 13 20:23:49.601897 sshd[4012]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 20:23:49.605769 systemd-logind[1421]: New session 85 of user core. Feb 13 20:23:49.612845 systemd[1]: Started session-85.scope - Session 85 of User core. Feb 13 20:23:49.717113 sshd[4012]: pam_unix(sshd:session): session closed for user core Feb 13 20:23:49.720084 systemd[1]: sshd@84-10.0.0.8:22-10.0.0.1:37476.service: Deactivated successfully. Feb 13 20:23:49.721675 systemd[1]: session-85.scope: Deactivated successfully. Feb 13 20:23:49.722919 systemd-logind[1421]: Session 85 logged out. Waiting for processes to exit. Feb 13 20:23:49.723661 systemd-logind[1421]: Removed session 85. Feb 13 20:23:50.436554 kubelet[2436]: E0213 20:23:50.436495 2436 kubelet.go:3008] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Feb 13 20:23:51.131566 update_engine[1428]: I20250213 20:23:51.131482 1428 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Feb 13 20:23:51.131925 update_engine[1428]: I20250213 20:23:51.131821 1428 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Feb 13 20:23:51.132083 update_engine[1428]: I20250213 20:23:51.131978 1428 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. Feb 13 20:23:51.172045 update_engine[1428]: E20250213 20:23:51.171986 1428 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Feb 13 20:23:51.172132 update_engine[1428]: I20250213 20:23:51.172055 1428 libcurl_http_fetcher.cc:297] Transfer resulted in an error (0), 0 bytes downloaded Feb 13 20:23:51.172132 update_engine[1428]: I20250213 20:23:51.172065 1428 omaha_request_action.cc:617] Omaha request response: Feb 13 20:23:51.172180 update_engine[1428]: E20250213 20:23:51.172152 1428 omaha_request_action.cc:636] Omaha request network transfer failed. Feb 13 20:23:51.172180 update_engine[1428]: I20250213 20:23:51.172170 1428 action_processor.cc:68] ActionProcessor::ActionComplete: OmahaRequestAction action failed. Aborting processing. Feb 13 20:23:51.172180 update_engine[1428]: I20250213 20:23:51.172176 1428 action_processor.cc:73] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction Feb 13 20:23:51.172239 update_engine[1428]: I20250213 20:23:51.172181 1428 update_attempter.cc:306] Processing Done. Feb 13 20:23:51.172239 update_engine[1428]: E20250213 20:23:51.172195 1428 update_attempter.cc:619] Update failed. 
Feb 13 20:23:51.172239 update_engine[1428]: I20250213 20:23:51.172200 1428 utils.cc:600] Converting error code 2000 to kActionCodeOmahaErrorInHTTPResponse Feb 13 20:23:51.172239 update_engine[1428]: I20250213 20:23:51.172205 1428 payload_state.cc:97] Updating payload state for error code: 37 (kActionCodeOmahaErrorInHTTPResponse) Feb 13 20:23:51.172239 update_engine[1428]: I20250213 20:23:51.172210 1428 payload_state.cc:103] Ignoring failures until we get a valid Omaha response. Feb 13 20:23:51.172333 update_engine[1428]: I20250213 20:23:51.172274 1428 action_processor.cc:36] ActionProcessor::StartProcessing: OmahaRequestAction Feb 13 20:23:51.172333 update_engine[1428]: I20250213 20:23:51.172294 1428 omaha_request_action.cc:271] Posting an Omaha request to disabled Feb 13 20:23:51.172333 update_engine[1428]: I20250213 20:23:51.172300 1428 omaha_request_action.cc:272] Request: Feb 13 20:23:51.172333 update_engine[1428]: Feb 13 20:23:51.172333 update_engine[1428]: Feb 13 20:23:51.172333 update_engine[1428]: Feb 13 20:23:51.172333 update_engine[1428]: Feb 13 20:23:51.172333 update_engine[1428]: Feb 13 20:23:51.172333 update_engine[1428]: Feb 13 20:23:51.172333 update_engine[1428]: I20250213 20:23:51.172305 1428 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Feb 13 20:23:51.172516 update_engine[1428]: I20250213 20:23:51.172459 1428 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Feb 13 20:23:51.172742 update_engine[1428]: I20250213 20:23:51.172590 1428 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. Feb 13 20:23:51.172791 locksmithd[1463]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_REPORTING_ERROR_EVENT" NewVersion=0.0.0 NewSize=0 Feb 13 20:23:51.208886 update_engine[1428]: E20250213 20:23:51.208827 1428 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Feb 13 20:23:51.208886 update_engine[1428]: I20250213 20:23:51.208884 1428 libcurl_http_fetcher.cc:297] Transfer resulted in an error (0), 0 bytes downloaded Feb 13 20:23:51.208886 update_engine[1428]: I20250213 20:23:51.208894 1428 omaha_request_action.cc:617] Omaha request response: Feb 13 20:23:51.208999 update_engine[1428]: I20250213 20:23:51.208900 1428 action_processor.cc:65] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction Feb 13 20:23:51.208999 update_engine[1428]: I20250213 20:23:51.208905 1428 action_processor.cc:73] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction Feb 13 20:23:51.208999 update_engine[1428]: I20250213 20:23:51.208909 1428 update_attempter.cc:306] Processing Done. Feb 13 20:23:51.208999 update_engine[1428]: I20250213 20:23:51.208914 1428 update_attempter.cc:310] Error event sent. 
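The update_engine failure above is expected rather than a fault: the Omaha request is posted to the literal host "disabled", which suggests the update server was deliberately set to a placeholder value, so "Could not resolve host: disabled" and the resulting kActionCodeOmahaErrorInHTTPResponse are the normal outcome. A hypothetical helper to summarize this activity from a saved journal excerpt (the filename is an assumption):

```python
# Hypothetical summary of the update_engine activity captured above: count the
# libcurl retries and report the Omaha outcome from a saved journal excerpt.
import re
import sys

RETRY_RE = re.compile(r"No HTTP response, retry (\d+)")
NEXT_RE = re.compile(r"Next update check in (\S+)")

def summarize(path: str) -> None:
    text = open(path, encoding="utf-8", errors="replace").read()
    retries = [int(n) for n in RETRY_RE.findall(text)]
    resolve_failures = text.count("Could not resolve host: disabled")
    failures = text.count("Update failed.")
    next_checks = NEXT_RE.findall(text)
    print(f"resolve failures : {resolve_failures}")
    print(f"retries logged   : {retries}")
    print(f"update failures  : {failures}")
    if next_checks:
        print(f"next check in    : {next_checks[-1]}")

if __name__ == "__main__":
    summarize(sys.argv[1] if len(sys.argv) > 1 else "journal.txt")
```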
Feb 13 20:23:51.208999 update_engine[1428]: I20250213 20:23:51.208922 1428 update_check_scheduler.cc:74] Next update check in 44m2s Feb 13 20:23:51.209250 locksmithd[1463]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_IDLE" NewVersion=0.0.0 NewSize=0 Feb 13 20:23:53.309326 kubelet[2436]: E0213 20:23:53.309017 2436 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 20:23:53.311514 kubelet[2436]: E0213 20:23:53.310335 2436 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"install-cni-plugin\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/flannel/flannel-cni-plugin:v1.1.2\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/flannel/flannel-cni-plugin:v1.1.2\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/flannel/flannel-cni-plugin/manifests/sha256:bf4b62b131666d040f35a327d906ee5a3418280b68a88d9b9c7e828057210443: 429 Too Many Requests - Server message: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit\"" pod="kube-flannel/kube-flannel-ds-l2mj5" podUID="71c6d29d-8b7d-4b8f-92a3-710fe670a99c" Feb 13 20:23:54.733161 systemd[1]: Started sshd@85-10.0.0.8:22-10.0.0.1:58486.service - OpenSSH per-connection server daemon (10.0.0.1:58486). Feb 13 20:23:54.773784 sshd[4026]: Accepted publickey for core from 10.0.0.1 port 58486 ssh2: RSA SHA256:LaFXDQ5kJ4LcD/Es1CLb+6ve3k3017804ZdZMiXaUQg Feb 13 20:23:54.775121 sshd[4026]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 20:23:54.780380 systemd-logind[1421]: New session 86 of user core. Feb 13 20:23:54.788929 systemd[1]: Started session-86.scope - Session 86 of User core. Feb 13 20:23:54.892309 sshd[4026]: pam_unix(sshd:session): session closed for user core Feb 13 20:23:54.895610 systemd[1]: sshd@85-10.0.0.8:22-10.0.0.1:58486.service: Deactivated successfully. Feb 13 20:23:54.897334 systemd[1]: session-86.scope: Deactivated successfully. Feb 13 20:23:54.897911 systemd-logind[1421]: Session 86 logged out. Waiting for processes to exit. Feb 13 20:23:54.898664 systemd-logind[1421]: Removed session 86. Feb 13 20:23:55.437273 kubelet[2436]: E0213 20:23:55.437225 2436 kubelet.go:3008] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Feb 13 20:23:59.909408 systemd[1]: Started sshd@86-10.0.0.8:22-10.0.0.1:58492.service - OpenSSH per-connection server daemon (10.0.0.1:58492). Feb 13 20:23:59.944628 sshd[4041]: Accepted publickey for core from 10.0.0.1 port 58492 ssh2: RSA SHA256:LaFXDQ5kJ4LcD/Es1CLb+6ve3k3017804ZdZMiXaUQg Feb 13 20:23:59.945842 sshd[4041]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 20:23:59.949215 systemd-logind[1421]: New session 87 of user core. Feb 13 20:23:59.962129 systemd[1]: Started session-87.scope - Session 87 of User core. Feb 13 20:24:00.065038 sshd[4041]: pam_unix(sshd:session): session closed for user core Feb 13 20:24:00.068161 systemd[1]: sshd@86-10.0.0.8:22-10.0.0.1:58492.service: Deactivated successfully. Feb 13 20:24:00.069940 systemd[1]: session-87.scope: Deactivated successfully. Feb 13 20:24:00.071429 systemd-logind[1421]: Session 87 logged out. 
Waiting for processes to exit. Feb 13 20:24:00.072293 systemd-logind[1421]: Removed session 87. Feb 13 20:24:00.438697 kubelet[2436]: E0213 20:24:00.438661 2436 kubelet.go:3008] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Feb 13 20:24:05.076298 systemd[1]: Started sshd@87-10.0.0.8:22-10.0.0.1:37558.service - OpenSSH per-connection server daemon (10.0.0.1:37558). Feb 13 20:24:05.111775 sshd[4056]: Accepted publickey for core from 10.0.0.1 port 37558 ssh2: RSA SHA256:LaFXDQ5kJ4LcD/Es1CLb+6ve3k3017804ZdZMiXaUQg Feb 13 20:24:05.113001 sshd[4056]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 20:24:05.116755 systemd-logind[1421]: New session 88 of user core. Feb 13 20:24:05.126844 systemd[1]: Started session-88.scope - Session 88 of User core. Feb 13 20:24:05.233260 sshd[4056]: pam_unix(sshd:session): session closed for user core Feb 13 20:24:05.236406 systemd[1]: sshd@87-10.0.0.8:22-10.0.0.1:37558.service: Deactivated successfully. Feb 13 20:24:05.238087 systemd[1]: session-88.scope: Deactivated successfully. Feb 13 20:24:05.238701 systemd-logind[1421]: Session 88 logged out. Waiting for processes to exit. Feb 13 20:24:05.239548 systemd-logind[1421]: Removed session 88. Feb 13 20:24:05.440117 kubelet[2436]: E0213 20:24:05.440091 2436 kubelet.go:3008] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Feb 13 20:24:06.309191 kubelet[2436]: E0213 20:24:06.309095 2436 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 20:24:06.309670 kubelet[2436]: E0213 20:24:06.309639 2436 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"install-cni-plugin\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/flannel/flannel-cni-plugin:v1.1.2\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/flannel/flannel-cni-plugin:v1.1.2\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/flannel/flannel-cni-plugin/manifests/sha256:bf4b62b131666d040f35a327d906ee5a3418280b68a88d9b9c7e828057210443: 429 Too Many Requests - Server message: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit\"" pod="kube-flannel/kube-flannel-ds-l2mj5" podUID="71c6d29d-8b7d-4b8f-92a3-710fe670a99c" Feb 13 20:24:10.247267 systemd[1]: Started sshd@88-10.0.0.8:22-10.0.0.1:37562.service - OpenSSH per-connection server daemon (10.0.0.1:37562). Feb 13 20:24:10.282102 sshd[4073]: Accepted publickey for core from 10.0.0.1 port 37562 ssh2: RSA SHA256:LaFXDQ5kJ4LcD/Es1CLb+6ve3k3017804ZdZMiXaUQg Feb 13 20:24:10.283363 sshd[4073]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 20:24:10.287008 systemd-logind[1421]: New session 89 of user core. Feb 13 20:24:10.298847 systemd[1]: Started session-89.scope - Session 89 of User core. Feb 13 20:24:10.400847 sshd[4073]: pam_unix(sshd:session): session closed for user core Feb 13 20:24:10.403926 systemd[1]: sshd@88-10.0.0.8:22-10.0.0.1:37562.service: Deactivated successfully. Feb 13 20:24:10.406381 systemd[1]: session-89.scope: Deactivated successfully. 
Feb 13 20:24:10.407479 systemd-logind[1421]: Session 89 logged out. Waiting for processes to exit. Feb 13 20:24:10.408658 systemd-logind[1421]: Removed session 89. Feb 13 20:24:10.441801 kubelet[2436]: E0213 20:24:10.441750 2436 kubelet.go:3008] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Feb 13 20:24:15.413499 systemd[1]: Started sshd@89-10.0.0.8:22-10.0.0.1:35338.service - OpenSSH per-connection server daemon (10.0.0.1:35338). Feb 13 20:24:15.442672 kubelet[2436]: E0213 20:24:15.442636 2436 kubelet.go:3008] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Feb 13 20:24:15.448680 sshd[4089]: Accepted publickey for core from 10.0.0.1 port 35338 ssh2: RSA SHA256:LaFXDQ5kJ4LcD/Es1CLb+6ve3k3017804ZdZMiXaUQg Feb 13 20:24:15.450027 sshd[4089]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 20:24:15.453410 systemd-logind[1421]: New session 90 of user core. Feb 13 20:24:15.463929 systemd[1]: Started session-90.scope - Session 90 of User core. Feb 13 20:24:15.567605 sshd[4089]: pam_unix(sshd:session): session closed for user core Feb 13 20:24:15.570751 systemd[1]: sshd@89-10.0.0.8:22-10.0.0.1:35338.service: Deactivated successfully. Feb 13 20:24:15.573328 systemd[1]: session-90.scope: Deactivated successfully. Feb 13 20:24:15.574195 systemd-logind[1421]: Session 90 logged out. Waiting for processes to exit. Feb 13 20:24:15.575063 systemd-logind[1421]: Removed session 90. Feb 13 20:24:19.309276 kubelet[2436]: E0213 20:24:19.309206 2436 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 20:24:19.309964 kubelet[2436]: E0213 20:24:19.309718 2436 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"install-cni-plugin\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/flannel/flannel-cni-plugin:v1.1.2\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/flannel/flannel-cni-plugin:v1.1.2\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/flannel/flannel-cni-plugin/manifests/sha256:bf4b62b131666d040f35a327d906ee5a3418280b68a88d9b9c7e828057210443: 429 Too Many Requests - Server message: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit\"" pod="kube-flannel/kube-flannel-ds-l2mj5" podUID="71c6d29d-8b7d-4b8f-92a3-710fe670a99c" Feb 13 20:24:20.443807 kubelet[2436]: E0213 20:24:20.443749 2436 kubelet.go:3008] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Feb 13 20:24:20.582376 systemd[1]: Started sshd@90-10.0.0.8:22-10.0.0.1:35354.service - OpenSSH per-connection server daemon (10.0.0.1:35354). Feb 13 20:24:20.617501 sshd[4104]: Accepted publickey for core from 10.0.0.1 port 35354 ssh2: RSA SHA256:LaFXDQ5kJ4LcD/Es1CLb+6ve3k3017804ZdZMiXaUQg Feb 13 20:24:20.618775 sshd[4104]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 20:24:20.622242 systemd-logind[1421]: New session 91 of user core. 
Feb 13 20:24:20.634841 systemd[1]: Started session-91.scope - Session 91 of User core. Feb 13 20:24:20.738080 sshd[4104]: pam_unix(sshd:session): session closed for user core Feb 13 20:24:20.741204 systemd[1]: sshd@90-10.0.0.8:22-10.0.0.1:35354.service: Deactivated successfully. Feb 13 20:24:20.743095 systemd[1]: session-91.scope: Deactivated successfully. Feb 13 20:24:20.744103 systemd-logind[1421]: Session 91 logged out. Waiting for processes to exit. Feb 13 20:24:20.744996 systemd-logind[1421]: Removed session 91. Feb 13 20:24:25.444811 kubelet[2436]: E0213 20:24:25.444752 2436 kubelet.go:3008] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Feb 13 20:24:25.749097 systemd[1]: Started sshd@91-10.0.0.8:22-10.0.0.1:56016.service - OpenSSH per-connection server daemon (10.0.0.1:56016). Feb 13 20:24:25.784525 sshd[4119]: Accepted publickey for core from 10.0.0.1 port 56016 ssh2: RSA SHA256:LaFXDQ5kJ4LcD/Es1CLb+6ve3k3017804ZdZMiXaUQg Feb 13 20:24:25.787478 sshd[4119]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 20:24:25.791523 systemd-logind[1421]: New session 92 of user core. Feb 13 20:24:25.801846 systemd[1]: Started session-92.scope - Session 92 of User core. Feb 13 20:24:25.903466 sshd[4119]: pam_unix(sshd:session): session closed for user core Feb 13 20:24:25.906627 systemd[1]: sshd@91-10.0.0.8:22-10.0.0.1:56016.service: Deactivated successfully. Feb 13 20:24:25.909125 systemd[1]: session-92.scope: Deactivated successfully. Feb 13 20:24:25.909798 systemd-logind[1421]: Session 92 logged out. Waiting for processes to exit. Feb 13 20:24:25.910515 systemd-logind[1421]: Removed session 92. Feb 13 20:24:30.445977 kubelet[2436]: E0213 20:24:30.445922 2436 kubelet.go:3008] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Feb 13 20:24:30.917185 systemd[1]: Started sshd@92-10.0.0.8:22-10.0.0.1:56018.service - OpenSSH per-connection server daemon (10.0.0.1:56018). Feb 13 20:24:30.952177 sshd[4134]: Accepted publickey for core from 10.0.0.1 port 56018 ssh2: RSA SHA256:LaFXDQ5kJ4LcD/Es1CLb+6ve3k3017804ZdZMiXaUQg Feb 13 20:24:30.953295 sshd[4134]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 20:24:30.956854 systemd-logind[1421]: New session 93 of user core. Feb 13 20:24:30.966935 systemd[1]: Started session-93.scope - Session 93 of User core. Feb 13 20:24:31.073086 sshd[4134]: pam_unix(sshd:session): session closed for user core Feb 13 20:24:31.076438 systemd[1]: sshd@92-10.0.0.8:22-10.0.0.1:56018.service: Deactivated successfully. Feb 13 20:24:31.078028 systemd[1]: session-93.scope: Deactivated successfully. Feb 13 20:24:31.079285 systemd-logind[1421]: Session 93 logged out. Waiting for processes to exit. Feb 13 20:24:31.080090 systemd-logind[1421]: Removed session 93. 
Feb 13 20:24:32.309557 kubelet[2436]: E0213 20:24:32.309441 2436 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 20:24:32.310162 kubelet[2436]: E0213 20:24:32.310113 2436 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"install-cni-plugin\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/flannel/flannel-cni-plugin:v1.1.2\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/flannel/flannel-cni-plugin:v1.1.2\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/flannel/flannel-cni-plugin/manifests/sha256:bf4b62b131666d040f35a327d906ee5a3418280b68a88d9b9c7e828057210443: 429 Too Many Requests - Server message: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit\"" pod="kube-flannel/kube-flannel-ds-l2mj5" podUID="71c6d29d-8b7d-4b8f-92a3-710fe670a99c" Feb 13 20:24:35.447234 kubelet[2436]: E0213 20:24:35.447184 2436 kubelet.go:3008] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Feb 13 20:24:36.083175 systemd[1]: Started sshd@93-10.0.0.8:22-10.0.0.1:58324.service - OpenSSH per-connection server daemon (10.0.0.1:58324). Feb 13 20:24:36.118104 sshd[4148]: Accepted publickey for core from 10.0.0.1 port 58324 ssh2: RSA SHA256:LaFXDQ5kJ4LcD/Es1CLb+6ve3k3017804ZdZMiXaUQg Feb 13 20:24:36.119256 sshd[4148]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 20:24:36.122770 systemd-logind[1421]: New session 94 of user core. Feb 13 20:24:36.127857 systemd[1]: Started session-94.scope - Session 94 of User core. Feb 13 20:24:36.235656 sshd[4148]: pam_unix(sshd:session): session closed for user core Feb 13 20:24:36.238866 systemd[1]: sshd@93-10.0.0.8:22-10.0.0.1:58324.service: Deactivated successfully. Feb 13 20:24:36.241311 systemd[1]: session-94.scope: Deactivated successfully. Feb 13 20:24:36.241944 systemd-logind[1421]: Session 94 logged out. Waiting for processes to exit. Feb 13 20:24:36.242722 systemd-logind[1421]: Removed session 94. Feb 13 20:24:40.448410 kubelet[2436]: E0213 20:24:40.448351 2436 kubelet.go:3008] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Feb 13 20:24:41.246319 systemd[1]: Started sshd@94-10.0.0.8:22-10.0.0.1:58338.service - OpenSSH per-connection server daemon (10.0.0.1:58338). Feb 13 20:24:41.281426 sshd[4163]: Accepted publickey for core from 10.0.0.1 port 58338 ssh2: RSA SHA256:LaFXDQ5kJ4LcD/Es1CLb+6ve3k3017804ZdZMiXaUQg Feb 13 20:24:41.282586 sshd[4163]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 20:24:41.286207 systemd-logind[1421]: New session 95 of user core. Feb 13 20:24:41.295846 systemd[1]: Started session-95.scope - Session 95 of User core. 
Feb 13 20:24:41.311429 kubelet[2436]: E0213 20:24:41.311110 2436 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 20:24:41.399488 sshd[4163]: pam_unix(sshd:session): session closed for user core Feb 13 20:24:41.402600 systemd[1]: sshd@94-10.0.0.8:22-10.0.0.1:58338.service: Deactivated successfully. Feb 13 20:24:41.405209 systemd[1]: session-95.scope: Deactivated successfully. Feb 13 20:24:41.406276 systemd-logind[1421]: Session 95 logged out. Waiting for processes to exit. Feb 13 20:24:41.407458 systemd-logind[1421]: Removed session 95. Feb 13 20:24:45.309627 kubelet[2436]: E0213 20:24:45.309281 2436 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 20:24:45.310733 kubelet[2436]: E0213 20:24:45.310330 2436 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"install-cni-plugin\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/flannel/flannel-cni-plugin:v1.1.2\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/flannel/flannel-cni-plugin:v1.1.2\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/flannel/flannel-cni-plugin/manifests/sha256:bf4b62b131666d040f35a327d906ee5a3418280b68a88d9b9c7e828057210443: 429 Too Many Requests - Server message: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit\"" pod="kube-flannel/kube-flannel-ds-l2mj5" podUID="71c6d29d-8b7d-4b8f-92a3-710fe670a99c" Feb 13 20:24:45.450194 kubelet[2436]: E0213 20:24:45.450126 2436 kubelet.go:3008] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Feb 13 20:24:46.410422 systemd[1]: Started sshd@95-10.0.0.8:22-10.0.0.1:38284.service - OpenSSH per-connection server daemon (10.0.0.1:38284). Feb 13 20:24:46.445796 sshd[4185]: Accepted publickey for core from 10.0.0.1 port 38284 ssh2: RSA SHA256:LaFXDQ5kJ4LcD/Es1CLb+6ve3k3017804ZdZMiXaUQg Feb 13 20:24:46.447050 sshd[4185]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 20:24:46.450168 systemd-logind[1421]: New session 96 of user core. Feb 13 20:24:46.464929 systemd[1]: Started session-96.scope - Session 96 of User core. Feb 13 20:24:46.569961 sshd[4185]: pam_unix(sshd:session): session closed for user core Feb 13 20:24:46.573121 systemd[1]: sshd@95-10.0.0.8:22-10.0.0.1:38284.service: Deactivated successfully. Feb 13 20:24:46.575198 systemd[1]: session-96.scope: Deactivated successfully. Feb 13 20:24:46.575879 systemd-logind[1421]: Session 96 logged out. Waiting for processes to exit. Feb 13 20:24:46.576956 systemd-logind[1421]: Removed session 96. Feb 13 20:24:50.451260 kubelet[2436]: E0213 20:24:50.451215 2436 kubelet.go:3008] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Feb 13 20:24:51.580189 systemd[1]: Started sshd@96-10.0.0.8:22-10.0.0.1:38286.service - OpenSSH per-connection server daemon (10.0.0.1:38286). 
Feb 13 20:24:51.615377 sshd[4200]: Accepted publickey for core from 10.0.0.1 port 38286 ssh2: RSA SHA256:LaFXDQ5kJ4LcD/Es1CLb+6ve3k3017804ZdZMiXaUQg Feb 13 20:24:51.616514 sshd[4200]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 20:24:51.620168 systemd-logind[1421]: New session 97 of user core. Feb 13 20:24:51.625858 systemd[1]: Started session-97.scope - Session 97 of User core. Feb 13 20:24:51.729416 sshd[4200]: pam_unix(sshd:session): session closed for user core Feb 13 20:24:51.732636 systemd[1]: sshd@96-10.0.0.8:22-10.0.0.1:38286.service: Deactivated successfully. Feb 13 20:24:51.734249 systemd[1]: session-97.scope: Deactivated successfully. Feb 13 20:24:51.734839 systemd-logind[1421]: Session 97 logged out. Waiting for processes to exit. Feb 13 20:24:51.735585 systemd-logind[1421]: Removed session 97. Feb 13 20:24:55.309289 kubelet[2436]: E0213 20:24:55.309248 2436 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 20:24:55.451996 kubelet[2436]: E0213 20:24:55.451954 2436 kubelet.go:3008] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Feb 13 20:24:56.740217 systemd[1]: Started sshd@97-10.0.0.8:22-10.0.0.1:36992.service - OpenSSH per-connection server daemon (10.0.0.1:36992). Feb 13 20:24:56.775030 sshd[4215]: Accepted publickey for core from 10.0.0.1 port 36992 ssh2: RSA SHA256:LaFXDQ5kJ4LcD/Es1CLb+6ve3k3017804ZdZMiXaUQg Feb 13 20:24:56.776271 sshd[4215]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 20:24:56.779693 systemd-logind[1421]: New session 98 of user core. Feb 13 20:24:56.789913 systemd[1]: Started session-98.scope - Session 98 of User core. Feb 13 20:24:56.892086 sshd[4215]: pam_unix(sshd:session): session closed for user core Feb 13 20:24:56.895341 systemd[1]: sshd@97-10.0.0.8:22-10.0.0.1:36992.service: Deactivated successfully. Feb 13 20:24:56.897994 systemd[1]: session-98.scope: Deactivated successfully. Feb 13 20:24:56.898925 systemd-logind[1421]: Session 98 logged out. Waiting for processes to exit. Feb 13 20:24:56.899777 systemd-logind[1421]: Removed session 98. Feb 13 20:25:00.309390 kubelet[2436]: E0213 20:25:00.309357 2436 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 20:25:00.310181 kubelet[2436]: E0213 20:25:00.310074 2436 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"install-cni-plugin\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/flannel/flannel-cni-plugin:v1.1.2\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/flannel/flannel-cni-plugin:v1.1.2\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/flannel/flannel-cni-plugin/manifests/sha256:bf4b62b131666d040f35a327d906ee5a3418280b68a88d9b9c7e828057210443: 429 Too Many Requests - Server message: toomanyrequests: You have reached your pull rate limit. 
You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit\"" pod="kube-flannel/kube-flannel-ds-l2mj5" podUID="71c6d29d-8b7d-4b8f-92a3-710fe670a99c" Feb 13 20:25:00.453347 kubelet[2436]: E0213 20:25:00.453312 2436 kubelet.go:3008] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Feb 13 20:25:01.310328 kubelet[2436]: E0213 20:25:01.310291 2436 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 20:25:01.902225 systemd[1]: Started sshd@98-10.0.0.8:22-10.0.0.1:37006.service - OpenSSH per-connection server daemon (10.0.0.1:37006). Feb 13 20:25:01.937143 sshd[4229]: Accepted publickey for core from 10.0.0.1 port 37006 ssh2: RSA SHA256:LaFXDQ5kJ4LcD/Es1CLb+6ve3k3017804ZdZMiXaUQg Feb 13 20:25:01.938297 sshd[4229]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 20:25:01.941766 systemd-logind[1421]: New session 99 of user core. Feb 13 20:25:01.947834 systemd[1]: Started session-99.scope - Session 99 of User core. Feb 13 20:25:02.050826 sshd[4229]: pam_unix(sshd:session): session closed for user core Feb 13 20:25:02.054277 systemd[1]: sshd@98-10.0.0.8:22-10.0.0.1:37006.service: Deactivated successfully. Feb 13 20:25:02.056250 systemd[1]: session-99.scope: Deactivated successfully. Feb 13 20:25:02.057053 systemd-logind[1421]: Session 99 logged out. Waiting for processes to exit. Feb 13 20:25:02.057927 systemd-logind[1421]: Removed session 99. Feb 13 20:25:05.311308 kubelet[2436]: E0213 20:25:05.311275 2436 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 20:25:05.454073 kubelet[2436]: E0213 20:25:05.454022 2436 kubelet.go:3008] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Feb 13 20:25:07.066191 systemd[1]: Started sshd@99-10.0.0.8:22-10.0.0.1:52364.service - OpenSSH per-connection server daemon (10.0.0.1:52364). Feb 13 20:25:07.101429 sshd[4245]: Accepted publickey for core from 10.0.0.1 port 52364 ssh2: RSA SHA256:LaFXDQ5kJ4LcD/Es1CLb+6ve3k3017804ZdZMiXaUQg Feb 13 20:25:07.102605 sshd[4245]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 20:25:07.106294 systemd-logind[1421]: New session 100 of user core. Feb 13 20:25:07.115841 systemd[1]: Started session-100.scope - Session 100 of User core. Feb 13 20:25:07.221618 sshd[4245]: pam_unix(sshd:session): session closed for user core Feb 13 20:25:07.224822 systemd[1]: sshd@99-10.0.0.8:22-10.0.0.1:52364.service: Deactivated successfully. Feb 13 20:25:07.227177 systemd[1]: session-100.scope: Deactivated successfully. Feb 13 20:25:07.227969 systemd-logind[1421]: Session 100 logged out. Waiting for processes to exit. Feb 13 20:25:07.228793 systemd-logind[1421]: Removed session 100. 
Feb 13 20:25:10.455331 kubelet[2436]: E0213 20:25:10.455286 2436 kubelet.go:3008] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Feb 13 20:25:12.232221 systemd[1]: Started sshd@100-10.0.0.8:22-10.0.0.1:52380.service - OpenSSH per-connection server daemon (10.0.0.1:52380). Feb 13 20:25:12.267477 sshd[4259]: Accepted publickey for core from 10.0.0.1 port 52380 ssh2: RSA SHA256:LaFXDQ5kJ4LcD/Es1CLb+6ve3k3017804ZdZMiXaUQg Feb 13 20:25:12.268697 sshd[4259]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 20:25:12.273448 systemd-logind[1421]: New session 101 of user core. Feb 13 20:25:12.283850 systemd[1]: Started session-101.scope - Session 101 of User core. Feb 13 20:25:12.388156 sshd[4259]: pam_unix(sshd:session): session closed for user core Feb 13 20:25:12.390766 systemd[1]: sshd@100-10.0.0.8:22-10.0.0.1:52380.service: Deactivated successfully. Feb 13 20:25:12.392344 systemd[1]: session-101.scope: Deactivated successfully. Feb 13 20:25:12.393753 systemd-logind[1421]: Session 101 logged out. Waiting for processes to exit. Feb 13 20:25:12.394547 systemd-logind[1421]: Removed session 101. Feb 13 20:25:13.308918 kubelet[2436]: E0213 20:25:13.308879 2436 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 20:25:13.309804 kubelet[2436]: E0213 20:25:13.309767 2436 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"install-cni-plugin\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/flannel/flannel-cni-plugin:v1.1.2\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/flannel/flannel-cni-plugin:v1.1.2\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/flannel/flannel-cni-plugin/manifests/sha256:bf4b62b131666d040f35a327d906ee5a3418280b68a88d9b9c7e828057210443: 429 Too Many Requests - Server message: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit\"" pod="kube-flannel/kube-flannel-ds-l2mj5" podUID="71c6d29d-8b7d-4b8f-92a3-710fe670a99c" Feb 13 20:25:15.455809 kubelet[2436]: E0213 20:25:15.455765 2436 kubelet.go:3008] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Feb 13 20:25:17.402133 systemd[1]: Started sshd@101-10.0.0.8:22-10.0.0.1:36120.service - OpenSSH per-connection server daemon (10.0.0.1:36120). Feb 13 20:25:17.438314 sshd[4275]: Accepted publickey for core from 10.0.0.1 port 36120 ssh2: RSA SHA256:LaFXDQ5kJ4LcD/Es1CLb+6ve3k3017804ZdZMiXaUQg Feb 13 20:25:17.439510 sshd[4275]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 20:25:17.442667 systemd-logind[1421]: New session 102 of user core. Feb 13 20:25:17.454915 systemd[1]: Started session-102.scope - Session 102 of User core. Feb 13 20:25:17.559076 sshd[4275]: pam_unix(sshd:session): session closed for user core Feb 13 20:25:17.562141 systemd[1]: sshd@101-10.0.0.8:22-10.0.0.1:36120.service: Deactivated successfully. Feb 13 20:25:17.564313 systemd[1]: session-102.scope: Deactivated successfully. Feb 13 20:25:17.565284 systemd-logind[1421]: Session 102 logged out. 
Waiting for processes to exit. Feb 13 20:25:17.566123 systemd-logind[1421]: Removed session 102. Feb 13 20:25:20.457495 kubelet[2436]: E0213 20:25:20.457445 2436 kubelet.go:3008] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Feb 13 20:25:22.569247 systemd[1]: Started sshd@102-10.0.0.8:22-10.0.0.1:38504.service - OpenSSH per-connection server daemon (10.0.0.1:38504). Feb 13 20:25:22.604045 sshd[4289]: Accepted publickey for core from 10.0.0.1 port 38504 ssh2: RSA SHA256:LaFXDQ5kJ4LcD/Es1CLb+6ve3k3017804ZdZMiXaUQg Feb 13 20:25:22.605271 sshd[4289]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 20:25:22.609037 systemd-logind[1421]: New session 103 of user core. Feb 13 20:25:22.619837 systemd[1]: Started session-103.scope - Session 103 of User core. Feb 13 20:25:22.719423 sshd[4289]: pam_unix(sshd:session): session closed for user core Feb 13 20:25:22.722528 systemd[1]: sshd@102-10.0.0.8:22-10.0.0.1:38504.service: Deactivated successfully. Feb 13 20:25:22.724099 systemd[1]: session-103.scope: Deactivated successfully. Feb 13 20:25:22.725271 systemd-logind[1421]: Session 103 logged out. Waiting for processes to exit. Feb 13 20:25:22.726079 systemd-logind[1421]: Removed session 103. Feb 13 20:25:25.458963 kubelet[2436]: E0213 20:25:25.458926 2436 kubelet.go:3008] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Feb 13 20:25:26.309054 kubelet[2436]: E0213 20:25:26.308889 2436 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 20:25:26.309579 kubelet[2436]: E0213 20:25:26.309521 2436 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"install-cni-plugin\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/flannel/flannel-cni-plugin:v1.1.2\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/flannel/flannel-cni-plugin:v1.1.2\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/flannel/flannel-cni-plugin/manifests/sha256:bf4b62b131666d040f35a327d906ee5a3418280b68a88d9b9c7e828057210443: 429 Too Many Requests - Server message: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit\"" pod="kube-flannel/kube-flannel-ds-l2mj5" podUID="71c6d29d-8b7d-4b8f-92a3-710fe670a99c" Feb 13 20:25:27.733211 systemd[1]: Started sshd@103-10.0.0.8:22-10.0.0.1:38510.service - OpenSSH per-connection server daemon (10.0.0.1:38510). Feb 13 20:25:27.768161 sshd[4303]: Accepted publickey for core from 10.0.0.1 port 38510 ssh2: RSA SHA256:LaFXDQ5kJ4LcD/Es1CLb+6ve3k3017804ZdZMiXaUQg Feb 13 20:25:27.769363 sshd[4303]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 20:25:27.773046 systemd-logind[1421]: New session 104 of user core. Feb 13 20:25:27.776841 systemd[1]: Started session-104.scope - Session 104 of User core. Feb 13 20:25:27.881074 sshd[4303]: pam_unix(sshd:session): session closed for user core Feb 13 20:25:27.884157 systemd[1]: sshd@103-10.0.0.8:22-10.0.0.1:38510.service: Deactivated successfully. 
Feb 13 20:25:27.886914 systemd[1]: session-104.scope: Deactivated successfully. Feb 13 20:25:27.888100 systemd-logind[1421]: Session 104 logged out. Waiting for processes to exit. Feb 13 20:25:27.889381 systemd-logind[1421]: Removed session 104. Feb 13 20:25:30.460207 kubelet[2436]: E0213 20:25:30.460101 2436 kubelet.go:3008] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Feb 13 20:25:32.891523 systemd[1]: Started sshd@104-10.0.0.8:22-10.0.0.1:46476.service - OpenSSH per-connection server daemon (10.0.0.1:46476). Feb 13 20:25:32.926611 sshd[4317]: Accepted publickey for core from 10.0.0.1 port 46476 ssh2: RSA SHA256:LaFXDQ5kJ4LcD/Es1CLb+6ve3k3017804ZdZMiXaUQg Feb 13 20:25:32.927832 sshd[4317]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 20:25:32.931139 systemd-logind[1421]: New session 105 of user core. Feb 13 20:25:32.939905 systemd[1]: Started session-105.scope - Session 105 of User core. Feb 13 20:25:33.043332 sshd[4317]: pam_unix(sshd:session): session closed for user core Feb 13 20:25:33.046590 systemd[1]: sshd@104-10.0.0.8:22-10.0.0.1:46476.service: Deactivated successfully. Feb 13 20:25:33.048338 systemd[1]: session-105.scope: Deactivated successfully. Feb 13 20:25:33.050731 systemd-logind[1421]: Session 105 logged out. Waiting for processes to exit. Feb 13 20:25:33.051670 systemd-logind[1421]: Removed session 105. Feb 13 20:25:35.461656 kubelet[2436]: E0213 20:25:35.461600 2436 kubelet.go:3008] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Feb 13 20:25:38.055146 systemd[1]: Started sshd@105-10.0.0.8:22-10.0.0.1:46486.service - OpenSSH per-connection server daemon (10.0.0.1:46486). Feb 13 20:25:38.090622 sshd[4331]: Accepted publickey for core from 10.0.0.1 port 46486 ssh2: RSA SHA256:LaFXDQ5kJ4LcD/Es1CLb+6ve3k3017804ZdZMiXaUQg Feb 13 20:25:38.091796 sshd[4331]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 20:25:38.095768 systemd-logind[1421]: New session 106 of user core. Feb 13 20:25:38.102865 systemd[1]: Started session-106.scope - Session 106 of User core. Feb 13 20:25:38.206999 sshd[4331]: pam_unix(sshd:session): session closed for user core Feb 13 20:25:38.210271 systemd[1]: sshd@105-10.0.0.8:22-10.0.0.1:46486.service: Deactivated successfully. Feb 13 20:25:38.212510 systemd[1]: session-106.scope: Deactivated successfully. Feb 13 20:25:38.213437 systemd-logind[1421]: Session 106 logged out. Waiting for processes to exit. Feb 13 20:25:38.214327 systemd-logind[1421]: Removed session 106. 
Feb 13 20:25:39.309314 kubelet[2436]: E0213 20:25:39.309096 2436 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 20:25:39.309801 kubelet[2436]: E0213 20:25:39.309765 2436 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"install-cni-plugin\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/flannel/flannel-cni-plugin:v1.1.2\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/flannel/flannel-cni-plugin:v1.1.2\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/flannel/flannel-cni-plugin/manifests/sha256:bf4b62b131666d040f35a327d906ee5a3418280b68a88d9b9c7e828057210443: 429 Too Many Requests - Server message: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit\"" pod="kube-flannel/kube-flannel-ds-l2mj5" podUID="71c6d29d-8b7d-4b8f-92a3-710fe670a99c" Feb 13 20:25:40.462505 kubelet[2436]: E0213 20:25:40.462443 2436 kubelet.go:3008] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Feb 13 20:25:43.218242 systemd[1]: Started sshd@106-10.0.0.8:22-10.0.0.1:54048.service - OpenSSH per-connection server daemon (10.0.0.1:54048). Feb 13 20:25:43.253197 sshd[4345]: Accepted publickey for core from 10.0.0.1 port 54048 ssh2: RSA SHA256:LaFXDQ5kJ4LcD/Es1CLb+6ve3k3017804ZdZMiXaUQg Feb 13 20:25:43.254376 sshd[4345]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 20:25:43.257679 systemd-logind[1421]: New session 107 of user core. Feb 13 20:25:43.264853 systemd[1]: Started session-107.scope - Session 107 of User core. Feb 13 20:25:43.368526 sshd[4345]: pam_unix(sshd:session): session closed for user core Feb 13 20:25:43.371739 systemd[1]: sshd@106-10.0.0.8:22-10.0.0.1:54048.service: Deactivated successfully. Feb 13 20:25:43.373671 systemd[1]: session-107.scope: Deactivated successfully. Feb 13 20:25:43.374505 systemd-logind[1421]: Session 107 logged out. Waiting for processes to exit. Feb 13 20:25:43.375329 systemd-logind[1421]: Removed session 107. Feb 13 20:25:45.464102 kubelet[2436]: E0213 20:25:45.464055 2436 kubelet.go:3008] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Feb 13 20:25:48.379353 systemd[1]: Started sshd@107-10.0.0.8:22-10.0.0.1:54062.service - OpenSSH per-connection server daemon (10.0.0.1:54062). Feb 13 20:25:48.414165 sshd[4362]: Accepted publickey for core from 10.0.0.1 port 54062 ssh2: RSA SHA256:LaFXDQ5kJ4LcD/Es1CLb+6ve3k3017804ZdZMiXaUQg Feb 13 20:25:48.415392 sshd[4362]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 20:25:48.419137 systemd-logind[1421]: New session 108 of user core. Feb 13 20:25:48.429860 systemd[1]: Started session-108.scope - Session 108 of User core. Feb 13 20:25:48.533394 sshd[4362]: pam_unix(sshd:session): session closed for user core Feb 13 20:25:48.536950 systemd[1]: sshd@107-10.0.0.8:22-10.0.0.1:54062.service: Deactivated successfully. Feb 13 20:25:48.538571 systemd[1]: session-108.scope: Deactivated successfully. Feb 13 20:25:48.539642 systemd-logind[1421]: Session 108 logged out. 
Waiting for processes to exit. Feb 13 20:25:48.540703 systemd-logind[1421]: Removed session 108. Feb 13 20:25:50.465757 kubelet[2436]: E0213 20:25:50.465687 2436 kubelet.go:3008] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Feb 13 20:25:51.308964 kubelet[2436]: E0213 20:25:51.308922 2436 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 20:25:51.309719 kubelet[2436]: E0213 20:25:51.309666 2436 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"install-cni-plugin\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/flannel/flannel-cni-plugin:v1.1.2\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/flannel/flannel-cni-plugin:v1.1.2\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/flannel/flannel-cni-plugin/manifests/sha256:bf4b62b131666d040f35a327d906ee5a3418280b68a88d9b9c7e828057210443: 429 Too Many Requests - Server message: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit\"" pod="kube-flannel/kube-flannel-ds-l2mj5" podUID="71c6d29d-8b7d-4b8f-92a3-710fe670a99c" Feb 13 20:25:53.544272 systemd[1]: Started sshd@108-10.0.0.8:22-10.0.0.1:49748.service - OpenSSH per-connection server daemon (10.0.0.1:49748). Feb 13 20:25:53.579305 sshd[4377]: Accepted publickey for core from 10.0.0.1 port 49748 ssh2: RSA SHA256:LaFXDQ5kJ4LcD/Es1CLb+6ve3k3017804ZdZMiXaUQg Feb 13 20:25:53.580432 sshd[4377]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 20:25:53.583663 systemd-logind[1421]: New session 109 of user core. Feb 13 20:25:53.596885 systemd[1]: Started session-109.scope - Session 109 of User core. Feb 13 20:25:53.697865 sshd[4377]: pam_unix(sshd:session): session closed for user core Feb 13 20:25:53.700403 systemd[1]: sshd@108-10.0.0.8:22-10.0.0.1:49748.service: Deactivated successfully. Feb 13 20:25:53.701977 systemd[1]: session-109.scope: Deactivated successfully. Feb 13 20:25:53.703223 systemd-logind[1421]: Session 109 logged out. Waiting for processes to exit. Feb 13 20:25:53.704131 systemd-logind[1421]: Removed session 109. Feb 13 20:25:55.466421 kubelet[2436]: E0213 20:25:55.466381 2436 kubelet.go:3008] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Feb 13 20:25:58.709217 systemd[1]: Started sshd@109-10.0.0.8:22-10.0.0.1:49762.service - OpenSSH per-connection server daemon (10.0.0.1:49762). Feb 13 20:25:58.744070 sshd[4391]: Accepted publickey for core from 10.0.0.1 port 49762 ssh2: RSA SHA256:LaFXDQ5kJ4LcD/Es1CLb+6ve3k3017804ZdZMiXaUQg Feb 13 20:25:58.745158 sshd[4391]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 20:25:58.748344 systemd-logind[1421]: New session 110 of user core. Feb 13 20:25:58.754840 systemd[1]: Started session-110.scope - Session 110 of User core. Feb 13 20:25:58.857434 sshd[4391]: pam_unix(sshd:session): session closed for user core Feb 13 20:25:58.860484 systemd[1]: sshd@109-10.0.0.8:22-10.0.0.1:49762.service: Deactivated successfully. 
Feb 13 20:25:58.862133 systemd[1]: session-110.scope: Deactivated successfully. Feb 13 20:25:58.863470 systemd-logind[1421]: Session 110 logged out. Waiting for processes to exit. Feb 13 20:25:58.864322 systemd-logind[1421]: Removed session 110. Feb 13 20:26:00.467485 kubelet[2436]: E0213 20:26:00.467424 2436 kubelet.go:3008] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Feb 13 20:26:03.871145 systemd[1]: Started sshd@110-10.0.0.8:22-10.0.0.1:37616.service - OpenSSH per-connection server daemon (10.0.0.1:37616). Feb 13 20:26:03.905867 sshd[4405]: Accepted publickey for core from 10.0.0.1 port 37616 ssh2: RSA SHA256:LaFXDQ5kJ4LcD/Es1CLb+6ve3k3017804ZdZMiXaUQg Feb 13 20:26:03.907056 sshd[4405]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 20:26:03.910786 systemd-logind[1421]: New session 111 of user core. Feb 13 20:26:03.917842 systemd[1]: Started session-111.scope - Session 111 of User core. Feb 13 20:26:04.019803 sshd[4405]: pam_unix(sshd:session): session closed for user core Feb 13 20:26:04.022983 systemd[1]: sshd@110-10.0.0.8:22-10.0.0.1:37616.service: Deactivated successfully. Feb 13 20:26:04.024657 systemd[1]: session-111.scope: Deactivated successfully. Feb 13 20:26:04.025617 systemd-logind[1421]: Session 111 logged out. Waiting for processes to exit. Feb 13 20:26:04.026379 systemd-logind[1421]: Removed session 111. Feb 13 20:26:04.308986 kubelet[2436]: E0213 20:26:04.308958 2436 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 20:26:04.309615 kubelet[2436]: E0213 20:26:04.309422 2436 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"install-cni-plugin\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/flannel/flannel-cni-plugin:v1.1.2\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/flannel/flannel-cni-plugin:v1.1.2\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/flannel/flannel-cni-plugin/manifests/sha256:bf4b62b131666d040f35a327d906ee5a3418280b68a88d9b9c7e828057210443: 429 Too Many Requests - Server message: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit\"" pod="kube-flannel/kube-flannel-ds-l2mj5" podUID="71c6d29d-8b7d-4b8f-92a3-710fe670a99c" Feb 13 20:26:05.468155 kubelet[2436]: E0213 20:26:05.468116 2436 kubelet.go:3008] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Feb 13 20:26:08.308905 kubelet[2436]: E0213 20:26:08.308814 2436 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 20:26:08.308905 kubelet[2436]: E0213 20:26:08.308899 2436 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 20:26:09.030093 systemd[1]: Started sshd@111-10.0.0.8:22-10.0.0.1:37628.service - OpenSSH per-connection server daemon (10.0.0.1:37628). 
Feb 13 20:26:09.065069 sshd[4422]: Accepted publickey for core from 10.0.0.1 port 37628 ssh2: RSA SHA256:LaFXDQ5kJ4LcD/Es1CLb+6ve3k3017804ZdZMiXaUQg Feb 13 20:26:09.066247 sshd[4422]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 20:26:09.069758 systemd-logind[1421]: New session 112 of user core. Feb 13 20:26:09.079838 systemd[1]: Started session-112.scope - Session 112 of User core. Feb 13 20:26:09.181791 sshd[4422]: pam_unix(sshd:session): session closed for user core Feb 13 20:26:09.184762 systemd[1]: sshd@111-10.0.0.8:22-10.0.0.1:37628.service: Deactivated successfully. Feb 13 20:26:09.186338 systemd[1]: session-112.scope: Deactivated successfully. Feb 13 20:26:09.187693 systemd-logind[1421]: Session 112 logged out. Waiting for processes to exit. Feb 13 20:26:09.188939 systemd-logind[1421]: Removed session 112. Feb 13 20:26:10.468807 kubelet[2436]: E0213 20:26:10.468768 2436 kubelet.go:3008] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Feb 13 20:26:14.193137 systemd[1]: Started sshd@112-10.0.0.8:22-10.0.0.1:43664.service - OpenSSH per-connection server daemon (10.0.0.1:43664). Feb 13 20:26:14.227846 sshd[4439]: Accepted publickey for core from 10.0.0.1 port 43664 ssh2: RSA SHA256:LaFXDQ5kJ4LcD/Es1CLb+6ve3k3017804ZdZMiXaUQg Feb 13 20:26:14.229128 sshd[4439]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 20:26:14.232357 systemd-logind[1421]: New session 113 of user core. Feb 13 20:26:14.240841 systemd[1]: Started session-113.scope - Session 113 of User core. Feb 13 20:26:14.343414 sshd[4439]: pam_unix(sshd:session): session closed for user core Feb 13 20:26:14.346332 systemd[1]: sshd@112-10.0.0.8:22-10.0.0.1:43664.service: Deactivated successfully. Feb 13 20:26:14.349064 systemd[1]: session-113.scope: Deactivated successfully. Feb 13 20:26:14.349582 systemd-logind[1421]: Session 113 logged out. Waiting for processes to exit. Feb 13 20:26:14.350278 systemd-logind[1421]: Removed session 113. Feb 13 20:26:15.470216 kubelet[2436]: E0213 20:26:15.470168 2436 kubelet.go:3008] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Feb 13 20:26:19.309750 kubelet[2436]: E0213 20:26:19.309613 2436 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 20:26:19.310597 kubelet[2436]: E0213 20:26:19.310344 2436 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"install-cni-plugin\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/flannel/flannel-cni-plugin:v1.1.2\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/flannel/flannel-cni-plugin:v1.1.2\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/flannel/flannel-cni-plugin/manifests/sha256:bf4b62b131666d040f35a327d906ee5a3418280b68a88d9b9c7e828057210443: 429 Too Many Requests - Server message: toomanyrequests: You have reached your pull rate limit. 
You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit\"" pod="kube-flannel/kube-flannel-ds-l2mj5" podUID="71c6d29d-8b7d-4b8f-92a3-710fe670a99c" Feb 13 20:26:19.354032 systemd[1]: Started sshd@113-10.0.0.8:22-10.0.0.1:43672.service - OpenSSH per-connection server daemon (10.0.0.1:43672). Feb 13 20:26:19.390187 sshd[4453]: Accepted publickey for core from 10.0.0.1 port 43672 ssh2: RSA SHA256:LaFXDQ5kJ4LcD/Es1CLb+6ve3k3017804ZdZMiXaUQg Feb 13 20:26:19.391390 sshd[4453]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 20:26:19.394740 systemd-logind[1421]: New session 114 of user core. Feb 13 20:26:19.400835 systemd[1]: Started session-114.scope - Session 114 of User core. Feb 13 20:26:19.505877 sshd[4453]: pam_unix(sshd:session): session closed for user core Feb 13 20:26:19.508930 systemd[1]: sshd@113-10.0.0.8:22-10.0.0.1:43672.service: Deactivated successfully. Feb 13 20:26:19.511393 systemd[1]: session-114.scope: Deactivated successfully. Feb 13 20:26:19.512283 systemd-logind[1421]: Session 114 logged out. Waiting for processes to exit. Feb 13 20:26:19.513185 systemd-logind[1421]: Removed session 114. Feb 13 20:26:20.471377 kubelet[2436]: E0213 20:26:20.471329 2436 kubelet.go:3008] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Feb 13 20:26:23.309559 kubelet[2436]: E0213 20:26:23.309517 2436 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 20:26:24.309095 kubelet[2436]: E0213 20:26:24.309048 2436 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 20:26:24.516198 systemd[1]: Started sshd@114-10.0.0.8:22-10.0.0.1:39820.service - OpenSSH per-connection server daemon (10.0.0.1:39820). Feb 13 20:26:24.551448 sshd[4467]: Accepted publickey for core from 10.0.0.1 port 39820 ssh2: RSA SHA256:LaFXDQ5kJ4LcD/Es1CLb+6ve3k3017804ZdZMiXaUQg Feb 13 20:26:24.552591 sshd[4467]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 20:26:24.555805 systemd-logind[1421]: New session 115 of user core. Feb 13 20:26:24.562847 systemd[1]: Started session-115.scope - Session 115 of User core. Feb 13 20:26:24.665916 sshd[4467]: pam_unix(sshd:session): session closed for user core Feb 13 20:26:24.669035 systemd[1]: sshd@114-10.0.0.8:22-10.0.0.1:39820.service: Deactivated successfully. Feb 13 20:26:24.671078 systemd[1]: session-115.scope: Deactivated successfully. Feb 13 20:26:24.671627 systemd-logind[1421]: Session 115 logged out. Waiting for processes to exit. Feb 13 20:26:24.672629 systemd-logind[1421]: Removed session 115. Feb 13 20:26:25.472701 kubelet[2436]: E0213 20:26:25.472669 2436 kubelet.go:3008] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Feb 13 20:26:29.676337 systemd[1]: Started sshd@115-10.0.0.8:22-10.0.0.1:39828.service - OpenSSH per-connection server daemon (10.0.0.1:39828). 
Feb 13 20:26:29.711540 sshd[4484]: Accepted publickey for core from 10.0.0.1 port 39828 ssh2: RSA SHA256:LaFXDQ5kJ4LcD/Es1CLb+6ve3k3017804ZdZMiXaUQg Feb 13 20:26:29.712799 sshd[4484]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 20:26:29.717251 systemd-logind[1421]: New session 116 of user core. Feb 13 20:26:29.729842 systemd[1]: Started session-116.scope - Session 116 of User core. Feb 13 20:26:29.830647 sshd[4484]: pam_unix(sshd:session): session closed for user core Feb 13 20:26:29.833077 systemd[1]: sshd@115-10.0.0.8:22-10.0.0.1:39828.service: Deactivated successfully. Feb 13 20:26:29.834630 systemd[1]: session-116.scope: Deactivated successfully. Feb 13 20:26:29.835864 systemd-logind[1421]: Session 116 logged out. Waiting for processes to exit. Feb 13 20:26:29.836931 systemd-logind[1421]: Removed session 116. Feb 13 20:26:30.474350 kubelet[2436]: E0213 20:26:30.474251 2436 kubelet.go:3008] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Feb 13 20:26:31.310285 kubelet[2436]: E0213 20:26:31.310250 2436 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 20:26:31.310974 kubelet[2436]: E0213 20:26:31.310887 2436 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"install-cni-plugin\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/flannel/flannel-cni-plugin:v1.1.2\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/flannel/flannel-cni-plugin:v1.1.2\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/flannel/flannel-cni-plugin/manifests/sha256:bf4b62b131666d040f35a327d906ee5a3418280b68a88d9b9c7e828057210443: 429 Too Many Requests - Server message: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit\"" pod="kube-flannel/kube-flannel-ds-l2mj5" podUID="71c6d29d-8b7d-4b8f-92a3-710fe670a99c" Feb 13 20:26:34.845410 systemd[1]: Started sshd@116-10.0.0.8:22-10.0.0.1:37140.service - OpenSSH per-connection server daemon (10.0.0.1:37140). Feb 13 20:26:34.880479 sshd[4501]: Accepted publickey for core from 10.0.0.1 port 37140 ssh2: RSA SHA256:LaFXDQ5kJ4LcD/Es1CLb+6ve3k3017804ZdZMiXaUQg Feb 13 20:26:34.881674 sshd[4501]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 20:26:34.885384 systemd-logind[1421]: New session 117 of user core. Feb 13 20:26:34.900895 systemd[1]: Started session-117.scope - Session 117 of User core. Feb 13 20:26:35.003127 sshd[4501]: pam_unix(sshd:session): session closed for user core Feb 13 20:26:35.006346 systemd[1]: sshd@116-10.0.0.8:22-10.0.0.1:37140.service: Deactivated successfully. Feb 13 20:26:35.008564 systemd[1]: session-117.scope: Deactivated successfully. Feb 13 20:26:35.009554 systemd-logind[1421]: Session 117 logged out. Waiting for processes to exit. Feb 13 20:26:35.010780 systemd-logind[1421]: Removed session 117. Feb 13 20:26:35.475101 kubelet[2436]: E0213 20:26:35.475053 2436 kubelet.go:3008] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"