Feb 13 20:50:00.895940 kernel: Booting Linux on physical CPU 0x0000000000 [0x413fd0c1]
Feb 13 20:50:00.895961 kernel: Linux version 6.6.74-flatcar (build@pony-truck.infra.kinvolk.io) (aarch64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p17) 13.3.1 20240614, GNU ld (Gentoo 2.42 p3) 2.42.0) #1 SMP PREEMPT Thu Feb 13 18:13:29 -00 2025
Feb 13 20:50:00.895971 kernel: KASLR enabled
Feb 13 20:50:00.895977 kernel: efi: EFI v2.7 by EDK II
Feb 13 20:50:00.895984 kernel: efi: SMBIOS 3.0=0xdced0000 MEMATTR=0xdba86018 ACPI 2.0=0xd9710018 RNG=0xd971e498 MEMRESERVE=0xd9b43d18
Feb 13 20:50:00.895990 kernel: random: crng init done
Feb 13 20:50:00.895997 kernel: ACPI: Early table checksum verification disabled
Feb 13 20:50:00.896003 kernel: ACPI: RSDP 0x00000000D9710018 000024 (v02 BOCHS )
Feb 13 20:50:00.896010 kernel: ACPI: XSDT 0x00000000D971FE98 000064 (v01 BOCHS BXPC 00000001 01000013)
Feb 13 20:50:00.896018 kernel: ACPI: FACP 0x00000000D971FA98 000114 (v06 BOCHS BXPC 00000001 BXPC 00000001)
Feb 13 20:50:00.896024 kernel: ACPI: DSDT 0x00000000D9717518 0014A2 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Feb 13 20:50:00.896030 kernel: ACPI: APIC 0x00000000D971FC18 0001A8 (v04 BOCHS BXPC 00000001 BXPC 00000001)
Feb 13 20:50:00.896044 kernel: ACPI: PPTT 0x00000000D971D898 00009C (v02 BOCHS BXPC 00000001 BXPC 00000001)
Feb 13 20:50:00.896050 kernel: ACPI: GTDT 0x00000000D971E818 000060 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Feb 13 20:50:00.896058 kernel: ACPI: MCFG 0x00000000D971E918 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001)
Feb 13 20:50:00.896067 kernel: ACPI: SPCR 0x00000000D971FF98 000050 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Feb 13 20:50:00.896073 kernel: ACPI: DBG2 0x00000000D971E418 000057 (v00 BOCHS BXPC 00000001 BXPC 00000001)
Feb 13 20:50:00.896080 kernel: ACPI: IORT 0x00000000D971E718 000080 (v03 BOCHS BXPC 00000001 BXPC 00000001)
Feb 13 20:50:00.896087 kernel: ACPI: SPCR: console: pl011,mmio,0x9000000,9600
Feb 13 20:50:00.896093 kernel: NUMA: Failed to initialise from firmware
Feb 13 20:50:00.896100 kernel: NUMA: Faking a node at [mem 0x0000000040000000-0x00000000dcffffff]
Feb 13 20:50:00.896107 kernel: NUMA: NODE_DATA [mem 0xdc958800-0xdc95dfff]
Feb 13 20:50:00.896114 kernel: Zone ranges:
Feb 13 20:50:00.896120 kernel: DMA [mem 0x0000000040000000-0x00000000dcffffff]
Feb 13 20:50:00.896127 kernel: DMA32 empty
Feb 13 20:50:00.896135 kernel: Normal empty
Feb 13 20:50:00.896141 kernel: Movable zone start for each node
Feb 13 20:50:00.896148 kernel: Early memory node ranges
Feb 13 20:50:00.896155 kernel: node 0: [mem 0x0000000040000000-0x00000000d976ffff]
Feb 13 20:50:00.896162 kernel: node 0: [mem 0x00000000d9770000-0x00000000d9b3ffff]
Feb 13 20:50:00.896168 kernel: node 0: [mem 0x00000000d9b40000-0x00000000dce1ffff]
Feb 13 20:50:00.896175 kernel: node 0: [mem 0x00000000dce20000-0x00000000dceaffff]
Feb 13 20:50:00.896182 kernel: node 0: [mem 0x00000000dceb0000-0x00000000dcebffff]
Feb 13 20:50:00.896188 kernel: node 0: [mem 0x00000000dcec0000-0x00000000dcfdffff]
Feb 13 20:50:00.896195 kernel: node 0: [mem 0x00000000dcfe0000-0x00000000dcffffff]
Feb 13 20:50:00.896202 kernel: Initmem setup node 0 [mem 0x0000000040000000-0x00000000dcffffff]
Feb 13 20:50:00.896208 kernel: On node 0, zone DMA: 12288 pages in unavailable ranges
Feb 13 20:50:00.896217 kernel: psci: probing for conduit method from ACPI.
Feb 13 20:50:00.896223 kernel: psci: PSCIv1.1 detected in firmware.
Feb 13 20:50:00.896230 kernel: psci: Using standard PSCI v0.2 function IDs
Feb 13 20:50:00.896240 kernel: psci: Trusted OS migration not required
Feb 13 20:50:00.896247 kernel: psci: SMC Calling Convention v1.1
Feb 13 20:50:00.896254 kernel: smccc: KVM: hypervisor services detected (0x00000000 0x00000000 0x00000000 0x00000003)
Feb 13 20:50:00.896263 kernel: percpu: Embedded 31 pages/cpu s86696 r8192 d32088 u126976
Feb 13 20:50:00.896270 kernel: pcpu-alloc: s86696 r8192 d32088 u126976 alloc=31*4096
Feb 13 20:50:00.896277 kernel: pcpu-alloc: [0] 0 [0] 1 [0] 2 [0] 3
Feb 13 20:50:00.896284 kernel: Detected PIPT I-cache on CPU0
Feb 13 20:50:00.896291 kernel: CPU features: detected: GIC system register CPU interface
Feb 13 20:50:00.896299 kernel: CPU features: detected: Hardware dirty bit management
Feb 13 20:50:00.896306 kernel: CPU features: detected: Spectre-v4
Feb 13 20:50:00.896313 kernel: CPU features: detected: Spectre-BHB
Feb 13 20:50:00.896320 kernel: CPU features: kernel page table isolation forced ON by KASLR
Feb 13 20:50:00.896327 kernel: CPU features: detected: Kernel page table isolation (KPTI)
Feb 13 20:50:00.896335 kernel: CPU features: detected: ARM erratum 1418040
Feb 13 20:50:00.896342 kernel: CPU features: detected: SSBS not fully self-synchronizing
Feb 13 20:50:00.896350 kernel: alternatives: applying boot alternatives
Feb 13 20:50:00.896358 kernel: Kernel command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected acpi=force verity.usrhash=c15c751c06cfb933aa98417326b93d899c08a83ce060a940cd01082629c201a7
Feb 13 20:50:00.896366 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Feb 13 20:50:00.896373 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Feb 13 20:50:00.896380 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Feb 13 20:50:00.896387 kernel: Fallback order for Node 0: 0
Feb 13 20:50:00.896394 kernel: Built 1 zonelists, mobility grouping on. Total pages: 633024
Feb 13 20:50:00.896401 kernel: Policy zone: DMA
Feb 13 20:50:00.896408 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Feb 13 20:50:00.896417 kernel: software IO TLB: area num 4.
Feb 13 20:50:00.896424 kernel: software IO TLB: mapped [mem 0x00000000d2e00000-0x00000000d6e00000] (64MB)
Feb 13 20:50:00.896432 kernel: Memory: 2386532K/2572288K available (10240K kernel code, 2186K rwdata, 8096K rodata, 39360K init, 897K bss, 185756K reserved, 0K cma-reserved)
Feb 13 20:50:00.896439 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1
Feb 13 20:50:00.896446 kernel: rcu: Preemptible hierarchical RCU implementation.
Feb 13 20:50:00.896454 kernel: rcu: RCU event tracing is enabled.
Feb 13 20:50:00.896461 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4.
Feb 13 20:50:00.896468 kernel: Trampoline variant of Tasks RCU enabled.
Feb 13 20:50:00.896476 kernel: Tracing variant of Tasks RCU enabled.
Feb 13 20:50:00.896483 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Feb 13 20:50:00.896490 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4
Feb 13 20:50:00.896497 kernel: NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0
Feb 13 20:50:00.896506 kernel: GICv3: 256 SPIs implemented
Feb 13 20:50:00.896513 kernel: GICv3: 0 Extended SPIs implemented
Feb 13 20:50:00.896520 kernel: Root IRQ handler: gic_handle_irq
Feb 13 20:50:00.896527 kernel: GICv3: GICv3 features: 16 PPIs, DirectLPI
Feb 13 20:50:00.896534 kernel: GICv3: CPU0: found redistributor 0 region 0:0x00000000080a0000
Feb 13 20:50:00.896541 kernel: ITS [mem 0x08080000-0x0809ffff]
Feb 13 20:50:00.896556 kernel: ITS@0x0000000008080000: allocated 8192 Devices @400c0000 (indirect, esz 8, psz 64K, shr 1)
Feb 13 20:50:00.896564 kernel: ITS@0x0000000008080000: allocated 8192 Interrupt Collections @400d0000 (flat, esz 8, psz 64K, shr 1)
Feb 13 20:50:00.896572 kernel: GICv3: using LPI property table @0x00000000400f0000
Feb 13 20:50:00.896579 kernel: GICv3: CPU0: using allocated LPI pending table @0x0000000040100000
Feb 13 20:50:00.896586 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Feb 13 20:50:00.896596 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Feb 13 20:50:00.896603 kernel: arch_timer: cp15 timer(s) running at 25.00MHz (virt).
Feb 13 20:50:00.896610 kernel: clocksource: arch_sys_counter: mask: 0xffffffffffffff max_cycles: 0x5c40939b5, max_idle_ns: 440795202646 ns
Feb 13 20:50:00.896618 kernel: sched_clock: 56 bits at 25MHz, resolution 40ns, wraps every 4398046511100ns
Feb 13 20:50:00.896625 kernel: arm-pv: using stolen time PV
Feb 13 20:50:00.896633 kernel: Console: colour dummy device 80x25
Feb 13 20:50:00.896640 kernel: ACPI: Core revision 20230628
Feb 13 20:50:00.896647 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 50.00 BogoMIPS (lpj=25000)
Feb 13 20:50:00.896655 kernel: pid_max: default: 32768 minimum: 301
Feb 13 20:50:00.896662 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity
Feb 13 20:50:00.896670 kernel: landlock: Up and running.
Feb 13 20:50:00.896678 kernel: SELinux: Initializing.
Feb 13 20:50:00.896685 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Feb 13 20:50:00.896693 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Feb 13 20:50:00.896700 kernel: RCU Tasks: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Feb 13 20:50:00.896707 kernel: RCU Tasks Trace: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Feb 13 20:50:00.896715 kernel: rcu: Hierarchical SRCU implementation.
Feb 13 20:50:00.896722 kernel: rcu: Max phase no-delay instances is 400.
Feb 13 20:50:00.896729 kernel: Platform MSI: ITS@0x8080000 domain created
Feb 13 20:50:00.896738 kernel: PCI/MSI: ITS@0x8080000 domain created
Feb 13 20:50:00.896746 kernel: Remapping and enabling EFI services.
Feb 13 20:50:00.896753 kernel: smp: Bringing up secondary CPUs ...
Feb 13 20:50:00.896760 kernel: Detected PIPT I-cache on CPU1
Feb 13 20:50:00.896767 kernel: GICv3: CPU1: found redistributor 1 region 0:0x00000000080c0000
Feb 13 20:50:00.896775 kernel: GICv3: CPU1: using allocated LPI pending table @0x0000000040110000
Feb 13 20:50:00.896783 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Feb 13 20:50:00.896790 kernel: CPU1: Booted secondary processor 0x0000000001 [0x413fd0c1]
Feb 13 20:50:00.896798 kernel: Detected PIPT I-cache on CPU2
Feb 13 20:50:00.896806 kernel: GICv3: CPU2: found redistributor 2 region 0:0x00000000080e0000
Feb 13 20:50:00.896814 kernel: GICv3: CPU2: using allocated LPI pending table @0x0000000040120000
Feb 13 20:50:00.896822 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Feb 13 20:50:00.896835 kernel: CPU2: Booted secondary processor 0x0000000002 [0x413fd0c1]
Feb 13 20:50:00.896844 kernel: Detected PIPT I-cache on CPU3
Feb 13 20:50:00.896852 kernel: GICv3: CPU3: found redistributor 3 region 0:0x0000000008100000
Feb 13 20:50:00.896860 kernel: GICv3: CPU3: using allocated LPI pending table @0x0000000040130000
Feb 13 20:50:00.896868 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Feb 13 20:50:00.896875 kernel: CPU3: Booted secondary processor 0x0000000003 [0x413fd0c1]
Feb 13 20:50:00.896883 kernel: smp: Brought up 1 node, 4 CPUs
Feb 13 20:50:00.896895 kernel: SMP: Total of 4 processors activated.
Feb 13 20:50:00.896905 kernel: CPU features: detected: 32-bit EL0 Support
Feb 13 20:50:00.896915 kernel: CPU features: detected: Data cache clean to the PoU not required for I/D coherence
Feb 13 20:50:00.896924 kernel: CPU features: detected: Common not Private translations
Feb 13 20:50:00.896939 kernel: CPU features: detected: CRC32 instructions
Feb 13 20:50:00.896953 kernel: CPU features: detected: Enhanced Virtualization Traps
Feb 13 20:50:00.896961 kernel: CPU features: detected: RCpc load-acquire (LDAPR)
Feb 13 20:50:00.896969 kernel: CPU features: detected: LSE atomic instructions
Feb 13 20:50:00.896978 kernel: CPU features: detected: Privileged Access Never
Feb 13 20:50:00.896986 kernel: CPU features: detected: RAS Extension Support
Feb 13 20:50:00.896994 kernel: CPU features: detected: Speculative Store Bypassing Safe (SSBS)
Feb 13 20:50:00.897002 kernel: CPU: All CPU(s) started at EL1
Feb 13 20:50:00.897010 kernel: alternatives: applying system-wide alternatives
Feb 13 20:50:00.897017 kernel: devtmpfs: initialized
Feb 13 20:50:00.897025 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Feb 13 20:50:00.897033 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear)
Feb 13 20:50:00.897046 kernel: pinctrl core: initialized pinctrl subsystem
Feb 13 20:50:00.897056 kernel: SMBIOS 3.0.0 present.
Feb 13 20:50:00.897063 kernel: DMI: QEMU KVM Virtual Machine, BIOS edk2-20230524-3.fc38 05/24/2023
Feb 13 20:50:00.897071 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Feb 13 20:50:00.897079 kernel: DMA: preallocated 512 KiB GFP_KERNEL pool for atomic allocations
Feb 13 20:50:00.897087 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations
Feb 13 20:50:00.897095 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations
Feb 13 20:50:00.897102 kernel: audit: initializing netlink subsys (disabled)
Feb 13 20:50:00.897110 kernel: audit: type=2000 audit(0.024:1): state=initialized audit_enabled=0 res=1
Feb 13 20:50:00.897118 kernel: thermal_sys: Registered thermal governor 'step_wise'
Feb 13 20:50:00.897128 kernel: cpuidle: using governor menu
Feb 13 20:50:00.897135 kernel: hw-breakpoint: found 6 breakpoint and 4 watchpoint registers.
Feb 13 20:50:00.897143 kernel: ASID allocator initialised with 32768 entries
Feb 13 20:50:00.897151 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Feb 13 20:50:00.897159 kernel: Serial: AMBA PL011 UART driver
Feb 13 20:50:00.897166 kernel: Modules: 2G module region forced by RANDOMIZE_MODULE_REGION_FULL
Feb 13 20:50:00.897174 kernel: Modules: 0 pages in range for non-PLT usage
Feb 13 20:50:00.897182 kernel: Modules: 509040 pages in range for PLT usage
Feb 13 20:50:00.897190 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Feb 13 20:50:00.897199 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 1.00 GiB page
Feb 13 20:50:00.897207 kernel: HugeTLB: registered 32.0 MiB page size, pre-allocated 0 pages
Feb 13 20:50:00.897214 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 32.0 MiB page
Feb 13 20:50:00.897222 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Feb 13 20:50:00.897230 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 2.00 MiB page
Feb 13 20:50:00.897238 kernel: HugeTLB: registered 64.0 KiB page size, pre-allocated 0 pages
Feb 13 20:50:00.897245 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 64.0 KiB page
Feb 13 20:50:00.897253 kernel: ACPI: Added _OSI(Module Device)
Feb 13 20:50:00.897261 kernel: ACPI: Added _OSI(Processor Device)
Feb 13 20:50:00.897270 kernel: ACPI: Added _OSI(3.0 _SCP Extensions)
Feb 13 20:50:00.897278 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Feb 13 20:50:00.897286 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Feb 13 20:50:00.897294 kernel: ACPI: Interpreter enabled
Feb 13 20:50:00.897301 kernel: ACPI: Using GIC for interrupt routing
Feb 13 20:50:00.897309 kernel: ACPI: MCFG table detected, 1 entries
Feb 13 20:50:00.897317 kernel: ARMH0011:00: ttyAMA0 at MMIO 0x9000000 (irq = 12, base_baud = 0) is a SBSA
Feb 13 20:50:00.897325 kernel: printk: console [ttyAMA0] enabled
Feb 13 20:50:00.897332 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Feb 13 20:50:00.897464 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Feb 13 20:50:00.897539 kernel: acpi PNP0A08:00: _OSC: platform does not support [LTR]
Feb 13 20:50:00.897629 kernel: acpi PNP0A08:00: _OSC: OS now controls [PCIeHotplug PME AER PCIeCapability]
Feb 13 20:50:00.897695 kernel: acpi PNP0A08:00: ECAM area [mem 0x4010000000-0x401fffffff] reserved by PNP0C02:00
Feb 13 20:50:00.897760 kernel: acpi PNP0A08:00: ECAM at [mem 0x4010000000-0x401fffffff] for [bus 00-ff]
Feb 13 20:50:00.897770 kernel: ACPI: Remapped I/O 0x000000003eff0000 to [io 0x0000-0xffff window]
Feb 13 20:50:00.897779 kernel: PCI host bridge to bus 0000:00
Feb 13 20:50:00.897855 kernel: pci_bus 0000:00: root bus resource [mem 0x10000000-0x3efeffff window]
Feb 13 20:50:00.897918 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0xffff window]
Feb 13 20:50:00.897977 kernel: pci_bus 0000:00: root bus resource [mem 0x8000000000-0xffffffffff window]
Feb 13 20:50:00.898046 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Feb 13 20:50:00.898134 kernel: pci 0000:00:00.0: [1b36:0008] type 00 class 0x060000
Feb 13 20:50:00.898213 kernel: pci 0000:00:01.0: [1af4:1005] type 00 class 0x00ff00
Feb 13 20:50:00.898288 kernel: pci 0000:00:01.0: reg 0x10: [io 0x0000-0x001f]
Feb 13 20:50:00.898357 kernel: pci 0000:00:01.0: reg 0x14: [mem 0x10000000-0x10000fff]
Feb 13 20:50:00.898425 kernel: pci 0000:00:01.0: reg 0x20: [mem 0x8000000000-0x8000003fff 64bit pref]
Feb 13 20:50:00.898493 kernel: pci 0000:00:01.0: BAR 4: assigned [mem 0x8000000000-0x8000003fff 64bit pref]
Feb 13 20:50:00.898638 kernel: pci 0000:00:01.0: BAR 1: assigned [mem 0x10000000-0x10000fff]
Feb 13 20:50:00.898711 kernel: pci 0000:00:01.0: BAR 0: assigned [io 0x1000-0x101f]
Feb 13 20:50:00.898772 kernel: pci_bus 0000:00: resource 4 [mem 0x10000000-0x3efeffff window]
Feb 13 20:50:00.898835 kernel: pci_bus 0000:00: resource 5 [io 0x0000-0xffff window]
Feb 13 20:50:00.898893 kernel: pci_bus 0000:00: resource 6 [mem 0x8000000000-0xffffffffff window]
Feb 13 20:50:00.898904 kernel: ACPI: PCI: Interrupt link GSI0 configured for IRQ 35
Feb 13 20:50:00.898912 kernel: ACPI: PCI: Interrupt link GSI1 configured for IRQ 36
Feb 13 20:50:00.898921 kernel: ACPI: PCI: Interrupt link GSI2 configured for IRQ 37
Feb 13 20:50:00.898929 kernel: ACPI: PCI: Interrupt link GSI3 configured for IRQ 38
Feb 13 20:50:00.898937 kernel: iommu: Default domain type: Translated
Feb 13 20:50:00.898944 kernel: iommu: DMA domain TLB invalidation policy: strict mode
Feb 13 20:50:00.898954 kernel: efivars: Registered efivars operations
Feb 13 20:50:00.898962 kernel: vgaarb: loaded
Feb 13 20:50:00.898969 kernel: clocksource: Switched to clocksource arch_sys_counter
Feb 13 20:50:00.898977 kernel: VFS: Disk quotas dquot_6.6.0
Feb 13 20:50:00.898985 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Feb 13 20:50:00.898993 kernel: pnp: PnP ACPI init
Feb 13 20:50:00.899079 kernel: system 00:00: [mem 0x4010000000-0x401fffffff window] could not be reserved
Feb 13 20:50:00.899091 kernel: pnp: PnP ACPI: found 1 devices
Feb 13 20:50:00.899099 kernel: NET: Registered PF_INET protocol family
Feb 13 20:50:00.899109 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Feb 13 20:50:00.899117 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Feb 13 20:50:00.899125 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Feb 13 20:50:00.899133 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Feb 13 20:50:00.899141 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
Feb 13 20:50:00.899149 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Feb 13 20:50:00.899157 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Feb 13 20:50:00.899165 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Feb 13 20:50:00.899173 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Feb 13 20:50:00.899182 kernel: PCI: CLS 0 bytes, default 64
Feb 13 20:50:00.899190 kernel: kvm [1]: HYP mode not available
Feb 13 20:50:00.899198 kernel: Initialise system trusted keyrings
Feb 13 20:50:00.899205 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Feb 13 20:50:00.899213 kernel: Key type asymmetric registered
Feb 13 20:50:00.899221 kernel: Asymmetric key parser 'x509' registered
Feb 13 20:50:00.899228 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250)
Feb 13 20:50:00.899236 kernel: io scheduler mq-deadline registered
Feb 13 20:50:00.899244 kernel: io scheduler kyber registered
Feb 13 20:50:00.899253 kernel: io scheduler bfq registered
Feb 13 20:50:00.899261 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0C:00/input/input0
Feb 13 20:50:00.899269 kernel: ACPI: button: Power Button [PWRB]
Feb 13 20:50:00.899277 kernel: ACPI: \_SB_.PCI0.GSI1: Enabled at IRQ 36
Feb 13 20:50:00.899345 kernel: virtio-pci 0000:00:01.0: enabling device (0005 -> 0007)
Feb 13 20:50:00.899355 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Feb 13 20:50:00.899363 kernel: thunder_xcv, ver 1.0
Feb 13 20:50:00.899371 kernel: thunder_bgx, ver 1.0
Feb 13 20:50:00.899379 kernel: nicpf, ver 1.0
Feb 13 20:50:00.899389 kernel: nicvf, ver 1.0
Feb 13 20:50:00.899461 kernel: rtc-efi rtc-efi.0: registered as rtc0
Feb 13 20:50:00.899525 kernel: rtc-efi rtc-efi.0: setting system clock to 2025-02-13T20:50:00 UTC (1739479800)
Feb 13 20:50:00.899535 kernel: hid: raw HID events driver (C) Jiri Kosina
Feb 13 20:50:00.899543 kernel: hw perfevents: enabled with armv8_pmuv3_0 PMU driver, 7 counters available
Feb 13 20:50:00.899562 kernel: watchdog: Delayed init of the lockup detector failed: -19
Feb 13 20:50:00.899572 kernel: watchdog: Hard watchdog permanently disabled
Feb 13 20:50:00.899580 kernel: NET: Registered PF_INET6 protocol family
Feb 13 20:50:00.899591 kernel: Segment Routing with IPv6
Feb 13 20:50:00.899599 kernel: In-situ OAM (IOAM) with IPv6
Feb 13 20:50:00.899607 kernel: NET: Registered PF_PACKET protocol family
Feb 13 20:50:00.899615 kernel: Key type dns_resolver registered
Feb 13 20:50:00.899622 kernel: registered taskstats version 1
Feb 13 20:50:00.899630 kernel: Loading compiled-in X.509 certificates
Feb 13 20:50:00.899638 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.74-flatcar: 8bd805622262697b24b0fa7c407ae82c4289ceec'
Feb 13 20:50:00.899646 kernel: Key type .fscrypt registered
Feb 13 20:50:00.899653 kernel: Key type fscrypt-provisioning registered
Feb 13 20:50:00.899663 kernel: ima: No TPM chip found, activating TPM-bypass!
Feb 13 20:50:00.899671 kernel: ima: Allocated hash algorithm: sha1
Feb 13 20:50:00.899678 kernel: ima: No architecture policies found
Feb 13 20:50:00.899686 kernel: alg: No test for fips(ansi_cprng) (fips_ansi_cprng)
Feb 13 20:50:00.899694 kernel: clk: Disabling unused clocks
Feb 13 20:50:00.899702 kernel: Freeing unused kernel memory: 39360K
Feb 13 20:50:00.899710 kernel: Run /init as init process
Feb 13 20:50:00.899718 kernel: with arguments:
Feb 13 20:50:00.899725 kernel: /init
Feb 13 20:50:00.899734 kernel: with environment:
Feb 13 20:50:00.899742 kernel: HOME=/
Feb 13 20:50:00.899749 kernel: TERM=linux
Feb 13 20:50:00.899758 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
Feb 13 20:50:00.899767 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Feb 13 20:50:00.899777 systemd[1]: Detected virtualization kvm.
Feb 13 20:50:00.899786 systemd[1]: Detected architecture arm64.
Feb 13 20:50:00.899796 systemd[1]: Running in initrd.
Feb 13 20:50:00.899804 systemd[1]: No hostname configured, using default hostname.
Feb 13 20:50:00.899812 systemd[1]: Hostname set to .
Feb 13 20:50:00.899821 systemd[1]: Initializing machine ID from VM UUID.
Feb 13 20:50:00.899830 systemd[1]: Queued start job for default target initrd.target.
Feb 13 20:50:00.899838 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Feb 13 20:50:00.899847 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Feb 13 20:50:00.899856 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Feb 13 20:50:00.899866 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Feb 13 20:50:00.899875 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Feb 13 20:50:00.899883 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Feb 13 20:50:00.899893 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Feb 13 20:50:00.899901 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Feb 13 20:50:00.899910 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Feb 13 20:50:00.899918 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Feb 13 20:50:00.899928 systemd[1]: Reached target paths.target - Path Units.
Feb 13 20:50:00.899937 systemd[1]: Reached target slices.target - Slice Units.
Feb 13 20:50:00.899945 systemd[1]: Reached target swap.target - Swaps.
Feb 13 20:50:00.899953 systemd[1]: Reached target timers.target - Timer Units.
Feb 13 20:50:00.899962 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Feb 13 20:50:00.899970 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Feb 13 20:50:00.899979 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Feb 13 20:50:00.899987 systemd[1]: Listening on systemd-journald.socket - Journal Socket.
Feb 13 20:50:00.899996 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Feb 13 20:50:00.900006 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Feb 13 20:50:00.900015 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Feb 13 20:50:00.900023 systemd[1]: Reached target sockets.target - Socket Units.
Feb 13 20:50:00.900032 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Feb 13 20:50:00.900047 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Feb 13 20:50:00.900056 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Feb 13 20:50:00.900064 systemd[1]: Starting systemd-fsck-usr.service...
Feb 13 20:50:00.900073 systemd[1]: Starting systemd-journald.service - Journal Service...
Feb 13 20:50:00.900084 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Feb 13 20:50:00.900093 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Feb 13 20:50:00.900102 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Feb 13 20:50:00.900110 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Feb 13 20:50:00.900119 systemd[1]: Finished systemd-fsck-usr.service.
Feb 13 20:50:00.900128 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Feb 13 20:50:00.900139 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Feb 13 20:50:00.900148 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Feb 13 20:50:00.900176 systemd-journald[237]: Collecting audit messages is disabled.
Feb 13 20:50:00.900198 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Feb 13 20:50:00.900207 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Feb 13 20:50:00.900216 systemd-journald[237]: Journal started
Feb 13 20:50:00.900235 systemd-journald[237]: Runtime Journal (/run/log/journal/9b425338c1a84211aa8f17fc9a4215fc) is 5.9M, max 47.3M, 41.4M free.
Feb 13 20:50:00.890212 systemd-modules-load[238]: Inserted module 'overlay'
Feb 13 20:50:00.903583 systemd[1]: Started systemd-journald.service - Journal Service.
Feb 13 20:50:00.903616 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Feb 13 20:50:00.908698 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Feb 13 20:50:00.910341 systemd-modules-load[238]: Inserted module 'br_netfilter'
Feb 13 20:50:00.912451 kernel: Bridge firewalling registered
Feb 13 20:50:00.912960 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Feb 13 20:50:00.916971 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Feb 13 20:50:00.918238 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Feb 13 20:50:00.920443 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Feb 13 20:50:00.932748 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Feb 13 20:50:00.934324 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Feb 13 20:50:00.943124 dracut-cmdline[272]: dracut-dracut-053
Feb 13 20:50:00.943179 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Feb 13 20:50:00.945680 dracut-cmdline[272]: Using kernel command line parameters: rd.driver.pre=btrfs BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected acpi=force verity.usrhash=c15c751c06cfb933aa98417326b93d899c08a83ce060a940cd01082629c201a7
Feb 13 20:50:00.956736 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Feb 13 20:50:00.983916 systemd-resolved[289]: Positive Trust Anchors:
Feb 13 20:50:00.983933 systemd-resolved[289]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Feb 13 20:50:00.983965 systemd-resolved[289]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Feb 13 20:50:00.988767 systemd-resolved[289]: Defaulting to hostname 'linux'.
Feb 13 20:50:00.989940 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Feb 13 20:50:00.993145 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Feb 13 20:50:01.016587 kernel: SCSI subsystem initialized
Feb 13 20:50:01.021570 kernel: Loading iSCSI transport class v2.0-870.
Feb 13 20:50:01.029583 kernel: iscsi: registered transport (tcp)
Feb 13 20:50:01.042967 kernel: iscsi: registered transport (qla4xxx)
Feb 13 20:50:01.042984 kernel: QLogic iSCSI HBA Driver
Feb 13 20:50:01.094113 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Feb 13 20:50:01.101729 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Feb 13 20:50:01.117888 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Feb 13 20:50:01.117936 kernel: device-mapper: uevent: version 1.0.3
Feb 13 20:50:01.118922 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com
Feb 13 20:50:01.170607 kernel: raid6: neonx8 gen() 15769 MB/s
Feb 13 20:50:01.187579 kernel: raid6: neonx4 gen() 15644 MB/s
Feb 13 20:50:01.204573 kernel: raid6: neonx2 gen() 13239 MB/s
Feb 13 20:50:01.221573 kernel: raid6: neonx1 gen() 10469 MB/s
Feb 13 20:50:01.238573 kernel: raid6: int64x8 gen() 6959 MB/s
Feb 13 20:50:01.255573 kernel: raid6: int64x4 gen() 7309 MB/s
Feb 13 20:50:01.272573 kernel: raid6: int64x2 gen() 6098 MB/s
Feb 13 20:50:01.289686 kernel: raid6: int64x1 gen() 5020 MB/s
Feb 13 20:50:01.289702 kernel: raid6: using algorithm neonx8 gen() 15769 MB/s
Feb 13 20:50:01.307636 kernel: raid6: .... xor() 11901 MB/s, rmw enabled
Feb 13 20:50:01.307651 kernel: raid6: using neon recovery algorithm
Feb 13 20:50:01.312990 kernel: xor: measuring software checksum speed
Feb 13 20:50:01.313006 kernel: 8regs : 19740 MB/sec
Feb 13 20:50:01.313677 kernel: 32regs : 18752 MB/sec
Feb 13 20:50:01.314871 kernel: arm64_neon : 26954 MB/sec
Feb 13 20:50:01.314885 kernel: xor: using function: arm64_neon (26954 MB/sec)
Feb 13 20:50:01.365585 kernel: Btrfs loaded, zoned=no, fsverity=no
Feb 13 20:50:01.377610 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Feb 13 20:50:01.391722 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Feb 13 20:50:01.403418 systemd-udevd[460]: Using default interface naming scheme 'v255'.
Feb 13 20:50:01.406579 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Feb 13 20:50:01.415711 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Feb 13 20:50:01.426692 dracut-pre-trigger[468]: rd.md=0: removing MD RAID activation
Feb 13 20:50:01.452336 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Feb 13 20:50:01.463704 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Feb 13 20:50:01.502773 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Feb 13 20:50:01.508839 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Feb 13 20:50:01.523585 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Feb 13 20:50:01.525318 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Feb 13 20:50:01.527172 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Feb 13 20:50:01.529143 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Feb 13 20:50:01.537904 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Feb 13 20:50:01.547838 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Feb 13 20:50:01.553575 kernel: virtio_blk virtio1: 1/0/0 default/read/poll queues
Feb 13 20:50:01.560887 kernel: virtio_blk virtio1: [vda] 19775488 512-byte logical blocks (10.1 GB/9.43 GiB)
Feb 13 20:50:01.561016 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Feb 13 20:50:01.561028 kernel: GPT:9289727 != 19775487
Feb 13 20:50:01.561047 kernel: GPT:Alternate GPT header not at the end of the disk.
Feb 13 20:50:01.561058 kernel: GPT:9289727 != 19775487
Feb 13 20:50:01.561070 kernel: GPT: Use GNU Parted to correct GPT errors.
Feb 13 20:50:01.561080 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Feb 13 20:50:01.559231 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Feb 13 20:50:01.559343 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Feb 13 20:50:01.561696 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Feb 13 20:50:01.562620 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Feb 13 20:50:01.562788 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Feb 13 20:50:01.564556 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Feb 13 20:50:01.577652 kernel: BTRFS: device fsid 4bb2b262-8ef2-48e3-80f4-24f9d7a85bf6 devid 1 transid 40 /dev/vda3 scanned by (udev-worker) (512)
Feb 13 20:50:01.577446 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Feb 13 20:50:01.583528 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/vda6 scanned by (udev-worker) (520)
Feb 13 20:50:01.590713 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM.
Feb 13 20:50:01.592588 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Feb 13 20:50:01.602790 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT.
Feb 13 20:50:01.606500 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A.
Feb 13 20:50:01.607706 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132.
Feb 13 20:50:01.612953 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Feb 13 20:50:01.626702 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Feb 13 20:50:01.628322 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Feb 13 20:50:01.632318 disk-uuid[552]: Primary Header is updated.
Feb 13 20:50:01.632318 disk-uuid[552]: Secondary Entries is updated.
Feb 13 20:50:01.632318 disk-uuid[552]: Secondary Header is updated.
Feb 13 20:50:01.635569 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Feb 13 20:50:01.651105 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Feb 13 20:50:02.651501 disk-uuid[553]: The operation has completed successfully.
Feb 13 20:50:02.653009 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Feb 13 20:50:02.675150 systemd[1]: disk-uuid.service: Deactivated successfully.
Feb 13 20:50:02.675244 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Feb 13 20:50:02.695731 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
Feb 13 20:50:02.698613 sh[575]: Success
Feb 13 20:50:02.713569 kernel: device-mapper: verity: sha256 using implementation "sha256-ce"
Feb 13 20:50:02.755040 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
Feb 13 20:50:02.756815 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
Feb 13 20:50:02.757686 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
Feb 13 20:50:02.769438 kernel: BTRFS info (device dm-0): first mount of filesystem 4bb2b262-8ef2-48e3-80f4-24f9d7a85bf6
Feb 13 20:50:02.769480 kernel: BTRFS info (device dm-0): using crc32c (crc32c-generic) checksum algorithm
Feb 13 20:50:02.770619 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead
Feb 13 20:50:02.772097 kernel: BTRFS info (device dm-0): disabling log replay at mount time
Feb 13 20:50:02.772111 kernel: BTRFS info (device dm-0): using free space tree
Feb 13 20:50:02.775628 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
Feb 13 20:50:02.776877 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Feb 13 20:50:02.791737 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Feb 13 20:50:02.793381 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Feb 13 20:50:02.802094 kernel: BTRFS info (device vda6): first mount of filesystem 896fb6d3-4143-43a6-a44b-ca1ce10817e1
Feb 13 20:50:02.802143 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm
Feb 13 20:50:02.802155 kernel: BTRFS info (device vda6): using free space tree
Feb 13 20:50:02.805575 kernel: BTRFS info (device vda6): auto enabling async discard
Feb 13 20:50:02.816082 systemd[1]: mnt-oem.mount: Deactivated successfully.
Feb 13 20:50:02.818168 kernel: BTRFS info (device vda6): last unmount of filesystem 896fb6d3-4143-43a6-a44b-ca1ce10817e1
Feb 13 20:50:02.823153 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Feb 13 20:50:02.832748 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Feb 13 20:50:02.893622 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Feb 13 20:50:02.902925 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Feb 13 20:50:02.926415 systemd-networkd[761]: lo: Link UP
Feb 13 20:50:02.926429 systemd-networkd[761]: lo: Gained carrier
Feb 13 20:50:02.927131 systemd-networkd[761]: Enumeration completed
Feb 13 20:50:02.927657 systemd-networkd[761]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Feb 13 20:50:02.927661 systemd-networkd[761]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Feb 13 20:50:02.927874 systemd[1]: Started systemd-networkd.service - Network Configuration.
Feb 13 20:50:02.928438 systemd-networkd[761]: eth0: Link UP
Feb 13 20:50:02.928441 systemd-networkd[761]: eth0: Gained carrier
Feb 13 20:50:02.928447 systemd-networkd[761]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Feb 13 20:50:02.929384 systemd[1]: Reached target network.target - Network.
Feb 13 20:50:02.946599 systemd-networkd[761]: eth0: DHCPv4 address 10.0.0.9/16, gateway 10.0.0.1 acquired from 10.0.0.1
Feb 13 20:50:02.947792 ignition[673]: Ignition 2.19.0
Feb 13 20:50:02.947798 ignition[673]: Stage: fetch-offline
Feb 13 20:50:02.947833 ignition[673]: no configs at "/usr/lib/ignition/base.d"
Feb 13 20:50:02.947841 ignition[673]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Feb 13 20:50:02.950218 ignition[673]: parsed url from cmdline: ""
Feb 13 20:50:02.950222 ignition[673]: no config URL provided
Feb 13 20:50:02.950229 ignition[673]: reading system config file "/usr/lib/ignition/user.ign"
Feb 13 20:50:02.950239 ignition[673]: no config at "/usr/lib/ignition/user.ign"
Feb 13 20:50:02.950264 ignition[673]: op(1): [started] loading QEMU firmware config module
Feb 13 20:50:02.950269 ignition[673]: op(1): executing: "modprobe" "qemu_fw_cfg"
Feb 13 20:50:02.956036 ignition[673]: op(1): [finished] loading QEMU firmware config module
Feb 13 20:50:02.956096 ignition[673]: QEMU firmware config was not found. Ignoring...
Feb 13 20:50:02.976959 ignition[673]: parsing config with SHA512: 1cb4cff45d1e88215d0e7ecc3524536bd95291691d53319536e9d564436471fc7c02ebd94e71586b90daf7b402d867666480af53cd94fb66eeb105ebdfe8c3b8
Feb 13 20:50:02.981051 unknown[673]: fetched base config from "system"
Feb 13 20:50:02.981062 unknown[673]: fetched user config from "qemu"
Feb 13 20:50:02.981480 ignition[673]: fetch-offline: fetch-offline passed
Feb 13 20:50:02.981755 systemd-resolved[289]: Detected conflict on linux IN A 10.0.0.9
Feb 13 20:50:02.981546 ignition[673]: Ignition finished successfully
Feb 13 20:50:02.981763 systemd-resolved[289]: Hostname conflict, changing published hostname from 'linux' to 'linux5'.
Feb 13 20:50:02.983508 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Feb 13 20:50:02.984825 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json).
Feb 13 20:50:02.991690 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Feb 13 20:50:03.003068 ignition[772]: Ignition 2.19.0
Feb 13 20:50:03.003079 ignition[772]: Stage: kargs
Feb 13 20:50:03.003252 ignition[772]: no configs at "/usr/lib/ignition/base.d"
Feb 13 20:50:03.003261 ignition[772]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Feb 13 20:50:03.005893 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Feb 13 20:50:03.004116 ignition[772]: kargs: kargs passed
Feb 13 20:50:03.004161 ignition[772]: Ignition finished successfully
Feb 13 20:50:03.013748 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Feb 13 20:50:03.023271 ignition[780]: Ignition 2.19.0
Feb 13 20:50:03.023281 ignition[780]: Stage: disks
Feb 13 20:50:03.023433 ignition[780]: no configs at "/usr/lib/ignition/base.d"
Feb 13 20:50:03.023442 ignition[780]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Feb 13 20:50:03.024418 ignition[780]: disks: disks passed
Feb 13 20:50:03.024460 ignition[780]: Ignition finished successfully
Feb 13 20:50:03.027583 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Feb 13 20:50:03.028978 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Feb 13 20:50:03.030474 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Feb 13 20:50:03.032441 systemd[1]: Reached target local-fs.target - Local File Systems.
Feb 13 20:50:03.034315 systemd[1]: Reached target sysinit.target - System Initialization.
Feb 13 20:50:03.036256 systemd[1]: Reached target basic.target - Basic System.
Feb 13 20:50:03.050730 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Feb 13 20:50:03.058441 systemd-resolved[289]: Detected conflict on linux5 IN A 10.0.0.9
Feb 13 20:50:03.058456 systemd-resolved[289]: Hostname conflict, changing published hostname from 'linux5' to 'linux13'.
Feb 13 20:50:03.061102 systemd-fsck[790]: ROOT: clean, 14/553520 files, 52654/553472 blocks
Feb 13 20:50:03.064517 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Feb 13 20:50:03.067490 systemd[1]: Mounting sysroot.mount - /sysroot...
Feb 13 20:50:03.112576 kernel: EXT4-fs (vda9): mounted filesystem 9957d679-c6c4-49f4-b1b2-c3c1f3ba5699 r/w with ordered data mode. Quota mode: none.
Feb 13 20:50:03.112838 systemd[1]: Mounted sysroot.mount - /sysroot.
Feb 13 20:50:03.114074 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Feb 13 20:50:03.125648 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Feb 13 20:50:03.127332 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Feb 13 20:50:03.128481 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met.
Feb 13 20:50:03.128574 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Feb 13 20:50:03.135901 kernel: BTRFS: device label OEM devid 1 transid 13 /dev/vda6 scanned by mount (798)
Feb 13 20:50:03.128598 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Feb 13 20:50:03.140454 kernel: BTRFS info (device vda6): first mount of filesystem 896fb6d3-4143-43a6-a44b-ca1ce10817e1
Feb 13 20:50:03.140474 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm
Feb 13 20:50:03.140484 kernel: BTRFS info (device vda6): using free space tree
Feb 13 20:50:03.132625 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Feb 13 20:50:03.134788 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Feb 13 20:50:03.143605 kernel: BTRFS info (device vda6): auto enabling async discard
Feb 13 20:50:03.145188 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Feb 13 20:50:03.183065 initrd-setup-root[822]: cut: /sysroot/etc/passwd: No such file or directory
Feb 13 20:50:03.187516 initrd-setup-root[829]: cut: /sysroot/etc/group: No such file or directory
Feb 13 20:50:03.191358 initrd-setup-root[836]: cut: /sysroot/etc/shadow: No such file or directory
Feb 13 20:50:03.194208 initrd-setup-root[843]: cut: /sysroot/etc/gshadow: No such file or directory
Feb 13 20:50:03.265767 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Feb 13 20:50:03.273691 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Feb 13 20:50:03.276124 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Feb 13 20:50:03.280576 kernel: BTRFS info (device vda6): last unmount of filesystem 896fb6d3-4143-43a6-a44b-ca1ce10817e1
Feb 13 20:50:03.296712 ignition[911]: INFO : Ignition 2.19.0
Feb 13 20:50:03.296712 ignition[911]: INFO : Stage: mount
Feb 13 20:50:03.299043 ignition[911]: INFO : no configs at "/usr/lib/ignition/base.d"
Feb 13 20:50:03.299043 ignition[911]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Feb 13 20:50:03.299043 ignition[911]: INFO : mount: mount passed
Feb 13 20:50:03.299043 ignition[911]: INFO : Ignition finished successfully
Feb 13 20:50:03.297091 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Feb 13 20:50:03.299619 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Feb 13 20:50:03.311651 systemd[1]: Starting ignition-files.service - Ignition (files)...
Feb 13 20:50:03.768328 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Feb 13 20:50:03.782771 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Feb 13 20:50:03.788571 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/vda6 scanned by mount (924)
Feb 13 20:50:03.790677 kernel: BTRFS info (device vda6): first mount of filesystem 896fb6d3-4143-43a6-a44b-ca1ce10817e1
Feb 13 20:50:03.790701 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm
Feb 13 20:50:03.790712 kernel: BTRFS info (device vda6): using free space tree
Feb 13 20:50:03.793573 kernel: BTRFS info (device vda6): auto enabling async discard
Feb 13 20:50:03.794648 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Feb 13 20:50:03.816376 ignition[941]: INFO : Ignition 2.19.0
Feb 13 20:50:03.816376 ignition[941]: INFO : Stage: files
Feb 13 20:50:03.817873 ignition[941]: INFO : no configs at "/usr/lib/ignition/base.d"
Feb 13 20:50:03.817873 ignition[941]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Feb 13 20:50:03.817873 ignition[941]: DEBUG : files: compiled without relabeling support, skipping
Feb 13 20:50:03.821186 ignition[941]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Feb 13 20:50:03.821186 ignition[941]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Feb 13 20:50:03.821186 ignition[941]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Feb 13 20:50:03.821186 ignition[941]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Feb 13 20:50:03.821186 ignition[941]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Feb 13 20:50:03.820402 unknown[941]: wrote ssh authorized keys file for user: core
Feb 13 20:50:03.828104 ignition[941]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.0-linux-arm64.tar.gz"
Feb 13 20:50:03.828104 ignition[941]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.0-linux-arm64.tar.gz: attempt #1
Feb 13 20:50:03.895312 ignition[941]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
Feb 13 20:50:04.271972 ignition[941]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.0-linux-arm64.tar.gz"
Feb 13 20:50:04.273994 ignition[941]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh"
Feb 13 20:50:04.273994 ignition[941]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh"
Feb 13 20:50:04.273994 ignition[941]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml"
Feb 13 20:50:04.273994 ignition[941]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml"
Feb 13 20:50:04.273994 ignition[941]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Feb 13 20:50:04.273994 ignition[941]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Feb 13 20:50:04.273994 ignition[941]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Feb 13 20:50:04.273994 ignition[941]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Feb 13 20:50:04.273994 ignition[941]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf"
Feb 13 20:50:04.273994 ignition[941]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Feb 13 20:50:04.273994 ignition[941]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.0-arm64.raw"
Feb 13 20:50:04.273994 ignition[941]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.0-arm64.raw"
Feb 13 20:50:04.273994 ignition[941]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.0-arm64.raw"
Feb 13 20:50:04.273994 ignition[941]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.32.0-arm64.raw: attempt #1
Feb 13 20:50:04.621601 ignition[941]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK
Feb 13 20:50:04.943605 ignition[941]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.0-arm64.raw"
Feb 13 20:50:04.943605 ignition[941]: INFO : files: op(b): [started] processing unit "prepare-helm.service"
Feb 13 20:50:04.947980 ignition[941]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Feb 13 20:50:04.947980 ignition[941]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Feb 13 20:50:04.947980 ignition[941]: INFO : files: op(b): [finished] processing unit "prepare-helm.service"
Feb 13 20:50:04.947980 ignition[941]: INFO : files: op(d): [started] processing unit "coreos-metadata.service"
Feb 13 20:50:04.947980 ignition[941]: INFO : files: op(d): op(e): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Feb 13 20:50:04.947980 ignition[941]: INFO : files: op(d): op(e): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Feb 13 20:50:04.947980 ignition[941]: INFO : files: op(d): [finished] processing unit "coreos-metadata.service"
Feb 13 20:50:04.947980 ignition[941]: INFO : files: op(f): [started] setting preset to disabled for "coreos-metadata.service"
Feb 13 20:50:04.951448 systemd-networkd[761]: eth0: Gained IPv6LL
Feb 13 20:50:04.971052 ignition[941]: INFO : files: op(f): op(10): [started] removing enablement symlink(s) for "coreos-metadata.service"
Feb 13 20:50:04.974753 ignition[941]: INFO : files: op(f): op(10): [finished] removing enablement symlink(s) for "coreos-metadata.service"
Feb 13 20:50:04.976270 ignition[941]: INFO : files: op(f): [finished] setting preset to disabled for "coreos-metadata.service"
Feb 13 20:50:04.976270 ignition[941]: INFO : files: op(11): [started] setting preset to enabled for "prepare-helm.service"
Feb 13 20:50:04.976270 ignition[941]: INFO : files: op(11): [finished] setting preset to enabled for "prepare-helm.service"
Feb 13 20:50:04.976270 ignition[941]: INFO : files: createResultFile: createFiles: op(12): [started] writing file "/sysroot/etc/.ignition-result.json"
Feb 13 20:50:04.976270 ignition[941]: INFO : files: createResultFile: createFiles: op(12): [finished] writing file "/sysroot/etc/.ignition-result.json"
Feb 13 20:50:04.976270 ignition[941]: INFO : files: files passed
Feb 13 20:50:04.976270 ignition[941]: INFO : Ignition finished successfully
Feb 13 20:50:04.977883 systemd[1]: Finished ignition-files.service - Ignition (files).
Feb 13 20:50:04.994703 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Feb 13 20:50:04.997860 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Feb 13 20:50:05.001234 systemd[1]: ignition-quench.service: Deactivated successfully.
Feb 13 20:50:05.001369 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Feb 13 20:50:05.004944 initrd-setup-root-after-ignition[969]: grep: /sysroot/oem/oem-release: No such file or directory
Feb 13 20:50:05.007686 initrd-setup-root-after-ignition[971]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Feb 13 20:50:05.007686 initrd-setup-root-after-ignition[971]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Feb 13 20:50:05.010764 initrd-setup-root-after-ignition[975]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Feb 13 20:50:05.012285 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Feb 13 20:50:05.013767 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Feb 13 20:50:05.031729 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Feb 13 20:50:05.052283 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Feb 13 20:50:05.052395 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Feb 13 20:50:05.054622 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Feb 13 20:50:05.056543 systemd[1]: Reached target initrd.target - Initrd Default Target.
Feb 13 20:50:05.058369 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Feb 13 20:50:05.059199 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Feb 13 20:50:05.076585 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Feb 13 20:50:05.085730 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Feb 13 20:50:05.094895 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Feb 13 20:50:05.096169 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Feb 13 20:50:05.098176 systemd[1]: Stopped target timers.target - Timer Units.
Feb 13 20:50:05.099937 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Feb 13 20:50:05.100073 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Feb 13 20:50:05.102565 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Feb 13 20:50:05.104569 systemd[1]: Stopped target basic.target - Basic System.
Feb 13 20:50:05.106242 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Feb 13 20:50:05.107908 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Feb 13 20:50:05.109758 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Feb 13 20:50:05.111634 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Feb 13 20:50:05.113465 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
Feb 13 20:50:05.115420 systemd[1]: Stopped target sysinit.target - System Initialization.
Feb 13 20:50:05.117464 systemd[1]: Stopped target local-fs.target - Local File Systems.
Feb 13 20:50:05.119262 systemd[1]: Stopped target swap.target - Swaps.
Feb 13 20:50:05.120813 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Feb 13 20:50:05.120949 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
Feb 13 20:50:05.123296 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
Feb 13 20:50:05.125253 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Feb 13 20:50:05.127193 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
Feb 13 20:50:05.127300 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Feb 13 20:50:05.129319 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Feb 13 20:50:05.129443 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
Feb 13 20:50:05.132312 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Feb 13 20:50:05.132437 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
Feb 13 20:50:05.134436 systemd[1]: Stopped target paths.target - Path Units.
Feb 13 20:50:05.135915 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Feb 13 20:50:05.139604 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Feb 13 20:50:05.140865 systemd[1]: Stopped target slices.target - Slice Units.
Feb 13 20:50:05.143017 systemd[1]: Stopped target sockets.target - Socket Units.
Feb 13 20:50:05.144546 systemd[1]: iscsid.socket: Deactivated successfully.
Feb 13 20:50:05.144658 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
Feb 13 20:50:05.146251 systemd[1]: iscsiuio.socket: Deactivated successfully.
Feb 13 20:50:05.146343 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Feb 13 20:50:05.147901 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Feb 13 20:50:05.148020 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
Feb 13 20:50:05.149828 systemd[1]: ignition-files.service: Deactivated successfully.
Feb 13 20:50:05.149931 systemd[1]: Stopped ignition-files.service - Ignition (files).
Feb 13 20:50:05.163735 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
Feb 13 20:50:05.165359 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
Feb 13 20:50:05.166227 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Feb 13 20:50:05.166353 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
Feb 13 20:50:05.168179 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Feb 13 20:50:05.168281 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
Feb 13 20:50:05.173475 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Feb 13 20:50:05.173587 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
Feb 13 20:50:05.176600 ignition[996]: INFO : Ignition 2.19.0
Feb 13 20:50:05.176600 ignition[996]: INFO : Stage: umount
Feb 13 20:50:05.176600 ignition[996]: INFO : no configs at "/usr/lib/ignition/base.d"
Feb 13 20:50:05.176600 ignition[996]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Feb 13 20:50:05.180361 ignition[996]: INFO : umount: umount passed
Feb 13 20:50:05.180361 ignition[996]: INFO : Ignition finished successfully
Feb 13 20:50:05.182528 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Feb 13 20:50:05.183035 systemd[1]: ignition-mount.service: Deactivated successfully.
Feb 13 20:50:05.183118 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
Feb 13 20:50:05.184958 systemd[1]: Stopped target network.target - Network.
Feb 13 20:50:05.186006 systemd[1]: ignition-disks.service: Deactivated successfully.
Feb 13 20:50:05.186083 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
Feb 13 20:50:05.187721 systemd[1]: ignition-kargs.service: Deactivated successfully.
Feb 13 20:50:05.187768 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
Feb 13 20:50:05.189267 systemd[1]: ignition-setup.service: Deactivated successfully.
Feb 13 20:50:05.189314 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
Feb 13 20:50:05.190977 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
Feb 13 20:50:05.191035 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
Feb 13 20:50:05.192943 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
Feb 13 20:50:05.195806 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
Feb 13 20:50:05.199339 systemd-networkd[761]: eth0: DHCPv6 lease lost
Feb 13 20:50:05.201599 systemd[1]: systemd-networkd.service: Deactivated successfully.
Feb 13 20:50:05.201739 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
Feb 13 20:50:05.203787 systemd[1]: systemd-resolved.service: Deactivated successfully.
Feb 13 20:50:05.203901 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
Feb 13 20:50:05.207017 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Feb 13 20:50:05.207058 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
Feb 13 20:50:05.218682 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
Feb 13 20:50:05.219529 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Feb 13 20:50:05.219622 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Feb 13 20:50:05.221624 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Feb 13 20:50:05.221669 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Feb 13 20:50:05.223329 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Feb 13 20:50:05.223373 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
Feb 13 20:50:05.225356 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Feb 13 20:50:05.225402 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
Feb 13 20:50:05.227285 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
Feb 13 20:50:05.235706 systemd[1]: network-cleanup.service: Deactivated successfully.
Feb 13 20:50:05.236705 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
Feb 13 20:50:05.239210 systemd[1]: systemd-udevd.service: Deactivated successfully.
Feb 13 20:50:05.239349 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
Feb 13 20:50:05.241562 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Feb 13 20:50:05.241604 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
Feb 13 20:50:05.243345 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Feb 13 20:50:05.243378 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
Feb 13 20:50:05.245087 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Feb 13 20:50:05.245133 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
Feb 13 20:50:05.247673 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Feb 13 20:50:05.247720 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
Feb 13 20:50:05.250421 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Feb 13 20:50:05.250471 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Feb 13 20:50:05.265786 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
Feb 13 20:50:05.266821 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Feb 13 20:50:05.266939 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Feb 13 20:50:05.270200 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully.
Feb 13 20:50:05.270251 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Feb 13 20:50:05.272436 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Feb 13 20:50:05.272479 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
Feb 13 20:50:05.274609 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Feb 13 20:50:05.274652 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Feb 13 20:50:05.276936 systemd[1]: sysroot-boot.service: Deactivated successfully.
Feb 13 20:50:05.277037 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
Feb 13 20:50:05.278702 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Feb 13 20:50:05.278779 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
Feb 13 20:50:05.281270 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
Feb 13 20:50:05.283284 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Feb 13 20:50:05.283351 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
Feb 13 20:50:05.285902 systemd[1]: Starting initrd-switch-root.service - Switch Root...
Feb 13 20:50:05.296489 systemd[1]: Switching root.
Feb 13 20:50:05.318620 systemd-journald[237]: Journal stopped
Feb 13 20:50:05.995259 systemd-journald[237]: Received SIGTERM from PID 1 (systemd).
Feb 13 20:50:05.995316 kernel: SELinux: policy capability network_peer_controls=1
Feb 13 20:50:05.995329 kernel: SELinux: policy capability open_perms=1
Feb 13 20:50:05.995339 kernel: SELinux: policy capability extended_socket_class=1
Feb 13 20:50:05.995349 kernel: SELinux: policy capability always_check_network=0
Feb 13 20:50:05.995359 kernel: SELinux: policy capability cgroup_seclabel=1
Feb 13 20:50:05.995374 kernel: SELinux: policy capability nnp_nosuid_transition=1
Feb 13 20:50:05.995384 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
Feb 13 20:50:05.995393 kernel: SELinux: policy capability ioctl_skip_cloexec=0
Feb 13 20:50:05.995406 kernel: audit: type=1403 audit(1739479805.456:2): auid=4294967295 ses=4294967295 lsm=selinux res=1
Feb 13 20:50:05.995417 systemd[1]: Successfully loaded SELinux policy in 32.053ms.
Feb 13 20:50:05.995433 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 9.378ms.
Feb 13 20:50:05.995445 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Feb 13 20:50:05.995456 systemd[1]: Detected virtualization kvm.
Feb 13 20:50:05.995466 systemd[1]: Detected architecture arm64.
Feb 13 20:50:05.995478 systemd[1]: Detected first boot.
Feb 13 20:50:05.995488 systemd[1]: Initializing machine ID from VM UUID.
Feb 13 20:50:05.995499 zram_generator::config[1041]: No configuration found.
Feb 13 20:50:05.995514 systemd[1]: Populated /etc with preset unit settings.
Feb 13 20:50:05.995524 systemd[1]: initrd-switch-root.service: Deactivated successfully.
Feb 13 20:50:05.995535 systemd[1]: Stopped initrd-switch-root.service - Switch Root.
Feb 13 20:50:05.995545 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
Feb 13 20:50:05.995721 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
Feb 13 20:50:05.995740 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
Feb 13 20:50:05.995751 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
Feb 13 20:50:05.995761 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
Feb 13 20:50:05.995772 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
Feb 13 20:50:05.995782 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
Feb 13 20:50:05.995792 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
Feb 13 20:50:05.995803 systemd[1]: Created slice user.slice - User and Session Slice.
Feb 13 20:50:05.995813 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Feb 13 20:50:05.995824 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Feb 13 20:50:05.995837 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
Feb 13 20:50:05.995847 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
Feb 13 20:50:05.995858 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
Feb 13 20:50:05.995869 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Feb 13 20:50:05.995880 systemd[1]: Expecting device dev-ttyAMA0.device - /dev/ttyAMA0...
Feb 13 20:50:05.995890 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Feb 13 20:50:05.995901 systemd[1]: Stopped target initrd-switch-root.target - Switch Root.
Feb 13 20:50:05.995911 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems.
Feb 13 20:50:05.995926 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System.
Feb 13 20:50:05.995939 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
Feb 13 20:50:05.995949 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Feb 13 20:50:05.995960 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Feb 13 20:50:05.995970 systemd[1]: Reached target slices.target - Slice Units.
Feb 13 20:50:05.995981 systemd[1]: Reached target swap.target - Swaps.
Feb 13 20:50:05.995991 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
Feb 13 20:50:05.996009 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
Feb 13 20:50:05.996023 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Feb 13 20:50:05.996034 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Feb 13 20:50:05.996045 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Feb 13 20:50:05.996055 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
Feb 13 20:50:05.996065 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
Feb 13 20:50:05.996076 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
Feb 13 20:50:05.996086 systemd[1]: Mounting media.mount - External Media Directory...
Feb 13 20:50:05.996096 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
Feb 13 20:50:05.996106 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
Feb 13 20:50:05.996118 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
Feb 13 20:50:05.996129 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Feb 13 20:50:05.996140 systemd[1]: Reached target machines.target - Containers.
Feb 13 20:50:05.996151 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
Feb 13 20:50:05.996161 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Feb 13 20:50:05.996172 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Feb 13 20:50:05.996182 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
Feb 13 20:50:05.996193 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Feb 13 20:50:05.996203 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Feb 13 20:50:05.996216 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Feb 13 20:50:05.996227 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
Feb 13 20:50:05.996237 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Feb 13 20:50:05.996248 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Feb 13 20:50:05.996258 systemd[1]: systemd-fsck-root.service: Deactivated successfully.
Feb 13 20:50:05.996269 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device.
Feb 13 20:50:05.996278 kernel: fuse: init (API version 7.39)
Feb 13 20:50:05.996289 systemd[1]: systemd-fsck-usr.service: Deactivated successfully.
Feb 13 20:50:05.996300 systemd[1]: Stopped systemd-fsck-usr.service.
Feb 13 20:50:05.996311 kernel: loop: module loaded
Feb 13 20:50:05.996321 systemd[1]: Starting systemd-journald.service - Journal Service...
Feb 13 20:50:05.996331 kernel: ACPI: bus type drm_connector registered
Feb 13 20:50:05.996341 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Feb 13 20:50:05.996352 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Feb 13 20:50:05.996362 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
Feb 13 20:50:05.996373 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Feb 13 20:50:05.996386 systemd[1]: verity-setup.service: Deactivated successfully.
Feb 13 20:50:05.996418 systemd-journald[1108]: Collecting audit messages is disabled.
Feb 13 20:50:05.996441 systemd[1]: Stopped verity-setup.service.
Feb 13 20:50:05.996453 systemd-journald[1108]: Journal started
Feb 13 20:50:05.996473 systemd-journald[1108]: Runtime Journal (/run/log/journal/9b425338c1a84211aa8f17fc9a4215fc) is 5.9M, max 47.3M, 41.4M free.
Feb 13 20:50:05.805357 systemd[1]: Queued start job for default target multi-user.target.
Feb 13 20:50:05.818393 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6.
Feb 13 20:50:05.818801 systemd[1]: systemd-journald.service: Deactivated successfully.
Feb 13 20:50:05.998681 systemd[1]: Started systemd-journald.service - Journal Service.
Feb 13 20:50:05.999316 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
Feb 13 20:50:06.000447 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
Feb 13 20:50:06.001631 systemd[1]: Mounted media.mount - External Media Directory.
Feb 13 20:50:06.002668 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
Feb 13 20:50:06.003770 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System.
Feb 13 20:50:06.004894 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp.
Feb 13 20:50:06.007610 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files.
Feb 13 20:50:06.008881 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Feb 13 20:50:06.010279 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Feb 13 20:50:06.010422 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
Feb 13 20:50:06.011812 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Feb 13 20:50:06.011940 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Feb 13 20:50:06.013203 systemd[1]: modprobe@drm.service: Deactivated successfully.
Feb 13 20:50:06.013343 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Feb 13 20:50:06.014794 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Feb 13 20:50:06.014935 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Feb 13 20:50:06.016298 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Feb 13 20:50:06.016423 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse.
Feb 13 20:50:06.017791 systemd[1]: modprobe@loop.service: Deactivated successfully.
Feb 13 20:50:06.017927 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Feb 13 20:50:06.019372 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Feb 13 20:50:06.020707 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Feb 13 20:50:06.022135 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems.
Feb 13 20:50:06.034150 systemd[1]: Reached target network-pre.target - Preparation for Network.
Feb 13 20:50:06.043685 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System...
Feb 13 20:50:06.045875 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System...
Feb 13 20:50:06.046901 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Feb 13 20:50:06.046941 systemd[1]: Reached target local-fs.target - Local File Systems.
Feb 13 20:50:06.048810 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink).
Feb 13 20:50:06.050877 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown...
Feb 13 20:50:06.052879 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache...
Feb 13 20:50:06.053910 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Feb 13 20:50:06.055426 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database...
Feb 13 20:50:06.057284 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage...
Feb 13 20:50:06.058380 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Feb 13 20:50:06.061713 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed...
Feb 13 20:50:06.062840 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Feb 13 20:50:06.064786 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Feb 13 20:50:06.064933 systemd-journald[1108]: Time spent on flushing to /var/log/journal/9b425338c1a84211aa8f17fc9a4215fc is 16.447ms for 859 entries.
Feb 13 20:50:06.064933 systemd-journald[1108]: System Journal (/var/log/journal/9b425338c1a84211aa8f17fc9a4215fc) is 8.0M, max 195.6M, 187.6M free.
Feb 13 20:50:06.087969 systemd-journald[1108]: Received client request to flush runtime journal.
Feb 13 20:50:06.068707 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/...
Feb 13 20:50:06.074063 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Feb 13 20:50:06.076665 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Feb 13 20:50:06.079843 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System.
Feb 13 20:50:06.083801 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System.
Feb 13 20:50:06.089797 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown.
Feb 13 20:50:06.091340 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage.
Feb 13 20:50:06.093718 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed.
Feb 13 20:50:06.095286 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Feb 13 20:50:06.095563 kernel: loop0: detected capacity change from 0 to 201592
Feb 13 20:50:06.099972 systemd[1]: Reached target first-boot-complete.target - First Boot Complete.
Feb 13 20:50:06.107910 systemd-tmpfiles[1154]: ACLs are not supported, ignoring.
Feb 13 20:50:06.108084 systemd-tmpfiles[1154]: ACLs are not supported, ignoring.
Feb 13 20:50:06.111761 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk...
Feb 13 20:50:06.117643 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
Feb 13 20:50:06.115669 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization...
Feb 13 20:50:06.117090 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Feb 13 20:50:06.122846 systemd[1]: Starting systemd-sysusers.service - Create System Users...
Feb 13 20:50:06.128854 udevadm[1168]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation-early.service, lvm2-activation.service not to pull it in.
Feb 13 20:50:06.137702 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
Feb 13 20:50:06.138362 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk.
Feb 13 20:50:06.150884 systemd[1]: Finished systemd-sysusers.service - Create System Users.
Feb 13 20:50:06.153394 kernel: loop1: detected capacity change from 0 to 114432
Feb 13 20:50:06.157739 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Feb 13 20:50:06.172852 systemd-tmpfiles[1175]: ACLs are not supported, ignoring.
Feb 13 20:50:06.172874 systemd-tmpfiles[1175]: ACLs are not supported, ignoring.
Feb 13 20:50:06.176671 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Feb 13 20:50:06.192581 kernel: loop2: detected capacity change from 0 to 114328
Feb 13 20:50:06.233111 kernel: loop3: detected capacity change from 0 to 201592
Feb 13 20:50:06.239776 kernel: loop4: detected capacity change from 0 to 114432
Feb 13 20:50:06.243614 kernel: loop5: detected capacity change from 0 to 114328
Feb 13 20:50:06.246694 (sd-merge)[1181]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes'.
Feb 13 20:50:06.247425 (sd-merge)[1181]: Merged extensions into '/usr'.
Feb 13 20:50:06.251624 systemd[1]: Reloading requested from client PID 1152 ('systemd-sysext') (unit systemd-sysext.service)...
Feb 13 20:50:06.251639 systemd[1]: Reloading...
Feb 13 20:50:06.307627 zram_generator::config[1207]: No configuration found.
Feb 13 20:50:06.359585 ldconfig[1147]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
Feb 13 20:50:06.405677 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Feb 13 20:50:06.441381 systemd[1]: Reloading finished in 189 ms.
Feb 13 20:50:06.466654 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache.
Feb 13 20:50:06.468026 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/.
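
systemd-sysext has just merged the 'containerd-flatcar', 'docker-flatcar', and 'kubernetes' extension images into /usr (the kubernetes image is the sysext-bakery .raw that Ignition linked into /etc/extensions earlier in this log). A sketch of how the merge could be inspected on the running system, assuming only stock systemd tooling:

    systemd-sysext status    # which extensions are merged into each hierarchy
    ls -l /etc/extensions/   # kubernetes.raw -> /opt/extensions/kubernetes/kubernetes-v1.32.0-arm64.raw
    systemd-sysext refresh   # re-merge after adding or removing an extension image
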
Feb 13 20:50:06.485796 systemd[1]: Starting ensure-sysext.service...
Feb 13 20:50:06.487728 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Feb 13 20:50:06.498930 systemd[1]: Reloading requested from client PID 1241 ('systemctl') (unit ensure-sysext.service)...
Feb 13 20:50:06.498948 systemd[1]: Reloading...
Feb 13 20:50:06.513518 systemd-tmpfiles[1242]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
Feb 13 20:50:06.514184 systemd-tmpfiles[1242]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring.
Feb 13 20:50:06.514986 systemd-tmpfiles[1242]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
Feb 13 20:50:06.515334 systemd-tmpfiles[1242]: ACLs are not supported, ignoring.
Feb 13 20:50:06.515462 systemd-tmpfiles[1242]: ACLs are not supported, ignoring.
Feb 13 20:50:06.517825 systemd-tmpfiles[1242]: Detected autofs mount point /boot during canonicalization of boot.
Feb 13 20:50:06.517931 systemd-tmpfiles[1242]: Skipping /boot
Feb 13 20:50:06.525382 systemd-tmpfiles[1242]: Detected autofs mount point /boot during canonicalization of boot.
Feb 13 20:50:06.525494 systemd-tmpfiles[1242]: Skipping /boot
Feb 13 20:50:06.554580 zram_generator::config[1269]: No configuration found.
Feb 13 20:50:06.634812 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Feb 13 20:50:06.670396 systemd[1]: Reloading finished in 171 ms.
Feb 13 20:50:06.686372 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database.
Feb 13 20:50:06.699044 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Feb 13 20:50:06.706619 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules...
Feb 13 20:50:06.709029 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs...
Feb 13 20:50:06.711460 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog...
Feb 13 20:50:06.715827 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Feb 13 20:50:06.720302 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Feb 13 20:50:06.726685 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP...
Feb 13 20:50:06.730186 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Feb 13 20:50:06.733905 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Feb 13 20:50:06.737583 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Feb 13 20:50:06.741444 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Feb 13 20:50:06.743645 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Feb 13 20:50:06.751847 systemd[1]: Starting systemd-userdbd.service - User Database Manager...
Feb 13 20:50:06.754674 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog.
Feb 13 20:50:06.756571 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Feb 13 20:50:06.756713 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Feb 13 20:50:06.758484 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Feb 13 20:50:06.758622 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Feb 13 20:50:06.760480 systemd[1]: modprobe@loop.service: Deactivated successfully.
Feb 13 20:50:06.760641 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Feb 13 20:50:06.761128 systemd-udevd[1311]: Using default interface naming scheme 'v255'.
Feb 13 20:50:06.769632 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Feb 13 20:50:06.769841 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Feb 13 20:50:06.779167 systemd[1]: Starting systemd-update-done.service - Update is Completed...
Feb 13 20:50:06.784758 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Feb 13 20:50:06.787931 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP.
Feb 13 20:50:06.789604 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs.
Feb 13 20:50:06.791236 systemd[1]: Finished systemd-update-done.service - Update is Completed.
Feb 13 20:50:06.807125 systemd[1]: Finished ensure-sysext.service.
Feb 13 20:50:06.808195 systemd[1]: Started systemd-userdbd.service - User Database Manager.
Feb 13 20:50:06.810170 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Feb 13 20:50:06.821114 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Feb 13 20:50:06.824747 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Feb 13 20:50:06.828312 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Feb 13 20:50:06.833723 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Feb 13 20:50:06.834748 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Feb 13 20:50:06.838956 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Feb 13 20:50:06.840821 augenrules[1363]: No rules
Feb 13 20:50:06.843002 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization...
Feb 13 20:50:06.845619 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Feb 13 20:50:06.846198 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules.
Feb 13 20:50:06.849610 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Feb 13 20:50:06.849762 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Feb 13 20:50:06.851908 systemd[1]: modprobe@drm.service: Deactivated successfully.
Feb 13 20:50:06.852054 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Feb 13 20:50:06.854467 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Feb 13 20:50:06.854640 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Feb 13 20:50:06.866764 systemd[1]: Condition check resulted in dev-ttyAMA0.device - /dev/ttyAMA0 being skipped.
Feb 13 20:50:06.868072 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Feb 13 20:50:06.872529 systemd[1]: modprobe@loop.service: Deactivated successfully.
Feb 13 20:50:06.872688 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Feb 13 20:50:06.876588 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 40 scanned by (udev-worker) (1344)
Feb 13 20:50:06.877261 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Feb 13 20:50:06.907153 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Feb 13 20:50:06.918804 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM...
Feb 13 20:50:06.933846 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization.
Feb 13 20:50:06.935296 systemd[1]: Reached target time-set.target - System Time Set.
Feb 13 20:50:06.949487 systemd-resolved[1309]: Positive Trust Anchors:
Feb 13 20:50:06.949818 systemd-networkd[1370]: lo: Link UP
Feb 13 20:50:06.949822 systemd-networkd[1370]: lo: Gained carrier
Feb 13 20:50:06.950309 systemd-resolved[1309]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Feb 13 20:50:06.950407 systemd-resolved[1309]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Feb 13 20:50:06.950587 systemd-networkd[1370]: Enumeration completed
Feb 13 20:50:06.950688 systemd[1]: Started systemd-networkd.service - Network Configuration.
Feb 13 20:50:06.951307 systemd-networkd[1370]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Feb 13 20:50:06.951310 systemd-networkd[1370]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Feb 13 20:50:06.951975 systemd-networkd[1370]: eth0: Link UP
Feb 13 20:50:06.951979 systemd-networkd[1370]: eth0: Gained carrier
Feb 13 20:50:06.952003 systemd-networkd[1370]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Feb 13 20:50:06.956762 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured...
Feb 13 20:50:06.958296 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM.
Feb 13 20:50:06.962617 systemd-resolved[1309]: Defaulting to hostname 'linux'.
Feb 13 20:50:06.967623 systemd-networkd[1370]: eth0: DHCPv4 address 10.0.0.9/16, gateway 10.0.0.1 acquired from 10.0.0.1
Feb 13 20:50:06.970370 systemd-timesyncd[1375]: Network configuration changed, trying to establish connection.
Feb 13 20:50:07.382491 systemd-timesyncd[1375]: Contacted time server 10.0.0.1:123 (10.0.0.1).
Feb 13 20:50:07.382611 systemd-timesyncd[1375]: Initial clock synchronization to Thu 2025-02-13 20:50:07.382280 UTC.
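
At this point systemd-networkd has brought up eth0 with DHCPv4 (10.0.0.9/16 via gateway 10.0.0.1) and systemd-timesyncd has synchronized the clock against 10.0.0.1:123, which is why the timestamps jump within this second and systemd-resolved flushes its caches just below. A hedged sketch of checking the same state interactively with stock systemd tools:

    networkctl status eth0         # address, gateway, carrier state
    resolvectl status              # per-link DNS configuration
    timedatectl timesync-status    # last contacted NTP server and offset
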
Feb 13 20:50:07.383122 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Feb 13 20:50:07.384337 systemd[1]: Reached target network.target - Network.
Feb 13 20:50:07.385206 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Feb 13 20:50:07.385591 systemd-resolved[1309]: Clock change detected. Flushing caches.
Feb 13 20:50:07.398352 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Feb 13 20:50:07.404118 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization.
Feb 13 20:50:07.408920 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes...
Feb 13 20:50:07.436001 lvm[1397]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Feb 13 20:50:07.462133 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Feb 13 20:50:07.483442 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes.
Feb 13 20:50:07.484805 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Feb 13 20:50:07.487212 systemd[1]: Reached target sysinit.target - System Initialization.
Feb 13 20:50:07.488268 systemd[1]: Started motdgen.path - Watch for update engine configuration changes.
Feb 13 20:50:07.489408 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data.
Feb 13 20:50:07.490736 systemd[1]: Started logrotate.timer - Daily rotation of log files.
Feb 13 20:50:07.491825 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information..
Feb 13 20:50:07.493124 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories.
Feb 13 20:50:07.494259 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate).
Feb 13 20:50:07.494302 systemd[1]: Reached target paths.target - Path Units.
Feb 13 20:50:07.495121 systemd[1]: Reached target timers.target - Timer Units.
Feb 13 20:50:07.496693 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket.
Feb 13 20:50:07.498952 systemd[1]: Starting docker.socket - Docker Socket for the API...
Feb 13 20:50:07.510993 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket.
Feb 13 20:50:07.513146 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes...
Feb 13 20:50:07.514772 systemd[1]: Listening on docker.socket - Docker Socket for the API.
Feb 13 20:50:07.515892 systemd[1]: Reached target sockets.target - Socket Units.
Feb 13 20:50:07.516792 systemd[1]: Reached target basic.target - Basic System.
Feb 13 20:50:07.517729 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met.
Feb 13 20:50:07.517762 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met.
Feb 13 20:50:07.518680 systemd[1]: Starting containerd.service - containerd container runtime...
Feb 13 20:50:07.520370 lvm[1405]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Feb 13 20:50:07.521286 systemd[1]: Starting dbus.service - D-Bus System Message Bus...
Feb 13 20:50:07.523777 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit...
Feb 13 20:50:07.527406 systemd[1]: Starting extend-filesystems.service - Extend Filesystems...
Feb 13 20:50:07.528384 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment).
Feb 13 20:50:07.529618 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd...
Feb 13 20:50:07.534214 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin...
Feb 13 20:50:07.536669 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline...
Feb 13 20:50:07.542254 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys...
Feb 13 20:50:07.544139 jq[1408]: false
Feb 13 20:50:07.546773 extend-filesystems[1409]: Found loop3
Feb 13 20:50:07.546773 extend-filesystems[1409]: Found loop4
Feb 13 20:50:07.555728 extend-filesystems[1409]: Found loop5
Feb 13 20:50:07.555728 extend-filesystems[1409]: Found vda
Feb 13 20:50:07.555728 extend-filesystems[1409]: Found vda1
Feb 13 20:50:07.555728 extend-filesystems[1409]: Found vda2
Feb 13 20:50:07.555728 extend-filesystems[1409]: Found vda3
Feb 13 20:50:07.555728 extend-filesystems[1409]: Found usr
Feb 13 20:50:07.555728 extend-filesystems[1409]: Found vda4
Feb 13 20:50:07.555728 extend-filesystems[1409]: Found vda6
Feb 13 20:50:07.555728 extend-filesystems[1409]: Found vda7
Feb 13 20:50:07.555728 extend-filesystems[1409]: Found vda9
Feb 13 20:50:07.555728 extend-filesystems[1409]: Checking size of /dev/vda9
Feb 13 20:50:07.555435 systemd[1]: Starting systemd-logind.service - User Login Management...
Feb 13 20:50:07.562295 dbus-daemon[1407]: [system] SELinux support is enabled
Feb 13 20:50:07.563356 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0).
Feb 13 20:50:07.563840 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details.
Feb 13 20:50:07.567273 systemd[1]: Starting update-engine.service - Update Engine...
Feb 13 20:50:07.569926 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition...
Feb 13 20:50:07.571432 systemd[1]: Started dbus.service - D-Bus System Message Bus.
Feb 13 20:50:07.576121 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes.
Feb 13 20:50:07.579875 extend-filesystems[1409]: Resized partition /dev/vda9
Feb 13 20:50:07.586827 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 40 scanned by (udev-worker) (1340)
Feb 13 20:50:07.587477 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'.
Feb 13 20:50:07.587658 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped.
Feb 13 20:50:07.587915 systemd[1]: motdgen.service: Deactivated successfully.
Feb 13 20:50:07.588048 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd.
Feb 13 20:50:07.594167 jq[1429]: true
Feb 13 20:50:07.591776 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully.
Feb 13 20:50:07.591928 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline.
Feb 13 20:50:07.609583 extend-filesystems[1431]: resize2fs 1.47.1 (20-May-2024)
Feb 13 20:50:07.621701 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 1864699 blocks
Feb 13 20:50:07.621730 tar[1432]: linux-arm64/LICENSE
Feb 13 20:50:07.621730 tar[1432]: linux-arm64/helm
Feb 13 20:50:07.610354 (ntainerd)[1434]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR
Feb 13 20:50:07.622153 jq[1433]: true
Feb 13 20:50:07.618796 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml).
Feb 13 20:50:07.618820 systemd[1]: Reached target system-config.target - Load system-provided cloud configs.
Feb 13 20:50:07.620115 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url).
Feb 13 20:50:07.620134 systemd[1]: Reached target user-config.target - Load user-provided cloud configs.
Feb 13 20:50:07.626354 systemd-logind[1418]: Watching system buttons on /dev/input/event0 (Power Button)
Feb 13 20:50:07.626634 systemd-logind[1418]: New seat seat0.
Feb 13 20:50:07.639901 update_engine[1426]: I20250213 20:50:07.639693 1426 main.cc:92] Flatcar Update Engine starting
Feb 13 20:50:07.652172 kernel: EXT4-fs (vda9): resized filesystem to 1864699
Feb 13 20:50:07.652202 extend-filesystems[1431]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required
Feb 13 20:50:07.652202 extend-filesystems[1431]: old_desc_blocks = 1, new_desc_blocks = 1
Feb 13 20:50:07.652202 extend-filesystems[1431]: The filesystem on /dev/vda9 is now 1864699 (4k) blocks long.
Feb 13 20:50:07.646205 systemd[1]: Started update-engine.service - Update Engine.
Feb 13 20:50:07.665581 update_engine[1426]: I20250213 20:50:07.646620 1426 update_check_scheduler.cc:74] Next update check in 5m33s
Feb 13 20:50:07.665606 extend-filesystems[1409]: Resized filesystem in /dev/vda9
Feb 13 20:50:07.653361 systemd[1]: extend-filesystems.service: Deactivated successfully.
Feb 13 20:50:07.653581 systemd[1]: Finished extend-filesystems.service - Extend Filesystems.
Feb 13 20:50:07.661447 systemd[1]: Started systemd-logind.service - User Login Management.
Feb 13 20:50:07.681548 systemd[1]: Started locksmithd.service - Cluster reboot manager.
Feb 13 20:50:07.685065 bash[1461]: Updated "/home/core/.ssh/authorized_keys"
Feb 13 20:50:07.687484 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition.
Feb 13 20:50:07.689391 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met.
Feb 13 20:50:07.762034 locksmithd[1463]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot"
Feb 13 20:50:07.851808 sshd_keygen[1425]: ssh-keygen: generating new host keys: RSA ECDSA ED25519
Feb 13 20:50:07.870141 containerd[1434]: time="2025-02-13T20:50:07.869214055Z" level=info msg="starting containerd" revision=174e0d1785eeda18dc2beba45e1d5a188771636b version=v1.7.21
Feb 13 20:50:07.880148 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys.
Feb 13 20:50:07.888342 systemd[1]: Starting issuegen.service - Generate /run/issue...
Feb 13 20:50:07.894134 systemd[1]: issuegen.service: Deactivated successfully.
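
extend-filesystems.service has just grown the root ext4 filesystem online: resize2fs took /dev/vda9 from 553472 to 1864699 4k blocks while it was mounted on /. A sketch of the equivalent manual operation (device name taken from this log; the partition must already span the new size):

    resize2fs /dev/vda9   # online grow of a mounted ext4 filesystem
    df -h /               # confirm the enlarged root filesystem
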
Feb 13 20:50:07.894343 systemd[1]: Finished issuegen.service - Generate /run/issue. Feb 13 20:50:07.900252 containerd[1434]: time="2025-02-13T20:50:07.900208175Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Feb 13 20:50:07.901553 containerd[1434]: time="2025-02-13T20:50:07.901504055Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.74-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Feb 13 20:50:07.902006 containerd[1434]: time="2025-02-13T20:50:07.901671255Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Feb 13 20:50:07.902006 containerd[1434]: time="2025-02-13T20:50:07.901696295Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Feb 13 20:50:07.902006 containerd[1434]: time="2025-02-13T20:50:07.901861735Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1 Feb 13 20:50:07.902006 containerd[1434]: time="2025-02-13T20:50:07.901880295Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1 Feb 13 20:50:07.902006 containerd[1434]: time="2025-02-13T20:50:07.901933535Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1 Feb 13 20:50:07.902006 containerd[1434]: time="2025-02-13T20:50:07.901946095Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Feb 13 20:50:07.902467 containerd[1434]: time="2025-02-13T20:50:07.902444055Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Feb 13 20:50:07.902620 containerd[1434]: time="2025-02-13T20:50:07.902599415Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Feb 13 20:50:07.902700 containerd[1434]: time="2025-02-13T20:50:07.902682895Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1 Feb 13 20:50:07.902808 containerd[1434]: time="2025-02-13T20:50:07.902789735Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Feb 13 20:50:07.903021 containerd[1434]: time="2025-02-13T20:50:07.903001295Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Feb 13 20:50:07.903468 containerd[1434]: time="2025-02-13T20:50:07.903363455Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Feb 13 20:50:07.903718 containerd[1434]: time="2025-02-13T20:50:07.903694095Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." 
error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Feb 13 20:50:07.904078 containerd[1434]: time="2025-02-13T20:50:07.903834815Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Feb 13 20:50:07.904078 containerd[1434]: time="2025-02-13T20:50:07.903944575Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 Feb 13 20:50:07.904078 containerd[1434]: time="2025-02-13T20:50:07.903986575Z" level=info msg="metadata content store policy set" policy=shared Feb 13 20:50:07.906571 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Feb 13 20:50:07.909366 containerd[1434]: time="2025-02-13T20:50:07.909337335Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Feb 13 20:50:07.909501 containerd[1434]: time="2025-02-13T20:50:07.909483935Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Feb 13 20:50:07.910132 containerd[1434]: time="2025-02-13T20:50:07.909633775Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1 Feb 13 20:50:07.910132 containerd[1434]: time="2025-02-13T20:50:07.909659215Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1 Feb 13 20:50:07.910132 containerd[1434]: time="2025-02-13T20:50:07.909674775Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Feb 13 20:50:07.910132 containerd[1434]: time="2025-02-13T20:50:07.909817815Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Feb 13 20:50:07.910132 containerd[1434]: time="2025-02-13T20:50:07.910046975Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Feb 13 20:50:07.910593 containerd[1434]: time="2025-02-13T20:50:07.910570175Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2 Feb 13 20:50:07.910762 containerd[1434]: time="2025-02-13T20:50:07.910703255Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1 Feb 13 20:50:07.910829 containerd[1434]: time="2025-02-13T20:50:07.910814215Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1 Feb 13 20:50:07.910961 containerd[1434]: time="2025-02-13T20:50:07.910943255Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Feb 13 20:50:07.911105 containerd[1434]: time="2025-02-13T20:50:07.911016095Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Feb 13 20:50:07.911233 containerd[1434]: time="2025-02-13T20:50:07.911035495Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Feb 13 20:50:07.911821 containerd[1434]: time="2025-02-13T20:50:07.911283735Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Feb 13 20:50:07.911821 containerd[1434]: time="2025-02-13T20:50:07.911308775Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." 
type=io.containerd.service.v1 Feb 13 20:50:07.911821 containerd[1434]: time="2025-02-13T20:50:07.911322655Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Feb 13 20:50:07.911821 containerd[1434]: time="2025-02-13T20:50:07.911335215Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Feb 13 20:50:07.911821 containerd[1434]: time="2025-02-13T20:50:07.911348175Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Feb 13 20:50:07.911821 containerd[1434]: time="2025-02-13T20:50:07.911376055Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Feb 13 20:50:07.911821 containerd[1434]: time="2025-02-13T20:50:07.911392375Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Feb 13 20:50:07.911821 containerd[1434]: time="2025-02-13T20:50:07.911405215Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Feb 13 20:50:07.911821 containerd[1434]: time="2025-02-13T20:50:07.911418335Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Feb 13 20:50:07.911821 containerd[1434]: time="2025-02-13T20:50:07.911438695Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Feb 13 20:50:07.911821 containerd[1434]: time="2025-02-13T20:50:07.911453495Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Feb 13 20:50:07.911821 containerd[1434]: time="2025-02-13T20:50:07.911466455Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Feb 13 20:50:07.911821 containerd[1434]: time="2025-02-13T20:50:07.911479535Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Feb 13 20:50:07.911821 containerd[1434]: time="2025-02-13T20:50:07.911493295Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1 Feb 13 20:50:07.912145 containerd[1434]: time="2025-02-13T20:50:07.911507735Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1 Feb 13 20:50:07.912145 containerd[1434]: time="2025-02-13T20:50:07.911530615Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Feb 13 20:50:07.912145 containerd[1434]: time="2025-02-13T20:50:07.911548335Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1 Feb 13 20:50:07.912145 containerd[1434]: time="2025-02-13T20:50:07.911561015Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Feb 13 20:50:07.912145 containerd[1434]: time="2025-02-13T20:50:07.911584255Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1 Feb 13 20:50:07.912145 containerd[1434]: time="2025-02-13T20:50:07.911607695Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1 Feb 13 20:50:07.912145 containerd[1434]: time="2025-02-13T20:50:07.911620415Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." 
type=io.containerd.grpc.v1 Feb 13 20:50:07.912145 containerd[1434]: time="2025-02-13T20:50:07.911634055Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Feb 13 20:50:07.912145 containerd[1434]: time="2025-02-13T20:50:07.911809775Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Feb 13 20:50:07.912145 containerd[1434]: time="2025-02-13T20:50:07.911828535Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1 Feb 13 20:50:07.912145 containerd[1434]: time="2025-02-13T20:50:07.911839655Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Feb 13 20:50:07.912145 containerd[1434]: time="2025-02-13T20:50:07.911853135Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1 Feb 13 20:50:07.912145 containerd[1434]: time="2025-02-13T20:50:07.911862855Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Feb 13 20:50:07.912365 containerd[1434]: time="2025-02-13T20:50:07.911875975Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1 Feb 13 20:50:07.912365 containerd[1434]: time="2025-02-13T20:50:07.911886535Z" level=info msg="NRI interface is disabled by configuration." Feb 13 20:50:07.912365 containerd[1434]: time="2025-02-13T20:50:07.911896655Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1 Feb 13 20:50:07.912455 containerd[1434]: time="2025-02-13T20:50:07.912301655Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false 
X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Feb 13 20:50:07.912455 containerd[1434]: time="2025-02-13T20:50:07.912361935Z" level=info msg="Connect containerd service" Feb 13 20:50:07.912588 containerd[1434]: time="2025-02-13T20:50:07.912512095Z" level=info msg="using legacy CRI server" Feb 13 20:50:07.912588 containerd[1434]: time="2025-02-13T20:50:07.912529815Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Feb 13 20:50:07.912710 containerd[1434]: time="2025-02-13T20:50:07.912618455Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Feb 13 20:50:07.913351 containerd[1434]: time="2025-02-13T20:50:07.913319855Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Feb 13 20:50:07.913639 containerd[1434]: time="2025-02-13T20:50:07.913596975Z" level=info msg="Start subscribing containerd event" Feb 13 20:50:07.913875 containerd[1434]: time="2025-02-13T20:50:07.913764255Z" level=info msg="Start recovering state" Feb 13 20:50:07.913875 containerd[1434]: time="2025-02-13T20:50:07.913803455Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Feb 13 20:50:07.913875 containerd[1434]: time="2025-02-13T20:50:07.913843455Z" level=info msg=serving... address=/run/containerd/containerd.sock Feb 13 20:50:07.914064 containerd[1434]: time="2025-02-13T20:50:07.913978455Z" level=info msg="Start event monitor" Feb 13 20:50:07.914064 containerd[1434]: time="2025-02-13T20:50:07.913999615Z" level=info msg="Start snapshots syncer" Feb 13 20:50:07.914442 containerd[1434]: time="2025-02-13T20:50:07.914009815Z" level=info msg="Start cni network conf syncer for default" Feb 13 20:50:07.914442 containerd[1434]: time="2025-02-13T20:50:07.914159815Z" level=info msg="Start streaming server" Feb 13 20:50:07.914442 containerd[1434]: time="2025-02-13T20:50:07.914299255Z" level=info msg="containerd successfully booted in 0.046836s" Feb 13 20:50:07.914702 systemd[1]: Started containerd.service - containerd container runtime. Feb 13 20:50:07.922163 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Feb 13 20:50:07.934395 systemd[1]: Started getty@tty1.service - Getty on tty1. Feb 13 20:50:07.936646 systemd[1]: Started serial-getty@ttyAMA0.service - Serial Getty on ttyAMA0. Feb 13 20:50:07.938002 systemd[1]: Reached target getty.target - Login Prompts. Feb 13 20:50:08.039068 tar[1432]: linux-arm64/README.md Feb 13 20:50:08.051639 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. 
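The long CRI configuration dump above is the effective merged config: containerd's built-in defaults overlaid with whatever /etc/containerd/config.toml provides. The "failed to load cni during init" error is expected at this stage, since /etc/cni/net.d is still empty until a network plugin installs a config there. As a sketch only, the non-default values visible in the dump (overlayfs snapshotter, runc v2 shim with SystemdCgroup=true, pause:3.8 sandbox image) would correspond to a v2-format config.toml roughly like:

    version = 2
    [plugins."io.containerd.grpc.v1.cri"]
      sandbox_image = "registry.k8s.io/pause:3.8"
      [plugins."io.containerd.grpc.v1.cri".containerd]
        snapshotter = "overlayfs"
        default_runtime_name = "runc"
        [plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc]
          runtime_type = "io.containerd.runc.v2"
          [plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc.options]
            SystemdCgroup = true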
Feb 13 20:50:08.623237 systemd-networkd[1370]: eth0: Gained IPv6LL Feb 13 20:50:08.625807 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Feb 13 20:50:08.627553 systemd[1]: Reached target network-online.target - Network is Online. Feb 13 20:50:08.638300 systemd[1]: Starting coreos-metadata.service - QEMU metadata agent... Feb 13 20:50:08.640595 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Feb 13 20:50:08.642583 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Feb 13 20:50:08.656219 systemd[1]: coreos-metadata.service: Deactivated successfully. Feb 13 20:50:08.656469 systemd[1]: Finished coreos-metadata.service - QEMU metadata agent. Feb 13 20:50:08.658547 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Feb 13 20:50:08.665272 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Feb 13 20:50:09.157813 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Feb 13 20:50:09.159375 systemd[1]: Reached target multi-user.target - Multi-User System. Feb 13 20:50:09.161582 (kubelet)[1520]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Feb 13 20:50:09.161750 systemd[1]: Startup finished in 558ms (kernel) + 4.746s (initrd) + 3.329s (userspace) = 8.633s. Feb 13 20:50:09.556317 kubelet[1520]: E0213 20:50:09.556205 1520 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Feb 13 20:50:09.558613 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Feb 13 20:50:09.558756 systemd[1]: kubelet.service: Failed with result 'exit-code'. Feb 13 20:50:13.534878 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Feb 13 20:50:13.536005 systemd[1]: Started sshd@0-10.0.0.9:22-10.0.0.1:39646.service - OpenSSH per-connection server daemon (10.0.0.1:39646). Feb 13 20:50:13.594223 sshd[1533]: Accepted publickey for core from 10.0.0.1 port 39646 ssh2: RSA SHA256:cMrn/QNbOj3/muQN24sDQmRTV2Hbz1QD+PbuA/uipHg Feb 13 20:50:13.595887 sshd[1533]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 20:50:13.604253 systemd-logind[1418]: New session 1 of user core. Feb 13 20:50:13.605240 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Feb 13 20:50:13.628336 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Feb 13 20:50:13.638134 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Feb 13 20:50:13.640347 systemd[1]: Starting user@500.service - User Manager for UID 500... Feb 13 20:50:13.646683 (systemd)[1537]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Feb 13 20:50:13.717544 systemd[1537]: Queued start job for default target default.target. Feb 13 20:50:13.726114 systemd[1537]: Created slice app.slice - User Application Slice. Feb 13 20:50:13.726158 systemd[1537]: Reached target paths.target - Paths. Feb 13 20:50:13.726170 systemd[1537]: Reached target timers.target - Timers. Feb 13 20:50:13.727437 systemd[1537]: Starting dbus.socket - D-Bus User Message Bus Socket... 
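The kubelet failure above is the normal pre-bootstrap state: /var/lib/kubelet/config.yaml is only written by kubeadm init or kubeadm join, so until one of those runs the unit exits and systemd keeps rescheduling it (see the restart-counter entries later in this log). For orientation only, a hypothetical minimal KubeletConfiguration of the kind kubeadm generates (values shown are kubeadm's usual defaults, not read from this host):

    apiVersion: kubelet.config.k8s.io/v1beta1
    kind: KubeletConfiguration
    cgroupDriver: systemd          # matches the SystemdCgroup=true runc option above
    staticPodPath: /etc/kubernetes/manifests
    clusterDNS:
    - 10.96.0.10
    clusterDomain: cluster.local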
Feb 13 20:50:13.737777 systemd[1537]: Listening on dbus.socket - D-Bus User Message Bus Socket. Feb 13 20:50:13.737843 systemd[1537]: Reached target sockets.target - Sockets. Feb 13 20:50:13.737855 systemd[1537]: Reached target basic.target - Basic System. Feb 13 20:50:13.737890 systemd[1537]: Reached target default.target - Main User Target. Feb 13 20:50:13.737916 systemd[1537]: Startup finished in 86ms. Feb 13 20:50:13.738225 systemd[1]: Started user@500.service - User Manager for UID 500. Feb 13 20:50:13.739645 systemd[1]: Started session-1.scope - Session 1 of User core. Feb 13 20:50:13.806353 systemd[1]: Started sshd@1-10.0.0.9:22-10.0.0.1:39662.service - OpenSSH per-connection server daemon (10.0.0.1:39662). Feb 13 20:50:13.843567 sshd[1548]: Accepted publickey for core from 10.0.0.1 port 39662 ssh2: RSA SHA256:cMrn/QNbOj3/muQN24sDQmRTV2Hbz1QD+PbuA/uipHg Feb 13 20:50:13.844948 sshd[1548]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 20:50:13.849615 systemd-logind[1418]: New session 2 of user core. Feb 13 20:50:13.858239 systemd[1]: Started session-2.scope - Session 2 of User core. Feb 13 20:50:13.909844 sshd[1548]: pam_unix(sshd:session): session closed for user core Feb 13 20:50:13.926432 systemd[1]: sshd@1-10.0.0.9:22-10.0.0.1:39662.service: Deactivated successfully. Feb 13 20:50:13.927865 systemd[1]: session-2.scope: Deactivated successfully. Feb 13 20:50:13.929231 systemd-logind[1418]: Session 2 logged out. Waiting for processes to exit. Feb 13 20:50:13.930519 systemd[1]: Started sshd@2-10.0.0.9:22-10.0.0.1:39672.service - OpenSSH per-connection server daemon (10.0.0.1:39672). Feb 13 20:50:13.931475 systemd-logind[1418]: Removed session 2. Feb 13 20:50:13.967930 sshd[1555]: Accepted publickey for core from 10.0.0.1 port 39672 ssh2: RSA SHA256:cMrn/QNbOj3/muQN24sDQmRTV2Hbz1QD+PbuA/uipHg Feb 13 20:50:13.969212 sshd[1555]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 20:50:13.972926 systemd-logind[1418]: New session 3 of user core. Feb 13 20:50:13.987243 systemd[1]: Started session-3.scope - Session 3 of User core. Feb 13 20:50:14.035467 sshd[1555]: pam_unix(sshd:session): session closed for user core Feb 13 20:50:14.044581 systemd[1]: sshd@2-10.0.0.9:22-10.0.0.1:39672.service: Deactivated successfully. Feb 13 20:50:14.045993 systemd[1]: session-3.scope: Deactivated successfully. Feb 13 20:50:14.048500 systemd-logind[1418]: Session 3 logged out. Waiting for processes to exit. Feb 13 20:50:14.049738 systemd[1]: Started sshd@3-10.0.0.9:22-10.0.0.1:39678.service - OpenSSH per-connection server daemon (10.0.0.1:39678). Feb 13 20:50:14.050509 systemd-logind[1418]: Removed session 3. Feb 13 20:50:14.087535 sshd[1562]: Accepted publickey for core from 10.0.0.1 port 39678 ssh2: RSA SHA256:cMrn/QNbOj3/muQN24sDQmRTV2Hbz1QD+PbuA/uipHg Feb 13 20:50:14.088765 sshd[1562]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 20:50:14.092795 systemd-logind[1418]: New session 4 of user core. Feb 13 20:50:14.104249 systemd[1]: Started session-4.scope - Session 4 of User core. Feb 13 20:50:14.159262 sshd[1562]: pam_unix(sshd:session): session closed for user core Feb 13 20:50:14.173360 systemd[1]: sshd@3-10.0.0.9:22-10.0.0.1:39678.service: Deactivated successfully. Feb 13 20:50:14.174727 systemd[1]: session-4.scope: Deactivated successfully. Feb 13 20:50:14.176742 systemd-logind[1418]: Session 4 logged out. Waiting for processes to exit. 
Feb 13 20:50:14.177895 systemd[1]: Started sshd@4-10.0.0.9:22-10.0.0.1:39688.service - OpenSSH per-connection server daemon (10.0.0.1:39688). Feb 13 20:50:14.182356 systemd-logind[1418]: Removed session 4. Feb 13 20:50:14.217435 sshd[1569]: Accepted publickey for core from 10.0.0.1 port 39688 ssh2: RSA SHA256:cMrn/QNbOj3/muQN24sDQmRTV2Hbz1QD+PbuA/uipHg Feb 13 20:50:14.219074 sshd[1569]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 20:50:14.223624 systemd-logind[1418]: New session 5 of user core. Feb 13 20:50:14.233235 systemd[1]: Started session-5.scope - Session 5 of User core. Feb 13 20:50:14.301731 sudo[1572]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Feb 13 20:50:14.302009 sudo[1572]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Feb 13 20:50:14.664340 systemd[1]: Starting docker.service - Docker Application Container Engine... Feb 13 20:50:14.664407 (dockerd)[1591]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Feb 13 20:50:14.985889 dockerd[1591]: time="2025-02-13T20:50:14.985489295Z" level=info msg="Starting up" Feb 13 20:50:15.270309 dockerd[1591]: time="2025-02-13T20:50:15.270192975Z" level=info msg="Loading containers: start." Feb 13 20:50:15.353121 kernel: Initializing XFRM netlink socket Feb 13 20:50:15.415958 systemd-networkd[1370]: docker0: Link UP Feb 13 20:50:15.433463 dockerd[1591]: time="2025-02-13T20:50:15.433399015Z" level=info msg="Loading containers: done." Feb 13 20:50:15.449956 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck2752974659-merged.mount: Deactivated successfully. Feb 13 20:50:15.451299 dockerd[1591]: time="2025-02-13T20:50:15.451255575Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Feb 13 20:50:15.451374 dockerd[1591]: time="2025-02-13T20:50:15.451357575Z" level=info msg="Docker daemon" commit=061aa95809be396a6b5542618d8a34b02a21ff77 containerd-snapshotter=false storage-driver=overlay2 version=26.1.0 Feb 13 20:50:15.451483 dockerd[1591]: time="2025-02-13T20:50:15.451467495Z" level=info msg="Daemon has completed initialization" Feb 13 20:50:15.480130 dockerd[1591]: time="2025-02-13T20:50:15.479956535Z" level=info msg="API listen on /run/docker.sock" Feb 13 20:50:15.480269 systemd[1]: Started docker.service - Docker Application Container Engine. Feb 13 20:50:15.955185 containerd[1434]: time="2025-02-13T20:50:15.955123975Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.2\"" Feb 13 20:50:16.621252 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount380606126.mount: Deactivated successfully. 
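At this point dockerd is serving on /run/docker.sock and containerd on /run/containerd/containerd.sock, and the PullImage sequence that begins just above is presumably driven by the install.sh invoked via sudo earlier. Assuming those sockets, the same pulls could be reproduced or inspected by hand, for example:

    # hypothetical reproduction of the pulls recorded in this section
    kubeadm config images pull --kubernetes-version v1.32.2
    # or directly against containerd's CRI socket:
    crictl --runtime-endpoint unix:///run/containerd/containerd.sock pull registry.k8s.io/kube-apiserver:v1.32.2
    ctr -n k8s.io images ls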
Feb 13 20:50:18.506218 containerd[1434]: time="2025-02-13T20:50:18.506155215Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.32.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 20:50:18.506666 containerd[1434]: time="2025-02-13T20:50:18.506632975Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.32.2: active requests=0, bytes read=26218238" Feb 13 20:50:18.507549 containerd[1434]: time="2025-02-13T20:50:18.507519495Z" level=info msg="ImageCreate event name:\"sha256:6417e1437b6d9a789e1ca789695a574e1df00a632bdbfbcae9695c9a7d500e32\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 20:50:18.510473 containerd[1434]: time="2025-02-13T20:50:18.510427095Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:c47449f3e751588ea0cb74e325e0f83db335a415f4f4c7fb147375dd6c84757f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 20:50:18.511666 containerd[1434]: time="2025-02-13T20:50:18.511637615Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.32.2\" with image id \"sha256:6417e1437b6d9a789e1ca789695a574e1df00a632bdbfbcae9695c9a7d500e32\", repo tag \"registry.k8s.io/kube-apiserver:v1.32.2\", repo digest \"registry.k8s.io/kube-apiserver@sha256:c47449f3e751588ea0cb74e325e0f83db335a415f4f4c7fb147375dd6c84757f\", size \"26215036\" in 2.55647448s" Feb 13 20:50:18.511703 containerd[1434]: time="2025-02-13T20:50:18.511668615Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.2\" returns image reference \"sha256:6417e1437b6d9a789e1ca789695a574e1df00a632bdbfbcae9695c9a7d500e32\"" Feb 13 20:50:18.512626 containerd[1434]: time="2025-02-13T20:50:18.512587295Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.2\"" Feb 13 20:50:19.793921 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Feb 13 20:50:19.808342 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Feb 13 20:50:19.902969 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Feb 13 20:50:19.907317 (kubelet)[1801]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Feb 13 20:50:20.002076 kubelet[1801]: E0213 20:50:20.002022 1801 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Feb 13 20:50:20.005462 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Feb 13 20:50:20.005607 systemd[1]: kubelet.service: Failed with result 'exit-code'. 
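Note the spacing of the kubelet failures: exit at 20:50:09, restart at 20:50:19, restart again at 20:50:30, i.e. the unit restarts roughly every ten seconds. A kubeadm-style drop-in with exactly that behaviour would look like the following (path and values are the conventional ones, not read from this host):

    # hypothetical /etc/systemd/system/kubelet.service.d/10-kubeadm.conf
    [Service]
    EnvironmentFile=-/var/lib/kubelet/kubeadm-flags.env
    Restart=always
    RestartSec=10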
Feb 13 20:50:20.283350 containerd[1434]: time="2025-02-13T20:50:20.283226335Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.32.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 20:50:20.283881 containerd[1434]: time="2025-02-13T20:50:20.283846215Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.32.2: active requests=0, bytes read=22528147" Feb 13 20:50:20.285401 containerd[1434]: time="2025-02-13T20:50:20.284762575Z" level=info msg="ImageCreate event name:\"sha256:3c9285acfd2ff7915bb451cc40ac060366ac519f3fef00c455f5aca0e0346c4d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 20:50:20.287823 containerd[1434]: time="2025-02-13T20:50:20.287776415Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:399aa50f4d1361c59dc458e634506d02de32613d03a9a614a21058741162ef90\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 20:50:20.289969 containerd[1434]: time="2025-02-13T20:50:20.289929535Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.32.2\" with image id \"sha256:3c9285acfd2ff7915bb451cc40ac060366ac519f3fef00c455f5aca0e0346c4d\", repo tag \"registry.k8s.io/kube-controller-manager:v1.32.2\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:399aa50f4d1361c59dc458e634506d02de32613d03a9a614a21058741162ef90\", size \"23968941\" in 1.77730744s" Feb 13 20:50:20.290189 containerd[1434]: time="2025-02-13T20:50:20.290057455Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.2\" returns image reference \"sha256:3c9285acfd2ff7915bb451cc40ac060366ac519f3fef00c455f5aca0e0346c4d\"" Feb 13 20:50:20.290700 containerd[1434]: time="2025-02-13T20:50:20.290636175Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.2\"" Feb 13 20:50:21.862359 containerd[1434]: time="2025-02-13T20:50:21.862304895Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.32.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 20:50:21.863212 containerd[1434]: time="2025-02-13T20:50:21.863186575Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.32.2: active requests=0, bytes read=17480802" Feb 13 20:50:21.864094 containerd[1434]: time="2025-02-13T20:50:21.864047455Z" level=info msg="ImageCreate event name:\"sha256:82dfa03f692fb5d84f66c17d6ee9126b081182152b25d28ea456d89b7d5d8911\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 20:50:21.867325 containerd[1434]: time="2025-02-13T20:50:21.867283055Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:45710d74cfd5aa10a001d0cf81747b77c28617444ffee0503d12f1dcd7450f76\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 20:50:21.868371 containerd[1434]: time="2025-02-13T20:50:21.868289255Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.32.2\" with image id \"sha256:82dfa03f692fb5d84f66c17d6ee9126b081182152b25d28ea456d89b7d5d8911\", repo tag \"registry.k8s.io/kube-scheduler:v1.32.2\", repo digest \"registry.k8s.io/kube-scheduler@sha256:45710d74cfd5aa10a001d0cf81747b77c28617444ffee0503d12f1dcd7450f76\", size \"18921614\" in 1.57761988s" Feb 13 20:50:21.868371 containerd[1434]: time="2025-02-13T20:50:21.868325175Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.2\" returns image reference \"sha256:82dfa03f692fb5d84f66c17d6ee9126b081182152b25d28ea456d89b7d5d8911\"" Feb 13 20:50:21.868789 
containerd[1434]: time="2025-02-13T20:50:21.868772735Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.2\"" Feb 13 20:50:23.142021 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount268921257.mount: Deactivated successfully. Feb 13 20:50:23.410008 containerd[1434]: time="2025-02-13T20:50:23.409886295Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.32.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 20:50:23.410551 containerd[1434]: time="2025-02-13T20:50:23.410506975Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.32.2: active requests=0, bytes read=27363384" Feb 13 20:50:23.411193 containerd[1434]: time="2025-02-13T20:50:23.411161855Z" level=info msg="ImageCreate event name:\"sha256:e5aac5df76d9b8dc899ab8c4db25a7648e7fb25cafe7a155066247883c78f062\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 20:50:23.413002 containerd[1434]: time="2025-02-13T20:50:23.412964815Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:83c025f0faa6799fab6645102a98138e39a9a7db2be3bc792c79d72659b1805d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 20:50:23.413749 containerd[1434]: time="2025-02-13T20:50:23.413696415Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.32.2\" with image id \"sha256:e5aac5df76d9b8dc899ab8c4db25a7648e7fb25cafe7a155066247883c78f062\", repo tag \"registry.k8s.io/kube-proxy:v1.32.2\", repo digest \"registry.k8s.io/kube-proxy@sha256:83c025f0faa6799fab6645102a98138e39a9a7db2be3bc792c79d72659b1805d\", size \"27362401\" in 1.5448968s" Feb 13 20:50:23.413787 containerd[1434]: time="2025-02-13T20:50:23.413752175Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.2\" returns image reference \"sha256:e5aac5df76d9b8dc899ab8c4db25a7648e7fb25cafe7a155066247883c78f062\"" Feb 13 20:50:23.414241 containerd[1434]: time="2025-02-13T20:50:23.414213215Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\"" Feb 13 20:50:24.160014 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount402413317.mount: Deactivated successfully. 
Feb 13 20:50:25.237340 containerd[1434]: time="2025-02-13T20:50:25.237267815Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 20:50:25.238531 containerd[1434]: time="2025-02-13T20:50:25.238492655Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.3: active requests=0, bytes read=16951624" Feb 13 20:50:25.239288 containerd[1434]: time="2025-02-13T20:50:25.239256135Z" level=info msg="ImageCreate event name:\"sha256:2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 20:50:25.242864 containerd[1434]: time="2025-02-13T20:50:25.242816175Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 20:50:25.245913 containerd[1434]: time="2025-02-13T20:50:25.244402975Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.3\" with image id \"sha256:2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.3\", repo digest \"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\", size \"16948420\" in 1.83015428s" Feb 13 20:50:25.245913 containerd[1434]: time="2025-02-13T20:50:25.244448295Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\" returns image reference \"sha256:2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4\"" Feb 13 20:50:25.245913 containerd[1434]: time="2025-02-13T20:50:25.245614335Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" Feb 13 20:50:25.734253 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2534359124.mount: Deactivated successfully. 
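Each completed pull above is logged with both a repo tag and a repo digest. Assuming the same containerd socket, what was actually stored can be confirmed after the fact, e.g.:

    crictl images --digests
    ctr -n k8s.io images ls | grep registry.k8s.io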
Feb 13 20:50:25.738393 containerd[1434]: time="2025-02-13T20:50:25.738333895Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 20:50:25.738850 containerd[1434]: time="2025-02-13T20:50:25.738809895Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=268705" Feb 13 20:50:25.739723 containerd[1434]: time="2025-02-13T20:50:25.739692095Z" level=info msg="ImageCreate event name:\"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 20:50:25.741887 containerd[1434]: time="2025-02-13T20:50:25.741852935Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 20:50:25.743028 containerd[1434]: time="2025-02-13T20:50:25.742988935Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"267933\" in 497.3466ms" Feb 13 20:50:25.743072 containerd[1434]: time="2025-02-13T20:50:25.743029855Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\"" Feb 13 20:50:25.743523 containerd[1434]: time="2025-02-13T20:50:25.743491895Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\"" Feb 13 20:50:26.353778 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2576173255.mount: Deactivated successfully. Feb 13 20:50:29.441239 containerd[1434]: time="2025-02-13T20:50:29.441189855Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.16-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 20:50:29.444335 containerd[1434]: time="2025-02-13T20:50:29.444284735Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.16-0: active requests=0, bytes read=67812431" Feb 13 20:50:29.445283 containerd[1434]: time="2025-02-13T20:50:29.445250295Z" level=info msg="ImageCreate event name:\"sha256:7fc9d4aa817aa6a3e549f3cd49d1f7b496407be979fc36dd5f356d59ce8c3a82\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 20:50:29.448460 containerd[1434]: time="2025-02-13T20:50:29.448428655Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 20:50:29.449768 containerd[1434]: time="2025-02-13T20:50:29.449735335Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.16-0\" with image id \"sha256:7fc9d4aa817aa6a3e549f3cd49d1f7b496407be979fc36dd5f356d59ce8c3a82\", repo tag \"registry.k8s.io/etcd:3.5.16-0\", repo digest \"registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5\", size \"67941650\" in 3.70612612s" Feb 13 20:50:29.449819 containerd[1434]: time="2025-02-13T20:50:29.449768535Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\" returns image reference \"sha256:7fc9d4aa817aa6a3e549f3cd49d1f7b496407be979fc36dd5f356d59ce8c3a82\"" Feb 13 20:50:30.044616 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. 
Feb 13 20:50:30.055311 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Feb 13 20:50:30.144746 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Feb 13 20:50:30.147944 (kubelet)[1966]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Feb 13 20:50:30.179502 kubelet[1966]: E0213 20:50:30.179457 1966 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Feb 13 20:50:30.181469 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Feb 13 20:50:30.181581 systemd[1]: kubelet.service: Failed with result 'exit-code'. Feb 13 20:50:34.988051 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Feb 13 20:50:35.002322 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Feb 13 20:50:35.025942 systemd[1]: Reloading requested from client PID 1981 ('systemctl') (unit session-5.scope)... Feb 13 20:50:35.025955 systemd[1]: Reloading... Feb 13 20:50:35.088116 zram_generator::config[2020]: No configuration found. Feb 13 20:50:35.201576 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Feb 13 20:50:35.252913 systemd[1]: Reloading finished in 226 ms. Feb 13 20:50:35.284063 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Feb 13 20:50:35.284146 systemd[1]: kubelet.service: Failed with result 'signal'. Feb 13 20:50:35.284376 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Feb 13 20:50:35.287401 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Feb 13 20:50:35.386980 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Feb 13 20:50:35.392555 (kubelet)[2066]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Feb 13 20:50:35.429016 kubelet[2066]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Feb 13 20:50:35.429016 kubelet[2066]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Feb 13 20:50:35.429016 kubelet[2066]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
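The deprecation warnings above mean the flags still work but belong in the config file; on kubeadm-managed hosts they usually arrive through the environment file behind the unset KUBELET_KUBEADM_ARGS variable noted in the log. A hypothetical /var/lib/kubelet/kubeadm-flags.env carrying the flags warned about here (the volume plugin path matches the Flexvolume directory kubelet recreates further down):

    KUBELET_KUBEADM_ARGS="--container-runtime-endpoint=unix:///run/containerd/containerd.sock --pod-infra-container-image=registry.k8s.io/pause:3.8 --volume-plugin-dir=/opt/libexec/kubernetes/kubelet-plugins/volume/exec/"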
Feb 13 20:50:35.429443 kubelet[2066]: I0213 20:50:35.429070 2066 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Feb 13 20:50:36.059729 kubelet[2066]: I0213 20:50:36.059693 2066 server.go:520] "Kubelet version" kubeletVersion="v1.32.0" Feb 13 20:50:36.059729 kubelet[2066]: I0213 20:50:36.059722 2066 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Feb 13 20:50:36.059977 kubelet[2066]: I0213 20:50:36.059960 2066 server.go:954] "Client rotation is on, will bootstrap in background" Feb 13 20:50:36.095054 kubelet[2066]: E0213 20:50:36.095021 2066 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://10.0.0.9:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.9:6443: connect: connection refused" logger="UnhandledError" Feb 13 20:50:36.097849 kubelet[2066]: I0213 20:50:36.097795 2066 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Feb 13 20:50:36.107209 kubelet[2066]: E0213 20:50:36.107177 2066 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Feb 13 20:50:36.107209 kubelet[2066]: I0213 20:50:36.107202 2066 server.go:1421] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Feb 13 20:50:36.109685 kubelet[2066]: I0213 20:50:36.109665 2066 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Feb 13 20:50:36.110867 kubelet[2066]: I0213 20:50:36.110819 2066 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Feb 13 20:50:36.111025 kubelet[2066]: I0213 20:50:36.110864 2066 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Feb 13 20:50:36.111115 kubelet[2066]: I0213 20:50:36.111102 2066 topology_manager.go:138] "Creating topology manager with none policy" Feb 13 20:50:36.111115 kubelet[2066]: I0213 20:50:36.111111 2066 container_manager_linux.go:304] "Creating device plugin manager" Feb 13 20:50:36.111325 kubelet[2066]: I0213 20:50:36.111289 2066 state_mem.go:36] "Initialized new in-memory state store" Feb 13 20:50:36.113711 kubelet[2066]: I0213 20:50:36.113667 2066 kubelet.go:446] "Attempting to sync node with API server" Feb 13 20:50:36.113711 kubelet[2066]: I0213 20:50:36.113688 2066 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests" Feb 13 20:50:36.113711 kubelet[2066]: I0213 20:50:36.113710 2066 kubelet.go:352] "Adding apiserver pod source" Feb 13 20:50:36.113711 kubelet[2066]: I0213 20:50:36.113719 2066 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Feb 13 20:50:36.116562 kubelet[2066]: W0213 20:50:36.116500 2066 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.9:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.0.0.9:6443: connect: connection refused Feb 13 20:50:36.116562 kubelet[2066]: E0213 20:50:36.116558 2066 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.0.0.9:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.9:6443: connect: connection refused" logger="UnhandledError" Feb 13 20:50:36.118280 kubelet[2066]: W0213 20:50:36.117393 2066 
reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.9:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.9:6443: connect: connection refused Feb 13 20:50:36.118280 kubelet[2066]: E0213 20:50:36.117443 2066 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.0.0.9:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.9:6443: connect: connection refused" logger="UnhandledError" Feb 13 20:50:36.120672 kubelet[2066]: I0213 20:50:36.120643 2066 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Feb 13 20:50:36.122132 kubelet[2066]: I0213 20:50:36.122110 2066 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Feb 13 20:50:36.122381 kubelet[2066]: W0213 20:50:36.122366 2066 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Feb 13 20:50:36.123235 kubelet[2066]: I0213 20:50:36.123213 2066 watchdog_linux.go:99] "Systemd watchdog is not enabled" Feb 13 20:50:36.123356 kubelet[2066]: I0213 20:50:36.123344 2066 server.go:1287] "Started kubelet" Feb 13 20:50:36.124549 kubelet[2066]: I0213 20:50:36.124514 2066 server.go:169] "Starting to listen" address="0.0.0.0" port=10250 Feb 13 20:50:36.125952 kubelet[2066]: I0213 20:50:36.125926 2066 server.go:490] "Adding debug handlers to kubelet server" Feb 13 20:50:36.127053 kubelet[2066]: I0213 20:50:36.126996 2066 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Feb 13 20:50:36.127247 kubelet[2066]: I0213 20:50:36.127230 2066 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Feb 13 20:50:36.127440 kubelet[2066]: I0213 20:50:36.124915 2066 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Feb 13 20:50:36.127497 kubelet[2066]: I0213 20:50:36.124652 2066 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Feb 13 20:50:36.128584 kubelet[2066]: I0213 20:50:36.128555 2066 volume_manager.go:297] "Starting Kubelet Volume Manager" Feb 13 20:50:36.128854 kubelet[2066]: I0213 20:50:36.128826 2066 desired_state_of_world_populator.go:149] "Desired state populator starts to run" Feb 13 20:50:36.128907 kubelet[2066]: I0213 20:50:36.128887 2066 reconciler.go:26] "Reconciler: start to sync state" Feb 13 20:50:36.129013 kubelet[2066]: I0213 20:50:36.128984 2066 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Feb 13 20:50:36.129184 kubelet[2066]: W0213 20:50:36.129149 2066 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.9:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.9:6443: connect: connection refused Feb 13 20:50:36.129231 kubelet[2066]: E0213 20:50:36.129193 2066 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get 
\"https://10.0.0.9:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.9:6443: connect: connection refused" logger="UnhandledError" Feb 13 20:50:36.129898 kubelet[2066]: E0213 20:50:36.129874 2066 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"localhost\" not found" Feb 13 20:50:36.130052 kubelet[2066]: E0213 20:50:36.130016 2066 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.9:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.9:6443: connect: connection refused" interval="200ms" Feb 13 20:50:36.130163 kubelet[2066]: I0213 20:50:36.130142 2066 factory.go:221] Registration of the containerd container factory successfully Feb 13 20:50:36.130163 kubelet[2066]: I0213 20:50:36.130158 2066 factory.go:221] Registration of the systemd container factory successfully Feb 13 20:50:36.130163 kubelet[2066]: E0213 20:50:36.130158 2066 kubelet.go:1561] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Feb 13 20:50:36.130682 kubelet[2066]: E0213 20:50:36.130431 2066 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.9:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.9:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.1823dfb1de15d0ff default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-02-13 20:50:36.123320575 +0000 UTC m=+0.727566321,LastTimestamp:2025-02-13 20:50:36.123320575 +0000 UTC m=+0.727566321,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Feb 13 20:50:36.141256 kubelet[2066]: I0213 20:50:36.141184 2066 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Feb 13 20:50:36.142445 kubelet[2066]: I0213 20:50:36.142420 2066 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Feb 13 20:50:36.142445 kubelet[2066]: I0213 20:50:36.142443 2066 status_manager.go:227] "Starting to sync pod status with apiserver" Feb 13 20:50:36.142699 kubelet[2066]: I0213 20:50:36.142459 2066 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." 
Feb 13 20:50:36.142699 kubelet[2066]: I0213 20:50:36.142466 2066 kubelet.go:2388] "Starting kubelet main sync loop" Feb 13 20:50:36.142699 kubelet[2066]: E0213 20:50:36.142510 2066 kubelet.go:2412] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Feb 13 20:50:36.143451 kubelet[2066]: W0213 20:50:36.143393 2066 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.9:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.9:6443: connect: connection refused Feb 13 20:50:36.143539 kubelet[2066]: E0213 20:50:36.143469 2066 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.0.0.9:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.9:6443: connect: connection refused" logger="UnhandledError" Feb 13 20:50:36.144329 kubelet[2066]: I0213 20:50:36.144082 2066 cpu_manager.go:221] "Starting CPU manager" policy="none" Feb 13 20:50:36.144329 kubelet[2066]: I0213 20:50:36.144237 2066 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Feb 13 20:50:36.144329 kubelet[2066]: I0213 20:50:36.144259 2066 state_mem.go:36] "Initialized new in-memory state store" Feb 13 20:50:36.214813 kubelet[2066]: I0213 20:50:36.214768 2066 policy_none.go:49] "None policy: Start" Feb 13 20:50:36.214813 kubelet[2066]: I0213 20:50:36.214801 2066 memory_manager.go:186] "Starting memorymanager" policy="None" Feb 13 20:50:36.214813 kubelet[2066]: I0213 20:50:36.214813 2066 state_mem.go:35] "Initializing new in-memory state store" Feb 13 20:50:36.219881 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Feb 13 20:50:36.230025 kubelet[2066]: E0213 20:50:36.229972 2066 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"localhost\" not found" Feb 13 20:50:36.233726 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Feb 13 20:50:36.236316 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. Feb 13 20:50:36.243408 kubelet[2066]: E0213 20:50:36.243030 2066 kubelet.go:2412] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Feb 13 20:50:36.247967 kubelet[2066]: I0213 20:50:36.247923 2066 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Feb 13 20:50:36.248314 kubelet[2066]: I0213 20:50:36.248298 2066 eviction_manager.go:189] "Eviction manager: starting control loop" Feb 13 20:50:36.248381 kubelet[2066]: I0213 20:50:36.248316 2066 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Feb 13 20:50:36.248609 kubelet[2066]: I0213 20:50:36.248514 2066 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Feb 13 20:50:36.249215 kubelet[2066]: E0213 20:50:36.249186 2066 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." 
err="no imagefs label for configured runtime" Feb 13 20:50:36.249262 kubelet[2066]: E0213 20:50:36.249231 2066 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Feb 13 20:50:36.331174 kubelet[2066]: E0213 20:50:36.331060 2066 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.9:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.9:6443: connect: connection refused" interval="400ms" Feb 13 20:50:36.350114 kubelet[2066]: I0213 20:50:36.350065 2066 kubelet_node_status.go:76] "Attempting to register node" node="localhost" Feb 13 20:50:36.350503 kubelet[2066]: E0213 20:50:36.350465 2066 kubelet_node_status.go:108] "Unable to register node with API server" err="Post \"https://10.0.0.9:6443/api/v1/nodes\": dial tcp 10.0.0.9:6443: connect: connection refused" node="localhost" Feb 13 20:50:36.454314 systemd[1]: Created slice kubepods-burstable-pod85d162220784a5775b8df2b5cc37dc7d.slice - libcontainer container kubepods-burstable-pod85d162220784a5775b8df2b5cc37dc7d.slice. Feb 13 20:50:36.478296 kubelet[2066]: E0213 20:50:36.478251 2066 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Feb 13 20:50:36.481833 systemd[1]: Created slice kubepods-burstable-podc72911152bbceda2f57fd8d59261e015.slice - libcontainer container kubepods-burstable-podc72911152bbceda2f57fd8d59261e015.slice. Feb 13 20:50:36.492063 kubelet[2066]: E0213 20:50:36.492037 2066 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Feb 13 20:50:36.494128 systemd[1]: Created slice kubepods-burstable-pod95ef9ac46cd4dbaadc63cb713310ae59.slice - libcontainer container kubepods-burstable-pod95ef9ac46cd4dbaadc63cb713310ae59.slice. 
Feb 13 20:50:36.495412 kubelet[2066]: E0213 20:50:36.495392 2066 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Feb 13 20:50:36.531910 kubelet[2066]: I0213 20:50:36.531681 2066 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/85d162220784a5775b8df2b5cc37dc7d-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"85d162220784a5775b8df2b5cc37dc7d\") " pod="kube-system/kube-apiserver-localhost" Feb 13 20:50:36.531910 kubelet[2066]: I0213 20:50:36.531717 2066 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/c72911152bbceda2f57fd8d59261e015-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"c72911152bbceda2f57fd8d59261e015\") " pod="kube-system/kube-controller-manager-localhost" Feb 13 20:50:36.531910 kubelet[2066]: I0213 20:50:36.531737 2066 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/c72911152bbceda2f57fd8d59261e015-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"c72911152bbceda2f57fd8d59261e015\") " pod="kube-system/kube-controller-manager-localhost" Feb 13 20:50:36.531910 kubelet[2066]: I0213 20:50:36.531755 2066 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/c72911152bbceda2f57fd8d59261e015-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"c72911152bbceda2f57fd8d59261e015\") " pod="kube-system/kube-controller-manager-localhost" Feb 13 20:50:36.531910 kubelet[2066]: I0213 20:50:36.531773 2066 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/95ef9ac46cd4dbaadc63cb713310ae59-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"95ef9ac46cd4dbaadc63cb713310ae59\") " pod="kube-system/kube-scheduler-localhost" Feb 13 20:50:36.532130 kubelet[2066]: I0213 20:50:36.531787 2066 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/85d162220784a5775b8df2b5cc37dc7d-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"85d162220784a5775b8df2b5cc37dc7d\") " pod="kube-system/kube-apiserver-localhost" Feb 13 20:50:36.532130 kubelet[2066]: I0213 20:50:36.531802 2066 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/85d162220784a5775b8df2b5cc37dc7d-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"85d162220784a5775b8df2b5cc37dc7d\") " pod="kube-system/kube-apiserver-localhost" Feb 13 20:50:36.532130 kubelet[2066]: I0213 20:50:36.531815 2066 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/c72911152bbceda2f57fd8d59261e015-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"c72911152bbceda2f57fd8d59261e015\") " pod="kube-system/kube-controller-manager-localhost" Feb 13 20:50:36.532130 kubelet[2066]: I0213 20:50:36.531830 2066 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/c72911152bbceda2f57fd8d59261e015-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"c72911152bbceda2f57fd8d59261e015\") " pod="kube-system/kube-controller-manager-localhost" Feb 13 20:50:36.551567 kubelet[2066]: I0213 20:50:36.551507 2066 kubelet_node_status.go:76] "Attempting to register node" node="localhost" Feb 13 20:50:36.551830 kubelet[2066]: E0213 20:50:36.551799 2066 kubelet_node_status.go:108] "Unable to register node with API server" err="Post \"https://10.0.0.9:6443/api/v1/nodes\": dial tcp 10.0.0.9:6443: connect: connection refused" node="localhost" Feb 13 20:50:36.732540 kubelet[2066]: E0213 20:50:36.732417 2066 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.9:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.9:6443: connect: connection refused" interval="800ms" Feb 13 20:50:36.779744 kubelet[2066]: E0213 20:50:36.779718 2066 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 20:50:36.780441 containerd[1434]: time="2025-02-13T20:50:36.780396495Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:85d162220784a5775b8df2b5cc37dc7d,Namespace:kube-system,Attempt:0,}" Feb 13 20:50:36.793189 kubelet[2066]: E0213 20:50:36.793154 2066 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 20:50:36.793618 containerd[1434]: time="2025-02-13T20:50:36.793580455Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:c72911152bbceda2f57fd8d59261e015,Namespace:kube-system,Attempt:0,}" Feb 13 20:50:36.796271 kubelet[2066]: E0213 20:50:36.796181 2066 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 20:50:36.796927 containerd[1434]: time="2025-02-13T20:50:36.796570215Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:95ef9ac46cd4dbaadc63cb713310ae59,Namespace:kube-system,Attempt:0,}" Feb 13 20:50:36.953928 kubelet[2066]: I0213 20:50:36.953892 2066 kubelet_node_status.go:76] "Attempting to register node" node="localhost" Feb 13 20:50:36.954229 kubelet[2066]: E0213 20:50:36.954193 2066 kubelet_node_status.go:108] "Unable to register node with API server" err="Post \"https://10.0.0.9:6443/api/v1/nodes\": dial tcp 10.0.0.9:6443: connect: connection refused" node="localhost" Feb 13 20:50:37.225908 kubelet[2066]: W0213 20:50:37.225845 2066 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.9:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.9:6443: connect: connection refused Feb 13 20:50:37.226004 kubelet[2066]: E0213 20:50:37.225914 2066 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.0.0.9:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.9:6443: connect: connection refused" logger="UnhandledError" Feb 13 20:50:37.227647 systemd[1]: 
var-lib-containerd-tmpmounts-containerd\x2dmount1963749305.mount: Deactivated successfully. Feb 13 20:50:37.229652 kubelet[2066]: W0213 20:50:37.229565 2066 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.9:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.0.0.9:6443: connect: connection refused Feb 13 20:50:37.229652 kubelet[2066]: E0213 20:50:37.229616 2066 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.0.0.9:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.9:6443: connect: connection refused" logger="UnhandledError" Feb 13 20:50:37.232989 containerd[1434]: time="2025-02-13T20:50:37.232934415Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Feb 13 20:50:37.234216 containerd[1434]: time="2025-02-13T20:50:37.234181695Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Feb 13 20:50:37.234633 containerd[1434]: time="2025-02-13T20:50:37.234599135Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Feb 13 20:50:37.235462 containerd[1434]: time="2025-02-13T20:50:37.235425695Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=269175" Feb 13 20:50:37.236818 containerd[1434]: time="2025-02-13T20:50:37.236782855Z" level=info msg="ImageCreate event name:\"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Feb 13 20:50:37.237940 containerd[1434]: time="2025-02-13T20:50:37.237906735Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Feb 13 20:50:37.238033 containerd[1434]: time="2025-02-13T20:50:37.237999575Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Feb 13 20:50:37.241597 containerd[1434]: time="2025-02-13T20:50:37.241151695Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Feb 13 20:50:37.241597 containerd[1434]: time="2025-02-13T20:50:37.241176055Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 447.51928ms" Feb 13 20:50:37.244685 containerd[1434]: time="2025-02-13T20:50:37.244648695Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest 
\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 464.17168ms" Feb 13 20:50:37.245845 containerd[1434]: time="2025-02-13T20:50:37.245802375Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 449.05992ms" Feb 13 20:50:37.383978 containerd[1434]: time="2025-02-13T20:50:37.383780535Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 20:50:37.383978 containerd[1434]: time="2025-02-13T20:50:37.383827015Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 20:50:37.383978 containerd[1434]: time="2025-02-13T20:50:37.383837895Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 20:50:37.383978 containerd[1434]: time="2025-02-13T20:50:37.383907255Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 20:50:37.384212 containerd[1434]: time="2025-02-13T20:50:37.384040335Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 20:50:37.385868 containerd[1434]: time="2025-02-13T20:50:37.385693375Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 20:50:37.385868 containerd[1434]: time="2025-02-13T20:50:37.385717415Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 20:50:37.385868 containerd[1434]: time="2025-02-13T20:50:37.385801855Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 20:50:37.385868 containerd[1434]: time="2025-02-13T20:50:37.383733935Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 20:50:37.385868 containerd[1434]: time="2025-02-13T20:50:37.385452015Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 20:50:37.385868 containerd[1434]: time="2025-02-13T20:50:37.385486775Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 20:50:37.385868 containerd[1434]: time="2025-02-13T20:50:37.385614495Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 20:50:37.405267 systemd[1]: Started cri-containerd-0e57088ea398100b77ab47a27d5b271277eb061b158f6d445b9a7baef01fc5ed.scope - libcontainer container 0e57088ea398100b77ab47a27d5b271277eb061b158f6d445b9a7baef01fc5ed. Feb 13 20:50:37.409397 systemd[1]: Started cri-containerd-517a1e22be0d0b33a802b56af3d3bdabcdb283d4f4da4ff193ff778e721e127c.scope - libcontainer container 517a1e22be0d0b33a802b56af3d3bdabcdb283d4f4da4ff193ff778e721e127c. 
Feb 13 20:50:37.410393 systemd[1]: Started cri-containerd-68d6d2de6e5e9848dd62bcdaae49227896b34567ae3b768f0b5ec19d1b92bb2e.scope - libcontainer container 68d6d2de6e5e9848dd62bcdaae49227896b34567ae3b768f0b5ec19d1b92bb2e. Feb 13 20:50:37.435954 containerd[1434]: time="2025-02-13T20:50:37.435806775Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:85d162220784a5775b8df2b5cc37dc7d,Namespace:kube-system,Attempt:0,} returns sandbox id \"0e57088ea398100b77ab47a27d5b271277eb061b158f6d445b9a7baef01fc5ed\"" Feb 13 20:50:37.436800 kubelet[2066]: E0213 20:50:37.436721 2066 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 20:50:37.441153 containerd[1434]: time="2025-02-13T20:50:37.440116655Z" level=info msg="CreateContainer within sandbox \"0e57088ea398100b77ab47a27d5b271277eb061b158f6d445b9a7baef01fc5ed\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Feb 13 20:50:37.445508 containerd[1434]: time="2025-02-13T20:50:37.445475175Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:95ef9ac46cd4dbaadc63cb713310ae59,Namespace:kube-system,Attempt:0,} returns sandbox id \"68d6d2de6e5e9848dd62bcdaae49227896b34567ae3b768f0b5ec19d1b92bb2e\"" Feb 13 20:50:37.446005 containerd[1434]: time="2025-02-13T20:50:37.445925935Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:c72911152bbceda2f57fd8d59261e015,Namespace:kube-system,Attempt:0,} returns sandbox id \"517a1e22be0d0b33a802b56af3d3bdabcdb283d4f4da4ff193ff778e721e127c\"" Feb 13 20:50:37.446386 kubelet[2066]: E0213 20:50:37.446239 2066 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 20:50:37.446983 kubelet[2066]: E0213 20:50:37.446749 2066 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 20:50:37.447865 containerd[1434]: time="2025-02-13T20:50:37.447815375Z" level=info msg="CreateContainer within sandbox \"68d6d2de6e5e9848dd62bcdaae49227896b34567ae3b768f0b5ec19d1b92bb2e\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Feb 13 20:50:37.448447 containerd[1434]: time="2025-02-13T20:50:37.448420175Z" level=info msg="CreateContainer within sandbox \"517a1e22be0d0b33a802b56af3d3bdabcdb283d4f4da4ff193ff778e721e127c\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Feb 13 20:50:37.455906 containerd[1434]: time="2025-02-13T20:50:37.455849335Z" level=info msg="CreateContainer within sandbox \"0e57088ea398100b77ab47a27d5b271277eb061b158f6d445b9a7baef01fc5ed\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"605bf9f6a538c8238e24d0a7e2f53937042d7cebd641528ee37886692c6996d1\"" Feb 13 20:50:37.456845 containerd[1434]: time="2025-02-13T20:50:37.456812895Z" level=info msg="StartContainer for \"605bf9f6a538c8238e24d0a7e2f53937042d7cebd641528ee37886692c6996d1\"" Feb 13 20:50:37.463350 containerd[1434]: time="2025-02-13T20:50:37.463233015Z" level=info msg="CreateContainer within sandbox \"68d6d2de6e5e9848dd62bcdaae49227896b34567ae3b768f0b5ec19d1b92bb2e\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id 
\"df0a4fba498d52a2ba9cb43dbf3634392b2b764638bbea6efd1fd564efee62c6\"" Feb 13 20:50:37.463982 containerd[1434]: time="2025-02-13T20:50:37.463956055Z" level=info msg="StartContainer for \"df0a4fba498d52a2ba9cb43dbf3634392b2b764638bbea6efd1fd564efee62c6\"" Feb 13 20:50:37.468011 containerd[1434]: time="2025-02-13T20:50:37.467229375Z" level=info msg="CreateContainer within sandbox \"517a1e22be0d0b33a802b56af3d3bdabcdb283d4f4da4ff193ff778e721e127c\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"fa1e4cb8d4eccd1655c4d6bb0355b2ffab18b3057778c558c6a414700b47ecb8\"" Feb 13 20:50:37.468011 containerd[1434]: time="2025-02-13T20:50:37.467795255Z" level=info msg="StartContainer for \"fa1e4cb8d4eccd1655c4d6bb0355b2ffab18b3057778c558c6a414700b47ecb8\"" Feb 13 20:50:37.484255 systemd[1]: Started cri-containerd-605bf9f6a538c8238e24d0a7e2f53937042d7cebd641528ee37886692c6996d1.scope - libcontainer container 605bf9f6a538c8238e24d0a7e2f53937042d7cebd641528ee37886692c6996d1. Feb 13 20:50:37.487891 systemd[1]: Started cri-containerd-df0a4fba498d52a2ba9cb43dbf3634392b2b764638bbea6efd1fd564efee62c6.scope - libcontainer container df0a4fba498d52a2ba9cb43dbf3634392b2b764638bbea6efd1fd564efee62c6. Feb 13 20:50:37.494532 systemd[1]: Started cri-containerd-fa1e4cb8d4eccd1655c4d6bb0355b2ffab18b3057778c558c6a414700b47ecb8.scope - libcontainer container fa1e4cb8d4eccd1655c4d6bb0355b2ffab18b3057778c558c6a414700b47ecb8. Feb 13 20:50:37.525134 containerd[1434]: time="2025-02-13T20:50:37.525000575Z" level=info msg="StartContainer for \"605bf9f6a538c8238e24d0a7e2f53937042d7cebd641528ee37886692c6996d1\" returns successfully" Feb 13 20:50:37.536578 kubelet[2066]: E0213 20:50:37.536537 2066 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.9:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.9:6443: connect: connection refused" interval="1.6s" Feb 13 20:50:37.544672 containerd[1434]: time="2025-02-13T20:50:37.544593815Z" level=info msg="StartContainer for \"df0a4fba498d52a2ba9cb43dbf3634392b2b764638bbea6efd1fd564efee62c6\" returns successfully" Feb 13 20:50:37.544941 containerd[1434]: time="2025-02-13T20:50:37.544618535Z" level=info msg="StartContainer for \"fa1e4cb8d4eccd1655c4d6bb0355b2ffab18b3057778c558c6a414700b47ecb8\" returns successfully" Feb 13 20:50:37.557902 kubelet[2066]: W0213 20:50:37.557841 2066 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.9:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.9:6443: connect: connection refused Feb 13 20:50:37.558115 kubelet[2066]: E0213 20:50:37.557911 2066 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.0.0.9:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.9:6443: connect: connection refused" logger="UnhandledError" Feb 13 20:50:37.558115 kubelet[2066]: W0213 20:50:37.557840 2066 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.9:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.9:6443: connect: connection refused Feb 13 20:50:37.558115 kubelet[2066]: E0213 20:50:37.557946 2066 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch 
*v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.0.0.9:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.9:6443: connect: connection refused" logger="UnhandledError" Feb 13 20:50:37.756383 kubelet[2066]: I0213 20:50:37.755748 2066 kubelet_node_status.go:76] "Attempting to register node" node="localhost" Feb 13 20:50:37.756930 kubelet[2066]: E0213 20:50:37.756869 2066 kubelet_node_status.go:108] "Unable to register node with API server" err="Post \"https://10.0.0.9:6443/api/v1/nodes\": dial tcp 10.0.0.9:6443: connect: connection refused" node="localhost" Feb 13 20:50:38.153234 kubelet[2066]: E0213 20:50:38.152981 2066 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Feb 13 20:50:38.153879 kubelet[2066]: E0213 20:50:38.153567 2066 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Feb 13 20:50:38.153879 kubelet[2066]: E0213 20:50:38.153669 2066 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 20:50:38.154195 kubelet[2066]: E0213 20:50:38.154149 2066 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 20:50:38.155884 kubelet[2066]: E0213 20:50:38.155752 2066 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Feb 13 20:50:38.155884 kubelet[2066]: E0213 20:50:38.155840 2066 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 20:50:39.158106 kubelet[2066]: E0213 20:50:39.158061 2066 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Feb 13 20:50:39.158462 kubelet[2066]: E0213 20:50:39.158204 2066 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 20:50:39.158742 kubelet[2066]: E0213 20:50:39.158723 2066 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Feb 13 20:50:39.158841 kubelet[2066]: E0213 20:50:39.158826 2066 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 20:50:39.359136 kubelet[2066]: I0213 20:50:39.358828 2066 kubelet_node_status.go:76] "Attempting to register node" node="localhost" Feb 13 20:50:39.602922 kubelet[2066]: E0213 20:50:39.602876 2066 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"localhost\" not found" node="localhost" Feb 13 20:50:39.683550 kubelet[2066]: I0213 20:50:39.683502 2066 kubelet_node_status.go:79] "Successfully registered node" node="localhost" Feb 13 20:50:39.683550 kubelet[2066]: E0213 20:50:39.683534 2066 kubelet_node_status.go:549] "Error updating node status, will retry" err="error getting node \"localhost\": node \"localhost\" not found" 
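The lease controller's retry interval above grows from 800ms to 1.6s between failures, consistent with a doubling backoff. A sketch of that progression; the 7s cap and 200ms starting value are illustrative assumptions, not values taken from these logs:

    def next_retry_interval(current: float, factor: float = 2.0,
                            cap: float = 7.0) -> float:
        """Double the retry interval on each failure, up to a cap (assumed)."""
        return min(current * factor, cap)

    interval = 0.2  # assumed initial interval
    for attempt in range(6):
        interval = next_retry_interval(interval)
        print(f"attempt {attempt}: retry in {interval}s")
    # 0.4, 0.8, 1.6, 3.2, 6.4, 7.0 -- the logged 800ms -> 1.6s step fits this curve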
Feb 13 20:50:39.699365 kubelet[2066]: E0213 20:50:39.699323 2066 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"localhost\" not found" Feb 13 20:50:39.830299 kubelet[2066]: I0213 20:50:39.830248 2066 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Feb 13 20:50:39.838774 kubelet[2066]: E0213 20:50:39.838742 2066 kubelet.go:3202] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-localhost" Feb 13 20:50:39.838774 kubelet[2066]: I0213 20:50:39.838766 2066 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Feb 13 20:50:39.840290 kubelet[2066]: E0213 20:50:39.840253 2066 kubelet.go:3202] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-localhost" Feb 13 20:50:39.840290 kubelet[2066]: I0213 20:50:39.840281 2066 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" Feb 13 20:50:39.841827 kubelet[2066]: E0213 20:50:39.841795 2066 kubelet.go:3202] "Failed creating a mirror pod" err="pods \"kube-controller-manager-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-localhost" Feb 13 20:50:40.118266 kubelet[2066]: I0213 20:50:40.118233 2066 apiserver.go:52] "Watching apiserver" Feb 13 20:50:40.129928 kubelet[2066]: I0213 20:50:40.129872 2066 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world" Feb 13 20:50:40.516057 kubelet[2066]: I0213 20:50:40.515826 2066 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Feb 13 20:50:40.517927 kubelet[2066]: E0213 20:50:40.517896 2066 kubelet.go:3202] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-localhost" Feb 13 20:50:40.518068 kubelet[2066]: E0213 20:50:40.518048 2066 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 20:50:41.884619 systemd[1]: Reloading requested from client PID 2341 ('systemctl') (unit session-5.scope)... Feb 13 20:50:41.884637 systemd[1]: Reloading... Feb 13 20:50:41.906694 kubelet[2066]: I0213 20:50:41.906655 2066 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" Feb 13 20:50:41.912846 kubelet[2066]: E0213 20:50:41.912749 2066 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 20:50:41.945158 zram_generator::config[2381]: No configuration found. Feb 13 20:50:42.161644 kubelet[2066]: E0213 20:50:42.161316 2066 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 20:50:42.162468 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. 
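The "no PriorityClass with name system-node-critical was found" failures above are transient: the mirror pods for the static control-plane pods reference that class, and the API server creates the built-in system PriorityClasses shortly after it begins serving, after which mirror pod creation succeeds. A hedged convenience check that polls for the class via kubectl (the loop and 2s sleep are illustrative, not anything the kubelet does):

    import subprocess
    import time

    def wait_for_priority_class(name: str = "system-node-critical",
                                retries: int = 10) -> bool:
        """Poll until the built-in PriorityClass is visible (illustrative helper)."""
        for _ in range(retries):
            result = subprocess.run(
                ["kubectl", "get", "priorityclass", name],
                capture_output=True,
            )
            if result.returncode == 0:
                return True
            time.sleep(2)  # arbitrary poll interval, not taken from the logs
        return False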
Feb 13 20:50:42.226624 systemd[1]: Reloading finished in 341 ms. Feb 13 20:50:42.261491 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Feb 13 20:50:42.280385 systemd[1]: kubelet.service: Deactivated successfully. Feb 13 20:50:42.280570 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Feb 13 20:50:42.280608 systemd[1]: kubelet.service: Consumed 1.114s CPU time, 125.5M memory peak, 0B memory swap peak. Feb 13 20:50:42.292522 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Feb 13 20:50:42.386322 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Feb 13 20:50:42.390473 (kubelet)[2422]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Feb 13 20:50:42.431211 kubelet[2422]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Feb 13 20:50:42.431211 kubelet[2422]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Feb 13 20:50:42.431211 kubelet[2422]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Feb 13 20:50:42.431211 kubelet[2422]: I0213 20:50:42.430948 2422 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Feb 13 20:50:42.437159 kubelet[2422]: I0213 20:50:42.437122 2422 server.go:520] "Kubelet version" kubeletVersion="v1.32.0" Feb 13 20:50:42.437159 kubelet[2422]: I0213 20:50:42.437151 2422 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Feb 13 20:50:42.437401 kubelet[2422]: I0213 20:50:42.437377 2422 server.go:954] "Client rotation is on, will bootstrap in background" Feb 13 20:50:42.438697 kubelet[2422]: I0213 20:50:42.438674 2422 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Feb 13 20:50:42.441204 kubelet[2422]: I0213 20:50:42.441118 2422 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Feb 13 20:50:42.443741 kubelet[2422]: E0213 20:50:42.443698 2422 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Feb 13 20:50:42.443741 kubelet[2422]: I0213 20:50:42.443734 2422 server.go:1421] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Feb 13 20:50:42.446754 kubelet[2422]: I0213 20:50:42.446733 2422 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Feb 13 20:50:42.447278 kubelet[2422]: I0213 20:50:42.447082 2422 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Feb 13 20:50:42.447278 kubelet[2422]: I0213 20:50:42.447119 2422 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Feb 13 20:50:42.447400 kubelet[2422]: I0213 20:50:42.447295 2422 topology_manager.go:138] "Creating topology manager with none policy" Feb 13 20:50:42.447400 kubelet[2422]: I0213 20:50:42.447304 2422 container_manager_linux.go:304] "Creating device plugin manager" Feb 13 20:50:42.447400 kubelet[2422]: I0213 20:50:42.447347 2422 state_mem.go:36] "Initialized new in-memory state store" Feb 13 20:50:42.447495 kubelet[2422]: I0213 20:50:42.447476 2422 kubelet.go:446] "Attempting to sync node with API server" Feb 13 20:50:42.447495 kubelet[2422]: I0213 20:50:42.447494 2422 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests" Feb 13 20:50:42.447553 kubelet[2422]: I0213 20:50:42.447513 2422 kubelet.go:352] "Adding apiserver pod source" Feb 13 20:50:42.447553 kubelet[2422]: I0213 20:50:42.447522 2422 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Feb 13 20:50:42.448420 kubelet[2422]: I0213 20:50:42.448316 2422 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Feb 13 20:50:42.448771 kubelet[2422]: I0213 20:50:42.448746 2422 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Feb 13 20:50:42.449200 kubelet[2422]: I0213 20:50:42.449161 2422 watchdog_linux.go:99] "Systemd watchdog is not enabled" Feb 13 20:50:42.449253 kubelet[2422]: I0213 20:50:42.449205 2422 server.go:1287] "Started kubelet" Feb 13 20:50:42.449538 kubelet[2422]: I0213 20:50:42.449503 2422 server.go:169] "Starting to listen" address="0.0.0.0" port=10250 Feb 13 20:50:42.450486 kubelet[2422]: I0213 20:50:42.450461 2422 server.go:490] "Adding 
debug handlers to kubelet server" Feb 13 20:50:42.450688 kubelet[2422]: I0213 20:50:42.450674 2422 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Feb 13 20:50:42.451372 kubelet[2422]: I0213 20:50:42.451296 2422 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Feb 13 20:50:42.451531 kubelet[2422]: I0213 20:50:42.451482 2422 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Feb 13 20:50:42.451872 kubelet[2422]: I0213 20:50:42.451678 2422 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Feb 13 20:50:42.453378 kubelet[2422]: E0213 20:50:42.453005 2422 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"localhost\" not found" Feb 13 20:50:42.453378 kubelet[2422]: I0213 20:50:42.453054 2422 volume_manager.go:297] "Starting Kubelet Volume Manager" Feb 13 20:50:42.453378 kubelet[2422]: I0213 20:50:42.453139 2422 desired_state_of_world_populator.go:149] "Desired state populator starts to run" Feb 13 20:50:42.453378 kubelet[2422]: I0213 20:50:42.453245 2422 reconciler.go:26] "Reconciler: start to sync state" Feb 13 20:50:42.454124 kubelet[2422]: I0213 20:50:42.453941 2422 factory.go:221] Registration of the systemd container factory successfully Feb 13 20:50:42.454124 kubelet[2422]: I0213 20:50:42.454034 2422 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Feb 13 20:50:42.454423 kubelet[2422]: E0213 20:50:42.454378 2422 kubelet.go:1561] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Feb 13 20:50:42.455621 kubelet[2422]: I0213 20:50:42.455592 2422 factory.go:221] Registration of the containerd container factory successfully Feb 13 20:50:42.465358 kubelet[2422]: I0213 20:50:42.465311 2422 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Feb 13 20:50:42.468617 kubelet[2422]: I0213 20:50:42.468568 2422 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Feb 13 20:50:42.468617 kubelet[2422]: I0213 20:50:42.468601 2422 status_manager.go:227] "Starting to sync pod status with apiserver" Feb 13 20:50:42.468617 kubelet[2422]: I0213 20:50:42.468622 2422 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." 
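The container manager dump above lists hard eviction thresholds such as memory.available < 100Mi and percentage signals like imagefs.available < 15% and nodefs.available < 10%. Percentage thresholds are evaluated against filesystem capacity; a worked example of the resulting floor, where the 10 GiB capacity is an assumed figure rather than a value from this node:

    GIB = 1024 ** 3

    def eviction_floor(capacity_bytes: int, percentage: float) -> int:
        """Bytes that must remain available before the eviction manager acts."""
        return int(capacity_bytes * percentage)

    imagefs_capacity = 10 * GIB  # assumed example capacity
    print(eviction_floor(imagefs_capacity, 0.15))  # 1610612736 bytes (~1.5 GiB)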
Feb 13 20:50:42.468617 kubelet[2422]: I0213 20:50:42.468633 2422 kubelet.go:2388] "Starting kubelet main sync loop" Feb 13 20:50:42.468785 kubelet[2422]: E0213 20:50:42.468676 2422 kubelet.go:2412] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Feb 13 20:50:42.508736 kubelet[2422]: I0213 20:50:42.508645 2422 cpu_manager.go:221] "Starting CPU manager" policy="none" Feb 13 20:50:42.508736 kubelet[2422]: I0213 20:50:42.508665 2422 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Feb 13 20:50:42.508736 kubelet[2422]: I0213 20:50:42.508681 2422 state_mem.go:36] "Initialized new in-memory state store" Feb 13 20:50:42.508899 kubelet[2422]: I0213 20:50:42.508847 2422 state_mem.go:88] "Updated default CPUSet" cpuSet="" Feb 13 20:50:42.508899 kubelet[2422]: I0213 20:50:42.508859 2422 state_mem.go:96] "Updated CPUSet assignments" assignments={} Feb 13 20:50:42.508899 kubelet[2422]: I0213 20:50:42.508877 2422 policy_none.go:49] "None policy: Start" Feb 13 20:50:42.508899 kubelet[2422]: I0213 20:50:42.508885 2422 memory_manager.go:186] "Starting memorymanager" policy="None" Feb 13 20:50:42.508899 kubelet[2422]: I0213 20:50:42.508893 2422 state_mem.go:35] "Initializing new in-memory state store" Feb 13 20:50:42.509000 kubelet[2422]: I0213 20:50:42.508983 2422 state_mem.go:75] "Updated machine memory state" Feb 13 20:50:42.512305 kubelet[2422]: I0213 20:50:42.512281 2422 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Feb 13 20:50:42.512480 kubelet[2422]: I0213 20:50:42.512458 2422 eviction_manager.go:189] "Eviction manager: starting control loop" Feb 13 20:50:42.512555 kubelet[2422]: I0213 20:50:42.512478 2422 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Feb 13 20:50:42.513836 kubelet[2422]: I0213 20:50:42.512979 2422 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Feb 13 20:50:42.513836 kubelet[2422]: E0213 20:50:42.513405 2422 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." 
err="no imagefs label for configured runtime" Feb 13 20:50:42.569791 kubelet[2422]: I0213 20:50:42.569755 2422 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" Feb 13 20:50:42.569986 kubelet[2422]: I0213 20:50:42.569926 2422 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Feb 13 20:50:42.569986 kubelet[2422]: I0213 20:50:42.569789 2422 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Feb 13 20:50:42.582036 kubelet[2422]: E0213 20:50:42.581988 2422 kubelet.go:3202] "Failed creating a mirror pod" err="pods \"kube-controller-manager-localhost\" already exists" pod="kube-system/kube-controller-manager-localhost" Feb 13 20:50:42.616478 kubelet[2422]: I0213 20:50:42.616447 2422 kubelet_node_status.go:76] "Attempting to register node" node="localhost" Feb 13 20:50:42.633343 kubelet[2422]: I0213 20:50:42.633193 2422 kubelet_node_status.go:125] "Node was previously registered" node="localhost" Feb 13 20:50:42.633343 kubelet[2422]: I0213 20:50:42.633298 2422 kubelet_node_status.go:79] "Successfully registered node" node="localhost" Feb 13 20:50:42.654875 kubelet[2422]: I0213 20:50:42.654810 2422 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/c72911152bbceda2f57fd8d59261e015-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"c72911152bbceda2f57fd8d59261e015\") " pod="kube-system/kube-controller-manager-localhost" Feb 13 20:50:42.654875 kubelet[2422]: I0213 20:50:42.654850 2422 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/c72911152bbceda2f57fd8d59261e015-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"c72911152bbceda2f57fd8d59261e015\") " pod="kube-system/kube-controller-manager-localhost" Feb 13 20:50:42.654875 kubelet[2422]: I0213 20:50:42.654873 2422 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/85d162220784a5775b8df2b5cc37dc7d-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"85d162220784a5775b8df2b5cc37dc7d\") " pod="kube-system/kube-apiserver-localhost" Feb 13 20:50:42.655056 kubelet[2422]: I0213 20:50:42.654890 2422 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/c72911152bbceda2f57fd8d59261e015-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"c72911152bbceda2f57fd8d59261e015\") " pod="kube-system/kube-controller-manager-localhost" Feb 13 20:50:42.655056 kubelet[2422]: I0213 20:50:42.654916 2422 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/c72911152bbceda2f57fd8d59261e015-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"c72911152bbceda2f57fd8d59261e015\") " pod="kube-system/kube-controller-manager-localhost" Feb 13 20:50:42.655056 kubelet[2422]: I0213 20:50:42.654938 2422 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/95ef9ac46cd4dbaadc63cb713310ae59-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"95ef9ac46cd4dbaadc63cb713310ae59\") 
" pod="kube-system/kube-scheduler-localhost" Feb 13 20:50:42.655056 kubelet[2422]: I0213 20:50:42.654952 2422 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/85d162220784a5775b8df2b5cc37dc7d-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"85d162220784a5775b8df2b5cc37dc7d\") " pod="kube-system/kube-apiserver-localhost" Feb 13 20:50:42.655056 kubelet[2422]: I0213 20:50:42.654968 2422 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/85d162220784a5775b8df2b5cc37dc7d-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"85d162220784a5775b8df2b5cc37dc7d\") " pod="kube-system/kube-apiserver-localhost" Feb 13 20:50:42.655197 kubelet[2422]: I0213 20:50:42.654984 2422 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/c72911152bbceda2f57fd8d59261e015-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"c72911152bbceda2f57fd8d59261e015\") " pod="kube-system/kube-controller-manager-localhost" Feb 13 20:50:42.877315 kubelet[2422]: E0213 20:50:42.877190 2422 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 20:50:42.881101 kubelet[2422]: E0213 20:50:42.881063 2422 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 20:50:42.883443 kubelet[2422]: E0213 20:50:42.883299 2422 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 20:50:43.448019 kubelet[2422]: I0213 20:50:43.447973 2422 apiserver.go:52] "Watching apiserver" Feb 13 20:50:43.454018 kubelet[2422]: I0213 20:50:43.453993 2422 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world" Feb 13 20:50:43.496097 kubelet[2422]: I0213 20:50:43.496048 2422 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" Feb 13 20:50:43.496097 kubelet[2422]: I0213 20:50:43.496067 2422 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Feb 13 20:50:43.496748 kubelet[2422]: I0213 20:50:43.496299 2422 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Feb 13 20:50:43.503167 kubelet[2422]: E0213 20:50:43.503055 2422 kubelet.go:3202] "Failed creating a mirror pod" err="pods \"kube-controller-manager-localhost\" already exists" pod="kube-system/kube-controller-manager-localhost" Feb 13 20:50:43.503265 kubelet[2422]: E0213 20:50:43.503249 2422 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 20:50:43.504658 kubelet[2422]: E0213 20:50:43.504615 2422 kubelet.go:3202] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost" Feb 13 20:50:43.504751 kubelet[2422]: E0213 20:50:43.504734 2422 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have 
been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 20:50:43.504810 kubelet[2422]: E0213 20:50:43.504796 2422 kubelet.go:3202] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" already exists" pod="kube-system/kube-scheduler-localhost" Feb 13 20:50:43.504903 kubelet[2422]: E0213 20:50:43.504871 2422 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 20:50:43.540117 kubelet[2422]: I0213 20:50:43.537036 2422 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=1.537016018 podStartE2EDuration="1.537016018s" podCreationTimestamp="2025-02-13 20:50:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-02-13 20:50:43.524825428 +0000 UTC m=+1.130867649" watchObservedRunningTime="2025-02-13 20:50:43.537016018 +0000 UTC m=+1.143058239" Feb 13 20:50:43.549686 kubelet[2422]: I0213 20:50:43.549634 2422 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=2.549616091 podStartE2EDuration="2.549616091s" podCreationTimestamp="2025-02-13 20:50:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-02-13 20:50:43.540301527 +0000 UTC m=+1.146343748" watchObservedRunningTime="2025-02-13 20:50:43.549616091 +0000 UTC m=+1.155658312" Feb 13 20:50:43.686519 sudo[1572]: pam_unix(sudo:session): session closed for user root Feb 13 20:50:43.689798 sshd[1569]: pam_unix(sshd:session): session closed for user core Feb 13 20:50:43.692149 systemd[1]: sshd@4-10.0.0.9:22-10.0.0.1:39688.service: Deactivated successfully. Feb 13 20:50:43.693652 systemd[1]: session-5.scope: Deactivated successfully. Feb 13 20:50:43.693816 systemd[1]: session-5.scope: Consumed 7.034s CPU time, 155.4M memory peak, 0B memory swap peak. Feb 13 20:50:43.695080 systemd-logind[1418]: Session 5 logged out. Waiting for processes to exit. Feb 13 20:50:43.696119 systemd-logind[1418]: Removed session 5. 
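The pod_startup_latency_tracker entries above report podStartSLOduration as observedRunningTime minus podCreationTimestamp. Reproducing the 1.537s figure for kube-scheduler-localhost from the logged timestamps (nanoseconds are trimmed to microseconds, since Python's strptime %f accepts at most six fractional digits):

    import re
    from datetime import datetime

    def parse_k8s_time(ts: str) -> datetime:
        """Parse timestamps like '2025-02-13 20:50:43.537016018 +0000 UTC m=+1.13...'."""
        base = ts.split(" +")[0]                      # drop ' +0000 UTC m=+...'
        base = re.sub(r"(\.\d{6})\d+", r"\1", base)   # %f accepts at most 6 digits
        fmt = "%Y-%m-%d %H:%M:%S.%f" if "." in base else "%Y-%m-%d %H:%M:%S"
        return datetime.strptime(base, fmt)

    created = parse_k8s_time("2025-02-13 20:50:42 +0000 UTC")
    running = parse_k8s_time("2025-02-13 20:50:43.537016018 +0000 UTC m=+1.130867649")
    print((running - created).total_seconds())  # 1.537016, matching podStartSLOduration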
Feb 13 20:50:44.499658 kubelet[2422]: E0213 20:50:44.499616 2422 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 20:50:44.499980 kubelet[2422]: E0213 20:50:44.499883 2422 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 20:50:44.499980 kubelet[2422]: E0213 20:50:44.499955 2422 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 20:50:46.074700 kubelet[2422]: E0213 20:50:46.074638 2422 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 20:50:47.667222 kubelet[2422]: I0213 20:50:47.667191 2422 kuberuntime_manager.go:1702] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Feb 13 20:50:47.667573 containerd[1434]: time="2025-02-13T20:50:47.667495345Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Feb 13 20:50:47.667871 kubelet[2422]: I0213 20:50:47.667647 2422 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Feb 13 20:50:48.451464 kubelet[2422]: E0213 20:50:48.451379 2422 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 20:50:48.469954 kubelet[2422]: I0213 20:50:48.469214 2422 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=6.469175548 podStartE2EDuration="6.469175548s" podCreationTimestamp="2025-02-13 20:50:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-02-13 20:50:43.549977134 +0000 UTC m=+1.156019355" watchObservedRunningTime="2025-02-13 20:50:48.469175548 +0000 UTC m=+6.075217729" Feb 13 20:50:48.507650 kubelet[2422]: E0213 20:50:48.507620 2422 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 20:50:48.634813 systemd[1]: Created slice kubepods-besteffort-pod9a4f288b_9698_4251_b36a_e7dc77f2bdcb.slice - libcontainer container kubepods-besteffort-pod9a4f288b_9698_4251_b36a_e7dc77f2bdcb.slice. Feb 13 20:50:48.644570 systemd[1]: Created slice kubepods-burstable-pod21e6622a_36e3_47a8_b025_f56eaad98d84.slice - libcontainer container kubepods-burstable-pod21e6622a_36e3_47a8_b025_f56eaad98d84.slice. 
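The kuberuntime_manager entry above pushes the node's pod CIDR (192.168.0.0/24) to the container runtime, after which flannel can configure pod networking. A quick stdlib look at what that allocation provides:

    import ipaddress

    # CIDR taken from the 'Updating runtime config through cri with podcidr' entry above
    pod_cidr = ipaddress.ip_network("192.168.0.0/24")
    print(pod_cidr.num_addresses)        # 256 addresses in this node's pod range
    print(pod_cidr.network_address)      # 192.168.0.0
    print(pod_cidr.broadcast_address)    # 192.168.0.255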
Feb 13 20:50:48.688214 kubelet[2422]: I0213 20:50:48.688160 2422 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/9a4f288b-9698-4251-b36a-e7dc77f2bdcb-lib-modules\") pod \"kube-proxy-sv4bq\" (UID: \"9a4f288b-9698-4251-b36a-e7dc77f2bdcb\") " pod="kube-system/kube-proxy-sv4bq" Feb 13 20:50:48.688214 kubelet[2422]: I0213 20:50:48.688212 2422 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-plugin\" (UniqueName: \"kubernetes.io/host-path/21e6622a-36e3-47a8-b025-f56eaad98d84-cni-plugin\") pod \"kube-flannel-ds-chdvq\" (UID: \"21e6622a-36e3-47a8-b025-f56eaad98d84\") " pod="kube-flannel/kube-flannel-ds-chdvq" Feb 13 20:50:48.688659 kubelet[2422]: I0213 20:50:48.688237 2422 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w58p2\" (UniqueName: \"kubernetes.io/projected/21e6622a-36e3-47a8-b025-f56eaad98d84-kube-api-access-w58p2\") pod \"kube-flannel-ds-chdvq\" (UID: \"21e6622a-36e3-47a8-b025-f56eaad98d84\") " pod="kube-flannel/kube-flannel-ds-chdvq" Feb 13 20:50:48.688659 kubelet[2422]: I0213 20:50:48.688257 2422 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flannel-cfg\" (UniqueName: \"kubernetes.io/configmap/21e6622a-36e3-47a8-b025-f56eaad98d84-flannel-cfg\") pod \"kube-flannel-ds-chdvq\" (UID: \"21e6622a-36e3-47a8-b025-f56eaad98d84\") " pod="kube-flannel/kube-flannel-ds-chdvq" Feb 13 20:50:48.688659 kubelet[2422]: I0213 20:50:48.688280 2422 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6r5qz\" (UniqueName: \"kubernetes.io/projected/9a4f288b-9698-4251-b36a-e7dc77f2bdcb-kube-api-access-6r5qz\") pod \"kube-proxy-sv4bq\" (UID: \"9a4f288b-9698-4251-b36a-e7dc77f2bdcb\") " pod="kube-system/kube-proxy-sv4bq" Feb 13 20:50:48.688872 kubelet[2422]: I0213 20:50:48.688318 2422 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/21e6622a-36e3-47a8-b025-f56eaad98d84-xtables-lock\") pod \"kube-flannel-ds-chdvq\" (UID: \"21e6622a-36e3-47a8-b025-f56eaad98d84\") " pod="kube-flannel/kube-flannel-ds-chdvq" Feb 13 20:50:48.688872 kubelet[2422]: I0213 20:50:48.688800 2422 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/9a4f288b-9698-4251-b36a-e7dc77f2bdcb-xtables-lock\") pod \"kube-proxy-sv4bq\" (UID: \"9a4f288b-9698-4251-b36a-e7dc77f2bdcb\") " pod="kube-system/kube-proxy-sv4bq" Feb 13 20:50:48.688872 kubelet[2422]: I0213 20:50:48.688834 2422 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run\" (UniqueName: \"kubernetes.io/host-path/21e6622a-36e3-47a8-b025-f56eaad98d84-run\") pod \"kube-flannel-ds-chdvq\" (UID: \"21e6622a-36e3-47a8-b025-f56eaad98d84\") " pod="kube-flannel/kube-flannel-ds-chdvq" Feb 13 20:50:48.688872 kubelet[2422]: I0213 20:50:48.688851 2422 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni\" (UniqueName: \"kubernetes.io/host-path/21e6622a-36e3-47a8-b025-f56eaad98d84-cni\") pod \"kube-flannel-ds-chdvq\" (UID: \"21e6622a-36e3-47a8-b025-f56eaad98d84\") " pod="kube-flannel/kube-flannel-ds-chdvq" Feb 13 20:50:48.689081 kubelet[2422]: I0213 20:50:48.688895 2422 
reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/9a4f288b-9698-4251-b36a-e7dc77f2bdcb-kube-proxy\") pod \"kube-proxy-sv4bq\" (UID: \"9a4f288b-9698-4251-b36a-e7dc77f2bdcb\") " pod="kube-system/kube-proxy-sv4bq" Feb 13 20:50:48.942870 kubelet[2422]: E0213 20:50:48.942815 2422 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 20:50:48.943395 containerd[1434]: time="2025-02-13T20:50:48.943352715Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-sv4bq,Uid:9a4f288b-9698-4251-b36a-e7dc77f2bdcb,Namespace:kube-system,Attempt:0,}" Feb 13 20:50:48.947676 kubelet[2422]: E0213 20:50:48.947639 2422 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 20:50:48.948110 containerd[1434]: time="2025-02-13T20:50:48.948062425Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-flannel-ds-chdvq,Uid:21e6622a-36e3-47a8-b025-f56eaad98d84,Namespace:kube-flannel,Attempt:0,}" Feb 13 20:50:48.968809 containerd[1434]: time="2025-02-13T20:50:48.965624780Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 20:50:48.968809 containerd[1434]: time="2025-02-13T20:50:48.965672700Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 20:50:48.968809 containerd[1434]: time="2025-02-13T20:50:48.965684580Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 20:50:48.968809 containerd[1434]: time="2025-02-13T20:50:48.965816581Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 20:50:48.971472 containerd[1434]: time="2025-02-13T20:50:48.971381657Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 20:50:48.971472 containerd[1434]: time="2025-02-13T20:50:48.971440378Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 20:50:48.971472 containerd[1434]: time="2025-02-13T20:50:48.971455818Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 20:50:48.971596 containerd[1434]: time="2025-02-13T20:50:48.971520698Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 20:50:48.982325 systemd[1]: Started cri-containerd-354b9f2c4d7bf10f517afab10a145f10f704d14e349f867b727ae5bf692d9a65.scope - libcontainer container 354b9f2c4d7bf10f517afab10a145f10f704d14e349f867b727ae5bf692d9a65. Feb 13 20:50:48.989637 systemd[1]: Started cri-containerd-c66200d289a2d2a5285ebecdfa7bfaf1915bd846a9352b8f2a78c2ab175d6aad.scope - libcontainer container c66200d289a2d2a5285ebecdfa7bfaf1915bd846a9352b8f2a78c2ab175d6aad. 
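The recurring "Nameserver limits exceeded" warnings throughout these logs mean the node's resolv.conf lists more nameservers than the resolver supports, so the kubelet truncates the applied line to "1.1.1.1 1.0.0.1 8.8.8.8". The limit of three follows glibc's MAXNS convention. A small local check, assuming the standard /etc/resolv.conf path:

    MAXNS = 3  # glibc honours at most three 'nameserver' entries

    def nameservers(path: str = "/etc/resolv.conf") -> list[str]:
        out = []
        with open(path) as f:
            for line in f:
                parts = line.split()
                if len(parts) >= 2 and parts[0] == "nameserver":
                    out.append(parts[1])
        return out

    ns = nameservers()
    if len(ns) > MAXNS:
        print(f"{len(ns)} nameservers configured; only {ns[:MAXNS]} will be applied")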
Feb 13 20:50:49.007400 containerd[1434]: time="2025-02-13T20:50:49.007364209Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-sv4bq,Uid:9a4f288b-9698-4251-b36a-e7dc77f2bdcb,Namespace:kube-system,Attempt:0,} returns sandbox id \"354b9f2c4d7bf10f517afab10a145f10f704d14e349f867b727ae5bf692d9a65\"" Feb 13 20:50:49.008050 kubelet[2422]: E0213 20:50:49.008028 2422 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 20:50:49.010305 containerd[1434]: time="2025-02-13T20:50:49.010275307Z" level=info msg="CreateContainer within sandbox \"354b9f2c4d7bf10f517afab10a145f10f704d14e349f867b727ae5bf692d9a65\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Feb 13 20:50:49.021285 containerd[1434]: time="2025-02-13T20:50:49.021220853Z" level=info msg="CreateContainer within sandbox \"354b9f2c4d7bf10f517afab10a145f10f704d14e349f867b727ae5bf692d9a65\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"bb7e023d9b2ad92b83ffaf77360c8ed74ceab5dced7312ba15de05cc128a88ff\"" Feb 13 20:50:49.023111 containerd[1434]: time="2025-02-13T20:50:49.021826537Z" level=info msg="StartContainer for \"bb7e023d9b2ad92b83ffaf77360c8ed74ceab5dced7312ba15de05cc128a88ff\"" Feb 13 20:50:49.024589 containerd[1434]: time="2025-02-13T20:50:49.024525914Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-flannel-ds-chdvq,Uid:21e6622a-36e3-47a8-b025-f56eaad98d84,Namespace:kube-flannel,Attempt:0,} returns sandbox id \"c66200d289a2d2a5285ebecdfa7bfaf1915bd846a9352b8f2a78c2ab175d6aad\"" Feb 13 20:50:49.025101 kubelet[2422]: E0213 20:50:49.025071 2422 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 20:50:49.026029 containerd[1434]: time="2025-02-13T20:50:49.025979842Z" level=info msg="PullImage \"docker.io/flannel/flannel-cni-plugin:v1.1.2\"" Feb 13 20:50:49.047505 systemd[1]: Started cri-containerd-bb7e023d9b2ad92b83ffaf77360c8ed74ceab5dced7312ba15de05cc128a88ff.scope - libcontainer container bb7e023d9b2ad92b83ffaf77360c8ed74ceab5dced7312ba15de05cc128a88ff. 
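The flannel CNI plugin pull that begins above is an anonymous pull from docker.io, which enforces per-client pull rate limits; the 429 Too Many Requests failures that follow report exactly that. Docker's download-rate-limit documentation describes inspecting the remaining quota by fetching an anonymous token and issuing a HEAD request against a test repository; a sketch of that procedure using requests (the ratelimitpreview/test repository and RateLimit-* header names come from Docker's published docs and are worth verifying there):

    import requests

    # Fetch an anonymous pull token, then HEAD a manifest and read the headers.
    token = requests.get(
        "https://auth.docker.io/token",
        params={"service": "registry.docker.io",
                "scope": "repository:ratelimitpreview/test:pull"},
    ).json()["token"]

    resp = requests.head(
        "https://registry-1.docker.io/v2/ratelimitpreview/test/manifests/latest",
        headers={"Authorization": f"Bearer {token}"},
    )
    print(resp.headers.get("RateLimit-Limit"),
          resp.headers.get("RateLimit-Remaining"))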
Feb 13 20:50:49.075760 containerd[1434]: time="2025-02-13T20:50:49.075704746Z" level=info msg="StartContainer for \"bb7e023d9b2ad92b83ffaf77360c8ed74ceab5dced7312ba15de05cc128a88ff\" returns successfully" Feb 13 20:50:49.510909 kubelet[2422]: E0213 20:50:49.510852 2422 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 20:50:49.618723 kubelet[2422]: E0213 20:50:49.618694 2422 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 20:50:49.634389 kubelet[2422]: I0213 20:50:49.634142 2422 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-sv4bq" podStartSLOduration=1.634125273 podStartE2EDuration="1.634125273s" podCreationTimestamp="2025-02-13 20:50:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-02-13 20:50:49.519858696 +0000 UTC m=+7.125900917" watchObservedRunningTime="2025-02-13 20:50:49.634125273 +0000 UTC m=+7.240167534" Feb 13 20:50:50.133543 containerd[1434]: time="2025-02-13T20:50:50.133496190Z" level=error msg="PullImage \"docker.io/flannel/flannel-cni-plugin:v1.1.2\" failed" error="failed to pull and unpack image \"docker.io/flannel/flannel-cni-plugin:v1.1.2\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/flannel/flannel-cni-plugin/manifests/sha256:bf4b62b131666d040f35a327d906ee5a3418280b68a88d9b9c7e828057210443: 429 Too Many Requests - Server message: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit" Feb 13 20:50:50.133989 containerd[1434]: time="2025-02-13T20:50:50.133571430Z" level=info msg="stop pulling image docker.io/flannel/flannel-cni-plugin:v1.1.2: active requests=0, bytes read=11110" Feb 13 20:50:50.134047 kubelet[2422]: E0213 20:50:50.133695 2422 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = failed to pull and unpack image \"docker.io/flannel/flannel-cni-plugin:v1.1.2\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/flannel/flannel-cni-plugin/manifests/sha256:bf4b62b131666d040f35a327d906ee5a3418280b68a88d9b9c7e828057210443: 429 Too Many Requests - Server message: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit" image="docker.io/flannel/flannel-cni-plugin:v1.1.2" Feb 13 20:50:50.134047 kubelet[2422]: E0213 20:50:50.133744 2422 kuberuntime_image.go:55] "Failed to pull image" err="failed to pull and unpack image \"docker.io/flannel/flannel-cni-plugin:v1.1.2\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/flannel/flannel-cni-plugin/manifests/sha256:bf4b62b131666d040f35a327d906ee5a3418280b68a88d9b9c7e828057210443: 429 Too Many Requests - Server message: toomanyrequests: You have reached your pull rate limit. 
You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit" image="docker.io/flannel/flannel-cni-plugin:v1.1.2" Feb 13 20:50:50.134356 kubelet[2422]: E0213 20:50:50.133920 2422 kuberuntime_manager.go:1341] "Unhandled Error" err="init container &Container{Name:install-cni-plugin,Image:docker.io/flannel/flannel-cni-plugin:v1.1.2,Command:[cp],Args:[-f /flannel /opt/cni/bin/flannel],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:cni-plugin,ReadOnly:false,MountPath:/opt/cni/bin,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-w58p2,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod kube-flannel-ds-chdvq_kube-flannel(21e6622a-36e3-47a8-b025-f56eaad98d84): ErrImagePull: failed to pull and unpack image \"docker.io/flannel/flannel-cni-plugin:v1.1.2\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/flannel/flannel-cni-plugin/manifests/sha256:bf4b62b131666d040f35a327d906ee5a3418280b68a88d9b9c7e828057210443: 429 Too Many Requests - Server message: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit" logger="UnhandledError" Feb 13 20:50:50.135277 kubelet[2422]: E0213 20:50:50.135229 2422 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"install-cni-plugin\" with ErrImagePull: \"failed to pull and unpack image \\\"docker.io/flannel/flannel-cni-plugin:v1.1.2\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/flannel/flannel-cni-plugin/manifests/sha256:bf4b62b131666d040f35a327d906ee5a3418280b68a88d9b9c7e828057210443: 429 Too Many Requests - Server message: toomanyrequests: You have reached your pull rate limit. 
You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit\"" pod="kube-flannel/kube-flannel-ds-chdvq" podUID="21e6622a-36e3-47a8-b025-f56eaad98d84" Feb 13 20:50:50.513214 kubelet[2422]: E0213 20:50:50.512734 2422 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 20:50:50.513214 kubelet[2422]: E0213 20:50:50.512927 2422 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 20:50:50.514079 kubelet[2422]: E0213 20:50:50.514028 2422 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"install-cni-plugin\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/flannel/flannel-cni-plugin:v1.1.2\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/flannel/flannel-cni-plugin:v1.1.2\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/flannel/flannel-cni-plugin/manifests/sha256:bf4b62b131666d040f35a327d906ee5a3418280b68a88d9b9c7e828057210443: 429 Too Many Requests - Server message: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit\"" pod="kube-flannel/kube-flannel-ds-chdvq" podUID="21e6622a-36e3-47a8-b025-f56eaad98d84" Feb 13 20:50:51.514007 kubelet[2422]: E0213 20:50:51.513967 2422 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 20:50:52.531176 update_engine[1426]: I20250213 20:50:52.531123 1426 update_attempter.cc:509] Updating boot flags... Feb 13 20:50:52.562772 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 40 scanned by (udev-worker) (2745) Feb 13 20:50:52.584138 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 40 scanned by (udev-worker) (2745) Feb 13 20:50:52.611198 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 40 scanned by (udev-worker) (2745) Feb 13 20:50:56.082645 kubelet[2422]: E0213 20:50:56.082614 2422 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 20:51:03.469463 kubelet[2422]: E0213 20:51:03.469360 2422 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 20:51:03.470734 containerd[1434]: time="2025-02-13T20:51:03.470094647Z" level=info msg="PullImage \"docker.io/flannel/flannel-cni-plugin:v1.1.2\"" Feb 13 20:51:04.588255 containerd[1434]: time="2025-02-13T20:51:04.588193721Z" level=error msg="PullImage \"docker.io/flannel/flannel-cni-plugin:v1.1.2\" failed" error="failed to pull and unpack image \"docker.io/flannel/flannel-cni-plugin:v1.1.2\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/flannel/flannel-cni-plugin/manifests/sha256:bf4b62b131666d040f35a327d906ee5a3418280b68a88d9b9c7e828057210443: 429 Too Many Requests - Server message: toomanyrequests: You have reached your pull rate limit. 
You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit" Feb 13 20:51:04.588643 containerd[1434]: time="2025-02-13T20:51:04.588235441Z" level=info msg="stop pulling image docker.io/flannel/flannel-cni-plugin:v1.1.2: active requests=0, bytes read=11108" Feb 13 20:51:04.588676 kubelet[2422]: E0213 20:51:04.588391 2422 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = failed to pull and unpack image \"docker.io/flannel/flannel-cni-plugin:v1.1.2\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/flannel/flannel-cni-plugin/manifests/sha256:bf4b62b131666d040f35a327d906ee5a3418280b68a88d9b9c7e828057210443: 429 Too Many Requests - Server message: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit" image="docker.io/flannel/flannel-cni-plugin:v1.1.2" Feb 13 20:51:04.588676 kubelet[2422]: E0213 20:51:04.588437 2422 kuberuntime_image.go:55] "Failed to pull image" err="failed to pull and unpack image \"docker.io/flannel/flannel-cni-plugin:v1.1.2\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/flannel/flannel-cni-plugin/manifests/sha256:bf4b62b131666d040f35a327d906ee5a3418280b68a88d9b9c7e828057210443: 429 Too Many Requests - Server message: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit" image="docker.io/flannel/flannel-cni-plugin:v1.1.2" Feb 13 20:51:04.588896 kubelet[2422]: E0213 20:51:04.588524 2422 kuberuntime_manager.go:1341] "Unhandled Error" err="init container &Container{Name:install-cni-plugin,Image:docker.io/flannel/flannel-cni-plugin:v1.1.2,Command:[cp],Args:[-f /flannel /opt/cni/bin/flannel],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:cni-plugin,ReadOnly:false,MountPath:/opt/cni/bin,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-w58p2,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod kube-flannel-ds-chdvq_kube-flannel(21e6622a-36e3-47a8-b025-f56eaad98d84): ErrImagePull: failed to pull and unpack image \"docker.io/flannel/flannel-cni-plugin:v1.1.2\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/flannel/flannel-cni-plugin/manifests/sha256:bf4b62b131666d040f35a327d906ee5a3418280b68a88d9b9c7e828057210443: 429 Too Many Requests - Server message: toomanyrequests: You have reached your pull rate limit. 
You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit" logger="UnhandledError" Feb 13 20:51:04.589687 kubelet[2422]: E0213 20:51:04.589646 2422 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"install-cni-plugin\" with ErrImagePull: \"failed to pull and unpack image \\\"docker.io/flannel/flannel-cni-plugin:v1.1.2\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/flannel/flannel-cni-plugin/manifests/sha256:bf4b62b131666d040f35a327d906ee5a3418280b68a88d9b9c7e828057210443: 429 Too Many Requests - Server message: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit\"" pod="kube-flannel/kube-flannel-ds-chdvq" podUID="21e6622a-36e3-47a8-b025-f56eaad98d84" Feb 13 20:51:08.566738 systemd[1]: Started sshd@5-10.0.0.9:22-10.0.0.1:55462.service - OpenSSH per-connection server daemon (10.0.0.1:55462). Feb 13 20:51:08.605238 sshd[2754]: Accepted publickey for core from 10.0.0.1 port 55462 ssh2: RSA SHA256:cMrn/QNbOj3/muQN24sDQmRTV2Hbz1QD+PbuA/uipHg Feb 13 20:51:08.606528 sshd[2754]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 20:51:08.610143 systemd-logind[1418]: New session 6 of user core. Feb 13 20:51:08.622287 systemd[1]: Started session-6.scope - Session 6 of User core. Feb 13 20:51:08.733304 sshd[2754]: pam_unix(sshd:session): session closed for user core Feb 13 20:51:08.736481 systemd[1]: sshd@5-10.0.0.9:22-10.0.0.1:55462.service: Deactivated successfully. Feb 13 20:51:08.738544 systemd[1]: session-6.scope: Deactivated successfully. Feb 13 20:51:08.739184 systemd-logind[1418]: Session 6 logged out. Waiting for processes to exit. Feb 13 20:51:08.740129 systemd-logind[1418]: Removed session 6. Feb 13 20:51:13.743588 systemd[1]: Started sshd@6-10.0.0.9:22-10.0.0.1:53266.service - OpenSSH per-connection server daemon (10.0.0.1:53266). Feb 13 20:51:13.781376 sshd[2770]: Accepted publickey for core from 10.0.0.1 port 53266 ssh2: RSA SHA256:cMrn/QNbOj3/muQN24sDQmRTV2Hbz1QD+PbuA/uipHg Feb 13 20:51:13.782603 sshd[2770]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 20:51:13.786156 systemd-logind[1418]: New session 7 of user core. Feb 13 20:51:13.802243 systemd[1]: Started session-7.scope - Session 7 of User core. Feb 13 20:51:13.908441 sshd[2770]: pam_unix(sshd:session): session closed for user core Feb 13 20:51:13.911443 systemd[1]: sshd@6-10.0.0.9:22-10.0.0.1:53266.service: Deactivated successfully. Feb 13 20:51:13.912987 systemd[1]: session-7.scope: Deactivated successfully. Feb 13 20:51:13.913618 systemd-logind[1418]: Session 7 logged out. Waiting for processes to exit. Feb 13 20:51:13.914457 systemd-logind[1418]: Removed session 7. 
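Every attempt to pull docker.io/flannel/flannel-cni-plugin:v1.1.2 in this log dies the same way: registry-1.docker.io answers 429 Too Many Requests because the node pulls anonymously and has exhausted Docker Hub's unauthenticated allowance. One way out, sketched on the assumption that this containerd is a 1.x build using the CRI plugin's registry config (the credentials are placeholders, not values from the log):

    # /etc/containerd/config.toml (fragment, assumed layout for containerd 1.x CRI)
    [plugins."io.containerd.grpc.v1.cri".registry.configs."registry-1.docker.io".auth]
      username = "<docker-hub-user>"   # placeholder
      password = "<docker-hub-token>"  # placeholder

Restarting containerd after the change makes subsequent pulls authenticated; declaring a pull-through mirror under /etc/containerd/certs.d/docker.io/hosts.toml is the credential-free alternative.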
Feb 13 20:51:15.469332 kubelet[2422]: E0213 20:51:15.469298 2422 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 20:51:15.469911 kubelet[2422]: E0213 20:51:15.469810 2422 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"install-cni-plugin\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/flannel/flannel-cni-plugin:v1.1.2\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/flannel/flannel-cni-plugin:v1.1.2\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/flannel/flannel-cni-plugin/manifests/sha256:bf4b62b131666d040f35a327d906ee5a3418280b68a88d9b9c7e828057210443: 429 Too Many Requests - Server message: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit\"" pod="kube-flannel/kube-flannel-ds-chdvq" podUID="21e6622a-36e3-47a8-b025-f56eaad98d84" Feb 13 20:51:18.917660 systemd[1]: Started sshd@7-10.0.0.9:22-10.0.0.1:53270.service - OpenSSH per-connection server daemon (10.0.0.1:53270). Feb 13 20:51:18.955235 sshd[2786]: Accepted publickey for core from 10.0.0.1 port 53270 ssh2: RSA SHA256:cMrn/QNbOj3/muQN24sDQmRTV2Hbz1QD+PbuA/uipHg Feb 13 20:51:18.956424 sshd[2786]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 20:51:18.960253 systemd-logind[1418]: New session 8 of user core. Feb 13 20:51:18.967228 systemd[1]: Started session-8.scope - Session 8 of User core. Feb 13 20:51:19.078192 sshd[2786]: pam_unix(sshd:session): session closed for user core Feb 13 20:51:19.080786 systemd[1]: session-8.scope: Deactivated successfully. Feb 13 20:51:19.081402 systemd[1]: sshd@7-10.0.0.9:22-10.0.0.1:53270.service: Deactivated successfully. Feb 13 20:51:19.084894 systemd-logind[1418]: Session 8 logged out. Waiting for processes to exit. Feb 13 20:51:19.086013 systemd-logind[1418]: Removed session 8. Feb 13 20:51:24.092383 systemd[1]: Started sshd@8-10.0.0.9:22-10.0.0.1:37592.service - OpenSSH per-connection server daemon (10.0.0.1:37592). Feb 13 20:51:24.130104 sshd[2804]: Accepted publickey for core from 10.0.0.1 port 37592 ssh2: RSA SHA256:cMrn/QNbOj3/muQN24sDQmRTV2Hbz1QD+PbuA/uipHg Feb 13 20:51:24.131298 sshd[2804]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 20:51:24.134824 systemd-logind[1418]: New session 9 of user core. Feb 13 20:51:24.146282 systemd[1]: Started session-9.scope - Session 9 of User core. Feb 13 20:51:24.250311 sshd[2804]: pam_unix(sshd:session): session closed for user core Feb 13 20:51:24.253686 systemd[1]: sshd@8-10.0.0.9:22-10.0.0.1:37592.service: Deactivated successfully. Feb 13 20:51:24.255195 systemd[1]: session-9.scope: Deactivated successfully. Feb 13 20:51:24.256705 systemd-logind[1418]: Session 9 logged out. Waiting for processes to exit. Feb 13 20:51:24.257643 systemd-logind[1418]: Removed session 9. 
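Two side notes from 20:50:52 above: update_engine's "Updating boot flags" rewrites GPT priority attributes after first boot, and each rewrite makes udev re-probe the disk, which is the likely trigger for the three identical "BTRFS warning: duplicate device /dev/vda3" lines — the same device being scanned again, not a second copy of the filesystem. If in doubt, the device list can be checked directly (standard btrfs-progs command):

    btrfs filesystem show /dev/vda3   # a single devid listed => the "duplicate" was only a rescan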
Feb 13 20:51:27.469679 kubelet[2422]: E0213 20:51:27.469641 2422 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 20:51:27.470933 containerd[1434]: time="2025-02-13T20:51:27.470898525Z" level=info msg="PullImage \"docker.io/flannel/flannel-cni-plugin:v1.1.2\"" Feb 13 20:51:28.606682 containerd[1434]: time="2025-02-13T20:51:28.606592701Z" level=error msg="PullImage \"docker.io/flannel/flannel-cni-plugin:v1.1.2\" failed" error="failed to pull and unpack image \"docker.io/flannel/flannel-cni-plugin:v1.1.2\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/flannel/flannel-cni-plugin/manifests/sha256:bf4b62b131666d040f35a327d906ee5a3418280b68a88d9b9c7e828057210443: 429 Too Many Requests - Server message: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit" Feb 13 20:51:28.606682 containerd[1434]: time="2025-02-13T20:51:28.606625461Z" level=info msg="stop pulling image docker.io/flannel/flannel-cni-plugin:v1.1.2: active requests=0, bytes read=11109" Feb 13 20:51:28.607194 kubelet[2422]: E0213 20:51:28.606785 2422 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = failed to pull and unpack image \"docker.io/flannel/flannel-cni-plugin:v1.1.2\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/flannel/flannel-cni-plugin/manifests/sha256:bf4b62b131666d040f35a327d906ee5a3418280b68a88d9b9c7e828057210443: 429 Too Many Requests - Server message: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit" image="docker.io/flannel/flannel-cni-plugin:v1.1.2" Feb 13 20:51:28.607194 kubelet[2422]: E0213 20:51:28.606831 2422 kuberuntime_image.go:55] "Failed to pull image" err="failed to pull and unpack image \"docker.io/flannel/flannel-cni-plugin:v1.1.2\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/flannel/flannel-cni-plugin/manifests/sha256:bf4b62b131666d040f35a327d906ee5a3418280b68a88d9b9c7e828057210443: 429 Too Many Requests - Server message: toomanyrequests: You have reached your pull rate limit. 
You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit" image="docker.io/flannel/flannel-cni-plugin:v1.1.2" Feb 13 20:51:28.607454 kubelet[2422]: E0213 20:51:28.606930 2422 kuberuntime_manager.go:1341] "Unhandled Error" err="init container &Container{Name:install-cni-plugin,Image:docker.io/flannel/flannel-cni-plugin:v1.1.2,Command:[cp],Args:[-f /flannel /opt/cni/bin/flannel],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:cni-plugin,ReadOnly:false,MountPath:/opt/cni/bin,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-w58p2,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod kube-flannel-ds-chdvq_kube-flannel(21e6622a-36e3-47a8-b025-f56eaad98d84): ErrImagePull: failed to pull and unpack image \"docker.io/flannel/flannel-cni-plugin:v1.1.2\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/flannel/flannel-cni-plugin/manifests/sha256:bf4b62b131666d040f35a327d906ee5a3418280b68a88d9b9c7e828057210443: 429 Too Many Requests - Server message: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit" logger="UnhandledError" Feb 13 20:51:28.608163 kubelet[2422]: E0213 20:51:28.608118 2422 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"install-cni-plugin\" with ErrImagePull: \"failed to pull and unpack image \\\"docker.io/flannel/flannel-cni-plugin:v1.1.2\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/flannel/flannel-cni-plugin/manifests/sha256:bf4b62b131666d040f35a327d906ee5a3418280b68a88d9b9c7e828057210443: 429 Too Many Requests - Server message: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit\"" pod="kube-flannel/kube-flannel-ds-chdvq" podUID="21e6622a-36e3-47a8-b025-f56eaad98d84" Feb 13 20:51:29.259496 systemd[1]: Started sshd@9-10.0.0.9:22-10.0.0.1:37596.service - OpenSSH per-connection server daemon (10.0.0.1:37596). Feb 13 20:51:29.297816 sshd[2820]: Accepted publickey for core from 10.0.0.1 port 37596 ssh2: RSA SHA256:cMrn/QNbOj3/muQN24sDQmRTV2Hbz1QD+PbuA/uipHg Feb 13 20:51:29.298967 sshd[2820]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 20:51:29.302712 systemd-logind[1418]: New session 10 of user core. Feb 13 20:51:29.313259 systemd[1]: Started session-10.scope - Session 10 of User core. Feb 13 20:51:29.419258 sshd[2820]: pam_unix(sshd:session): session closed for user core Feb 13 20:51:29.421837 systemd[1]: sshd@9-10.0.0.9:22-10.0.0.1:37596.service: Deactivated successfully. Feb 13 20:51:29.423564 systemd[1]: session-10.scope: Deactivated successfully. 
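The single-line Container{...} dump inside each UnhandledError record is the failing init container serialized as a Go struct, which makes it hard to scan. Reconstructed into manifest form purely from fields already present in the log, the relevant fragment of the kube-flannel-ds-chdvq pod spec is:

    initContainers:
    - name: install-cni-plugin
      image: docker.io/flannel/flannel-cni-plugin:v1.1.2
      command: ["cp"]
      args: ["-f", "/flannel", "/opt/cni/bin/flannel"]
      imagePullPolicy: IfNotPresent
      volumeMounts:
      - name: cni-plugin
        mountPath: /opt/cni/bin
      - name: kube-api-access-w58p2
        readOnly: true
        mountPath: /var/run/secrets/kubernetes.io/serviceaccount

Until this image pulls, the cp never runs, /opt/cni/bin/flannel never appears, and the rest of the DaemonSet stays blocked.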
Feb 13 20:51:29.424863 systemd-logind[1418]: Session 10 logged out. Waiting for processes to exit. Feb 13 20:51:29.425885 systemd-logind[1418]: Removed session 10. Feb 13 20:51:34.429626 systemd[1]: Started sshd@10-10.0.0.9:22-10.0.0.1:58996.service - OpenSSH per-connection server daemon (10.0.0.1:58996). Feb 13 20:51:34.466784 sshd[2836]: Accepted publickey for core from 10.0.0.1 port 58996 ssh2: RSA SHA256:cMrn/QNbOj3/muQN24sDQmRTV2Hbz1QD+PbuA/uipHg Feb 13 20:51:34.467900 sshd[2836]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 20:51:34.472826 systemd-logind[1418]: New session 11 of user core. Feb 13 20:51:34.488280 systemd[1]: Started session-11.scope - Session 11 of User core. Feb 13 20:51:34.594011 sshd[2836]: pam_unix(sshd:session): session closed for user core Feb 13 20:51:34.597356 systemd[1]: sshd@10-10.0.0.9:22-10.0.0.1:58996.service: Deactivated successfully. Feb 13 20:51:34.599520 systemd[1]: session-11.scope: Deactivated successfully. Feb 13 20:51:34.600290 systemd-logind[1418]: Session 11 logged out. Waiting for processes to exit. Feb 13 20:51:34.601280 systemd-logind[1418]: Removed session 11. Feb 13 20:51:39.470398 kubelet[2422]: E0213 20:51:39.470184 2422 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 20:51:39.471699 kubelet[2422]: E0213 20:51:39.471640 2422 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"install-cni-plugin\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/flannel/flannel-cni-plugin:v1.1.2\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/flannel/flannel-cni-plugin:v1.1.2\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/flannel/flannel-cni-plugin/manifests/sha256:bf4b62b131666d040f35a327d906ee5a3418280b68a88d9b9c7e828057210443: 429 Too Many Requests - Server message: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit\"" pod="kube-flannel/kube-flannel-ds-chdvq" podUID="21e6622a-36e3-47a8-b025-f56eaad98d84" Feb 13 20:51:39.605164 systemd[1]: Started sshd@11-10.0.0.9:22-10.0.0.1:59012.service - OpenSSH per-connection server daemon (10.0.0.1:59012). Feb 13 20:51:39.642453 sshd[2852]: Accepted publickey for core from 10.0.0.1 port 59012 ssh2: RSA SHA256:cMrn/QNbOj3/muQN24sDQmRTV2Hbz1QD+PbuA/uipHg Feb 13 20:51:39.643664 sshd[2852]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 20:51:39.647394 systemd-logind[1418]: New session 12 of user core. Feb 13 20:51:39.654226 systemd[1]: Started session-12.scope - Session 12 of User core. Feb 13 20:51:39.760717 sshd[2852]: pam_unix(sshd:session): session closed for user core Feb 13 20:51:39.765230 systemd[1]: sshd@11-10.0.0.9:22-10.0.0.1:59012.service: Deactivated successfully. Feb 13 20:51:39.766801 systemd[1]: session-12.scope: Deactivated successfully. Feb 13 20:51:39.767539 systemd-logind[1418]: Session 12 logged out. Waiting for processes to exit. Feb 13 20:51:39.768889 systemd-logind[1418]: Removed session 12. Feb 13 20:51:44.775484 systemd[1]: Started sshd@12-10.0.0.9:22-10.0.0.1:49202.service - OpenSSH per-connection server daemon (10.0.0.1:49202). 
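Note the two distinct reasons in the pod_workers.go:1301 records: ErrImagePull marks syncs where a pull was actually attempted and failed, while ImagePullBackOff marks the syncs in between, where the kubelet is only waiting out its backoff timer. The init container's current waiting reason can be read straight from the pod status (standard kubectl; pod and namespace names are the ones in the log):

    kubectl -n kube-flannel get pod kube-flannel-ds-chdvq \
      -o jsonpath='{.status.initContainerStatuses[0].state.waiting.reason}'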
Feb 13 20:51:44.813526 sshd[2870]: Accepted publickey for core from 10.0.0.1 port 49202 ssh2: RSA SHA256:cMrn/QNbOj3/muQN24sDQmRTV2Hbz1QD+PbuA/uipHg Feb 13 20:51:44.814684 sshd[2870]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 20:51:44.818417 systemd-logind[1418]: New session 13 of user core. Feb 13 20:51:44.831252 systemd[1]: Started session-13.scope - Session 13 of User core. Feb 13 20:51:44.935079 sshd[2870]: pam_unix(sshd:session): session closed for user core Feb 13 20:51:44.939180 systemd[1]: sshd@12-10.0.0.9:22-10.0.0.1:49202.service: Deactivated successfully. Feb 13 20:51:44.940745 systemd[1]: session-13.scope: Deactivated successfully. Feb 13 20:51:44.941364 systemd-logind[1418]: Session 13 logged out. Waiting for processes to exit. Feb 13 20:51:44.942181 systemd-logind[1418]: Removed session 13. Feb 13 20:51:49.469551 kubelet[2422]: E0213 20:51:49.469476 2422 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 20:51:49.945520 systemd[1]: Started sshd@13-10.0.0.9:22-10.0.0.1:49216.service - OpenSSH per-connection server daemon (10.0.0.1:49216). Feb 13 20:51:49.983281 sshd[2887]: Accepted publickey for core from 10.0.0.1 port 49216 ssh2: RSA SHA256:cMrn/QNbOj3/muQN24sDQmRTV2Hbz1QD+PbuA/uipHg Feb 13 20:51:49.984459 sshd[2887]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 20:51:49.988006 systemd-logind[1418]: New session 14 of user core. Feb 13 20:51:49.995226 systemd[1]: Started session-14.scope - Session 14 of User core. Feb 13 20:51:50.101496 sshd[2887]: pam_unix(sshd:session): session closed for user core Feb 13 20:51:50.104640 systemd[1]: sshd@13-10.0.0.9:22-10.0.0.1:49216.service: Deactivated successfully. Feb 13 20:51:50.107529 systemd[1]: session-14.scope: Deactivated successfully. Feb 13 20:51:50.108076 systemd-logind[1418]: Session 14 logged out. Waiting for processes to exit. Feb 13 20:51:50.108823 systemd-logind[1418]: Removed session 14. Feb 13 20:51:51.470323 kubelet[2422]: E0213 20:51:51.469698 2422 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 20:51:51.470323 kubelet[2422]: E0213 20:51:51.469859 2422 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 20:51:51.470323 kubelet[2422]: E0213 20:51:51.470267 2422 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"install-cni-plugin\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/flannel/flannel-cni-plugin:v1.1.2\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/flannel/flannel-cni-plugin:v1.1.2\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/flannel/flannel-cni-plugin/manifests/sha256:bf4b62b131666d040f35a327d906ee5a3418280b68a88d9b9c7e828057210443: 429 Too Many Requests - Server message: toomanyrequests: You have reached your pull rate limit. 
You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit\"" pod="kube-flannel/kube-flannel-ds-chdvq" podUID="21e6622a-36e3-47a8-b025-f56eaad98d84" Feb 13 20:51:55.112766 systemd[1]: Started sshd@14-10.0.0.9:22-10.0.0.1:33398.service - OpenSSH per-connection server daemon (10.0.0.1:33398). Feb 13 20:51:55.150123 sshd[2903]: Accepted publickey for core from 10.0.0.1 port 33398 ssh2: RSA SHA256:cMrn/QNbOj3/muQN24sDQmRTV2Hbz1QD+PbuA/uipHg Feb 13 20:51:55.151362 sshd[2903]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 20:51:55.154890 systemd-logind[1418]: New session 15 of user core. Feb 13 20:51:55.162218 systemd[1]: Started session-15.scope - Session 15 of User core. Feb 13 20:51:55.266859 sshd[2903]: pam_unix(sshd:session): session closed for user core Feb 13 20:51:55.270013 systemd[1]: sshd@14-10.0.0.9:22-10.0.0.1:33398.service: Deactivated successfully. Feb 13 20:51:55.271609 systemd[1]: session-15.scope: Deactivated successfully. Feb 13 20:51:55.272515 systemd-logind[1418]: Session 15 logged out. Waiting for processes to exit. Feb 13 20:51:55.273467 systemd-logind[1418]: Removed session 15. Feb 13 20:52:00.279656 systemd[1]: Started sshd@15-10.0.0.9:22-10.0.0.1:33404.service - OpenSSH per-connection server daemon (10.0.0.1:33404). Feb 13 20:52:00.317514 sshd[2919]: Accepted publickey for core from 10.0.0.1 port 33404 ssh2: RSA SHA256:cMrn/QNbOj3/muQN24sDQmRTV2Hbz1QD+PbuA/uipHg Feb 13 20:52:00.318660 sshd[2919]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 20:52:00.322643 systemd-logind[1418]: New session 16 of user core. Feb 13 20:52:00.332258 systemd[1]: Started session-16.scope - Session 16 of User core. Feb 13 20:52:00.437903 sshd[2919]: pam_unix(sshd:session): session closed for user core Feb 13 20:52:00.441510 systemd[1]: sshd@15-10.0.0.9:22-10.0.0.1:33404.service: Deactivated successfully. Feb 13 20:52:00.443180 systemd[1]: session-16.scope: Deactivated successfully. Feb 13 20:52:00.444664 systemd-logind[1418]: Session 16 logged out. Waiting for processes to exit. Feb 13 20:52:00.445457 systemd-logind[1418]: Removed session 16. Feb 13 20:52:03.470110 kubelet[2422]: E0213 20:52:03.469910 2422 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 20:52:03.471218 kubelet[2422]: E0213 20:52:03.471153 2422 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"install-cni-plugin\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/flannel/flannel-cni-plugin:v1.1.2\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/flannel/flannel-cni-plugin:v1.1.2\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/flannel/flannel-cni-plugin/manifests/sha256:bf4b62b131666d040f35a327d906ee5a3418280b68a88d9b9c7e828057210443: 429 Too Many Requests - Server message: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit\"" pod="kube-flannel/kube-flannel-ds-chdvq" podUID="21e6622a-36e3-47a8-b025-f56eaad98d84" Feb 13 20:52:05.448707 systemd[1]: Started sshd@16-10.0.0.9:22-10.0.0.1:47240.service - OpenSSH per-connection server daemon (10.0.0.1:47240). 
Feb 13 20:52:05.486900 sshd[2935]: Accepted publickey for core from 10.0.0.1 port 47240 ssh2: RSA SHA256:cMrn/QNbOj3/muQN24sDQmRTV2Hbz1QD+PbuA/uipHg Feb 13 20:52:05.488141 sshd[2935]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 20:52:05.491737 systemd-logind[1418]: New session 17 of user core. Feb 13 20:52:05.498226 systemd[1]: Started session-17.scope - Session 17 of User core. Feb 13 20:52:05.601837 sshd[2935]: pam_unix(sshd:session): session closed for user core Feb 13 20:52:05.605187 systemd[1]: sshd@16-10.0.0.9:22-10.0.0.1:47240.service: Deactivated successfully. Feb 13 20:52:05.607633 systemd[1]: session-17.scope: Deactivated successfully. Feb 13 20:52:05.608479 systemd-logind[1418]: Session 17 logged out. Waiting for processes to exit. Feb 13 20:52:05.609176 systemd-logind[1418]: Removed session 17. Feb 13 20:52:10.618593 systemd[1]: Started sshd@17-10.0.0.9:22-10.0.0.1:47248.service - OpenSSH per-connection server daemon (10.0.0.1:47248). Feb 13 20:52:10.656096 sshd[2951]: Accepted publickey for core from 10.0.0.1 port 47248 ssh2: RSA SHA256:cMrn/QNbOj3/muQN24sDQmRTV2Hbz1QD+PbuA/uipHg Feb 13 20:52:10.657288 sshd[2951]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 20:52:10.660595 systemd-logind[1418]: New session 18 of user core. Feb 13 20:52:10.672234 systemd[1]: Started session-18.scope - Session 18 of User core. Feb 13 20:52:10.776809 sshd[2951]: pam_unix(sshd:session): session closed for user core Feb 13 20:52:10.779863 systemd[1]: sshd@17-10.0.0.9:22-10.0.0.1:47248.service: Deactivated successfully. Feb 13 20:52:10.781542 systemd[1]: session-18.scope: Deactivated successfully. Feb 13 20:52:10.783557 systemd-logind[1418]: Session 18 logged out. Waiting for processes to exit. Feb 13 20:52:10.784458 systemd-logind[1418]: Removed session 18. Feb 13 20:52:15.469331 kubelet[2422]: E0213 20:52:15.469295 2422 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 20:52:15.787578 systemd[1]: Started sshd@18-10.0.0.9:22-10.0.0.1:47734.service - OpenSSH per-connection server daemon (10.0.0.1:47734). Feb 13 20:52:15.825338 sshd[2967]: Accepted publickey for core from 10.0.0.1 port 47734 ssh2: RSA SHA256:cMrn/QNbOj3/muQN24sDQmRTV2Hbz1QD+PbuA/uipHg Feb 13 20:52:15.826525 sshd[2967]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 20:52:15.829892 systemd-logind[1418]: New session 19 of user core. Feb 13 20:52:15.836308 systemd[1]: Started session-19.scope - Session 19 of User core. Feb 13 20:52:15.939290 sshd[2967]: pam_unix(sshd:session): session closed for user core Feb 13 20:52:15.942516 systemd[1]: sshd@18-10.0.0.9:22-10.0.0.1:47734.service: Deactivated successfully. Feb 13 20:52:15.944160 systemd[1]: session-19.scope: Deactivated successfully. Feb 13 20:52:15.945661 systemd-logind[1418]: Session 19 logged out. Waiting for processes to exit. Feb 13 20:52:15.946563 systemd-logind[1418]: Removed session 19. 
Feb 13 20:52:17.469699 kubelet[2422]: E0213 20:52:17.469669 2422 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 20:52:17.471174 containerd[1434]: time="2025-02-13T20:52:17.471134918Z" level=info msg="PullImage \"docker.io/flannel/flannel-cni-plugin:v1.1.2\"" Feb 13 20:52:18.598161 containerd[1434]: time="2025-02-13T20:52:18.598107658Z" level=error msg="PullImage \"docker.io/flannel/flannel-cni-plugin:v1.1.2\" failed" error="failed to pull and unpack image \"docker.io/flannel/flannel-cni-plugin:v1.1.2\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/flannel/flannel-cni-plugin/manifests/sha256:bf4b62b131666d040f35a327d906ee5a3418280b68a88d9b9c7e828057210443: 429 Too Many Requests - Server message: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit" Feb 13 20:52:18.598545 containerd[1434]: time="2025-02-13T20:52:18.598116498Z" level=info msg="stop pulling image docker.io/flannel/flannel-cni-plugin:v1.1.2: active requests=0, bytes read=11110" Feb 13 20:52:18.598595 kubelet[2422]: E0213 20:52:18.598313 2422 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = failed to pull and unpack image \"docker.io/flannel/flannel-cni-plugin:v1.1.2\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/flannel/flannel-cni-plugin/manifests/sha256:bf4b62b131666d040f35a327d906ee5a3418280b68a88d9b9c7e828057210443: 429 Too Many Requests - Server message: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit" image="docker.io/flannel/flannel-cni-plugin:v1.1.2" Feb 13 20:52:18.598595 kubelet[2422]: E0213 20:52:18.598358 2422 kuberuntime_image.go:55] "Failed to pull image" err="failed to pull and unpack image \"docker.io/flannel/flannel-cni-plugin:v1.1.2\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/flannel/flannel-cni-plugin/manifests/sha256:bf4b62b131666d040f35a327d906ee5a3418280b68a88d9b9c7e828057210443: 429 Too Many Requests - Server message: toomanyrequests: You have reached your pull rate limit. 
You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit" image="docker.io/flannel/flannel-cni-plugin:v1.1.2" Feb 13 20:52:18.598821 kubelet[2422]: E0213 20:52:18.598438 2422 kuberuntime_manager.go:1341] "Unhandled Error" err="init container &Container{Name:install-cni-plugin,Image:docker.io/flannel/flannel-cni-plugin:v1.1.2,Command:[cp],Args:[-f /flannel /opt/cni/bin/flannel],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:cni-plugin,ReadOnly:false,MountPath:/opt/cni/bin,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-w58p2,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod kube-flannel-ds-chdvq_kube-flannel(21e6622a-36e3-47a8-b025-f56eaad98d84): ErrImagePull: failed to pull and unpack image \"docker.io/flannel/flannel-cni-plugin:v1.1.2\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/flannel/flannel-cni-plugin/manifests/sha256:bf4b62b131666d040f35a327d906ee5a3418280b68a88d9b9c7e828057210443: 429 Too Many Requests - Server message: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit" logger="UnhandledError" Feb 13 20:52:18.599898 kubelet[2422]: E0213 20:52:18.599869 2422 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"install-cni-plugin\" with ErrImagePull: \"failed to pull and unpack image \\\"docker.io/flannel/flannel-cni-plugin:v1.1.2\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/flannel/flannel-cni-plugin/manifests/sha256:bf4b62b131666d040f35a327d906ee5a3418280b68a88d9b9c7e828057210443: 429 Too Many Requests - Server message: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit\"" pod="kube-flannel/kube-flannel-ds-chdvq" podUID="21e6622a-36e3-47a8-b025-f56eaad98d84" Feb 13 20:52:20.950876 systemd[1]: Started sshd@19-10.0.0.9:22-10.0.0.1:47746.service - OpenSSH per-connection server daemon (10.0.0.1:47746). Feb 13 20:52:20.988256 sshd[2985]: Accepted publickey for core from 10.0.0.1 port 47746 ssh2: RSA SHA256:cMrn/QNbOj3/muQN24sDQmRTV2Hbz1QD+PbuA/uipHg Feb 13 20:52:20.989394 sshd[2985]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 20:52:20.992763 systemd-logind[1418]: New session 20 of user core. Feb 13 20:52:21.002261 systemd[1]: Started session-20.scope - Session 20 of User core. Feb 13 20:52:21.106806 sshd[2985]: pam_unix(sshd:session): session closed for user core Feb 13 20:52:21.110009 systemd[1]: sshd@19-10.0.0.9:22-10.0.0.1:47746.service: Deactivated successfully. Feb 13 20:52:21.111628 systemd[1]: session-20.scope: Deactivated successfully. 
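The spacing of the pull attempts — 20:50:49, 20:51:03, 20:51:27, 20:52:17 — shows the kubelet's capped exponential backoff at work, each wait roughly doubling. A small Go sketch of that policy, assuming the usual kubelet defaults of a 10s base doubling up to a 5m cap (values not stated in this log):

    // backoff.go: capped exponential backoff as the kubelet applies it
    // between failed image pulls (base and cap are assumed defaults).
    package main

    import (
        "fmt"
        "time"
    )

    func backoff(attempt int, base, limit time.Duration) time.Duration {
        d := base
        for i := 0; i < attempt; i++ {
            d *= 2
            if d >= limit {
                return limit
            }
        }
        return d
    }

    func main() {
        for i := 0; i < 6; i++ {
            fmt.Printf("retry %d waits %v\n", i+1, backoff(i, 10*time.Second, 5*time.Minute))
        }
    }

Run, it prints 10s, 20s, 40s, 1m20s, 2m40s, then 5m0s — consistent with the ~13s, ~23s, and ~49s gaps between the failures above (each roughly 10/20/40s of backoff plus sync latency).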
Feb 13 20:52:21.112292 systemd-logind[1418]: Session 20 logged out. Waiting for processes to exit. Feb 13 20:52:21.113272 systemd-logind[1418]: Removed session 20. Feb 13 20:52:23.469251 kubelet[2422]: E0213 20:52:23.469208 2422 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 20:52:26.116531 systemd[1]: Started sshd@20-10.0.0.9:22-10.0.0.1:38844.service - OpenSSH per-connection server daemon (10.0.0.1:38844). Feb 13 20:52:26.154513 sshd[3001]: Accepted publickey for core from 10.0.0.1 port 38844 ssh2: RSA SHA256:cMrn/QNbOj3/muQN24sDQmRTV2Hbz1QD+PbuA/uipHg Feb 13 20:52:26.156008 sshd[3001]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 20:52:26.159560 systemd-logind[1418]: New session 21 of user core. Feb 13 20:52:26.169227 systemd[1]: Started session-21.scope - Session 21 of User core. Feb 13 20:52:26.274549 sshd[3001]: pam_unix(sshd:session): session closed for user core Feb 13 20:52:26.277824 systemd[1]: sshd@20-10.0.0.9:22-10.0.0.1:38844.service: Deactivated successfully. Feb 13 20:52:26.279337 systemd[1]: session-21.scope: Deactivated successfully. Feb 13 20:52:26.280489 systemd-logind[1418]: Session 21 logged out. Waiting for processes to exit. Feb 13 20:52:26.281290 systemd-logind[1418]: Removed session 21. Feb 13 20:52:30.469644 kubelet[2422]: E0213 20:52:30.469586 2422 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 20:52:30.470530 kubelet[2422]: E0213 20:52:30.470489 2422 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"install-cni-plugin\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/flannel/flannel-cni-plugin:v1.1.2\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/flannel/flannel-cni-plugin:v1.1.2\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/flannel/flannel-cni-plugin/manifests/sha256:bf4b62b131666d040f35a327d906ee5a3418280b68a88d9b9c7e828057210443: 429 Too Many Requests - Server message: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit\"" pod="kube-flannel/kube-flannel-ds-chdvq" podUID="21e6622a-36e3-47a8-b025-f56eaad98d84" Feb 13 20:52:31.285472 systemd[1]: Started sshd@21-10.0.0.9:22-10.0.0.1:38846.service - OpenSSH per-connection server daemon (10.0.0.1:38846). Feb 13 20:52:31.322705 sshd[3017]: Accepted publickey for core from 10.0.0.1 port 38846 ssh2: RSA SHA256:cMrn/QNbOj3/muQN24sDQmRTV2Hbz1QD+PbuA/uipHg Feb 13 20:52:31.323875 sshd[3017]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 20:52:31.327671 systemd-logind[1418]: New session 22 of user core. Feb 13 20:52:31.336225 systemd[1]: Started session-22.scope - Session 22 of User core. Feb 13 20:52:31.439900 sshd[3017]: pam_unix(sshd:session): session closed for user core Feb 13 20:52:31.442309 systemd[1]: sshd@21-10.0.0.9:22-10.0.0.1:38846.service: Deactivated successfully. Feb 13 20:52:31.443861 systemd[1]: session-22.scope: Deactivated successfully. Feb 13 20:52:31.444976 systemd-logind[1418]: Session 22 logged out. Waiting for processes to exit. Feb 13 20:52:31.445866 systemd-logind[1418]: Removed session 22. 
Feb 13 20:52:36.450623 systemd[1]: Started sshd@22-10.0.0.9:22-10.0.0.1:54076.service - OpenSSH per-connection server daemon (10.0.0.1:54076). Feb 13 20:52:36.488538 sshd[3034]: Accepted publickey for core from 10.0.0.1 port 54076 ssh2: RSA SHA256:cMrn/QNbOj3/muQN24sDQmRTV2Hbz1QD+PbuA/uipHg Feb 13 20:52:36.489689 sshd[3034]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 20:52:36.493430 systemd-logind[1418]: New session 23 of user core. Feb 13 20:52:36.504227 systemd[1]: Started session-23.scope - Session 23 of User core. Feb 13 20:52:36.608590 sshd[3034]: pam_unix(sshd:session): session closed for user core Feb 13 20:52:36.611972 systemd[1]: sshd@22-10.0.0.9:22-10.0.0.1:54076.service: Deactivated successfully. Feb 13 20:52:36.613612 systemd[1]: session-23.scope: Deactivated successfully. Feb 13 20:52:36.615039 systemd-logind[1418]: Session 23 logged out. Waiting for processes to exit. Feb 13 20:52:36.616269 systemd-logind[1418]: Removed session 23. Feb 13 20:52:41.620840 systemd[1]: Started sshd@23-10.0.0.9:22-10.0.0.1:54092.service - OpenSSH per-connection server daemon (10.0.0.1:54092). Feb 13 20:52:41.658440 sshd[3049]: Accepted publickey for core from 10.0.0.1 port 54092 ssh2: RSA SHA256:cMrn/QNbOj3/muQN24sDQmRTV2Hbz1QD+PbuA/uipHg Feb 13 20:52:41.659567 sshd[3049]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 20:52:41.662759 systemd-logind[1418]: New session 24 of user core. Feb 13 20:52:41.678283 systemd[1]: Started session-24.scope - Session 24 of User core. Feb 13 20:52:41.782293 sshd[3049]: pam_unix(sshd:session): session closed for user core Feb 13 20:52:41.785392 systemd[1]: sshd@23-10.0.0.9:22-10.0.0.1:54092.service: Deactivated successfully. Feb 13 20:52:41.787632 systemd[1]: session-24.scope: Deactivated successfully. Feb 13 20:52:41.788377 systemd-logind[1418]: Session 24 logged out. Waiting for processes to exit. Feb 13 20:52:41.789138 systemd-logind[1418]: Removed session 24. Feb 13 20:52:42.472455 kubelet[2422]: E0213 20:52:42.472415 2422 kubelet_node_status.go:461] "Node not becoming ready in time after startup" Feb 13 20:52:42.538397 kubelet[2422]: E0213 20:52:42.538367 2422 kubelet.go:3008] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Feb 13 20:52:44.469567 kubelet[2422]: E0213 20:52:44.469452 2422 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 20:52:44.470789 kubelet[2422]: E0213 20:52:44.470746 2422 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"install-cni-plugin\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/flannel/flannel-cni-plugin:v1.1.2\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/flannel/flannel-cni-plugin:v1.1.2\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/flannel/flannel-cni-plugin/manifests/sha256:bf4b62b131666d040f35a327d906ee5a3418280b68a88d9b9c7e828057210443: 429 Too Many Requests - Server message: toomanyrequests: You have reached your pull rate limit. 
You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit\"" pod="kube-flannel/kube-flannel-ds-chdvq" podUID="21e6622a-36e3-47a8-b025-f56eaad98d84" Feb 13 20:52:46.794549 systemd[1]: Started sshd@24-10.0.0.9:22-10.0.0.1:37232.service - OpenSSH per-connection server daemon (10.0.0.1:37232). Feb 13 20:52:46.831763 sshd[3067]: Accepted publickey for core from 10.0.0.1 port 37232 ssh2: RSA SHA256:cMrn/QNbOj3/muQN24sDQmRTV2Hbz1QD+PbuA/uipHg Feb 13 20:52:46.832947 sshd[3067]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 20:52:46.836823 systemd-logind[1418]: New session 25 of user core. Feb 13 20:52:46.848298 systemd[1]: Started session-25.scope - Session 25 of User core. Feb 13 20:52:46.953719 sshd[3067]: pam_unix(sshd:session): session closed for user core Feb 13 20:52:46.956958 systemd[1]: sshd@24-10.0.0.9:22-10.0.0.1:37232.service: Deactivated successfully. Feb 13 20:52:46.958534 systemd[1]: session-25.scope: Deactivated successfully. Feb 13 20:52:46.959957 systemd-logind[1418]: Session 25 logged out. Waiting for processes to exit. Feb 13 20:52:46.961191 systemd-logind[1418]: Removed session 25. Feb 13 20:52:47.539783 kubelet[2422]: E0213 20:52:47.539746 2422 kubelet.go:3008] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Feb 13 20:52:51.964845 systemd[1]: Started sshd@25-10.0.0.9:22-10.0.0.1:37248.service - OpenSSH per-connection server daemon (10.0.0.1:37248). Feb 13 20:52:52.001893 sshd[3085]: Accepted publickey for core from 10.0.0.1 port 37248 ssh2: RSA SHA256:cMrn/QNbOj3/muQN24sDQmRTV2Hbz1QD+PbuA/uipHg Feb 13 20:52:52.003113 sshd[3085]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 20:52:52.006381 systemd-logind[1418]: New session 26 of user core. Feb 13 20:52:52.019271 systemd[1]: Started session-26.scope - Session 26 of User core. Feb 13 20:52:52.129627 sshd[3085]: pam_unix(sshd:session): session closed for user core Feb 13 20:52:52.133043 systemd[1]: sshd@25-10.0.0.9:22-10.0.0.1:37248.service: Deactivated successfully. Feb 13 20:52:52.135313 systemd[1]: session-26.scope: Deactivated successfully. Feb 13 20:52:52.136057 systemd-logind[1418]: Session 26 logged out. Waiting for processes to exit. Feb 13 20:52:52.136841 systemd-logind[1418]: Removed session 26. 
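At 20:52:42 the cost of the stalled pull becomes visible at node scope: kubelet_node_status.go reports the node is not becoming ready, and kubelet.go:3008 names the cause — the CNI plugin was never initialized, because install-cni-plugin has still not run. The chain is easy to confirm from any working kubeconfig (standard kubectl; object names are the ones in the log):

    kubectl get nodes                                            # node shows NotReady while the CNI is uninitialized
    kubectl -n kube-flannel describe pod kube-flannel-ds-chdvq   # Events: ErrImagePull / ImagePullBackOff
    ls /opt/cni/bin                                              # on the node: no flannel binary yet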
Feb 13 20:52:52.540431 kubelet[2422]: E0213 20:52:52.540393 2422 kubelet.go:3008] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Feb 13 20:52:55.469585 kubelet[2422]: E0213 20:52:55.469537 2422 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 20:52:55.470075 kubelet[2422]: E0213 20:52:55.470031 2422 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"install-cni-plugin\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/flannel/flannel-cni-plugin:v1.1.2\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/flannel/flannel-cni-plugin:v1.1.2\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/flannel/flannel-cni-plugin/manifests/sha256:bf4b62b131666d040f35a327d906ee5a3418280b68a88d9b9c7e828057210443: 429 Too Many Requests - Server message: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit\"" pod="kube-flannel/kube-flannel-ds-chdvq" podUID="21e6622a-36e3-47a8-b025-f56eaad98d84" Feb 13 20:52:57.140646 systemd[1]: Started sshd@26-10.0.0.9:22-10.0.0.1:49112.service - OpenSSH per-connection server daemon (10.0.0.1:49112). Feb 13 20:52:57.178593 sshd[3100]: Accepted publickey for core from 10.0.0.1 port 49112 ssh2: RSA SHA256:cMrn/QNbOj3/muQN24sDQmRTV2Hbz1QD+PbuA/uipHg Feb 13 20:52:57.179766 sshd[3100]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 20:52:57.183535 systemd-logind[1418]: New session 27 of user core. Feb 13 20:52:57.192221 systemd[1]: Started session-27.scope - Session 27 of User core. Feb 13 20:52:57.294932 sshd[3100]: pam_unix(sshd:session): session closed for user core Feb 13 20:52:57.297702 systemd[1]: sshd@26-10.0.0.9:22-10.0.0.1:49112.service: Deactivated successfully. Feb 13 20:52:57.299227 systemd[1]: session-27.scope: Deactivated successfully. Feb 13 20:52:57.299836 systemd-logind[1418]: Session 27 logged out. Waiting for processes to exit. Feb 13 20:52:57.300603 systemd-logind[1418]: Removed session 27. Feb 13 20:52:57.541966 kubelet[2422]: E0213 20:52:57.541871 2422 kubelet.go:3008] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Feb 13 20:53:01.469588 kubelet[2422]: E0213 20:53:01.469510 2422 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 20:53:02.304634 systemd[1]: Started sshd@27-10.0.0.9:22-10.0.0.1:49122.service - OpenSSH per-connection server daemon (10.0.0.1:49122). Feb 13 20:53:02.342094 sshd[3116]: Accepted publickey for core from 10.0.0.1 port 49122 ssh2: RSA SHA256:cMrn/QNbOj3/muQN24sDQmRTV2Hbz1QD+PbuA/uipHg Feb 13 20:53:02.343250 sshd[3116]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 20:53:02.346976 systemd-logind[1418]: New session 28 of user core. Feb 13 20:53:02.353222 systemd[1]: Started session-28.scope - Session 28 of User core. 
Feb 13 20:53:02.457708 sshd[3116]: pam_unix(sshd:session): session closed for user core Feb 13 20:53:02.460874 systemd[1]: sshd@27-10.0.0.9:22-10.0.0.1:49122.service: Deactivated successfully. Feb 13 20:53:02.462615 systemd[1]: session-28.scope: Deactivated successfully. Feb 13 20:53:02.463313 systemd-logind[1418]: Session 28 logged out. Waiting for processes to exit. Feb 13 20:53:02.464227 systemd-logind[1418]: Removed session 28. Feb 13 20:53:02.543357 kubelet[2422]: E0213 20:53:02.543314 2422 kubelet.go:3008] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Feb 13 20:53:06.469941 kubelet[2422]: E0213 20:53:06.469587 2422 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 20:53:06.470668 kubelet[2422]: E0213 20:53:06.470619 2422 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"install-cni-plugin\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/flannel/flannel-cni-plugin:v1.1.2\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/flannel/flannel-cni-plugin:v1.1.2\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/flannel/flannel-cni-plugin/manifests/sha256:bf4b62b131666d040f35a327d906ee5a3418280b68a88d9b9c7e828057210443: 429 Too Many Requests - Server message: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit\"" pod="kube-flannel/kube-flannel-ds-chdvq" podUID="21e6622a-36e3-47a8-b025-f56eaad98d84" Feb 13 20:53:07.468546 systemd[1]: Started sshd@28-10.0.0.9:22-10.0.0.1:51298.service - OpenSSH per-connection server daemon (10.0.0.1:51298). Feb 13 20:53:07.506403 sshd[3132]: Accepted publickey for core from 10.0.0.1 port 51298 ssh2: RSA SHA256:cMrn/QNbOj3/muQN24sDQmRTV2Hbz1QD+PbuA/uipHg Feb 13 20:53:07.507500 sshd[3132]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 20:53:07.511047 systemd-logind[1418]: New session 29 of user core. Feb 13 20:53:07.517231 systemd[1]: Started session-29.scope - Session 29 of User core. Feb 13 20:53:07.544353 kubelet[2422]: E0213 20:53:07.544322 2422 kubelet.go:3008] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Feb 13 20:53:07.623274 sshd[3132]: pam_unix(sshd:session): session closed for user core Feb 13 20:53:07.626232 systemd[1]: sshd@28-10.0.0.9:22-10.0.0.1:51298.service: Deactivated successfully. Feb 13 20:53:07.627881 systemd[1]: session-29.scope: Deactivated successfully. Feb 13 20:53:07.628493 systemd-logind[1418]: Session 29 logged out. Waiting for processes to exit. Feb 13 20:53:07.629312 systemd-logind[1418]: Removed session 29. Feb 13 20:53:12.545152 kubelet[2422]: E0213 20:53:12.545081 2422 kubelet.go:3008] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Feb 13 20:53:12.634601 systemd[1]: Started sshd@29-10.0.0.9:22-10.0.0.1:54158.service - OpenSSH per-connection server daemon (10.0.0.1:54158). 
Feb 13 20:53:12.672054 sshd[3147]: Accepted publickey for core from 10.0.0.1 port 54158 ssh2: RSA SHA256:cMrn/QNbOj3/muQN24sDQmRTV2Hbz1QD+PbuA/uipHg
Feb 13 20:53:12.673251 sshd[3147]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 20:53:12.678710 systemd-logind[1418]: New session 30 of user core.
Feb 13 20:53:12.684229 systemd[1]: Started session-30.scope - Session 30 of User core.
Feb 13 20:53:12.788364 sshd[3147]: pam_unix(sshd:session): session closed for user core
Feb 13 20:53:12.791272 systemd[1]: sshd@29-10.0.0.9:22-10.0.0.1:54158.service: Deactivated successfully.
Feb 13 20:53:12.792800 systemd[1]: session-30.scope: Deactivated successfully.
Feb 13 20:53:12.793432 systemd-logind[1418]: Session 30 logged out. Waiting for processes to exit.
Feb 13 20:53:12.794418 systemd-logind[1418]: Removed session 30.
Feb 13 20:53:17.546656 kubelet[2422]: E0213 20:53:17.546618 2422 kubelet.go:3008] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Feb 13 20:53:17.805602 systemd[1]: Started sshd@30-10.0.0.9:22-10.0.0.1:54166.service - OpenSSH per-connection server daemon (10.0.0.1:54166).
Feb 13 20:53:17.843341 sshd[3164]: Accepted publickey for core from 10.0.0.1 port 54166 ssh2: RSA SHA256:cMrn/QNbOj3/muQN24sDQmRTV2Hbz1QD+PbuA/uipHg
Feb 13 20:53:17.844497 sshd[3164]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 20:53:17.848012 systemd-logind[1418]: New session 31 of user core.
Feb 13 20:53:17.860253 systemd[1]: Started session-31.scope - Session 31 of User core.
Feb 13 20:53:17.963031 sshd[3164]: pam_unix(sshd:session): session closed for user core
Feb 13 20:53:17.965470 systemd[1]: sshd@30-10.0.0.9:22-10.0.0.1:54166.service: Deactivated successfully.
Feb 13 20:53:17.967589 systemd[1]: session-31.scope: Deactivated successfully.
Feb 13 20:53:17.969190 systemd-logind[1418]: Session 31 logged out. Waiting for processes to exit.
Feb 13 20:53:17.969941 systemd-logind[1418]: Removed session 31.
Feb 13 20:53:19.469316 kubelet[2422]: E0213 20:53:19.469283 2422 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 13 20:53:19.470294 kubelet[2422]: E0213 20:53:19.469983 2422 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"install-cni-plugin\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/flannel/flannel-cni-plugin:v1.1.2\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/flannel/flannel-cni-plugin:v1.1.2\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/flannel/flannel-cni-plugin/manifests/sha256:bf4b62b131666d040f35a327d906ee5a3418280b68a88d9b9c7e828057210443: 429 Too Many Requests - Server message: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit\"" pod="kube-flannel/kube-flannel-ds-chdvq" podUID="21e6622a-36e3-47a8-b025-f56eaad98d84"
Feb 13 20:53:20.469875 kubelet[2422]: E0213 20:53:20.469779 2422 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 13 20:53:22.547599 kubelet[2422]: E0213 20:53:22.547553 2422 kubelet.go:3008] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Feb 13 20:53:22.973448 systemd[1]: Started sshd@31-10.0.0.9:22-10.0.0.1:37890.service - OpenSSH per-connection server daemon (10.0.0.1:37890).
Feb 13 20:53:23.010609 sshd[3181]: Accepted publickey for core from 10.0.0.1 port 37890 ssh2: RSA SHA256:cMrn/QNbOj3/muQN24sDQmRTV2Hbz1QD+PbuA/uipHg
Feb 13 20:53:23.011742 sshd[3181]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 20:53:23.014809 systemd-logind[1418]: New session 32 of user core.
Feb 13 20:53:23.026294 systemd[1]: Started session-32.scope - Session 32 of User core.
Feb 13 20:53:23.128767 sshd[3181]: pam_unix(sshd:session): session closed for user core
Feb 13 20:53:23.132028 systemd[1]: sshd@31-10.0.0.9:22-10.0.0.1:37890.service: Deactivated successfully.
Feb 13 20:53:23.134485 systemd[1]: session-32.scope: Deactivated successfully.
Feb 13 20:53:23.135245 systemd-logind[1418]: Session 32 logged out. Waiting for processes to exit.
Feb 13 20:53:23.136189 systemd-logind[1418]: Removed session 32.
Feb 13 20:53:27.469697 kubelet[2422]: E0213 20:53:27.469650 2422 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 13 20:53:27.548370 kubelet[2422]: E0213 20:53:27.548338 2422 kubelet.go:3008] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Feb 13 20:53:28.143592 systemd[1]: Started sshd@32-10.0.0.9:22-10.0.0.1:37896.service - OpenSSH per-connection server daemon (10.0.0.1:37896).
Feb 13 20:53:28.181362 sshd[3196]: Accepted publickey for core from 10.0.0.1 port 37896 ssh2: RSA SHA256:cMrn/QNbOj3/muQN24sDQmRTV2Hbz1QD+PbuA/uipHg
Feb 13 20:53:28.182503 sshd[3196]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 20:53:28.185732 systemd-logind[1418]: New session 33 of user core.
Feb 13 20:53:28.194244 systemd[1]: Started session-33.scope - Session 33 of User core.
Feb 13 20:53:28.300397 sshd[3196]: pam_unix(sshd:session): session closed for user core
Feb 13 20:53:28.303322 systemd[1]: sshd@32-10.0.0.9:22-10.0.0.1:37896.service: Deactivated successfully.
Feb 13 20:53:28.304982 systemd[1]: session-33.scope: Deactivated successfully.
Feb 13 20:53:28.305593 systemd-logind[1418]: Session 33 logged out. Waiting for processes to exit.
Feb 13 20:53:28.306342 systemd-logind[1418]: Removed session 33.
Feb 13 20:53:31.469298 kubelet[2422]: E0213 20:53:31.469260 2422 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 13 20:53:31.470942 kubelet[2422]: E0213 20:53:31.469964 2422 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"install-cni-plugin\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/flannel/flannel-cni-plugin:v1.1.2\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/flannel/flannel-cni-plugin:v1.1.2\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/flannel/flannel-cni-plugin/manifests/sha256:bf4b62b131666d040f35a327d906ee5a3418280b68a88d9b9c7e828057210443: 429 Too Many Requests - Server message: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit\"" pod="kube-flannel/kube-flannel-ds-chdvq" podUID="21e6622a-36e3-47a8-b025-f56eaad98d84"
Feb 13 20:53:32.549037 kubelet[2422]: E0213 20:53:32.548989 2422 kubelet.go:3008] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Feb 13 20:53:33.310437 systemd[1]: Started sshd@33-10.0.0.9:22-10.0.0.1:43160.service - OpenSSH per-connection server daemon (10.0.0.1:43160).
Feb 13 20:53:33.347660 sshd[3212]: Accepted publickey for core from 10.0.0.1 port 43160 ssh2: RSA SHA256:cMrn/QNbOj3/muQN24sDQmRTV2Hbz1QD+PbuA/uipHg
Feb 13 20:53:33.348827 sshd[3212]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 20:53:33.352459 systemd-logind[1418]: New session 34 of user core.
Feb 13 20:53:33.361225 systemd[1]: Started session-34.scope - Session 34 of User core.
Feb 13 20:53:33.465903 sshd[3212]: pam_unix(sshd:session): session closed for user core
Feb 13 20:53:33.469030 systemd[1]: sshd@33-10.0.0.9:22-10.0.0.1:43160.service: Deactivated successfully.
Feb 13 20:53:33.471258 systemd[1]: session-34.scope: Deactivated successfully.
Feb 13 20:53:33.471950 systemd-logind[1418]: Session 34 logged out. Waiting for processes to exit.
Feb 13 20:53:33.472734 systemd-logind[1418]: Removed session 34.
Feb 13 20:53:37.550627 kubelet[2422]: E0213 20:53:37.550583 2422 kubelet.go:3008] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Feb 13 20:53:38.481485 systemd[1]: Started sshd@34-10.0.0.9:22-10.0.0.1:43170.service - OpenSSH per-connection server daemon (10.0.0.1:43170).
Feb 13 20:53:38.518805 sshd[3229]: Accepted publickey for core from 10.0.0.1 port 43170 ssh2: RSA SHA256:cMrn/QNbOj3/muQN24sDQmRTV2Hbz1QD+PbuA/uipHg
Feb 13 20:53:38.519898 sshd[3229]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 20:53:38.523142 systemd-logind[1418]: New session 35 of user core.
Feb 13 20:53:38.535217 systemd[1]: Started session-35.scope - Session 35 of User core.
Feb 13 20:53:38.641295 sshd[3229]: pam_unix(sshd:session): session closed for user core
Feb 13 20:53:38.644600 systemd[1]: sshd@34-10.0.0.9:22-10.0.0.1:43170.service: Deactivated successfully.
Feb 13 20:53:38.646235 systemd[1]: session-35.scope: Deactivated successfully.
Feb 13 20:53:38.647595 systemd-logind[1418]: Session 35 logged out. Waiting for processes to exit.
Feb 13 20:53:38.648552 systemd-logind[1418]: Removed session 35.
Feb 13 20:53:42.469728 kubelet[2422]: E0213 20:53:42.469581 2422 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 13 20:53:42.470291 containerd[1434]: time="2025-02-13T20:53:42.470246839Z" level=info msg="PullImage \"docker.io/flannel/flannel-cni-plugin:v1.1.2\""
Feb 13 20:53:42.552285 kubelet[2422]: E0213 20:53:42.552241 2422 kubelet.go:3008] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Feb 13 20:53:43.469941 kubelet[2422]: E0213 20:53:43.469864 2422 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 13 20:53:43.657718 systemd[1]: Started sshd@35-10.0.0.9:22-10.0.0.1:37938.service - OpenSSH per-connection server daemon (10.0.0.1:37938).
Feb 13 20:53:43.695037 sshd[3247]: Accepted publickey for core from 10.0.0.1 port 37938 ssh2: RSA SHA256:cMrn/QNbOj3/muQN24sDQmRTV2Hbz1QD+PbuA/uipHg
Feb 13 20:53:43.696271 sshd[3247]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 20:53:43.700687 systemd-logind[1418]: New session 36 of user core.
Feb 13 20:53:43.711229 systemd[1]: Started session-36.scope - Session 36 of User core.
Feb 13 20:53:43.816332 sshd[3247]: pam_unix(sshd:session): session closed for user core
Feb 13 20:53:43.819614 systemd[1]: sshd@35-10.0.0.9:22-10.0.0.1:37938.service: Deactivated successfully.
Feb 13 20:53:43.821231 systemd[1]: session-36.scope: Deactivated successfully.
Feb 13 20:53:43.822655 systemd-logind[1418]: Session 36 logged out. Waiting for processes to exit.
Feb 13 20:53:43.823556 systemd-logind[1418]: Removed session 36.
Feb 13 20:53:43.844249 containerd[1434]: time="2025-02-13T20:53:43.844131321Z" level=error msg="PullImage \"docker.io/flannel/flannel-cni-plugin:v1.1.2\" failed" error="failed to pull and unpack image \"docker.io/flannel/flannel-cni-plugin:v1.1.2\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/flannel/flannel-cni-plugin/manifests/sha256:773049a6464eca8d26c97fb59410cb8385fe59b2f9a68ce7ae2fbe2d49d63585: 429 Too Many Requests - Server message: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit"
Feb 13 20:53:43.844249 containerd[1434]: time="2025-02-13T20:53:43.844194401Z" level=info msg="stop pulling image docker.io/flannel/flannel-cni-plugin:v1.1.2: active requests=0, bytes read=13144"
Feb 13 20:53:43.846413 kubelet[2422]: E0213 20:53:43.844356 2422 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = failed to pull and unpack image \"docker.io/flannel/flannel-cni-plugin:v1.1.2\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/flannel/flannel-cni-plugin/manifests/sha256:773049a6464eca8d26c97fb59410cb8385fe59b2f9a68ce7ae2fbe2d49d63585: 429 Too Many Requests - Server message: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit" image="docker.io/flannel/flannel-cni-plugin:v1.1.2"
Feb 13 20:53:43.846413 kubelet[2422]: E0213 20:53:43.844401 2422 kuberuntime_image.go:55] "Failed to pull image" err="failed to pull and unpack image \"docker.io/flannel/flannel-cni-plugin:v1.1.2\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/flannel/flannel-cni-plugin/manifests/sha256:773049a6464eca8d26c97fb59410cb8385fe59b2f9a68ce7ae2fbe2d49d63585: 429 Too Many Requests - Server message: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit" image="docker.io/flannel/flannel-cni-plugin:v1.1.2"
Feb 13 20:53:43.846507 kubelet[2422]: E0213 20:53:43.844487 2422 kuberuntime_manager.go:1341] "Unhandled Error" err="init container &Container{Name:install-cni-plugin,Image:docker.io/flannel/flannel-cni-plugin:v1.1.2,Command:[cp],Args:[-f /flannel /opt/cni/bin/flannel],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:cni-plugin,ReadOnly:false,MountPath:/opt/cni/bin,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-w58p2,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod kube-flannel-ds-chdvq_kube-flannel(21e6622a-36e3-47a8-b025-f56eaad98d84): ErrImagePull: failed to pull and unpack image \"docker.io/flannel/flannel-cni-plugin:v1.1.2\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/flannel/flannel-cni-plugin/manifests/sha256:773049a6464eca8d26c97fb59410cb8385fe59b2f9a68ce7ae2fbe2d49d63585: 429 Too Many Requests - Server message: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit" logger="UnhandledError"
Feb 13 20:53:43.846555 kubelet[2422]: E0213 20:53:43.845729 2422 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"install-cni-plugin\" with ErrImagePull: \"failed to pull and unpack image \\\"docker.io/flannel/flannel-cni-plugin:v1.1.2\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/flannel/flannel-cni-plugin/manifests/sha256:773049a6464eca8d26c97fb59410cb8385fe59b2f9a68ce7ae2fbe2d49d63585: 429 Too Many Requests - Server message: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit\"" pod="kube-flannel/kube-flannel-ds-chdvq" podUID="21e6622a-36e3-47a8-b025-f56eaad98d84"
Feb 13 20:53:47.553701 kubelet[2422]: E0213 20:53:47.553666 2422 kubelet.go:3008] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Feb 13 20:53:48.830642 systemd[1]: Started sshd@36-10.0.0.9:22-10.0.0.1:37950.service - OpenSSH per-connection server daemon (10.0.0.1:37950).
Feb 13 20:53:48.868384 sshd[3263]: Accepted publickey for core from 10.0.0.1 port 37950 ssh2: RSA SHA256:cMrn/QNbOj3/muQN24sDQmRTV2Hbz1QD+PbuA/uipHg
Feb 13 20:53:48.869536 sshd[3263]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 20:53:48.872723 systemd-logind[1418]: New session 37 of user core.
Feb 13 20:53:48.890206 systemd[1]: Started session-37.scope - Session 37 of User core.
Feb 13 20:53:48.997120 sshd[3263]: pam_unix(sshd:session): session closed for user core
Feb 13 20:53:48.999504 systemd[1]: sshd@36-10.0.0.9:22-10.0.0.1:37950.service: Deactivated successfully.
Feb 13 20:53:49.002485 systemd[1]: session-37.scope: Deactivated successfully.
Feb 13 20:53:49.003694 systemd-logind[1418]: Session 37 logged out. Waiting for processes to exit.
Feb 13 20:53:49.004548 systemd-logind[1418]: Removed session 37.
Feb 13 20:53:52.554629 kubelet[2422]: E0213 20:53:52.554588 2422 kubelet.go:3008] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Feb 13 20:53:54.011646 systemd[1]: Started sshd@37-10.0.0.9:22-10.0.0.1:40748.service - OpenSSH per-connection server daemon (10.0.0.1:40748).
Feb 13 20:53:54.049079 sshd[3280]: Accepted publickey for core from 10.0.0.1 port 40748 ssh2: RSA SHA256:cMrn/QNbOj3/muQN24sDQmRTV2Hbz1QD+PbuA/uipHg
Feb 13 20:53:54.050393 sshd[3280]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 20:53:54.053988 systemd-logind[1418]: New session 38 of user core.
Feb 13 20:53:54.064228 systemd[1]: Started session-38.scope - Session 38 of User core.
Feb 13 20:53:54.171164 sshd[3280]: pam_unix(sshd:session): session closed for user core
Feb 13 20:53:54.174342 systemd[1]: sshd@37-10.0.0.9:22-10.0.0.1:40748.service: Deactivated successfully.
Feb 13 20:53:54.175945 systemd[1]: session-38.scope: Deactivated successfully.
Feb 13 20:53:54.176630 systemd-logind[1418]: Session 38 logged out. Waiting for processes to exit.
Feb 13 20:53:54.177349 systemd-logind[1418]: Removed session 38.
Feb 13 20:53:55.469986 kubelet[2422]: E0213 20:53:55.469843 2422 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 13 20:53:55.470884 kubelet[2422]: E0213 20:53:55.470848 2422 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"install-cni-plugin\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/flannel/flannel-cni-plugin:v1.1.2\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/flannel/flannel-cni-plugin:v1.1.2\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/flannel/flannel-cni-plugin/manifests/sha256:773049a6464eca8d26c97fb59410cb8385fe59b2f9a68ce7ae2fbe2d49d63585: 429 Too Many Requests - Server message: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit\"" pod="kube-flannel/kube-flannel-ds-chdvq" podUID="21e6622a-36e3-47a8-b025-f56eaad98d84"
Feb 13 20:53:57.555528 kubelet[2422]: E0213 20:53:57.555467 2422 kubelet.go:3008] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Feb 13 20:53:59.181696 systemd[1]: Started sshd@38-10.0.0.9:22-10.0.0.1:40762.service - OpenSSH per-connection server daemon (10.0.0.1:40762).
Feb 13 20:53:59.218968 sshd[3295]: Accepted publickey for core from 10.0.0.1 port 40762 ssh2: RSA SHA256:cMrn/QNbOj3/muQN24sDQmRTV2Hbz1QD+PbuA/uipHg
Feb 13 20:53:59.220191 sshd[3295]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 20:53:59.223612 systemd-logind[1418]: New session 39 of user core.
Feb 13 20:53:59.234221 systemd[1]: Started session-39.scope - Session 39 of User core.
Feb 13 20:53:59.341686 sshd[3295]: pam_unix(sshd:session): session closed for user core
Feb 13 20:53:59.344620 systemd[1]: sshd@38-10.0.0.9:22-10.0.0.1:40762.service: Deactivated successfully.
Feb 13 20:53:59.346266 systemd[1]: session-39.scope: Deactivated successfully.
Feb 13 20:53:59.346841 systemd-logind[1418]: Session 39 logged out. Waiting for processes to exit.
Feb 13 20:53:59.347633 systemd-logind[1418]: Removed session 39.
Feb 13 20:54:02.556788 kubelet[2422]: E0213 20:54:02.556736 2422 kubelet.go:3008] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Feb 13 20:54:04.351619 systemd[1]: Started sshd@39-10.0.0.9:22-10.0.0.1:45818.service - OpenSSH per-connection server daemon (10.0.0.1:45818).
Feb 13 20:54:04.389118 sshd[3311]: Accepted publickey for core from 10.0.0.1 port 45818 ssh2: RSA SHA256:cMrn/QNbOj3/muQN24sDQmRTV2Hbz1QD+PbuA/uipHg
Feb 13 20:54:04.390326 sshd[3311]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 20:54:04.394078 systemd-logind[1418]: New session 40 of user core.
Feb 13 20:54:04.401237 systemd[1]: Started session-40.scope - Session 40 of User core.
Feb 13 20:54:04.509011 sshd[3311]: pam_unix(sshd:session): session closed for user core
Feb 13 20:54:04.512163 systemd[1]: sshd@39-10.0.0.9:22-10.0.0.1:45818.service: Deactivated successfully.
Feb 13 20:54:04.513833 systemd[1]: session-40.scope: Deactivated successfully.
Feb 13 20:54:04.514537 systemd-logind[1418]: Session 40 logged out. Waiting for processes to exit.
Feb 13 20:54:04.515294 systemd-logind[1418]: Removed session 40.
Feb 13 20:54:07.557643 kubelet[2422]: E0213 20:54:07.557602 2422 kubelet.go:3008] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Feb 13 20:54:08.471561 kubelet[2422]: E0213 20:54:08.471531 2422 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 13 20:54:08.472148 kubelet[2422]: E0213 20:54:08.472115 2422 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"install-cni-plugin\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/flannel/flannel-cni-plugin:v1.1.2\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/flannel/flannel-cni-plugin:v1.1.2\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/flannel/flannel-cni-plugin/manifests/sha256:773049a6464eca8d26c97fb59410cb8385fe59b2f9a68ce7ae2fbe2d49d63585: 429 Too Many Requests - Server message: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit\"" pod="kube-flannel/kube-flannel-ds-chdvq" podUID="21e6622a-36e3-47a8-b025-f56eaad98d84"
Feb 13 20:54:09.523521 systemd[1]: Started sshd@40-10.0.0.9:22-10.0.0.1:45822.service - OpenSSH per-connection server daemon (10.0.0.1:45822).
Feb 13 20:54:09.561154 sshd[3326]: Accepted publickey for core from 10.0.0.1 port 45822 ssh2: RSA SHA256:cMrn/QNbOj3/muQN24sDQmRTV2Hbz1QD+PbuA/uipHg
Feb 13 20:54:09.562294 sshd[3326]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 20:54:09.565885 systemd-logind[1418]: New session 41 of user core.
Feb 13 20:54:09.574275 systemd[1]: Started session-41.scope - Session 41 of User core.
Feb 13 20:54:09.681462 sshd[3326]: pam_unix(sshd:session): session closed for user core
Feb 13 20:54:09.692595 systemd[1]: sshd@40-10.0.0.9:22-10.0.0.1:45822.service: Deactivated successfully.
Feb 13 20:54:09.694006 systemd[1]: session-41.scope: Deactivated successfully.
Feb 13 20:54:09.696042 systemd-logind[1418]: Session 41 logged out. Waiting for processes to exit.
Feb 13 20:54:09.698077 systemd-logind[1418]: Removed session 41.
Feb 13 20:54:09.699696 systemd[1]: Started sshd@41-10.0.0.9:22-10.0.0.1:45834.service - OpenSSH per-connection server daemon (10.0.0.1:45834).
Feb 13 20:54:09.736460 sshd[3342]: Accepted publickey for core from 10.0.0.1 port 45834 ssh2: RSA SHA256:cMrn/QNbOj3/muQN24sDQmRTV2Hbz1QD+PbuA/uipHg
Feb 13 20:54:09.737602 sshd[3342]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 20:54:09.741852 systemd-logind[1418]: New session 42 of user core.
Feb 13 20:54:09.751218 systemd[1]: Started session-42.scope - Session 42 of User core.
Feb 13 20:54:09.894650 sshd[3342]: pam_unix(sshd:session): session closed for user core
Feb 13 20:54:09.904723 systemd[1]: sshd@41-10.0.0.9:22-10.0.0.1:45834.service: Deactivated successfully.
Feb 13 20:54:09.906516 systemd[1]: session-42.scope: Deactivated successfully.
Feb 13 20:54:09.908585 systemd-logind[1418]: Session 42 logged out. Waiting for processes to exit.
Feb 13 20:54:09.916346 systemd[1]: Started sshd@42-10.0.0.9:22-10.0.0.1:45838.service - OpenSSH per-connection server daemon (10.0.0.1:45838).
Feb 13 20:54:09.917252 systemd-logind[1418]: Removed session 42.
Feb 13 20:54:09.950940 sshd[3356]: Accepted publickey for core from 10.0.0.1 port 45838 ssh2: RSA SHA256:cMrn/QNbOj3/muQN24sDQmRTV2Hbz1QD+PbuA/uipHg
Feb 13 20:54:09.952125 sshd[3356]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 20:54:09.955857 systemd-logind[1418]: New session 43 of user core.
Feb 13 20:54:09.965247 systemd[1]: Started session-43.scope - Session 43 of User core.
Feb 13 20:54:10.074789 sshd[3356]: pam_unix(sshd:session): session closed for user core
Feb 13 20:54:10.078186 systemd[1]: sshd@42-10.0.0.9:22-10.0.0.1:45838.service: Deactivated successfully.
Feb 13 20:54:10.079867 systemd[1]: session-43.scope: Deactivated successfully.
Feb 13 20:54:10.081057 systemd-logind[1418]: Session 43 logged out. Waiting for processes to exit.
Feb 13 20:54:10.082017 systemd-logind[1418]: Removed session 43.
Feb 13 20:54:12.559062 kubelet[2422]: E0213 20:54:12.559014 2422 kubelet.go:3008] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Feb 13 20:54:15.085582 systemd[1]: Started sshd@43-10.0.0.9:22-10.0.0.1:53704.service - OpenSSH per-connection server daemon (10.0.0.1:53704).
Feb 13 20:54:15.123402 sshd[3370]: Accepted publickey for core from 10.0.0.1 port 53704 ssh2: RSA SHA256:cMrn/QNbOj3/muQN24sDQmRTV2Hbz1QD+PbuA/uipHg
Feb 13 20:54:15.124516 sshd[3370]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 20:54:15.127726 systemd-logind[1418]: New session 44 of user core.
Feb 13 20:54:15.134207 systemd[1]: Started session-44.scope - Session 44 of User core.
Feb 13 20:54:15.240739 sshd[3370]: pam_unix(sshd:session): session closed for user core
Feb 13 20:54:15.243872 systemd[1]: sshd@43-10.0.0.9:22-10.0.0.1:53704.service: Deactivated successfully.
Feb 13 20:54:15.246325 systemd[1]: session-44.scope: Deactivated successfully.
Feb 13 20:54:15.247210 systemd-logind[1418]: Session 44 logged out. Waiting for processes to exit.
Feb 13 20:54:15.248057 systemd-logind[1418]: Removed session 44.
Feb 13 20:54:17.560604 kubelet[2422]: E0213 20:54:17.560542 2422 kubelet.go:3008] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Feb 13 20:54:19.469408 kubelet[2422]: E0213 20:54:19.469368 2422 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 13 20:54:19.470095 kubelet[2422]: E0213 20:54:19.470049 2422 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"install-cni-plugin\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/flannel/flannel-cni-plugin:v1.1.2\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/flannel/flannel-cni-plugin:v1.1.2\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/flannel/flannel-cni-plugin/manifests/sha256:773049a6464eca8d26c97fb59410cb8385fe59b2f9a68ce7ae2fbe2d49d63585: 429 Too Many Requests - Server message: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit\"" pod="kube-flannel/kube-flannel-ds-chdvq" podUID="21e6622a-36e3-47a8-b025-f56eaad98d84"
Feb 13 20:54:20.251701 systemd[1]: Started sshd@44-10.0.0.9:22-10.0.0.1:53710.service - OpenSSH per-connection server daemon (10.0.0.1:53710).
Feb 13 20:54:20.289958 sshd[3389]: Accepted publickey for core from 10.0.0.1 port 53710 ssh2: RSA SHA256:cMrn/QNbOj3/muQN24sDQmRTV2Hbz1QD+PbuA/uipHg
Feb 13 20:54:20.291126 sshd[3389]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 20:54:20.294405 systemd-logind[1418]: New session 45 of user core.
Feb 13 20:54:20.306222 systemd[1]: Started session-45.scope - Session 45 of User core.
Feb 13 20:54:20.410953 sshd[3389]: pam_unix(sshd:session): session closed for user core
Feb 13 20:54:20.414357 systemd[1]: sshd@44-10.0.0.9:22-10.0.0.1:53710.service: Deactivated successfully.
Feb 13 20:54:20.415981 systemd[1]: session-45.scope: Deactivated successfully.
Feb 13 20:54:20.416604 systemd-logind[1418]: Session 45 logged out. Waiting for processes to exit.
Feb 13 20:54:20.417482 systemd-logind[1418]: Removed session 45.
Feb 13 20:54:21.469391 kubelet[2422]: E0213 20:54:21.469295 2422 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 13 20:54:22.562099 kubelet[2422]: E0213 20:54:22.562054 2422 kubelet.go:3008] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Feb 13 20:54:25.421699 systemd[1]: Started sshd@45-10.0.0.9:22-10.0.0.1:52216.service - OpenSSH per-connection server daemon (10.0.0.1:52216).
Feb 13 20:54:25.458970 sshd[3403]: Accepted publickey for core from 10.0.0.1 port 52216 ssh2: RSA SHA256:cMrn/QNbOj3/muQN24sDQmRTV2Hbz1QD+PbuA/uipHg
Feb 13 20:54:25.460199 sshd[3403]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 20:54:25.464178 systemd-logind[1418]: New session 46 of user core.
Feb 13 20:54:25.472225 systemd[1]: Started session-46.scope - Session 46 of User core.
Feb 13 20:54:25.577346 sshd[3403]: pam_unix(sshd:session): session closed for user core
Feb 13 20:54:25.580489 systemd[1]: sshd@45-10.0.0.9:22-10.0.0.1:52216.service: Deactivated successfully.
Feb 13 20:54:25.582175 systemd[1]: session-46.scope: Deactivated successfully.
Feb 13 20:54:25.582709 systemd-logind[1418]: Session 46 logged out. Waiting for processes to exit.
Feb 13 20:54:25.583460 systemd-logind[1418]: Removed session 46.
Feb 13 20:54:27.562849 kubelet[2422]: E0213 20:54:27.562809 2422 kubelet.go:3008] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Feb 13 20:54:30.587701 systemd[1]: Started sshd@46-10.0.0.9:22-10.0.0.1:52228.service - OpenSSH per-connection server daemon (10.0.0.1:52228).
Feb 13 20:54:30.625222 sshd[3417]: Accepted publickey for core from 10.0.0.1 port 52228 ssh2: RSA SHA256:cMrn/QNbOj3/muQN24sDQmRTV2Hbz1QD+PbuA/uipHg
Feb 13 20:54:30.626594 sshd[3417]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 20:54:30.629996 systemd-logind[1418]: New session 47 of user core.
Feb 13 20:54:30.638236 systemd[1]: Started session-47.scope - Session 47 of User core.
Feb 13 20:54:30.744800 sshd[3417]: pam_unix(sshd:session): session closed for user core
Feb 13 20:54:30.747973 systemd[1]: sshd@46-10.0.0.9:22-10.0.0.1:52228.service: Deactivated successfully.
Feb 13 20:54:30.749611 systemd[1]: session-47.scope: Deactivated successfully.
Feb 13 20:54:30.750801 systemd-logind[1418]: Session 47 logged out. Waiting for processes to exit.
Feb 13 20:54:30.751602 systemd-logind[1418]: Removed session 47.
Feb 13 20:54:31.470141 kubelet[2422]: E0213 20:54:31.469963 2422 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 13 20:54:31.471007 kubelet[2422]: E0213 20:54:31.470963 2422 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"install-cni-plugin\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/flannel/flannel-cni-plugin:v1.1.2\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/flannel/flannel-cni-plugin:v1.1.2\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/flannel/flannel-cni-plugin/manifests/sha256:773049a6464eca8d26c97fb59410cb8385fe59b2f9a68ce7ae2fbe2d49d63585: 429 Too Many Requests - Server message: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit\"" pod="kube-flannel/kube-flannel-ds-chdvq" podUID="21e6622a-36e3-47a8-b025-f56eaad98d84"
Feb 13 20:54:32.563502 kubelet[2422]: E0213 20:54:32.563456 2422 kubelet.go:3008] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Feb 13 20:54:35.469647 kubelet[2422]: E0213 20:54:35.469610 2422 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 13 20:54:35.755573 systemd[1]: Started sshd@47-10.0.0.9:22-10.0.0.1:41960.service - OpenSSH per-connection server daemon (10.0.0.1:41960).
Feb 13 20:54:35.792979 sshd[3432]: Accepted publickey for core from 10.0.0.1 port 41960 ssh2: RSA SHA256:cMrn/QNbOj3/muQN24sDQmRTV2Hbz1QD+PbuA/uipHg
Feb 13 20:54:35.794199 sshd[3432]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 20:54:35.797690 systemd-logind[1418]: New session 48 of user core.
Feb 13 20:54:35.804222 systemd[1]: Started session-48.scope - Session 48 of User core.
Feb 13 20:54:35.911143 sshd[3432]: pam_unix(sshd:session): session closed for user core
Feb 13 20:54:35.914453 systemd[1]: sshd@47-10.0.0.9:22-10.0.0.1:41960.service: Deactivated successfully.
Feb 13 20:54:35.916075 systemd[1]: session-48.scope: Deactivated successfully.
Feb 13 20:54:35.917526 systemd-logind[1418]: Session 48 logged out. Waiting for processes to exit.
Feb 13 20:54:35.918410 systemd-logind[1418]: Removed session 48.
Feb 13 20:54:37.565252 kubelet[2422]: E0213 20:54:37.565171 2422 kubelet.go:3008] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Feb 13 20:54:40.925776 systemd[1]: Started sshd@48-10.0.0.9:22-10.0.0.1:41968.service - OpenSSH per-connection server daemon (10.0.0.1:41968).
Feb 13 20:54:40.963197 sshd[3446]: Accepted publickey for core from 10.0.0.1 port 41968 ssh2: RSA SHA256:cMrn/QNbOj3/muQN24sDQmRTV2Hbz1QD+PbuA/uipHg
Feb 13 20:54:40.964354 sshd[3446]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 20:54:40.968005 systemd-logind[1418]: New session 49 of user core.
Feb 13 20:54:40.979237 systemd[1]: Started session-49.scope - Session 49 of User core.
Feb 13 20:54:41.086225 sshd[3446]: pam_unix(sshd:session): session closed for user core
Feb 13 20:54:41.089649 systemd[1]: sshd@48-10.0.0.9:22-10.0.0.1:41968.service: Deactivated successfully.
Feb 13 20:54:41.092097 systemd[1]: session-49.scope: Deactivated successfully.
Feb 13 20:54:41.092941 systemd-logind[1418]: Session 49 logged out. Waiting for processes to exit.
Feb 13 20:54:41.093728 systemd-logind[1418]: Removed session 49.
Feb 13 20:54:42.565931 kubelet[2422]: E0213 20:54:42.565894 2422 kubelet.go:3008] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Feb 13 20:54:45.469108 kubelet[2422]: E0213 20:54:45.469050 2422 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 13 20:54:45.470164 kubelet[2422]: E0213 20:54:45.469798 2422 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"install-cni-plugin\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/flannel/flannel-cni-plugin:v1.1.2\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/flannel/flannel-cni-plugin:v1.1.2\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/flannel/flannel-cni-plugin/manifests/sha256:773049a6464eca8d26c97fb59410cb8385fe59b2f9a68ce7ae2fbe2d49d63585: 429 Too Many Requests - Server message: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit\"" pod="kube-flannel/kube-flannel-ds-chdvq" podUID="21e6622a-36e3-47a8-b025-f56eaad98d84"
Feb 13 20:54:46.096671 systemd[1]: Started sshd@49-10.0.0.9:22-10.0.0.1:42774.service - OpenSSH per-connection server daemon (10.0.0.1:42774).
Feb 13 20:54:46.134554 sshd[3462]: Accepted publickey for core from 10.0.0.1 port 42774 ssh2: RSA SHA256:cMrn/QNbOj3/muQN24sDQmRTV2Hbz1QD+PbuA/uipHg
Feb 13 20:54:46.135818 sshd[3462]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 20:54:46.141507 systemd-logind[1418]: New session 50 of user core.
Feb 13 20:54:46.151234 systemd[1]: Started session-50.scope - Session 50 of User core.
Feb 13 20:54:46.257747 sshd[3462]: pam_unix(sshd:session): session closed for user core
Feb 13 20:54:46.260930 systemd[1]: sshd@49-10.0.0.9:22-10.0.0.1:42774.service: Deactivated successfully.
Feb 13 20:54:46.262603 systemd[1]: session-50.scope: Deactivated successfully.
Feb 13 20:54:46.263315 systemd-logind[1418]: Session 50 logged out. Waiting for processes to exit.
Feb 13 20:54:46.264405 systemd-logind[1418]: Removed session 50.
Feb 13 20:54:46.469891 kubelet[2422]: E0213 20:54:46.469729 2422 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 13 20:54:47.566932 kubelet[2422]: E0213 20:54:47.566884 2422 kubelet.go:3008] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Feb 13 20:54:51.268711 systemd[1]: Started sshd@50-10.0.0.9:22-10.0.0.1:42784.service - OpenSSH per-connection server daemon (10.0.0.1:42784).
Feb 13 20:54:51.306312 sshd[3479]: Accepted publickey for core from 10.0.0.1 port 42784 ssh2: RSA SHA256:cMrn/QNbOj3/muQN24sDQmRTV2Hbz1QD+PbuA/uipHg
Feb 13 20:54:51.307632 sshd[3479]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 20:54:51.311242 systemd-logind[1418]: New session 51 of user core.
Feb 13 20:54:51.321217 systemd[1]: Started session-51.scope - Session 51 of User core.
Feb 13 20:54:51.424655 sshd[3479]: pam_unix(sshd:session): session closed for user core
Feb 13 20:54:51.428124 systemd[1]: sshd@50-10.0.0.9:22-10.0.0.1:42784.service: Deactivated successfully.
Feb 13 20:54:51.430759 systemd[1]: session-51.scope: Deactivated successfully.
Feb 13 20:54:51.431718 systemd-logind[1418]: Session 51 logged out. Waiting for processes to exit.
Feb 13 20:54:51.432685 systemd-logind[1418]: Removed session 51.
Feb 13 20:54:52.567668 kubelet[2422]: E0213 20:54:52.567595 2422 kubelet.go:3008] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Feb 13 20:54:56.434815 systemd[1]: Started sshd@51-10.0.0.9:22-10.0.0.1:60886.service - OpenSSH per-connection server daemon (10.0.0.1:60886).
Feb 13 20:54:56.469902 kubelet[2422]: E0213 20:54:56.469633 2422 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 13 20:54:56.470742 kubelet[2422]: E0213 20:54:56.470189 2422 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"install-cni-plugin\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/flannel/flannel-cni-plugin:v1.1.2\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/flannel/flannel-cni-plugin:v1.1.2\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/flannel/flannel-cni-plugin/manifests/sha256:773049a6464eca8d26c97fb59410cb8385fe59b2f9a68ce7ae2fbe2d49d63585: 429 Too Many Requests - Server message: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit\"" pod="kube-flannel/kube-flannel-ds-chdvq" podUID="21e6622a-36e3-47a8-b025-f56eaad98d84"
Feb 13 20:54:56.473131 sshd[3493]: Accepted publickey for core from 10.0.0.1 port 60886 ssh2: RSA SHA256:cMrn/QNbOj3/muQN24sDQmRTV2Hbz1QD+PbuA/uipHg
Feb 13 20:54:56.478700 sshd[3493]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 20:54:56.489890 systemd-logind[1418]: New session 52 of user core.
Feb 13 20:54:56.495806 systemd[1]: Started session-52.scope - Session 52 of User core.
Feb 13 20:54:56.602125 sshd[3493]: pam_unix(sshd:session): session closed for user core
Feb 13 20:54:56.604785 systemd[1]: sshd@51-10.0.0.9:22-10.0.0.1:60886.service: Deactivated successfully.
Feb 13 20:54:56.606400 systemd[1]: session-52.scope: Deactivated successfully.
Feb 13 20:54:56.607597 systemd-logind[1418]: Session 52 logged out. Waiting for processes to exit.
Feb 13 20:54:56.608345 systemd-logind[1418]: Removed session 52.
Feb 13 20:54:57.569252 kubelet[2422]: E0213 20:54:57.569209 2422 kubelet.go:3008] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Feb 13 20:55:01.618263 systemd[1]: Started sshd@52-10.0.0.9:22-10.0.0.1:60900.service - OpenSSH per-connection server daemon (10.0.0.1:60900).
Feb 13 20:55:01.657584 sshd[3507]: Accepted publickey for core from 10.0.0.1 port 60900 ssh2: RSA SHA256:cMrn/QNbOj3/muQN24sDQmRTV2Hbz1QD+PbuA/uipHg
Feb 13 20:55:01.658738 sshd[3507]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 20:55:01.662256 systemd-logind[1418]: New session 53 of user core.
Feb 13 20:55:01.671223 systemd[1]: Started session-53.scope - Session 53 of User core.
Feb 13 20:55:01.777727 sshd[3507]: pam_unix(sshd:session): session closed for user core
Feb 13 20:55:01.780820 systemd[1]: sshd@52-10.0.0.9:22-10.0.0.1:60900.service: Deactivated successfully.
Feb 13 20:55:01.783487 systemd[1]: session-53.scope: Deactivated successfully.
Feb 13 20:55:01.784075 systemd-logind[1418]: Session 53 logged out. Waiting for processes to exit.
Feb 13 20:55:01.785019 systemd-logind[1418]: Removed session 53.
Feb 13 20:55:02.570252 kubelet[2422]: E0213 20:55:02.570198 2422 kubelet.go:3008] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Feb 13 20:55:06.788710 systemd[1]: Started sshd@53-10.0.0.9:22-10.0.0.1:51494.service - OpenSSH per-connection server daemon (10.0.0.1:51494).
Feb 13 20:55:06.826004 sshd[3523]: Accepted publickey for core from 10.0.0.1 port 51494 ssh2: RSA SHA256:cMrn/QNbOj3/muQN24sDQmRTV2Hbz1QD+PbuA/uipHg
Feb 13 20:55:06.827162 sshd[3523]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 20:55:06.830821 systemd-logind[1418]: New session 54 of user core.
Feb 13 20:55:06.841285 systemd[1]: Started session-54.scope - Session 54 of User core.
Feb 13 20:55:06.949229 sshd[3523]: pam_unix(sshd:session): session closed for user core
Feb 13 20:55:06.952506 systemd[1]: sshd@53-10.0.0.9:22-10.0.0.1:51494.service: Deactivated successfully.
Feb 13 20:55:06.954191 systemd[1]: session-54.scope: Deactivated successfully.
Feb 13 20:55:06.954768 systemd-logind[1418]: Session 54 logged out. Waiting for processes to exit.
Feb 13 20:55:06.955705 systemd-logind[1418]: Removed session 54.
Feb 13 20:55:07.571769 kubelet[2422]: E0213 20:55:07.571723 2422 kubelet.go:3008] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Feb 13 20:55:10.469723 kubelet[2422]: E0213 20:55:10.469529 2422 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 13 20:55:10.470074 kubelet[2422]: E0213 20:55:10.469951 2422 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 13 20:55:10.471205 kubelet[2422]: E0213 20:55:10.471173 2422 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"install-cni-plugin\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/flannel/flannel-cni-plugin:v1.1.2\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/flannel/flannel-cni-plugin:v1.1.2\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/flannel/flannel-cni-plugin/manifests/sha256:773049a6464eca8d26c97fb59410cb8385fe59b2f9a68ce7ae2fbe2d49d63585: 429 Too Many Requests - Server message: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit\"" pod="kube-flannel/kube-flannel-ds-chdvq" podUID="21e6622a-36e3-47a8-b025-f56eaad98d84"
Feb 13 20:55:11.959710 systemd[1]: Started sshd@54-10.0.0.9:22-10.0.0.1:51496.service - OpenSSH per-connection server daemon (10.0.0.1:51496).
Feb 13 20:55:11.997233 sshd[3538]: Accepted publickey for core from 10.0.0.1 port 51496 ssh2: RSA SHA256:cMrn/QNbOj3/muQN24sDQmRTV2Hbz1QD+PbuA/uipHg
Feb 13 20:55:11.998365 sshd[3538]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 20:55:12.001677 systemd-logind[1418]: New session 55 of user core.
Feb 13 20:55:12.012223 systemd[1]: Started session-55.scope - Session 55 of User core.
Feb 13 20:55:12.117654 sshd[3538]: pam_unix(sshd:session): session closed for user core
Feb 13 20:55:12.120790 systemd[1]: sshd@54-10.0.0.9:22-10.0.0.1:51496.service: Deactivated successfully.
Feb 13 20:55:12.122467 systemd[1]: session-55.scope: Deactivated successfully.
Feb 13 20:55:12.123027 systemd-logind[1418]: Session 55 logged out. Waiting for processes to exit.
Feb 13 20:55:12.123774 systemd-logind[1418]: Removed session 55.
Feb 13 20:55:12.572534 kubelet[2422]: E0213 20:55:12.572476 2422 kubelet.go:3008] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Feb 13 20:55:17.128665 systemd[1]: Started sshd@55-10.0.0.9:22-10.0.0.1:39698.service - OpenSSH per-connection server daemon (10.0.0.1:39698).
Feb 13 20:55:17.166022 sshd[3553]: Accepted publickey for core from 10.0.0.1 port 39698 ssh2: RSA SHA256:cMrn/QNbOj3/muQN24sDQmRTV2Hbz1QD+PbuA/uipHg
Feb 13 20:55:17.167233 sshd[3553]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 20:55:17.171039 systemd-logind[1418]: New session 56 of user core.
Feb 13 20:55:17.180225 systemd[1]: Started session-56.scope - Session 56 of User core.
Feb 13 20:55:17.289830 sshd[3553]: pam_unix(sshd:session): session closed for user core
Feb 13 20:55:17.292981 systemd[1]: sshd@55-10.0.0.9:22-10.0.0.1:39698.service: Deactivated successfully.
Feb 13 20:55:17.294617 systemd[1]: session-56.scope: Deactivated successfully.
Feb 13 20:55:17.296006 systemd-logind[1418]: Session 56 logged out. Waiting for processes to exit.
Feb 13 20:55:17.296863 systemd-logind[1418]: Removed session 56.
Feb 13 20:55:17.573904 kubelet[2422]: E0213 20:55:17.573841 2422 kubelet.go:3008] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Feb 13 20:55:22.300667 systemd[1]: Started sshd@56-10.0.0.9:22-10.0.0.1:39714.service - OpenSSH per-connection server daemon (10.0.0.1:39714).
Feb 13 20:55:22.338247 sshd[3569]: Accepted publickey for core from 10.0.0.1 port 39714 ssh2: RSA SHA256:cMrn/QNbOj3/muQN24sDQmRTV2Hbz1QD+PbuA/uipHg
Feb 13 20:55:22.339373 sshd[3569]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 20:55:22.343047 systemd-logind[1418]: New session 57 of user core.
Feb 13 20:55:22.353279 systemd[1]: Started session-57.scope - Session 57 of User core.
Feb 13 20:55:22.460695 sshd[3569]: pam_unix(sshd:session): session closed for user core
Feb 13 20:55:22.463925 systemd[1]: sshd@56-10.0.0.9:22-10.0.0.1:39714.service: Deactivated successfully.
Feb 13 20:55:22.466315 systemd[1]: session-57.scope: Deactivated successfully.
Feb 13 20:55:22.467242 systemd-logind[1418]: Session 57 logged out. Waiting for processes to exit.
Feb 13 20:55:22.468350 systemd-logind[1418]: Removed session 57.
Feb 13 20:55:22.574615 kubelet[2422]: E0213 20:55:22.574513 2422 kubelet.go:3008] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Feb 13 20:55:23.469862 kubelet[2422]: E0213 20:55:23.469669 2422 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 13 20:55:23.470562 kubelet[2422]: E0213 20:55:23.470256 2422 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"install-cni-plugin\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/flannel/flannel-cni-plugin:v1.1.2\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/flannel/flannel-cni-plugin:v1.1.2\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/flannel/flannel-cni-plugin/manifests/sha256:773049a6464eca8d26c97fb59410cb8385fe59b2f9a68ce7ae2fbe2d49d63585: 429 Too Many Requests - Server message: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit\"" pod="kube-flannel/kube-flannel-ds-chdvq" podUID="21e6622a-36e3-47a8-b025-f56eaad98d84"
Feb 13 20:55:27.471881 systemd[1]: Started sshd@57-10.0.0.9:22-10.0.0.1:36842.service - OpenSSH per-connection server daemon (10.0.0.1:36842).
Feb 13 20:55:27.510757 sshd[3583]: Accepted publickey for core from 10.0.0.1 port 36842 ssh2: RSA SHA256:cMrn/QNbOj3/muQN24sDQmRTV2Hbz1QD+PbuA/uipHg
Feb 13 20:55:27.511964 sshd[3583]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 20:55:27.515492 systemd-logind[1418]: New session 58 of user core.
Feb 13 20:55:27.525232 systemd[1]: Started session-58.scope - Session 58 of User core.
Feb 13 20:55:27.575918 kubelet[2422]: E0213 20:55:27.575832 2422 kubelet.go:3008] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Feb 13 20:55:27.632619 sshd[3583]: pam_unix(sshd:session): session closed for user core
Feb 13 20:55:27.635806 systemd[1]: sshd@57-10.0.0.9:22-10.0.0.1:36842.service: Deactivated successfully.
Feb 13 20:55:27.637435 systemd[1]: session-58.scope: Deactivated successfully.
Feb 13 20:55:27.638567 systemd-logind[1418]: Session 58 logged out. Waiting for processes to exit.
Feb 13 20:55:27.639825 systemd-logind[1418]: Removed session 58.
Feb 13 20:55:32.577372 kubelet[2422]: E0213 20:55:32.577333 2422 kubelet.go:3008] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Feb 13 20:55:32.645510 systemd[1]: Started sshd@58-10.0.0.9:22-10.0.0.1:34906.service - OpenSSH per-connection server daemon (10.0.0.1:34906).
Feb 13 20:55:32.685767 sshd[3597]: Accepted publickey for core from 10.0.0.1 port 34906 ssh2: RSA SHA256:cMrn/QNbOj3/muQN24sDQmRTV2Hbz1QD+PbuA/uipHg
Feb 13 20:55:32.687008 sshd[3597]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 20:55:32.690705 systemd-logind[1418]: New session 59 of user core.
Feb 13 20:55:32.700217 systemd[1]: Started session-59.scope - Session 59 of User core.
Feb 13 20:55:32.809517 sshd[3597]: pam_unix(sshd:session): session closed for user core
Feb 13 20:55:32.813469 systemd[1]: sshd@58-10.0.0.9:22-10.0.0.1:34906.service: Deactivated successfully.
Feb 13 20:55:32.815298 systemd[1]: session-59.scope: Deactivated successfully.
Feb 13 20:55:32.815961 systemd-logind[1418]: Session 59 logged out. Waiting for processes to exit.
Feb 13 20:55:32.816838 systemd-logind[1418]: Removed session 59.
Feb 13 20:55:34.471458 kubelet[2422]: E0213 20:55:34.471370 2422 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 13 20:55:34.472426 kubelet[2422]: E0213 20:55:34.472301 2422 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"install-cni-plugin\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/flannel/flannel-cni-plugin:v1.1.2\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/flannel/flannel-cni-plugin:v1.1.2\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/flannel/flannel-cni-plugin/manifests/sha256:773049a6464eca8d26c97fb59410cb8385fe59b2f9a68ce7ae2fbe2d49d63585: 429 Too Many Requests - Server message: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit\"" pod="kube-flannel/kube-flannel-ds-chdvq" podUID="21e6622a-36e3-47a8-b025-f56eaad98d84"
Feb 13 20:55:37.579663 kubelet[2422]: E0213 20:55:37.579623 2422 kubelet.go:3008] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Feb 13 20:55:37.820610 systemd[1]: Started sshd@59-10.0.0.9:22-10.0.0.1:34920.service - OpenSSH per-connection server daemon (10.0.0.1:34920).
Feb 13 20:55:37.858559 sshd[3611]: Accepted publickey for core from 10.0.0.1 port 34920 ssh2: RSA SHA256:cMrn/QNbOj3/muQN24sDQmRTV2Hbz1QD+PbuA/uipHg
Feb 13 20:55:37.859669 sshd[3611]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 20:55:37.863653 systemd-logind[1418]: New session 60 of user core.
Feb 13 20:55:37.870228 systemd[1]: Started session-60.scope - Session 60 of User core.
Feb 13 20:55:37.976314 sshd[3611]: pam_unix(sshd:session): session closed for user core
Feb 13 20:55:37.979457 systemd[1]: sshd@59-10.0.0.9:22-10.0.0.1:34920.service: Deactivated successfully.
Feb 13 20:55:37.981051 systemd[1]: session-60.scope: Deactivated successfully.
Feb 13 20:55:37.982452 systemd-logind[1418]: Session 60 logged out. Waiting for processes to exit.
Feb 13 20:55:37.983312 systemd-logind[1418]: Removed session 60.
Feb 13 20:55:40.530913 update_engine[1426]: I20250213 20:55:40.530834 1426 prefs.cc:52] certificate-report-to-send-update not present in /var/lib/update_engine/prefs
Feb 13 20:55:40.530913 update_engine[1426]: I20250213 20:55:40.530897 1426 prefs.cc:52] certificate-report-to-send-download not present in /var/lib/update_engine/prefs
Feb 13 20:55:40.531335 update_engine[1426]: I20250213 20:55:40.531193 1426 prefs.cc:52] aleph-version not present in /var/lib/update_engine/prefs
Feb 13 20:55:40.531648 update_engine[1426]: I20250213 20:55:40.531534 1426 omaha_request_params.cc:62] Current group set to lts
Feb 13 20:55:40.531648 update_engine[1426]: I20250213 20:55:40.531622 1426 update_attempter.cc:499] Already updated boot flags. Skipping.
Feb 13 20:55:40.531648 update_engine[1426]: I20250213 20:55:40.531630 1426 update_attempter.cc:643] Scheduling an action processor start.
Feb 13 20:55:40.531648 update_engine[1426]: I20250213 20:55:40.531646 1426 action_processor.cc:36] ActionProcessor::StartProcessing: OmahaRequestAction
Feb 13 20:55:40.531762 update_engine[1426]: I20250213 20:55:40.531672 1426 prefs.cc:52] previous-version not present in /var/lib/update_engine/prefs
Feb 13 20:55:40.531762 update_engine[1426]: I20250213 20:55:40.531713 1426 omaha_request_action.cc:271] Posting an Omaha request to disabled
Feb 13 20:55:40.531762 update_engine[1426]: I20250213 20:55:40.531720 1426 omaha_request_action.cc:272] Request:
Feb 13 20:55:40.531762 update_engine[1426]:
Feb 13 20:55:40.531762 update_engine[1426]:
Feb 13 20:55:40.531762 update_engine[1426]:
Feb 13 20:55:40.531762 update_engine[1426]:
Feb 13 20:55:40.531762 update_engine[1426]:
Feb 13 20:55:40.531762 update_engine[1426]:
Feb 13 20:55:40.531762 update_engine[1426]:
Feb 13 20:55:40.531762 update_engine[1426]:
Feb 13 20:55:40.531762 update_engine[1426]: I20250213 20:55:40.531726 1426 libcurl_http_fetcher.cc:47] Starting/Resuming transfer
Feb 13 20:55:40.532000 locksmithd[1463]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_CHECKING_FOR_UPDATE" NewVersion=0.0.0 NewSize=0
Feb 13 20:55:40.532767 update_engine[1426]: I20250213 20:55:40.532728 1426 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP
Feb 13 20:55:40.532985 update_engine[1426]: I20250213 20:55:40.532951 1426 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds.
Feb 13 20:55:40.541690 update_engine[1426]: E20250213 20:55:40.541644 1426 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled
Feb 13 20:55:40.541756 update_engine[1426]: I20250213 20:55:40.541710 1426 libcurl_http_fetcher.cc:283] No HTTP response, retry 1
Feb 13 20:55:42.581244 kubelet[2422]: E0213 20:55:42.581205 2422 kubelet.go:3008] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Feb 13 20:55:42.986641 systemd[1]: Started sshd@60-10.0.0.9:22-10.0.0.1:53336.service - OpenSSH per-connection server daemon (10.0.0.1:53336).
Feb 13 20:55:43.024479 sshd[3628]: Accepted publickey for core from 10.0.0.1 port 53336 ssh2: RSA SHA256:cMrn/QNbOj3/muQN24sDQmRTV2Hbz1QD+PbuA/uipHg
Feb 13 20:55:43.026010 sshd[3628]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 20:55:43.029500 systemd-logind[1418]: New session 61 of user core.
Feb 13 20:55:43.040215 systemd[1]: Started session-61.scope - Session 61 of User core.
Feb 13 20:55:43.146735 sshd[3628]: pam_unix(sshd:session): session closed for user core
Feb 13 20:55:43.150347 systemd[1]: sshd@60-10.0.0.9:22-10.0.0.1:53336.service: Deactivated successfully.
Feb 13 20:55:43.152001 systemd[1]: session-61.scope: Deactivated successfully.
Feb 13 20:55:43.153580 systemd-logind[1418]: Session 61 logged out. Waiting for processes to exit.
Feb 13 20:55:43.154504 systemd-logind[1418]: Removed session 61.
Feb 13 20:55:47.582417 kubelet[2422]: E0213 20:55:47.582328 2422 kubelet.go:3008] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Feb 13 20:55:48.160597 systemd[1]: Started sshd@61-10.0.0.9:22-10.0.0.1:53342.service - OpenSSH per-connection server daemon (10.0.0.1:53342).
Feb 13 20:55:48.198607 sshd[3643]: Accepted publickey for core from 10.0.0.1 port 53342 ssh2: RSA SHA256:cMrn/QNbOj3/muQN24sDQmRTV2Hbz1QD+PbuA/uipHg
Feb 13 20:55:48.199746 sshd[3643]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 20:55:48.203142 systemd-logind[1418]: New session 62 of user core.
Feb 13 20:55:48.211224 systemd[1]: Started session-62.scope - Session 62 of User core.
Feb 13 20:55:48.316635 sshd[3643]: pam_unix(sshd:session): session closed for user core
Feb 13 20:55:48.319722 systemd[1]: sshd@61-10.0.0.9:22-10.0.0.1:53342.service: Deactivated successfully.
Feb 13 20:55:48.321317 systemd[1]: session-62.scope: Deactivated successfully.
Feb 13 20:55:48.321870 systemd-logind[1418]: Session 62 logged out. Waiting for processes to exit.
Feb 13 20:55:48.322680 systemd-logind[1418]: Removed session 62.
Feb 13 20:55:48.469300 kubelet[2422]: E0213 20:55:48.469170 2422 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 20:55:48.469930 kubelet[2422]: E0213 20:55:48.469876 2422 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"install-cni-plugin\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/flannel/flannel-cni-plugin:v1.1.2\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/flannel/flannel-cni-plugin:v1.1.2\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/flannel/flannel-cni-plugin/manifests/sha256:773049a6464eca8d26c97fb59410cb8385fe59b2f9a68ce7ae2fbe2d49d63585: 429 Too Many Requests - Server message: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit\"" pod="kube-flannel/kube-flannel-ds-chdvq" podUID="21e6622a-36e3-47a8-b025-f56eaad98d84" Feb 13 20:55:50.469537 kubelet[2422]: E0213 20:55:50.469456 2422 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 20:55:50.530799 update_engine[1426]: I20250213 20:55:50.530720 1426 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Feb 13 20:55:50.531178 update_engine[1426]: I20250213 20:55:50.530974 1426 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Feb 13 20:55:50.531178 update_engine[1426]: I20250213 20:55:50.531158 1426 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. Feb 13 20:55:50.544694 update_engine[1426]: E20250213 20:55:50.544651 1426 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Feb 13 20:55:50.544755 update_engine[1426]: I20250213 20:55:50.544709 1426 libcurl_http_fetcher.cc:283] No HTTP response, retry 2 Feb 13 20:55:52.583176 kubelet[2422]: E0213 20:55:52.583125 2422 kubelet.go:3008] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Feb 13 20:55:53.327544 systemd[1]: Started sshd@62-10.0.0.9:22-10.0.0.1:40896.service - OpenSSH per-connection server daemon (10.0.0.1:40896). Feb 13 20:55:53.365372 sshd[3660]: Accepted publickey for core from 10.0.0.1 port 40896 ssh2: RSA SHA256:cMrn/QNbOj3/muQN24sDQmRTV2Hbz1QD+PbuA/uipHg Feb 13 20:55:53.366700 sshd[3660]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 20:55:53.370886 systemd-logind[1418]: New session 63 of user core. Feb 13 20:55:53.382229 systemd[1]: Started session-63.scope - Session 63 of User core. Feb 13 20:55:53.489031 sshd[3660]: pam_unix(sshd:session): session closed for user core Feb 13 20:55:53.491705 systemd[1]: sshd@62-10.0.0.9:22-10.0.0.1:40896.service: Deactivated successfully. Feb 13 20:55:53.494624 systemd[1]: session-63.scope: Deactivated successfully. Feb 13 20:55:53.495870 systemd-logind[1418]: Session 63 logged out. Waiting for processes to exit. Feb 13 20:55:53.496865 systemd-logind[1418]: Removed session 63. 
Feb 13 20:55:55.469919 kubelet[2422]: E0213 20:55:55.469833 2422 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 20:55:57.584278 kubelet[2422]: E0213 20:55:57.584239 2422 kubelet.go:3008] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Feb 13 20:55:58.499652 systemd[1]: Started sshd@63-10.0.0.9:22-10.0.0.1:40906.service - OpenSSH per-connection server daemon (10.0.0.1:40906). Feb 13 20:55:58.537609 sshd[3677]: Accepted publickey for core from 10.0.0.1 port 40906 ssh2: RSA SHA256:cMrn/QNbOj3/muQN24sDQmRTV2Hbz1QD+PbuA/uipHg Feb 13 20:55:58.538790 sshd[3677]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 20:55:58.542622 systemd-logind[1418]: New session 64 of user core. Feb 13 20:55:58.549216 systemd[1]: Started session-64.scope - Session 64 of User core. Feb 13 20:55:58.657032 sshd[3677]: pam_unix(sshd:session): session closed for user core Feb 13 20:55:58.660270 systemd[1]: sshd@63-10.0.0.9:22-10.0.0.1:40906.service: Deactivated successfully. Feb 13 20:55:58.662864 systemd[1]: session-64.scope: Deactivated successfully. Feb 13 20:55:58.664887 systemd-logind[1418]: Session 64 logged out. Waiting for processes to exit. Feb 13 20:55:58.665779 systemd-logind[1418]: Removed session 64. Feb 13 20:56:00.530684 update_engine[1426]: I20250213 20:56:00.530171 1426 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Feb 13 20:56:00.530684 update_engine[1426]: I20250213 20:56:00.530475 1426 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Feb 13 20:56:00.530684 update_engine[1426]: I20250213 20:56:00.530639 1426 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. Feb 13 20:56:00.534927 update_engine[1426]: E20250213 20:56:00.534853 1426 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Feb 13 20:56:00.534927 update_engine[1426]: I20250213 20:56:00.534907 1426 libcurl_http_fetcher.cc:283] No HTTP response, retry 3 Feb 13 20:56:01.469659 kubelet[2422]: E0213 20:56:01.469622 2422 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 20:56:01.471023 kubelet[2422]: E0213 20:56:01.470916 2422 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"install-cni-plugin\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/flannel/flannel-cni-plugin:v1.1.2\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/flannel/flannel-cni-plugin:v1.1.2\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/flannel/flannel-cni-plugin/manifests/sha256:773049a6464eca8d26c97fb59410cb8385fe59b2f9a68ce7ae2fbe2d49d63585: 429 Too Many Requests - Server message: toomanyrequests: You have reached your pull rate limit. 
You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit\"" pod="kube-flannel/kube-flannel-ds-chdvq" podUID="21e6622a-36e3-47a8-b025-f56eaad98d84" Feb 13 20:56:02.585640 kubelet[2422]: E0213 20:56:02.585597 2422 kubelet.go:3008] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Feb 13 20:56:03.671678 systemd[1]: Started sshd@64-10.0.0.9:22-10.0.0.1:36120.service - OpenSSH per-connection server daemon (10.0.0.1:36120). Feb 13 20:56:03.710363 sshd[3692]: Accepted publickey for core from 10.0.0.1 port 36120 ssh2: RSA SHA256:cMrn/QNbOj3/muQN24sDQmRTV2Hbz1QD+PbuA/uipHg Feb 13 20:56:03.711496 sshd[3692]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 20:56:03.714734 systemd-logind[1418]: New session 65 of user core. Feb 13 20:56:03.723220 systemd[1]: Started session-65.scope - Session 65 of User core. Feb 13 20:56:03.828523 sshd[3692]: pam_unix(sshd:session): session closed for user core Feb 13 20:56:03.831666 systemd[1]: sshd@64-10.0.0.9:22-10.0.0.1:36120.service: Deactivated successfully. Feb 13 20:56:03.833905 systemd[1]: session-65.scope: Deactivated successfully. Feb 13 20:56:03.834892 systemd-logind[1418]: Session 65 logged out. Waiting for processes to exit. Feb 13 20:56:03.835971 systemd-logind[1418]: Removed session 65. Feb 13 20:56:07.586694 kubelet[2422]: E0213 20:56:07.586593 2422 kubelet.go:3008] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Feb 13 20:56:08.838572 systemd[1]: Started sshd@65-10.0.0.9:22-10.0.0.1:36126.service - OpenSSH per-connection server daemon (10.0.0.1:36126). Feb 13 20:56:08.876438 sshd[3706]: Accepted publickey for core from 10.0.0.1 port 36126 ssh2: RSA SHA256:cMrn/QNbOj3/muQN24sDQmRTV2Hbz1QD+PbuA/uipHg Feb 13 20:56:08.877646 sshd[3706]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 20:56:08.880990 systemd-logind[1418]: New session 66 of user core. Feb 13 20:56:08.887227 systemd[1]: Started session-66.scope - Session 66 of User core. Feb 13 20:56:08.995985 sshd[3706]: pam_unix(sshd:session): session closed for user core Feb 13 20:56:08.999315 systemd[1]: sshd@65-10.0.0.9:22-10.0.0.1:36126.service: Deactivated successfully. Feb 13 20:56:09.001963 systemd[1]: session-66.scope: Deactivated successfully. Feb 13 20:56:09.002911 systemd-logind[1418]: Session 66 logged out. Waiting for processes to exit. Feb 13 20:56:09.003736 systemd-logind[1418]: Removed session 66. Feb 13 20:56:10.530650 update_engine[1426]: I20250213 20:56:10.530560 1426 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Feb 13 20:56:10.531033 update_engine[1426]: I20250213 20:56:10.530897 1426 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Feb 13 20:56:10.531076 update_engine[1426]: I20250213 20:56:10.531050 1426 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. 
Feb 13 20:56:10.536202 update_engine[1426]: E20250213 20:56:10.536161 1426 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Feb 13 20:56:10.536267 update_engine[1426]: I20250213 20:56:10.536214 1426 libcurl_http_fetcher.cc:297] Transfer resulted in an error (0), 0 bytes downloaded Feb 13 20:56:10.536267 update_engine[1426]: I20250213 20:56:10.536224 1426 omaha_request_action.cc:617] Omaha request response: Feb 13 20:56:10.536309 update_engine[1426]: E20250213 20:56:10.536291 1426 omaha_request_action.cc:636] Omaha request network transfer failed. Feb 13 20:56:10.536330 update_engine[1426]: I20250213 20:56:10.536306 1426 action_processor.cc:68] ActionProcessor::ActionComplete: OmahaRequestAction action failed. Aborting processing. Feb 13 20:56:10.536330 update_engine[1426]: I20250213 20:56:10.536311 1426 action_processor.cc:73] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction Feb 13 20:56:10.536330 update_engine[1426]: I20250213 20:56:10.536316 1426 update_attempter.cc:306] Processing Done. Feb 13 20:56:10.536435 update_engine[1426]: E20250213 20:56:10.536329 1426 update_attempter.cc:619] Update failed. Feb 13 20:56:10.536435 update_engine[1426]: I20250213 20:56:10.536335 1426 utils.cc:600] Converting error code 2000 to kActionCodeOmahaErrorInHTTPResponse Feb 13 20:56:10.536435 update_engine[1426]: I20250213 20:56:10.536339 1426 payload_state.cc:97] Updating payload state for error code: 37 (kActionCodeOmahaErrorInHTTPResponse) Feb 13 20:56:10.536435 update_engine[1426]: I20250213 20:56:10.536345 1426 payload_state.cc:103] Ignoring failures until we get a valid Omaha response. Feb 13 20:56:10.536435 update_engine[1426]: I20250213 20:56:10.536408 1426 action_processor.cc:36] ActionProcessor::StartProcessing: OmahaRequestAction Feb 13 20:56:10.536435 update_engine[1426]: I20250213 20:56:10.536428 1426 omaha_request_action.cc:271] Posting an Omaha request to disabled Feb 13 20:56:10.536435 update_engine[1426]: I20250213 20:56:10.536434 1426 omaha_request_action.cc:272] Request: Feb 13 20:56:10.536435 update_engine[1426]: [request XML not captured] Feb 13 20:56:10.536665 update_engine[1426]: I20250213 20:56:10.536439 1426 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Feb 13 20:56:10.536665 update_engine[1426]: I20250213 20:56:10.536572 1426 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Feb 13 20:56:10.536758 update_engine[1426]: I20250213 20:56:10.536686 1426 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds.
Feb 13 20:56:10.537012 locksmithd[1463]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_REPORTING_ERROR_EVENT" NewVersion=0.0.0 NewSize=0 Feb 13 20:56:10.540143 update_engine[1426]: E20250213 20:56:10.540102 1426 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Feb 13 20:56:10.540200 update_engine[1426]: I20250213 20:56:10.540155 1426 libcurl_http_fetcher.cc:297] Transfer resulted in an error (0), 0 bytes downloaded Feb 13 20:56:10.540200 update_engine[1426]: I20250213 20:56:10.540164 1426 omaha_request_action.cc:617] Omaha request response: Feb 13 20:56:10.540200 update_engine[1426]: I20250213 20:56:10.540179 1426 action_processor.cc:65] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction Feb 13 20:56:10.540200 update_engine[1426]: I20250213 20:56:10.540184 1426 action_processor.cc:73] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction Feb 13 20:56:10.540200 update_engine[1426]: I20250213 20:56:10.540189 1426 update_attempter.cc:306] Processing Done. Feb 13 20:56:10.540200 update_engine[1426]: I20250213 20:56:10.540194 1426 update_attempter.cc:310] Error event sent. Feb 13 20:56:10.540319 update_engine[1426]: I20250213 20:56:10.540201 1426 update_check_scheduler.cc:74] Next update check in 45m59s Feb 13 20:56:10.540431 locksmithd[1463]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_IDLE" NewVersion=0.0.0 NewSize=0 Feb 13 20:56:11.470113 kubelet[2422]: E0213 20:56:11.470058 2422 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 20:56:12.470229 kubelet[2422]: E0213 20:56:12.470198 2422 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 20:56:12.471328 kubelet[2422]: E0213 20:56:12.471235 2422 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"install-cni-plugin\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/flannel/flannel-cni-plugin:v1.1.2\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/flannel/flannel-cni-plugin:v1.1.2\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/flannel/flannel-cni-plugin/manifests/sha256:773049a6464eca8d26c97fb59410cb8385fe59b2f9a68ce7ae2fbe2d49d63585: 429 Too Many Requests - Server message: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit\"" pod="kube-flannel/kube-flannel-ds-chdvq" podUID="21e6622a-36e3-47a8-b025-f56eaad98d84" Feb 13 20:56:12.587158 kubelet[2422]: E0213 20:56:12.587117 2422 kubelet.go:3008] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Feb 13 20:56:14.006759 systemd[1]: Started sshd@66-10.0.0.9:22-10.0.0.1:53682.service - OpenSSH per-connection server daemon (10.0.0.1:53682). Feb 13 20:56:14.044319 sshd[3720]: Accepted publickey for core from 10.0.0.1 port 53682 ssh2: RSA SHA256:cMrn/QNbOj3/muQN24sDQmRTV2Hbz1QD+PbuA/uipHg Feb 13 20:56:14.045451 sshd[3720]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 20:56:14.049534 systemd-logind[1418]: New session 67 of user core. 
Feb 13 20:56:14.062217 systemd[1]: Started session-67.scope - Session 67 of User core. Feb 13 20:56:14.167617 sshd[3720]: pam_unix(sshd:session): session closed for user core Feb 13 20:56:14.170700 systemd[1]: sshd@66-10.0.0.9:22-10.0.0.1:53682.service: Deactivated successfully. Feb 13 20:56:14.172305 systemd[1]: session-67.scope: Deactivated successfully. Feb 13 20:56:14.173461 systemd-logind[1418]: Session 67 logged out. Waiting for processes to exit. Feb 13 20:56:14.174263 systemd-logind[1418]: Removed session 67. Feb 13 20:56:17.588789 kubelet[2422]: E0213 20:56:17.588742 2422 kubelet.go:3008] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Feb 13 20:56:19.181672 systemd[1]: Started sshd@67-10.0.0.9:22-10.0.0.1:53684.service - OpenSSH per-connection server daemon (10.0.0.1:53684). Feb 13 20:56:19.218908 sshd[3735]: Accepted publickey for core from 10.0.0.1 port 53684 ssh2: RSA SHA256:cMrn/QNbOj3/muQN24sDQmRTV2Hbz1QD+PbuA/uipHg Feb 13 20:56:19.220124 sshd[3735]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 20:56:19.223396 systemd-logind[1418]: New session 68 of user core. Feb 13 20:56:19.233296 systemd[1]: Started session-68.scope - Session 68 of User core. Feb 13 20:56:19.341383 sshd[3735]: pam_unix(sshd:session): session closed for user core Feb 13 20:56:19.345040 systemd[1]: sshd@67-10.0.0.9:22-10.0.0.1:53684.service: Deactivated successfully. Feb 13 20:56:19.346726 systemd[1]: session-68.scope: Deactivated successfully. Feb 13 20:56:19.347336 systemd-logind[1418]: Session 68 logged out. Waiting for processes to exit. Feb 13 20:56:19.348125 systemd-logind[1418]: Removed session 68. Feb 13 20:56:22.589476 kubelet[2422]: E0213 20:56:22.589414 2422 kubelet.go:3008] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Feb 13 20:56:24.355569 systemd[1]: Started sshd@68-10.0.0.9:22-10.0.0.1:57530.service - OpenSSH per-connection server daemon (10.0.0.1:57530). Feb 13 20:56:24.393001 sshd[3753]: Accepted publickey for core from 10.0.0.1 port 57530 ssh2: RSA SHA256:cMrn/QNbOj3/muQN24sDQmRTV2Hbz1QD+PbuA/uipHg Feb 13 20:56:24.394170 sshd[3753]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 20:56:24.397946 systemd-logind[1418]: New session 69 of user core. Feb 13 20:56:24.411275 systemd[1]: Started session-69.scope - Session 69 of User core. Feb 13 20:56:24.518902 sshd[3753]: pam_unix(sshd:session): session closed for user core Feb 13 20:56:24.521980 systemd[1]: sshd@68-10.0.0.9:22-10.0.0.1:57530.service: Deactivated successfully. Feb 13 20:56:24.523699 systemd[1]: session-69.scope: Deactivated successfully. Feb 13 20:56:24.524359 systemd-logind[1418]: Session 69 logged out. Waiting for processes to exit. Feb 13 20:56:24.525353 systemd-logind[1418]: Removed session 69. 
Feb 13 20:56:27.469515 kubelet[2422]: E0213 20:56:27.469474 2422 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 20:56:27.470459 containerd[1434]: time="2025-02-13T20:56:27.470407040Z" level=info msg="PullImage \"docker.io/flannel/flannel-cni-plugin:v1.1.2\"" Feb 13 20:56:27.590413 kubelet[2422]: E0213 20:56:27.590351 2422 kubelet.go:3008] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Feb 13 20:56:28.830601 containerd[1434]: time="2025-02-13T20:56:28.830506138Z" level=error msg="PullImage \"docker.io/flannel/flannel-cni-plugin:v1.1.2\" failed" error="failed to pull and unpack image \"docker.io/flannel/flannel-cni-plugin:v1.1.2\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/flannel/flannel-cni-plugin/manifests/sha256:773049a6464eca8d26c97fb59410cb8385fe59b2f9a68ce7ae2fbe2d49d63585: 429 Too Many Requests - Server message: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit" Feb 13 20:56:28.830601 containerd[1434]: time="2025-02-13T20:56:28.830577657Z" level=info msg="stop pulling image docker.io/flannel/flannel-cni-plugin:v1.1.2: active requests=0, bytes read=13143" Feb 13 20:56:28.830994 kubelet[2422]: E0213 20:56:28.830727 2422 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = failed to pull and unpack image \"docker.io/flannel/flannel-cni-plugin:v1.1.2\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/flannel/flannel-cni-plugin/manifests/sha256:773049a6464eca8d26c97fb59410cb8385fe59b2f9a68ce7ae2fbe2d49d63585: 429 Too Many Requests - Server message: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit" image="docker.io/flannel/flannel-cni-plugin:v1.1.2" Feb 13 20:56:28.830994 kubelet[2422]: E0213 20:56:28.830775 2422 kuberuntime_image.go:55] "Failed to pull image" err="failed to pull and unpack image \"docker.io/flannel/flannel-cni-plugin:v1.1.2\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/flannel/flannel-cni-plugin/manifests/sha256:773049a6464eca8d26c97fb59410cb8385fe59b2f9a68ce7ae2fbe2d49d63585: 429 Too Many Requests - Server message: toomanyrequests: You have reached your pull rate limit. 
You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit" image="docker.io/flannel/flannel-cni-plugin:v1.1.2" Feb 13 20:56:28.831283 kubelet[2422]: E0213 20:56:28.830866 2422 kuberuntime_manager.go:1341] "Unhandled Error" err="init container &Container{Name:install-cni-plugin,Image:docker.io/flannel/flannel-cni-plugin:v1.1.2,Command:[cp],Args:[-f /flannel /opt/cni/bin/flannel],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:cni-plugin,ReadOnly:false,MountPath:/opt/cni/bin,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-w58p2,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod kube-flannel-ds-chdvq_kube-flannel(21e6622a-36e3-47a8-b025-f56eaad98d84): ErrImagePull: failed to pull and unpack image \"docker.io/flannel/flannel-cni-plugin:v1.1.2\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/flannel/flannel-cni-plugin/manifests/sha256:773049a6464eca8d26c97fb59410cb8385fe59b2f9a68ce7ae2fbe2d49d63585: 429 Too Many Requests - Server message: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit" logger="UnhandledError" Feb 13 20:56:28.832048 kubelet[2422]: E0213 20:56:28.832007 2422 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"install-cni-plugin\" with ErrImagePull: \"failed to pull and unpack image \\\"docker.io/flannel/flannel-cni-plugin:v1.1.2\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/flannel/flannel-cni-plugin/manifests/sha256:773049a6464eca8d26c97fb59410cb8385fe59b2f9a68ce7ae2fbe2d49d63585: 429 Too Many Requests - Server message: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit\"" pod="kube-flannel/kube-flannel-ds-chdvq" podUID="21e6622a-36e3-47a8-b025-f56eaad98d84" Feb 13 20:56:29.529663 systemd[1]: Started sshd@69-10.0.0.9:22-10.0.0.1:57540.service - OpenSSH per-connection server daemon (10.0.0.1:57540). Feb 13 20:56:29.567289 sshd[3767]: Accepted publickey for core from 10.0.0.1 port 57540 ssh2: RSA SHA256:cMrn/QNbOj3/muQN24sDQmRTV2Hbz1QD+PbuA/uipHg Feb 13 20:56:29.568524 sshd[3767]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 20:56:29.572341 systemd-logind[1418]: New session 70 of user core. Feb 13 20:56:29.593242 systemd[1]: Started session-70.scope - Session 70 of User core. Feb 13 20:56:29.699620 sshd[3767]: pam_unix(sshd:session): session closed for user core Feb 13 20:56:29.702773 systemd[1]: sshd@69-10.0.0.9:22-10.0.0.1:57540.service: Deactivated successfully. Feb 13 20:56:29.704701 systemd[1]: session-70.scope: Deactivated successfully. 
Feb 13 20:56:29.705465 systemd-logind[1418]: Session 70 logged out. Waiting for processes to exit. Feb 13 20:56:29.706333 systemd-logind[1418]: Removed session 70. Feb 13 20:56:32.590878 kubelet[2422]: E0213 20:56:32.590835 2422 kubelet.go:3008] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Feb 13 20:56:34.713615 systemd[1]: Started sshd@70-10.0.0.9:22-10.0.0.1:37734.service - OpenSSH per-connection server daemon (10.0.0.1:37734). Feb 13 20:56:34.751017 sshd[3781]: Accepted publickey for core from 10.0.0.1 port 37734 ssh2: RSA SHA256:cMrn/QNbOj3/muQN24sDQmRTV2Hbz1QD+PbuA/uipHg Feb 13 20:56:34.752274 sshd[3781]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 20:56:34.757325 systemd-logind[1418]: New session 71 of user core. Feb 13 20:56:34.772220 systemd[1]: Started session-71.scope - Session 71 of User core. Feb 13 20:56:34.878297 sshd[3781]: pam_unix(sshd:session): session closed for user core Feb 13 20:56:34.881288 systemd[1]: sshd@70-10.0.0.9:22-10.0.0.1:37734.service: Deactivated successfully. Feb 13 20:56:34.883318 systemd[1]: session-71.scope: Deactivated successfully. Feb 13 20:56:34.883890 systemd-logind[1418]: Session 71 logged out. Waiting for processes to exit. Feb 13 20:56:34.884659 systemd-logind[1418]: Removed session 71. Feb 13 20:56:36.469646 kubelet[2422]: E0213 20:56:36.469552 2422 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 20:56:37.592281 kubelet[2422]: E0213 20:56:37.592240 2422 kubelet.go:3008] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Feb 13 20:56:39.469591 kubelet[2422]: E0213 20:56:39.469530 2422 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 20:56:39.470601 kubelet[2422]: E0213 20:56:39.470570 2422 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"install-cni-plugin\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/flannel/flannel-cni-plugin:v1.1.2\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/flannel/flannel-cni-plugin:v1.1.2\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/flannel/flannel-cni-plugin/manifests/sha256:773049a6464eca8d26c97fb59410cb8385fe59b2f9a68ce7ae2fbe2d49d63585: 429 Too Many Requests - Server message: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit\"" pod="kube-flannel/kube-flannel-ds-chdvq" podUID="21e6622a-36e3-47a8-b025-f56eaad98d84" Feb 13 20:56:39.890570 systemd[1]: Started sshd@71-10.0.0.9:22-10.0.0.1:37742.service - OpenSSH per-connection server daemon (10.0.0.1:37742). Feb 13 20:56:39.928852 sshd[3795]: Accepted publickey for core from 10.0.0.1 port 37742 ssh2: RSA SHA256:cMrn/QNbOj3/muQN24sDQmRTV2Hbz1QD+PbuA/uipHg Feb 13 20:56:39.930329 sshd[3795]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 20:56:39.933731 systemd-logind[1418]: New session 72 of user core. 
Feb 13 20:56:39.948224 systemd[1]: Started session-72.scope - Session 72 of User core. Feb 13 20:56:40.054254 sshd[3795]: pam_unix(sshd:session): session closed for user core Feb 13 20:56:40.057047 systemd-logind[1418]: Session 72 logged out. Waiting for processes to exit. Feb 13 20:56:40.057301 systemd[1]: sshd@71-10.0.0.9:22-10.0.0.1:37742.service: Deactivated successfully. Feb 13 20:56:40.058825 systemd[1]: session-72.scope: Deactivated successfully. Feb 13 20:56:40.060547 systemd-logind[1418]: Removed session 72. Feb 13 20:56:42.593229 kubelet[2422]: E0213 20:56:42.593200 2422 kubelet.go:3008] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Feb 13 20:56:45.070823 systemd[1]: Started sshd@72-10.0.0.9:22-10.0.0.1:35242.service - OpenSSH per-connection server daemon (10.0.0.1:35242). Feb 13 20:56:45.108220 sshd[3811]: Accepted publickey for core from 10.0.0.1 port 35242 ssh2: RSA SHA256:cMrn/QNbOj3/muQN24sDQmRTV2Hbz1QD+PbuA/uipHg Feb 13 20:56:45.109392 sshd[3811]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 20:56:45.113059 systemd-logind[1418]: New session 73 of user core. Feb 13 20:56:45.125278 systemd[1]: Started session-73.scope - Session 73 of User core. Feb 13 20:56:45.229313 sshd[3811]: pam_unix(sshd:session): session closed for user core Feb 13 20:56:45.232343 systemd[1]: sshd@72-10.0.0.9:22-10.0.0.1:35242.service: Deactivated successfully. Feb 13 20:56:45.233908 systemd[1]: session-73.scope: Deactivated successfully. Feb 13 20:56:45.235171 systemd-logind[1418]: Session 73 logged out. Waiting for processes to exit. Feb 13 20:56:45.236074 systemd-logind[1418]: Removed session 73. Feb 13 20:56:47.594572 kubelet[2422]: E0213 20:56:47.594515 2422 kubelet.go:3008] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Feb 13 20:56:50.239655 systemd[1]: Started sshd@73-10.0.0.9:22-10.0.0.1:35246.service - OpenSSH per-connection server daemon (10.0.0.1:35246). Feb 13 20:56:50.278477 sshd[3828]: Accepted publickey for core from 10.0.0.1 port 35246 ssh2: RSA SHA256:cMrn/QNbOj3/muQN24sDQmRTV2Hbz1QD+PbuA/uipHg Feb 13 20:56:50.279663 sshd[3828]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 20:56:50.283139 systemd-logind[1418]: New session 74 of user core. Feb 13 20:56:50.293264 systemd[1]: Started session-74.scope - Session 74 of User core. Feb 13 20:56:50.400976 sshd[3828]: pam_unix(sshd:session): session closed for user core Feb 13 20:56:50.404215 systemd[1]: sshd@73-10.0.0.9:22-10.0.0.1:35246.service: Deactivated successfully. Feb 13 20:56:50.405782 systemd[1]: session-74.scope: Deactivated successfully. Feb 13 20:56:50.407158 systemd-logind[1418]: Session 74 logged out. Waiting for processes to exit. Feb 13 20:56:50.407990 systemd-logind[1418]: Removed session 74. 
Feb 13 20:56:52.469760 kubelet[2422]: E0213 20:56:52.469625 2422 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 20:56:52.471266 kubelet[2422]: E0213 20:56:52.470788 2422 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"install-cni-plugin\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/flannel/flannel-cni-plugin:v1.1.2\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/flannel/flannel-cni-plugin:v1.1.2\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/flannel/flannel-cni-plugin/manifests/sha256:773049a6464eca8d26c97fb59410cb8385fe59b2f9a68ce7ae2fbe2d49d63585: 429 Too Many Requests - Server message: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit\"" pod="kube-flannel/kube-flannel-ds-chdvq" podUID="21e6622a-36e3-47a8-b025-f56eaad98d84" Feb 13 20:56:52.596044 kubelet[2422]: E0213 20:56:52.596018 2422 kubelet.go:3008] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Feb 13 20:56:55.411614 systemd[1]: Started sshd@74-10.0.0.9:22-10.0.0.1:38300.service - OpenSSH per-connection server daemon (10.0.0.1:38300). Feb 13 20:56:55.448970 sshd[3842]: Accepted publickey for core from 10.0.0.1 port 38300 ssh2: RSA SHA256:cMrn/QNbOj3/muQN24sDQmRTV2Hbz1QD+PbuA/uipHg Feb 13 20:56:55.450134 sshd[3842]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 20:56:55.453487 systemd-logind[1418]: New session 75 of user core. Feb 13 20:56:55.462219 systemd[1]: Started session-75.scope - Session 75 of User core. Feb 13 20:56:55.568206 sshd[3842]: pam_unix(sshd:session): session closed for user core Feb 13 20:56:55.571352 systemd[1]: sshd@74-10.0.0.9:22-10.0.0.1:38300.service: Deactivated successfully. Feb 13 20:56:55.573826 systemd[1]: session-75.scope: Deactivated successfully. Feb 13 20:56:55.574548 systemd-logind[1418]: Session 75 logged out. Waiting for processes to exit. Feb 13 20:56:55.575307 systemd-logind[1418]: Removed session 75. Feb 13 20:56:56.470545 kubelet[2422]: E0213 20:56:56.470501 2422 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 20:56:57.597343 kubelet[2422]: E0213 20:56:57.597299 2422 kubelet.go:3008] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Feb 13 20:57:00.579140 systemd[1]: Started sshd@75-10.0.0.9:22-10.0.0.1:38312.service - OpenSSH per-connection server daemon (10.0.0.1:38312). Feb 13 20:57:00.616557 sshd[3856]: Accepted publickey for core from 10.0.0.1 port 38312 ssh2: RSA SHA256:cMrn/QNbOj3/muQN24sDQmRTV2Hbz1QD+PbuA/uipHg Feb 13 20:57:00.617771 sshd[3856]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 20:57:00.621183 systemd-logind[1418]: New session 76 of user core. Feb 13 20:57:00.627229 systemd[1]: Started session-76.scope - Session 76 of User core. 
Feb 13 20:57:00.734547 sshd[3856]: pam_unix(sshd:session): session closed for user core Feb 13 20:57:00.738074 systemd[1]: sshd@75-10.0.0.9:22-10.0.0.1:38312.service: Deactivated successfully. Feb 13 20:57:00.739884 systemd[1]: session-76.scope: Deactivated successfully. Feb 13 20:57:00.740720 systemd-logind[1418]: Session 76 logged out. Waiting for processes to exit. Feb 13 20:57:00.741505 systemd-logind[1418]: Removed session 76. Feb 13 20:57:02.598324 kubelet[2422]: E0213 20:57:02.598286 2422 kubelet.go:3008] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Feb 13 20:57:03.469443 kubelet[2422]: E0213 20:57:03.469410 2422 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 20:57:03.470217 kubelet[2422]: E0213 20:57:03.470165 2422 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"install-cni-plugin\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/flannel/flannel-cni-plugin:v1.1.2\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/flannel/flannel-cni-plugin:v1.1.2\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/flannel/flannel-cni-plugin/manifests/sha256:773049a6464eca8d26c97fb59410cb8385fe59b2f9a68ce7ae2fbe2d49d63585: 429 Too Many Requests - Server message: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit\"" pod="kube-flannel/kube-flannel-ds-chdvq" podUID="21e6622a-36e3-47a8-b025-f56eaad98d84" Feb 13 20:57:05.745701 systemd[1]: Started sshd@76-10.0.0.9:22-10.0.0.1:56856.service - OpenSSH per-connection server daemon (10.0.0.1:56856). Feb 13 20:57:05.783077 sshd[3871]: Accepted publickey for core from 10.0.0.1 port 56856 ssh2: RSA SHA256:cMrn/QNbOj3/muQN24sDQmRTV2Hbz1QD+PbuA/uipHg Feb 13 20:57:05.784230 sshd[3871]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 20:57:05.788114 systemd-logind[1418]: New session 77 of user core. Feb 13 20:57:05.799264 systemd[1]: Started session-77.scope - Session 77 of User core. Feb 13 20:57:05.907485 sshd[3871]: pam_unix(sshd:session): session closed for user core Feb 13 20:57:05.910631 systemd[1]: sshd@76-10.0.0.9:22-10.0.0.1:56856.service: Deactivated successfully. Feb 13 20:57:05.913626 systemd[1]: session-77.scope: Deactivated successfully. Feb 13 20:57:05.914447 systemd-logind[1418]: Session 77 logged out. Waiting for processes to exit. Feb 13 20:57:05.915249 systemd-logind[1418]: Removed session 77. Feb 13 20:57:07.599782 kubelet[2422]: E0213 20:57:07.599730 2422 kubelet.go:3008] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Feb 13 20:57:10.918645 systemd[1]: Started sshd@77-10.0.0.9:22-10.0.0.1:56866.service - OpenSSH per-connection server daemon (10.0.0.1:56866). Feb 13 20:57:10.956823 sshd[3886]: Accepted publickey for core from 10.0.0.1 port 56866 ssh2: RSA SHA256:cMrn/QNbOj3/muQN24sDQmRTV2Hbz1QD+PbuA/uipHg Feb 13 20:57:10.958036 sshd[3886]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 20:57:10.961571 systemd-logind[1418]: New session 78 of user core. 
Feb 13 20:57:10.968255 systemd[1]: Started session-78.scope - Session 78 of User core. Feb 13 20:57:11.080278 sshd[3886]: pam_unix(sshd:session): session closed for user core Feb 13 20:57:11.091712 systemd[1]: sshd@77-10.0.0.9:22-10.0.0.1:56866.service: Deactivated successfully. Feb 13 20:57:11.093296 systemd[1]: session-78.scope: Deactivated successfully. Feb 13 20:57:11.095121 systemd-logind[1418]: Session 78 logged out. Waiting for processes to exit. Feb 13 20:57:11.100336 systemd[1]: Started sshd@78-10.0.0.9:22-10.0.0.1:56868.service - OpenSSH per-connection server daemon (10.0.0.1:56868). Feb 13 20:57:11.101304 systemd-logind[1418]: Removed session 78. Feb 13 20:57:11.134872 sshd[3901]: Accepted publickey for core from 10.0.0.1 port 56868 ssh2: RSA SHA256:cMrn/QNbOj3/muQN24sDQmRTV2Hbz1QD+PbuA/uipHg Feb 13 20:57:11.136317 sshd[3901]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 20:57:11.139943 systemd-logind[1418]: New session 79 of user core. Feb 13 20:57:11.155277 systemd[1]: Started session-79.scope - Session 79 of User core. Feb 13 20:57:11.318940 sshd[3901]: pam_unix(sshd:session): session closed for user core Feb 13 20:57:11.329652 systemd[1]: sshd@78-10.0.0.9:22-10.0.0.1:56868.service: Deactivated successfully. Feb 13 20:57:11.331180 systemd[1]: session-79.scope: Deactivated successfully. Feb 13 20:57:11.333051 systemd-logind[1418]: Session 79 logged out. Waiting for processes to exit. Feb 13 20:57:11.340407 systemd[1]: Started sshd@79-10.0.0.9:22-10.0.0.1:56874.service - OpenSSH per-connection server daemon (10.0.0.1:56874). Feb 13 20:57:11.341523 systemd-logind[1418]: Removed session 79. Feb 13 20:57:11.376156 sshd[3914]: Accepted publickey for core from 10.0.0.1 port 56874 ssh2: RSA SHA256:cMrn/QNbOj3/muQN24sDQmRTV2Hbz1QD+PbuA/uipHg Feb 13 20:57:11.377512 sshd[3914]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 20:57:11.381642 systemd-logind[1418]: New session 80 of user core. Feb 13 20:57:11.394286 systemd[1]: Started session-80.scope - Session 80 of User core. Feb 13 20:57:12.076002 sshd[3914]: pam_unix(sshd:session): session closed for user core Feb 13 20:57:12.084993 systemd[1]: sshd@79-10.0.0.9:22-10.0.0.1:56874.service: Deactivated successfully. Feb 13 20:57:12.087189 systemd[1]: session-80.scope: Deactivated successfully. Feb 13 20:57:12.088521 systemd-logind[1418]: Session 80 logged out. Waiting for processes to exit. Feb 13 20:57:12.093376 systemd[1]: Started sshd@80-10.0.0.9:22-10.0.0.1:56886.service - OpenSSH per-connection server daemon (10.0.0.1:56886). Feb 13 20:57:12.094529 systemd-logind[1418]: Removed session 80. Feb 13 20:57:12.135263 sshd[3935]: Accepted publickey for core from 10.0.0.1 port 56886 ssh2: RSA SHA256:cMrn/QNbOj3/muQN24sDQmRTV2Hbz1QD+PbuA/uipHg Feb 13 20:57:12.136482 sshd[3935]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 20:57:12.140064 systemd-logind[1418]: New session 81 of user core. Feb 13 20:57:12.152243 systemd[1]: Started session-81.scope - Session 81 of User core. Feb 13 20:57:12.359667 sshd[3935]: pam_unix(sshd:session): session closed for user core Feb 13 20:57:12.367199 systemd[1]: sshd@80-10.0.0.9:22-10.0.0.1:56886.service: Deactivated successfully. Feb 13 20:57:12.369337 systemd[1]: session-81.scope: Deactivated successfully. Feb 13 20:57:12.370781 systemd-logind[1418]: Session 81 logged out. Waiting for processes to exit. 
Feb 13 20:57:12.378347 systemd[1]: Started sshd@81-10.0.0.9:22-10.0.0.1:56892.service - OpenSSH per-connection server daemon (10.0.0.1:56892). Feb 13 20:57:12.379466 systemd-logind[1418]: Removed session 81. Feb 13 20:57:12.413055 sshd[3947]: Accepted publickey for core from 10.0.0.1 port 56892 ssh2: RSA SHA256:cMrn/QNbOj3/muQN24sDQmRTV2Hbz1QD+PbuA/uipHg Feb 13 20:57:12.414396 sshd[3947]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 20:57:12.417898 systemd-logind[1418]: New session 82 of user core. Feb 13 20:57:12.424228 systemd[1]: Started session-82.scope - Session 82 of User core. Feb 13 20:57:12.529221 sshd[3947]: pam_unix(sshd:session): session closed for user core Feb 13 20:57:12.536454 systemd[1]: sshd@81-10.0.0.9:22-10.0.0.1:56892.service: Deactivated successfully. Feb 13 20:57:12.538074 systemd[1]: session-82.scope: Deactivated successfully. Feb 13 20:57:12.539240 systemd-logind[1418]: Session 82 logged out. Waiting for processes to exit. Feb 13 20:57:12.539986 systemd-logind[1418]: Removed session 82. Feb 13 20:57:12.600974 kubelet[2422]: E0213 20:57:12.600919 2422 kubelet.go:3008] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Feb 13 20:57:16.469361 kubelet[2422]: E0213 20:57:16.469313 2422 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 20:57:16.471146 kubelet[2422]: E0213 20:57:16.471107 2422 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"install-cni-plugin\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/flannel/flannel-cni-plugin:v1.1.2\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/flannel/flannel-cni-plugin:v1.1.2\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/flannel/flannel-cni-plugin/manifests/sha256:773049a6464eca8d26c97fb59410cb8385fe59b2f9a68ce7ae2fbe2d49d63585: 429 Too Many Requests - Server message: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit\"" pod="kube-flannel/kube-flannel-ds-chdvq" podUID="21e6622a-36e3-47a8-b025-f56eaad98d84" Feb 13 20:57:17.470038 kubelet[2422]: E0213 20:57:17.469997 2422 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 20:57:17.540709 systemd[1]: Started sshd@82-10.0.0.9:22-10.0.0.1:51174.service - OpenSSH per-connection server daemon (10.0.0.1:51174). Feb 13 20:57:17.578588 sshd[3962]: Accepted publickey for core from 10.0.0.1 port 51174 ssh2: RSA SHA256:cMrn/QNbOj3/muQN24sDQmRTV2Hbz1QD+PbuA/uipHg Feb 13 20:57:17.579785 sshd[3962]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 20:57:17.583422 systemd-logind[1418]: New session 83 of user core. Feb 13 20:57:17.594261 systemd[1]: Started session-83.scope - Session 83 of User core. 
Feb 13 20:57:17.602570 kubelet[2422]: E0213 20:57:17.602529 2422 kubelet.go:3008] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Feb 13 20:57:17.702298 sshd[3962]: pam_unix(sshd:session): session closed for user core Feb 13 20:57:17.705570 systemd[1]: sshd@82-10.0.0.9:22-10.0.0.1:51174.service: Deactivated successfully. Feb 13 20:57:17.709354 systemd[1]: session-83.scope: Deactivated successfully. Feb 13 20:57:17.709954 systemd-logind[1418]: Session 83 logged out. Waiting for processes to exit. Feb 13 20:57:17.710918 systemd-logind[1418]: Removed session 83. Feb 13 20:57:22.603929 kubelet[2422]: E0213 20:57:22.603885 2422 kubelet.go:3008] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Feb 13 20:57:22.712772 systemd[1]: Started sshd@83-10.0.0.9:22-10.0.0.1:40792.service - OpenSSH per-connection server daemon (10.0.0.1:40792). Feb 13 20:57:22.750786 sshd[3979]: Accepted publickey for core from 10.0.0.1 port 40792 ssh2: RSA SHA256:cMrn/QNbOj3/muQN24sDQmRTV2Hbz1QD+PbuA/uipHg Feb 13 20:57:22.751991 sshd[3979]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 20:57:22.755995 systemd-logind[1418]: New session 84 of user core. Feb 13 20:57:22.766265 systemd[1]: Started session-84.scope - Session 84 of User core. Feb 13 20:57:22.869560 sshd[3979]: pam_unix(sshd:session): session closed for user core Feb 13 20:57:22.872770 systemd[1]: sshd@83-10.0.0.9:22-10.0.0.1:40792.service: Deactivated successfully. Feb 13 20:57:22.875222 systemd[1]: session-84.scope: Deactivated successfully. Feb 13 20:57:22.876007 systemd-logind[1418]: Session 84 logged out. Waiting for processes to exit. Feb 13 20:57:22.876865 systemd-logind[1418]: Removed session 84. Feb 13 20:57:27.604731 kubelet[2422]: E0213 20:57:27.604678 2422 kubelet.go:3008] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Feb 13 20:57:27.880708 systemd[1]: Started sshd@84-10.0.0.9:22-10.0.0.1:40794.service - OpenSSH per-connection server daemon (10.0.0.1:40794). Feb 13 20:57:27.918592 sshd[3993]: Accepted publickey for core from 10.0.0.1 port 40794 ssh2: RSA SHA256:cMrn/QNbOj3/muQN24sDQmRTV2Hbz1QD+PbuA/uipHg Feb 13 20:57:27.919807 sshd[3993]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 20:57:27.923729 systemd-logind[1418]: New session 85 of user core. Feb 13 20:57:27.935250 systemd[1]: Started session-85.scope - Session 85 of User core. Feb 13 20:57:28.041377 sshd[3993]: pam_unix(sshd:session): session closed for user core Feb 13 20:57:28.044352 systemd[1]: sshd@84-10.0.0.9:22-10.0.0.1:40794.service: Deactivated successfully. Feb 13 20:57:28.046048 systemd[1]: session-85.scope: Deactivated successfully. Feb 13 20:57:28.047985 systemd-logind[1418]: Session 85 logged out. Waiting for processes to exit. Feb 13 20:57:28.049844 systemd-logind[1418]: Removed session 85. 
Feb 13 20:57:28.470436 kubelet[2422]: E0213 20:57:28.470402 2422 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 20:57:28.471069 kubelet[2422]: E0213 20:57:28.470969 2422 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"install-cni-plugin\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/flannel/flannel-cni-plugin:v1.1.2\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/flannel/flannel-cni-plugin:v1.1.2\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/flannel/flannel-cni-plugin/manifests/sha256:773049a6464eca8d26c97fb59410cb8385fe59b2f9a68ce7ae2fbe2d49d63585: 429 Too Many Requests - Server message: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit\"" pod="kube-flannel/kube-flannel-ds-chdvq" podUID="21e6622a-36e3-47a8-b025-f56eaad98d84" Feb 13 20:57:29.469619 kubelet[2422]: E0213 20:57:29.469585 2422 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 20:57:32.605922 kubelet[2422]: E0213 20:57:32.605874 2422 kubelet.go:3008] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Feb 13 20:57:33.054621 systemd[1]: Started sshd@85-10.0.0.9:22-10.0.0.1:35878.service - OpenSSH per-connection server daemon (10.0.0.1:35878). Feb 13 20:57:33.092131 sshd[4008]: Accepted publickey for core from 10.0.0.1 port 35878 ssh2: RSA SHA256:cMrn/QNbOj3/muQN24sDQmRTV2Hbz1QD+PbuA/uipHg Feb 13 20:57:33.093301 sshd[4008]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 20:57:33.097242 systemd-logind[1418]: New session 86 of user core. Feb 13 20:57:33.104242 systemd[1]: Started session-86.scope - Session 86 of User core. Feb 13 20:57:33.208625 sshd[4008]: pam_unix(sshd:session): session closed for user core Feb 13 20:57:33.212044 systemd[1]: sshd@85-10.0.0.9:22-10.0.0.1:35878.service: Deactivated successfully. Feb 13 20:57:33.214274 systemd[1]: session-86.scope: Deactivated successfully. Feb 13 20:57:33.215020 systemd-logind[1418]: Session 86 logged out. Waiting for processes to exit. Feb 13 20:57:33.215801 systemd-logind[1418]: Removed session 86. Feb 13 20:57:37.607131 kubelet[2422]: E0213 20:57:37.607011 2422 kubelet.go:3008] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Feb 13 20:57:38.223722 systemd[1]: Started sshd@86-10.0.0.9:22-10.0.0.1:35884.service - OpenSSH per-connection server daemon (10.0.0.1:35884). Feb 13 20:57:38.261468 sshd[4023]: Accepted publickey for core from 10.0.0.1 port 35884 ssh2: RSA SHA256:cMrn/QNbOj3/muQN24sDQmRTV2Hbz1QD+PbuA/uipHg Feb 13 20:57:38.262736 sshd[4023]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 20:57:38.266662 systemd-logind[1418]: New session 87 of user core. Feb 13 20:57:38.277281 systemd[1]: Started session-87.scope - Session 87 of User core. 
Feb 13 20:57:38.381669 sshd[4023]: pam_unix(sshd:session): session closed for user core Feb 13 20:57:38.385076 systemd[1]: sshd@86-10.0.0.9:22-10.0.0.1:35884.service: Deactivated successfully. Feb 13 20:57:38.386737 systemd[1]: session-87.scope: Deactivated successfully. Feb 13 20:57:38.387443 systemd-logind[1418]: Session 87 logged out. Waiting for processes to exit. Feb 13 20:57:38.389026 systemd-logind[1418]: Removed session 87. Feb 13 20:57:42.607897 kubelet[2422]: E0213 20:57:42.607823 2422 kubelet.go:3008] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Feb 13 20:57:43.396560 systemd[1]: Started sshd@87-10.0.0.9:22-10.0.0.1:57264.service - OpenSSH per-connection server daemon (10.0.0.1:57264). Feb 13 20:57:43.433845 sshd[4040]: Accepted publickey for core from 10.0.0.1 port 57264 ssh2: RSA SHA256:cMrn/QNbOj3/muQN24sDQmRTV2Hbz1QD+PbuA/uipHg Feb 13 20:57:43.435026 sshd[4040]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 20:57:43.438866 systemd-logind[1418]: New session 88 of user core. Feb 13 20:57:43.445226 systemd[1]: Started session-88.scope - Session 88 of User core. Feb 13 20:57:43.469541 kubelet[2422]: E0213 20:57:43.469509 2422 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 20:57:43.470198 kubelet[2422]: E0213 20:57:43.470169 2422 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"install-cni-plugin\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/flannel/flannel-cni-plugin:v1.1.2\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/flannel/flannel-cni-plugin:v1.1.2\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/flannel/flannel-cni-plugin/manifests/sha256:773049a6464eca8d26c97fb59410cb8385fe59b2f9a68ce7ae2fbe2d49d63585: 429 Too Many Requests - Server message: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit\"" pod="kube-flannel/kube-flannel-ds-chdvq" podUID="21e6622a-36e3-47a8-b025-f56eaad98d84" Feb 13 20:57:43.546581 sshd[4040]: pam_unix(sshd:session): session closed for user core Feb 13 20:57:43.549588 systemd[1]: sshd@87-10.0.0.9:22-10.0.0.1:57264.service: Deactivated successfully. Feb 13 20:57:43.551124 systemd[1]: session-88.scope: Deactivated successfully. Feb 13 20:57:43.551686 systemd-logind[1418]: Session 88 logged out. Waiting for processes to exit. Feb 13 20:57:43.552474 systemd-logind[1418]: Removed session 88. Feb 13 20:57:47.609265 kubelet[2422]: E0213 20:57:47.609155 2422 kubelet.go:3008] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Feb 13 20:57:48.560703 systemd[1]: Started sshd@88-10.0.0.9:22-10.0.0.1:57270.service - OpenSSH per-connection server daemon (10.0.0.1:57270). Feb 13 20:57:48.598399 sshd[4055]: Accepted publickey for core from 10.0.0.1 port 57270 ssh2: RSA SHA256:cMrn/QNbOj3/muQN24sDQmRTV2Hbz1QD+PbuA/uipHg Feb 13 20:57:48.599686 sshd[4055]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 20:57:48.603710 systemd-logind[1418]: New session 89 of user core. 
Feb 13 20:57:48.615300 systemd[1]: Started session-89.scope - Session 89 of User core. Feb 13 20:57:48.720673 sshd[4055]: pam_unix(sshd:session): session closed for user core Feb 13 20:57:48.723806 systemd[1]: sshd@88-10.0.0.9:22-10.0.0.1:57270.service: Deactivated successfully. Feb 13 20:57:48.725961 systemd[1]: session-89.scope: Deactivated successfully. Feb 13 20:57:48.726516 systemd-logind[1418]: Session 89 logged out. Waiting for processes to exit. Feb 13 20:57:48.727321 systemd-logind[1418]: Removed session 89. Feb 13 20:57:52.610533 kubelet[2422]: E0213 20:57:52.610489 2422 kubelet.go:3008] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Feb 13 20:57:53.731626 systemd[1]: Started sshd@89-10.0.0.9:22-10.0.0.1:46186.service - OpenSSH per-connection server daemon (10.0.0.1:46186). Feb 13 20:57:53.769449 sshd[4072]: Accepted publickey for core from 10.0.0.1 port 46186 ssh2: RSA SHA256:cMrn/QNbOj3/muQN24sDQmRTV2Hbz1QD+PbuA/uipHg Feb 13 20:57:53.770757 sshd[4072]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 20:57:53.774332 systemd-logind[1418]: New session 90 of user core. Feb 13 20:57:53.786223 systemd[1]: Started session-90.scope - Session 90 of User core. Feb 13 20:57:53.888153 sshd[4072]: pam_unix(sshd:session): session closed for user core Feb 13 20:57:53.891567 systemd[1]: sshd@89-10.0.0.9:22-10.0.0.1:46186.service: Deactivated successfully. Feb 13 20:57:53.893067 systemd[1]: session-90.scope: Deactivated successfully. Feb 13 20:57:53.893637 systemd-logind[1418]: Session 90 logged out. Waiting for processes to exit. Feb 13 20:57:53.894522 systemd-logind[1418]: Removed session 90. Feb 13 20:57:57.469249 kubelet[2422]: E0213 20:57:57.469209 2422 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 20:57:57.469787 kubelet[2422]: E0213 20:57:57.469736 2422 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"install-cni-plugin\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/flannel/flannel-cni-plugin:v1.1.2\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/flannel/flannel-cni-plugin:v1.1.2\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/flannel/flannel-cni-plugin/manifests/sha256:773049a6464eca8d26c97fb59410cb8385fe59b2f9a68ce7ae2fbe2d49d63585: 429 Too Many Requests - Server message: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit\"" pod="kube-flannel/kube-flannel-ds-chdvq" podUID="21e6622a-36e3-47a8-b025-f56eaad98d84" Feb 13 20:57:57.612222 kubelet[2422]: E0213 20:57:57.612187 2422 kubelet.go:3008] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Feb 13 20:57:58.898766 systemd[1]: Started sshd@90-10.0.0.9:22-10.0.0.1:46190.service - OpenSSH per-connection server daemon (10.0.0.1:46190). 
Feb 13 20:57:58.936324 sshd[4088]: Accepted publickey for core from 10.0.0.1 port 46190 ssh2: RSA SHA256:cMrn/QNbOj3/muQN24sDQmRTV2Hbz1QD+PbuA/uipHg Feb 13 20:57:58.937613 sshd[4088]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 20:57:58.941696 systemd-logind[1418]: New session 91 of user core. Feb 13 20:57:58.956264 systemd[1]: Started session-91.scope - Session 91 of User core. Feb 13 20:57:59.059829 sshd[4088]: pam_unix(sshd:session): session closed for user core Feb 13 20:57:59.063066 systemd[1]: sshd@90-10.0.0.9:22-10.0.0.1:46190.service: Deactivated successfully. Feb 13 20:57:59.064682 systemd[1]: session-91.scope: Deactivated successfully. Feb 13 20:57:59.065983 systemd-logind[1418]: Session 91 logged out. Waiting for processes to exit. Feb 13 20:57:59.066809 systemd-logind[1418]: Removed session 91. Feb 13 20:58:00.470081 kubelet[2422]: E0213 20:58:00.469961 2422 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 20:58:02.613125 kubelet[2422]: E0213 20:58:02.613068 2422 kubelet.go:3008] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Feb 13 20:58:04.070796 systemd[1]: Started sshd@91-10.0.0.9:22-10.0.0.1:40180.service - OpenSSH per-connection server daemon (10.0.0.1:40180). Feb 13 20:58:04.108656 sshd[4102]: Accepted publickey for core from 10.0.0.1 port 40180 ssh2: RSA SHA256:cMrn/QNbOj3/muQN24sDQmRTV2Hbz1QD+PbuA/uipHg Feb 13 20:58:04.109966 sshd[4102]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 20:58:04.114117 systemd-logind[1418]: New session 92 of user core. Feb 13 20:58:04.125279 systemd[1]: Started session-92.scope - Session 92 of User core. Feb 13 20:58:04.228302 sshd[4102]: pam_unix(sshd:session): session closed for user core Feb 13 20:58:04.231435 systemd[1]: sshd@91-10.0.0.9:22-10.0.0.1:40180.service: Deactivated successfully. Feb 13 20:58:04.233018 systemd[1]: session-92.scope: Deactivated successfully. Feb 13 20:58:04.233573 systemd-logind[1418]: Session 92 logged out. Waiting for processes to exit. Feb 13 20:58:04.234416 systemd-logind[1418]: Removed session 92. Feb 13 20:58:07.613793 kubelet[2422]: E0213 20:58:07.613746 2422 kubelet.go:3008] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Feb 13 20:58:09.238549 systemd[1]: Started sshd@92-10.0.0.9:22-10.0.0.1:40186.service - OpenSSH per-connection server daemon (10.0.0.1:40186). Feb 13 20:58:09.275850 sshd[4117]: Accepted publickey for core from 10.0.0.1 port 40186 ssh2: RSA SHA256:cMrn/QNbOj3/muQN24sDQmRTV2Hbz1QD+PbuA/uipHg Feb 13 20:58:09.277050 sshd[4117]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 20:58:09.281388 systemd-logind[1418]: New session 93 of user core. Feb 13 20:58:09.297289 systemd[1]: Started session-93.scope - Session 93 of User core. Feb 13 20:58:09.401064 sshd[4117]: pam_unix(sshd:session): session closed for user core Feb 13 20:58:09.403562 systemd[1]: sshd@92-10.0.0.9:22-10.0.0.1:40186.service: Deactivated successfully. Feb 13 20:58:09.405219 systemd[1]: session-93.scope: Deactivated successfully. Feb 13 20:58:09.406484 systemd-logind[1418]: Session 93 logged out. Waiting for processes to exit. 
Feb 13 20:58:09.407627 systemd-logind[1418]: Removed session 93. Feb 13 20:58:09.469930 kubelet[2422]: E0213 20:58:09.469904 2422 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 20:58:09.470642 kubelet[2422]: E0213 20:58:09.470612 2422 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"install-cni-plugin\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/flannel/flannel-cni-plugin:v1.1.2\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/flannel/flannel-cni-plugin:v1.1.2\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/flannel/flannel-cni-plugin/manifests/sha256:773049a6464eca8d26c97fb59410cb8385fe59b2f9a68ce7ae2fbe2d49d63585: 429 Too Many Requests - Server message: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit\"" pod="kube-flannel/kube-flannel-ds-chdvq" podUID="21e6622a-36e3-47a8-b025-f56eaad98d84" Feb 13 20:58:12.614973 kubelet[2422]: E0213 20:58:12.614934 2422 kubelet.go:3008] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Feb 13 20:58:14.411560 systemd[1]: Started sshd@93-10.0.0.9:22-10.0.0.1:41752.service - OpenSSH per-connection server daemon (10.0.0.1:41752). Feb 13 20:58:14.449247 sshd[4134]: Accepted publickey for core from 10.0.0.1 port 41752 ssh2: RSA SHA256:cMrn/QNbOj3/muQN24sDQmRTV2Hbz1QD+PbuA/uipHg Feb 13 20:58:14.450455 sshd[4134]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 20:58:14.454113 systemd-logind[1418]: New session 94 of user core. Feb 13 20:58:14.464223 systemd[1]: Started session-94.scope - Session 94 of User core. Feb 13 20:58:14.570032 sshd[4134]: pam_unix(sshd:session): session closed for user core Feb 13 20:58:14.573293 systemd[1]: sshd@93-10.0.0.9:22-10.0.0.1:41752.service: Deactivated successfully. Feb 13 20:58:14.575835 systemd[1]: session-94.scope: Deactivated successfully. Feb 13 20:58:14.576754 systemd-logind[1418]: Session 94 logged out. Waiting for processes to exit. Feb 13 20:58:14.577683 systemd-logind[1418]: Removed session 94. Feb 13 20:58:16.470013 kubelet[2422]: E0213 20:58:16.469902 2422 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 20:58:17.616361 kubelet[2422]: E0213 20:58:17.616319 2422 kubelet.go:3008] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Feb 13 20:58:19.580646 systemd[1]: Started sshd@94-10.0.0.9:22-10.0.0.1:41764.service - OpenSSH per-connection server daemon (10.0.0.1:41764). Feb 13 20:58:19.618363 sshd[4150]: Accepted publickey for core from 10.0.0.1 port 41764 ssh2: RSA SHA256:cMrn/QNbOj3/muQN24sDQmRTV2Hbz1QD+PbuA/uipHg Feb 13 20:58:19.619737 sshd[4150]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 20:58:19.623291 systemd-logind[1418]: New session 95 of user core. Feb 13 20:58:19.632277 systemd[1]: Started session-95.scope - Session 95 of User core. 
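
The dns.go warnings above fire because the host's resolv.conf lists more nameservers than the resolver limit kubelet enforces (three, matching glibc's MAXNS); kubelet applies only the first three, here 1.1.1.1 1.0.0.1 8.8.8.8, and omits the rest. A short sketch that reproduces the check, assuming the conventional /etc/resolv.conf path:

    from pathlib import Path

    RESOLV_CONF = Path("/etc/resolv.conf")  # conventional path; may differ
    MAXNS = 3  # the resolver limit kubelet warns about

    def nameservers(path: Path = RESOLV_CONF) -> list[str]:
        # Collect the address of every "nameserver" directive, in order.
        servers = []
        for line in path.read_text().splitlines():
            fields = line.split()
            if len(fields) >= 2 and fields[0] == "nameserver":
                servers.append(fields[1])
        return servers

    if __name__ == "__main__":
        ns = nameservers()
        print(f"{len(ns)} nameserver(s): {' '.join(ns)}")
        if len(ns) > MAXNS:
            print(f"only the first {MAXNS} will be applied: "
                  f"{' '.join(ns[:MAXNS])}")
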
Feb 13 20:58:19.738041 sshd[4150]: pam_unix(sshd:session): session closed for user core Feb 13 20:58:19.741389 systemd[1]: sshd@94-10.0.0.9:22-10.0.0.1:41764.service: Deactivated successfully. Feb 13 20:58:19.743044 systemd[1]: session-95.scope: Deactivated successfully. Feb 13 20:58:19.744181 systemd-logind[1418]: Session 95 logged out. Waiting for processes to exit. Feb 13 20:58:19.744985 systemd-logind[1418]: Removed session 95. Feb 13 20:58:22.470001 kubelet[2422]: E0213 20:58:22.469795 2422 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 20:58:22.470479 kubelet[2422]: E0213 20:58:22.470430 2422 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"install-cni-plugin\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/flannel/flannel-cni-plugin:v1.1.2\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/flannel/flannel-cni-plugin:v1.1.2\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/flannel/flannel-cni-plugin/manifests/sha256:773049a6464eca8d26c97fb59410cb8385fe59b2f9a68ce7ae2fbe2d49d63585: 429 Too Many Requests - Server message: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit\"" pod="kube-flannel/kube-flannel-ds-chdvq" podUID="21e6622a-36e3-47a8-b025-f56eaad98d84" Feb 13 20:58:22.617678 kubelet[2422]: E0213 20:58:22.617633 2422 kubelet.go:3008] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Feb 13 20:58:24.747248 systemd[1]: Started sshd@95-10.0.0.9:22-10.0.0.1:37648.service - OpenSSH per-connection server daemon (10.0.0.1:37648). Feb 13 20:58:24.784930 sshd[4166]: Accepted publickey for core from 10.0.0.1 port 37648 ssh2: RSA SHA256:cMrn/QNbOj3/muQN24sDQmRTV2Hbz1QD+PbuA/uipHg Feb 13 20:58:24.786193 sshd[4166]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 20:58:24.790336 systemd-logind[1418]: New session 96 of user core. Feb 13 20:58:24.799225 systemd[1]: Started session-96.scope - Session 96 of User core. Feb 13 20:58:24.905943 sshd[4166]: pam_unix(sshd:session): session closed for user core Feb 13 20:58:24.909044 systemd[1]: sshd@95-10.0.0.9:22-10.0.0.1:37648.service: Deactivated successfully. Feb 13 20:58:24.911383 systemd[1]: session-96.scope: Deactivated successfully. Feb 13 20:58:24.912300 systemd-logind[1418]: Session 96 logged out. Waiting for processes to exit. Feb 13 20:58:24.913171 systemd-logind[1418]: Removed session 96. Feb 13 20:58:27.619264 kubelet[2422]: E0213 20:58:27.619227 2422 kubelet.go:3008] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Feb 13 20:58:29.915676 systemd[1]: Started sshd@96-10.0.0.9:22-10.0.0.1:37664.service - OpenSSH per-connection server daemon (10.0.0.1:37664). Feb 13 20:58:29.952728 sshd[4181]: Accepted publickey for core from 10.0.0.1 port 37664 ssh2: RSA SHA256:cMrn/QNbOj3/muQN24sDQmRTV2Hbz1QD+PbuA/uipHg Feb 13 20:58:29.953935 sshd[4181]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 20:58:29.957473 systemd-logind[1418]: New session 97 of user core. 
Feb 13 20:58:29.966277 systemd[1]: Started session-97.scope - Session 97 of User core. Feb 13 20:58:30.073455 sshd[4181]: pam_unix(sshd:session): session closed for user core Feb 13 20:58:30.075768 systemd[1]: session-97.scope: Deactivated successfully. Feb 13 20:58:30.076948 systemd[1]: sshd@96-10.0.0.9:22-10.0.0.1:37664.service: Deactivated successfully. Feb 13 20:58:30.078753 systemd-logind[1418]: Session 97 logged out. Waiting for processes to exit. Feb 13 20:58:30.079756 systemd-logind[1418]: Removed session 97. Feb 13 20:58:30.469440 kubelet[2422]: E0213 20:58:30.469308 2422 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 20:58:32.620190 kubelet[2422]: E0213 20:58:32.620139 2422 kubelet.go:3008] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Feb 13 20:58:33.470007 kubelet[2422]: E0213 20:58:33.469870 2422 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 20:58:33.470542 kubelet[2422]: E0213 20:58:33.470492 2422 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"install-cni-plugin\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/flannel/flannel-cni-plugin:v1.1.2\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/flannel/flannel-cni-plugin:v1.1.2\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/flannel/flannel-cni-plugin/manifests/sha256:773049a6464eca8d26c97fb59410cb8385fe59b2f9a68ce7ae2fbe2d49d63585: 429 Too Many Requests - Server message: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit\"" pod="kube-flannel/kube-flannel-ds-chdvq" podUID="21e6622a-36e3-47a8-b025-f56eaad98d84" Feb 13 20:58:35.083717 systemd[1]: Started sshd@97-10.0.0.9:22-10.0.0.1:41538.service - OpenSSH per-connection server daemon (10.0.0.1:41538). Feb 13 20:58:35.121706 sshd[4196]: Accepted publickey for core from 10.0.0.1 port 41538 ssh2: RSA SHA256:cMrn/QNbOj3/muQN24sDQmRTV2Hbz1QD+PbuA/uipHg Feb 13 20:58:35.122837 sshd[4196]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 20:58:35.126413 systemd-logind[1418]: New session 98 of user core. Feb 13 20:58:35.137241 systemd[1]: Started session-98.scope - Session 98 of User core. Feb 13 20:58:35.240231 sshd[4196]: pam_unix(sshd:session): session closed for user core Feb 13 20:58:35.243414 systemd[1]: sshd@97-10.0.0.9:22-10.0.0.1:41538.service: Deactivated successfully. Feb 13 20:58:35.246226 systemd[1]: session-98.scope: Deactivated successfully. Feb 13 20:58:35.247023 systemd-logind[1418]: Session 98 logged out. Waiting for processes to exit. Feb 13 20:58:35.247963 systemd-logind[1418]: Removed session 98. Feb 13 20:58:37.621780 kubelet[2422]: E0213 20:58:37.621735 2422 kubelet.go:3008] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Feb 13 20:58:40.250652 systemd[1]: Started sshd@98-10.0.0.9:22-10.0.0.1:41550.service - OpenSSH per-connection server daemon (10.0.0.1:41550). 
Feb 13 20:58:40.288869 sshd[4211]: Accepted publickey for core from 10.0.0.1 port 41550 ssh2: RSA SHA256:cMrn/QNbOj3/muQN24sDQmRTV2Hbz1QD+PbuA/uipHg Feb 13 20:58:40.290216 sshd[4211]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 20:58:40.294557 systemd-logind[1418]: New session 99 of user core. Feb 13 20:58:40.304266 systemd[1]: Started session-99.scope - Session 99 of User core. Feb 13 20:58:40.409301 sshd[4211]: pam_unix(sshd:session): session closed for user core Feb 13 20:58:40.412430 systemd[1]: sshd@98-10.0.0.9:22-10.0.0.1:41550.service: Deactivated successfully. Feb 13 20:58:40.414337 systemd[1]: session-99.scope: Deactivated successfully. Feb 13 20:58:40.415013 systemd-logind[1418]: Session 99 logged out. Waiting for processes to exit. Feb 13 20:58:40.415978 systemd-logind[1418]: Removed session 99. Feb 13 20:58:42.622590 kubelet[2422]: E0213 20:58:42.622554 2422 kubelet.go:3008] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Feb 13 20:58:44.469815 kubelet[2422]: E0213 20:58:44.469453 2422 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 20:58:44.470472 kubelet[2422]: E0213 20:58:44.470419 2422 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"install-cni-plugin\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/flannel/flannel-cni-plugin:v1.1.2\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/flannel/flannel-cni-plugin:v1.1.2\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/flannel/flannel-cni-plugin/manifests/sha256:773049a6464eca8d26c97fb59410cb8385fe59b2f9a68ce7ae2fbe2d49d63585: 429 Too Many Requests - Server message: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit\"" pod="kube-flannel/kube-flannel-ds-chdvq" podUID="21e6622a-36e3-47a8-b025-f56eaad98d84" Feb 13 20:58:45.422926 systemd[1]: Started sshd@99-10.0.0.9:22-10.0.0.1:59918.service - OpenSSH per-connection server daemon (10.0.0.1:59918). Feb 13 20:58:45.460297 sshd[4227]: Accepted publickey for core from 10.0.0.1 port 59918 ssh2: RSA SHA256:cMrn/QNbOj3/muQN24sDQmRTV2Hbz1QD+PbuA/uipHg Feb 13 20:58:45.461587 sshd[4227]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 20:58:45.465139 systemd-logind[1418]: New session 100 of user core. Feb 13 20:58:45.476234 systemd[1]: Started session-100.scope - Session 100 of User core. Feb 13 20:58:45.579836 sshd[4227]: pam_unix(sshd:session): session closed for user core Feb 13 20:58:45.583175 systemd[1]: sshd@99-10.0.0.9:22-10.0.0.1:59918.service: Deactivated successfully. Feb 13 20:58:45.584851 systemd[1]: session-100.scope: Deactivated successfully. Feb 13 20:58:45.585476 systemd-logind[1418]: Session 100 logged out. Waiting for processes to exit. Feb 13 20:58:45.586213 systemd-logind[1418]: Removed session 100. 
Feb 13 20:58:47.623945 kubelet[2422]: E0213 20:58:47.623891 2422 kubelet.go:3008] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Feb 13 20:58:50.590564 systemd[1]: Started sshd@100-10.0.0.9:22-10.0.0.1:59922.service - OpenSSH per-connection server daemon (10.0.0.1:59922). Feb 13 20:58:50.628062 sshd[4243]: Accepted publickey for core from 10.0.0.1 port 59922 ssh2: RSA SHA256:cMrn/QNbOj3/muQN24sDQmRTV2Hbz1QD+PbuA/uipHg Feb 13 20:58:50.629308 sshd[4243]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 20:58:50.633302 systemd-logind[1418]: New session 101 of user core. Feb 13 20:58:50.644235 systemd[1]: Started session-101.scope - Session 101 of User core. Feb 13 20:58:50.745873 sshd[4243]: pam_unix(sshd:session): session closed for user core Feb 13 20:58:50.749066 systemd[1]: sshd@100-10.0.0.9:22-10.0.0.1:59922.service: Deactivated successfully. Feb 13 20:58:50.750747 systemd[1]: session-101.scope: Deactivated successfully. Feb 13 20:58:50.751354 systemd-logind[1418]: Session 101 logged out. Waiting for processes to exit. Feb 13 20:58:50.752125 systemd-logind[1418]: Removed session 101. Feb 13 20:58:52.625617 kubelet[2422]: E0213 20:58:52.625557 2422 kubelet.go:3008] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Feb 13 20:58:55.469301 kubelet[2422]: E0213 20:58:55.469263 2422 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 20:58:55.760665 systemd[1]: Started sshd@101-10.0.0.9:22-10.0.0.1:34254.service - OpenSSH per-connection server daemon (10.0.0.1:34254). Feb 13 20:58:55.797947 sshd[4258]: Accepted publickey for core from 10.0.0.1 port 34254 ssh2: RSA SHA256:cMrn/QNbOj3/muQN24sDQmRTV2Hbz1QD+PbuA/uipHg Feb 13 20:58:55.799183 sshd[4258]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 20:58:55.803269 systemd-logind[1418]: New session 102 of user core. Feb 13 20:58:55.813224 systemd[1]: Started session-102.scope - Session 102 of User core. Feb 13 20:58:55.914426 sshd[4258]: pam_unix(sshd:session): session closed for user core Feb 13 20:58:55.917535 systemd[1]: sshd@101-10.0.0.9:22-10.0.0.1:34254.service: Deactivated successfully. Feb 13 20:58:55.919943 systemd[1]: session-102.scope: Deactivated successfully. Feb 13 20:58:55.920779 systemd-logind[1418]: Session 102 logged out. Waiting for processes to exit. Feb 13 20:58:55.921577 systemd-logind[1418]: Removed session 102. 
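
The recurring "cni plugin not initialized" entries are downstream of the same pull failure: the flannel pod never gets past its install-cni-plugin init container, so neither the CNI binary nor a network config is ever installed and the runtime keeps reporting NetworkReady=false. A sketch of the corresponding on-host check, assuming the conventional /etc/cni/net.d config directory (containerd and kubelet can be configured to look elsewhere):

    import os

    # Default directory the runtime scans for a CNI network config;
    # until a file appears here, the node network stays NotReady.
    CNI_CONF_DIR = "/etc/cni/net.d"

    def cni_configs(conf_dir: str = CNI_CONF_DIR) -> list[str]:
        try:
            return sorted(
                f for f in os.listdir(conf_dir)
                if f.endswith((".conf", ".conflist", ".json"))
            )
        except FileNotFoundError:
            return []

    if __name__ == "__main__":
        configs = cni_configs()
        if configs:
            print("CNI configs:", ", ".join(configs))
        else:
            print(f"no CNI config under {CNI_CONF_DIR}; "
                  "network will stay NotReady")
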
Feb 13 20:58:57.627070 kubelet[2422]: E0213 20:58:57.627020 2422 kubelet.go:3008] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Feb 13 20:58:59.469486 kubelet[2422]: E0213 20:58:59.469447 2422 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 20:58:59.470278 kubelet[2422]: E0213 20:58:59.470044 2422 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"install-cni-plugin\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/flannel/flannel-cni-plugin:v1.1.2\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/flannel/flannel-cni-plugin:v1.1.2\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/flannel/flannel-cni-plugin/manifests/sha256:773049a6464eca8d26c97fb59410cb8385fe59b2f9a68ce7ae2fbe2d49d63585: 429 Too Many Requests - Server message: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit\"" pod="kube-flannel/kube-flannel-ds-chdvq" podUID="21e6622a-36e3-47a8-b025-f56eaad98d84" Feb 13 20:59:00.924752 systemd[1]: Started sshd@102-10.0.0.9:22-10.0.0.1:34262.service - OpenSSH per-connection server daemon (10.0.0.1:34262). Feb 13 20:59:00.962302 sshd[4273]: Accepted publickey for core from 10.0.0.1 port 34262 ssh2: RSA SHA256:cMrn/QNbOj3/muQN24sDQmRTV2Hbz1QD+PbuA/uipHg Feb 13 20:59:00.963515 sshd[4273]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 20:59:00.967233 systemd-logind[1418]: New session 103 of user core. Feb 13 20:59:00.974247 systemd[1]: Started session-103.scope - Session 103 of User core. Feb 13 20:59:01.076571 sshd[4273]: pam_unix(sshd:session): session closed for user core Feb 13 20:59:01.079627 systemd[1]: sshd@102-10.0.0.9:22-10.0.0.1:34262.service: Deactivated successfully. Feb 13 20:59:01.081872 systemd[1]: session-103.scope: Deactivated successfully. Feb 13 20:59:01.082907 systemd-logind[1418]: Session 103 logged out. Waiting for processes to exit. Feb 13 20:59:01.083837 systemd-logind[1418]: Removed session 103. Feb 13 20:59:02.470021 kubelet[2422]: E0213 20:59:02.469944 2422 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 20:59:02.628564 kubelet[2422]: E0213 20:59:02.628515 2422 kubelet.go:3008] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Feb 13 20:59:06.090772 systemd[1]: Started sshd@103-10.0.0.9:22-10.0.0.1:36708.service - OpenSSH per-connection server daemon (10.0.0.1:36708). Feb 13 20:59:06.128587 sshd[4287]: Accepted publickey for core from 10.0.0.1 port 36708 ssh2: RSA SHA256:cMrn/QNbOj3/muQN24sDQmRTV2Hbz1QD+PbuA/uipHg Feb 13 20:59:06.129812 sshd[4287]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 20:59:06.133798 systemd-logind[1418]: New session 104 of user core. Feb 13 20:59:06.145239 systemd[1]: Started session-104.scope - Session 104 of User core. 
Feb 13 20:59:06.249362 sshd[4287]: pam_unix(sshd:session): session closed for user core Feb 13 20:59:06.251823 systemd[1]: sshd@103-10.0.0.9:22-10.0.0.1:36708.service: Deactivated successfully. Feb 13 20:59:06.253774 systemd[1]: session-104.scope: Deactivated successfully. Feb 13 20:59:06.255363 systemd-logind[1418]: Session 104 logged out. Waiting for processes to exit. Feb 13 20:59:06.256815 systemd-logind[1418]: Removed session 104. Feb 13 20:59:07.629394 kubelet[2422]: E0213 20:59:07.629353 2422 kubelet.go:3008] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Feb 13 20:59:11.259885 systemd[1]: Started sshd@104-10.0.0.9:22-10.0.0.1:36724.service - OpenSSH per-connection server daemon (10.0.0.1:36724). Feb 13 20:59:11.297346 sshd[4302]: Accepted publickey for core from 10.0.0.1 port 36724 ssh2: RSA SHA256:cMrn/QNbOj3/muQN24sDQmRTV2Hbz1QD+PbuA/uipHg Feb 13 20:59:11.298592 sshd[4302]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 20:59:11.302500 systemd-logind[1418]: New session 105 of user core. Feb 13 20:59:11.308329 systemd[1]: Started session-105.scope - Session 105 of User core. Feb 13 20:59:11.413047 sshd[4302]: pam_unix(sshd:session): session closed for user core Feb 13 20:59:11.415553 systemd[1]: sshd@104-10.0.0.9:22-10.0.0.1:36724.service: Deactivated successfully. Feb 13 20:59:11.417175 systemd[1]: session-105.scope: Deactivated successfully. Feb 13 20:59:11.418473 systemd-logind[1418]: Session 105 logged out. Waiting for processes to exit. Feb 13 20:59:11.419521 systemd-logind[1418]: Removed session 105. Feb 13 20:59:12.630062 kubelet[2422]: E0213 20:59:12.630019 2422 kubelet.go:3008] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Feb 13 20:59:14.470669 kubelet[2422]: E0213 20:59:14.470498 2422 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 20:59:14.471238 kubelet[2422]: E0213 20:59:14.471153 2422 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"install-cni-plugin\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/flannel/flannel-cni-plugin:v1.1.2\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/flannel/flannel-cni-plugin:v1.1.2\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/flannel/flannel-cni-plugin/manifests/sha256:773049a6464eca8d26c97fb59410cb8385fe59b2f9a68ce7ae2fbe2d49d63585: 429 Too Many Requests - Server message: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit\"" pod="kube-flannel/kube-flannel-ds-chdvq" podUID="21e6622a-36e3-47a8-b025-f56eaad98d84" Feb 13 20:59:16.426638 systemd[1]: Started sshd@105-10.0.0.9:22-10.0.0.1:58128.service - OpenSSH per-connection server daemon (10.0.0.1:58128). Feb 13 20:59:16.463794 sshd[4317]: Accepted publickey for core from 10.0.0.1 port 58128 ssh2: RSA SHA256:cMrn/QNbOj3/muQN24sDQmRTV2Hbz1QD+PbuA/uipHg Feb 13 20:59:16.465072 sshd[4317]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 20:59:16.468755 systemd-logind[1418]: New session 106 of user core. 
Feb 13 20:59:16.473446 systemd[1]: Started session-106.scope - Session 106 of User core. Feb 13 20:59:16.577981 sshd[4317]: pam_unix(sshd:session): session closed for user core Feb 13 20:59:16.580452 systemd[1]: sshd@105-10.0.0.9:22-10.0.0.1:58128.service: Deactivated successfully. Feb 13 20:59:16.582373 systemd[1]: session-106.scope: Deactivated successfully. Feb 13 20:59:16.583842 systemd-logind[1418]: Session 106 logged out. Waiting for processes to exit. Feb 13 20:59:16.584796 systemd-logind[1418]: Removed session 106. Feb 13 20:59:17.631544 kubelet[2422]: E0213 20:59:17.631496 2422 kubelet.go:3008] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Feb 13 20:59:21.589746 systemd[1]: Started sshd@106-10.0.0.9:22-10.0.0.1:58140.service - OpenSSH per-connection server daemon (10.0.0.1:58140). Feb 13 20:59:21.627077 sshd[4333]: Accepted publickey for core from 10.0.0.1 port 58140 ssh2: RSA SHA256:cMrn/QNbOj3/muQN24sDQmRTV2Hbz1QD+PbuA/uipHg Feb 13 20:59:21.628315 sshd[4333]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 20:59:21.631819 systemd-logind[1418]: New session 107 of user core. Feb 13 20:59:21.643229 systemd[1]: Started session-107.scope - Session 107 of User core. Feb 13 20:59:21.745147 sshd[4333]: pam_unix(sshd:session): session closed for user core Feb 13 20:59:21.748289 systemd[1]: sshd@106-10.0.0.9:22-10.0.0.1:58140.service: Deactivated successfully. Feb 13 20:59:21.749876 systemd[1]: session-107.scope: Deactivated successfully. Feb 13 20:59:21.751161 systemd-logind[1418]: Session 107 logged out. Waiting for processes to exit. Feb 13 20:59:21.751892 systemd-logind[1418]: Removed session 107. Feb 13 20:59:22.632535 kubelet[2422]: E0213 20:59:22.632478 2422 kubelet.go:3008] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Feb 13 20:59:26.755585 systemd[1]: Started sshd@107-10.0.0.9:22-10.0.0.1:48886.service - OpenSSH per-connection server daemon (10.0.0.1:48886). Feb 13 20:59:26.792936 sshd[4347]: Accepted publickey for core from 10.0.0.1 port 48886 ssh2: RSA SHA256:cMrn/QNbOj3/muQN24sDQmRTV2Hbz1QD+PbuA/uipHg Feb 13 20:59:26.794147 sshd[4347]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 20:59:26.797621 systemd-logind[1418]: New session 108 of user core. Feb 13 20:59:26.805213 systemd[1]: Started session-108.scope - Session 108 of User core. Feb 13 20:59:26.907851 sshd[4347]: pam_unix(sshd:session): session closed for user core Feb 13 20:59:26.911076 systemd[1]: sshd@107-10.0.0.9:22-10.0.0.1:48886.service: Deactivated successfully. Feb 13 20:59:26.912732 systemd[1]: session-108.scope: Deactivated successfully. Feb 13 20:59:26.913391 systemd-logind[1418]: Session 108 logged out. Waiting for processes to exit. Feb 13 20:59:26.914284 systemd-logind[1418]: Removed session 108. 
Feb 13 20:59:27.469250 kubelet[2422]: E0213 20:59:27.469201 2422 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 20:59:27.469809 kubelet[2422]: E0213 20:59:27.469760 2422 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"install-cni-plugin\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/flannel/flannel-cni-plugin:v1.1.2\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/flannel/flannel-cni-plugin:v1.1.2\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/flannel/flannel-cni-plugin/manifests/sha256:773049a6464eca8d26c97fb59410cb8385fe59b2f9a68ce7ae2fbe2d49d63585: 429 Too Many Requests - Server message: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit\"" pod="kube-flannel/kube-flannel-ds-chdvq" podUID="21e6622a-36e3-47a8-b025-f56eaad98d84" Feb 13 20:59:27.633958 kubelet[2422]: E0213 20:59:27.633915 2422 kubelet.go:3008] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Feb 13 20:59:31.921502 systemd[1]: Started sshd@108-10.0.0.9:22-10.0.0.1:48902.service - OpenSSH per-connection server daemon (10.0.0.1:48902). Feb 13 20:59:31.958751 sshd[4362]: Accepted publickey for core from 10.0.0.1 port 48902 ssh2: RSA SHA256:cMrn/QNbOj3/muQN24sDQmRTV2Hbz1QD+PbuA/uipHg Feb 13 20:59:31.959879 sshd[4362]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 20:59:31.964130 systemd-logind[1418]: New session 109 of user core. Feb 13 20:59:31.973228 systemd[1]: Started session-109.scope - Session 109 of User core. Feb 13 20:59:32.076948 sshd[4362]: pam_unix(sshd:session): session closed for user core Feb 13 20:59:32.080109 systemd[1]: sshd@108-10.0.0.9:22-10.0.0.1:48902.service: Deactivated successfully. Feb 13 20:59:32.081909 systemd[1]: session-109.scope: Deactivated successfully. Feb 13 20:59:32.082468 systemd-logind[1418]: Session 109 logged out. Waiting for processes to exit. Feb 13 20:59:32.083263 systemd-logind[1418]: Removed session 109. Feb 13 20:59:32.634930 kubelet[2422]: E0213 20:59:32.634884 2422 kubelet.go:3008] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Feb 13 20:59:35.469589 kubelet[2422]: E0213 20:59:35.469534 2422 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 20:59:37.087546 systemd[1]: Started sshd@109-10.0.0.9:22-10.0.0.1:50296.service - OpenSSH per-connection server daemon (10.0.0.1:50296). Feb 13 20:59:37.124937 sshd[4377]: Accepted publickey for core from 10.0.0.1 port 50296 ssh2: RSA SHA256:cMrn/QNbOj3/muQN24sDQmRTV2Hbz1QD+PbuA/uipHg Feb 13 20:59:37.126160 sshd[4377]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 20:59:37.129585 systemd-logind[1418]: New session 110 of user core. Feb 13 20:59:37.142238 systemd[1]: Started session-110.scope - Session 110 of User core. 
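
Since every automatic retry above hits the same 429, one conventional workaround is to pull the image once with authenticated (higher-quota) Docker Hub credentials directly into containerd's k8s.io namespace, so kubelet then finds it locally. A sketch wrapping the stock ctr CLI; verify the flags against the ctr version on the host, and note that DOCKER_USER/DOCKER_PASS are placeholder environment variables:

    import os
    import subprocess

    IMAGE = "docker.io/flannel/flannel-cni-plugin:v1.1.2"

    def preload_image(user: str, password: str, image: str = IMAGE) -> None:
        # Pull into the k8s.io namespace, where kubelet/containerd look
        # for images, using authenticated Docker Hub credentials.
        subprocess.run(
            ["ctr", "-n", "k8s.io", "images", "pull",
             "-u", f"{user}:{password}", image],
            check=True)

    if __name__ == "__main__":
        preload_image(os.environ["DOCKER_USER"], os.environ["DOCKER_PASS"])
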
Feb 13 20:59:37.244292 sshd[4377]: pam_unix(sshd:session): session closed for user core Feb 13 20:59:37.247514 systemd[1]: sshd@109-10.0.0.9:22-10.0.0.1:50296.service: Deactivated successfully. Feb 13 20:59:37.249077 systemd[1]: session-110.scope: Deactivated successfully. Feb 13 20:59:37.250574 systemd-logind[1418]: Session 110 logged out. Waiting for processes to exit. Feb 13 20:59:37.251874 systemd-logind[1418]: Removed session 110. Feb 13 20:59:37.635954 kubelet[2422]: E0213 20:59:37.635899 2422 kubelet.go:3008] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Feb 13 20:59:41.470074 kubelet[2422]: E0213 20:59:41.470010 2422 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 20:59:41.470726 kubelet[2422]: E0213 20:59:41.470677 2422 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"install-cni-plugin\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/flannel/flannel-cni-plugin:v1.1.2\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/flannel/flannel-cni-plugin:v1.1.2\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/flannel/flannel-cni-plugin/manifests/sha256:773049a6464eca8d26c97fb59410cb8385fe59b2f9a68ce7ae2fbe2d49d63585: 429 Too Many Requests - Server message: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit\"" pod="kube-flannel/kube-flannel-ds-chdvq" podUID="21e6622a-36e3-47a8-b025-f56eaad98d84" Feb 13 20:59:42.256161 systemd[1]: Started sshd@110-10.0.0.9:22-10.0.0.1:50310.service - OpenSSH per-connection server daemon (10.0.0.1:50310). Feb 13 20:59:42.293532 sshd[4392]: Accepted publickey for core from 10.0.0.1 port 50310 ssh2: RSA SHA256:cMrn/QNbOj3/muQN24sDQmRTV2Hbz1QD+PbuA/uipHg Feb 13 20:59:42.294658 sshd[4392]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 20:59:42.297860 systemd-logind[1418]: New session 111 of user core. Feb 13 20:59:42.305247 systemd[1]: Started session-111.scope - Session 111 of User core. Feb 13 20:59:42.409714 sshd[4392]: pam_unix(sshd:session): session closed for user core Feb 13 20:59:42.412831 systemd[1]: sshd@110-10.0.0.9:22-10.0.0.1:50310.service: Deactivated successfully. Feb 13 20:59:42.414371 systemd[1]: session-111.scope: Deactivated successfully. Feb 13 20:59:42.414925 systemd-logind[1418]: Session 111 logged out. Waiting for processes to exit. Feb 13 20:59:42.415640 systemd-logind[1418]: Removed session 111. Feb 13 20:59:42.637211 kubelet[2422]: E0213 20:59:42.637170 2422 kubelet.go:3008] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Feb 13 20:59:46.469417 kubelet[2422]: E0213 20:59:46.469335 2422 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 20:59:47.420581 systemd[1]: Started sshd@111-10.0.0.9:22-10.0.0.1:40080.service - OpenSSH per-connection server daemon (10.0.0.1:40080). 
Feb 13 20:59:47.457991 sshd[4409]: Accepted publickey for core from 10.0.0.1 port 40080 ssh2: RSA SHA256:cMrn/QNbOj3/muQN24sDQmRTV2Hbz1QD+PbuA/uipHg Feb 13 20:59:47.459141 sshd[4409]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 20:59:47.462617 systemd-logind[1418]: New session 112 of user core. Feb 13 20:59:47.474232 systemd[1]: Started session-112.scope - Session 112 of User core. Feb 13 20:59:47.578297 sshd[4409]: pam_unix(sshd:session): session closed for user core Feb 13 20:59:47.581568 systemd[1]: sshd@111-10.0.0.9:22-10.0.0.1:40080.service: Deactivated successfully. Feb 13 20:59:47.583771 systemd[1]: session-112.scope: Deactivated successfully. Feb 13 20:59:47.584349 systemd-logind[1418]: Session 112 logged out. Waiting for processes to exit. Feb 13 20:59:47.585071 systemd-logind[1418]: Removed session 112. Feb 13 20:59:47.637946 kubelet[2422]: E0213 20:59:47.637910 2422 kubelet.go:3008] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Feb 13 20:59:52.469282 kubelet[2422]: E0213 20:59:52.469250 2422 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 20:59:52.470034 kubelet[2422]: E0213 20:59:52.469817 2422 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"install-cni-plugin\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/flannel/flannel-cni-plugin:v1.1.2\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/flannel/flannel-cni-plugin:v1.1.2\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/flannel/flannel-cni-plugin/manifests/sha256:773049a6464eca8d26c97fb59410cb8385fe59b2f9a68ce7ae2fbe2d49d63585: 429 Too Many Requests - Server message: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit\"" pod="kube-flannel/kube-flannel-ds-chdvq" podUID="21e6622a-36e3-47a8-b025-f56eaad98d84" Feb 13 20:59:52.588578 systemd[1]: Started sshd@112-10.0.0.9:22-10.0.0.1:49124.service - OpenSSH per-connection server daemon (10.0.0.1:49124). Feb 13 20:59:52.625906 sshd[4426]: Accepted publickey for core from 10.0.0.1 port 49124 ssh2: RSA SHA256:cMrn/QNbOj3/muQN24sDQmRTV2Hbz1QD+PbuA/uipHg Feb 13 20:59:52.627178 sshd[4426]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 20:59:52.630770 systemd-logind[1418]: New session 113 of user core. Feb 13 20:59:52.639046 kubelet[2422]: E0213 20:59:52.639012 2422 kubelet.go:3008] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Feb 13 20:59:52.645256 systemd[1]: Started session-113.scope - Session 113 of User core. Feb 13 20:59:52.747167 sshd[4426]: pam_unix(sshd:session): session closed for user core Feb 13 20:59:52.750792 systemd[1]: sshd@112-10.0.0.9:22-10.0.0.1:49124.service: Deactivated successfully. Feb 13 20:59:52.752977 systemd[1]: session-113.scope: Deactivated successfully. Feb 13 20:59:52.753969 systemd-logind[1418]: Session 113 logged out. Waiting for processes to exit. Feb 13 20:59:52.754818 systemd-logind[1418]: Removed session 113. 
Feb 13 20:59:57.639795 kubelet[2422]: E0213 20:59:57.639743 2422 kubelet.go:3008] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Feb 13 20:59:57.757632 systemd[1]: Started sshd@113-10.0.0.9:22-10.0.0.1:49140.service - OpenSSH per-connection server daemon (10.0.0.1:49140). Feb 13 20:59:57.795226 sshd[4440]: Accepted publickey for core from 10.0.0.1 port 49140 ssh2: RSA SHA256:cMrn/QNbOj3/muQN24sDQmRTV2Hbz1QD+PbuA/uipHg Feb 13 20:59:57.796344 sshd[4440]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 20:59:57.800010 systemd-logind[1418]: New session 114 of user core. Feb 13 20:59:57.811230 systemd[1]: Started session-114.scope - Session 114 of User core. Feb 13 20:59:57.914129 sshd[4440]: pam_unix(sshd:session): session closed for user core Feb 13 20:59:57.918751 systemd-logind[1418]: Session 114 logged out. Waiting for processes to exit. Feb 13 20:59:57.919211 systemd[1]: sshd@113-10.0.0.9:22-10.0.0.1:49140.service: Deactivated successfully. Feb 13 20:59:57.920920 systemd[1]: session-114.scope: Deactivated successfully. Feb 13 20:59:57.922624 systemd-logind[1418]: Removed session 114. Feb 13 21:00:02.641432 kubelet[2422]: E0213 21:00:02.641378 2422 kubelet.go:3008] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Feb 13 21:00:02.924716 systemd[1]: Started sshd@114-10.0.0.9:22-10.0.0.1:48696.service - OpenSSH per-connection server daemon (10.0.0.1:48696). Feb 13 21:00:02.963335 sshd[4455]: Accepted publickey for core from 10.0.0.1 port 48696 ssh2: RSA SHA256:cMrn/QNbOj3/muQN24sDQmRTV2Hbz1QD+PbuA/uipHg Feb 13 21:00:02.964494 sshd[4455]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 21:00:02.968371 systemd-logind[1418]: New session 115 of user core. Feb 13 21:00:02.978231 systemd[1]: Started session-115.scope - Session 115 of User core. Feb 13 21:00:03.083678 sshd[4455]: pam_unix(sshd:session): session closed for user core Feb 13 21:00:03.087138 systemd[1]: sshd@114-10.0.0.9:22-10.0.0.1:48696.service: Deactivated successfully. Feb 13 21:00:03.089495 systemd[1]: session-115.scope: Deactivated successfully. Feb 13 21:00:03.090815 systemd-logind[1418]: Session 115 logged out. Waiting for processes to exit. Feb 13 21:00:03.091713 systemd-logind[1418]: Removed session 115. Feb 13 21:00:05.469631 kubelet[2422]: E0213 21:00:05.469506 2422 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 21:00:05.470218 kubelet[2422]: E0213 21:00:05.470177 2422 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"install-cni-plugin\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/flannel/flannel-cni-plugin:v1.1.2\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/flannel/flannel-cni-plugin:v1.1.2\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/flannel/flannel-cni-plugin/manifests/sha256:773049a6464eca8d26c97fb59410cb8385fe59b2f9a68ce7ae2fbe2d49d63585: 429 Too Many Requests - Server message: toomanyrequests: You have reached your pull rate limit. 
You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit\"" pod="kube-flannel/kube-flannel-ds-chdvq" podUID="21e6622a-36e3-47a8-b025-f56eaad98d84" Feb 13 21:00:07.642982 kubelet[2422]: E0213 21:00:07.642940 2422 kubelet.go:3008] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Feb 13 21:00:08.097590 systemd[1]: Started sshd@115-10.0.0.9:22-10.0.0.1:48700.service - OpenSSH per-connection server daemon (10.0.0.1:48700). Feb 13 21:00:08.135344 sshd[4471]: Accepted publickey for core from 10.0.0.1 port 48700 ssh2: RSA SHA256:cMrn/QNbOj3/muQN24sDQmRTV2Hbz1QD+PbuA/uipHg Feb 13 21:00:08.136477 sshd[4471]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 21:00:08.140130 systemd-logind[1418]: New session 116 of user core. Feb 13 21:00:08.147227 systemd[1]: Started session-116.scope - Session 116 of User core. Feb 13 21:00:08.251155 sshd[4471]: pam_unix(sshd:session): session closed for user core Feb 13 21:00:08.254311 systemd[1]: sshd@115-10.0.0.9:22-10.0.0.1:48700.service: Deactivated successfully. Feb 13 21:00:08.256535 systemd[1]: session-116.scope: Deactivated successfully. Feb 13 21:00:08.257196 systemd-logind[1418]: Session 116 logged out. Waiting for processes to exit. Feb 13 21:00:08.258195 systemd-logind[1418]: Removed session 116. Feb 13 21:00:12.643651 kubelet[2422]: E0213 21:00:12.643607 2422 kubelet.go:3008] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Feb 13 21:00:13.261576 systemd[1]: Started sshd@116-10.0.0.9:22-10.0.0.1:36264.service - OpenSSH per-connection server daemon (10.0.0.1:36264). Feb 13 21:00:13.299011 sshd[4485]: Accepted publickey for core from 10.0.0.1 port 36264 ssh2: RSA SHA256:cMrn/QNbOj3/muQN24sDQmRTV2Hbz1QD+PbuA/uipHg Feb 13 21:00:13.300223 sshd[4485]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 21:00:13.303920 systemd-logind[1418]: New session 117 of user core. Feb 13 21:00:13.316214 systemd[1]: Started session-117.scope - Session 117 of User core. Feb 13 21:00:13.420317 sshd[4485]: pam_unix(sshd:session): session closed for user core Feb 13 21:00:13.423514 systemd[1]: sshd@116-10.0.0.9:22-10.0.0.1:36264.service: Deactivated successfully. Feb 13 21:00:13.425107 systemd[1]: session-117.scope: Deactivated successfully. Feb 13 21:00:13.426496 systemd-logind[1418]: Session 117 logged out. Waiting for processes to exit. Feb 13 21:00:13.427425 systemd-logind[1418]: Removed session 117.
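
The capture above is essentially three error signatures plus an SSH session loop repeating from 20:57 to 21:00. A small filter can condense a journal capture like this into counts per message type; this sketch reads journal text on stdin, with the patterns keyed to the exact messages seen here:

    import re
    import sys
    from collections import Counter

    # Signatures of the messages that dominate this capture.
    PATTERNS = {
        "cni not initialized": re.compile(r"cni plugin not initialized"),
        "nameserver limits":   re.compile(r"Nameserver limits exceeded"),
        "image pull backoff":  re.compile(r"ImagePullBackOff"),
        "ssh session opened":  re.compile(r"session opened for user"),
        "ssh session closed":  re.compile(r"session closed for user"),
    }

    def summarize(lines) -> Counter:
        counts = Counter()
        for line in lines:
            # findall, not search: wrapped captures can hold several
            # journal entries per physical line.
            for label, pat in PATTERNS.items():
                counts[label] += len(pat.findall(line))
        return counts

    if __name__ == "__main__":
        for label, n in summarize(sys.stdin).most_common():
            print(f"{n:6d}  {label}")
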