Feb 13 20:16:54.955177 kernel: Booting Linux on physical CPU 0x0000000000 [0x413fd0c1]
Feb 13 20:16:54.955198 kernel: Linux version 6.6.74-flatcar (build@pony-truck.infra.kinvolk.io) (aarch64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p17) 13.3.1 20240614, GNU ld (Gentoo 2.42 p3) 2.42.0) #1 SMP PREEMPT Thu Feb 13 18:13:29 -00 2025
Feb 13 20:16:54.955207 kernel: KASLR enabled
Feb 13 20:16:54.955213 kernel: efi: EFI v2.7 by EDK II
Feb 13 20:16:54.955219 kernel: efi: SMBIOS 3.0=0xdced0000 MEMATTR=0xdba86018 ACPI 2.0=0xd9710018 RNG=0xd971e498 MEMRESERVE=0xd9b43d18
Feb 13 20:16:54.955224 kernel: random: crng init done
Feb 13 20:16:54.955231 kernel: ACPI: Early table checksum verification disabled
Feb 13 20:16:54.955237 kernel: ACPI: RSDP 0x00000000D9710018 000024 (v02 BOCHS )
Feb 13 20:16:54.955244 kernel: ACPI: XSDT 0x00000000D971FE98 000064 (v01 BOCHS BXPC 00000001 01000013)
Feb 13 20:16:54.955251 kernel: ACPI: FACP 0x00000000D971FA98 000114 (v06 BOCHS BXPC 00000001 BXPC 00000001)
Feb 13 20:16:54.955257 kernel: ACPI: DSDT 0x00000000D9717518 0014A2 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Feb 13 20:16:54.955272 kernel: ACPI: APIC 0x00000000D971FC18 0001A8 (v04 BOCHS BXPC 00000001 BXPC 00000001)
Feb 13 20:16:54.955278 kernel: ACPI: PPTT 0x00000000D971D898 00009C (v02 BOCHS BXPC 00000001 BXPC 00000001)
Feb 13 20:16:54.955284 kernel: ACPI: GTDT 0x00000000D971E818 000060 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Feb 13 20:16:54.955291 kernel: ACPI: MCFG 0x00000000D971E918 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001)
Feb 13 20:16:54.955300 kernel: ACPI: SPCR 0x00000000D971FF98 000050 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Feb 13 20:16:54.955306 kernel: ACPI: DBG2 0x00000000D971E418 000057 (v00 BOCHS BXPC 00000001 BXPC 00000001)
Feb 13 20:16:54.955313 kernel: ACPI: IORT 0x00000000D971E718 000080 (v03 BOCHS BXPC 00000001 BXPC 00000001)
Feb 13 20:16:54.955319 kernel: ACPI: SPCR: console: pl011,mmio,0x9000000,9600
Feb 13 20:16:54.955325 kernel: NUMA: Failed to initialise from firmware
Feb 13 20:16:54.955332 kernel: NUMA: Faking a node at [mem 0x0000000040000000-0x00000000dcffffff]
Feb 13 20:16:54.955338 kernel: NUMA: NODE_DATA [mem 0xdc958800-0xdc95dfff]
Feb 13 20:16:54.955344 kernel: Zone ranges:
Feb 13 20:16:54.955351 kernel: DMA [mem 0x0000000040000000-0x00000000dcffffff]
Feb 13 20:16:54.955357 kernel: DMA32 empty
Feb 13 20:16:54.955365 kernel: Normal empty
Feb 13 20:16:54.955371 kernel: Movable zone start for each node
Feb 13 20:16:54.955377 kernel: Early memory node ranges
Feb 13 20:16:54.955384 kernel: node 0: [mem 0x0000000040000000-0x00000000d976ffff]
Feb 13 20:16:54.955390 kernel: node 0: [mem 0x00000000d9770000-0x00000000d9b3ffff]
Feb 13 20:16:54.955397 kernel: node 0: [mem 0x00000000d9b40000-0x00000000dce1ffff]
Feb 13 20:16:54.955403 kernel: node 0: [mem 0x00000000dce20000-0x00000000dceaffff]
Feb 13 20:16:54.955409 kernel: node 0: [mem 0x00000000dceb0000-0x00000000dcebffff]
Feb 13 20:16:54.955416 kernel: node 0: [mem 0x00000000dcec0000-0x00000000dcfdffff]
Feb 13 20:16:54.955431 kernel: node 0: [mem 0x00000000dcfe0000-0x00000000dcffffff]
Feb 13 20:16:54.955438 kernel: Initmem setup node 0 [mem 0x0000000040000000-0x00000000dcffffff]
Feb 13 20:16:54.955445 kernel: On node 0, zone DMA: 12288 pages in unavailable ranges
Feb 13 20:16:54.955453 kernel: psci: probing for conduit method from ACPI.
Feb 13 20:16:54.955459 kernel: psci: PSCIv1.1 detected in firmware.
Feb 13 20:16:54.955466 kernel: psci: Using standard PSCI v0.2 function IDs
Feb 13 20:16:54.955475 kernel: psci: Trusted OS migration not required
Feb 13 20:16:54.955481 kernel: psci: SMC Calling Convention v1.1
Feb 13 20:16:54.955488 kernel: smccc: KVM: hypervisor services detected (0x00000000 0x00000000 0x00000000 0x00000003)
Feb 13 20:16:54.955496 kernel: percpu: Embedded 31 pages/cpu s86696 r8192 d32088 u126976
Feb 13 20:16:54.955504 kernel: pcpu-alloc: s86696 r8192 d32088 u126976 alloc=31*4096
Feb 13 20:16:54.955510 kernel: pcpu-alloc: [0] 0 [0] 1 [0] 2 [0] 3
Feb 13 20:16:54.955517 kernel: Detected PIPT I-cache on CPU0
Feb 13 20:16:54.955524 kernel: CPU features: detected: GIC system register CPU interface
Feb 13 20:16:54.955531 kernel: CPU features: detected: Hardware dirty bit management
Feb 13 20:16:54.955538 kernel: CPU features: detected: Spectre-v4
Feb 13 20:16:54.955544 kernel: CPU features: detected: Spectre-BHB
Feb 13 20:16:54.955551 kernel: CPU features: kernel page table isolation forced ON by KASLR
Feb 13 20:16:54.955558 kernel: CPU features: detected: Kernel page table isolation (KPTI)
Feb 13 20:16:54.955566 kernel: CPU features: detected: ARM erratum 1418040
Feb 13 20:16:54.955572 kernel: CPU features: detected: SSBS not fully self-synchronizing
Feb 13 20:16:54.955579 kernel: alternatives: applying boot alternatives
Feb 13 20:16:54.955587 kernel: Kernel command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected acpi=force verity.usrhash=c15c751c06cfb933aa98417326b93d899c08a83ce060a940cd01082629c201a7
Feb 13 20:16:54.955594 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Feb 13 20:16:54.955601 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Feb 13 20:16:54.955608 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Feb 13 20:16:54.955614 kernel: Fallback order for Node 0: 0
Feb 13 20:16:54.955621 kernel: Built 1 zonelists, mobility grouping on. Total pages: 633024
Feb 13 20:16:54.955628 kernel: Policy zone: DMA
Feb 13 20:16:54.955634 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Feb 13 20:16:54.955642 kernel: software IO TLB: area num 4.
Feb 13 20:16:54.955649 kernel: software IO TLB: mapped [mem 0x00000000d2e00000-0x00000000d6e00000] (64MB)
Feb 13 20:16:54.955656 kernel: Memory: 2386532K/2572288K available (10240K kernel code, 2186K rwdata, 8096K rodata, 39360K init, 897K bss, 185756K reserved, 0K cma-reserved)
Feb 13 20:16:54.955663 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1
Feb 13 20:16:54.955670 kernel: rcu: Preemptible hierarchical RCU implementation.
Feb 13 20:16:54.955677 kernel: rcu: RCU event tracing is enabled.
Feb 13 20:16:54.955684 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4.
Feb 13 20:16:54.955691 kernel: Trampoline variant of Tasks RCU enabled.
Feb 13 20:16:54.955698 kernel: Tracing variant of Tasks RCU enabled.
Feb 13 20:16:54.955705 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Feb 13 20:16:54.955712 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4
Feb 13 20:16:54.955718 kernel: NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0
Feb 13 20:16:54.955726 kernel: GICv3: 256 SPIs implemented
Feb 13 20:16:54.955733 kernel: GICv3: 0 Extended SPIs implemented
Feb 13 20:16:54.955740 kernel: Root IRQ handler: gic_handle_irq
Feb 13 20:16:54.955746 kernel: GICv3: GICv3 features: 16 PPIs, DirectLPI
Feb 13 20:16:54.955753 kernel: GICv3: CPU0: found redistributor 0 region 0:0x00000000080a0000
Feb 13 20:16:54.955760 kernel: ITS [mem 0x08080000-0x0809ffff]
Feb 13 20:16:54.955767 kernel: ITS@0x0000000008080000: allocated 8192 Devices @400c0000 (indirect, esz 8, psz 64K, shr 1)
Feb 13 20:16:54.955774 kernel: ITS@0x0000000008080000: allocated 8192 Interrupt Collections @400d0000 (flat, esz 8, psz 64K, shr 1)
Feb 13 20:16:54.955781 kernel: GICv3: using LPI property table @0x00000000400f0000
Feb 13 20:16:54.955787 kernel: GICv3: CPU0: using allocated LPI pending table @0x0000000040100000
Feb 13 20:16:54.955794 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Feb 13 20:16:54.955802 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Feb 13 20:16:54.955809 kernel: arch_timer: cp15 timer(s) running at 25.00MHz (virt).
Feb 13 20:16:54.955816 kernel: clocksource: arch_sys_counter: mask: 0xffffffffffffff max_cycles: 0x5c40939b5, max_idle_ns: 440795202646 ns
Feb 13 20:16:54.955823 kernel: sched_clock: 56 bits at 25MHz, resolution 40ns, wraps every 4398046511100ns
Feb 13 20:16:54.955830 kernel: arm-pv: using stolen time PV
Feb 13 20:16:54.955837 kernel: Console: colour dummy device 80x25
Feb 13 20:16:54.955844 kernel: ACPI: Core revision 20230628
Feb 13 20:16:54.955851 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 50.00 BogoMIPS (lpj=25000)
Feb 13 20:16:54.955858 kernel: pid_max: default: 32768 minimum: 301
Feb 13 20:16:54.955865 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity
Feb 13 20:16:54.955873 kernel: landlock: Up and running.
Feb 13 20:16:54.955880 kernel: SELinux: Initializing.
Feb 13 20:16:54.955886 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Feb 13 20:16:54.955894 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Feb 13 20:16:54.955901 kernel: RCU Tasks: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Feb 13 20:16:54.955908 kernel: RCU Tasks Trace: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Feb 13 20:16:54.955915 kernel: rcu: Hierarchical SRCU implementation.
Feb 13 20:16:54.955922 kernel: rcu: Max phase no-delay instances is 400.
Feb 13 20:16:54.955929 kernel: Platform MSI: ITS@0x8080000 domain created
Feb 13 20:16:54.955936 kernel: PCI/MSI: ITS@0x8080000 domain created
Feb 13 20:16:54.955943 kernel: Remapping and enabling EFI services.
Feb 13 20:16:54.955950 kernel: smp: Bringing up secondary CPUs ...
Feb 13 20:16:54.955957 kernel: Detected PIPT I-cache on CPU1
Feb 13 20:16:54.955964 kernel: GICv3: CPU1: found redistributor 1 region 0:0x00000000080c0000
Feb 13 20:16:54.955971 kernel: GICv3: CPU1: using allocated LPI pending table @0x0000000040110000
Feb 13 20:16:54.955978 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Feb 13 20:16:54.955985 kernel: CPU1: Booted secondary processor 0x0000000001 [0x413fd0c1]
Feb 13 20:16:54.955992 kernel: Detected PIPT I-cache on CPU2
Feb 13 20:16:54.955999 kernel: GICv3: CPU2: found redistributor 2 region 0:0x00000000080e0000
Feb 13 20:16:54.956007 kernel: GICv3: CPU2: using allocated LPI pending table @0x0000000040120000
Feb 13 20:16:54.956014 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Feb 13 20:16:54.956026 kernel: CPU2: Booted secondary processor 0x0000000002 [0x413fd0c1]
Feb 13 20:16:54.956034 kernel: Detected PIPT I-cache on CPU3
Feb 13 20:16:54.956042 kernel: GICv3: CPU3: found redistributor 3 region 0:0x0000000008100000
Feb 13 20:16:54.956049 kernel: GICv3: CPU3: using allocated LPI pending table @0x0000000040130000
Feb 13 20:16:54.956056 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Feb 13 20:16:54.956063 kernel: CPU3: Booted secondary processor 0x0000000003 [0x413fd0c1]
Feb 13 20:16:54.956071 kernel: smp: Brought up 1 node, 4 CPUs
Feb 13 20:16:54.956080 kernel: SMP: Total of 4 processors activated.
Feb 13 20:16:54.956087 kernel: CPU features: detected: 32-bit EL0 Support
Feb 13 20:16:54.956094 kernel: CPU features: detected: Data cache clean to the PoU not required for I/D coherence
Feb 13 20:16:54.956102 kernel: CPU features: detected: Common not Private translations
Feb 13 20:16:54.956109 kernel: CPU features: detected: CRC32 instructions
Feb 13 20:16:54.956116 kernel: CPU features: detected: Enhanced Virtualization Traps
Feb 13 20:16:54.956124 kernel: CPU features: detected: RCpc load-acquire (LDAPR)
Feb 13 20:16:54.956131 kernel: CPU features: detected: LSE atomic instructions
Feb 13 20:16:54.956140 kernel: CPU features: detected: Privileged Access Never
Feb 13 20:16:54.956147 kernel: CPU features: detected: RAS Extension Support
Feb 13 20:16:54.956154 kernel: CPU features: detected: Speculative Store Bypassing Safe (SSBS)
Feb 13 20:16:54.956161 kernel: CPU: All CPU(s) started at EL1
Feb 13 20:16:54.956169 kernel: alternatives: applying system-wide alternatives
Feb 13 20:16:54.956176 kernel: devtmpfs: initialized
Feb 13 20:16:54.956183 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Feb 13 20:16:54.956191 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear)
Feb 13 20:16:54.956198 kernel: pinctrl core: initialized pinctrl subsystem
Feb 13 20:16:54.956207 kernel: SMBIOS 3.0.0 present.
Feb 13 20:16:54.956214 kernel: DMI: QEMU KVM Virtual Machine, BIOS edk2-20230524-3.fc38 05/24/2023
Feb 13 20:16:54.956221 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Feb 13 20:16:54.956229 kernel: DMA: preallocated 512 KiB GFP_KERNEL pool for atomic allocations
Feb 13 20:16:54.956236 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations
Feb 13 20:16:54.956243 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations
Feb 13 20:16:54.956251 kernel: audit: initializing netlink subsys (disabled)
Feb 13 20:16:54.956258 kernel: audit: type=2000 audit(0.023:1): state=initialized audit_enabled=0 res=1
Feb 13 20:16:54.956270 kernel: thermal_sys: Registered thermal governor 'step_wise'
Feb 13 20:16:54.956279 kernel: cpuidle: using governor menu
Feb 13 20:16:54.956286 kernel: hw-breakpoint: found 6 breakpoint and 4 watchpoint registers.
Feb 13 20:16:54.956294 kernel: ASID allocator initialised with 32768 entries
Feb 13 20:16:54.956301 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Feb 13 20:16:54.956308 kernel: Serial: AMBA PL011 UART driver
Feb 13 20:16:54.956316 kernel: Modules: 2G module region forced by RANDOMIZE_MODULE_REGION_FULL
Feb 13 20:16:54.956323 kernel: Modules: 0 pages in range for non-PLT usage
Feb 13 20:16:54.956330 kernel: Modules: 509040 pages in range for PLT usage
Feb 13 20:16:54.956338 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Feb 13 20:16:54.956346 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 1.00 GiB page
Feb 13 20:16:54.956354 kernel: HugeTLB: registered 32.0 MiB page size, pre-allocated 0 pages
Feb 13 20:16:54.956361 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 32.0 MiB page
Feb 13 20:16:54.956368 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Feb 13 20:16:54.956376 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 2.00 MiB page
Feb 13 20:16:54.956383 kernel: HugeTLB: registered 64.0 KiB page size, pre-allocated 0 pages
Feb 13 20:16:54.956390 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 64.0 KiB page
Feb 13 20:16:54.956397 kernel: ACPI: Added _OSI(Module Device)
Feb 13 20:16:54.956404 kernel: ACPI: Added _OSI(Processor Device)
Feb 13 20:16:54.956413 kernel: ACPI: Added _OSI(3.0 _SCP Extensions)
Feb 13 20:16:54.956420 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Feb 13 20:16:54.956446 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Feb 13 20:16:54.956454 kernel: ACPI: Interpreter enabled
Feb 13 20:16:54.956462 kernel: ACPI: Using GIC for interrupt routing
Feb 13 20:16:54.956469 kernel: ACPI: MCFG table detected, 1 entries
Feb 13 20:16:54.956477 kernel: ARMH0011:00: ttyAMA0 at MMIO 0x9000000 (irq = 12, base_baud = 0) is a SBSA
Feb 13 20:16:54.956484 kernel: printk: console [ttyAMA0] enabled
Feb 13 20:16:54.956491 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Feb 13 20:16:54.956625 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Feb 13 20:16:54.956700 kernel: acpi PNP0A08:00: _OSC: platform does not support [LTR]
Feb 13 20:16:54.956767 kernel: acpi PNP0A08:00: _OSC: OS now controls [PCIeHotplug PME AER PCIeCapability]
Feb 13 20:16:54.956831 kernel: acpi PNP0A08:00: ECAM area [mem 0x4010000000-0x401fffffff] reserved by PNP0C02:00
Feb 13 20:16:54.956896 kernel: acpi PNP0A08:00: ECAM at [mem 0x4010000000-0x401fffffff] for [bus 00-ff]
Feb 13 20:16:54.956905 kernel: ACPI: Remapped I/O 0x000000003eff0000 to [io 0x0000-0xffff window]
Feb 13 20:16:54.956913 kernel: PCI host bridge to bus 0000:00
Feb 13 20:16:54.956989 kernel: pci_bus 0000:00: root bus resource [mem 0x10000000-0x3efeffff window]
Feb 13 20:16:54.957050 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0xffff window]
Feb 13 20:16:54.957109 kernel: pci_bus 0000:00: root bus resource [mem 0x8000000000-0xffffffffff window]
Feb 13 20:16:54.957168 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Feb 13 20:16:54.957249 kernel: pci 0000:00:00.0: [1b36:0008] type 00 class 0x060000
Feb 13 20:16:54.957344 kernel: pci 0000:00:01.0: [1af4:1005] type 00 class 0x00ff00
Feb 13 20:16:54.957505 kernel: pci 0000:00:01.0: reg 0x10: [io 0x0000-0x001f]
Feb 13 20:16:54.957595 kernel: pci 0000:00:01.0: reg 0x14: [mem 0x10000000-0x10000fff]
Feb 13 20:16:54.957665 kernel: pci 0000:00:01.0: reg 0x20: [mem 0x8000000000-0x8000003fff 64bit pref]
Feb 13 20:16:54.957731 kernel: pci 0000:00:01.0: BAR 4: assigned [mem 0x8000000000-0x8000003fff 64bit pref]
Feb 13 20:16:54.957799 kernel: pci 0000:00:01.0: BAR 1: assigned [mem 0x10000000-0x10000fff]
Feb 13 20:16:54.957865 kernel: pci 0000:00:01.0: BAR 0: assigned [io 0x1000-0x101f]
Feb 13 20:16:54.957927 kernel: pci_bus 0000:00: resource 4 [mem 0x10000000-0x3efeffff window]
Feb 13 20:16:54.957991 kernel: pci_bus 0000:00: resource 5 [io 0x0000-0xffff window]
Feb 13 20:16:54.958050 kernel: pci_bus 0000:00: resource 6 [mem 0x8000000000-0xffffffffff window]
Feb 13 20:16:54.958060 kernel: ACPI: PCI: Interrupt link GSI0 configured for IRQ 35
Feb 13 20:16:54.958068 kernel: ACPI: PCI: Interrupt link GSI1 configured for IRQ 36
Feb 13 20:16:54.958076 kernel: ACPI: PCI: Interrupt link GSI2 configured for IRQ 37
Feb 13 20:16:54.958083 kernel: ACPI: PCI: Interrupt link GSI3 configured for IRQ 38
Feb 13 20:16:54.958090 kernel: iommu: Default domain type: Translated
Feb 13 20:16:54.958098 kernel: iommu: DMA domain TLB invalidation policy: strict mode
Feb 13 20:16:54.958105 kernel: efivars: Registered efivars operations
Feb 13 20:16:54.958114 kernel: vgaarb: loaded
Feb 13 20:16:54.958121 kernel: clocksource: Switched to clocksource arch_sys_counter
Feb 13 20:16:54.958129 kernel: VFS: Disk quotas dquot_6.6.0
Feb 13 20:16:54.958136 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Feb 13 20:16:54.958144 kernel: pnp: PnP ACPI init
Feb 13 20:16:54.958214 kernel: system 00:00: [mem 0x4010000000-0x401fffffff window] could not be reserved
Feb 13 20:16:54.958224 kernel: pnp: PnP ACPI: found 1 devices
Feb 13 20:16:54.958232 kernel: NET: Registered PF_INET protocol family
Feb 13 20:16:54.958241 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Feb 13 20:16:54.958249 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Feb 13 20:16:54.958256 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Feb 13 20:16:54.958271 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Feb 13 20:16:54.958278 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
Feb 13 20:16:54.958286 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Feb 13 20:16:54.958293 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Feb 13 20:16:54.958300 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Feb 13 20:16:54.958308 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Feb 13 20:16:54.958317 kernel: PCI: CLS 0 bytes, default 64
Feb 13 20:16:54.958324 kernel: kvm [1]: HYP mode not available
Feb 13 20:16:54.958332 kernel: Initialise system trusted keyrings
Feb 13 20:16:54.958339 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Feb 13 20:16:54.958346 kernel: Key type asymmetric registered
Feb 13 20:16:54.958353 kernel: Asymmetric key parser 'x509' registered
Feb 13 20:16:54.958360 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250)
Feb 13 20:16:54.958368 kernel: io scheduler mq-deadline registered
Feb 13 20:16:54.958375 kernel: io scheduler kyber registered
Feb 13 20:16:54.958384 kernel: io scheduler bfq registered
Feb 13 20:16:54.958391 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0C:00/input/input0
Feb 13 20:16:54.958399 kernel: ACPI: button: Power Button [PWRB]
Feb 13 20:16:54.958406 kernel: ACPI: \_SB_.PCI0.GSI1: Enabled at IRQ 36
Feb 13 20:16:54.958501 kernel: virtio-pci 0000:00:01.0: enabling device (0005 -> 0007)
Feb 13 20:16:54.958512 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Feb 13 20:16:54.958520 kernel: thunder_xcv, ver 1.0
Feb 13 20:16:54.958527 kernel: thunder_bgx, ver 1.0
Feb 13 20:16:54.958534 kernel: nicpf, ver 1.0
Feb 13 20:16:54.958544 kernel: nicvf, ver 1.0
Feb 13 20:16:54.958622 kernel: rtc-efi rtc-efi.0: registered as rtc0
Feb 13 20:16:54.958687 kernel: rtc-efi rtc-efi.0: setting system clock to 2025-02-13T20:16:54 UTC (1739477814)
Feb 13 20:16:54.958697 kernel: hid: raw HID events driver (C) Jiri Kosina
Feb 13 20:16:54.958704 kernel: hw perfevents: enabled with armv8_pmuv3_0 PMU driver, 7 counters available
Feb 13 20:16:54.958712 kernel: watchdog: Delayed init of the lockup detector failed: -19
Feb 13 20:16:54.958719 kernel: watchdog: Hard watchdog permanently disabled
Feb 13 20:16:54.958727 kernel: NET: Registered PF_INET6 protocol family
Feb 13 20:16:54.958736 kernel: Segment Routing with IPv6
Feb 13 20:16:54.958744 kernel: In-situ OAM (IOAM) with IPv6
Feb 13 20:16:54.958751 kernel: NET: Registered PF_PACKET protocol family
Feb 13 20:16:54.958758 kernel: Key type dns_resolver registered
Feb 13 20:16:54.958765 kernel: registered taskstats version 1
Feb 13 20:16:54.958773 kernel: Loading compiled-in X.509 certificates
Feb 13 20:16:54.958780 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.74-flatcar: 8bd805622262697b24b0fa7c407ae82c4289ceec'
Feb 13 20:16:54.958787 kernel: Key type .fscrypt registered
Feb 13 20:16:54.958795 kernel: Key type fscrypt-provisioning registered
Feb 13 20:16:54.958803 kernel: ima: No TPM chip found, activating TPM-bypass!
Feb 13 20:16:54.958811 kernel: ima: Allocated hash algorithm: sha1
Feb 13 20:16:54.958818 kernel: ima: No architecture policies found
Feb 13 20:16:54.958826 kernel: alg: No test for fips(ansi_cprng) (fips_ansi_cprng)
Feb 13 20:16:54.958833 kernel: clk: Disabling unused clocks
Feb 13 20:16:54.958841 kernel: Freeing unused kernel memory: 39360K
Feb 13 20:16:54.958848 kernel: Run /init as init process
Feb 13 20:16:54.958855 kernel: with arguments:
Feb 13 20:16:54.958877 kernel: /init
Feb 13 20:16:54.958886 kernel: with environment:
Feb 13 20:16:54.958893 kernel: HOME=/
Feb 13 20:16:54.958901 kernel: TERM=linux
Feb 13 20:16:54.958909 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
Feb 13 20:16:54.958918 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Feb 13 20:16:54.958927 systemd[1]: Detected virtualization kvm.
Feb 13 20:16:54.958936 systemd[1]: Detected architecture arm64.
Feb 13 20:16:54.958945 systemd[1]: Running in initrd.
Feb 13 20:16:54.958953 systemd[1]: No hostname configured, using default hostname.
Feb 13 20:16:54.958961 systemd[1]: Hostname set to <localhost>.
Feb 13 20:16:54.958969 systemd[1]: Initializing machine ID from VM UUID.
Feb 13 20:16:54.958977 systemd[1]: Queued start job for default target initrd.target.
Feb 13 20:16:54.958985 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Feb 13 20:16:54.958993 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Feb 13 20:16:54.959001 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Feb 13 20:16:54.959011 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Feb 13 20:16:54.959019 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Feb 13 20:16:54.959027 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Feb 13 20:16:54.959036 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Feb 13 20:16:54.959044 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Feb 13 20:16:54.959052 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Feb 13 20:16:54.959060 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Feb 13 20:16:54.959069 systemd[1]: Reached target paths.target - Path Units.
Feb 13 20:16:54.959077 systemd[1]: Reached target slices.target - Slice Units.
Feb 13 20:16:54.959085 systemd[1]: Reached target swap.target - Swaps.
Feb 13 20:16:54.959093 systemd[1]: Reached target timers.target - Timer Units.
Feb 13 20:16:54.959100 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Feb 13 20:16:54.959108 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Feb 13 20:16:54.959116 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Feb 13 20:16:54.959124 systemd[1]: Listening on systemd-journald.socket - Journal Socket.
Feb 13 20:16:54.959132 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Feb 13 20:16:54.959141 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Feb 13 20:16:54.959149 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Feb 13 20:16:54.959157 systemd[1]: Reached target sockets.target - Socket Units.
Feb 13 20:16:54.959165 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Feb 13 20:16:54.959173 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Feb 13 20:16:54.959180 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Feb 13 20:16:54.959188 systemd[1]: Starting systemd-fsck-usr.service...
Feb 13 20:16:54.959196 systemd[1]: Starting systemd-journald.service - Journal Service...
Feb 13 20:16:54.959205 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Feb 13 20:16:54.959213 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Feb 13 20:16:54.959221 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Feb 13 20:16:54.959229 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Feb 13 20:16:54.959237 systemd[1]: Finished systemd-fsck-usr.service.
Feb 13 20:16:54.959246 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Feb 13 20:16:54.959255 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Feb 13 20:16:54.959289 systemd-journald[238]: Collecting audit messages is disabled.
Feb 13 20:16:54.959308 systemd-journald[238]: Journal started
Feb 13 20:16:54.959330 systemd-journald[238]: Runtime Journal (/run/log/journal/8a8af9f6106b400981c296365c9d642b) is 5.9M, max 47.3M, 41.4M free.
Feb 13 20:16:54.951910 systemd-modules-load[240]: Inserted module 'overlay'
Feb 13 20:16:54.966456 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Feb 13 20:16:54.968779 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Feb 13 20:16:54.968796 kernel: Bridge firewalling registered
Feb 13 20:16:54.969317 systemd-modules-load[240]: Inserted module 'br_netfilter'
Feb 13 20:16:54.971123 systemd[1]: Started systemd-journald.service - Journal Service.
Feb 13 20:16:54.972570 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Feb 13 20:16:54.974629 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Feb 13 20:16:54.976453 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Feb 13 20:16:54.980327 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Feb 13 20:16:54.982140 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Feb 13 20:16:54.986562 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Feb 13 20:16:54.995697 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Feb 13 20:16:54.997650 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Feb 13 20:16:55.008651 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Feb 13 20:16:55.009889 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Feb 13 20:16:55.013834 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Feb 13 20:16:55.026221 dracut-cmdline[280]: dracut-dracut-053
Feb 13 20:16:55.028702 dracut-cmdline[280]: Using kernel command line parameters: rd.driver.pre=btrfs BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected acpi=force verity.usrhash=c15c751c06cfb933aa98417326b93d899c08a83ce060a940cd01082629c201a7
Feb 13 20:16:55.037204 systemd-resolved[274]: Positive Trust Anchors:
Feb 13 20:16:55.037222 systemd-resolved[274]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Feb 13 20:16:55.037255 systemd-resolved[274]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Feb 13 20:16:55.041925 systemd-resolved[274]: Defaulting to hostname 'linux'.
Feb 13 20:16:55.045499 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Feb 13 20:16:55.046633 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Feb 13 20:16:55.094448 kernel: SCSI subsystem initialized
Feb 13 20:16:55.099441 kernel: Loading iSCSI transport class v2.0-870.
Feb 13 20:16:55.106460 kernel: iscsi: registered transport (tcp)
Feb 13 20:16:55.119453 kernel: iscsi: registered transport (qla4xxx)
Feb 13 20:16:55.119472 kernel: QLogic iSCSI HBA Driver
Feb 13 20:16:55.162170 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Feb 13 20:16:55.168582 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Feb 13 20:16:55.185783 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Feb 13 20:16:55.185837 kernel: device-mapper: uevent: version 1.0.3
Feb 13 20:16:55.186882 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com
Feb 13 20:16:55.234458 kernel: raid6: neonx8 gen() 15754 MB/s
Feb 13 20:16:55.251447 kernel: raid6: neonx4 gen() 15629 MB/s
Feb 13 20:16:55.268444 kernel: raid6: neonx2 gen() 13231 MB/s
Feb 13 20:16:55.285445 kernel: raid6: neonx1 gen() 10463 MB/s
Feb 13 20:16:55.302445 kernel: raid6: int64x8 gen() 6938 MB/s
Feb 13 20:16:55.319444 kernel: raid6: int64x4 gen() 7333 MB/s
Feb 13 20:16:55.336443 kernel: raid6: int64x2 gen() 6117 MB/s
Feb 13 20:16:55.353539 kernel: raid6: int64x1 gen() 5047 MB/s
Feb 13 20:16:55.353558 kernel: raid6: using algorithm neonx8 gen() 15754 MB/s
Feb 13 20:16:55.371521 kernel: raid6: .... xor() 11927 MB/s, rmw enabled
Feb 13 20:16:55.371538 kernel: raid6: using neon recovery algorithm
Feb 13 20:16:55.376941 kernel: xor: measuring software checksum speed
Feb 13 20:16:55.376967 kernel: 8regs : 18916 MB/sec
Feb 13 20:16:55.377652 kernel: 32regs : 19679 MB/sec
Feb 13 20:16:55.378904 kernel: arm64_neon : 26998 MB/sec
Feb 13 20:16:55.378915 kernel: xor: using function: arm64_neon (26998 MB/sec)
Feb 13 20:16:55.430463 kernel: Btrfs loaded, zoned=no, fsverity=no
Feb 13 20:16:55.441269 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Feb 13 20:16:55.451565 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Feb 13 20:16:55.463900 systemd-udevd[462]: Using default interface naming scheme 'v255'.
Feb 13 20:16:55.467073 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Feb 13 20:16:55.474578 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Feb 13 20:16:55.486059 dracut-pre-trigger[470]: rd.md=0: removing MD RAID activation
Feb 13 20:16:55.517488 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Feb 13 20:16:55.530621 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Feb 13 20:16:55.571733 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Feb 13 20:16:55.580624 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Feb 13 20:16:55.592484 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Feb 13 20:16:55.595081 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Feb 13 20:16:55.597598 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Feb 13 20:16:55.599821 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Feb 13 20:16:55.608065 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Feb 13 20:16:55.619185 kernel: virtio_blk virtio1: 1/0/0 default/read/poll queues
Feb 13 20:16:55.628047 kernel: virtio_blk virtio1: [vda] 19775488 512-byte logical blocks (10.1 GB/9.43 GiB)
Feb 13 20:16:55.628149 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Feb 13 20:16:55.628160 kernel: GPT:9289727 != 19775487
Feb 13 20:16:55.628170 kernel: GPT:Alternate GPT header not at the end of the disk.
Feb 13 20:16:55.628179 kernel: GPT:9289727 != 19775487
Feb 13 20:16:55.628188 kernel: GPT: Use GNU Parted to correct GPT errors.
Feb 13 20:16:55.628197 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Feb 13 20:16:55.618651 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Feb 13 20:16:55.626570 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Feb 13 20:16:55.626692 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Feb 13 20:16:55.628046 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Feb 13 20:16:55.629179 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Feb 13 20:16:55.629480 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Feb 13 20:16:55.632518 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Feb 13 20:16:55.638648 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Feb 13 20:16:55.651451 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/vda6 scanned by (udev-worker) (522)
Feb 13 20:16:55.651494 kernel: BTRFS: device fsid 4bb2b262-8ef2-48e3-80f4-24f9d7a85bf6 devid 1 transid 40 /dev/vda3 scanned by (udev-worker) (516)
Feb 13 20:16:55.652800 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Feb 13 20:16:55.658062 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM.
Feb 13 20:16:55.665362 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT.
Feb 13 20:16:55.669973 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Feb 13 20:16:55.673959 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A.
Feb 13 20:16:55.675178 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132.
Feb 13 20:16:55.689571 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Feb 13 20:16:55.691444 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Feb 13 20:16:55.697985 disk-uuid[550]: Primary Header is updated.
Feb 13 20:16:55.697985 disk-uuid[550]: Secondary Entries is updated.
Feb 13 20:16:55.697985 disk-uuid[550]: Secondary Header is updated.
Feb 13 20:16:55.701444 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Feb 13 20:16:55.713098 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Feb 13 20:16:56.714476 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Feb 13 20:16:56.714653 disk-uuid[552]: The operation has completed successfully.
Feb 13 20:16:56.737179 systemd[1]: disk-uuid.service: Deactivated successfully.
Feb 13 20:16:56.737287 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Feb 13 20:16:56.758610 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
Feb 13 20:16:56.761490 sh[574]: Success
Feb 13 20:16:56.773441 kernel: device-mapper: verity: sha256 using implementation "sha256-ce"
Feb 13 20:16:56.817342 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
Feb 13 20:16:56.831817 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
Feb 13 20:16:56.833501 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
Feb 13 20:16:56.843289 kernel: BTRFS info (device dm-0): first mount of filesystem 4bb2b262-8ef2-48e3-80f4-24f9d7a85bf6
Feb 13 20:16:56.843326 kernel: BTRFS info (device dm-0): using crc32c (crc32c-generic) checksum algorithm
Feb 13 20:16:56.843337 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead
Feb 13 20:16:56.845198 kernel: BTRFS info (device dm-0): disabling log replay at mount time
Feb 13 20:16:56.845214 kernel: BTRFS info (device dm-0): using free space tree
Feb 13 20:16:56.849680 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
Feb 13 20:16:56.850733 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Feb 13 20:16:56.859582 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Feb 13 20:16:56.861313 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Feb 13 20:16:56.868471 kernel: BTRFS info (device vda6): first mount of filesystem 896fb6d3-4143-43a6-a44b-ca1ce10817e1
Feb 13 20:16:56.868518 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm
Feb 13 20:16:56.868536 kernel: BTRFS info (device vda6): using free space tree
Feb 13 20:16:56.871448 kernel: BTRFS info (device vda6): auto enabling async discard
Feb 13 20:16:56.882450 kernel: BTRFS info (device vda6): last unmount of filesystem 896fb6d3-4143-43a6-a44b-ca1ce10817e1
Feb 13 20:16:56.882459 systemd[1]: mnt-oem.mount: Deactivated successfully.
Feb 13 20:16:56.888189 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Feb 13 20:16:56.897674 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Feb 13 20:16:56.966473 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Feb 13 20:16:56.976740 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Feb 13 20:16:57.012451 systemd-networkd[765]: lo: Link UP
Feb 13 20:16:57.013261 systemd-networkd[765]: lo: Gained carrier
Feb 13 20:16:57.013985 systemd-networkd[765]: Enumeration completed
Feb 13 20:16:57.014291 systemd[1]: Started systemd-networkd.service - Network Configuration.
Feb 13 20:16:57.015546 systemd[1]: Reached target network.target - Network.
Feb 13 20:16:57.016941 systemd-networkd[765]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Feb 13 20:16:57.016944 systemd-networkd[765]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Feb 13 20:16:57.017798 systemd-networkd[765]: eth0: Link UP
Feb 13 20:16:57.017801 systemd-networkd[765]: eth0: Gained carrier
Feb 13 20:16:57.017808 systemd-networkd[765]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Feb 13 20:16:57.028445 ignition[665]: Ignition 2.19.0
Feb 13 20:16:57.028459 ignition[665]: Stage: fetch-offline
Feb 13 20:16:57.028498 ignition[665]: no configs at "/usr/lib/ignition/base.d"
Feb 13 20:16:57.028507 ignition[665]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Feb 13 20:16:57.028714 ignition[665]: parsed url from cmdline: ""
Feb 13 20:16:57.028717 ignition[665]: no config URL provided
Feb 13 20:16:57.028722 ignition[665]: reading system config file "/usr/lib/ignition/user.ign"
Feb 13 20:16:57.028729 ignition[665]: no config at "/usr/lib/ignition/user.ign"
Feb 13 20:16:57.028769 ignition[665]: op(1): [started] loading QEMU firmware config module
Feb 13 20:16:57.028774 ignition[665]: op(1): executing: "modprobe" "qemu_fw_cfg"
Feb 13 20:16:57.045477 systemd-networkd[765]: eth0: DHCPv4 address 10.0.0.10/16, gateway 10.0.0.1 acquired from 10.0.0.1
Feb 13 20:16:57.047013 ignition[665]: op(1): [finished] loading QEMU firmware config module
Feb 13 20:16:57.067831 ignition[665]: parsing config with SHA512: 64b8fe258fd051485d23c46b5897f07590cb64eafac60eb72e5f5905423630d4a259a942d63f7d99beb7d5ac0cafbaf23879342333569386149578bf7cb20556
Feb 13 20:16:57.072602 unknown[665]: fetched base config from "system"
Feb 13 20:16:57.072616 unknown[665]: fetched user config from "qemu"
Feb 13 20:16:57.072990 ignition[665]: fetch-offline: fetch-offline passed
Feb 13 20:16:57.074593 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Feb 13 20:16:57.073052 ignition[665]: Ignition finished successfully
Feb 13 20:16:57.076395 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json).
Feb 13 20:16:57.086565 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Feb 13 20:16:57.097903 ignition[771]: Ignition 2.19.0
Feb 13 20:16:57.097916 ignition[771]: Stage: kargs
Feb 13 20:16:57.098093 ignition[771]: no configs at "/usr/lib/ignition/base.d"
Feb 13 20:16:57.098103 ignition[771]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Feb 13 20:16:57.102131 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Feb 13 20:16:57.098972 ignition[771]: kargs: kargs passed
Feb 13 20:16:57.099018 ignition[771]: Ignition finished successfully
Feb 13 20:16:57.114587 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Feb 13 20:16:57.123692 ignition[779]: Ignition 2.19.0
Feb 13 20:16:57.123702 ignition[779]: Stage: disks
Feb 13 20:16:57.123855 ignition[779]: no configs at "/usr/lib/ignition/base.d"
Feb 13 20:16:57.123864 ignition[779]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Feb 13 20:16:57.126285 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Feb 13 20:16:57.124670 ignition[779]: disks: disks passed
Feb 13 20:16:57.127563 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Feb 13 20:16:57.124711 ignition[779]: Ignition finished successfully
Feb 13 20:16:57.129332 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Feb 13 20:16:57.131304 systemd[1]: Reached target local-fs.target - Local File Systems.
Feb 13 20:16:57.132726 systemd[1]: Reached target sysinit.target - System Initialization.
Feb 13 20:16:57.134530 systemd[1]: Reached target basic.target - Basic System.
Feb 13 20:16:57.136819 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Feb 13 20:16:57.150354 systemd-fsck[789]: ROOT: clean, 14/553520 files, 52654/553472 blocks
Feb 13 20:16:57.153862 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Feb 13 20:16:57.156594 systemd[1]: Mounting sysroot.mount - /sysroot...
Feb 13 20:16:57.201436 kernel: EXT4-fs (vda9): mounted filesystem 9957d679-c6c4-49f4-b1b2-c3c1f3ba5699 r/w with ordered data mode. Quota mode: none.
Feb 13 20:16:57.201791 systemd[1]: Mounted sysroot.mount - /sysroot.
Feb 13 20:16:57.203119 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Feb 13 20:16:57.214505 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Feb 13 20:16:57.216175 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Feb 13 20:16:57.217279 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met.
Feb 13 20:16:57.217321 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Feb 13 20:16:57.217343 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Feb 13 20:16:57.229374 kernel: BTRFS: device label OEM devid 1 transid 13 /dev/vda6 scanned by mount (797)
Feb 13 20:16:57.229396 kernel: BTRFS info (device vda6): first mount of filesystem 896fb6d3-4143-43a6-a44b-ca1ce10817e1
Feb 13 20:16:57.229407 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm
Feb 13 20:16:57.229417 kernel: BTRFS info (device vda6): using free space tree
Feb 13 20:16:57.221897 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Feb 13 20:16:57.226864 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Feb 13 20:16:57.234446 kernel: BTRFS info (device vda6): auto enabling async discard
Feb 13 20:16:57.236244 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Feb 13 20:16:57.274189 initrd-setup-root[821]: cut: /sysroot/etc/passwd: No such file or directory
Feb 13 20:16:57.278364 initrd-setup-root[828]: cut: /sysroot/etc/group: No such file or directory
Feb 13 20:16:57.282740 initrd-setup-root[835]: cut: /sysroot/etc/shadow: No such file or directory
Feb 13 20:16:57.286440 initrd-setup-root[842]: cut: /sysroot/etc/gshadow: No such file or directory
Feb 13 20:16:57.354440 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Feb 13 20:16:57.362591 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Feb 13 20:16:57.364937 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Feb 13 20:16:57.369438 kernel: BTRFS info (device vda6): last unmount of filesystem 896fb6d3-4143-43a6-a44b-ca1ce10817e1
Feb 13 20:16:57.384806 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Feb 13 20:16:57.386629 ignition[910]: INFO : Ignition 2.19.0
Feb 13 20:16:57.386629 ignition[910]: INFO : Stage: mount
Feb 13 20:16:57.389015 ignition[910]: INFO : no configs at "/usr/lib/ignition/base.d"
Feb 13 20:16:57.389015 ignition[910]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Feb 13 20:16:57.389015 ignition[910]: INFO : mount: mount passed
Feb 13 20:16:57.389015 ignition[910]: INFO : Ignition finished successfully
Feb 13 20:16:57.389376 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Feb 13 20:16:57.400538 systemd[1]: Starting ignition-files.service - Ignition (files)...
Feb 13 20:16:57.842265 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Feb 13 20:16:57.857669 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Feb 13 20:16:57.866244 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/vda6 scanned by mount (923)
Feb 13 20:16:57.866286 kernel: BTRFS info (device vda6): first mount of filesystem 896fb6d3-4143-43a6-a44b-ca1ce10817e1
Feb 13 20:16:57.866297 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm
Feb 13 20:16:57.867865 kernel: BTRFS info (device vda6): using free space tree
Feb 13 20:16:57.870451 kernel: BTRFS info (device vda6): auto enabling async discard
Feb 13 20:16:57.871054 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Feb 13 20:16:57.894410 ignition[940]: INFO : Ignition 2.19.0
Feb 13 20:16:57.894410 ignition[940]: INFO : Stage: files
Feb 13 20:16:57.896122 ignition[940]: INFO : no configs at "/usr/lib/ignition/base.d"
Feb 13 20:16:57.896122 ignition[940]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Feb 13 20:16:57.896122 ignition[940]: DEBUG : files: compiled without relabeling support, skipping
Feb 13 20:16:57.899570 ignition[940]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Feb 13 20:16:57.899570 ignition[940]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Feb 13 20:16:57.902478 ignition[940]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Feb 13 20:16:57.903809 ignition[940]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Feb 13 20:16:57.903809 ignition[940]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Feb 13 20:16:57.902992 unknown[940]: wrote ssh authorized keys file for user: core
Feb 13 20:16:57.907596 ignition[940]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-arm64.tar.gz"
Feb 13 20:16:57.907596 ignition[940]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.13.2-linux-arm64.tar.gz: attempt #1
Feb 13 20:16:57.952747 ignition[940]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
Feb 13 20:16:58.235807 systemd-networkd[765]: eth0: Gained IPv6LL
Feb 13 20:16:58.323018 ignition[940]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-arm64.tar.gz"
Feb 13 20:16:58.323018 ignition[940]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh"
Feb 13 20:16:58.327560 ignition[940]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh"
Feb 13 20:16:58.327560 ignition[940]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml"
Feb 13 20:16:58.327560 ignition[940]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml"
Feb 13 20:16:58.327560 ignition[940]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Feb 13 20:16:58.327560 ignition[940]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Feb 13 20:16:58.327560 ignition[940]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Feb 13 20:16:58.327560 ignition[940]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Feb 13 20:16:58.327560 ignition[940]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf"
Feb 13 20:16:58.327560 ignition[940]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Feb 13 20:16:58.327560 ignition[940]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.30.1-arm64.raw"
Feb 13 20:16:58.327560 ignition[940]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.30.1-arm64.raw"
Feb 13 20:16:58.327560 ignition[940]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.30.1-arm64.raw"
Feb 13 20:16:58.327560 ignition[940]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.30.1-arm64.raw: attempt #1
Feb 13 20:16:58.623472 ignition[940]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK
Feb 13 20:16:58.864161 ignition[940]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.30.1-arm64.raw"
Feb 13 20:16:58.864161 ignition[940]: INFO : files: op(b): [started] processing unit "prepare-helm.service"
Feb 13 20:16:58.867846 ignition[940]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Feb 13 20:16:58.867846 ignition[940]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Feb 13 20:16:58.867846 ignition[940]: INFO : files: op(b): [finished] processing unit "prepare-helm.service"
Feb 13 20:16:58.867846 ignition[940]: INFO : files: op(d): [started] processing unit "coreos-metadata.service"
Feb 13 20:16:58.867846 ignition[940]: INFO : files: op(d): op(e): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Feb 13 20:16:58.867846 ignition[940]: INFO : files: op(d): op(e): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Feb 13 20:16:58.867846 ignition[940]: INFO : files: op(d): [finished] processing unit "coreos-metadata.service"
Feb 13 20:16:58.867846 ignition[940]: INFO : files: op(f): [started] setting preset to disabled for "coreos-metadata.service"
Feb 13 20:16:58.897939 ignition[940]: INFO : files: op(f): op(10): [started] removing enablement symlink(s) for "coreos-metadata.service"
Feb 13 20:16:58.902150 ignition[940]: INFO : files: op(f): op(10): [finished] removing enablement symlink(s) for "coreos-metadata.service"
Feb 13 20:16:58.903674 ignition[940]: INFO : files: op(f): [finished] setting preset to disabled for "coreos-metadata.service"
Feb 13 20:16:58.903674 ignition[940]: INFO : files: op(11): [started] setting preset to enabled for "prepare-helm.service"
Feb 13 20:16:58.903674 ignition[940]: INFO : files: op(11): [finished] setting preset to enabled for "prepare-helm.service"
Feb 13 20:16:58.903674 ignition[940]: INFO : files: createResultFile: createFiles: op(12): [started] writing file "/sysroot/etc/.ignition-result.json"
Feb 13 20:16:58.903674 ignition[940]: INFO : files: createResultFile: createFiles: op(12): [finished] writing file "/sysroot/etc/.ignition-result.json"
Feb 13 20:16:58.903674 ignition[940]: INFO : files: files passed
Feb 13 20:16:58.903674 ignition[940]: INFO : Ignition finished successfully
Feb 13 20:16:58.907612 systemd[1]: Finished ignition-files.service - Ignition (files).
Feb 13 20:16:58.917587 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Feb 13 20:16:58.919372 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Feb 13 20:16:58.921751 systemd[1]: ignition-quench.service: Deactivated successfully. Feb 13 20:16:58.921834 systemd[1]: Finished ignition-quench.service - Ignition (record completion). Feb 13 20:16:58.929904 initrd-setup-root-after-ignition[968]: grep: /sysroot/oem/oem-release: No such file or directory Feb 13 20:16:58.933483 initrd-setup-root-after-ignition[970]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Feb 13 20:16:58.933483 initrd-setup-root-after-ignition[970]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Feb 13 20:16:58.937185 initrd-setup-root-after-ignition[974]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Feb 13 20:16:58.936105 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Feb 13 20:16:58.938819 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Feb 13 20:16:58.954596 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Feb 13 20:16:58.974154 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Feb 13 20:16:58.974280 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Feb 13 20:16:58.976757 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Feb 13 20:16:58.978695 systemd[1]: Reached target initrd.target - Initrd Default Target. Feb 13 20:16:58.980553 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Feb 13 20:16:58.981373 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Feb 13 20:16:58.997533 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Feb 13 20:16:59.005641 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Feb 13 20:16:59.014756 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Feb 13 20:16:59.016147 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Feb 13 20:16:59.018176 systemd[1]: Stopped target timers.target - Timer Units. Feb 13 20:16:59.019926 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Feb 13 20:16:59.020049 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Feb 13 20:16:59.022476 systemd[1]: Stopped target initrd.target - Initrd Default Target. Feb 13 20:16:59.024504 systemd[1]: Stopped target basic.target - Basic System. Feb 13 20:16:59.026195 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Feb 13 20:16:59.027873 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Feb 13 20:16:59.029764 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Feb 13 20:16:59.031660 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Feb 13 20:16:59.033771 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Feb 13 20:16:59.035648 systemd[1]: Stopped target sysinit.target - System Initialization. Feb 13 20:16:59.037533 systemd[1]: Stopped target local-fs.target - Local File Systems. Feb 13 20:16:59.039257 systemd[1]: Stopped target swap.target - Swaps. Feb 13 20:16:59.040764 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Feb 13 20:16:59.040892 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Feb 13 20:16:59.043212 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. 
Feb 13 20:16:59.045170 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Feb 13 20:16:59.047065 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Feb 13 20:16:59.048501 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Feb 13 20:16:59.050070 systemd[1]: dracut-initqueue.service: Deactivated successfully. Feb 13 20:16:59.050188 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Feb 13 20:16:59.053031 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Feb 13 20:16:59.053157 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Feb 13 20:16:59.055142 systemd[1]: Stopped target paths.target - Path Units. Feb 13 20:16:59.056669 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Feb 13 20:16:59.057576 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Feb 13 20:16:59.058816 systemd[1]: Stopped target slices.target - Slice Units. Feb 13 20:16:59.060351 systemd[1]: Stopped target sockets.target - Socket Units. Feb 13 20:16:59.062093 systemd[1]: iscsid.socket: Deactivated successfully. Feb 13 20:16:59.062185 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Feb 13 20:16:59.064309 systemd[1]: iscsiuio.socket: Deactivated successfully. Feb 13 20:16:59.064393 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Feb 13 20:16:59.065965 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Feb 13 20:16:59.066074 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Feb 13 20:16:59.067834 systemd[1]: ignition-files.service: Deactivated successfully. Feb 13 20:16:59.067939 systemd[1]: Stopped ignition-files.service - Ignition (files). Feb 13 20:16:59.079596 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Feb 13 20:16:59.081734 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Feb 13 20:16:59.082600 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Feb 13 20:16:59.082741 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Feb 13 20:16:59.084643 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Feb 13 20:16:59.084743 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Feb 13 20:16:59.092247 ignition[995]: INFO : Ignition 2.19.0 Feb 13 20:16:59.092247 ignition[995]: INFO : Stage: umount Feb 13 20:16:59.094778 ignition[995]: INFO : no configs at "/usr/lib/ignition/base.d" Feb 13 20:16:59.094778 ignition[995]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Feb 13 20:16:59.094778 ignition[995]: INFO : umount: umount passed Feb 13 20:16:59.094778 ignition[995]: INFO : Ignition finished successfully Feb 13 20:16:59.094303 systemd[1]: sysroot-boot.mount: Deactivated successfully. Feb 13 20:16:59.094846 systemd[1]: initrd-cleanup.service: Deactivated successfully. Feb 13 20:16:59.094934 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Feb 13 20:16:59.097504 systemd[1]: ignition-mount.service: Deactivated successfully. Feb 13 20:16:59.097607 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Feb 13 20:16:59.101502 systemd[1]: Stopped target network.target - Network. Feb 13 20:16:59.103079 systemd[1]: ignition-disks.service: Deactivated successfully. 
Feb 13 20:16:59.103150 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Feb 13 20:16:59.105549 systemd[1]: ignition-kargs.service: Deactivated successfully. Feb 13 20:16:59.105604 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Feb 13 20:16:59.107323 systemd[1]: ignition-setup.service: Deactivated successfully. Feb 13 20:16:59.107368 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Feb 13 20:16:59.109419 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Feb 13 20:16:59.109476 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Feb 13 20:16:59.113091 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Feb 13 20:16:59.114949 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Feb 13 20:16:59.125335 systemd[1]: systemd-resolved.service: Deactivated successfully. Feb 13 20:16:59.125484 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Feb 13 20:16:59.127480 systemd-networkd[765]: eth0: DHCPv6 lease lost Feb 13 20:16:59.128126 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Feb 13 20:16:59.128185 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Feb 13 20:16:59.129778 systemd[1]: systemd-networkd.service: Deactivated successfully. Feb 13 20:16:59.129876 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Feb 13 20:16:59.131872 systemd[1]: systemd-networkd.socket: Deactivated successfully. Feb 13 20:16:59.131929 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Feb 13 20:16:59.140557 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Feb 13 20:16:59.141451 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Feb 13 20:16:59.141526 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Feb 13 20:16:59.143655 systemd[1]: systemd-sysctl.service: Deactivated successfully. Feb 13 20:16:59.143706 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Feb 13 20:16:59.146871 systemd[1]: systemd-modules-load.service: Deactivated successfully. Feb 13 20:16:59.146926 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Feb 13 20:16:59.149183 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Feb 13 20:16:59.162780 systemd[1]: sysroot-boot.service: Deactivated successfully. Feb 13 20:16:59.162877 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Feb 13 20:16:59.165289 systemd[1]: initrd-setup-root.service: Deactivated successfully. Feb 13 20:16:59.165376 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Feb 13 20:16:59.166778 systemd[1]: systemd-udevd.service: Deactivated successfully. Feb 13 20:16:59.166895 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Feb 13 20:16:59.168802 systemd[1]: network-cleanup.service: Deactivated successfully. Feb 13 20:16:59.168886 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Feb 13 20:16:59.171131 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Feb 13 20:16:59.171195 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Feb 13 20:16:59.172991 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Feb 13 20:16:59.173027 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. 
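
Among the units torn down above is parse-ip-for-networkd.service, which earlier in the initrd converted ip=... kernel command-line arguments into systemd-networkd units. A rough sketch of that idea for the simplest ip=dhcp case only; the real generator also handles the long ip=<client>:<server>:<gateway>:... syntax:

    # Sketch: turn a kernel "ip=dhcp" argument into a .network unit
    # (assumption: only the trivial DHCP form is considered here).
    def network_unit_from_cmdline(cmdline):
        args = dict(a.split("=", 1) for a in cmdline.split() if "=" in a)
        if args.get("ip") == "dhcp":
            return "[Match]\nName=*\n\n[Network]\nDHCP=yes\n"
        return None

    with open("/proc/cmdline") as f:
        unit = network_unit_from_cmdline(f.read())
    if unit:
        with open("/run/systemd/network/10-cmdline.network", "w") as f:
            f.write(unit)
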
Feb 13 20:16:59.175068 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Feb 13 20:16:59.175119 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Feb 13 20:16:59.177686 systemd[1]: dracut-cmdline.service: Deactivated successfully. Feb 13 20:16:59.177738 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Feb 13 20:16:59.180380 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Feb 13 20:16:59.180441 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Feb 13 20:16:59.191589 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Feb 13 20:16:59.192646 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Feb 13 20:16:59.192711 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Feb 13 20:16:59.194812 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully. Feb 13 20:16:59.194861 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Feb 13 20:16:59.196910 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Feb 13 20:16:59.196968 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Feb 13 20:16:59.199128 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Feb 13 20:16:59.199175 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Feb 13 20:16:59.201495 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Feb 13 20:16:59.201579 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Feb 13 20:16:59.203890 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Feb 13 20:16:59.206061 systemd[1]: Starting initrd-switch-root.service - Switch Root... Feb 13 20:16:59.215292 systemd[1]: Switching root. Feb 13 20:16:59.238493 systemd-journald[238]: Journal stopped Feb 13 20:16:59.933624 systemd-journald[238]: Received SIGTERM from PID 1 (systemd). Feb 13 20:16:59.933681 kernel: SELinux: policy capability network_peer_controls=1 Feb 13 20:16:59.933694 kernel: SELinux: policy capability open_perms=1 Feb 13 20:16:59.933704 kernel: SELinux: policy capability extended_socket_class=1 Feb 13 20:16:59.933713 kernel: SELinux: policy capability always_check_network=0 Feb 13 20:16:59.933723 kernel: SELinux: policy capability cgroup_seclabel=1 Feb 13 20:16:59.933732 kernel: SELinux: policy capability nnp_nosuid_transition=1 Feb 13 20:16:59.933742 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Feb 13 20:16:59.933755 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Feb 13 20:16:59.933765 kernel: audit: type=1403 audit(1739477819.375:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 Feb 13 20:16:59.933776 systemd[1]: Successfully loaded SELinux policy in 33.987ms. Feb 13 20:16:59.933795 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 9.856ms. Feb 13 20:16:59.933807 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Feb 13 20:16:59.933823 systemd[1]: Detected virtualization kvm. Feb 13 20:16:59.933841 systemd[1]: Detected architecture arm64. 
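
The hand-off to the real root is visible purely in the timestamps: the initrd journal stops at 20:16:59.238 and the new PID 1 reports its SELinux policy loaded in 33.987ms shortly after. Because every entry carries a microsecond timestamp, phase durations can be computed directly from a captured log; a small sketch assuming the "Feb 13 20:16:59.238493" prefix format used throughout:

    # Sketch: compute deltas between two logged entries from their prefixes.
    # strptime defaults the year to 1900, which cancels out in the subtraction.
    from datetime import datetime

    def ts(line):
        stamp = " ".join(line.split()[:3])        # "Feb 13 20:16:59.238493"
        return datetime.strptime(stamp, "%b %d %H:%M:%S.%f")

    a = ts("Feb 13 20:16:59.238493 systemd-journald[238]: Journal stopped")
    b = ts("Feb 13 20:16:59.933624 systemd-journald[238]: Received SIGTERM from PID 1")
    print((b - a).total_seconds())                # ~0.695s between the two entries
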
Feb 13 20:16:59.933852 systemd[1]: Detected first boot. Feb 13 20:16:59.933863 systemd[1]: Initializing machine ID from VM UUID. Feb 13 20:16:59.933876 zram_generator::config[1038]: No configuration found. Feb 13 20:16:59.933887 systemd[1]: Populated /etc with preset unit settings. Feb 13 20:16:59.933898 systemd[1]: initrd-switch-root.service: Deactivated successfully. Feb 13 20:16:59.933909 systemd[1]: Stopped initrd-switch-root.service - Switch Root. Feb 13 20:16:59.933919 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Feb 13 20:16:59.933934 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Feb 13 20:16:59.933945 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Feb 13 20:16:59.933955 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Feb 13 20:16:59.933971 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Feb 13 20:16:59.933982 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Feb 13 20:16:59.933998 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Feb 13 20:16:59.934008 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Feb 13 20:16:59.934019 systemd[1]: Created slice user.slice - User and Session Slice. Feb 13 20:16:59.934030 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Feb 13 20:16:59.934041 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Feb 13 20:16:59.934052 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Feb 13 20:16:59.934062 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Feb 13 20:16:59.934075 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. Feb 13 20:16:59.934086 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Feb 13 20:16:59.934096 systemd[1]: Expecting device dev-ttyAMA0.device - /dev/ttyAMA0... Feb 13 20:16:59.934107 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Feb 13 20:16:59.934117 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. Feb 13 20:16:59.934127 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. Feb 13 20:16:59.934138 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. Feb 13 20:16:59.934148 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Feb 13 20:16:59.934160 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Feb 13 20:16:59.934173 systemd[1]: Reached target remote-fs.target - Remote File Systems. Feb 13 20:16:59.934184 systemd[1]: Reached target slices.target - Slice Units. Feb 13 20:16:59.934194 systemd[1]: Reached target swap.target - Swaps. Feb 13 20:16:59.934210 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Feb 13 20:16:59.934220 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Feb 13 20:16:59.934232 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Feb 13 20:16:59.934249 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. 
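
"Populated /etc with preset unit settings" is the first-boot application of systemd presets, the same mechanism the Ignition stage used above when it preset coreos-metadata.service to disabled and prepare-helm.service to enabled. Preset files are read in priority order and the first line whose glob matches the unit decides; a minimal sketch of that rule (the default-to-enable fallback is systemd.preset(5)'s documented behavior):

    # Sketch of systemd preset evaluation: first matching line wins.
    import fnmatch

    def preset_action(unit, preset_lines):
        # preset_lines: concatenated *.preset contents, highest priority first.
        for line in preset_lines:
            line = line.strip()
            if not line or line.startswith("#"):
                continue
            action, pattern = line.split(None, 1)
            if fnmatch.fnmatch(unit, pattern):
                return action                    # "enable" or "disable"
        return "enable"                          # documented default when nothing matches

    lines = ["enable prepare-helm.service", "disable coreos-metadata.service"]
    print(preset_action("coreos-metadata.service", lines))   # -> disable
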
Feb 13 20:16:59.934262 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Feb 13 20:16:59.934276 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Feb 13 20:16:59.934293 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Feb 13 20:16:59.934303 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Feb 13 20:16:59.934314 systemd[1]: Mounting media.mount - External Media Directory... Feb 13 20:16:59.934324 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Feb 13 20:16:59.934335 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Feb 13 20:16:59.934345 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... Feb 13 20:16:59.934356 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Feb 13 20:16:59.934366 systemd[1]: Reached target machines.target - Containers. Feb 13 20:16:59.934379 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Feb 13 20:16:59.934390 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Feb 13 20:16:59.934400 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Feb 13 20:16:59.934411 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Feb 13 20:16:59.934430 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Feb 13 20:16:59.934445 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Feb 13 20:16:59.934457 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Feb 13 20:16:59.934467 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Feb 13 20:16:59.934480 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Feb 13 20:16:59.934492 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Feb 13 20:16:59.934502 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Feb 13 20:16:59.934513 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. Feb 13 20:16:59.934523 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Feb 13 20:16:59.934533 systemd[1]: Stopped systemd-fsck-usr.service. Feb 13 20:16:59.934543 kernel: fuse: init (API version 7.39) Feb 13 20:16:59.934552 kernel: loop: module loaded Feb 13 20:16:59.934562 systemd[1]: Starting systemd-journald.service - Journal Service... Feb 13 20:16:59.934574 kernel: ACPI: bus type drm_connector registered Feb 13 20:16:59.934584 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Feb 13 20:16:59.934595 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Feb 13 20:16:59.934605 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Feb 13 20:16:59.934616 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Feb 13 20:16:59.934627 systemd[1]: verity-setup.service: Deactivated successfully. Feb 13 20:16:59.934637 systemd[1]: Stopped verity-setup.service. Feb 13 20:16:59.934647 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. 
Feb 13 20:16:59.934676 systemd-journald[1098]: Collecting audit messages is disabled. Feb 13 20:16:59.934700 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Feb 13 20:16:59.934711 systemd[1]: Mounted media.mount - External Media Directory. Feb 13 20:16:59.934722 systemd-journald[1098]: Journal started Feb 13 20:16:59.934742 systemd-journald[1098]: Runtime Journal (/run/log/journal/8a8af9f6106b400981c296365c9d642b) is 5.9M, max 47.3M, 41.4M free. Feb 13 20:16:59.726491 systemd[1]: Queued start job for default target multi-user.target. Feb 13 20:16:59.745993 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6. Feb 13 20:16:59.746353 systemd[1]: systemd-journald.service: Deactivated successfully. Feb 13 20:16:59.937500 systemd[1]: Started systemd-journald.service - Journal Service. Feb 13 20:16:59.937409 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Feb 13 20:16:59.938642 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Feb 13 20:16:59.939853 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Feb 13 20:16:59.942463 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Feb 13 20:16:59.943959 systemd[1]: modprobe@configfs.service: Deactivated successfully. Feb 13 20:16:59.944094 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Feb 13 20:16:59.945644 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Feb 13 20:16:59.945776 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Feb 13 20:16:59.947283 systemd[1]: modprobe@drm.service: Deactivated successfully. Feb 13 20:16:59.947575 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Feb 13 20:16:59.948868 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Feb 13 20:16:59.949009 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Feb 13 20:16:59.950566 systemd[1]: modprobe@fuse.service: Deactivated successfully. Feb 13 20:16:59.950693 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Feb 13 20:16:59.951986 systemd[1]: modprobe@loop.service: Deactivated successfully. Feb 13 20:16:59.952110 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Feb 13 20:16:59.953573 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Feb 13 20:16:59.955183 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Feb 13 20:16:59.956913 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Feb 13 20:16:59.959806 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Feb 13 20:16:59.970487 systemd[1]: Reached target network-pre.target - Preparation for Network. Feb 13 20:16:59.977515 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Feb 13 20:16:59.979583 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Feb 13 20:16:59.980685 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Feb 13 20:16:59.980723 systemd[1]: Reached target local-fs.target - Local File Systems. Feb 13 20:16:59.982690 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink). Feb 13 20:16:59.984903 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... 
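
The "Runtime Journal ... is 5.9M, max 47.3M, 41.4M free" line is journald's standard sizing report. By default (journald.conf(5)) a journal is capped at 10% of its backing filesystem, so the 47.3M cap implies a /run tmpfs of roughly 473M on this machine:

    # Back out the /run size from journald's default 10% cap (an inference
    # from the documented default, not a figure printed in the log).
    max_use_mib = 47.3
    print(f"implied /run size: ~{max_use_mib / 0.10:.0f}M")   # ~473M
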
Feb 13 20:16:59.987070 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Feb 13 20:16:59.988188 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Feb 13 20:16:59.990009 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Feb 13 20:16:59.992087 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Feb 13 20:16:59.993371 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Feb 13 20:16:59.995559 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Feb 13 20:16:59.996716 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Feb 13 20:16:59.999640 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Feb 13 20:17:00.000347 systemd-journald[1098]: Time spent on flushing to /var/log/journal/8a8af9f6106b400981c296365c9d642b is 14.481ms for 854 entries. Feb 13 20:17:00.000347 systemd-journald[1098]: System Journal (/var/log/journal/8a8af9f6106b400981c296365c9d642b) is 8.0M, max 195.6M, 187.6M free. Feb 13 20:17:00.022500 systemd-journald[1098]: Received client request to flush runtime journal. Feb 13 20:17:00.002766 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Feb 13 20:17:00.005528 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Feb 13 20:17:00.008314 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Feb 13 20:17:00.009724 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Feb 13 20:17:00.011728 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Feb 13 20:17:00.013397 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Feb 13 20:17:00.018199 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Feb 13 20:17:00.020409 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Feb 13 20:17:00.025476 kernel: loop0: detected capacity change from 0 to 114328 Feb 13 20:17:00.033685 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk... Feb 13 20:17:00.037670 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Feb 13 20:17:00.049548 systemd-tmpfiles[1150]: ACLs are not supported, ignoring. Feb 13 20:17:00.049564 systemd-tmpfiles[1150]: ACLs are not supported, ignoring. Feb 13 20:17:00.051609 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization... Feb 13 20:17:00.055470 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Feb 13 20:17:00.057193 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Feb 13 20:17:00.059118 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Feb 13 20:17:00.064454 kernel: loop1: detected capacity change from 0 to 114432 Feb 13 20:17:00.066209 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Feb 13 20:17:00.066831 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk. Feb 13 20:17:00.074703 systemd[1]: Starting systemd-sysusers.service - Create System Users... 
Feb 13 20:17:00.076107 udevadm[1163]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation.service, lvm2-activation-early.service not to pull it in. Feb 13 20:17:00.094717 systemd[1]: Finished systemd-sysusers.service - Create System Users. Feb 13 20:17:00.101484 kernel: loop2: detected capacity change from 0 to 194096 Feb 13 20:17:00.103616 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Feb 13 20:17:00.115412 systemd-tmpfiles[1173]: ACLs are not supported, ignoring. Feb 13 20:17:00.115782 systemd-tmpfiles[1173]: ACLs are not supported, ignoring. Feb 13 20:17:00.119926 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Feb 13 20:17:00.157559 kernel: loop3: detected capacity change from 0 to 114328 Feb 13 20:17:00.162576 kernel: loop4: detected capacity change from 0 to 114432 Feb 13 20:17:00.167476 kernel: loop5: detected capacity change from 0 to 194096 Feb 13 20:17:00.171993 (sd-merge)[1178]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes'. Feb 13 20:17:00.172397 (sd-merge)[1178]: Merged extensions into '/usr'. Feb 13 20:17:00.175508 systemd[1]: Reloading requested from client PID 1149 ('systemd-sysext') (unit systemd-sysext.service)... Feb 13 20:17:00.175524 systemd[1]: Reloading... Feb 13 20:17:00.227473 zram_generator::config[1202]: No configuration found. Feb 13 20:17:00.280481 ldconfig[1144]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Feb 13 20:17:00.323686 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Feb 13 20:17:00.359745 systemd[1]: Reloading finished in 183 ms. Feb 13 20:17:00.390341 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Feb 13 20:17:00.391932 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Feb 13 20:17:00.404585 systemd[1]: Starting ensure-sysext.service... Feb 13 20:17:00.406778 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Feb 13 20:17:00.414363 systemd[1]: Reloading requested from client PID 1239 ('systemctl') (unit ensure-sysext.service)... Feb 13 20:17:00.414376 systemd[1]: Reloading... Feb 13 20:17:00.422774 systemd-tmpfiles[1240]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Feb 13 20:17:00.423030 systemd-tmpfiles[1240]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Feb 13 20:17:00.423689 systemd-tmpfiles[1240]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Feb 13 20:17:00.423907 systemd-tmpfiles[1240]: ACLs are not supported, ignoring. Feb 13 20:17:00.423958 systemd-tmpfiles[1240]: ACLs are not supported, ignoring. Feb 13 20:17:00.426119 systemd-tmpfiles[1240]: Detected autofs mount point /boot during canonicalization of boot. Feb 13 20:17:00.426131 systemd-tmpfiles[1240]: Skipping /boot Feb 13 20:17:00.433186 systemd-tmpfiles[1240]: Detected autofs mount point /boot during canonicalization of boot. Feb 13 20:17:00.433201 systemd-tmpfiles[1240]: Skipping /boot Feb 13 20:17:00.458443 zram_generator::config[1267]: No configuration found. 
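
The (sd-merge) lines show systemd-sysext merging the three extension images discovered via /etc/extensions, including the kubernetes.raw link written by Ignition above, into /usr. Conceptually the merge is a read-only overlayfs with one lower layer per extension stacked over the original /usr; a sketch of the equivalent mount, with hypothetical /run/sysext/* mount points standing in for the loop-mounted images:

    # Sketch of "Merged extensions into '/usr'": a read-only overlay whose
    # lower layers are the extension images plus the base /usr. In overlayfs
    # the leftmost lowerdir has the highest priority.
    import subprocess

    layers = [
        "/run/sysext/kubernetes/usr",          # hypothetical mount points
        "/run/sysext/docker-flatcar/usr",
        "/run/sysext/containerd-flatcar/usr",
        "/usr",                                # base layer, lowest priority
    ]
    subprocess.run(
        ["mount", "-t", "overlay", "overlay",
         "-o", "ro,lowerdir=" + ":".join(layers), "/usr"],
        check=True)
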
Feb 13 20:17:00.541215 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Feb 13 20:17:00.576978 systemd[1]: Reloading finished in 162 ms. Feb 13 20:17:00.591407 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Feb 13 20:17:00.603935 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Feb 13 20:17:00.611382 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Feb 13 20:17:00.614054 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Feb 13 20:17:00.616729 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Feb 13 20:17:00.623853 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Feb 13 20:17:00.626804 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Feb 13 20:17:00.630630 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Feb 13 20:17:00.634035 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Feb 13 20:17:00.638772 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Feb 13 20:17:00.640886 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Feb 13 20:17:00.646717 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Feb 13 20:17:00.647852 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Feb 13 20:17:00.649890 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Feb 13 20:17:00.651703 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Feb 13 20:17:00.651827 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Feb 13 20:17:00.653373 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Feb 13 20:17:00.653516 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Feb 13 20:17:00.655199 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Feb 13 20:17:00.658999 systemd[1]: modprobe@loop.service: Deactivated successfully. Feb 13 20:17:00.659132 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Feb 13 20:17:00.667149 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Feb 13 20:17:00.675030 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Feb 13 20:17:00.676566 systemd-udevd[1314]: Using default interface naming scheme 'v255'. Feb 13 20:17:00.679815 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Feb 13 20:17:00.682711 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Feb 13 20:17:00.685559 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Feb 13 20:17:00.690609 systemd[1]: Starting systemd-update-done.service - Update is Completed... Feb 13 20:17:00.692552 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Feb 13 20:17:00.694316 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. 
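
The "Duplicate line for path ..., ignoring" warnings during the tmpfiles run above are systemd-tmpfiles' normal conflict handling: when the same path appears in more than one tmpfiles.d fragment, the first line seen wins and later ones are dropped with a warning. A sketch of that rule:

    # Sketch of tmpfiles.d deduplication: fragments are read in priority
    # order and only the first line for a given path is kept.
    def dedup(fragments):
        seen, kept = set(), []
        for frag in fragments:
            for line in frag:
                path = line.split()[1]           # field 2 of a tmpfiles line
                if path in seen:
                    print(f'Duplicate line for path "{path}", ignoring.')
                    continue
                seen.add(path)
                kept.append(line)
        return kept

    dedup([["d /root 0700 root root -"], ["d /root 0755 root root -"]])
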
Feb 13 20:17:00.694505 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Feb 13 20:17:00.698119 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Feb 13 20:17:00.699905 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Feb 13 20:17:00.700029 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Feb 13 20:17:00.701949 systemd[1]: modprobe@loop.service: Deactivated successfully. Feb 13 20:17:00.702076 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Feb 13 20:17:00.703548 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Feb 13 20:17:00.711513 systemd[1]: Finished systemd-update-done.service - Update is Completed. Feb 13 20:17:00.722595 systemd[1]: Started systemd-userdbd.service - User Database Manager. Feb 13 20:17:00.724009 systemd[1]: Finished ensure-sysext.service. Feb 13 20:17:00.727118 augenrules[1357]: No rules Feb 13 20:17:00.728486 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Feb 13 20:17:00.736689 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Feb 13 20:17:00.746665 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Feb 13 20:17:00.751616 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Feb 13 20:17:00.758340 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Feb 13 20:17:00.762911 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Feb 13 20:17:00.764448 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 40 scanned by (udev-worker) (1369) Feb 13 20:17:00.766644 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Feb 13 20:17:00.770630 systemd[1]: Starting systemd-networkd.service - Network Configuration... Feb 13 20:17:00.774969 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization... Feb 13 20:17:00.777327 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Feb 13 20:17:00.777773 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Feb 13 20:17:00.777928 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Feb 13 20:17:00.781332 systemd[1]: Condition check resulted in dev-ttyAMA0.device - /dev/ttyAMA0 being skipped. Feb 13 20:17:00.790216 systemd[1]: modprobe@loop.service: Deactivated successfully. Feb 13 20:17:00.790393 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Feb 13 20:17:00.793372 systemd[1]: modprobe@drm.service: Deactivated successfully. Feb 13 20:17:00.794615 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Feb 13 20:17:00.796306 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Feb 13 20:17:00.797105 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Feb 13 20:17:00.806031 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Feb 13 20:17:00.806103 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. 
Feb 13 20:17:00.853839 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Feb 13 20:17:00.856938 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Feb 13 20:17:00.860947 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Feb 13 20:17:00.862349 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization. Feb 13 20:17:00.866290 systemd-resolved[1307]: Positive Trust Anchors: Feb 13 20:17:00.866301 systemd-resolved[1307]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Feb 13 20:17:00.866333 systemd-resolved[1307]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Feb 13 20:17:00.869134 systemd[1]: Reached target time-set.target - System Time Set. Feb 13 20:17:00.881049 systemd-resolved[1307]: Defaulting to hostname 'linux'. Feb 13 20:17:00.888407 systemd-networkd[1379]: lo: Link UP Feb 13 20:17:00.888414 systemd-networkd[1379]: lo: Gained carrier Feb 13 20:17:00.889286 systemd-networkd[1379]: Enumeration completed Feb 13 20:17:00.889508 systemd[1]: Started systemd-networkd.service - Network Configuration. Feb 13 20:17:00.889911 systemd-networkd[1379]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Feb 13 20:17:00.889915 systemd-networkd[1379]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Feb 13 20:17:00.890599 systemd-networkd[1379]: eth0: Link UP Feb 13 20:17:00.890602 systemd-networkd[1379]: eth0: Gained carrier Feb 13 20:17:00.890615 systemd-networkd[1379]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Feb 13 20:17:00.890895 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Feb 13 20:17:00.892355 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization. Feb 13 20:17:00.894766 systemd[1]: Reached target network.target - Network. Feb 13 20:17:00.895933 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Feb 13 20:17:00.906634 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes... Feb 13 20:17:00.910875 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Feb 13 20:17:00.912488 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Feb 13 20:17:00.923483 systemd-networkd[1379]: eth0: DHCPv4 address 10.0.0.10/16, gateway 10.0.0.1 acquired from 10.0.0.1 Feb 13 20:17:00.924184 systemd-timesyncd[1381]: Network configuration changed, trying to establish connection. Feb 13 20:17:00.925921 systemd-timesyncd[1381]: Contacted time server 10.0.0.1:123 (10.0.0.1). Feb 13 20:17:00.925985 systemd-timesyncd[1381]: Initial clock synchronization to Thu 2025-02-13 20:17:01.169318 UTC. 
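
The "Positive Trust Anchors" entry is systemd-resolved's built-in DNSSEC root anchor: a DS record for the root zone with key tag 20326 (the root KSK-2017), algorithm 8 (RSA/SHA-256) and digest type 2 (a 32-byte SHA-256 digest). Splitting the logged record into those fields:

    # Decompose the DS record exactly as logged by systemd-resolved.
    ds = (". IN DS 20326 8 2 "
          "e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d")
    owner, _cls, _rtype, key_tag, algorithm, digest_type, digest = ds.split()
    assert owner == "."                       # anchors the root zone itself
    print(key_tag, algorithm, digest_type, len(digest) // 2, "byte digest")
    # -> 20326 8 2 32 byte digest
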
Feb 13 20:17:00.930026 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Feb 13 20:17:00.933603 lvm[1399]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Feb 13 20:17:00.975961 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes. Feb 13 20:17:00.979523 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Feb 13 20:17:00.980640 systemd[1]: Reached target sysinit.target - System Initialization. Feb 13 20:17:00.981777 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Feb 13 20:17:00.983014 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Feb 13 20:17:00.984614 systemd[1]: Started logrotate.timer - Daily rotation of log files. Feb 13 20:17:00.985750 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Feb 13 20:17:00.986965 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Feb 13 20:17:00.988203 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Feb 13 20:17:00.988252 systemd[1]: Reached target paths.target - Path Units. Feb 13 20:17:00.989162 systemd[1]: Reached target timers.target - Timer Units. Feb 13 20:17:00.990918 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Feb 13 20:17:00.993458 systemd[1]: Starting docker.socket - Docker Socket for the API... Feb 13 20:17:01.001499 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Feb 13 20:17:01.003823 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes... Feb 13 20:17:01.005479 systemd[1]: Listening on docker.socket - Docker Socket for the API. Feb 13 20:17:01.006684 systemd[1]: Reached target sockets.target - Socket Units. Feb 13 20:17:01.007686 systemd[1]: Reached target basic.target - Basic System. Feb 13 20:17:01.008705 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Feb 13 20:17:01.008739 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Feb 13 20:17:01.009663 systemd[1]: Starting containerd.service - containerd container runtime... Feb 13 20:17:01.011735 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Feb 13 20:17:01.014615 lvm[1408]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Feb 13 20:17:01.014982 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Feb 13 20:17:01.017668 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Feb 13 20:17:01.019043 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Feb 13 20:17:01.020634 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Feb 13 20:17:01.023564 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Feb 13 20:17:01.026615 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Feb 13 20:17:01.027718 jq[1411]: false Feb 13 20:17:01.030658 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Feb 13 20:17:01.037689 systemd[1]: Starting systemd-logind.service - User Login Management... 
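
Several of the sockets brought up here (dbus.socket, docker.socket, sshd.socket) use systemd socket activation: systemd binds the listening socket itself and hands it to the started service as file descriptor 3 and up, advertised via the LISTEN_PID and LISTEN_FDS environment variables. A minimal sketch of the receiving side of that convention (sd_listen_fds(3)):

    # Sketch of the socket-activation protocol: inherited sockets begin at
    # fd 3, and LISTEN_PID/LISTEN_FDS say whether they are meant for us.
    import os
    import socket

    SD_LISTEN_FDS_START = 3

    def listen_fds():
        if os.environ.get("LISTEN_PID") != str(os.getpid()):
            return []                        # fds were not meant for this process
        n = int(os.environ.get("LISTEN_FDS", "0"))
        return [socket.socket(fileno=SD_LISTEN_FDS_START + i) for i in range(n)]
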
Feb 13 20:17:01.040241 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Feb 13 20:17:01.040986 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Feb 13 20:17:01.043139 systemd[1]: Starting update-engine.service - Update Engine... Feb 13 20:17:01.044028 extend-filesystems[1412]: Found loop3 Feb 13 20:17:01.047376 extend-filesystems[1412]: Found loop4 Feb 13 20:17:01.047376 extend-filesystems[1412]: Found loop5 Feb 13 20:17:01.047376 extend-filesystems[1412]: Found vda Feb 13 20:17:01.047376 extend-filesystems[1412]: Found vda1 Feb 13 20:17:01.047376 extend-filesystems[1412]: Found vda2 Feb 13 20:17:01.047376 extend-filesystems[1412]: Found vda3 Feb 13 20:17:01.047376 extend-filesystems[1412]: Found usr Feb 13 20:17:01.047376 extend-filesystems[1412]: Found vda4 Feb 13 20:17:01.047376 extend-filesystems[1412]: Found vda6 Feb 13 20:17:01.047376 extend-filesystems[1412]: Found vda7 Feb 13 20:17:01.047376 extend-filesystems[1412]: Found vda9 Feb 13 20:17:01.047376 extend-filesystems[1412]: Checking size of /dev/vda9 Feb 13 20:17:01.063592 dbus-daemon[1410]: [system] SELinux support is enabled Feb 13 20:17:01.070706 extend-filesystems[1412]: Resized partition /dev/vda9 Feb 13 20:17:01.048410 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Feb 13 20:17:01.051203 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes. Feb 13 20:17:01.072776 jq[1425]: true Feb 13 20:17:01.054433 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Feb 13 20:17:01.054657 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Feb 13 20:17:01.054928 systemd[1]: motdgen.service: Deactivated successfully. Feb 13 20:17:01.055066 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Feb 13 20:17:01.063722 systemd[1]: Started dbus.service - D-Bus System Message Bus. Feb 13 20:17:01.067226 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Feb 13 20:17:01.067426 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Feb 13 20:17:01.079655 extend-filesystems[1435]: resize2fs 1.47.1 (20-May-2024) Feb 13 20:17:01.080931 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Feb 13 20:17:01.080966 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Feb 13 20:17:01.081047 (ntainerd)[1437]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Feb 13 20:17:01.085478 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 40 scanned by (udev-worker) (1365) Feb 13 20:17:01.085521 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 1864699 blocks Feb 13 20:17:01.087633 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Feb 13 20:17:01.087666 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. 
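
The kernel message just above records the online ext4 resize that extend-filesystems triggers after growing the /dev/vda9 partition: 553472 to 1864699 blocks at 4k each (the resize2fs summary follows below). The arithmetic:

    # Block counts from the EXT4-fs message above, at 4096 bytes per block.
    block = 4096
    before = 553472 * block / 2**30
    after = 1864699 * block / 2**30
    print(f"{before:.2f} GiB -> {after:.2f} GiB")   # 2.11 GiB -> 7.11 GiB
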
Feb 13 20:17:01.101388 tar[1431]: linux-arm64/helm Feb 13 20:17:01.105651 jq[1436]: true Feb 13 20:17:01.153542 update_engine[1424]: I20250213 20:17:01.152946 1424 main.cc:92] Flatcar Update Engine starting Feb 13 20:17:01.166349 systemd[1]: Started update-engine.service - Update Engine. Feb 13 20:17:01.166633 update_engine[1424]: I20250213 20:17:01.166353 1424 update_check_scheduler.cc:74] Next update check in 6m49s Feb 13 20:17:01.183054 systemd[1]: Started locksmithd.service - Cluster reboot manager. Feb 13 20:17:01.184162 systemd-logind[1420]: Watching system buttons on /dev/input/event0 (Power Button) Feb 13 20:17:01.184576 systemd-logind[1420]: New seat seat0. Feb 13 20:17:01.185299 systemd[1]: Started systemd-logind.service - User Login Management. Feb 13 20:17:01.200503 kernel: EXT4-fs (vda9): resized filesystem to 1864699 Feb 13 20:17:01.240048 extend-filesystems[1435]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required Feb 13 20:17:01.240048 extend-filesystems[1435]: old_desc_blocks = 1, new_desc_blocks = 1 Feb 13 20:17:01.240048 extend-filesystems[1435]: The filesystem on /dev/vda9 is now 1864699 (4k) blocks long. Feb 13 20:17:01.246431 extend-filesystems[1412]: Resized filesystem in /dev/vda9 Feb 13 20:17:01.248838 systemd[1]: extend-filesystems.service: Deactivated successfully. Feb 13 20:17:01.251531 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Feb 13 20:17:01.256111 bash[1464]: Updated "/home/core/.ssh/authorized_keys" Feb 13 20:17:01.263180 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Feb 13 20:17:01.266427 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met. Feb 13 20:17:01.272994 locksmithd[1465]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Feb 13 20:17:01.396580 containerd[1437]: time="2025-02-13T20:17:01.396393562Z" level=info msg="starting containerd" revision=174e0d1785eeda18dc2beba45e1d5a188771636b version=v1.7.21 Feb 13 20:17:01.426880 containerd[1437]: time="2025-02-13T20:17:01.426464824Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Feb 13 20:17:01.428180 containerd[1437]: time="2025-02-13T20:17:01.428143240Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.74-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Feb 13 20:17:01.428686 containerd[1437]: time="2025-02-13T20:17:01.428314870Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Feb 13 20:17:01.428686 containerd[1437]: time="2025-02-13T20:17:01.428342898Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Feb 13 20:17:01.428686 containerd[1437]: time="2025-02-13T20:17:01.428534105Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1 Feb 13 20:17:01.428686 containerd[1437]: time="2025-02-13T20:17:01.428553766Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1 Feb 13 20:17:01.428686 containerd[1437]: time="2025-02-13T20:17:01.428606937Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." 
error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1 Feb 13 20:17:01.428686 containerd[1437]: time="2025-02-13T20:17:01.428621363Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Feb 13 20:17:01.429188 containerd[1437]: time="2025-02-13T20:17:01.429161806Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Feb 13 20:17:01.429331 containerd[1437]: time="2025-02-13T20:17:01.429313115Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Feb 13 20:17:01.429801 containerd[1437]: time="2025-02-13T20:17:01.429381495Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1 Feb 13 20:17:01.429801 containerd[1437]: time="2025-02-13T20:17:01.429398106Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Feb 13 20:17:01.429801 containerd[1437]: time="2025-02-13T20:17:01.429526828Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Feb 13 20:17:01.429801 containerd[1437]: time="2025-02-13T20:17:01.429743096Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Feb 13 20:17:01.430226 containerd[1437]: time="2025-02-13T20:17:01.430201475Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Feb 13 20:17:01.430364 containerd[1437]: time="2025-02-13T20:17:01.430345695Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Feb 13 20:17:01.430584 containerd[1437]: time="2025-02-13T20:17:01.430562622Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 Feb 13 20:17:01.430817 containerd[1437]: time="2025-02-13T20:17:01.430741959Z" level=info msg="metadata content store policy set" policy=shared Feb 13 20:17:01.434408 containerd[1437]: time="2025-02-13T20:17:01.434375071Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Feb 13 20:17:01.434686 containerd[1437]: time="2025-02-13T20:17:01.434584209Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Feb 13 20:17:01.434686 containerd[1437]: time="2025-02-13T20:17:01.434617512Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1 Feb 13 20:17:01.435015 containerd[1437]: time="2025-02-13T20:17:01.434803898Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1 Feb 13 20:17:01.435015 containerd[1437]: time="2025-02-13T20:17:01.434830689Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Feb 13 20:17:01.435015 containerd[1437]: time="2025-02-13T20:17:01.434960153Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." 
type=io.containerd.monitor.v1 Feb 13 20:17:01.435551 containerd[1437]: time="2025-02-13T20:17:01.435526811Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Feb 13 20:17:01.436444 containerd[1437]: time="2025-02-13T20:17:01.435774651Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2 Feb 13 20:17:01.436444 containerd[1437]: time="2025-02-13T20:17:01.435799711Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1 Feb 13 20:17:01.436444 containerd[1437]: time="2025-02-13T20:17:01.435814714Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1 Feb 13 20:17:01.436444 containerd[1437]: time="2025-02-13T20:17:01.435829594Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Feb 13 20:17:01.436444 containerd[1437]: time="2025-02-13T20:17:01.435844927Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Feb 13 20:17:01.436444 containerd[1437]: time="2025-02-13T20:17:01.435858034Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Feb 13 20:17:01.436444 containerd[1437]: time="2025-02-13T20:17:01.435872543Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Feb 13 20:17:01.436444 containerd[1437]: time="2025-02-13T20:17:01.435886433Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Feb 13 20:17:01.436444 containerd[1437]: time="2025-02-13T20:17:01.435904651Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Feb 13 20:17:01.436444 containerd[1437]: time="2025-02-13T20:17:01.435918253Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Feb 13 20:17:01.436444 containerd[1437]: time="2025-02-13T20:17:01.435940057Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Feb 13 20:17:01.436444 containerd[1437]: time="2025-02-13T20:17:01.435968909Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Feb 13 20:17:01.436444 containerd[1437]: time="2025-02-13T20:17:01.435983912Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Feb 13 20:17:01.436444 containerd[1437]: time="2025-02-13T20:17:01.435997102Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Feb 13 20:17:01.436754 containerd[1437]: time="2025-02-13T20:17:01.436009508Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Feb 13 20:17:01.436754 containerd[1437]: time="2025-02-13T20:17:01.436022863Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Feb 13 20:17:01.436754 containerd[1437]: time="2025-02-13T20:17:01.436036217Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Feb 13 20:17:01.436754 containerd[1437]: time="2025-02-13T20:17:01.436048788Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." 
type=io.containerd.grpc.v1 Feb 13 20:17:01.436754 containerd[1437]: time="2025-02-13T20:17:01.436062266Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Feb 13 20:17:01.436754 containerd[1437]: time="2025-02-13T20:17:01.436078424Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1 Feb 13 20:17:01.436754 containerd[1437]: time="2025-02-13T20:17:01.436096518Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1 Feb 13 20:17:01.436754 containerd[1437]: time="2025-02-13T20:17:01.436121949Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Feb 13 20:17:01.436754 containerd[1437]: time="2025-02-13T20:17:01.436137900Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1 Feb 13 20:17:01.436754 containerd[1437]: time="2025-02-13T20:17:01.436151173Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Feb 13 20:17:01.436754 containerd[1437]: time="2025-02-13T20:17:01.436167948Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1 Feb 13 20:17:01.436754 containerd[1437]: time="2025-02-13T20:17:01.436191730Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1 Feb 13 20:17:01.436754 containerd[1437]: time="2025-02-13T20:17:01.436215925Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Feb 13 20:17:01.436754 containerd[1437]: time="2025-02-13T20:17:01.436228991Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Feb 13 20:17:01.437648 containerd[1437]: time="2025-02-13T20:17:01.437617772Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Feb 13 20:17:01.437761 containerd[1437]: time="2025-02-13T20:17:01.437734706Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1 Feb 13 20:17:01.437817 containerd[1437]: time="2025-02-13T20:17:01.437804116Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Feb 13 20:17:01.437897 containerd[1437]: time="2025-02-13T20:17:01.437879626Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1 Feb 13 20:17:01.437950 containerd[1437]: time="2025-02-13T20:17:01.437937042Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Feb 13 20:17:01.438002 containerd[1437]: time="2025-02-13T20:17:01.437989430Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1 Feb 13 20:17:01.438051 containerd[1437]: time="2025-02-13T20:17:01.438038602Z" level=info msg="NRI interface is disabled by configuration." Feb 13 20:17:01.438102 containerd[1437]: time="2025-02-13T20:17:01.438090371Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." 
type=io.containerd.grpc.v1 Feb 13 20:17:01.440360 containerd[1437]: time="2025-02-13T20:17:01.440254988Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Feb 13 20:17:01.440728 containerd[1437]: time="2025-02-13T20:17:01.440629201Z" level=info msg="Connect containerd service" Feb 13 20:17:01.440728 containerd[1437]: time="2025-02-13T20:17:01.440684144Z" level=info msg="using legacy CRI server" Feb 13 20:17:01.440728 containerd[1437]: time="2025-02-13T20:17:01.440693665Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Feb 13 20:17:01.441434 containerd[1437]: time="2025-02-13T20:17:01.440900700Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Feb 13 20:17:01.441720 containerd[1437]: time="2025-02-13T20:17:01.441690757Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Feb 13 20:17:01.442098 
containerd[1437]: time="2025-02-13T20:17:01.442024618Z" level=info msg="Start subscribing containerd event" Feb 13 20:17:01.442153 containerd[1437]: time="2025-02-13T20:17:01.442104662Z" level=info msg="Start recovering state" Feb 13 20:17:01.442187 containerd[1437]: time="2025-02-13T20:17:01.442169868Z" level=info msg="Start event monitor" Feb 13 20:17:01.442212 containerd[1437]: time="2025-02-13T20:17:01.442185696Z" level=info msg="Start snapshots syncer" Feb 13 20:17:01.442212 containerd[1437]: time="2025-02-13T20:17:01.442195588Z" level=info msg="Start cni network conf syncer for default" Feb 13 20:17:01.442212 containerd[1437]: time="2025-02-13T20:17:01.442203337Z" level=info msg="Start streaming server" Feb 13 20:17:01.442612 containerd[1437]: time="2025-02-13T20:17:01.442588431Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Feb 13 20:17:01.442736 containerd[1437]: time="2025-02-13T20:17:01.442719833Z" level=info msg=serving... address=/run/containerd/containerd.sock Feb 13 20:17:01.442949 systemd[1]: Started containerd.service - containerd container runtime. Feb 13 20:17:01.444574 containerd[1437]: time="2025-02-13T20:17:01.444551000Z" level=info msg="containerd successfully booted in 0.049045s" Feb 13 20:17:01.515882 tar[1431]: linux-arm64/LICENSE Feb 13 20:17:01.516083 tar[1431]: linux-arm64/README.md Feb 13 20:17:01.526973 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Feb 13 20:17:02.364218 sshd_keygen[1434]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Feb 13 20:17:02.383309 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Feb 13 20:17:02.400736 systemd[1]: Starting issuegen.service - Generate /run/issue... Feb 13 20:17:02.406663 systemd[1]: issuegen.service: Deactivated successfully. Feb 13 20:17:02.408476 systemd[1]: Finished issuegen.service - Generate /run/issue. Feb 13 20:17:02.411792 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Feb 13 20:17:02.423921 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Feb 13 20:17:02.435718 systemd[1]: Started getty@tty1.service - Getty on tty1. Feb 13 20:17:02.438016 systemd[1]: Started serial-getty@ttyAMA0.service - Serial Getty on ttyAMA0. Feb 13 20:17:02.439380 systemd[1]: Reached target getty.target - Login Prompts. Feb 13 20:17:02.587944 systemd-networkd[1379]: eth0: Gained IPv6LL Feb 13 20:17:02.591548 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Feb 13 20:17:02.593324 systemd[1]: Reached target network-online.target - Network is Online. Feb 13 20:17:02.605867 systemd[1]: Starting coreos-metadata.service - QEMU metadata agent... Feb 13 20:17:02.608322 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Feb 13 20:17:02.610488 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Feb 13 20:17:02.625527 systemd[1]: coreos-metadata.service: Deactivated successfully. Feb 13 20:17:02.625757 systemd[1]: Finished coreos-metadata.service - QEMU metadata agent. Feb 13 20:17:02.627477 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Feb 13 20:17:02.627834 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Feb 13 20:17:03.113223 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Feb 13 20:17:03.114840 systemd[1]: Reached target multi-user.target - Multi-User System. 
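
[annotation] The "failed to load cni during init" error logged above is expected on a first boot: containerd's CRI plugin found nothing in /etc/cni/net.d (the NetworkPluginConfDir from the config dump above) and will retry once a conflist appears. A minimal sketch of dropping one in, assuming a plain bridge network; the network name, filename, and subnet are illustrative, not taken from this log:

#!/usr/bin/env python3
# Sketch: write a minimal CNI bridge conflist so the CRI plugin can
# initialize pod networking. Name, filename, and subnet are assumptions.
import json
import os

conf = {
    "cniVersion": "0.4.0",
    "name": "example-net",  # hypothetical network name
    "plugins": [
        {
            "type": "bridge",
            "bridge": "cni0",
            "isGateway": True,
            "ipMasq": True,
            "ipam": {
                "type": "host-local",
                "subnet": "10.244.0.0/16",  # assumed pod CIDR
                "routes": [{"dst": "0.0.0.0/0"}],
            },
        },
        {"type": "portmap", "capabilities": {"portMappings": True}},
    ],
}

os.makedirs("/etc/cni/net.d", exist_ok=True)
with open("/etc/cni/net.d/10-example.conflist", "w") as f:
    json.dump(conf, f, indent=2)
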
Feb 13 20:17:03.117064 (kubelet)[1523]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Feb 13 20:17:03.119828 systemd[1]: Startup finished in 562ms (kernel) + 4.671s (initrd) + 3.778s (userspace) = 9.012s. Feb 13 20:17:03.586384 kubelet[1523]: E0213 20:17:03.586285 1523 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Feb 13 20:17:03.589139 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Feb 13 20:17:03.589299 systemd[1]: kubelet.service: Failed with result 'exit-code'. Feb 13 20:17:07.959108 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Feb 13 20:17:07.960241 systemd[1]: Started sshd@0-10.0.0.10:22-10.0.0.1:44106.service - OpenSSH per-connection server daemon (10.0.0.1:44106). Feb 13 20:17:08.015509 sshd[1537]: Accepted publickey for core from 10.0.0.1 port 44106 ssh2: RSA SHA256:LaFXDQ5kJ4LcD/Es1CLb+6ve3k3017804ZdZMiXaUQg Feb 13 20:17:08.019060 sshd[1537]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 20:17:08.028115 systemd-logind[1420]: New session 1 of user core. Feb 13 20:17:08.029161 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Feb 13 20:17:08.036679 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Feb 13 20:17:08.047478 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Feb 13 20:17:08.049756 systemd[1]: Starting user@500.service - User Manager for UID 500... Feb 13 20:17:08.056743 (systemd)[1541]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Feb 13 20:17:08.137858 systemd[1541]: Queued start job for default target default.target. Feb 13 20:17:08.148490 systemd[1541]: Created slice app.slice - User Application Slice. Feb 13 20:17:08.148534 systemd[1541]: Reached target paths.target - Paths. Feb 13 20:17:08.148547 systemd[1541]: Reached target timers.target - Timers. Feb 13 20:17:08.149870 systemd[1541]: Starting dbus.socket - D-Bus User Message Bus Socket... Feb 13 20:17:08.159780 systemd[1541]: Listening on dbus.socket - D-Bus User Message Bus Socket. Feb 13 20:17:08.159844 systemd[1541]: Reached target sockets.target - Sockets. Feb 13 20:17:08.159867 systemd[1541]: Reached target basic.target - Basic System. Feb 13 20:17:08.159907 systemd[1541]: Reached target default.target - Main User Target. Feb 13 20:17:08.159933 systemd[1541]: Startup finished in 97ms. Feb 13 20:17:08.160174 systemd[1]: Started user@500.service - User Manager for UID 500. Feb 13 20:17:08.161907 systemd[1]: Started session-1.scope - Session 1 of User core. Feb 13 20:17:08.223943 systemd[1]: Started sshd@1-10.0.0.10:22-10.0.0.1:44110.service - OpenSSH per-connection server daemon (10.0.0.1:44110). Feb 13 20:17:08.261178 sshd[1552]: Accepted publickey for core from 10.0.0.1 port 44110 ssh2: RSA SHA256:LaFXDQ5kJ4LcD/Es1CLb+6ve3k3017804ZdZMiXaUQg Feb 13 20:17:08.262533 sshd[1552]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 20:17:08.268144 systemd-logind[1420]: New session 2 of user core. Feb 13 20:17:08.284678 systemd[1]: Started session-2.scope - Session 2 of User core. 
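
[annotation] The kubelet exit above is the standard pre-bootstrap failure: /var/lib/kubelet/config.yaml only exists after the node has been initialized, so the unit exits 1 and systemd keeps rescheduling it (the restart-counter lines later in this log). A sketch of the same pre-flight check; the path comes from the error message above, everything else is illustrative:

#!/usr/bin/env python3
# Sketch: reproduce the check that fails in the run.go:74 error above.
# The config path is taken from the log; the file appears only after
# node bootstrap (e.g. kubeadm) writes it.
import os
import sys

KUBELET_CONFIG = "/var/lib/kubelet/config.yaml"

if not os.path.isfile(KUBELET_CONFIG):
    sys.exit(f"kubelet config missing: {KUBELET_CONFIG} (node not bootstrapped yet)")
print("kubelet config present, unit would proceed")
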
Feb 13 20:17:08.337882 sshd[1552]: pam_unix(sshd:session): session closed for user core Feb 13 20:17:08.348826 systemd[1]: sshd@1-10.0.0.10:22-10.0.0.1:44110.service: Deactivated successfully. Feb 13 20:17:08.352284 systemd[1]: session-2.scope: Deactivated successfully. Feb 13 20:17:08.353627 systemd-logind[1420]: Session 2 logged out. Waiting for processes to exit. Feb 13 20:17:08.354749 systemd[1]: Started sshd@2-10.0.0.10:22-10.0.0.1:44112.service - OpenSSH per-connection server daemon (10.0.0.1:44112). Feb 13 20:17:08.355468 systemd-logind[1420]: Removed session 2. Feb 13 20:17:08.392319 sshd[1559]: Accepted publickey for core from 10.0.0.1 port 44112 ssh2: RSA SHA256:LaFXDQ5kJ4LcD/Es1CLb+6ve3k3017804ZdZMiXaUQg Feb 13 20:17:08.393693 sshd[1559]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 20:17:08.397687 systemd-logind[1420]: New session 3 of user core. Feb 13 20:17:08.407610 systemd[1]: Started session-3.scope - Session 3 of User core. Feb 13 20:17:08.457316 sshd[1559]: pam_unix(sshd:session): session closed for user core Feb 13 20:17:08.467887 systemd[1]: sshd@2-10.0.0.10:22-10.0.0.1:44112.service: Deactivated successfully. Feb 13 20:17:08.469392 systemd[1]: session-3.scope: Deactivated successfully. Feb 13 20:17:08.470728 systemd-logind[1420]: Session 3 logged out. Waiting for processes to exit. Feb 13 20:17:08.471978 systemd[1]: Started sshd@3-10.0.0.10:22-10.0.0.1:44122.service - OpenSSH per-connection server daemon (10.0.0.1:44122). Feb 13 20:17:08.472771 systemd-logind[1420]: Removed session 3. Feb 13 20:17:08.509802 sshd[1566]: Accepted publickey for core from 10.0.0.1 port 44122 ssh2: RSA SHA256:LaFXDQ5kJ4LcD/Es1CLb+6ve3k3017804ZdZMiXaUQg Feb 13 20:17:08.511154 sshd[1566]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 20:17:08.514894 systemd-logind[1420]: New session 4 of user core. Feb 13 20:17:08.530614 systemd[1]: Started session-4.scope - Session 4 of User core. Feb 13 20:17:08.582623 sshd[1566]: pam_unix(sshd:session): session closed for user core Feb 13 20:17:08.590741 systemd[1]: sshd@3-10.0.0.10:22-10.0.0.1:44122.service: Deactivated successfully. Feb 13 20:17:08.592229 systemd[1]: session-4.scope: Deactivated successfully. Feb 13 20:17:08.593482 systemd-logind[1420]: Session 4 logged out. Waiting for processes to exit. Feb 13 20:17:08.594622 systemd[1]: Started sshd@4-10.0.0.10:22-10.0.0.1:44134.service - OpenSSH per-connection server daemon (10.0.0.1:44134). Feb 13 20:17:08.595286 systemd-logind[1420]: Removed session 4. Feb 13 20:17:08.640394 sshd[1573]: Accepted publickey for core from 10.0.0.1 port 44134 ssh2: RSA SHA256:LaFXDQ5kJ4LcD/Es1CLb+6ve3k3017804ZdZMiXaUQg Feb 13 20:17:08.641701 sshd[1573]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 20:17:08.645447 systemd-logind[1420]: New session 5 of user core. Feb 13 20:17:08.657590 systemd[1]: Started session-5.scope - Session 5 of User core. Feb 13 20:17:08.725075 sudo[1576]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Feb 13 20:17:08.725355 sudo[1576]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Feb 13 20:17:09.042695 systemd[1]: Starting docker.service - Docker Application Container Engine... 
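
[annotation] The sshd entries above follow a fixed shape (Accepted publickey, pam_unix session open, session close), which makes login auditing straightforward. A sketch that extracts the accepted-publickey events from a journal dump like this one; the regex mirrors the lines above and is illustrative:

#!/usr/bin/env python3
# Sketch: pull "Accepted publickey" events out of a journal text dump.
# The lookahead handles this log's run-together lines.
import re
import sys

PAT = re.compile(
    r"sshd\[(?P<pid>\d+)\]: Accepted publickey for (?P<user>\S+) "
    r"from (?P<ip>\S+) port (?P<port>\d+) ssh2: (?P<fp>.+?)(?= Feb|\Z)"
)

for line in sys.stdin:
    for m in PAT.finditer(line):
        print(f"{m['user']}@{m['ip']}:{m['port']} pid={m['pid']} key={m['fp'][:40]}")
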
Feb 13 20:17:09.042800 (dockerd)[1594]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Feb 13 20:17:09.303946 dockerd[1594]: time="2025-02-13T20:17:09.303487108Z" level=info msg="Starting up" Feb 13 20:17:09.446495 dockerd[1594]: time="2025-02-13T20:17:09.446340712Z" level=info msg="Loading containers: start." Feb 13 20:17:09.527457 kernel: Initializing XFRM netlink socket Feb 13 20:17:09.595707 systemd-networkd[1379]: docker0: Link UP Feb 13 20:17:09.623958 dockerd[1594]: time="2025-02-13T20:17:09.623889569Z" level=info msg="Loading containers: done." Feb 13 20:17:09.640523 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck3937773060-merged.mount: Deactivated successfully. Feb 13 20:17:09.655389 dockerd[1594]: time="2025-02-13T20:17:09.655324033Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Feb 13 20:17:09.655515 dockerd[1594]: time="2025-02-13T20:17:09.655467478Z" level=info msg="Docker daemon" commit=061aa95809be396a6b5542618d8a34b02a21ff77 containerd-snapshotter=false storage-driver=overlay2 version=26.1.0 Feb 13 20:17:09.655626 dockerd[1594]: time="2025-02-13T20:17:09.655595927Z" level=info msg="Daemon has completed initialization" Feb 13 20:17:09.689451 dockerd[1594]: time="2025-02-13T20:17:09.689305903Z" level=info msg="API listen on /run/docker.sock" Feb 13 20:17:09.689658 systemd[1]: Started docker.service - Docker Application Container Engine. Feb 13 20:17:10.465808 containerd[1437]: time="2025-02-13T20:17:10.465709918Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.30.10\"" Feb 13 20:17:11.120584 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2664989543.mount: Deactivated successfully. 
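
[annotation] Once dockerd logs "API listen on /run/docker.sock" above, the Engine API is reachable over that Unix socket. A stdlib-only sketch issuing GET /version against it (a stable Docker Engine API endpoint); error handling is trimmed and root privileges are assumed:

#!/usr/bin/env python3
# Sketch: minimal HTTP request over the Unix socket dockerd announces above.
import socket

with socket.socket(socket.AF_UNIX, socket.SOCK_STREAM) as s:
    s.connect("/run/docker.sock")
    s.sendall(b"GET /version HTTP/1.1\r\nHost: docker\r\nConnection: close\r\n\r\n")
    chunks = []
    while data := s.recv(4096):  # server closes after Connection: close
        chunks.append(data)

print(b"".join(chunks).decode(errors="replace"))
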
Feb 13 20:17:12.135319 containerd[1437]: time="2025-02-13T20:17:12.135261222Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.30.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 20:17:12.135752 containerd[1437]: time="2025-02-13T20:17:12.135704829Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.30.10: active requests=0, bytes read=29865209" Feb 13 20:17:12.136801 containerd[1437]: time="2025-02-13T20:17:12.136749457Z" level=info msg="ImageCreate event name:\"sha256:deaeae5e8513d8c5921aee5b515f0fc2ac63b71dfe965318f71eb49468e74a4f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 20:17:12.139831 containerd[1437]: time="2025-02-13T20:17:12.139783123Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:63b2b4b4e9b5dcb5b1b6cec9d5f5f538791a40cd8cb273ef530e6d6535aa0b43\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 20:17:12.141261 containerd[1437]: time="2025-02-13T20:17:12.141106531Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.30.10\" with image id \"sha256:deaeae5e8513d8c5921aee5b515f0fc2ac63b71dfe965318f71eb49468e74a4f\", repo tag \"registry.k8s.io/kube-apiserver:v1.30.10\", repo digest \"registry.k8s.io/kube-apiserver@sha256:63b2b4b4e9b5dcb5b1b6cec9d5f5f538791a40cd8cb273ef530e6d6535aa0b43\", size \"29862007\" in 1.675351871s" Feb 13 20:17:12.141261 containerd[1437]: time="2025-02-13T20:17:12.141149349Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.30.10\" returns image reference \"sha256:deaeae5e8513d8c5921aee5b515f0fc2ac63b71dfe965318f71eb49468e74a4f\"" Feb 13 20:17:12.162086 containerd[1437]: time="2025-02-13T20:17:12.162009682Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.30.10\"" Feb 13 20:17:13.646957 containerd[1437]: time="2025-02-13T20:17:13.646878009Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.30.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 20:17:13.647532 containerd[1437]: time="2025-02-13T20:17:13.647463214Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.30.10: active requests=0, bytes read=26898596" Feb 13 20:17:13.648365 containerd[1437]: time="2025-02-13T20:17:13.648335690Z" level=info msg="ImageCreate event name:\"sha256:e31753dd49b05da8fcb7deb26f2a5942a6747a0e6d4492f3dc8544123b97a3a2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 20:17:13.651479 containerd[1437]: time="2025-02-13T20:17:13.651404992Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:99b3336343ea48be24f1e64774825e9f8d5170bd2ed482ff336548eb824f5f58\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 20:17:13.652725 containerd[1437]: time="2025-02-13T20:17:13.652681047Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.30.10\" with image id \"sha256:e31753dd49b05da8fcb7deb26f2a5942a6747a0e6d4492f3dc8544123b97a3a2\", repo tag \"registry.k8s.io/kube-controller-manager:v1.30.10\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:99b3336343ea48be24f1e64774825e9f8d5170bd2ed482ff336548eb824f5f58\", size \"28302323\" in 1.490630891s" Feb 13 20:17:13.652793 containerd[1437]: time="2025-02-13T20:17:13.652725437Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.30.10\" returns image reference \"sha256:e31753dd49b05da8fcb7deb26f2a5942a6747a0e6d4492f3dc8544123b97a3a2\"" Feb 13 
20:17:13.671022 containerd[1437]: time="2025-02-13T20:17:13.670977045Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.30.10\"" Feb 13 20:17:13.741414 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Feb 13 20:17:13.750619 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Feb 13 20:17:13.839788 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Feb 13 20:17:13.844077 (kubelet)[1829]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Feb 13 20:17:13.887335 kubelet[1829]: E0213 20:17:13.887265 1829 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Feb 13 20:17:13.890627 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Feb 13 20:17:13.890787 systemd[1]: kubelet.service: Failed with result 'exit-code'. Feb 13 20:17:14.725079 containerd[1437]: time="2025-02-13T20:17:14.724885852Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.30.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 20:17:14.725884 containerd[1437]: time="2025-02-13T20:17:14.725856029Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.30.10: active requests=0, bytes read=16164936" Feb 13 20:17:14.726598 containerd[1437]: time="2025-02-13T20:17:14.726569395Z" level=info msg="ImageCreate event name:\"sha256:ea60c047fad7c01bf50f1f0259a4aeea2cc4401850d5a95802cc1d07d9021eb4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 20:17:14.729555 containerd[1437]: time="2025-02-13T20:17:14.729520822Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:cf7eb256192f1f51093fe278c209a9368f0675eb61ed01b148af47d2f21c002d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 20:17:14.730883 containerd[1437]: time="2025-02-13T20:17:14.730849592Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.30.10\" with image id \"sha256:ea60c047fad7c01bf50f1f0259a4aeea2cc4401850d5a95802cc1d07d9021eb4\", repo tag \"registry.k8s.io/kube-scheduler:v1.30.10\", repo digest \"registry.k8s.io/kube-scheduler@sha256:cf7eb256192f1f51093fe278c209a9368f0675eb61ed01b148af47d2f21c002d\", size \"17568681\" in 1.059833436s" Feb 13 20:17:14.730883 containerd[1437]: time="2025-02-13T20:17:14.730882728Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.30.10\" returns image reference \"sha256:ea60c047fad7c01bf50f1f0259a4aeea2cc4401850d5a95802cc1d07d9021eb4\"" Feb 13 20:17:14.749259 containerd[1437]: time="2025-02-13T20:17:14.749151280Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.30.10\"" Feb 13 20:17:15.734649 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount347297979.mount: Deactivated successfully. 
Feb 13 20:17:15.928734 containerd[1437]: time="2025-02-13T20:17:15.928689478Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.30.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 20:17:15.929844 containerd[1437]: time="2025-02-13T20:17:15.929661057Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.30.10: active requests=0, bytes read=25663372" Feb 13 20:17:15.930569 containerd[1437]: time="2025-02-13T20:17:15.930533975Z" level=info msg="ImageCreate event name:\"sha256:fa8af75a6512774cc93242474a9841ace82a7d0646001149fc65d92a8bb0c00a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 20:17:15.936239 containerd[1437]: time="2025-02-13T20:17:15.936172154Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:d112e804e548fce28d9f1e3282c9ce54e374451e6a2c41b1ca9d7fca5d1fcc48\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 20:17:15.937267 containerd[1437]: time="2025-02-13T20:17:15.937231463Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.30.10\" with image id \"sha256:fa8af75a6512774cc93242474a9841ace82a7d0646001149fc65d92a8bb0c00a\", repo tag \"registry.k8s.io/kube-proxy:v1.30.10\", repo digest \"registry.k8s.io/kube-proxy@sha256:d112e804e548fce28d9f1e3282c9ce54e374451e6a2c41b1ca9d7fca5d1fcc48\", size \"25662389\" in 1.188043233s" Feb 13 20:17:15.937445 containerd[1437]: time="2025-02-13T20:17:15.937359501Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.30.10\" returns image reference \"sha256:fa8af75a6512774cc93242474a9841ace82a7d0646001149fc65d92a8bb0c00a\"" Feb 13 20:17:15.955881 containerd[1437]: time="2025-02-13T20:17:15.955843371Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\"" Feb 13 20:17:16.476660 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4015294304.mount: Deactivated successfully. 
Feb 13 20:17:17.208474 containerd[1437]: time="2025-02-13T20:17:17.208402227Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 20:17:17.209004 containerd[1437]: time="2025-02-13T20:17:17.208970783Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.1: active requests=0, bytes read=16485383" Feb 13 20:17:17.209932 containerd[1437]: time="2025-02-13T20:17:17.209905329Z" level=info msg="ImageCreate event name:\"sha256:2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 20:17:17.213164 containerd[1437]: time="2025-02-13T20:17:17.213101373Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 20:17:17.217013 containerd[1437]: time="2025-02-13T20:17:17.216972073Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.1\" with image id \"sha256:2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.1\", repo digest \"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\", size \"16482581\" in 1.261081976s" Feb 13 20:17:17.217013 containerd[1437]: time="2025-02-13T20:17:17.217014063Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\" returns image reference \"sha256:2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93\"" Feb 13 20:17:17.235102 containerd[1437]: time="2025-02-13T20:17:17.235057069Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\"" Feb 13 20:17:17.757317 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount224154367.mount: Deactivated successfully. 
Feb 13 20:17:17.761890 containerd[1437]: time="2025-02-13T20:17:17.761841300Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 20:17:17.762697 containerd[1437]: time="2025-02-13T20:17:17.762657020Z" level=info msg="stop pulling image registry.k8s.io/pause:3.9: active requests=0, bytes read=268823" Feb 13 20:17:17.763500 containerd[1437]: time="2025-02-13T20:17:17.763468486Z" level=info msg="ImageCreate event name:\"sha256:829e9de338bd5fdd3f16f68f83a9fb288fbc8453e881e5d5cfd0f6f2ff72b43e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 20:17:17.765943 containerd[1437]: time="2025-02-13T20:17:17.765907539Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 20:17:17.767048 containerd[1437]: time="2025-02-13T20:17:17.767011292Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.9\" with image id \"sha256:829e9de338bd5fdd3f16f68f83a9fb288fbc8453e881e5d5cfd0f6f2ff72b43e\", repo tag \"registry.k8s.io/pause:3.9\", repo digest \"registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097\", size \"268051\" in 531.911108ms" Feb 13 20:17:17.767096 containerd[1437]: time="2025-02-13T20:17:17.767048746Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\" returns image reference \"sha256:829e9de338bd5fdd3f16f68f83a9fb288fbc8453e881e5d5cfd0f6f2ff72b43e\"" Feb 13 20:17:17.785826 containerd[1437]: time="2025-02-13T20:17:17.785795471Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.12-0\"" Feb 13 20:17:18.302970 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1424180572.mount: Deactivated successfully. Feb 13 20:17:19.975811 containerd[1437]: time="2025-02-13T20:17:19.975758944Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.12-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 20:17:19.976772 containerd[1437]: time="2025-02-13T20:17:19.976598929Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.12-0: active requests=0, bytes read=66191474" Feb 13 20:17:19.977390 containerd[1437]: time="2025-02-13T20:17:19.977358012Z" level=info msg="ImageCreate event name:\"sha256:014faa467e29798aeef733fe6d1a3b5e382688217b053ad23410e6cccd5d22fd\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 20:17:19.980625 containerd[1437]: time="2025-02-13T20:17:19.980560840Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:44a8e24dcbba3470ee1fee21d5e88d128c936e9b55d4bc51fbef8086f8ed123b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 20:17:19.981892 containerd[1437]: time="2025-02-13T20:17:19.981851581Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.12-0\" with image id \"sha256:014faa467e29798aeef733fe6d1a3b5e382688217b053ad23410e6cccd5d22fd\", repo tag \"registry.k8s.io/etcd:3.5.12-0\", repo digest \"registry.k8s.io/etcd@sha256:44a8e24dcbba3470ee1fee21d5e88d128c936e9b55d4bc51fbef8086f8ed123b\", size \"66189079\" in 2.19602027s" Feb 13 20:17:19.981948 containerd[1437]: time="2025-02-13T20:17:19.981892373Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.12-0\" returns image reference \"sha256:014faa467e29798aeef733fe6d1a3b5e382688217b053ad23410e6cccd5d22fd\"" Feb 13 20:17:23.991371 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. 
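
[annotation] Each pull above ends with a "Pulled image ... in <duration>" line, so the total time spent fetching the control-plane images can be read straight off this log. A sketch that sums them; the regex is matched to the lines above and handles both the "1.675351871s" and "531.911108ms" forms:

#!/usr/bin/env python3
# Sketch: total the image-pull durations containerd reports above.
# The optional backslash tolerates this dump's escaped quotes.
import re
import sys

PAT = re.compile(
    r'Pulled image \\?"(?P<img>[^"\\]+)\\?".*? in (?P<dur>[\d.]+)(?P<unit>ms|s)'
)

total = 0.0
for line in sys.stdin:
    for m in PAT.finditer(line):
        secs = float(m["dur"]) / (1000 if m["unit"] == "ms" else 1)
        total += secs
        print(f"{m['img']:<55} {secs:8.3f}s")
print(f"{'total':<55} {total:8.3f}s")
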
Feb 13 20:17:24.000622 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Feb 13 20:17:24.091995 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Feb 13 20:17:24.095537 (kubelet)[2050]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Feb 13 20:17:24.131149 kubelet[2050]: E0213 20:17:24.131077 2050 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Feb 13 20:17:24.133754 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Feb 13 20:17:24.134002 systemd[1]: kubelet.service: Failed with result 'exit-code'. Feb 13 20:17:25.126106 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Feb 13 20:17:25.139642 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Feb 13 20:17:25.154753 systemd[1]: Reloading requested from client PID 2065 ('systemctl') (unit session-5.scope)... Feb 13 20:17:25.154770 systemd[1]: Reloading... Feb 13 20:17:25.220498 zram_generator::config[2104]: No configuration found. Feb 13 20:17:25.333801 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Feb 13 20:17:25.385886 systemd[1]: Reloading finished in 230 ms. Feb 13 20:17:25.420115 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Feb 13 20:17:25.420181 systemd[1]: kubelet.service: Failed with result 'signal'. Feb 13 20:17:25.420402 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Feb 13 20:17:25.421877 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Feb 13 20:17:25.511745 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Feb 13 20:17:25.516339 (kubelet)[2149]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Feb 13 20:17:25.556527 kubelet[2149]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Feb 13 20:17:25.557016 kubelet[2149]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Feb 13 20:17:25.557016 kubelet[2149]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Feb 13 20:17:25.558498 kubelet[2149]: I0213 20:17:25.557777 2149 server.go:205] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Feb 13 20:17:26.335660 kubelet[2149]: I0213 20:17:26.335616 2149 server.go:484] "Kubelet version" kubeletVersion="v1.30.1" Feb 13 20:17:26.335660 kubelet[2149]: I0213 20:17:26.335648 2149 server.go:486] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Feb 13 20:17:26.335871 kubelet[2149]: I0213 20:17:26.335855 2149 server.go:927] "Client rotation is on, will bootstrap in background" Feb 13 20:17:26.365706 kubelet[2149]: I0213 20:17:26.365669 2149 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Feb 13 20:17:26.365810 kubelet[2149]: E0213 20:17:26.365787 2149 certificate_manager.go:562] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post "https://10.0.0.10:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 10.0.0.10:6443: connect: connection refused Feb 13 20:17:26.374163 kubelet[2149]: I0213 20:17:26.374134 2149 server.go:742] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Feb 13 20:17:26.375565 kubelet[2149]: I0213 20:17:26.374533 2149 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Feb 13 20:17:26.375565 kubelet[2149]: I0213 20:17:26.374567 2149 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null} Feb 13 20:17:26.375565 kubelet[2149]: I0213 20:17:26.374910 2149 topology_manager.go:138] "Creating topology manager with none policy" Feb 13 20:17:26.375565 kubelet[2149]: I0213 20:17:26.374927 2149 container_manager_linux.go:301] "Creating device plugin manager" Feb 13 20:17:26.375565 kubelet[2149]: I0213 20:17:26.375186 2149 state_mem.go:36] "Initialized new in-memory state store" Feb 13 
20:17:26.376774 kubelet[2149]: I0213 20:17:26.376738 2149 kubelet.go:400] "Attempting to sync node with API server" Feb 13 20:17:26.376774 kubelet[2149]: I0213 20:17:26.376775 2149 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests" Feb 13 20:17:26.376948 kubelet[2149]: I0213 20:17:26.376931 2149 kubelet.go:312] "Adding apiserver pod source" Feb 13 20:17:26.377204 kubelet[2149]: I0213 20:17:26.377012 2149 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Feb 13 20:17:26.377204 kubelet[2149]: W0213 20:17:26.377093 2149 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.10:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.10:6443: connect: connection refused Feb 13 20:17:26.377204 kubelet[2149]: E0213 20:17:26.377150 2149 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://10.0.0.10:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.10:6443: connect: connection refused Feb 13 20:17:26.377584 kubelet[2149]: W0213 20:17:26.377543 2149 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.10:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.0.0.10:6443: connect: connection refused Feb 13 20:17:26.377625 kubelet[2149]: E0213 20:17:26.377589 2149 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.0.0.10:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.0.0.10:6443: connect: connection refused Feb 13 20:17:26.378460 kubelet[2149]: I0213 20:17:26.378439 2149 kuberuntime_manager.go:261] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Feb 13 20:17:26.378820 kubelet[2149]: I0213 20:17:26.378805 2149 kubelet.go:815] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Feb 13 20:17:26.378923 kubelet[2149]: W0213 20:17:26.378909 2149 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. 
Feb 13 20:17:26.380024 kubelet[2149]: I0213 20:17:26.379745 2149 server.go:1264] "Started kubelet" Feb 13 20:17:26.380600 kubelet[2149]: I0213 20:17:26.380548 2149 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Feb 13 20:17:26.381933 kubelet[2149]: I0213 20:17:26.381908 2149 server.go:455] "Adding debug handlers to kubelet server" Feb 13 20:17:26.382547 kubelet[2149]: I0213 20:17:26.382487 2149 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Feb 13 20:17:26.382842 kubelet[2149]: I0213 20:17:26.382747 2149 server.go:227] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Feb 13 20:17:26.383017 kubelet[2149]: E0213 20:17:26.382742 2149 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.10:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.10:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.1823dde2981fe094 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-02-13 20:17:26.379716756 +0000 UTC m=+0.860272359,LastTimestamp:2025-02-13 20:17:26.379716756 +0000 UTC m=+0.860272359,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Feb 13 20:17:26.385454 kubelet[2149]: I0213 20:17:26.384236 2149 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Feb 13 20:17:26.385454 kubelet[2149]: I0213 20:17:26.384484 2149 volume_manager.go:291] "Starting Kubelet Volume Manager" Feb 13 20:17:26.385454 kubelet[2149]: I0213 20:17:26.384577 2149 desired_state_of_world_populator.go:149] "Desired state populator starts to run" Feb 13 20:17:26.385454 kubelet[2149]: I0213 20:17:26.384630 2149 reconciler.go:26] "Reconciler: start to sync state" Feb 13 20:17:26.385454 kubelet[2149]: W0213 20:17:26.384923 2149 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.10:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.10:6443: connect: connection refused Feb 13 20:17:26.385454 kubelet[2149]: E0213 20:17:26.384965 2149 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://10.0.0.10:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.10:6443: connect: connection refused Feb 13 20:17:26.385454 kubelet[2149]: E0213 20:17:26.385395 2149 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.10:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.10:6443: connect: connection refused" interval="200ms" Feb 13 20:17:26.386348 kubelet[2149]: I0213 20:17:26.386315 2149 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Feb 13 20:17:26.387811 kubelet[2149]: E0213 20:17:26.387770 2149 kubelet.go:1467] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Feb 13 20:17:26.387843 kubelet[2149]: I0213 20:17:26.387824 2149 factory.go:221] Registration of the containerd container factory successfully Feb 13 20:17:26.387843 kubelet[2149]: I0213 20:17:26.387839 2149 factory.go:221] Registration of the systemd container factory successfully Feb 13 20:17:26.399450 kubelet[2149]: I0213 20:17:26.399395 2149 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Feb 13 20:17:26.400514 kubelet[2149]: I0213 20:17:26.400479 2149 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Feb 13 20:17:26.400654 kubelet[2149]: I0213 20:17:26.400635 2149 status_manager.go:217] "Starting to sync pod status with apiserver" Feb 13 20:17:26.400654 kubelet[2149]: I0213 20:17:26.400653 2149 kubelet.go:2337] "Starting kubelet main sync loop" Feb 13 20:17:26.400704 kubelet[2149]: E0213 20:17:26.400691 2149 kubelet.go:2361] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Feb 13 20:17:26.403210 kubelet[2149]: W0213 20:17:26.403104 2149 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.10:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.10:6443: connect: connection refused Feb 13 20:17:26.403210 kubelet[2149]: E0213 20:17:26.403160 2149 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://10.0.0.10:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.10:6443: connect: connection refused Feb 13 20:17:26.403635 kubelet[2149]: I0213 20:17:26.403617 2149 cpu_manager.go:214] "Starting CPU manager" policy="none" Feb 13 20:17:26.403635 kubelet[2149]: I0213 20:17:26.403630 2149 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Feb 13 20:17:26.403711 kubelet[2149]: I0213 20:17:26.403647 2149 state_mem.go:36] "Initialized new in-memory state store" Feb 13 20:17:26.464573 kubelet[2149]: I0213 20:17:26.464531 2149 policy_none.go:49] "None policy: Start" Feb 13 20:17:26.465609 kubelet[2149]: I0213 20:17:26.465583 2149 memory_manager.go:170] "Starting memorymanager" policy="None" Feb 13 20:17:26.465679 kubelet[2149]: I0213 20:17:26.465615 2149 state_mem.go:35] "Initializing new in-memory state store" Feb 13 20:17:26.473054 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Feb 13 20:17:26.486438 kubelet[2149]: I0213 20:17:26.486387 2149 kubelet_node_status.go:73] "Attempting to register node" node="localhost" Feb 13 20:17:26.487189 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Feb 13 20:17:26.487752 kubelet[2149]: E0213 20:17:26.487709 2149 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.0.0.10:6443/api/v1/nodes\": dial tcp 10.0.0.10:6443: connect: connection refused" node="localhost" Feb 13 20:17:26.491917 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. 
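
[annotation] Every reflector, lease, and event error above reduces to one condition: nothing is listening on 10.0.0.10:6443 yet, because the kubelet has not started the static kube-apiserver pod it is about to admit. A sketch of the same reachability probe; the address and port are taken from the log:

#!/usr/bin/env python3
# Sketch: the TCP probe behind all the "connection refused" errors above.
import socket
import time

def apiserver_up(host="10.0.0.10", port=6443, timeout=2.0):
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:  # ECONNREFUSED until the static pod is serving
        return False

while not apiserver_up():
    print("apiserver not up yet, retrying...")
    time.sleep(1)
print("apiserver reachable")
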
Feb 13 20:17:26.501651 kubelet[2149]: E0213 20:17:26.501601 2149 kubelet.go:2361] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Feb 13 20:17:26.508242 kubelet[2149]: I0213 20:17:26.508202 2149 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Feb 13 20:17:26.508442 kubelet[2149]: I0213 20:17:26.508389 2149 container_log_manager.go:186] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Feb 13 20:17:26.508549 kubelet[2149]: I0213 20:17:26.508507 2149 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Feb 13 20:17:26.510244 kubelet[2149]: E0213 20:17:26.510224 2149 eviction_manager.go:282] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Feb 13 20:17:26.586000 kubelet[2149]: E0213 20:17:26.585859 2149 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.10:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.10:6443: connect: connection refused" interval="400ms" Feb 13 20:17:26.689758 kubelet[2149]: I0213 20:17:26.689708 2149 kubelet_node_status.go:73] "Attempting to register node" node="localhost" Feb 13 20:17:26.690062 kubelet[2149]: E0213 20:17:26.690020 2149 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.0.0.10:6443/api/v1/nodes\": dial tcp 10.0.0.10:6443: connect: connection refused" node="localhost" Feb 13 20:17:26.702258 kubelet[2149]: I0213 20:17:26.702207 2149 topology_manager.go:215] "Topology Admit Handler" podUID="72047ab8f55fc4b47dec83af416ed460" podNamespace="kube-system" podName="kube-apiserver-localhost" Feb 13 20:17:26.703088 kubelet[2149]: I0213 20:17:26.703039 2149 topology_manager.go:215] "Topology Admit Handler" podUID="dd3721fb1a67092819e35b40473f4063" podNamespace="kube-system" podName="kube-controller-manager-localhost" Feb 13 20:17:26.704120 kubelet[2149]: I0213 20:17:26.703803 2149 topology_manager.go:215] "Topology Admit Handler" podUID="8d610d6c43052dbc8df47eb68906a982" podNamespace="kube-system" podName="kube-scheduler-localhost" Feb 13 20:17:26.709577 systemd[1]: Created slice kubepods-burstable-pod72047ab8f55fc4b47dec83af416ed460.slice - libcontainer container kubepods-burstable-pod72047ab8f55fc4b47dec83af416ed460.slice. Feb 13 20:17:26.731018 systemd[1]: Created slice kubepods-burstable-poddd3721fb1a67092819e35b40473f4063.slice - libcontainer container kubepods-burstable-poddd3721fb1a67092819e35b40473f4063.slice. Feb 13 20:17:26.747593 systemd[1]: Created slice kubepods-burstable-pod8d610d6c43052dbc8df47eb68906a982.slice - libcontainer container kubepods-burstable-pod8d610d6c43052dbc8df47eb68906a982.slice. 
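
[annotation] The three "Topology Admit Handler" entries above are the kubelet picking up static pod manifests from /etc/kubernetes/manifests (the static pod path it logged earlier), and the requests-only resource shape is why it creates kubepods-burstable slices. A hedged sketch of such a manifest; only the image tag comes from the pulls above, the pod name and resources are illustrative placeholders:

#!/usr/bin/env python3
# Sketch: drop a static pod manifest where the kubelet logged its
# static pod path. Pod name and resources are hypothetical.
import pathlib

MANIFEST = """\
apiVersion: v1
kind: Pod
metadata:
  name: example-static-pod      # real ones here are kube-apiserver etc.
  namespace: kube-system
spec:
  hostNetwork: true
  containers:
  - name: example
    image: registry.k8s.io/pause:3.9
    resources:
      requests:
        cpu: 100m               # requests-only => Burstable QoS class,
                                # hence the kubepods-burstable slices above
"""

path = pathlib.Path("/etc/kubernetes/manifests/example.yaml")
path.parent.mkdir(parents=True, exist_ok=True)
path.write_text(MANIFEST)
print(f"wrote {path}")
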
Feb 13 20:17:26.786862 kubelet[2149]: I0213 20:17:26.786799 2149 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/dd3721fb1a67092819e35b40473f4063-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"dd3721fb1a67092819e35b40473f4063\") " pod="kube-system/kube-controller-manager-localhost" Feb 13 20:17:26.786862 kubelet[2149]: I0213 20:17:26.786845 2149 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/dd3721fb1a67092819e35b40473f4063-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"dd3721fb1a67092819e35b40473f4063\") " pod="kube-system/kube-controller-manager-localhost" Feb 13 20:17:26.786862 kubelet[2149]: I0213 20:17:26.786868 2149 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/72047ab8f55fc4b47dec83af416ed460-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"72047ab8f55fc4b47dec83af416ed460\") " pod="kube-system/kube-apiserver-localhost" Feb 13 20:17:26.787073 kubelet[2149]: I0213 20:17:26.786884 2149 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/dd3721fb1a67092819e35b40473f4063-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"dd3721fb1a67092819e35b40473f4063\") " pod="kube-system/kube-controller-manager-localhost" Feb 13 20:17:26.787073 kubelet[2149]: I0213 20:17:26.786899 2149 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/dd3721fb1a67092819e35b40473f4063-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"dd3721fb1a67092819e35b40473f4063\") " pod="kube-system/kube-controller-manager-localhost" Feb 13 20:17:26.787073 kubelet[2149]: I0213 20:17:26.786913 2149 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/8d610d6c43052dbc8df47eb68906a982-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"8d610d6c43052dbc8df47eb68906a982\") " pod="kube-system/kube-scheduler-localhost" Feb 13 20:17:26.787073 kubelet[2149]: I0213 20:17:26.786929 2149 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/72047ab8f55fc4b47dec83af416ed460-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"72047ab8f55fc4b47dec83af416ed460\") " pod="kube-system/kube-apiserver-localhost" Feb 13 20:17:26.787073 kubelet[2149]: I0213 20:17:26.786945 2149 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/72047ab8f55fc4b47dec83af416ed460-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"72047ab8f55fc4b47dec83af416ed460\") " pod="kube-system/kube-apiserver-localhost" Feb 13 20:17:26.787247 kubelet[2149]: I0213 20:17:26.786962 2149 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/dd3721fb1a67092819e35b40473f4063-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"dd3721fb1a67092819e35b40473f4063\") " 
pod="kube-system/kube-controller-manager-localhost" Feb 13 20:17:26.986823 kubelet[2149]: E0213 20:17:26.986709 2149 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.10:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.10:6443: connect: connection refused" interval="800ms" Feb 13 20:17:27.031181 kubelet[2149]: E0213 20:17:27.029026 2149 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 20:17:27.031708 containerd[1437]: time="2025-02-13T20:17:27.031663713Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:72047ab8f55fc4b47dec83af416ed460,Namespace:kube-system,Attempt:0,}" Feb 13 20:17:27.046313 kubelet[2149]: E0213 20:17:27.046279 2149 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 20:17:27.046811 containerd[1437]: time="2025-02-13T20:17:27.046625684Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:dd3721fb1a67092819e35b40473f4063,Namespace:kube-system,Attempt:0,}" Feb 13 20:17:27.050248 kubelet[2149]: E0213 20:17:27.050210 2149 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 20:17:27.050584 containerd[1437]: time="2025-02-13T20:17:27.050550671Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:8d610d6c43052dbc8df47eb68906a982,Namespace:kube-system,Attempt:0,}" Feb 13 20:17:27.091169 kubelet[2149]: I0213 20:17:27.091138 2149 kubelet_node_status.go:73] "Attempting to register node" node="localhost" Feb 13 20:17:27.091498 kubelet[2149]: E0213 20:17:27.091463 2149 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.0.0.10:6443/api/v1/nodes\": dial tcp 10.0.0.10:6443: connect: connection refused" node="localhost" Feb 13 20:17:27.185080 kubelet[2149]: W0213 20:17:27.185006 2149 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.10:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.10:6443: connect: connection refused Feb 13 20:17:27.185080 kubelet[2149]: E0213 20:17:27.185070 2149 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://10.0.0.10:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.10:6443: connect: connection refused Feb 13 20:17:27.452381 kubelet[2149]: W0213 20:17:27.452303 2149 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.10:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.10:6443: connect: connection refused Feb 13 20:17:27.452381 kubelet[2149]: E0213 20:17:27.452367 2149 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://10.0.0.10:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.10:6443: connect: connection refused Feb 13 20:17:27.500489 systemd[1]: 
var-lib-containerd-tmpmounts-containerd\x2dmount36378070.mount: Deactivated successfully. Feb 13 20:17:27.506030 containerd[1437]: time="2025-02-13T20:17:27.505965679Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Feb 13 20:17:27.506818 containerd[1437]: time="2025-02-13T20:17:27.506784332Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=269175" Feb 13 20:17:27.507541 containerd[1437]: time="2025-02-13T20:17:27.507512459Z" level=info msg="ImageCreate event name:\"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Feb 13 20:17:27.508811 containerd[1437]: time="2025-02-13T20:17:27.508782699Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Feb 13 20:17:27.509251 containerd[1437]: time="2025-02-13T20:17:27.509221153Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Feb 13 20:17:27.509880 containerd[1437]: time="2025-02-13T20:17:27.509850908Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Feb 13 20:17:27.511086 containerd[1437]: time="2025-02-13T20:17:27.511012325Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Feb 13 20:17:27.512040 containerd[1437]: time="2025-02-13T20:17:27.512003461Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Feb 13 20:17:27.513008 containerd[1437]: time="2025-02-13T20:17:27.512973057Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 481.223663ms" Feb 13 20:17:27.517093 containerd[1437]: time="2025-02-13T20:17:27.517019599Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 466.413835ms" Feb 13 20:17:27.517985 containerd[1437]: time="2025-02-13T20:17:27.517799816Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 471.120321ms" Feb 13 20:17:27.545445 kubelet[2149]: W0213 20:17:27.545311 2149 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get 
"https://10.0.0.10:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.10:6443: connect: connection refused Feb 13 20:17:27.545445 kubelet[2149]: E0213 20:17:27.545393 2149 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://10.0.0.10:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.10:6443: connect: connection refused Feb 13 20:17:27.648900 containerd[1437]: time="2025-02-13T20:17:27.648754619Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 20:17:27.648900 containerd[1437]: time="2025-02-13T20:17:27.648814035Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 20:17:27.648900 containerd[1437]: time="2025-02-13T20:17:27.648847547Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 20:17:27.649183 containerd[1437]: time="2025-02-13T20:17:27.649026396Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 20:17:27.649292 containerd[1437]: time="2025-02-13T20:17:27.649238396Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 20:17:27.649386 containerd[1437]: time="2025-02-13T20:17:27.649282838Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 20:17:27.649462 containerd[1437]: time="2025-02-13T20:17:27.649395905Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 20:17:27.650919 containerd[1437]: time="2025-02-13T20:17:27.650146494Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 20:17:27.650919 containerd[1437]: time="2025-02-13T20:17:27.650191056Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 20:17:27.650919 containerd[1437]: time="2025-02-13T20:17:27.650214678Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 20:17:27.650919 containerd[1437]: time="2025-02-13T20:17:27.650304082Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 20:17:27.650919 containerd[1437]: time="2025-02-13T20:17:27.649927647Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 20:17:27.671587 systemd[1]: Started cri-containerd-419f4d8e21b010c82e8232ffe8c0c229a4e44001b36b0d78a27eeb545f303acc.scope - libcontainer container 419f4d8e21b010c82e8232ffe8c0c229a4e44001b36b0d78a27eeb545f303acc. Feb 13 20:17:27.672823 systemd[1]: Started cri-containerd-6373952edbdaaac82aa8a9cfac4525944cf71c3c0f6d2588b618c53362e4f8e8.scope - libcontainer container 6373952edbdaaac82aa8a9cfac4525944cf71c3c0f6d2588b618c53362e4f8e8. 
Feb 13 20:17:27.675005 systemd[1]: Started cri-containerd-a021ac070f89b680fc9ea1def15dfaa7323b4023ef2f8d64a4dfcd8f968ac4fb.scope - libcontainer container a021ac070f89b680fc9ea1def15dfaa7323b4023ef2f8d64a4dfcd8f968ac4fb. Feb 13 20:17:27.681782 kubelet[2149]: W0213 20:17:27.681719 2149 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.10:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.0.0.10:6443: connect: connection refused Feb 13 20:17:27.681782 kubelet[2149]: E0213 20:17:27.681787 2149 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.0.0.10:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.0.0.10:6443: connect: connection refused Feb 13 20:17:27.702596 containerd[1437]: time="2025-02-13T20:17:27.702469792Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:dd3721fb1a67092819e35b40473f4063,Namespace:kube-system,Attempt:0,} returns sandbox id \"419f4d8e21b010c82e8232ffe8c0c229a4e44001b36b0d78a27eeb545f303acc\"" Feb 13 20:17:27.703859 kubelet[2149]: E0213 20:17:27.703723 2149 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 20:17:27.709515 containerd[1437]: time="2025-02-13T20:17:27.709478011Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:72047ab8f55fc4b47dec83af416ed460,Namespace:kube-system,Attempt:0,} returns sandbox id \"6373952edbdaaac82aa8a9cfac4525944cf71c3c0f6d2588b618c53362e4f8e8\"" Feb 13 20:17:27.710179 kubelet[2149]: E0213 20:17:27.710048 2149 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 20:17:27.712689 containerd[1437]: time="2025-02-13T20:17:27.712657374Z" level=info msg="CreateContainer within sandbox \"419f4d8e21b010c82e8232ffe8c0c229a4e44001b36b0d78a27eeb545f303acc\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Feb 13 20:17:27.712773 containerd[1437]: time="2025-02-13T20:17:27.712736528Z" level=info msg="CreateContainer within sandbox \"6373952edbdaaac82aa8a9cfac4525944cf71c3c0f6d2588b618c53362e4f8e8\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Feb 13 20:17:27.713497 containerd[1437]: time="2025-02-13T20:17:27.713362800Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:8d610d6c43052dbc8df47eb68906a982,Namespace:kube-system,Attempt:0,} returns sandbox id \"a021ac070f89b680fc9ea1def15dfaa7323b4023ef2f8d64a4dfcd8f968ac4fb\"" Feb 13 20:17:27.714038 kubelet[2149]: E0213 20:17:27.714016 2149 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 20:17:27.715831 containerd[1437]: time="2025-02-13T20:17:27.715796058Z" level=info msg="CreateContainer within sandbox \"a021ac070f89b680fc9ea1def15dfaa7323b4023ef2f8d64a4dfcd8f968ac4fb\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Feb 13 20:17:27.731765 containerd[1437]: time="2025-02-13T20:17:27.731711129Z" level=info msg="CreateContainer within sandbox \"6373952edbdaaac82aa8a9cfac4525944cf71c3c0f6d2588b618c53362e4f8e8\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns 
container id \"eec06fed4b1ca25aed9c9d4bcdd12f41bb4aec18dd511ecdd26d8eb95b8f25a0\"" Feb 13 20:17:27.732362 containerd[1437]: time="2025-02-13T20:17:27.732333197Z" level=info msg="StartContainer for \"eec06fed4b1ca25aed9c9d4bcdd12f41bb4aec18dd511ecdd26d8eb95b8f25a0\"" Feb 13 20:17:27.735639 containerd[1437]: time="2025-02-13T20:17:27.735593436Z" level=info msg="CreateContainer within sandbox \"419f4d8e21b010c82e8232ffe8c0c229a4e44001b36b0d78a27eeb545f303acc\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"b2a3ee8621e4e3ddc5adfe5bbe80b4ac5fc4b2e169ad2520492d92561406048d\"" Feb 13 20:17:27.736091 containerd[1437]: time="2025-02-13T20:17:27.736067924Z" level=info msg="StartContainer for \"b2a3ee8621e4e3ddc5adfe5bbe80b4ac5fc4b2e169ad2520492d92561406048d\"" Feb 13 20:17:27.736615 containerd[1437]: time="2025-02-13T20:17:27.736581569Z" level=info msg="CreateContainer within sandbox \"a021ac070f89b680fc9ea1def15dfaa7323b4023ef2f8d64a4dfcd8f968ac4fb\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"e4da100fb75459d0a3046ae9c95956dd3b3c2f7d12206974c2606a3fb52cc1a3\"" Feb 13 20:17:27.736924 containerd[1437]: time="2025-02-13T20:17:27.736903433Z" level=info msg="StartContainer for \"e4da100fb75459d0a3046ae9c95956dd3b3c2f7d12206974c2606a3fb52cc1a3\"" Feb 13 20:17:27.763600 systemd[1]: Started cri-containerd-b2a3ee8621e4e3ddc5adfe5bbe80b4ac5fc4b2e169ad2520492d92561406048d.scope - libcontainer container b2a3ee8621e4e3ddc5adfe5bbe80b4ac5fc4b2e169ad2520492d92561406048d. Feb 13 20:17:27.764778 systemd[1]: Started cri-containerd-eec06fed4b1ca25aed9c9d4bcdd12f41bb4aec18dd511ecdd26d8eb95b8f25a0.scope - libcontainer container eec06fed4b1ca25aed9c9d4bcdd12f41bb4aec18dd511ecdd26d8eb95b8f25a0. Feb 13 20:17:27.768105 systemd[1]: Started cri-containerd-e4da100fb75459d0a3046ae9c95956dd3b3c2f7d12206974c2606a3fb52cc1a3.scope - libcontainer container e4da100fb75459d0a3046ae9c95956dd3b3c2f7d12206974c2606a3fb52cc1a3. 
Feb 13 20:17:27.787232 kubelet[2149]: E0213 20:17:27.787183 2149 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.10:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.10:6443: connect: connection refused" interval="1.6s" Feb 13 20:17:27.799891 containerd[1437]: time="2025-02-13T20:17:27.799763203Z" level=info msg="StartContainer for \"b2a3ee8621e4e3ddc5adfe5bbe80b4ac5fc4b2e169ad2520492d92561406048d\" returns successfully" Feb 13 20:17:27.813891 containerd[1437]: time="2025-02-13T20:17:27.813712217Z" level=info msg="StartContainer for \"eec06fed4b1ca25aed9c9d4bcdd12f41bb4aec18dd511ecdd26d8eb95b8f25a0\" returns successfully" Feb 13 20:17:27.813891 containerd[1437]: time="2025-02-13T20:17:27.813756779Z" level=info msg="StartContainer for \"e4da100fb75459d0a3046ae9c95956dd3b3c2f7d12206974c2606a3fb52cc1a3\" returns successfully" Feb 13 20:17:27.893447 kubelet[2149]: I0213 20:17:27.893063 2149 kubelet_node_status.go:73] "Attempting to register node" node="localhost" Feb 13 20:17:27.893447 kubelet[2149]: E0213 20:17:27.893395 2149 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.0.0.10:6443/api/v1/nodes\": dial tcp 10.0.0.10:6443: connect: connection refused" node="localhost" Feb 13 20:17:28.411436 kubelet[2149]: E0213 20:17:28.410188 2149 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 20:17:28.413821 kubelet[2149]: E0213 20:17:28.413794 2149 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 20:17:28.416561 kubelet[2149]: E0213 20:17:28.416542 2149 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 20:17:29.418346 kubelet[2149]: E0213 20:17:29.418308 2149 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 20:17:29.495059 kubelet[2149]: I0213 20:17:29.495024 2149 kubelet_node_status.go:73] "Attempting to register node" node="localhost" Feb 13 20:17:29.960389 kubelet[2149]: I0213 20:17:29.960349 2149 kubelet_node_status.go:76] "Successfully registered node" node="localhost" Feb 13 20:17:29.978780 kubelet[2149]: E0213 20:17:29.978728 2149 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" Feb 13 20:17:29.986088 kubelet[2149]: E0213 20:17:29.985988 2149 event.go:359] "Server rejected event (will not retry!)" err="namespaces \"default\" not found" event="&Event{ObjectMeta:{localhost.1823dde2981fe094 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-02-13 20:17:26.379716756 +0000 UTC m=+0.860272359,LastTimestamp:2025-02-13 20:17:26.379716756 +0000 UTC m=+0.860272359,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Feb 13 20:17:30.078892 kubelet[2149]: 
E0213 20:17:30.078854 2149 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" Feb 13 20:17:30.179811 kubelet[2149]: E0213 20:17:30.179757 2149 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" Feb 13 20:17:30.381609 kubelet[2149]: I0213 20:17:30.381567 2149 apiserver.go:52] "Watching apiserver" Feb 13 20:17:30.385369 kubelet[2149]: I0213 20:17:30.385340 2149 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world" Feb 13 20:17:31.967134 systemd[1]: Reloading requested from client PID 2428 ('systemctl') (unit session-5.scope)... Feb 13 20:17:31.967151 systemd[1]: Reloading... Feb 13 20:17:32.029457 zram_generator::config[2473]: No configuration found. Feb 13 20:17:32.051336 kubelet[2149]: E0213 20:17:32.051304 2149 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 20:17:32.108982 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Feb 13 20:17:32.176090 systemd[1]: Reloading finished in 208 ms. Feb 13 20:17:32.208802 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Feb 13 20:17:32.220504 systemd[1]: kubelet.service: Deactivated successfully. Feb 13 20:17:32.220823 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Feb 13 20:17:32.220984 systemd[1]: kubelet.service: Consumed 1.235s CPU time, 113.7M memory peak, 0B memory swap peak. Feb 13 20:17:32.234899 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Feb 13 20:17:32.326392 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Feb 13 20:17:32.329936 (kubelet)[2509]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Feb 13 20:17:32.369919 kubelet[2509]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Feb 13 20:17:32.369919 kubelet[2509]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Feb 13 20:17:32.369919 kubelet[2509]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Feb 13 20:17:32.370620 kubelet[2509]: I0213 20:17:32.369945 2509 server.go:205] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Feb 13 20:17:32.375130 kubelet[2509]: I0213 20:17:32.375041 2509 server.go:484] "Kubelet version" kubeletVersion="v1.30.1" Feb 13 20:17:32.375313 kubelet[2509]: I0213 20:17:32.375224 2509 server.go:486] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Feb 13 20:17:32.375603 kubelet[2509]: I0213 20:17:32.375575 2509 server.go:927] "Client rotation is on, will bootstrap in background" Feb 13 20:17:32.377912 kubelet[2509]: I0213 20:17:32.377541 2509 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Feb 13 20:17:32.379124 kubelet[2509]: I0213 20:17:32.379081 2509 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Feb 13 20:17:32.386408 kubelet[2509]: I0213 20:17:32.386387 2509 server.go:742] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Feb 13 20:17:32.386614 kubelet[2509]: I0213 20:17:32.386590 2509 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Feb 13 20:17:32.386760 kubelet[2509]: I0213 20:17:32.386616 2509 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null} Feb 13 20:17:32.386852 kubelet[2509]: I0213 20:17:32.386765 2509 topology_manager.go:138] "Creating topology manager with none policy" Feb 13 20:17:32.386852 kubelet[2509]: I0213 20:17:32.386773 2509 container_manager_linux.go:301] "Creating device plugin manager" Feb 13 20:17:32.386852 kubelet[2509]: I0213 20:17:32.386807 2509 state_mem.go:36] "Initialized new in-memory state store" Feb 13 20:17:32.386916 kubelet[2509]: I0213 20:17:32.386897 2509 kubelet.go:400] "Attempting to sync node with API server" Feb 13 20:17:32.386916 kubelet[2509]: I0213 20:17:32.386909 2509 kubelet.go:301] "Adding static pod path" 
path="/etc/kubernetes/manifests" Feb 13 20:17:32.386964 kubelet[2509]: I0213 20:17:32.386935 2509 kubelet.go:312] "Adding apiserver pod source" Feb 13 20:17:32.386964 kubelet[2509]: I0213 20:17:32.386948 2509 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Feb 13 20:17:32.396486 kubelet[2509]: I0213 20:17:32.396448 2509 kuberuntime_manager.go:261] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Feb 13 20:17:32.396647 kubelet[2509]: I0213 20:17:32.396624 2509 kubelet.go:815] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Feb 13 20:17:32.397066 kubelet[2509]: I0213 20:17:32.397043 2509 server.go:1264] "Started kubelet" Feb 13 20:17:32.397776 kubelet[2509]: I0213 20:17:32.397440 2509 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Feb 13 20:17:32.400459 kubelet[2509]: I0213 20:17:32.398029 2509 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Feb 13 20:17:32.400459 kubelet[2509]: I0213 20:17:32.399861 2509 volume_manager.go:291] "Starting Kubelet Volume Manager" Feb 13 20:17:32.400459 kubelet[2509]: I0213 20:17:32.398306 2509 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Feb 13 20:17:32.401050 kubelet[2509]: I0213 20:17:32.400890 2509 server.go:455] "Adding debug handlers to kubelet server" Feb 13 20:17:32.401916 kubelet[2509]: I0213 20:17:32.401628 2509 server.go:227] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Feb 13 20:17:32.402643 kubelet[2509]: I0213 20:17:32.402183 2509 desired_state_of_world_populator.go:149] "Desired state populator starts to run" Feb 13 20:17:32.402643 kubelet[2509]: I0213 20:17:32.402328 2509 reconciler.go:26] "Reconciler: start to sync state" Feb 13 20:17:32.404357 kubelet[2509]: I0213 20:17:32.404203 2509 factory.go:221] Registration of the containerd container factory successfully Feb 13 20:17:32.404357 kubelet[2509]: I0213 20:17:32.404247 2509 factory.go:221] Registration of the systemd container factory successfully Feb 13 20:17:32.404641 kubelet[2509]: I0213 20:17:32.404607 2509 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Feb 13 20:17:32.406618 kubelet[2509]: E0213 20:17:32.406595 2509 kubelet.go:1467] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Feb 13 20:17:32.415843 kubelet[2509]: I0213 20:17:32.415807 2509 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Feb 13 20:17:32.417710 kubelet[2509]: I0213 20:17:32.417459 2509 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Feb 13 20:17:32.417710 kubelet[2509]: I0213 20:17:32.417501 2509 status_manager.go:217] "Starting to sync pod status with apiserver" Feb 13 20:17:32.417710 kubelet[2509]: I0213 20:17:32.417518 2509 kubelet.go:2337] "Starting kubelet main sync loop" Feb 13 20:17:32.417710 kubelet[2509]: E0213 20:17:32.417563 2509 kubelet.go:2361] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Feb 13 20:17:32.437577 kubelet[2509]: I0213 20:17:32.437538 2509 cpu_manager.go:214] "Starting CPU manager" policy="none" Feb 13 20:17:32.437577 kubelet[2509]: I0213 20:17:32.437554 2509 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Feb 13 20:17:32.437577 kubelet[2509]: I0213 20:17:32.437574 2509 state_mem.go:36] "Initialized new in-memory state store" Feb 13 20:17:32.437746 kubelet[2509]: I0213 20:17:32.437727 2509 state_mem.go:88] "Updated default CPUSet" cpuSet="" Feb 13 20:17:32.437773 kubelet[2509]: I0213 20:17:32.437745 2509 state_mem.go:96] "Updated CPUSet assignments" assignments={} Feb 13 20:17:32.437773 kubelet[2509]: I0213 20:17:32.437764 2509 policy_none.go:49] "None policy: Start" Feb 13 20:17:32.438381 kubelet[2509]: I0213 20:17:32.438355 2509 memory_manager.go:170] "Starting memorymanager" policy="None" Feb 13 20:17:32.438381 kubelet[2509]: I0213 20:17:32.438378 2509 state_mem.go:35] "Initializing new in-memory state store" Feb 13 20:17:32.438577 kubelet[2509]: I0213 20:17:32.438557 2509 state_mem.go:75] "Updated machine memory state" Feb 13 20:17:32.442589 kubelet[2509]: I0213 20:17:32.442564 2509 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Feb 13 20:17:32.443326 kubelet[2509]: I0213 20:17:32.442736 2509 container_log_manager.go:186] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Feb 13 20:17:32.443326 kubelet[2509]: I0213 20:17:32.443084 2509 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Feb 13 20:17:32.519219 kubelet[2509]: I0213 20:17:32.518477 2509 topology_manager.go:215] "Topology Admit Handler" podUID="72047ab8f55fc4b47dec83af416ed460" podNamespace="kube-system" podName="kube-apiserver-localhost" Feb 13 20:17:32.519219 kubelet[2509]: I0213 20:17:32.518628 2509 topology_manager.go:215] "Topology Admit Handler" podUID="dd3721fb1a67092819e35b40473f4063" podNamespace="kube-system" podName="kube-controller-manager-localhost" Feb 13 20:17:32.519219 kubelet[2509]: I0213 20:17:32.518665 2509 topology_manager.go:215] "Topology Admit Handler" podUID="8d610d6c43052dbc8df47eb68906a982" podNamespace="kube-system" podName="kube-scheduler-localhost" Feb 13 20:17:32.523846 kubelet[2509]: E0213 20:17:32.523696 2509 kubelet.go:1928] "Failed creating a mirror pod for" err="pods \"kube-scheduler-localhost\" already exists" pod="kube-system/kube-scheduler-localhost" Feb 13 20:17:32.547947 kubelet[2509]: I0213 20:17:32.547925 2509 kubelet_node_status.go:73] "Attempting to register node" node="localhost" Feb 13 20:17:32.553906 kubelet[2509]: I0213 20:17:32.553882 2509 kubelet_node_status.go:112] "Node was previously registered" node="localhost" Feb 13 20:17:32.554015 kubelet[2509]: I0213 20:17:32.553999 2509 kubelet_node_status.go:76] "Successfully registered node" node="localhost" Feb 13 20:17:32.603447 kubelet[2509]: I0213 20:17:32.603366 2509 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" 
(UniqueName: \"kubernetes.io/host-path/dd3721fb1a67092819e35b40473f4063-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"dd3721fb1a67092819e35b40473f4063\") " pod="kube-system/kube-controller-manager-localhost" Feb 13 20:17:32.603447 kubelet[2509]: I0213 20:17:32.603416 2509 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/8d610d6c43052dbc8df47eb68906a982-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"8d610d6c43052dbc8df47eb68906a982\") " pod="kube-system/kube-scheduler-localhost" Feb 13 20:17:32.603613 kubelet[2509]: I0213 20:17:32.603458 2509 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/72047ab8f55fc4b47dec83af416ed460-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"72047ab8f55fc4b47dec83af416ed460\") " pod="kube-system/kube-apiserver-localhost" Feb 13 20:17:32.603613 kubelet[2509]: I0213 20:17:32.603479 2509 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/dd3721fb1a67092819e35b40473f4063-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"dd3721fb1a67092819e35b40473f4063\") " pod="kube-system/kube-controller-manager-localhost" Feb 13 20:17:32.603613 kubelet[2509]: I0213 20:17:32.603518 2509 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/dd3721fb1a67092819e35b40473f4063-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"dd3721fb1a67092819e35b40473f4063\") " pod="kube-system/kube-controller-manager-localhost" Feb 13 20:17:32.603613 kubelet[2509]: I0213 20:17:32.603562 2509 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/dd3721fb1a67092819e35b40473f4063-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"dd3721fb1a67092819e35b40473f4063\") " pod="kube-system/kube-controller-manager-localhost" Feb 13 20:17:32.603613 kubelet[2509]: I0213 20:17:32.603593 2509 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/dd3721fb1a67092819e35b40473f4063-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"dd3721fb1a67092819e35b40473f4063\") " pod="kube-system/kube-controller-manager-localhost" Feb 13 20:17:32.603716 kubelet[2509]: I0213 20:17:32.603614 2509 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/72047ab8f55fc4b47dec83af416ed460-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"72047ab8f55fc4b47dec83af416ed460\") " pod="kube-system/kube-apiserver-localhost" Feb 13 20:17:32.603716 kubelet[2509]: I0213 20:17:32.603632 2509 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/72047ab8f55fc4b47dec83af416ed460-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"72047ab8f55fc4b47dec83af416ed460\") " pod="kube-system/kube-apiserver-localhost" Feb 13 20:17:32.825125 kubelet[2509]: E0213 20:17:32.824949 2509 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some 
nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 20:17:32.825125 kubelet[2509]: E0213 20:17:32.824985 2509 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 20:17:32.825125 kubelet[2509]: E0213 20:17:32.825010 2509 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 20:17:33.387809 kubelet[2509]: I0213 20:17:33.387674 2509 apiserver.go:52] "Watching apiserver" Feb 13 20:17:33.403201 kubelet[2509]: I0213 20:17:33.403157 2509 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world" Feb 13 20:17:33.426190 kubelet[2509]: E0213 20:17:33.425955 2509 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 20:17:33.426190 kubelet[2509]: E0213 20:17:33.426095 2509 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 20:17:33.427081 kubelet[2509]: E0213 20:17:33.427055 2509 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 20:17:33.450602 kubelet[2509]: I0213 20:17:33.450537 2509 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=1.450518824 podStartE2EDuration="1.450518824s" podCreationTimestamp="2025-02-13 20:17:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-02-13 20:17:33.441849784 +0000 UTC m=+1.108963977" watchObservedRunningTime="2025-02-13 20:17:33.450518824 +0000 UTC m=+1.117633017" Feb 13 20:17:33.451302 kubelet[2509]: I0213 20:17:33.451252 2509 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=1.451241934 podStartE2EDuration="1.451241934s" podCreationTimestamp="2025-02-13 20:17:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-02-13 20:17:33.450368751 +0000 UTC m=+1.117482944" watchObservedRunningTime="2025-02-13 20:17:33.451241934 +0000 UTC m=+1.118356127" Feb 13 20:17:33.457010 kubelet[2509]: I0213 20:17:33.456362 2509 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=1.456338884 podStartE2EDuration="1.456338884s" podCreationTimestamp="2025-02-13 20:17:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-02-13 20:17:33.456329839 +0000 UTC m=+1.123444032" watchObservedRunningTime="2025-02-13 20:17:33.456338884 +0000 UTC m=+1.123453077" Feb 13 20:17:33.684591 sudo[1576]: pam_unix(sudo:session): session closed for user root Feb 13 20:17:33.686348 sshd[1573]: pam_unix(sshd:session): session closed for user core Feb 13 20:17:33.689768 systemd[1]: sshd@4-10.0.0.10:22-10.0.0.1:44134.service: Deactivated successfully. 
Feb 13 20:17:33.691300 systemd[1]: session-5.scope: Deactivated successfully. Feb 13 20:17:33.691483 systemd[1]: session-5.scope: Consumed 6.607s CPU time, 189.5M memory peak, 0B memory swap peak. Feb 13 20:17:33.693112 systemd-logind[1420]: Session 5 logged out. Waiting for processes to exit. Feb 13 20:17:33.694239 systemd-logind[1420]: Removed session 5. Feb 13 20:17:34.427951 kubelet[2509]: E0213 20:17:34.427797 2509 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 20:17:37.345634 kubelet[2509]: E0213 20:17:37.345585 2509 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 20:17:37.432468 kubelet[2509]: E0213 20:17:37.432416 2509 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 20:17:38.434026 kubelet[2509]: E0213 20:17:38.433960 2509 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 20:17:38.676911 kubelet[2509]: E0213 20:17:38.676622 2509 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 20:17:39.435605 kubelet[2509]: E0213 20:17:39.435573 2509 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 20:17:39.932249 kubelet[2509]: E0213 20:17:39.932159 2509 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 20:17:40.437386 kubelet[2509]: E0213 20:17:40.436789 2509 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 20:17:46.120131 update_engine[1424]: I20250213 20:17:46.120047 1424 update_attempter.cc:509] Updating boot flags... Feb 13 20:17:46.139486 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 40 scanned by (udev-worker) (2586) Feb 13 20:17:46.163512 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 40 scanned by (udev-worker) (2589) Feb 13 20:17:46.185445 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 40 scanned by (udev-worker) (2589) Feb 13 20:17:46.744036 kubelet[2509]: I0213 20:17:46.743998 2509 kuberuntime_manager.go:1523] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Feb 13 20:17:46.744392 containerd[1437]: time="2025-02-13T20:17:46.744355773Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." 
Feb 13 20:17:46.744664 kubelet[2509]: I0213 20:17:46.744636 2509 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Feb 13 20:17:47.268612 kubelet[2509]: I0213 20:17:47.268561 2509 topology_manager.go:215] "Topology Admit Handler" podUID="474777d0-3a2c-4895-9674-526928fb1d71" podNamespace="kube-system" podName="kube-proxy-b4vc6" Feb 13 20:17:47.270309 kubelet[2509]: I0213 20:17:47.270276 2509 topology_manager.go:215] "Topology Admit Handler" podUID="db3e2d2a-fd5b-4cce-b6ee-04e217b474ca" podNamespace="kube-flannel" podName="kube-flannel-ds-hvbks" Feb 13 20:17:47.282032 systemd[1]: Created slice kubepods-besteffort-pod474777d0_3a2c_4895_9674_526928fb1d71.slice - libcontainer container kubepods-besteffort-pod474777d0_3a2c_4895_9674_526928fb1d71.slice. Feb 13 20:17:47.296081 systemd[1]: Created slice kubepods-burstable-poddb3e2d2a_fd5b_4cce_b6ee_04e217b474ca.slice - libcontainer container kubepods-burstable-poddb3e2d2a_fd5b_4cce_b6ee_04e217b474ca.slice. Feb 13 20:17:47.296909 kubelet[2509]: I0213 20:17:47.296884 2509 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni\" (UniqueName: \"kubernetes.io/host-path/db3e2d2a-fd5b-4cce-b6ee-04e217b474ca-cni\") pod \"kube-flannel-ds-hvbks\" (UID: \"db3e2d2a-fd5b-4cce-b6ee-04e217b474ca\") " pod="kube-flannel/kube-flannel-ds-hvbks" Feb 13 20:17:47.297625 kubelet[2509]: I0213 20:17:47.297506 2509 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-plugin\" (UniqueName: \"kubernetes.io/host-path/db3e2d2a-fd5b-4cce-b6ee-04e217b474ca-cni-plugin\") pod \"kube-flannel-ds-hvbks\" (UID: \"db3e2d2a-fd5b-4cce-b6ee-04e217b474ca\") " pod="kube-flannel/kube-flannel-ds-hvbks" Feb 13 20:17:47.297625 kubelet[2509]: I0213 20:17:47.297535 2509 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run\" (UniqueName: \"kubernetes.io/host-path/db3e2d2a-fd5b-4cce-b6ee-04e217b474ca-run\") pod \"kube-flannel-ds-hvbks\" (UID: \"db3e2d2a-fd5b-4cce-b6ee-04e217b474ca\") " pod="kube-flannel/kube-flannel-ds-hvbks" Feb 13 20:17:47.297625 kubelet[2509]: I0213 20:17:47.297553 2509 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/474777d0-3a2c-4895-9674-526928fb1d71-lib-modules\") pod \"kube-proxy-b4vc6\" (UID: \"474777d0-3a2c-4895-9674-526928fb1d71\") " pod="kube-system/kube-proxy-b4vc6" Feb 13 20:17:47.297625 kubelet[2509]: I0213 20:17:47.297569 2509 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k8p9l\" (UniqueName: \"kubernetes.io/projected/474777d0-3a2c-4895-9674-526928fb1d71-kube-api-access-k8p9l\") pod \"kube-proxy-b4vc6\" (UID: \"474777d0-3a2c-4895-9674-526928fb1d71\") " pod="kube-system/kube-proxy-b4vc6" Feb 13 20:17:47.297625 kubelet[2509]: I0213 20:17:47.297586 2509 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/474777d0-3a2c-4895-9674-526928fb1d71-kube-proxy\") pod \"kube-proxy-b4vc6\" (UID: \"474777d0-3a2c-4895-9674-526928fb1d71\") " pod="kube-system/kube-proxy-b4vc6" Feb 13 20:17:47.297943 kubelet[2509]: I0213 20:17:47.297820 2509 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flannel-cfg\" (UniqueName: \"kubernetes.io/configmap/db3e2d2a-fd5b-4cce-b6ee-04e217b474ca-flannel-cfg\") pod 
\"kube-flannel-ds-hvbks\" (UID: \"db3e2d2a-fd5b-4cce-b6ee-04e217b474ca\") " pod="kube-flannel/kube-flannel-ds-hvbks" Feb 13 20:17:47.297943 kubelet[2509]: I0213 20:17:47.297900 2509 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/db3e2d2a-fd5b-4cce-b6ee-04e217b474ca-xtables-lock\") pod \"kube-flannel-ds-hvbks\" (UID: \"db3e2d2a-fd5b-4cce-b6ee-04e217b474ca\") " pod="kube-flannel/kube-flannel-ds-hvbks" Feb 13 20:17:47.297943 kubelet[2509]: I0213 20:17:47.297931 2509 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zcvvd\" (UniqueName: \"kubernetes.io/projected/db3e2d2a-fd5b-4cce-b6ee-04e217b474ca-kube-api-access-zcvvd\") pod \"kube-flannel-ds-hvbks\" (UID: \"db3e2d2a-fd5b-4cce-b6ee-04e217b474ca\") " pod="kube-flannel/kube-flannel-ds-hvbks" Feb 13 20:17:47.298207 kubelet[2509]: I0213 20:17:47.297954 2509 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/474777d0-3a2c-4895-9674-526928fb1d71-xtables-lock\") pod \"kube-proxy-b4vc6\" (UID: \"474777d0-3a2c-4895-9674-526928fb1d71\") " pod="kube-system/kube-proxy-b4vc6" Feb 13 20:17:47.406885 kubelet[2509]: E0213 20:17:47.406846 2509 projected.go:294] Couldn't get configMap kube-flannel/kube-root-ca.crt: configmap "kube-root-ca.crt" not found Feb 13 20:17:47.406885 kubelet[2509]: E0213 20:17:47.406890 2509 projected.go:200] Error preparing data for projected volume kube-api-access-zcvvd for pod kube-flannel/kube-flannel-ds-hvbks: configmap "kube-root-ca.crt" not found Feb 13 20:17:47.407047 kubelet[2509]: E0213 20:17:47.406957 2509 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/db3e2d2a-fd5b-4cce-b6ee-04e217b474ca-kube-api-access-zcvvd podName:db3e2d2a-fd5b-4cce-b6ee-04e217b474ca nodeName:}" failed. No retries permitted until 2025-02-13 20:17:47.906927989 +0000 UTC m=+15.574042182 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-zcvvd" (UniqueName: "kubernetes.io/projected/db3e2d2a-fd5b-4cce-b6ee-04e217b474ca-kube-api-access-zcvvd") pod "kube-flannel-ds-hvbks" (UID: "db3e2d2a-fd5b-4cce-b6ee-04e217b474ca") : configmap "kube-root-ca.crt" not found Feb 13 20:17:47.407099 kubelet[2509]: E0213 20:17:47.406866 2509 projected.go:294] Couldn't get configMap kube-system/kube-root-ca.crt: configmap "kube-root-ca.crt" not found Feb 13 20:17:47.407099 kubelet[2509]: E0213 20:17:47.407064 2509 projected.go:200] Error preparing data for projected volume kube-api-access-k8p9l for pod kube-system/kube-proxy-b4vc6: configmap "kube-root-ca.crt" not found Feb 13 20:17:47.407150 kubelet[2509]: E0213 20:17:47.407123 2509 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/474777d0-3a2c-4895-9674-526928fb1d71-kube-api-access-k8p9l podName:474777d0-3a2c-4895-9674-526928fb1d71 nodeName:}" failed. No retries permitted until 2025-02-13 20:17:47.907089306 +0000 UTC m=+15.574203499 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "kube-api-access-k8p9l" (UniqueName: "kubernetes.io/projected/474777d0-3a2c-4895-9674-526928fb1d71-kube-api-access-k8p9l") pod "kube-proxy-b4vc6" (UID: "474777d0-3a2c-4895-9674-526928fb1d71") : configmap "kube-root-ca.crt" not found Feb 13 20:17:48.194746 kubelet[2509]: E0213 20:17:48.194665 2509 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 20:17:48.195590 containerd[1437]: time="2025-02-13T20:17:48.195212414Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-b4vc6,Uid:474777d0-3a2c-4895-9674-526928fb1d71,Namespace:kube-system,Attempt:0,}" Feb 13 20:17:48.198964 kubelet[2509]: E0213 20:17:48.198921 2509 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 20:17:48.199363 containerd[1437]: time="2025-02-13T20:17:48.199325269Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-flannel-ds-hvbks,Uid:db3e2d2a-fd5b-4cce-b6ee-04e217b474ca,Namespace:kube-flannel,Attempt:0,}" Feb 13 20:17:48.217913 containerd[1437]: time="2025-02-13T20:17:48.217806208Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 20:17:48.217913 containerd[1437]: time="2025-02-13T20:17:48.217870462Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 20:17:48.217913 containerd[1437]: time="2025-02-13T20:17:48.217888265Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 20:17:48.218183 containerd[1437]: time="2025-02-13T20:17:48.217975044Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 20:17:48.218750 containerd[1437]: time="2025-02-13T20:17:48.218670756Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 20:17:48.218750 containerd[1437]: time="2025-02-13T20:17:48.218720006Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 20:17:48.218750 containerd[1437]: time="2025-02-13T20:17:48.218735850Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 20:17:48.218885 containerd[1437]: time="2025-02-13T20:17:48.218807465Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 20:17:48.242628 systemd[1]: Started cri-containerd-bace7cbe2facace5b95853edfdf5884ba823d2584ae21f981b17e465480f6a07.scope - libcontainer container bace7cbe2facace5b95853edfdf5884ba823d2584ae21f981b17e465480f6a07. Feb 13 20:17:48.245394 systemd[1]: Started cri-containerd-16a1af8b29b55d8ed0e83954fccf327b3a7dba923251240d9db1714390c4c8e5.scope - libcontainer container 16a1af8b29b55d8ed0e83954fccf327b3a7dba923251240d9db1714390c4c8e5. 
Feb 13 20:17:48.263514 containerd[1437]: time="2025-02-13T20:17:48.263459096Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-b4vc6,Uid:474777d0-3a2c-4895-9674-526928fb1d71,Namespace:kube-system,Attempt:0,} returns sandbox id \"bace7cbe2facace5b95853edfdf5884ba823d2584ae21f981b17e465480f6a07\"" Feb 13 20:17:48.264497 kubelet[2509]: E0213 20:17:48.264418 2509 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 20:17:48.271731 containerd[1437]: time="2025-02-13T20:17:48.271592424Z" level=info msg="CreateContainer within sandbox \"bace7cbe2facace5b95853edfdf5884ba823d2584ae21f981b17e465480f6a07\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Feb 13 20:17:48.275283 containerd[1437]: time="2025-02-13T20:17:48.275250380Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-flannel-ds-hvbks,Uid:db3e2d2a-fd5b-4cce-b6ee-04e217b474ca,Namespace:kube-flannel,Attempt:0,} returns sandbox id \"16a1af8b29b55d8ed0e83954fccf327b3a7dba923251240d9db1714390c4c8e5\"" Feb 13 20:17:48.276134 kubelet[2509]: E0213 20:17:48.276108 2509 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 20:17:48.277325 containerd[1437]: time="2025-02-13T20:17:48.277224569Z" level=info msg="PullImage \"docker.io/flannel/flannel-cni-plugin:v1.1.2\"" Feb 13 20:17:48.285934 containerd[1437]: time="2025-02-13T20:17:48.285905617Z" level=info msg="CreateContainer within sandbox \"bace7cbe2facace5b95853edfdf5884ba823d2584ae21f981b17e465480f6a07\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"5437c7f073b623dea2841cfe7e20b1f3cefa3857dd9c5bceb7524032fcf5eea6\"" Feb 13 20:17:48.286455 containerd[1437]: time="2025-02-13T20:17:48.286412727Z" level=info msg="StartContainer for \"5437c7f073b623dea2841cfe7e20b1f3cefa3857dd9c5bceb7524032fcf5eea6\"" Feb 13 20:17:48.311597 systemd[1]: Started cri-containerd-5437c7f073b623dea2841cfe7e20b1f3cefa3857dd9c5bceb7524032fcf5eea6.scope - libcontainer container 5437c7f073b623dea2841cfe7e20b1f3cefa3857dd9c5bceb7524032fcf5eea6. 
Feb 13 20:17:48.334647 containerd[1437]: time="2025-02-13T20:17:48.334602807Z" level=info msg="StartContainer for \"5437c7f073b623dea2841cfe7e20b1f3cefa3857dd9c5bceb7524032fcf5eea6\" returns successfully" Feb 13 20:17:48.455360 kubelet[2509]: E0213 20:17:48.454584 2509 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 20:17:48.462505 kubelet[2509]: I0213 20:17:48.462413 2509 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-b4vc6" podStartSLOduration=1.462397438 podStartE2EDuration="1.462397438s" podCreationTimestamp="2025-02-13 20:17:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-02-13 20:17:48.461874084 +0000 UTC m=+16.128988277" watchObservedRunningTime="2025-02-13 20:17:48.462397438 +0000 UTC m=+16.129511631" Feb 13 20:17:49.451559 containerd[1437]: time="2025-02-13T20:17:49.451502775Z" level=error msg="PullImage \"docker.io/flannel/flannel-cni-plugin:v1.1.2\" failed" error="failed to pull and unpack image \"docker.io/flannel/flannel-cni-plugin:v1.1.2\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/flannel/flannel-cni-plugin/manifests/sha256:bf4b62b131666d040f35a327d906ee5a3418280b68a88d9b9c7e828057210443: 429 Too Many Requests - Server message: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit" Feb 13 20:17:49.451914 containerd[1437]: time="2025-02-13T20:17:49.451580111Z" level=info msg="stop pulling image docker.io/flannel/flannel-cni-plugin:v1.1.2: active requests=0, bytes read=11110" Feb 13 20:17:49.451959 kubelet[2509]: E0213 20:17:49.451719 2509 remote_image.go:180] "PullImage from image service failed" err="rpc error: code = Unknown desc = failed to pull and unpack image \"docker.io/flannel/flannel-cni-plugin:v1.1.2\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/flannel/flannel-cni-plugin/manifests/sha256:bf4b62b131666d040f35a327d906ee5a3418280b68a88d9b9c7e828057210443: 429 Too Many Requests - Server message: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit" image="docker.io/flannel/flannel-cni-plugin:v1.1.2" Feb 13 20:17:49.451959 kubelet[2509]: E0213 20:17:49.451784 2509 kuberuntime_image.go:55] "Failed to pull image" err="failed to pull and unpack image \"docker.io/flannel/flannel-cni-plugin:v1.1.2\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/flannel/flannel-cni-plugin/manifests/sha256:bf4b62b131666d040f35a327d906ee5a3418280b68a88d9b9c7e828057210443: 429 Too Many Requests - Server message: toomanyrequests: You have reached your pull rate limit. 
You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit" image="docker.io/flannel/flannel-cni-plugin:v1.1.2" Feb 13 20:17:49.452347 kubelet[2509]: E0213 20:17:49.451965 2509 kuberuntime_manager.go:1256] init container &Container{Name:install-cni-plugin,Image:docker.io/flannel/flannel-cni-plugin:v1.1.2,Command:[cp],Args:[-f /flannel /opt/cni/bin/flannel],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:cni-plugin,ReadOnly:false,MountPath:/opt/cni/bin,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-zcvvd,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod kube-flannel-ds-hvbks_kube-flannel(db3e2d2a-fd5b-4cce-b6ee-04e217b474ca): ErrImagePull: failed to pull and unpack image "docker.io/flannel/flannel-cni-plugin:v1.1.2": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/flannel/flannel-cni-plugin/manifests/sha256:bf4b62b131666d040f35a327d906ee5a3418280b68a88d9b9c7e828057210443: 429 Too Many Requests - Server message: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit Feb 13 20:17:49.453341 kubelet[2509]: E0213 20:17:49.451998 2509 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"install-cni-plugin\" with ErrImagePull: \"failed to pull and unpack image \\\"docker.io/flannel/flannel-cni-plugin:v1.1.2\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/flannel/flannel-cni-plugin/manifests/sha256:bf4b62b131666d040f35a327d906ee5a3418280b68a88d9b9c7e828057210443: 429 Too Many Requests - Server message: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit\"" pod="kube-flannel/kube-flannel-ds-hvbks" podUID="db3e2d2a-fd5b-4cce-b6ee-04e217b474ca" Feb 13 20:17:49.454382 kubelet[2509]: E0213 20:17:49.454324 2509 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 20:17:49.454874 kubelet[2509]: E0213 20:17:49.454844 2509 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"install-cni-plugin\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/flannel/flannel-cni-plugin:v1.1.2\\\"\"" pod="kube-flannel/kube-flannel-ds-hvbks" podUID="db3e2d2a-fd5b-4cce-b6ee-04e217b474ca" Feb 13 20:17:57.987321 systemd[1]: Started sshd@5-10.0.0.10:22-10.0.0.1:54836.service - OpenSSH per-connection server daemon (10.0.0.1:54836). 
Feb 13 20:17:58.023854 sshd[2833]: Accepted publickey for core from 10.0.0.1 port 54836 ssh2: RSA SHA256:LaFXDQ5kJ4LcD/Es1CLb+6ve3k3017804ZdZMiXaUQg Feb 13 20:17:58.025266 sshd[2833]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 20:17:58.028622 systemd-logind[1420]: New session 6 of user core. Feb 13 20:17:58.043631 systemd[1]: Started session-6.scope - Session 6 of User core. Feb 13 20:17:58.158125 sshd[2833]: pam_unix(sshd:session): session closed for user core Feb 13 20:17:58.161542 systemd[1]: sshd@5-10.0.0.10:22-10.0.0.1:54836.service: Deactivated successfully. Feb 13 20:17:58.163185 systemd[1]: session-6.scope: Deactivated successfully. Feb 13 20:17:58.163809 systemd-logind[1420]: Session 6 logged out. Waiting for processes to exit. Feb 13 20:17:58.164701 systemd-logind[1420]: Removed session 6. Feb 13 20:18:00.418732 kubelet[2509]: E0213 20:18:00.418574 2509 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 20:18:00.421832 containerd[1437]: time="2025-02-13T20:18:00.421779747Z" level=info msg="PullImage \"docker.io/flannel/flannel-cni-plugin:v1.1.2\"" Feb 13 20:18:01.564619 containerd[1437]: time="2025-02-13T20:18:01.564542433Z" level=error msg="PullImage \"docker.io/flannel/flannel-cni-plugin:v1.1.2\" failed" error="failed to pull and unpack image \"docker.io/flannel/flannel-cni-plugin:v1.1.2\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/flannel/flannel-cni-plugin/manifests/sha256:bf4b62b131666d040f35a327d906ee5a3418280b68a88d9b9c7e828057210443: 429 Too Many Requests - Server message: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit" Feb 13 20:18:01.565075 containerd[1437]: time="2025-02-13T20:18:01.564611441Z" level=info msg="stop pulling image docker.io/flannel/flannel-cni-plugin:v1.1.2: active requests=0, bytes read=11109" Feb 13 20:18:01.565111 kubelet[2509]: E0213 20:18:01.564746 2509 remote_image.go:180] "PullImage from image service failed" err="rpc error: code = Unknown desc = failed to pull and unpack image \"docker.io/flannel/flannel-cni-plugin:v1.1.2\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/flannel/flannel-cni-plugin/manifests/sha256:bf4b62b131666d040f35a327d906ee5a3418280b68a88d9b9c7e828057210443: 429 Too Many Requests - Server message: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit" image="docker.io/flannel/flannel-cni-plugin:v1.1.2" Feb 13 20:18:01.565111 kubelet[2509]: E0213 20:18:01.564803 2509 kuberuntime_image.go:55] "Failed to pull image" err="failed to pull and unpack image \"docker.io/flannel/flannel-cni-plugin:v1.1.2\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/flannel/flannel-cni-plugin/manifests/sha256:bf4b62b131666d040f35a327d906ee5a3418280b68a88d9b9c7e828057210443: 429 Too Many Requests - Server message: toomanyrequests: You have reached your pull rate limit. 
You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit" image="docker.io/flannel/flannel-cni-plugin:v1.1.2" Feb 13 20:18:01.565408 kubelet[2509]: E0213 20:18:01.564888 2509 kuberuntime_manager.go:1256] init container &Container{Name:install-cni-plugin,Image:docker.io/flannel/flannel-cni-plugin:v1.1.2,Command:[cp],Args:[-f /flannel /opt/cni/bin/flannel],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:cni-plugin,ReadOnly:false,MountPath:/opt/cni/bin,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-zcvvd,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod kube-flannel-ds-hvbks_kube-flannel(db3e2d2a-fd5b-4cce-b6ee-04e217b474ca): ErrImagePull: failed to pull and unpack image "docker.io/flannel/flannel-cni-plugin:v1.1.2": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/flannel/flannel-cni-plugin/manifests/sha256:bf4b62b131666d040f35a327d906ee5a3418280b68a88d9b9c7e828057210443: 429 Too Many Requests - Server message: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit Feb 13 20:18:01.565482 kubelet[2509]: E0213 20:18:01.564918 2509 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"install-cni-plugin\" with ErrImagePull: \"failed to pull and unpack image \\\"docker.io/flannel/flannel-cni-plugin:v1.1.2\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/flannel/flannel-cni-plugin/manifests/sha256:bf4b62b131666d040f35a327d906ee5a3418280b68a88d9b9c7e828057210443: 429 Too Many Requests - Server message: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit\"" pod="kube-flannel/kube-flannel-ds-hvbks" podUID="db3e2d2a-fd5b-4cce-b6ee-04e217b474ca" Feb 13 20:18:03.169751 systemd[1]: Started sshd@6-10.0.0.10:22-10.0.0.1:40126.service - OpenSSH per-connection server daemon (10.0.0.1:40126). Feb 13 20:18:03.207326 sshd[2851]: Accepted publickey for core from 10.0.0.1 port 40126 ssh2: RSA SHA256:LaFXDQ5kJ4LcD/Es1CLb+6ve3k3017804ZdZMiXaUQg Feb 13 20:18:03.208545 sshd[2851]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 20:18:03.212442 systemd-logind[1420]: New session 7 of user core. Feb 13 20:18:03.225576 systemd[1]: Started session-7.scope - Session 7 of User core. Feb 13 20:18:03.329320 sshd[2851]: pam_unix(sshd:session): session closed for user core Feb 13 20:18:03.332607 systemd[1]: sshd@6-10.0.0.10:22-10.0.0.1:40126.service: Deactivated successfully. Feb 13 20:18:03.334144 systemd[1]: session-7.scope: Deactivated successfully. Feb 13 20:18:03.336253 systemd-logind[1420]: Session 7 logged out. Waiting for processes to exit. 
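
The repeated 429 responses above are Docker Hub's anonymous pull quota being exhausted for this node's IP. Below is a minimal stdlib-Python sketch of how one might check the remaining quota out of band, assuming Docker Hub's documented rate-limit preview endpoints (the auth.docker.io token service and the ratelimitpreview/test repository); none of this appears in the log itself.

import json
import urllib.request

# Assumed endpoints: Docker Hub's documented rate-limit check, which issues an
# anonymous pull token and reads the ratelimit-* headers from a HEAD request.
TOKEN_URL = ("https://auth.docker.io/token"
             "?service=registry.docker.io"
             "&scope=repository:ratelimitpreview/test:pull")
MANIFEST_URL = ("https://registry-1.docker.io/v2/"
                "ratelimitpreview/test/manifests/latest")

def remaining_pulls():
    token = json.load(urllib.request.urlopen(TOKEN_URL))["token"]
    req = urllib.request.Request(
        MANIFEST_URL, method="HEAD",
        headers={"Authorization": "Bearer " + token})
    with urllib.request.urlopen(req) as resp:
        # Header values look like "100;w=21600": 100 pulls per 21600 s window.
        return (resp.headers.get("ratelimit-limit"),
                resp.headers.get("ratelimit-remaining"))

if __name__ == "__main__":
    limit, remaining = remaining_pulls()
    print("limit:", limit, "remaining:", remaining)

Per Docker's documentation, HEAD manifest requests are not counted against the quota, and authenticating (as the error message itself suggests) raises it.
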
Feb 13 20:18:03.337406 systemd-logind[1420]: Removed session 7. Feb 13 20:18:08.341210 systemd[1]: Started sshd@7-10.0.0.10:22-10.0.0.1:40132.service - OpenSSH per-connection server daemon (10.0.0.1:40132). Feb 13 20:18:08.378039 sshd[2867]: Accepted publickey for core from 10.0.0.1 port 40132 ssh2: RSA SHA256:LaFXDQ5kJ4LcD/Es1CLb+6ve3k3017804ZdZMiXaUQg Feb 13 20:18:08.379368 sshd[2867]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 20:18:08.383244 systemd-logind[1420]: New session 8 of user core. Feb 13 20:18:08.391618 systemd[1]: Started session-8.scope - Session 8 of User core. Feb 13 20:18:08.497262 sshd[2867]: pam_unix(sshd:session): session closed for user core Feb 13 20:18:08.500506 systemd[1]: sshd@7-10.0.0.10:22-10.0.0.1:40132.service: Deactivated successfully. Feb 13 20:18:08.502787 systemd[1]: session-8.scope: Deactivated successfully. Feb 13 20:18:08.503480 systemd-logind[1420]: Session 8 logged out. Waiting for processes to exit. Feb 13 20:18:08.504288 systemd-logind[1420]: Removed session 8. Feb 13 20:18:13.507876 systemd[1]: Started sshd@8-10.0.0.10:22-10.0.0.1:34630.service - OpenSSH per-connection server daemon (10.0.0.1:34630). Feb 13 20:18:13.544803 sshd[2882]: Accepted publickey for core from 10.0.0.1 port 34630 ssh2: RSA SHA256:LaFXDQ5kJ4LcD/Es1CLb+6ve3k3017804ZdZMiXaUQg Feb 13 20:18:13.546052 sshd[2882]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 20:18:13.549410 systemd-logind[1420]: New session 9 of user core. Feb 13 20:18:13.559624 systemd[1]: Started session-9.scope - Session 9 of User core. Feb 13 20:18:13.664915 sshd[2882]: pam_unix(sshd:session): session closed for user core Feb 13 20:18:13.668212 systemd[1]: sshd@8-10.0.0.10:22-10.0.0.1:34630.service: Deactivated successfully. Feb 13 20:18:13.669832 systemd[1]: session-9.scope: Deactivated successfully. Feb 13 20:18:13.670388 systemd-logind[1420]: Session 9 logged out. Waiting for processes to exit. Feb 13 20:18:13.671306 systemd-logind[1420]: Removed session 9. Feb 13 20:18:15.418879 kubelet[2509]: E0213 20:18:15.418816 2509 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 20:18:15.420091 kubelet[2509]: E0213 20:18:15.420060 2509 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"install-cni-plugin\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/flannel/flannel-cni-plugin:v1.1.2\\\"\"" pod="kube-flannel/kube-flannel-ds-hvbks" podUID="db3e2d2a-fd5b-4cce-b6ee-04e217b474ca" Feb 13 20:18:18.675932 systemd[1]: Started sshd@9-10.0.0.10:22-10.0.0.1:34634.service - OpenSSH per-connection server daemon (10.0.0.1:34634). Feb 13 20:18:18.713251 sshd[2899]: Accepted publickey for core from 10.0.0.1 port 34634 ssh2: RSA SHA256:LaFXDQ5kJ4LcD/Es1CLb+6ve3k3017804ZdZMiXaUQg Feb 13 20:18:18.714528 sshd[2899]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 20:18:18.718233 systemd-logind[1420]: New session 10 of user core. Feb 13 20:18:18.724580 systemd[1]: Started session-10.scope - Session 10 of User core. Feb 13 20:18:18.827172 sshd[2899]: pam_unix(sshd:session): session closed for user core Feb 13 20:18:18.830912 systemd-logind[1420]: Session 10 logged out. Waiting for processes to exit. Feb 13 20:18:18.831255 systemd[1]: sshd@9-10.0.0.10:22-10.0.0.1:34634.service: Deactivated successfully. 
Feb 13 20:18:18.833836 systemd[1]: session-10.scope: Deactivated successfully. Feb 13 20:18:18.835126 systemd-logind[1420]: Removed session 10. Feb 13 20:18:23.837872 systemd[1]: Started sshd@10-10.0.0.10:22-10.0.0.1:53502.service - OpenSSH per-connection server daemon (10.0.0.1:53502). Feb 13 20:18:23.874788 sshd[2914]: Accepted publickey for core from 10.0.0.1 port 53502 ssh2: RSA SHA256:LaFXDQ5kJ4LcD/Es1CLb+6ve3k3017804ZdZMiXaUQg Feb 13 20:18:23.876004 sshd[2914]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 20:18:23.879595 systemd-logind[1420]: New session 11 of user core. Feb 13 20:18:23.896567 systemd[1]: Started session-11.scope - Session 11 of User core. Feb 13 20:18:24.001654 sshd[2914]: pam_unix(sshd:session): session closed for user core Feb 13 20:18:24.005316 systemd[1]: sshd@10-10.0.0.10:22-10.0.0.1:53502.service: Deactivated successfully. Feb 13 20:18:24.007088 systemd[1]: session-11.scope: Deactivated successfully. Feb 13 20:18:24.007879 systemd-logind[1420]: Session 11 logged out. Waiting for processes to exit. Feb 13 20:18:24.008734 systemd-logind[1420]: Removed session 11. Feb 13 20:18:29.011939 systemd[1]: Started sshd@11-10.0.0.10:22-10.0.0.1:53508.service - OpenSSH per-connection server daemon (10.0.0.1:53508). Feb 13 20:18:29.048757 sshd[2929]: Accepted publickey for core from 10.0.0.1 port 53508 ssh2: RSA SHA256:LaFXDQ5kJ4LcD/Es1CLb+6ve3k3017804ZdZMiXaUQg Feb 13 20:18:29.049994 sshd[2929]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 20:18:29.053828 systemd-logind[1420]: New session 12 of user core. Feb 13 20:18:29.066579 systemd[1]: Started session-12.scope - Session 12 of User core. Feb 13 20:18:29.168354 sshd[2929]: pam_unix(sshd:session): session closed for user core Feb 13 20:18:29.172124 systemd[1]: sshd@11-10.0.0.10:22-10.0.0.1:53508.service: Deactivated successfully. Feb 13 20:18:29.173751 systemd[1]: session-12.scope: Deactivated successfully. Feb 13 20:18:29.175174 systemd-logind[1420]: Session 12 logged out. Waiting for processes to exit. Feb 13 20:18:29.176396 systemd-logind[1420]: Removed session 12. Feb 13 20:18:30.418498 kubelet[2509]: E0213 20:18:30.418175 2509 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 20:18:30.419546 containerd[1437]: time="2025-02-13T20:18:30.419494005Z" level=info msg="PullImage \"docker.io/flannel/flannel-cni-plugin:v1.1.2\"" Feb 13 20:18:31.549810 containerd[1437]: time="2025-02-13T20:18:31.549746283Z" level=error msg="PullImage \"docker.io/flannel/flannel-cni-plugin:v1.1.2\" failed" error="failed to pull and unpack image \"docker.io/flannel/flannel-cni-plugin:v1.1.2\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/flannel/flannel-cni-plugin/manifests/sha256:bf4b62b131666d040f35a327d906ee5a3418280b68a88d9b9c7e828057210443: 429 Too Many Requests - Server message: toomanyrequests: You have reached your pull rate limit. 
You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit" Feb 13 20:18:31.549810 containerd[1437]: time="2025-02-13T20:18:31.549833208Z" level=info msg="stop pulling image docker.io/flannel/flannel-cni-plugin:v1.1.2: active requests=0, bytes read=11110" Feb 13 20:18:31.550266 kubelet[2509]: E0213 20:18:31.549924 2509 remote_image.go:180] "PullImage from image service failed" err="rpc error: code = Unknown desc = failed to pull and unpack image \"docker.io/flannel/flannel-cni-plugin:v1.1.2\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/flannel/flannel-cni-plugin/manifests/sha256:bf4b62b131666d040f35a327d906ee5a3418280b68a88d9b9c7e828057210443: 429 Too Many Requests - Server message: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit" image="docker.io/flannel/flannel-cni-plugin:v1.1.2" Feb 13 20:18:31.550266 kubelet[2509]: E0213 20:18:31.549970 2509 kuberuntime_image.go:55] "Failed to pull image" err="failed to pull and unpack image \"docker.io/flannel/flannel-cni-plugin:v1.1.2\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/flannel/flannel-cni-plugin/manifests/sha256:bf4b62b131666d040f35a327d906ee5a3418280b68a88d9b9c7e828057210443: 429 Too Many Requests - Server message: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit" image="docker.io/flannel/flannel-cni-plugin:v1.1.2" Feb 13 20:18:31.550563 kubelet[2509]: E0213 20:18:31.550080 2509 kuberuntime_manager.go:1256] init container &Container{Name:install-cni-plugin,Image:docker.io/flannel/flannel-cni-plugin:v1.1.2,Command:[cp],Args:[-f /flannel /opt/cni/bin/flannel],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:cni-plugin,ReadOnly:false,MountPath:/opt/cni/bin,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-zcvvd,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod kube-flannel-ds-hvbks_kube-flannel(db3e2d2a-fd5b-4cce-b6ee-04e217b474ca): ErrImagePull: failed to pull and unpack image "docker.io/flannel/flannel-cni-plugin:v1.1.2": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/flannel/flannel-cni-plugin/manifests/sha256:bf4b62b131666d040f35a327d906ee5a3418280b68a88d9b9c7e828057210443: 429 Too Many Requests - Server message: toomanyrequests: You have reached your pull rate limit. 
You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit Feb 13 20:18:31.550625 kubelet[2509]: E0213 20:18:31.550108 2509 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"install-cni-plugin\" with ErrImagePull: \"failed to pull and unpack image \\\"docker.io/flannel/flannel-cni-plugin:v1.1.2\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/flannel/flannel-cni-plugin/manifests/sha256:bf4b62b131666d040f35a327d906ee5a3418280b68a88d9b9c7e828057210443: 429 Too Many Requests - Server message: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit\"" pod="kube-flannel/kube-flannel-ds-hvbks" podUID="db3e2d2a-fd5b-4cce-b6ee-04e217b474ca" Feb 13 20:18:34.179153 systemd[1]: Started sshd@12-10.0.0.10:22-10.0.0.1:51884.service - OpenSSH per-connection server daemon (10.0.0.1:51884). Feb 13 20:18:34.215961 sshd[2946]: Accepted publickey for core from 10.0.0.1 port 51884 ssh2: RSA SHA256:LaFXDQ5kJ4LcD/Es1CLb+6ve3k3017804ZdZMiXaUQg Feb 13 20:18:34.217179 sshd[2946]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 20:18:34.221217 systemd-logind[1420]: New session 13 of user core. Feb 13 20:18:34.238573 systemd[1]: Started session-13.scope - Session 13 of User core. Feb 13 20:18:34.344363 sshd[2946]: pam_unix(sshd:session): session closed for user core Feb 13 20:18:34.347497 systemd[1]: sshd@12-10.0.0.10:22-10.0.0.1:51884.service: Deactivated successfully. Feb 13 20:18:34.349767 systemd[1]: session-13.scope: Deactivated successfully. Feb 13 20:18:34.350687 systemd-logind[1420]: Session 13 logged out. Waiting for processes to exit. Feb 13 20:18:34.351766 systemd-logind[1420]: Removed session 13. Feb 13 20:18:39.354895 systemd[1]: Started sshd@13-10.0.0.10:22-10.0.0.1:51890.service - OpenSSH per-connection server daemon (10.0.0.1:51890). Feb 13 20:18:39.391163 sshd[2961]: Accepted publickey for core from 10.0.0.1 port 51890 ssh2: RSA SHA256:LaFXDQ5kJ4LcD/Es1CLb+6ve3k3017804ZdZMiXaUQg Feb 13 20:18:39.392378 sshd[2961]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 20:18:39.395844 systemd-logind[1420]: New session 14 of user core. Feb 13 20:18:39.403637 systemd[1]: Started session-14.scope - Session 14 of User core. Feb 13 20:18:39.508391 sshd[2961]: pam_unix(sshd:session): session closed for user core Feb 13 20:18:39.511531 systemd[1]: sshd@13-10.0.0.10:22-10.0.0.1:51890.service: Deactivated successfully. Feb 13 20:18:39.513773 systemd[1]: session-14.scope: Deactivated successfully. Feb 13 20:18:39.514345 systemd-logind[1420]: Session 14 logged out. Waiting for processes to exit. Feb 13 20:18:39.515238 systemd-logind[1420]: Removed session 14. 
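
Note the widening gaps between the PullImage attempts above (~11 s, then ~30 s, then ~55 s): kubelet backs off failed container starts exponentially rather than hammering the registry. A rough sketch of that schedule, assuming the commonly cited 10 s base, doubling factor, and 300 s cap; the actual constants are not visible in this log.

def backoff_delays(base=10.0, factor=2.0, cap=300.0, attempts=6):
    # Assumed constants: kubelet's container-start backoff is usually
    # described as doubling from 10 s per consecutive failure, capped at
    # 300 s, and reset after a period of success.
    delay = base
    for _ in range(attempts):
        yield min(delay, cap)
        delay *= factor

print(list(backoff_delays()))  # [10.0, 20.0, 40.0, 80.0, 160.0, 300.0]

The intervals in the log will not match these values exactly, since retries only happen when the pod worker next syncs, but the doubling shape is recognizable.
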
Feb 13 20:18:42.418402 kubelet[2509]: E0213 20:18:42.418342 2509 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 20:18:42.420249 kubelet[2509]: E0213 20:18:42.420172 2509 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"install-cni-plugin\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/flannel/flannel-cni-plugin:v1.1.2\\\"\"" pod="kube-flannel/kube-flannel-ds-hvbks" podUID="db3e2d2a-fd5b-4cce-b6ee-04e217b474ca" Feb 13 20:18:44.521895 systemd[1]: Started sshd@14-10.0.0.10:22-10.0.0.1:37604.service - OpenSSH per-connection server daemon (10.0.0.1:37604). Feb 13 20:18:44.559863 sshd[2977]: Accepted publickey for core from 10.0.0.1 port 37604 ssh2: RSA SHA256:LaFXDQ5kJ4LcD/Es1CLb+6ve3k3017804ZdZMiXaUQg Feb 13 20:18:44.561051 sshd[2977]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 20:18:44.564655 systemd-logind[1420]: New session 15 of user core. Feb 13 20:18:44.579563 systemd[1]: Started session-15.scope - Session 15 of User core. Feb 13 20:18:44.686976 sshd[2977]: pam_unix(sshd:session): session closed for user core Feb 13 20:18:44.690881 systemd[1]: sshd@14-10.0.0.10:22-10.0.0.1:37604.service: Deactivated successfully. Feb 13 20:18:44.693217 systemd[1]: session-15.scope: Deactivated successfully. Feb 13 20:18:44.694257 systemd-logind[1420]: Session 15 logged out. Waiting for processes to exit. Feb 13 20:18:44.695583 systemd-logind[1420]: Removed session 15. Feb 13 20:18:49.697159 systemd[1]: Started sshd@15-10.0.0.10:22-10.0.0.1:37612.service - OpenSSH per-connection server daemon (10.0.0.1:37612). Feb 13 20:18:49.734013 sshd[2994]: Accepted publickey for core from 10.0.0.1 port 37612 ssh2: RSA SHA256:LaFXDQ5kJ4LcD/Es1CLb+6ve3k3017804ZdZMiXaUQg Feb 13 20:18:49.735241 sshd[2994]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 20:18:49.739029 systemd-logind[1420]: New session 16 of user core. Feb 13 20:18:49.744613 systemd[1]: Started session-16.scope - Session 16 of User core. Feb 13 20:18:49.848708 sshd[2994]: pam_unix(sshd:session): session closed for user core Feb 13 20:18:49.851851 systemd[1]: sshd@15-10.0.0.10:22-10.0.0.1:37612.service: Deactivated successfully. Feb 13 20:18:49.853478 systemd[1]: session-16.scope: Deactivated successfully. Feb 13 20:18:49.854938 systemd-logind[1420]: Session 16 logged out. Waiting for processes to exit. Feb 13 20:18:49.855678 systemd-logind[1420]: Removed session 16. Feb 13 20:18:50.418911 kubelet[2509]: E0213 20:18:50.418880 2509 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 20:18:54.858811 systemd[1]: Started sshd@16-10.0.0.10:22-10.0.0.1:32874.service - OpenSSH per-connection server daemon (10.0.0.1:32874). Feb 13 20:18:54.895915 sshd[3010]: Accepted publickey for core from 10.0.0.1 port 32874 ssh2: RSA SHA256:LaFXDQ5kJ4LcD/Es1CLb+6ve3k3017804ZdZMiXaUQg Feb 13 20:18:54.897088 sshd[3010]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 20:18:54.900626 systemd-logind[1420]: New session 17 of user core. Feb 13 20:18:54.913655 systemd[1]: Started session-17.scope - Session 17 of User core. 
Feb 13 20:18:55.016026 sshd[3010]: pam_unix(sshd:session): session closed for user core Feb 13 20:18:55.019206 systemd[1]: sshd@16-10.0.0.10:22-10.0.0.1:32874.service: Deactivated successfully. Feb 13 20:18:55.021246 systemd[1]: session-17.scope: Deactivated successfully. Feb 13 20:18:55.021830 systemd-logind[1420]: Session 17 logged out. Waiting for processes to exit. Feb 13 20:18:55.022815 systemd-logind[1420]: Removed session 17. Feb 13 20:18:55.418634 kubelet[2509]: E0213 20:18:55.418578 2509 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 20:18:55.419751 kubelet[2509]: E0213 20:18:55.419703 2509 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"install-cni-plugin\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/flannel/flannel-cni-plugin:v1.1.2\\\"\"" pod="kube-flannel/kube-flannel-ds-hvbks" podUID="db3e2d2a-fd5b-4cce-b6ee-04e217b474ca" Feb 13 20:18:58.419298 kubelet[2509]: E0213 20:18:58.419194 2509 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 20:19:00.026937 systemd[1]: Started sshd@17-10.0.0.10:22-10.0.0.1:32876.service - OpenSSH per-connection server daemon (10.0.0.1:32876). Feb 13 20:19:00.063863 sshd[3025]: Accepted publickey for core from 10.0.0.1 port 32876 ssh2: RSA SHA256:LaFXDQ5kJ4LcD/Es1CLb+6ve3k3017804ZdZMiXaUQg Feb 13 20:19:00.065096 sshd[3025]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 20:19:00.069003 systemd-logind[1420]: New session 18 of user core. Feb 13 20:19:00.072552 systemd[1]: Started session-18.scope - Session 18 of User core. Feb 13 20:19:00.177688 sshd[3025]: pam_unix(sshd:session): session closed for user core Feb 13 20:19:00.180760 systemd[1]: sshd@17-10.0.0.10:22-10.0.0.1:32876.service: Deactivated successfully. Feb 13 20:19:00.182310 systemd[1]: session-18.scope: Deactivated successfully. Feb 13 20:19:00.182938 systemd-logind[1420]: Session 18 logged out. Waiting for processes to exit. Feb 13 20:19:00.183868 systemd-logind[1420]: Removed session 18. Feb 13 20:19:00.419438 kubelet[2509]: E0213 20:19:00.419277 2509 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 20:19:05.191897 systemd[1]: Started sshd@18-10.0.0.10:22-10.0.0.1:58920.service - OpenSSH per-connection server daemon (10.0.0.1:58920). Feb 13 20:19:05.228798 sshd[3041]: Accepted publickey for core from 10.0.0.1 port 58920 ssh2: RSA SHA256:LaFXDQ5kJ4LcD/Es1CLb+6ve3k3017804ZdZMiXaUQg Feb 13 20:19:05.230121 sshd[3041]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 20:19:05.233641 systemd-logind[1420]: New session 19 of user core. Feb 13 20:19:05.244568 systemd[1]: Started session-19.scope - Session 19 of User core. Feb 13 20:19:05.347348 sshd[3041]: pam_unix(sshd:session): session closed for user core Feb 13 20:19:05.350574 systemd[1]: sshd@18-10.0.0.10:22-10.0.0.1:58920.service: Deactivated successfully. Feb 13 20:19:05.352829 systemd[1]: session-19.scope: Deactivated successfully. Feb 13 20:19:05.353379 systemd-logind[1420]: Session 19 logged out. Waiting for processes to exit. Feb 13 20:19:05.354276 systemd-logind[1420]: Removed session 19. 
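
The recurring dns.go:153 warning is about resolv.conf length, not reachability: the glibc resolver honours at most three nameserver entries, so kubelet applies the first three from the node's configuration and warns that the rest were omitted. A small sketch of that truncation follows; the fourth address is a made-up stand-in (the log only shows the three survivors).

MAX_NAMESERVERS = 3  # glibc resolver limit (MAXNS)

def applied_nameservers(configured):
    # Keep the first three entries, as in the "applied nameserver line"
    # above; anything beyond that is dropped with a warning.
    if len(configured) > MAX_NAMESERVERS:
        print("warning: omitting",
              len(configured) - MAX_NAMESERVERS, "nameserver(s)")
    return configured[:MAX_NAMESERVERS]

# "203.0.113.53" is a hypothetical fourth entry (a TEST-NET-3 documentation
# address), standing in for whatever extra server the node actually had.
print(applied_nameservers(["1.1.1.1", "1.0.0.1", "8.8.8.8", "203.0.113.53"]))
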
Feb 13 20:19:05.419064 kubelet[2509]: E0213 20:19:05.419036 2509 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 20:19:10.357794 systemd[1]: Started sshd@19-10.0.0.10:22-10.0.0.1:58928.service - OpenSSH per-connection server daemon (10.0.0.1:58928). Feb 13 20:19:10.394309 sshd[3057]: Accepted publickey for core from 10.0.0.1 port 58928 ssh2: RSA SHA256:LaFXDQ5kJ4LcD/Es1CLb+6ve3k3017804ZdZMiXaUQg Feb 13 20:19:10.395529 sshd[3057]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 20:19:10.398892 systemd-logind[1420]: New session 20 of user core. Feb 13 20:19:10.416559 systemd[1]: Started session-20.scope - Session 20 of User core. Feb 13 20:19:10.418832 kubelet[2509]: E0213 20:19:10.418499 2509 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 20:19:10.419499 kubelet[2509]: E0213 20:19:10.419467 2509 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"install-cni-plugin\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/flannel/flannel-cni-plugin:v1.1.2\\\"\"" pod="kube-flannel/kube-flannel-ds-hvbks" podUID="db3e2d2a-fd5b-4cce-b6ee-04e217b474ca" Feb 13 20:19:10.520879 sshd[3057]: pam_unix(sshd:session): session closed for user core Feb 13 20:19:10.523463 systemd[1]: sshd@19-10.0.0.10:22-10.0.0.1:58928.service: Deactivated successfully. Feb 13 20:19:10.525946 systemd[1]: session-20.scope: Deactivated successfully. Feb 13 20:19:10.528624 systemd-logind[1420]: Session 20 logged out. Waiting for processes to exit. Feb 13 20:19:10.529634 systemd-logind[1420]: Removed session 20. Feb 13 20:19:15.531848 systemd[1]: Started sshd@20-10.0.0.10:22-10.0.0.1:49710.service - OpenSSH per-connection server daemon (10.0.0.1:49710). Feb 13 20:19:15.568777 sshd[3072]: Accepted publickey for core from 10.0.0.1 port 49710 ssh2: RSA SHA256:LaFXDQ5kJ4LcD/Es1CLb+6ve3k3017804ZdZMiXaUQg Feb 13 20:19:15.569954 sshd[3072]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 20:19:15.573834 systemd-logind[1420]: New session 21 of user core. Feb 13 20:19:15.586585 systemd[1]: Started session-21.scope - Session 21 of User core. Feb 13 20:19:15.692025 sshd[3072]: pam_unix(sshd:session): session closed for user core Feb 13 20:19:15.695574 systemd[1]: sshd@20-10.0.0.10:22-10.0.0.1:49710.service: Deactivated successfully. Feb 13 20:19:15.697114 systemd[1]: session-21.scope: Deactivated successfully. Feb 13 20:19:15.698307 systemd-logind[1420]: Session 21 logged out. Waiting for processes to exit. Feb 13 20:19:15.699215 systemd-logind[1420]: Removed session 21. Feb 13 20:19:20.702978 systemd[1]: Started sshd@21-10.0.0.10:22-10.0.0.1:49716.service - OpenSSH per-connection server daemon (10.0.0.1:49716). Feb 13 20:19:20.739889 sshd[3090]: Accepted publickey for core from 10.0.0.1 port 49716 ssh2: RSA SHA256:LaFXDQ5kJ4LcD/Es1CLb+6ve3k3017804ZdZMiXaUQg Feb 13 20:19:20.741044 sshd[3090]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 20:19:20.744876 systemd-logind[1420]: New session 22 of user core. Feb 13 20:19:20.751563 systemd[1]: Started session-22.scope - Session 22 of User core. 
Feb 13 20:19:20.862891 sshd[3090]: pam_unix(sshd:session): session closed for user core Feb 13 20:19:20.866183 systemd[1]: sshd@21-10.0.0.10:22-10.0.0.1:49716.service: Deactivated successfully. Feb 13 20:19:20.868645 systemd[1]: session-22.scope: Deactivated successfully. Feb 13 20:19:20.869554 systemd-logind[1420]: Session 22 logged out. Waiting for processes to exit. Feb 13 20:19:20.870569 systemd-logind[1420]: Removed session 22. Feb 13 20:19:25.418348 kubelet[2509]: E0213 20:19:25.418135 2509 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 20:19:25.419048 containerd[1437]: time="2025-02-13T20:19:25.419001830Z" level=info msg="PullImage \"docker.io/flannel/flannel-cni-plugin:v1.1.2\"" Feb 13 20:19:25.877295 systemd[1]: Started sshd@22-10.0.0.10:22-10.0.0.1:45094.service - OpenSSH per-connection server daemon (10.0.0.1:45094). Feb 13 20:19:25.917626 sshd[3105]: Accepted publickey for core from 10.0.0.1 port 45094 ssh2: RSA SHA256:LaFXDQ5kJ4LcD/Es1CLb+6ve3k3017804ZdZMiXaUQg Feb 13 20:19:25.918804 sshd[3105]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 20:19:25.922991 systemd-logind[1420]: New session 23 of user core. Feb 13 20:19:25.933560 systemd[1]: Started session-23.scope - Session 23 of User core. Feb 13 20:19:26.038356 sshd[3105]: pam_unix(sshd:session): session closed for user core Feb 13 20:19:26.041570 systemd[1]: sshd@22-10.0.0.10:22-10.0.0.1:45094.service: Deactivated successfully. Feb 13 20:19:26.043372 systemd[1]: session-23.scope: Deactivated successfully. Feb 13 20:19:26.044096 systemd-logind[1420]: Session 23 logged out. Waiting for processes to exit. Feb 13 20:19:26.044921 systemd-logind[1420]: Removed session 23. Feb 13 20:19:26.535551 containerd[1437]: time="2025-02-13T20:19:26.535501425Z" level=error msg="PullImage \"docker.io/flannel/flannel-cni-plugin:v1.1.2\" failed" error="failed to pull and unpack image \"docker.io/flannel/flannel-cni-plugin:v1.1.2\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/flannel/flannel-cni-plugin/manifests/sha256:bf4b62b131666d040f35a327d906ee5a3418280b68a88d9b9c7e828057210443: 429 Too Many Requests - Server message: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit" Feb 13 20:19:26.535967 containerd[1437]: time="2025-02-13T20:19:26.535576827Z" level=info msg="stop pulling image docker.io/flannel/flannel-cni-plugin:v1.1.2: active requests=0, bytes read=11110" Feb 13 20:19:26.536000 kubelet[2509]: E0213 20:19:26.535687 2509 remote_image.go:180] "PullImage from image service failed" err="rpc error: code = Unknown desc = failed to pull and unpack image \"docker.io/flannel/flannel-cni-plugin:v1.1.2\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/flannel/flannel-cni-plugin/manifests/sha256:bf4b62b131666d040f35a327d906ee5a3418280b68a88d9b9c7e828057210443: 429 Too Many Requests - Server message: toomanyrequests: You have reached your pull rate limit. 
You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit" image="docker.io/flannel/flannel-cni-plugin:v1.1.2" Feb 13 20:19:26.536000 kubelet[2509]: E0213 20:19:26.535770 2509 kuberuntime_image.go:55] "Failed to pull image" err="failed to pull and unpack image \"docker.io/flannel/flannel-cni-plugin:v1.1.2\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/flannel/flannel-cni-plugin/manifests/sha256:bf4b62b131666d040f35a327d906ee5a3418280b68a88d9b9c7e828057210443: 429 Too Many Requests - Server message: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit" image="docker.io/flannel/flannel-cni-plugin:v1.1.2" Feb 13 20:19:26.536240 kubelet[2509]: E0213 20:19:26.535857 2509 kuberuntime_manager.go:1256] init container &Container{Name:install-cni-plugin,Image:docker.io/flannel/flannel-cni-plugin:v1.1.2,Command:[cp],Args:[-f /flannel /opt/cni/bin/flannel],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:cni-plugin,ReadOnly:false,MountPath:/opt/cni/bin,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-zcvvd,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod kube-flannel-ds-hvbks_kube-flannel(db3e2d2a-fd5b-4cce-b6ee-04e217b474ca): ErrImagePull: failed to pull and unpack image "docker.io/flannel/flannel-cni-plugin:v1.1.2": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/flannel/flannel-cni-plugin/manifests/sha256:bf4b62b131666d040f35a327d906ee5a3418280b68a88d9b9c7e828057210443: 429 Too Many Requests - Server message: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit Feb 13 20:19:26.536299 kubelet[2509]: E0213 20:19:26.535886 2509 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"install-cni-plugin\" with ErrImagePull: \"failed to pull and unpack image \\\"docker.io/flannel/flannel-cni-plugin:v1.1.2\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/flannel/flannel-cni-plugin/manifests/sha256:bf4b62b131666d040f35a327d906ee5a3418280b68a88d9b9c7e828057210443: 429 Too Many Requests - Server message: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit\"" pod="kube-flannel/kube-flannel-ds-hvbks" podUID="db3e2d2a-fd5b-4cce-b6ee-04e217b474ca" Feb 13 20:19:31.052785 systemd[1]: Started sshd@23-10.0.0.10:22-10.0.0.1:45106.service - OpenSSH per-connection server daemon (10.0.0.1:45106). 
Feb 13 20:19:31.089479 sshd[3120]: Accepted publickey for core from 10.0.0.1 port 45106 ssh2: RSA SHA256:LaFXDQ5kJ4LcD/Es1CLb+6ve3k3017804ZdZMiXaUQg Feb 13 20:19:31.090664 sshd[3120]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 20:19:31.094102 systemd-logind[1420]: New session 24 of user core. Feb 13 20:19:31.101632 systemd[1]: Started session-24.scope - Session 24 of User core. Feb 13 20:19:31.206998 sshd[3120]: pam_unix(sshd:session): session closed for user core Feb 13 20:19:31.210187 systemd[1]: sshd@23-10.0.0.10:22-10.0.0.1:45106.service: Deactivated successfully. Feb 13 20:19:31.211701 systemd[1]: session-24.scope: Deactivated successfully. Feb 13 20:19:31.212225 systemd-logind[1420]: Session 24 logged out. Waiting for processes to exit. Feb 13 20:19:31.212981 systemd-logind[1420]: Removed session 24. Feb 13 20:19:32.417410 kubelet[2509]: E0213 20:19:32.417378 2509 kubelet_node_status.go:456] "Node not becoming ready in time after startup" Feb 13 20:19:32.467907 kubelet[2509]: E0213 20:19:32.467871 2509 kubelet.go:2900] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Feb 13 20:19:36.216596 systemd[1]: Started sshd@24-10.0.0.10:22-10.0.0.1:47534.service - OpenSSH per-connection server daemon (10.0.0.1:47534). Feb 13 20:19:36.253279 sshd[3137]: Accepted publickey for core from 10.0.0.1 port 47534 ssh2: RSA SHA256:LaFXDQ5kJ4LcD/Es1CLb+6ve3k3017804ZdZMiXaUQg Feb 13 20:19:36.254266 sshd[3137]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 20:19:36.258198 systemd-logind[1420]: New session 25 of user core. Feb 13 20:19:36.271624 systemd[1]: Started session-25.scope - Session 25 of User core. Feb 13 20:19:36.374341 sshd[3137]: pam_unix(sshd:session): session closed for user core Feb 13 20:19:36.377405 systemd[1]: sshd@24-10.0.0.10:22-10.0.0.1:47534.service: Deactivated successfully. Feb 13 20:19:36.379063 systemd[1]: session-25.scope: Deactivated successfully. Feb 13 20:19:36.379695 systemd-logind[1420]: Session 25 logged out. Waiting for processes to exit. Feb 13 20:19:36.380365 systemd-logind[1420]: Removed session 25. Feb 13 20:19:37.418659 kubelet[2509]: E0213 20:19:37.418583 2509 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 20:19:37.419713 kubelet[2509]: E0213 20:19:37.419655 2509 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"install-cni-plugin\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/flannel/flannel-cni-plugin:v1.1.2\\\"\"" pod="kube-flannel/kube-flannel-ds-hvbks" podUID="db3e2d2a-fd5b-4cce-b6ee-04e217b474ca" Feb 13 20:19:37.469440 kubelet[2509]: E0213 20:19:37.469329 2509 kubelet.go:2900] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Feb 13 20:19:41.388886 systemd[1]: Started sshd@25-10.0.0.10:22-10.0.0.1:47546.service - OpenSSH per-connection server daemon (10.0.0.1:47546). 
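
The kubelet.go:2900 errors that start here are the downstream effect of the pull failures: the install-cni-plugin init container (whose whole job, per the spec dumped above, is cp -f /flannel /opt/cni/bin/flannel) never runs, so no CNI plugin or network config ever lands on disk, and the runtime keeps reporting NetworkReady=false. An illustrative sketch of that readiness condition, assuming the conventional CNI paths; the real check lives inside the container runtime, not in code like this.

import os

CNI_CONF_DIR = "/etc/cni/net.d"  # conventional CNI network-config directory
CNI_BIN_DIR = "/opt/cni/bin"     # where the init container would copy flannel

def cni_ready():
    # NetworkReady stays false until a network config exists and the plugin
    # binary that the config names is present on disk.
    confs = ([f for f in os.listdir(CNI_CONF_DIR)
              if f.endswith((".conf", ".conflist", ".json"))]
             if os.path.isdir(CNI_CONF_DIR) else [])
    has_flannel = os.path.isfile(os.path.join(CNI_BIN_DIR, "flannel"))
    return bool(confs) and has_flannel

print("cni ready:", cni_ready())

This also explains why the node never reports Ready ("Node not becoming ready in time after startup"): pod networking cannot come up until the image pull eventually succeeds.
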
Feb 13 20:19:41.425218 sshd[3152]: Accepted publickey for core from 10.0.0.1 port 47546 ssh2: RSA SHA256:LaFXDQ5kJ4LcD/Es1CLb+6ve3k3017804ZdZMiXaUQg Feb 13 20:19:41.426327 sshd[3152]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 20:19:41.430310 systemd-logind[1420]: New session 26 of user core. Feb 13 20:19:41.440584 systemd[1]: Started session-26.scope - Session 26 of User core. Feb 13 20:19:41.544804 sshd[3152]: pam_unix(sshd:session): session closed for user core Feb 13 20:19:41.547860 systemd[1]: sshd@25-10.0.0.10:22-10.0.0.1:47546.service: Deactivated successfully. Feb 13 20:19:41.549631 systemd[1]: session-26.scope: Deactivated successfully. Feb 13 20:19:41.550380 systemd-logind[1420]: Session 26 logged out. Waiting for processes to exit. Feb 13 20:19:41.551200 systemd-logind[1420]: Removed session 26. Feb 13 20:19:42.470351 kubelet[2509]: E0213 20:19:42.470307 2509 kubelet.go:2900] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Feb 13 20:19:46.562875 systemd[1]: Started sshd@26-10.0.0.10:22-10.0.0.1:59782.service - OpenSSH per-connection server daemon (10.0.0.1:59782). Feb 13 20:19:46.600956 sshd[3168]: Accepted publickey for core from 10.0.0.1 port 59782 ssh2: RSA SHA256:LaFXDQ5kJ4LcD/Es1CLb+6ve3k3017804ZdZMiXaUQg Feb 13 20:19:46.602143 sshd[3168]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 20:19:46.605945 systemd-logind[1420]: New session 27 of user core. Feb 13 20:19:46.614656 systemd[1]: Started session-27.scope - Session 27 of User core. Feb 13 20:19:46.715255 sshd[3168]: pam_unix(sshd:session): session closed for user core Feb 13 20:19:46.718258 systemd[1]: sshd@26-10.0.0.10:22-10.0.0.1:59782.service: Deactivated successfully. Feb 13 20:19:46.719830 systemd[1]: session-27.scope: Deactivated successfully. Feb 13 20:19:46.720412 systemd-logind[1420]: Session 27 logged out. Waiting for processes to exit. Feb 13 20:19:46.721199 systemd-logind[1420]: Removed session 27. Feb 13 20:19:47.471750 kubelet[2509]: E0213 20:19:47.471704 2509 kubelet.go:2900] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Feb 13 20:19:51.418865 kubelet[2509]: E0213 20:19:51.418782 2509 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 20:19:51.424299 kubelet[2509]: E0213 20:19:51.424252 2509 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"install-cni-plugin\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/flannel/flannel-cni-plugin:v1.1.2\\\"\"" pod="kube-flannel/kube-flannel-ds-hvbks" podUID="db3e2d2a-fd5b-4cce-b6ee-04e217b474ca" Feb 13 20:19:51.725873 systemd[1]: Started sshd@27-10.0.0.10:22-10.0.0.1:59792.service - OpenSSH per-connection server daemon (10.0.0.1:59792). Feb 13 20:19:51.762550 sshd[3187]: Accepted publickey for core from 10.0.0.1 port 59792 ssh2: RSA SHA256:LaFXDQ5kJ4LcD/Es1CLb+6ve3k3017804ZdZMiXaUQg Feb 13 20:19:51.763714 sshd[3187]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 20:19:51.767595 systemd-logind[1420]: New session 28 of user core. Feb 13 20:19:51.780561 systemd[1]: Started session-28.scope - Session 28 of User core. 
Feb 13 20:19:51.883648 sshd[3187]: pam_unix(sshd:session): session closed for user core Feb 13 20:19:51.886956 systemd[1]: sshd@27-10.0.0.10:22-10.0.0.1:59792.service: Deactivated successfully. Feb 13 20:19:51.888756 systemd[1]: session-28.scope: Deactivated successfully. Feb 13 20:19:51.889976 systemd-logind[1420]: Session 28 logged out. Waiting for processes to exit. Feb 13 20:19:51.890848 systemd-logind[1420]: Removed session 28. Feb 13 20:19:52.473112 kubelet[2509]: E0213 20:19:52.473071 2509 kubelet.go:2900] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Feb 13 20:19:56.894867 systemd[1]: Started sshd@28-10.0.0.10:22-10.0.0.1:43004.service - OpenSSH per-connection server daemon (10.0.0.1:43004). Feb 13 20:19:56.931645 sshd[3202]: Accepted publickey for core from 10.0.0.1 port 43004 ssh2: RSA SHA256:LaFXDQ5kJ4LcD/Es1CLb+6ve3k3017804ZdZMiXaUQg Feb 13 20:19:56.932934 sshd[3202]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 20:19:56.936271 systemd-logind[1420]: New session 29 of user core. Feb 13 20:19:56.947559 systemd[1]: Started session-29.scope - Session 29 of User core. Feb 13 20:19:57.051452 sshd[3202]: pam_unix(sshd:session): session closed for user core Feb 13 20:19:57.054807 systemd[1]: sshd@28-10.0.0.10:22-10.0.0.1:43004.service: Deactivated successfully. Feb 13 20:19:57.056339 systemd[1]: session-29.scope: Deactivated successfully. Feb 13 20:19:57.056996 systemd-logind[1420]: Session 29 logged out. Waiting for processes to exit. Feb 13 20:19:57.058061 systemd-logind[1420]: Removed session 29. Feb 13 20:19:57.474264 kubelet[2509]: E0213 20:19:57.474225 2509 kubelet.go:2900] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Feb 13 20:20:02.062904 systemd[1]: Started sshd@29-10.0.0.10:22-10.0.0.1:43010.service - OpenSSH per-connection server daemon (10.0.0.1:43010). Feb 13 20:20:02.099902 sshd[3218]: Accepted publickey for core from 10.0.0.1 port 43010 ssh2: RSA SHA256:LaFXDQ5kJ4LcD/Es1CLb+6ve3k3017804ZdZMiXaUQg Feb 13 20:20:02.101166 sshd[3218]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 20:20:02.105613 systemd-logind[1420]: New session 30 of user core. Feb 13 20:20:02.115568 systemd[1]: Started session-30.scope - Session 30 of User core. Feb 13 20:20:02.223660 sshd[3218]: pam_unix(sshd:session): session closed for user core Feb 13 20:20:02.226899 systemd[1]: sshd@29-10.0.0.10:22-10.0.0.1:43010.service: Deactivated successfully. Feb 13 20:20:02.229340 systemd[1]: session-30.scope: Deactivated successfully. Feb 13 20:20:02.230247 systemd-logind[1420]: Session 30 logged out. Waiting for processes to exit. Feb 13 20:20:02.231317 systemd-logind[1420]: Removed session 30. 
Feb 13 20:20:02.418805 kubelet[2509]: E0213 20:20:02.418684 2509 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 20:20:02.419538 kubelet[2509]: E0213 20:20:02.419456 2509 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"install-cni-plugin\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/flannel/flannel-cni-plugin:v1.1.2\\\"\"" pod="kube-flannel/kube-flannel-ds-hvbks" podUID="db3e2d2a-fd5b-4cce-b6ee-04e217b474ca" Feb 13 20:20:02.475139 kubelet[2509]: E0213 20:20:02.475093 2509 kubelet.go:2900] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Feb 13 20:20:06.419208 kubelet[2509]: E0213 20:20:06.418885 2509 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 20:20:07.238215 systemd[1]: Started sshd@30-10.0.0.10:22-10.0.0.1:51348.service - OpenSSH per-connection server daemon (10.0.0.1:51348). Feb 13 20:20:07.277617 sshd[3237]: Accepted publickey for core from 10.0.0.1 port 51348 ssh2: RSA SHA256:LaFXDQ5kJ4LcD/Es1CLb+6ve3k3017804ZdZMiXaUQg Feb 13 20:20:07.278891 sshd[3237]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 20:20:07.283055 systemd-logind[1420]: New session 31 of user core. Feb 13 20:20:07.292563 systemd[1]: Started session-31.scope - Session 31 of User core. Feb 13 20:20:07.397757 sshd[3237]: pam_unix(sshd:session): session closed for user core Feb 13 20:20:07.400845 systemd[1]: sshd@30-10.0.0.10:22-10.0.0.1:51348.service: Deactivated successfully. Feb 13 20:20:07.402389 systemd[1]: session-31.scope: Deactivated successfully. Feb 13 20:20:07.404080 systemd-logind[1420]: Session 31 logged out. Waiting for processes to exit. Feb 13 20:20:07.404982 systemd-logind[1420]: Removed session 31. Feb 13 20:20:07.476719 kubelet[2509]: E0213 20:20:07.476670 2509 kubelet.go:2900] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Feb 13 20:20:10.418491 kubelet[2509]: E0213 20:20:10.418457 2509 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 20:20:12.408933 systemd[1]: Started sshd@31-10.0.0.10:22-10.0.0.1:51350.service - OpenSSH per-connection server daemon (10.0.0.1:51350). Feb 13 20:20:12.420470 kubelet[2509]: E0213 20:20:12.420364 2509 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 20:20:12.445566 sshd[3252]: Accepted publickey for core from 10.0.0.1 port 51350 ssh2: RSA SHA256:LaFXDQ5kJ4LcD/Es1CLb+6ve3k3017804ZdZMiXaUQg Feb 13 20:20:12.446754 sshd[3252]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 20:20:12.450488 systemd-logind[1420]: New session 32 of user core. Feb 13 20:20:12.457553 systemd[1]: Started session-32.scope - Session 32 of User core. 
Feb 13 20:20:12.477991 kubelet[2509]: E0213 20:20:12.477961 2509 kubelet.go:2900] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Feb 13 20:20:12.565127 sshd[3252]: pam_unix(sshd:session): session closed for user core Feb 13 20:20:12.568861 systemd[1]: sshd@31-10.0.0.10:22-10.0.0.1:51350.service: Deactivated successfully. Feb 13 20:20:12.570939 systemd[1]: session-32.scope: Deactivated successfully. Feb 13 20:20:12.571542 systemd-logind[1420]: Session 32 logged out. Waiting for processes to exit. Feb 13 20:20:12.572551 systemd-logind[1420]: Removed session 32. Feb 13 20:20:13.418725 kubelet[2509]: E0213 20:20:13.418680 2509 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 20:20:13.419400 kubelet[2509]: E0213 20:20:13.419365 2509 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"install-cni-plugin\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/flannel/flannel-cni-plugin:v1.1.2\\\"\"" pod="kube-flannel/kube-flannel-ds-hvbks" podUID="db3e2d2a-fd5b-4cce-b6ee-04e217b474ca" Feb 13 20:20:17.479322 kubelet[2509]: E0213 20:20:17.479287 2509 kubelet.go:2900] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Feb 13 20:20:17.576017 systemd[1]: Started sshd@32-10.0.0.10:22-10.0.0.1:46602.service - OpenSSH per-connection server daemon (10.0.0.1:46602). Feb 13 20:20:17.613055 sshd[3268]: Accepted publickey for core from 10.0.0.1 port 46602 ssh2: RSA SHA256:LaFXDQ5kJ4LcD/Es1CLb+6ve3k3017804ZdZMiXaUQg Feb 13 20:20:17.614220 sshd[3268]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 20:20:17.617805 systemd-logind[1420]: New session 33 of user core. Feb 13 20:20:17.631604 systemd[1]: Started session-33.scope - Session 33 of User core. Feb 13 20:20:17.737115 sshd[3268]: pam_unix(sshd:session): session closed for user core Feb 13 20:20:17.740858 systemd[1]: sshd@32-10.0.0.10:22-10.0.0.1:46602.service: Deactivated successfully. Feb 13 20:20:17.743152 systemd[1]: session-33.scope: Deactivated successfully. Feb 13 20:20:17.743856 systemd-logind[1420]: Session 33 logged out. Waiting for processes to exit. Feb 13 20:20:17.745793 systemd-logind[1420]: Removed session 33. Feb 13 20:20:22.418685 kubelet[2509]: E0213 20:20:22.418566 2509 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 20:20:22.480515 kubelet[2509]: E0213 20:20:22.480479 2509 kubelet.go:2900] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Feb 13 20:20:22.751020 systemd[1]: Started sshd@33-10.0.0.10:22-10.0.0.1:45930.service - OpenSSH per-connection server daemon (10.0.0.1:45930). Feb 13 20:20:22.787781 sshd[3285]: Accepted publickey for core from 10.0.0.1 port 45930 ssh2: RSA SHA256:LaFXDQ5kJ4LcD/Es1CLb+6ve3k3017804ZdZMiXaUQg Feb 13 20:20:22.788895 sshd[3285]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 20:20:22.792312 systemd-logind[1420]: New session 34 of user core. 
Feb 13 20:20:22.802632 systemd[1]: Started session-34.scope - Session 34 of User core. Feb 13 20:20:22.906326 sshd[3285]: pam_unix(sshd:session): session closed for user core Feb 13 20:20:22.910096 systemd[1]: sshd@33-10.0.0.10:22-10.0.0.1:45930.service: Deactivated successfully. Feb 13 20:20:22.911898 systemd[1]: session-34.scope: Deactivated successfully. Feb 13 20:20:22.912473 systemd-logind[1420]: Session 34 logged out. Waiting for processes to exit. Feb 13 20:20:22.913304 systemd-logind[1420]: Removed session 34. Feb 13 20:20:25.420178 kubelet[2509]: E0213 20:20:25.418293 2509 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 20:20:25.422450 kubelet[2509]: E0213 20:20:25.420320 2509 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"install-cni-plugin\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/flannel/flannel-cni-plugin:v1.1.2\\\"\"" pod="kube-flannel/kube-flannel-ds-hvbks" podUID="db3e2d2a-fd5b-4cce-b6ee-04e217b474ca" Feb 13 20:20:27.481178 kubelet[2509]: E0213 20:20:27.481126 2509 kubelet.go:2900] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Feb 13 20:20:27.917076 systemd[1]: Started sshd@34-10.0.0.10:22-10.0.0.1:45942.service - OpenSSH per-connection server daemon (10.0.0.1:45942). Feb 13 20:20:27.953813 sshd[3301]: Accepted publickey for core from 10.0.0.1 port 45942 ssh2: RSA SHA256:LaFXDQ5kJ4LcD/Es1CLb+6ve3k3017804ZdZMiXaUQg Feb 13 20:20:27.955005 sshd[3301]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 20:20:27.961052 systemd-logind[1420]: New session 35 of user core. Feb 13 20:20:27.976638 systemd[1]: Started session-35.scope - Session 35 of User core. Feb 13 20:20:28.079617 sshd[3301]: pam_unix(sshd:session): session closed for user core Feb 13 20:20:28.082083 systemd[1]: sshd@34-10.0.0.10:22-10.0.0.1:45942.service: Deactivated successfully. Feb 13 20:20:28.083657 systemd[1]: session-35.scope: Deactivated successfully. Feb 13 20:20:28.084881 systemd-logind[1420]: Session 35 logged out. Waiting for processes to exit. Feb 13 20:20:28.085896 systemd-logind[1420]: Removed session 35. Feb 13 20:20:32.482437 kubelet[2509]: E0213 20:20:32.482379 2509 kubelet.go:2900] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Feb 13 20:20:33.089888 systemd[1]: Started sshd@35-10.0.0.10:22-10.0.0.1:40104.service - OpenSSH per-connection server daemon (10.0.0.1:40104). Feb 13 20:20:33.127101 sshd[3318]: Accepted publickey for core from 10.0.0.1 port 40104 ssh2: RSA SHA256:LaFXDQ5kJ4LcD/Es1CLb+6ve3k3017804ZdZMiXaUQg Feb 13 20:20:33.128293 sshd[3318]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 20:20:33.132077 systemd-logind[1420]: New session 36 of user core. Feb 13 20:20:33.145564 systemd[1]: Started session-36.scope - Session 36 of User core. Feb 13 20:20:33.248915 sshd[3318]: pam_unix(sshd:session): session closed for user core Feb 13 20:20:33.251349 systemd[1]: sshd@35-10.0.0.10:22-10.0.0.1:40104.service: Deactivated successfully. Feb 13 20:20:33.253863 systemd[1]: session-36.scope: Deactivated successfully. Feb 13 20:20:33.255058 systemd-logind[1420]: Session 36 logged out. 
Waiting for processes to exit. Feb 13 20:20:33.255832 systemd-logind[1420]: Removed session 36. Feb 13 20:20:36.419006 kubelet[2509]: E0213 20:20:36.418774 2509 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 20:20:36.419438 kubelet[2509]: E0213 20:20:36.419382 2509 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"install-cni-plugin\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/flannel/flannel-cni-plugin:v1.1.2\\\"\"" pod="kube-flannel/kube-flannel-ds-hvbks" podUID="db3e2d2a-fd5b-4cce-b6ee-04e217b474ca" Feb 13 20:20:37.483763 kubelet[2509]: E0213 20:20:37.483713 2509 kubelet.go:2900] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Feb 13 20:20:38.259981 systemd[1]: Started sshd@36-10.0.0.10:22-10.0.0.1:40112.service - OpenSSH per-connection server daemon (10.0.0.1:40112). Feb 13 20:20:38.296877 sshd[3333]: Accepted publickey for core from 10.0.0.1 port 40112 ssh2: RSA SHA256:LaFXDQ5kJ4LcD/Es1CLb+6ve3k3017804ZdZMiXaUQg Feb 13 20:20:38.298081 sshd[3333]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 20:20:38.301918 systemd-logind[1420]: New session 37 of user core. Feb 13 20:20:38.308602 systemd[1]: Started session-37.scope - Session 37 of User core. Feb 13 20:20:38.413513 sshd[3333]: pam_unix(sshd:session): session closed for user core Feb 13 20:20:38.416602 systemd[1]: sshd@36-10.0.0.10:22-10.0.0.1:40112.service: Deactivated successfully. Feb 13 20:20:38.419481 systemd[1]: session-37.scope: Deactivated successfully. Feb 13 20:20:38.420881 systemd-logind[1420]: Session 37 logged out. Waiting for processes to exit. Feb 13 20:20:38.421863 systemd-logind[1420]: Removed session 37. Feb 13 20:20:42.484916 kubelet[2509]: E0213 20:20:42.484869 2509 kubelet.go:2900] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Feb 13 20:20:43.424047 systemd[1]: Started sshd@37-10.0.0.10:22-10.0.0.1:38020.service - OpenSSH per-connection server daemon (10.0.0.1:38020). Feb 13 20:20:43.460410 sshd[3348]: Accepted publickey for core from 10.0.0.1 port 38020 ssh2: RSA SHA256:LaFXDQ5kJ4LcD/Es1CLb+6ve3k3017804ZdZMiXaUQg Feb 13 20:20:43.461652 sshd[3348]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 20:20:43.464964 systemd-logind[1420]: New session 38 of user core. Feb 13 20:20:43.474565 systemd[1]: Started session-38.scope - Session 38 of User core. Feb 13 20:20:43.577760 sshd[3348]: pam_unix(sshd:session): session closed for user core Feb 13 20:20:43.580846 systemd[1]: sshd@37-10.0.0.10:22-10.0.0.1:38020.service: Deactivated successfully. Feb 13 20:20:43.583510 systemd[1]: session-38.scope: Deactivated successfully. Feb 13 20:20:43.584119 systemd-logind[1420]: Session 38 logged out. Waiting for processes to exit. Feb 13 20:20:43.585088 systemd-logind[1420]: Removed session 38. 
Feb 13 20:20:47.486022 kubelet[2509]: E0213 20:20:47.485983 2509 kubelet.go:2900] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Feb 13 20:20:48.419258 kubelet[2509]: E0213 20:20:48.418870 2509 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 13 20:20:48.419899 containerd[1437]: time="2025-02-13T20:20:48.419739392Z" level=info msg="PullImage \"docker.io/flannel/flannel-cni-plugin:v1.1.2\""
Feb 13 20:20:48.592715 systemd[1]: Started sshd@38-10.0.0.10:22-10.0.0.1:38036.service - OpenSSH per-connection server daemon (10.0.0.1:38036).
Feb 13 20:20:48.630050 sshd[3365]: Accepted publickey for core from 10.0.0.1 port 38036 ssh2: RSA SHA256:LaFXDQ5kJ4LcD/Es1CLb+6ve3k3017804ZdZMiXaUQg
Feb 13 20:20:48.631295 sshd[3365]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 20:20:48.635498 systemd-logind[1420]: New session 39 of user core.
Feb 13 20:20:48.643598 systemd[1]: Started session-39.scope - Session 39 of User core.
Feb 13 20:20:48.749872 sshd[3365]: pam_unix(sshd:session): session closed for user core
Feb 13 20:20:48.753164 systemd[1]: sshd@38-10.0.0.10:22-10.0.0.1:38036.service: Deactivated successfully.
Feb 13 20:20:48.754776 systemd[1]: session-39.scope: Deactivated successfully.
Feb 13 20:20:48.755842 systemd-logind[1420]: Session 39 logged out. Waiting for processes to exit.
Feb 13 20:20:48.756832 systemd-logind[1420]: Removed session 39.
Feb 13 20:20:49.725729 containerd[1437]: time="2025-02-13T20:20:49.725671523Z" level=error msg="PullImage \"docker.io/flannel/flannel-cni-plugin:v1.1.2\" failed" error="failed to pull and unpack image \"docker.io/flannel/flannel-cni-plugin:v1.1.2\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/flannel/flannel-cni-plugin/manifests/sha256:773049a6464eca8d26c97fb59410cb8385fe59b2f9a68ce7ae2fbe2d49d63585: 429 Too Many Requests - Server message: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit"
Feb 13 20:20:49.726190 containerd[1437]: time="2025-02-13T20:20:49.725755964Z" level=info msg="stop pulling image docker.io/flannel/flannel-cni-plugin:v1.1.2: active requests=0, bytes read=13143"
Feb 13 20:20:49.726222 kubelet[2509]: E0213 20:20:49.725861 2509 remote_image.go:180] "PullImage from image service failed" err="rpc error: code = Unknown desc = failed to pull and unpack image \"docker.io/flannel/flannel-cni-plugin:v1.1.2\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/flannel/flannel-cni-plugin/manifests/sha256:773049a6464eca8d26c97fb59410cb8385fe59b2f9a68ce7ae2fbe2d49d63585: 429 Too Many Requests - Server message: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit" image="docker.io/flannel/flannel-cni-plugin:v1.1.2"
Feb 13 20:20:49.726222 kubelet[2509]: E0213 20:20:49.725898 2509 kuberuntime_image.go:55] "Failed to pull image" err="failed to pull and unpack image \"docker.io/flannel/flannel-cni-plugin:v1.1.2\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/flannel/flannel-cni-plugin/manifests/sha256:773049a6464eca8d26c97fb59410cb8385fe59b2f9a68ce7ae2fbe2d49d63585: 429 Too Many Requests - Server message: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit" image="docker.io/flannel/flannel-cni-plugin:v1.1.2"
Feb 13 20:20:49.726496 kubelet[2509]: E0213 20:20:49.725993 2509 kuberuntime_manager.go:1256] init container &Container{Name:install-cni-plugin,Image:docker.io/flannel/flannel-cni-plugin:v1.1.2,Command:[cp],Args:[-f /flannel /opt/cni/bin/flannel],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:cni-plugin,ReadOnly:false,MountPath:/opt/cni/bin,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-zcvvd,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod kube-flannel-ds-hvbks_kube-flannel(db3e2d2a-fd5b-4cce-b6ee-04e217b474ca): ErrImagePull: failed to pull and unpack image "docker.io/flannel/flannel-cni-plugin:v1.1.2": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/flannel/flannel-cni-plugin/manifests/sha256:773049a6464eca8d26c97fb59410cb8385fe59b2f9a68ce7ae2fbe2d49d63585: 429 Too Many Requests - Server message: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit
Feb 13 20:20:49.726556 kubelet[2509]: E0213 20:20:49.726025 2509 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"install-cni-plugin\" with ErrImagePull: \"failed to pull and unpack image \\\"docker.io/flannel/flannel-cni-plugin:v1.1.2\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/flannel/flannel-cni-plugin/manifests/sha256:773049a6464eca8d26c97fb59410cb8385fe59b2f9a68ce7ae2fbe2d49d63585: 429 Too Many Requests - Server message: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit\"" pod="kube-flannel/kube-flannel-ds-hvbks" podUID="db3e2d2a-fd5b-4cce-b6ee-04e217b474ca"
Feb 13 20:20:52.487536 kubelet[2509]: E0213 20:20:52.487402 2509 kubelet.go:2900] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Feb 13 20:20:53.767929 systemd[1]: Started sshd@39-10.0.0.10:22-10.0.0.1:49200.service - OpenSSH per-connection server daemon (10.0.0.1:49200).
Feb 13 20:20:53.809118 sshd[3381]: Accepted publickey for core from 10.0.0.1 port 49200 ssh2: RSA SHA256:LaFXDQ5kJ4LcD/Es1CLb+6ve3k3017804ZdZMiXaUQg
Feb 13 20:20:53.810272 sshd[3381]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 20:20:53.817483 systemd-logind[1420]: New session 40 of user core.
Feb 13 20:20:53.822558 systemd[1]: Started session-40.scope - Session 40 of User core.
Feb 13 20:20:53.929634 sshd[3381]: pam_unix(sshd:session): session closed for user core
Feb 13 20:20:53.933058 systemd[1]: sshd@39-10.0.0.10:22-10.0.0.1:49200.service: Deactivated successfully.
Feb 13 20:20:53.934758 systemd[1]: session-40.scope: Deactivated successfully.
Feb 13 20:20:53.936266 systemd-logind[1420]: Session 40 logged out. Waiting for processes to exit.
Feb 13 20:20:53.937659 systemd-logind[1420]: Removed session 40.
Feb 13 20:20:57.488891 kubelet[2509]: E0213 20:20:57.488826 2509 kubelet.go:2900] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Feb 13 20:20:58.940332 systemd[1]: Started sshd@40-10.0.0.10:22-10.0.0.1:49216.service - OpenSSH per-connection server daemon (10.0.0.1:49216).
Feb 13 20:20:58.978577 sshd[3396]: Accepted publickey for core from 10.0.0.1 port 49216 ssh2: RSA SHA256:LaFXDQ5kJ4LcD/Es1CLb+6ve3k3017804ZdZMiXaUQg
Feb 13 20:20:58.979799 sshd[3396]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 20:20:58.983734 systemd-logind[1420]: New session 41 of user core.
Feb 13 20:20:58.989573 systemd[1]: Started session-41.scope - Session 41 of User core.
Feb 13 20:20:59.094866 sshd[3396]: pam_unix(sshd:session): session closed for user core
Feb 13 20:20:59.104938 systemd[1]: sshd@40-10.0.0.10:22-10.0.0.1:49216.service: Deactivated successfully.
Feb 13 20:20:59.107165 systemd[1]: session-41.scope: Deactivated successfully.
Feb 13 20:20:59.108538 systemd-logind[1420]: Session 41 logged out. Waiting for processes to exit.
Feb 13 20:20:59.117691 systemd[1]: Started sshd@41-10.0.0.10:22-10.0.0.1:49226.service - OpenSSH per-connection server daemon (10.0.0.1:49226).
Feb 13 20:20:59.119179 systemd-logind[1420]: Removed session 41.
Feb 13 20:20:59.157202 sshd[3411]: Accepted publickey for core from 10.0.0.1 port 49226 ssh2: RSA SHA256:LaFXDQ5kJ4LcD/Es1CLb+6ve3k3017804ZdZMiXaUQg
Feb 13 20:20:59.158333 sshd[3411]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 20:20:59.163862 systemd-logind[1420]: New session 42 of user core.
Feb 13 20:20:59.176625 systemd[1]: Started session-42.scope - Session 42 of User core.
Feb 13 20:20:59.315922 sshd[3411]: pam_unix(sshd:session): session closed for user core
Feb 13 20:20:59.325309 systemd[1]: sshd@41-10.0.0.10:22-10.0.0.1:49226.service: Deactivated successfully.
Feb 13 20:20:59.327223 systemd[1]: session-42.scope: Deactivated successfully.
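The 429 responses above are Docker Hub's anonymous pull rate limit, exactly as the toomanyrequests message says. Docker documents a way to inspect the current allowance without spending a pull: request an anonymous token for the ratelimitpreview/test repository, issue a HEAD request for its manifest, and read the ratelimit-limit and ratelimit-remaining response headers. A sketch using only the standard library; the two URLs are the documented Docker Hub endpoints, and the headers may be absent for authenticated or mirrored pulls:

# ratelimit_probe.py - query Docker Hub's documented rate-limit headers.
# HEAD requests against ratelimitpreview/test do not count against the limit.
import json
import urllib.request

TOKEN_URL = ("https://auth.docker.io/token"
             "?service=registry.docker.io"
             "&scope=repository:ratelimitpreview/test:pull")
MANIFEST_URL = ("https://registry-1.docker.io/v2/"
                "ratelimitpreview/test/manifests/latest")

def main():
    with urllib.request.urlopen(TOKEN_URL) as resp:
        token = json.load(resp)["token"]
    req = urllib.request.Request(MANIFEST_URL, method="HEAD")
    req.add_header("Authorization", f"Bearer {token}")
    with urllib.request.urlopen(req) as resp:
        # Values look like "100;w=21600": 100 pulls per 21600s (6h) window.
        print("ratelimit-limit:    ", resp.headers.get("ratelimit-limit"))
        print("ratelimit-remaining:", resp.headers.get("ratelimit-remaining"))

if __name__ == "__main__":
    main()

Authenticating the node's pulls or fronting docker.io with a registry mirror, as the server message suggests, is what actually clears this condition.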
Feb 13 20:20:59.329127 systemd-logind[1420]: Session 42 logged out. Waiting for processes to exit.
Feb 13 20:20:59.337685 systemd[1]: Started sshd@42-10.0.0.10:22-10.0.0.1:49230.service - OpenSSH per-connection server daemon (10.0.0.1:49230).
Feb 13 20:20:59.338645 systemd-logind[1420]: Removed session 42.
Feb 13 20:20:59.371963 sshd[3423]: Accepted publickey for core from 10.0.0.1 port 49230 ssh2: RSA SHA256:LaFXDQ5kJ4LcD/Es1CLb+6ve3k3017804ZdZMiXaUQg
Feb 13 20:20:59.373104 sshd[3423]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 20:20:59.376420 systemd-logind[1420]: New session 43 of user core.
Feb 13 20:20:59.383623 systemd[1]: Started session-43.scope - Session 43 of User core.
Feb 13 20:20:59.487295 sshd[3423]: pam_unix(sshd:session): session closed for user core
Feb 13 20:20:59.490445 systemd[1]: sshd@42-10.0.0.10:22-10.0.0.1:49230.service: Deactivated successfully.
Feb 13 20:20:59.492605 systemd[1]: session-43.scope: Deactivated successfully.
Feb 13 20:20:59.493312 systemd-logind[1420]: Session 43 logged out. Waiting for processes to exit.
Feb 13 20:20:59.494160 systemd-logind[1420]: Removed session 43.
Feb 13 20:21:01.419983 kubelet[2509]: E0213 20:21:01.419940 2509 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 13 20:21:01.422254 kubelet[2509]: E0213 20:21:01.421024 2509 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"install-cni-plugin\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/flannel/flannel-cni-plugin:v1.1.2\\\"\"" pod="kube-flannel/kube-flannel-ds-hvbks" podUID="db3e2d2a-fd5b-4cce-b6ee-04e217b474ca"
Feb 13 20:21:02.489532 kubelet[2509]: E0213 20:21:02.489495 2509 kubelet.go:2900] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Feb 13 20:21:04.506218 systemd[1]: Started sshd@43-10.0.0.10:22-10.0.0.1:58198.service - OpenSSH per-connection server daemon (10.0.0.1:58198).
Feb 13 20:21:04.545004 sshd[3437]: Accepted publickey for core from 10.0.0.1 port 58198 ssh2: RSA SHA256:LaFXDQ5kJ4LcD/Es1CLb+6ve3k3017804ZdZMiXaUQg
Feb 13 20:21:04.546222 sshd[3437]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 20:21:04.549839 systemd-logind[1420]: New session 44 of user core.
Feb 13 20:21:04.560556 systemd[1]: Started session-44.scope - Session 44 of User core.
Feb 13 20:21:04.665156 sshd[3437]: pam_unix(sshd:session): session closed for user core
Feb 13 20:21:04.668316 systemd[1]: sshd@43-10.0.0.10:22-10.0.0.1:58198.service: Deactivated successfully.
Feb 13 20:21:04.671085 systemd[1]: session-44.scope: Deactivated successfully.
Feb 13 20:21:04.672292 systemd-logind[1420]: Session 44 logged out. Waiting for processes to exit.
Feb 13 20:21:04.673561 systemd-logind[1420]: Removed session 44.
Feb 13 20:21:07.491620 kubelet[2509]: E0213 20:21:07.491570 2509 kubelet.go:2900] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Feb 13 20:21:09.675291 systemd[1]: Started sshd@44-10.0.0.10:22-10.0.0.1:58214.service - OpenSSH per-connection server daemon (10.0.0.1:58214).
Feb 13 20:21:09.711915 sshd[3452]: Accepted publickey for core from 10.0.0.1 port 58214 ssh2: RSA SHA256:LaFXDQ5kJ4LcD/Es1CLb+6ve3k3017804ZdZMiXaUQg
Feb 13 20:21:09.713045 sshd[3452]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 20:21:09.716631 systemd-logind[1420]: New session 45 of user core.
Feb 13 20:21:09.726560 systemd[1]: Started session-45.scope - Session 45 of User core.
Feb 13 20:21:09.828654 sshd[3452]: pam_unix(sshd:session): session closed for user core
Feb 13 20:21:09.832007 systemd[1]: sshd@44-10.0.0.10:22-10.0.0.1:58214.service: Deactivated successfully.
Feb 13 20:21:09.834249 systemd[1]: session-45.scope: Deactivated successfully.
Feb 13 20:21:09.834968 systemd-logind[1420]: Session 45 logged out. Waiting for processes to exit.
Feb 13 20:21:09.835867 systemd-logind[1420]: Removed session 45.
Feb 13 20:21:12.493033 kubelet[2509]: E0213 20:21:12.492980 2509 kubelet.go:2900] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Feb 13 20:21:14.839948 systemd[1]: Started sshd@45-10.0.0.10:22-10.0.0.1:60068.service - OpenSSH per-connection server daemon (10.0.0.1:60068).
Feb 13 20:21:14.876342 sshd[3466]: Accepted publickey for core from 10.0.0.1 port 60068 ssh2: RSA SHA256:LaFXDQ5kJ4LcD/Es1CLb+6ve3k3017804ZdZMiXaUQg
Feb 13 20:21:14.877537 sshd[3466]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 20:21:14.880932 systemd-logind[1420]: New session 46 of user core.
Feb 13 20:21:14.888564 systemd[1]: Started session-46.scope - Session 46 of User core.
Feb 13 20:21:14.991318 sshd[3466]: pam_unix(sshd:session): session closed for user core
Feb 13 20:21:14.994540 systemd[1]: sshd@45-10.0.0.10:22-10.0.0.1:60068.service: Deactivated successfully.
Feb 13 20:21:14.996173 systemd[1]: session-46.scope: Deactivated successfully.
Feb 13 20:21:14.996951 systemd-logind[1420]: Session 46 logged out. Waiting for processes to exit.
Feb 13 20:21:14.997944 systemd-logind[1420]: Removed session 46.
Feb 13 20:21:15.420828 kubelet[2509]: E0213 20:21:15.419159 2509 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 13 20:21:15.422703 kubelet[2509]: E0213 20:21:15.422665 2509 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"install-cni-plugin\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/flannel/flannel-cni-plugin:v1.1.2\\\"\"" pod="kube-flannel/kube-flannel-ds-hvbks" podUID="db3e2d2a-fd5b-4cce-b6ee-04e217b474ca"
Feb 13 20:21:16.418441 kubelet[2509]: E0213 20:21:16.418386 2509 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 13 20:21:17.494301 kubelet[2509]: E0213 20:21:17.494263 2509 kubelet.go:2900] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Feb 13 20:21:19.418654 kubelet[2509]: E0213 20:21:19.418615 2509 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 13 20:21:20.004905 systemd[1]: Started sshd@46-10.0.0.10:22-10.0.0.1:60076.service - OpenSSH per-connection server daemon (10.0.0.1:60076).
Feb 13 20:21:20.041551 sshd[3482]: Accepted publickey for core from 10.0.0.1 port 60076 ssh2: RSA SHA256:LaFXDQ5kJ4LcD/Es1CLb+6ve3k3017804ZdZMiXaUQg
Feb 13 20:21:20.042819 sshd[3482]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 20:21:20.046736 systemd-logind[1420]: New session 47 of user core.
Feb 13 20:21:20.054582 systemd[1]: Started session-47.scope - Session 47 of User core.
Feb 13 20:21:20.156563 sshd[3482]: pam_unix(sshd:session): session closed for user core
Feb 13 20:21:20.159740 systemd[1]: sshd@46-10.0.0.10:22-10.0.0.1:60076.service: Deactivated successfully.
Feb 13 20:21:20.161352 systemd[1]: session-47.scope: Deactivated successfully.
Feb 13 20:21:20.162925 systemd-logind[1420]: Session 47 logged out. Waiting for processes to exit.
Feb 13 20:21:20.163869 systemd-logind[1420]: Removed session 47.
Feb 13 20:21:22.495485 kubelet[2509]: E0213 20:21:22.495438 2509 kubelet.go:2900] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Feb 13 20:21:25.170800 systemd[1]: Started sshd@47-10.0.0.10:22-10.0.0.1:52314.service - OpenSSH per-connection server daemon (10.0.0.1:52314).
Feb 13 20:21:25.207247 sshd[3496]: Accepted publickey for core from 10.0.0.1 port 52314 ssh2: RSA SHA256:LaFXDQ5kJ4LcD/Es1CLb+6ve3k3017804ZdZMiXaUQg
Feb 13 20:21:25.208348 sshd[3496]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 20:21:25.211763 systemd-logind[1420]: New session 48 of user core.
Feb 13 20:21:25.217571 systemd[1]: Started session-48.scope - Session 48 of User core.
Feb 13 20:21:25.318912 sshd[3496]: pam_unix(sshd:session): session closed for user core
Feb 13 20:21:25.322045 systemd[1]: sshd@47-10.0.0.10:22-10.0.0.1:52314.service: Deactivated successfully.
Feb 13 20:21:25.323623 systemd[1]: session-48.scope: Deactivated successfully.
Feb 13 20:21:25.324204 systemd-logind[1420]: Session 48 logged out. Waiting for processes to exit.
Feb 13 20:21:25.325090 systemd-logind[1420]: Removed session 48.
Feb 13 20:21:27.496302 kubelet[2509]: E0213 20:21:27.496259 2509 kubelet.go:2900] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Feb 13 20:21:29.418317 kubelet[2509]: E0213 20:21:29.418132 2509 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 13 20:21:29.418771 kubelet[2509]: E0213 20:21:29.418710 2509 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"install-cni-plugin\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/flannel/flannel-cni-plugin:v1.1.2\\\"\"" pod="kube-flannel/kube-flannel-ds-hvbks" podUID="db3e2d2a-fd5b-4cce-b6ee-04e217b474ca"
Feb 13 20:21:30.329867 systemd[1]: Started sshd@48-10.0.0.10:22-10.0.0.1:52316.service - OpenSSH per-connection server daemon (10.0.0.1:52316).
Feb 13 20:21:30.366792 sshd[3511]: Accepted publickey for core from 10.0.0.1 port 52316 ssh2: RSA SHA256:LaFXDQ5kJ4LcD/Es1CLb+6ve3k3017804ZdZMiXaUQg
Feb 13 20:21:30.367969 sshd[3511]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 20:21:30.371834 systemd-logind[1420]: New session 49 of user core.
Feb 13 20:21:30.385581 systemd[1]: Started session-49.scope - Session 49 of User core.
Feb 13 20:21:30.486872 sshd[3511]: pam_unix(sshd:session): session closed for user core
Feb 13 20:21:30.490153 systemd[1]: sshd@48-10.0.0.10:22-10.0.0.1:52316.service: Deactivated successfully.
Feb 13 20:21:30.491715 systemd[1]: session-49.scope: Deactivated successfully.
Feb 13 20:21:30.493041 systemd-logind[1420]: Session 49 logged out. Waiting for processes to exit.
Feb 13 20:21:30.493964 systemd-logind[1420]: Removed session 49.
Feb 13 20:21:32.419274 kubelet[2509]: E0213 20:21:32.419196 2509 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 13 20:21:32.497306 kubelet[2509]: E0213 20:21:32.497264 2509 kubelet.go:2900] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Feb 13 20:21:35.498877 systemd[1]: Started sshd@49-10.0.0.10:22-10.0.0.1:50848.service - OpenSSH per-connection server daemon (10.0.0.1:50848).
Feb 13 20:21:35.535833 sshd[3527]: Accepted publickey for core from 10.0.0.1 port 50848 ssh2: RSA SHA256:LaFXDQ5kJ4LcD/Es1CLb+6ve3k3017804ZdZMiXaUQg
Feb 13 20:21:35.537013 sshd[3527]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 20:21:35.540758 systemd-logind[1420]: New session 50 of user core.
Feb 13 20:21:35.547650 systemd[1]: Started session-50.scope - Session 50 of User core.
Feb 13 20:21:35.656863 sshd[3527]: pam_unix(sshd:session): session closed for user core
Feb 13 20:21:35.660131 systemd[1]: sshd@49-10.0.0.10:22-10.0.0.1:50848.service: Deactivated successfully.
Feb 13 20:21:35.662469 systemd[1]: session-50.scope: Deactivated successfully.
Feb 13 20:21:35.663405 systemd-logind[1420]: Session 50 logged out. Waiting for processes to exit.
Feb 13 20:21:35.664749 systemd-logind[1420]: Removed session 50.
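The ImagePullBackOff entries here are not new pull attempts; kubelet spaces real PullImage calls with a doubling back-off, which by default starts at 10 seconds and is capped at 300 seconds. That is consistent with the spacing in this log: a full containerd pull at 20:20:48 above, and the next one not until 20:23:32 further down, while the cheap back-off lines recur at each pod sync. A small sketch of that schedule, with the 10s initial delay and 300s ceiling stated as assumed defaults:

# backoff_schedule.py - the doubling back-off kubelet applies to failed image
# pulls: assumed defaults of a 10s initial delay and a 300s (5 minute) cap.
INITIAL_S = 10
CAP_S = 300

def schedule(failures):
    delay, out = INITIAL_S, []
    for _ in range(failures):
        out.append(delay)
        delay = min(delay * 2, CAP_S)
    return out

print(schedule(8))  # [10, 20, 40, 80, 160, 300, 300, 300]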
Feb 13 20:21:37.498216 kubelet[2509]: E0213 20:21:37.498167 2509 kubelet.go:2900] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Feb 13 20:21:40.668113 systemd[1]: Started sshd@50-10.0.0.10:22-10.0.0.1:50862.service - OpenSSH per-connection server daemon (10.0.0.1:50862).
Feb 13 20:21:40.704999 sshd[3542]: Accepted publickey for core from 10.0.0.1 port 50862 ssh2: RSA SHA256:LaFXDQ5kJ4LcD/Es1CLb+6ve3k3017804ZdZMiXaUQg
Feb 13 20:21:40.706240 sshd[3542]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 20:21:40.709665 systemd-logind[1420]: New session 51 of user core.
Feb 13 20:21:40.715577 systemd[1]: Started session-51.scope - Session 51 of User core.
Feb 13 20:21:40.820288 sshd[3542]: pam_unix(sshd:session): session closed for user core
Feb 13 20:21:40.823461 systemd[1]: sshd@50-10.0.0.10:22-10.0.0.1:50862.service: Deactivated successfully.
Feb 13 20:21:40.825132 systemd[1]: session-51.scope: Deactivated successfully.
Feb 13 20:21:40.825729 systemd-logind[1420]: Session 51 logged out. Waiting for processes to exit.
Feb 13 20:21:40.826552 systemd-logind[1420]: Removed session 51.
Feb 13 20:21:41.419360 kubelet[2509]: E0213 20:21:41.419270 2509 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 13 20:21:42.418638 kubelet[2509]: E0213 20:21:42.418600 2509 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 13 20:21:42.419231 kubelet[2509]: E0213 20:21:42.419204 2509 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"install-cni-plugin\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/flannel/flannel-cni-plugin:v1.1.2\\\"\"" pod="kube-flannel/kube-flannel-ds-hvbks" podUID="db3e2d2a-fd5b-4cce-b6ee-04e217b474ca"
Feb 13 20:21:42.499003 kubelet[2509]: E0213 20:21:42.498956 2509 kubelet.go:2900] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Feb 13 20:21:45.838010 systemd[1]: Started sshd@51-10.0.0.10:22-10.0.0.1:56554.service - OpenSSH per-connection server daemon (10.0.0.1:56554).
Feb 13 20:21:45.874783 sshd[3556]: Accepted publickey for core from 10.0.0.1 port 56554 ssh2: RSA SHA256:LaFXDQ5kJ4LcD/Es1CLb+6ve3k3017804ZdZMiXaUQg
Feb 13 20:21:45.875915 sshd[3556]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 20:21:45.879240 systemd-logind[1420]: New session 52 of user core.
Feb 13 20:21:45.891628 systemd[1]: Started session-52.scope - Session 52 of User core.
Feb 13 20:21:45.996323 sshd[3556]: pam_unix(sshd:session): session closed for user core
Feb 13 20:21:45.999992 systemd[1]: sshd@51-10.0.0.10:22-10.0.0.1:56554.service: Deactivated successfully.
Feb 13 20:21:46.001690 systemd[1]: session-52.scope: Deactivated successfully.
Feb 13 20:21:46.002288 systemd-logind[1420]: Session 52 logged out. Waiting for processes to exit.
Feb 13 20:21:46.003225 systemd-logind[1420]: Removed session 52.
Feb 13 20:21:47.500324 kubelet[2509]: E0213 20:21:47.500288 2509 kubelet.go:2900] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Feb 13 20:21:51.006956 systemd[1]: Started sshd@52-10.0.0.10:22-10.0.0.1:56566.service - OpenSSH per-connection server daemon (10.0.0.1:56566).
Feb 13 20:21:51.043937 sshd[3572]: Accepted publickey for core from 10.0.0.1 port 56566 ssh2: RSA SHA256:LaFXDQ5kJ4LcD/Es1CLb+6ve3k3017804ZdZMiXaUQg
Feb 13 20:21:51.045078 sshd[3572]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 20:21:51.048623 systemd-logind[1420]: New session 53 of user core.
Feb 13 20:21:51.059564 systemd[1]: Started session-53.scope - Session 53 of User core.
Feb 13 20:21:51.164401 sshd[3572]: pam_unix(sshd:session): session closed for user core
Feb 13 20:21:51.167645 systemd[1]: sshd@52-10.0.0.10:22-10.0.0.1:56566.service: Deactivated successfully.
Feb 13 20:21:51.169275 systemd[1]: session-53.scope: Deactivated successfully.
Feb 13 20:21:51.170486 systemd-logind[1420]: Session 53 logged out. Waiting for processes to exit.
Feb 13 20:21:51.171492 systemd-logind[1420]: Removed session 53.
Feb 13 20:21:52.501197 kubelet[2509]: E0213 20:21:52.501156 2509 kubelet.go:2900] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Feb 13 20:21:53.418636 kubelet[2509]: E0213 20:21:53.418597 2509 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 13 20:21:53.419320 kubelet[2509]: E0213 20:21:53.419242 2509 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"install-cni-plugin\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/flannel/flannel-cni-plugin:v1.1.2\\\"\"" pod="kube-flannel/kube-flannel-ds-hvbks" podUID="db3e2d2a-fd5b-4cce-b6ee-04e217b474ca"
Feb 13 20:21:56.175019 systemd[1]: Started sshd@53-10.0.0.10:22-10.0.0.1:55722.service - OpenSSH per-connection server daemon (10.0.0.1:55722).
Feb 13 20:21:56.211722 sshd[3586]: Accepted publickey for core from 10.0.0.1 port 55722 ssh2: RSA SHA256:LaFXDQ5kJ4LcD/Es1CLb+6ve3k3017804ZdZMiXaUQg
Feb 13 20:21:56.212876 sshd[3586]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 20:21:56.216364 systemd-logind[1420]: New session 54 of user core.
Feb 13 20:21:56.224576 systemd[1]: Started session-54.scope - Session 54 of User core.
Feb 13 20:21:56.328539 sshd[3586]: pam_unix(sshd:session): session closed for user core
Feb 13 20:21:56.332128 systemd[1]: sshd@53-10.0.0.10:22-10.0.0.1:55722.service: Deactivated successfully.
Feb 13 20:21:56.334364 systemd[1]: session-54.scope: Deactivated successfully.
Feb 13 20:21:56.335360 systemd-logind[1420]: Session 54 logged out. Waiting for processes to exit.
Feb 13 20:21:56.336245 systemd-logind[1420]: Removed session 54.
Feb 13 20:21:57.502792 kubelet[2509]: E0213 20:21:57.502731 2509 kubelet.go:2900] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Feb 13 20:22:01.340090 systemd[1]: Started sshd@54-10.0.0.10:22-10.0.0.1:55730.service - OpenSSH per-connection server daemon (10.0.0.1:55730).
Feb 13 20:22:01.376919 sshd[3601]: Accepted publickey for core from 10.0.0.1 port 55730 ssh2: RSA SHA256:LaFXDQ5kJ4LcD/Es1CLb+6ve3k3017804ZdZMiXaUQg
Feb 13 20:22:01.378096 sshd[3601]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 20:22:01.381448 systemd-logind[1420]: New session 55 of user core.
Feb 13 20:22:01.390578 systemd[1]: Started session-55.scope - Session 55 of User core.
Feb 13 20:22:01.495084 sshd[3601]: pam_unix(sshd:session): session closed for user core
Feb 13 20:22:01.498395 systemd[1]: sshd@54-10.0.0.10:22-10.0.0.1:55730.service: Deactivated successfully.
Feb 13 20:22:01.500094 systemd[1]: session-55.scope: Deactivated successfully.
Feb 13 20:22:01.501591 systemd-logind[1420]: Session 55 logged out. Waiting for processes to exit.
Feb 13 20:22:01.502414 systemd-logind[1420]: Removed session 55.
Feb 13 20:22:02.504124 kubelet[2509]: E0213 20:22:02.504088 2509 kubelet.go:2900] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Feb 13 20:22:05.418323 kubelet[2509]: E0213 20:22:05.418280 2509 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 13 20:22:05.418980 kubelet[2509]: E0213 20:22:05.418949 2509 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"install-cni-plugin\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/flannel/flannel-cni-plugin:v1.1.2\\\"\"" pod="kube-flannel/kube-flannel-ds-hvbks" podUID="db3e2d2a-fd5b-4cce-b6ee-04e217b474ca"
Feb 13 20:22:06.507839 systemd[1]: Started sshd@55-10.0.0.10:22-10.0.0.1:46440.service - OpenSSH per-connection server daemon (10.0.0.1:46440).
Feb 13 20:22:06.544257 sshd[3616]: Accepted publickey for core from 10.0.0.1 port 46440 ssh2: RSA SHA256:LaFXDQ5kJ4LcD/Es1CLb+6ve3k3017804ZdZMiXaUQg
Feb 13 20:22:06.545408 sshd[3616]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 20:22:06.548825 systemd-logind[1420]: New session 56 of user core.
Feb 13 20:22:06.555568 systemd[1]: Started session-56.scope - Session 56 of User core.
Feb 13 20:22:06.660543 sshd[3616]: pam_unix(sshd:session): session closed for user core
Feb 13 20:22:06.663697 systemd[1]: sshd@55-10.0.0.10:22-10.0.0.1:46440.service: Deactivated successfully.
Feb 13 20:22:06.666583 systemd[1]: session-56.scope: Deactivated successfully.
Feb 13 20:22:06.667660 systemd-logind[1420]: Session 56 logged out. Waiting for processes to exit.
Feb 13 20:22:06.669013 systemd-logind[1420]: Removed session 56.
Feb 13 20:22:07.504883 kubelet[2509]: E0213 20:22:07.504846 2509 kubelet.go:2900] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Feb 13 20:22:11.671113 systemd[1]: Started sshd@56-10.0.0.10:22-10.0.0.1:46442.service - OpenSSH per-connection server daemon (10.0.0.1:46442).
Feb 13 20:22:11.707864 sshd[3630]: Accepted publickey for core from 10.0.0.1 port 46442 ssh2: RSA SHA256:LaFXDQ5kJ4LcD/Es1CLb+6ve3k3017804ZdZMiXaUQg
Feb 13 20:22:11.709004 sshd[3630]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 20:22:11.712475 systemd-logind[1420]: New session 57 of user core.
Feb 13 20:22:11.725634 systemd[1]: Started session-57.scope - Session 57 of User core.
Feb 13 20:22:11.830639 sshd[3630]: pam_unix(sshd:session): session closed for user core
Feb 13 20:22:11.833802 systemd[1]: sshd@56-10.0.0.10:22-10.0.0.1:46442.service: Deactivated successfully.
Feb 13 20:22:11.835493 systemd[1]: session-57.scope: Deactivated successfully.
Feb 13 20:22:11.836040 systemd-logind[1420]: Session 57 logged out. Waiting for processes to exit.
Feb 13 20:22:11.836844 systemd-logind[1420]: Removed session 57.
Feb 13 20:22:12.505930 kubelet[2509]: E0213 20:22:12.505879 2509 kubelet.go:2900] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Feb 13 20:22:16.840967 systemd[1]: Started sshd@57-10.0.0.10:22-10.0.0.1:43108.service - OpenSSH per-connection server daemon (10.0.0.1:43108).
Feb 13 20:22:16.877816 sshd[3644]: Accepted publickey for core from 10.0.0.1 port 43108 ssh2: RSA SHA256:LaFXDQ5kJ4LcD/Es1CLb+6ve3k3017804ZdZMiXaUQg
Feb 13 20:22:16.879055 sshd[3644]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 20:22:16.882979 systemd-logind[1420]: New session 58 of user core.
Feb 13 20:22:16.892603 systemd[1]: Started session-58.scope - Session 58 of User core.
Feb 13 20:22:16.996081 sshd[3644]: pam_unix(sshd:session): session closed for user core
Feb 13 20:22:16.999350 systemd[1]: sshd@57-10.0.0.10:22-10.0.0.1:43108.service: Deactivated successfully.
Feb 13 20:22:17.000938 systemd[1]: session-58.scope: Deactivated successfully.
Feb 13 20:22:17.001482 systemd-logind[1420]: Session 58 logged out. Waiting for processes to exit.
Feb 13 20:22:17.002193 systemd-logind[1420]: Removed session 58.
Feb 13 20:22:17.506943 kubelet[2509]: E0213 20:22:17.506891 2509 kubelet.go:2900] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Feb 13 20:22:18.418894 kubelet[2509]: E0213 20:22:18.418854 2509 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 13 20:22:18.419602 kubelet[2509]: E0213 20:22:18.419566 2509 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"install-cni-plugin\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/flannel/flannel-cni-plugin:v1.1.2\\\"\"" pod="kube-flannel/kube-flannel-ds-hvbks" podUID="db3e2d2a-fd5b-4cce-b6ee-04e217b474ca"
Feb 13 20:22:22.006995 systemd[1]: Started sshd@58-10.0.0.10:22-10.0.0.1:43120.service - OpenSSH per-connection server daemon (10.0.0.1:43120).
Feb 13 20:22:22.044090 sshd[3660]: Accepted publickey for core from 10.0.0.1 port 43120 ssh2: RSA SHA256:LaFXDQ5kJ4LcD/Es1CLb+6ve3k3017804ZdZMiXaUQg
Feb 13 20:22:22.045406 sshd[3660]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 20:22:22.048793 systemd-logind[1420]: New session 59 of user core.
Feb 13 20:22:22.055562 systemd[1]: Started session-59.scope - Session 59 of User core.
Feb 13 20:22:22.161581 sshd[3660]: pam_unix(sshd:session): session closed for user core
Feb 13 20:22:22.164891 systemd[1]: sshd@58-10.0.0.10:22-10.0.0.1:43120.service: Deactivated successfully.
Feb 13 20:22:22.166896 systemd[1]: session-59.scope: Deactivated successfully.
Feb 13 20:22:22.168155 systemd-logind[1420]: Session 59 logged out. Waiting for processes to exit.
Feb 13 20:22:22.168995 systemd-logind[1420]: Removed session 59.
Feb 13 20:22:22.508479 kubelet[2509]: E0213 20:22:22.508445 2509 kubelet.go:2900] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Feb 13 20:22:27.172912 systemd[1]: Started sshd@59-10.0.0.10:22-10.0.0.1:35736.service - OpenSSH per-connection server daemon (10.0.0.1:35736).
Feb 13 20:22:27.210148 sshd[3674]: Accepted publickey for core from 10.0.0.1 port 35736 ssh2: RSA SHA256:LaFXDQ5kJ4LcD/Es1CLb+6ve3k3017804ZdZMiXaUQg
Feb 13 20:22:27.211341 sshd[3674]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 20:22:27.215178 systemd-logind[1420]: New session 60 of user core.
Feb 13 20:22:27.224579 systemd[1]: Started session-60.scope - Session 60 of User core.
Feb 13 20:22:27.331151 sshd[3674]: pam_unix(sshd:session): session closed for user core
Feb 13 20:22:27.334317 systemd[1]: sshd@59-10.0.0.10:22-10.0.0.1:35736.service: Deactivated successfully.
Feb 13 20:22:27.336552 systemd[1]: session-60.scope: Deactivated successfully.
Feb 13 20:22:27.337271 systemd-logind[1420]: Session 60 logged out. Waiting for processes to exit.
Feb 13 20:22:27.338268 systemd-logind[1420]: Removed session 60.
Feb 13 20:22:27.509507 kubelet[2509]: E0213 20:22:27.509464 2509 kubelet.go:2900] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Feb 13 20:22:29.418442 kubelet[2509]: E0213 20:22:29.418367 2509 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 13 20:22:29.419059 kubelet[2509]: E0213 20:22:29.419031 2509 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"install-cni-plugin\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/flannel/flannel-cni-plugin:v1.1.2\\\"\"" pod="kube-flannel/kube-flannel-ds-hvbks" podUID="db3e2d2a-fd5b-4cce-b6ee-04e217b474ca"
Feb 13 20:22:32.345064 systemd[1]: Started sshd@60-10.0.0.10:22-10.0.0.1:35752.service - OpenSSH per-connection server daemon (10.0.0.1:35752).
Feb 13 20:22:32.381916 sshd[3689]: Accepted publickey for core from 10.0.0.1 port 35752 ssh2: RSA SHA256:LaFXDQ5kJ4LcD/Es1CLb+6ve3k3017804ZdZMiXaUQg
Feb 13 20:22:32.383084 sshd[3689]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 20:22:32.386472 systemd-logind[1420]: New session 61 of user core.
Feb 13 20:22:32.394640 systemd[1]: Started session-61.scope - Session 61 of User core.
Feb 13 20:22:32.503639 sshd[3689]: pam_unix(sshd:session): session closed for user core
Feb 13 20:22:32.506777 systemd[1]: sshd@60-10.0.0.10:22-10.0.0.1:35752.service: Deactivated successfully.
Feb 13 20:22:32.508906 systemd[1]: session-61.scope: Deactivated successfully.
Feb 13 20:22:32.509469 systemd-logind[1420]: Session 61 logged out. Waiting for processes to exit.
Feb 13 20:22:32.510117 kubelet[2509]: E0213 20:22:32.510083 2509 kubelet.go:2900] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Feb 13 20:22:32.510751 systemd-logind[1420]: Removed session 61.
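The kubelet.go:2900 condition persists for the same reason the whole log repeats: flannel's install-cni-plugin init container, whose spec logged above copies /flannel to /opt/cni/bin/flannel, has never run, so no CNI plugin is installed. kubelet reports the network ready only once a CNI config exists under /etc/cni/net.d and the binaries it references exist under /opt/cni/bin, the conventional default directories. A minimal local check, assuming those defaults:

# cni_check.py - report whether the conventional CNI directories are populated;
# on this node both stay without flannel until the init container's copy succeeds.
import os

CNI_BIN = "/opt/cni/bin"      # conventional plugin binary directory
CNI_CONF = "/etc/cni/net.d"   # conventional network config directory

def listing(path):
    try:
        return sorted(os.listdir(path))
    except FileNotFoundError:
        return None

for path in (CNI_BIN, CNI_CONF):
    entries = listing(path)
    if entries is None:
        print(f"{path}: missing")
    elif not entries:
        print(f"{path}: empty")
    else:
        print(f"{path}: {', '.join(entries)}")

print("flannel binary present:", os.path.exists(os.path.join(CNI_BIN, "flannel")))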
Feb 13 20:22:37.511826 kubelet[2509]: E0213 20:22:37.511751 2509 kubelet.go:2900] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Feb 13 20:22:37.513953 systemd[1]: Started sshd@61-10.0.0.10:22-10.0.0.1:53902.service - OpenSSH per-connection server daemon (10.0.0.1:53902).
Feb 13 20:22:37.550817 sshd[3706]: Accepted publickey for core from 10.0.0.1 port 53902 ssh2: RSA SHA256:LaFXDQ5kJ4LcD/Es1CLb+6ve3k3017804ZdZMiXaUQg
Feb 13 20:22:37.552011 sshd[3706]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 20:22:37.556110 systemd-logind[1420]: New session 62 of user core.
Feb 13 20:22:37.570577 systemd[1]: Started session-62.scope - Session 62 of User core.
Feb 13 20:22:37.678087 sshd[3706]: pam_unix(sshd:session): session closed for user core
Feb 13 20:22:37.681388 systemd[1]: sshd@61-10.0.0.10:22-10.0.0.1:53902.service: Deactivated successfully.
Feb 13 20:22:37.683103 systemd[1]: session-62.scope: Deactivated successfully.
Feb 13 20:22:37.684399 systemd-logind[1420]: Session 62 logged out. Waiting for processes to exit.
Feb 13 20:22:37.685307 systemd-logind[1420]: Removed session 62.
Feb 13 20:22:38.418467 kubelet[2509]: E0213 20:22:38.418232 2509 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 13 20:22:40.418432 kubelet[2509]: E0213 20:22:40.418305 2509 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 13 20:22:40.419341 kubelet[2509]: E0213 20:22:40.419145 2509 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"install-cni-plugin\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/flannel/flannel-cni-plugin:v1.1.2\\\"\"" pod="kube-flannel/kube-flannel-ds-hvbks" podUID="db3e2d2a-fd5b-4cce-b6ee-04e217b474ca"
Feb 13 20:22:42.513013 kubelet[2509]: E0213 20:22:42.512976 2509 kubelet.go:2900] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Feb 13 20:22:42.690119 systemd[1]: Started sshd@62-10.0.0.10:22-10.0.0.1:46530.service - OpenSSH per-connection server daemon (10.0.0.1:46530).
Feb 13 20:22:42.727123 sshd[3723]: Accepted publickey for core from 10.0.0.1 port 46530 ssh2: RSA SHA256:LaFXDQ5kJ4LcD/Es1CLb+6ve3k3017804ZdZMiXaUQg
Feb 13 20:22:42.728456 sshd[3723]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 20:22:42.733400 systemd-logind[1420]: New session 63 of user core.
Feb 13 20:22:42.745645 systemd[1]: Started session-63.scope - Session 63 of User core.
Feb 13 20:22:42.850365 sshd[3723]: pam_unix(sshd:session): session closed for user core
Feb 13 20:22:42.853975 systemd[1]: sshd@62-10.0.0.10:22-10.0.0.1:46530.service: Deactivated successfully.
Feb 13 20:22:42.855743 systemd[1]: session-63.scope: Deactivated successfully.
Feb 13 20:22:42.856389 systemd-logind[1420]: Session 63 logged out. Waiting for processes to exit.
Feb 13 20:22:42.857182 systemd-logind[1420]: Removed session 63.
Feb 13 20:22:47.514217 kubelet[2509]: E0213 20:22:47.514171 2509 kubelet.go:2900] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Feb 13 20:22:47.861004 systemd[1]: Started sshd@63-10.0.0.10:22-10.0.0.1:46540.service - OpenSSH per-connection server daemon (10.0.0.1:46540).
Feb 13 20:22:47.897816 sshd[3738]: Accepted publickey for core from 10.0.0.1 port 46540 ssh2: RSA SHA256:LaFXDQ5kJ4LcD/Es1CLb+6ve3k3017804ZdZMiXaUQg
Feb 13 20:22:47.899115 sshd[3738]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 20:22:47.902776 systemd-logind[1420]: New session 64 of user core.
Feb 13 20:22:47.916575 systemd[1]: Started session-64.scope - Session 64 of User core.
Feb 13 20:22:48.020025 sshd[3738]: pam_unix(sshd:session): session closed for user core
Feb 13 20:22:48.023282 systemd[1]: sshd@63-10.0.0.10:22-10.0.0.1:46540.service: Deactivated successfully.
Feb 13 20:22:48.025002 systemd[1]: session-64.scope: Deactivated successfully.
Feb 13 20:22:48.026080 systemd-logind[1420]: Session 64 logged out. Waiting for processes to exit.
Feb 13 20:22:48.026988 systemd-logind[1420]: Removed session 64.
Feb 13 20:22:48.418628 kubelet[2509]: E0213 20:22:48.418533 2509 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 13 20:22:50.418249 kubelet[2509]: E0213 20:22:50.418198 2509 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 13 20:22:52.418981 kubelet[2509]: E0213 20:22:52.418864 2509 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 13 20:22:52.420337 kubelet[2509]: E0213 20:22:52.419933 2509 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"install-cni-plugin\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/flannel/flannel-cni-plugin:v1.1.2\\\"\"" pod="kube-flannel/kube-flannel-ds-hvbks" podUID="db3e2d2a-fd5b-4cce-b6ee-04e217b474ca"
Feb 13 20:22:52.514991 kubelet[2509]: E0213 20:22:52.514946 2509 kubelet.go:2900] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Feb 13 20:22:53.030891 systemd[1]: Started sshd@64-10.0.0.10:22-10.0.0.1:57832.service - OpenSSH per-connection server daemon (10.0.0.1:57832).
Feb 13 20:22:53.067888 sshd[3755]: Accepted publickey for core from 10.0.0.1 port 57832 ssh2: RSA SHA256:LaFXDQ5kJ4LcD/Es1CLb+6ve3k3017804ZdZMiXaUQg
Feb 13 20:22:53.069050 sshd[3755]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 20:22:53.072982 systemd-logind[1420]: New session 65 of user core.
Feb 13 20:22:53.082555 systemd[1]: Started session-65.scope - Session 65 of User core.
Feb 13 20:22:53.185376 sshd[3755]: pam_unix(sshd:session): session closed for user core
Feb 13 20:22:53.189071 systemd[1]: sshd@64-10.0.0.10:22-10.0.0.1:57832.service: Deactivated successfully.
Feb 13 20:22:53.190882 systemd[1]: session-65.scope: Deactivated successfully.
Feb 13 20:22:53.191476 systemd-logind[1420]: Session 65 logged out. Waiting for processes to exit.
Feb 13 20:22:53.192268 systemd-logind[1420]: Removed session 65.
Feb 13 20:22:57.516105 kubelet[2509]: E0213 20:22:57.516037 2509 kubelet.go:2900] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Feb 13 20:22:58.196937 systemd[1]: Started sshd@65-10.0.0.10:22-10.0.0.1:57840.service - OpenSSH per-connection server daemon (10.0.0.1:57840).
Feb 13 20:22:58.233810 sshd[3770]: Accepted publickey for core from 10.0.0.1 port 57840 ssh2: RSA SHA256:LaFXDQ5kJ4LcD/Es1CLb+6ve3k3017804ZdZMiXaUQg
Feb 13 20:22:58.235014 sshd[3770]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 20:22:58.238370 systemd-logind[1420]: New session 66 of user core.
Feb 13 20:22:58.246575 systemd[1]: Started session-66.scope - Session 66 of User core.
Feb 13 20:22:58.351276 sshd[3770]: pam_unix(sshd:session): session closed for user core
Feb 13 20:22:58.354562 systemd[1]: sshd@65-10.0.0.10:22-10.0.0.1:57840.service: Deactivated successfully.
Feb 13 20:22:58.356365 systemd[1]: session-66.scope: Deactivated successfully.
Feb 13 20:22:58.357675 systemd-logind[1420]: Session 66 logged out. Waiting for processes to exit.
Feb 13 20:22:58.358553 systemd-logind[1420]: Removed session 66.
Feb 13 20:23:02.516978 kubelet[2509]: E0213 20:23:02.516891 2509 kubelet.go:2900] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Feb 13 20:23:03.362056 systemd[1]: Started sshd@66-10.0.0.10:22-10.0.0.1:50584.service - OpenSSH per-connection server daemon (10.0.0.1:50584).
Feb 13 20:23:03.398671 sshd[3784]: Accepted publickey for core from 10.0.0.1 port 50584 ssh2: RSA SHA256:LaFXDQ5kJ4LcD/Es1CLb+6ve3k3017804ZdZMiXaUQg
Feb 13 20:23:03.399927 sshd[3784]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 20:23:03.403478 systemd-logind[1420]: New session 67 of user core.
Feb 13 20:23:03.412656 systemd[1]: Started session-67.scope - Session 67 of User core.
Feb 13 20:23:03.516008 sshd[3784]: pam_unix(sshd:session): session closed for user core
Feb 13 20:23:03.519123 systemd[1]: sshd@66-10.0.0.10:22-10.0.0.1:50584.service: Deactivated successfully.
Feb 13 20:23:03.520744 systemd[1]: session-67.scope: Deactivated successfully.
Feb 13 20:23:03.521866 systemd-logind[1420]: Session 67 logged out. Waiting for processes to exit.
Feb 13 20:23:03.523527 systemd-logind[1420]: Removed session 67.
Feb 13 20:23:05.418771 kubelet[2509]: E0213 20:23:05.418656 2509 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 13 20:23:05.419322 kubelet[2509]: E0213 20:23:05.419286 2509 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"install-cni-plugin\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/flannel/flannel-cni-plugin:v1.1.2\\\"\"" pod="kube-flannel/kube-flannel-ds-hvbks" podUID="db3e2d2a-fd5b-4cce-b6ee-04e217b474ca"
Feb 13 20:23:07.518602 kubelet[2509]: E0213 20:23:07.518558 2509 kubelet.go:2900] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Feb 13 20:23:08.526959 systemd[1]: Started sshd@67-10.0.0.10:22-10.0.0.1:50592.service - OpenSSH per-connection server daemon (10.0.0.1:50592).
Feb 13 20:23:08.563753 sshd[3799]: Accepted publickey for core from 10.0.0.1 port 50592 ssh2: RSA SHA256:LaFXDQ5kJ4LcD/Es1CLb+6ve3k3017804ZdZMiXaUQg
Feb 13 20:23:08.564900 sshd[3799]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 20:23:08.568166 systemd-logind[1420]: New session 68 of user core.
Feb 13 20:23:08.575562 systemd[1]: Started session-68.scope - Session 68 of User core.
Feb 13 20:23:08.678294 sshd[3799]: pam_unix(sshd:session): session closed for user core
Feb 13 20:23:08.681544 systemd[1]: sshd@67-10.0.0.10:22-10.0.0.1:50592.service: Deactivated successfully.
Feb 13 20:23:08.683083 systemd[1]: session-68.scope: Deactivated successfully.
Feb 13 20:23:08.684319 systemd-logind[1420]: Session 68 logged out. Waiting for processes to exit.
Feb 13 20:23:08.685247 systemd-logind[1420]: Removed session 68.
Feb 13 20:23:10.419337 kubelet[2509]: E0213 20:23:10.419264 2509 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 13 20:23:12.519799 kubelet[2509]: E0213 20:23:12.519762 2509 kubelet.go:2900] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Feb 13 20:23:13.688880 systemd[1]: Started sshd@68-10.0.0.10:22-10.0.0.1:37228.service - OpenSSH per-connection server daemon (10.0.0.1:37228).
Feb 13 20:23:13.725676 sshd[3815]: Accepted publickey for core from 10.0.0.1 port 37228 ssh2: RSA SHA256:LaFXDQ5kJ4LcD/Es1CLb+6ve3k3017804ZdZMiXaUQg
Feb 13 20:23:13.726812 sshd[3815]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 20:23:13.730500 systemd-logind[1420]: New session 69 of user core.
Feb 13 20:23:13.753573 systemd[1]: Started session-69.scope - Session 69 of User core.
Feb 13 20:23:13.855728 sshd[3815]: pam_unix(sshd:session): session closed for user core
Feb 13 20:23:13.858928 systemd[1]: sshd@68-10.0.0.10:22-10.0.0.1:37228.service: Deactivated successfully.
Feb 13 20:23:13.860604 systemd[1]: session-69.scope: Deactivated successfully.
Feb 13 20:23:13.861823 systemd-logind[1420]: Session 69 logged out. Waiting for processes to exit.
Feb 13 20:23:13.862711 systemd-logind[1420]: Removed session 69.
Feb 13 20:23:17.418443 kubelet[2509]: E0213 20:23:17.418387 2509 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 13 20:23:17.418968 kubelet[2509]: E0213 20:23:17.418927 2509 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"install-cni-plugin\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/flannel/flannel-cni-plugin:v1.1.2\\\"\"" pod="kube-flannel/kube-flannel-ds-hvbks" podUID="db3e2d2a-fd5b-4cce-b6ee-04e217b474ca"
Feb 13 20:23:17.521236 kubelet[2509]: E0213 20:23:17.521207 2509 kubelet.go:2900] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Feb 13 20:23:18.870224 systemd[1]: Started sshd@69-10.0.0.10:22-10.0.0.1:37236.service - OpenSSH per-connection server daemon (10.0.0.1:37236).
Feb 13 20:23:18.906771 sshd[3832]: Accepted publickey for core from 10.0.0.1 port 37236 ssh2: RSA SHA256:LaFXDQ5kJ4LcD/Es1CLb+6ve3k3017804ZdZMiXaUQg
Feb 13 20:23:18.907916 sshd[3832]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 20:23:18.911492 systemd-logind[1420]: New session 70 of user core.
Feb 13 20:23:18.922601 systemd[1]: Started session-70.scope - Session 70 of User core.
Feb 13 20:23:19.024506 sshd[3832]: pam_unix(sshd:session): session closed for user core
Feb 13 20:23:19.027679 systemd[1]: sshd@69-10.0.0.10:22-10.0.0.1:37236.service: Deactivated successfully.
Feb 13 20:23:19.029297 systemd[1]: session-70.scope: Deactivated successfully.
Feb 13 20:23:19.030719 systemd-logind[1420]: Session 70 logged out. Waiting for processes to exit.
Feb 13 20:23:19.032167 systemd-logind[1420]: Removed session 70.
Feb 13 20:23:22.522093 kubelet[2509]: E0213 20:23:22.522054 2509 kubelet.go:2900] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Feb 13 20:23:24.035268 systemd[1]: Started sshd@70-10.0.0.10:22-10.0.0.1:55666.service - OpenSSH per-connection server daemon (10.0.0.1:55666).
Feb 13 20:23:24.071890 sshd[3847]: Accepted publickey for core from 10.0.0.1 port 55666 ssh2: RSA SHA256:LaFXDQ5kJ4LcD/Es1CLb+6ve3k3017804ZdZMiXaUQg
Feb 13 20:23:24.073080 sshd[3847]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 20:23:24.076931 systemd-logind[1420]: New session 71 of user core.
Feb 13 20:23:24.088642 systemd[1]: Started session-71.scope - Session 71 of User core.
Feb 13 20:23:24.192876 sshd[3847]: pam_unix(sshd:session): session closed for user core
Feb 13 20:23:24.196294 systemd[1]: sshd@70-10.0.0.10:22-10.0.0.1:55666.service: Deactivated successfully.
Feb 13 20:23:24.198022 systemd[1]: session-71.scope: Deactivated successfully.
Feb 13 20:23:24.199262 systemd-logind[1420]: Session 71 logged out. Waiting for processes to exit.
Feb 13 20:23:24.200079 systemd-logind[1420]: Removed session 71.
Feb 13 20:23:27.523386 kubelet[2509]: E0213 20:23:27.523331 2509 kubelet.go:2900] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Feb 13 20:23:29.203060 systemd[1]: Started sshd@71-10.0.0.10:22-10.0.0.1:55676.service - OpenSSH per-connection server daemon (10.0.0.1:55676).
Feb 13 20:23:29.239906 sshd[3861]: Accepted publickey for core from 10.0.0.1 port 55676 ssh2: RSA SHA256:LaFXDQ5kJ4LcD/Es1CLb+6ve3k3017804ZdZMiXaUQg
Feb 13 20:23:29.241061 sshd[3861]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 20:23:29.244556 systemd-logind[1420]: New session 72 of user core.
Feb 13 20:23:29.254626 systemd[1]: Started session-72.scope - Session 72 of User core.
Feb 13 20:23:29.359222 sshd[3861]: pam_unix(sshd:session): session closed for user core
Feb 13 20:23:29.362478 systemd[1]: sshd@71-10.0.0.10:22-10.0.0.1:55676.service: Deactivated successfully.
Feb 13 20:23:29.364194 systemd[1]: session-72.scope: Deactivated successfully.
Feb 13 20:23:29.364796 systemd-logind[1420]: Session 72 logged out. Waiting for processes to exit.
Feb 13 20:23:29.365616 systemd-logind[1420]: Removed session 72.
Feb 13 20:23:32.418690 kubelet[2509]: E0213 20:23:32.418589 2509 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 13 20:23:32.419853 containerd[1437]: time="2025-02-13T20:23:32.419706102Z" level=info msg="PullImage \"docker.io/flannel/flannel-cni-plugin:v1.1.2\""
Feb 13 20:23:32.524478 kubelet[2509]: E0213 20:23:32.524414 2509 kubelet.go:2900] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Feb 13 20:23:33.529765 containerd[1437]: time="2025-02-13T20:23:33.529700112Z" level=error msg="PullImage \"docker.io/flannel/flannel-cni-plugin:v1.1.2\" failed" error="failed to pull and unpack image \"docker.io/flannel/flannel-cni-plugin:v1.1.2\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/flannel/flannel-cni-plugin/manifests/sha256:bf4b62b131666d040f35a327d906ee5a3418280b68a88d9b9c7e828057210443: 429 Too Many Requests - Server message: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit"
Feb 13 20:23:33.530124 containerd[1437]: time="2025-02-13T20:23:33.529783875Z" level=info msg="stop pulling image docker.io/flannel/flannel-cni-plugin:v1.1.2: active requests=0, bytes read=11110"
Feb 13 20:23:33.530155 kubelet[2509]: E0213 20:23:33.529936 2509 remote_image.go:180] "PullImage from image service failed" err="rpc error: code = Unknown desc = failed to pull and unpack image \"docker.io/flannel/flannel-cni-plugin:v1.1.2\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/flannel/flannel-cni-plugin/manifests/sha256:bf4b62b131666d040f35a327d906ee5a3418280b68a88d9b9c7e828057210443: 429 Too Many Requests - Server message: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit" image="docker.io/flannel/flannel-cni-plugin:v1.1.2"
Feb 13 20:23:33.530155 kubelet[2509]: E0213 20:23:33.529990 2509 kuberuntime_image.go:55] "Failed to pull image" err="failed to pull and unpack image \"docker.io/flannel/flannel-cni-plugin:v1.1.2\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/flannel/flannel-cni-plugin/manifests/sha256:bf4b62b131666d040f35a327d906ee5a3418280b68a88d9b9c7e828057210443: 429 Too Many Requests - Server message: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit" image="docker.io/flannel/flannel-cni-plugin:v1.1.2"
Feb 13 20:23:33.530381 kubelet[2509]: E0213 20:23:33.530079 2509 kuberuntime_manager.go:1256] init container &Container{Name:install-cni-plugin,Image:docker.io/flannel/flannel-cni-plugin:v1.1.2,Command:[cp],Args:[-f /flannel /opt/cni/bin/flannel],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:cni-plugin,ReadOnly:false,MountPath:/opt/cni/bin,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-zcvvd,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod kube-flannel-ds-hvbks_kube-flannel(db3e2d2a-fd5b-4cce-b6ee-04e217b474ca): ErrImagePull: failed to pull and unpack image "docker.io/flannel/flannel-cni-plugin:v1.1.2": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/flannel/flannel-cni-plugin/manifests/sha256:bf4b62b131666d040f35a327d906ee5a3418280b68a88d9b9c7e828057210443: 429 Too Many Requests - Server message: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit
Feb 13 20:23:33.530453 kubelet[2509]: E0213 20:23:33.530110 2509 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"install-cni-plugin\" with ErrImagePull: \"failed to pull and unpack image \\\"docker.io/flannel/flannel-cni-plugin:v1.1.2\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/flannel/flannel-cni-plugin/manifests/sha256:bf4b62b131666d040f35a327d906ee5a3418280b68a88d9b9c7e828057210443: 429 Too Many Requests - Server message: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit\"" pod="kube-flannel/kube-flannel-ds-hvbks" podUID="db3e2d2a-fd5b-4cce-b6ee-04e217b474ca"
Feb 13 20:23:34.369930 systemd[1]: Started sshd@72-10.0.0.10:22-10.0.0.1:55758.service - OpenSSH per-connection server daemon (10.0.0.1:55758).
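With the same three failure signatures recurring for minutes at a time, a quick tally per signature is an easy way to size the problem from an exported journal (for example, journalctl --no-pager > node.log). A sketch that counts the patterns visible in this log; the file name and the pattern list are illustrative:

# log_summary.py - tally the recurring failure signatures from an exported journal.
import re
import sys
from collections import Counter

PATTERNS = {
    "nameserver limit": re.compile(r"Nameserver limits (were )?exceeded"),
    "image back-off": re.compile(r"ImagePullBackOff"),
    "pull rate-limited (429)": re.compile(r"429 Too Many Requests"),
    "network not ready": re.compile(r"Container runtime network not ready"),
}

def summarize(lines):
    counts = Counter()
    for line in lines:
        for label, pat in PATTERNS.items():
            if pat.search(line):
                counts[label] += 1
    return counts

if __name__ == "__main__":
    with open(sys.argv[1]) as f:  # e.g. node.log from `journalctl --no-pager`
        for label, n in summarize(f).most_common():
            print(f"{n:6d}  {label}")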
Feb 13 20:23:34.406655 sshd[3878]: Accepted publickey for core from 10.0.0.1 port 55758 ssh2: RSA SHA256:LaFXDQ5kJ4LcD/Es1CLb+6ve3k3017804ZdZMiXaUQg Feb 13 20:23:34.407829 sshd[3878]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 20:23:34.411364 systemd-logind[1420]: New session 73 of user core. Feb 13 20:23:34.419695 systemd[1]: Started session-73.scope - Session 73 of User core. Feb 13 20:23:34.525553 sshd[3878]: pam_unix(sshd:session): session closed for user core Feb 13 20:23:34.528866 systemd[1]: sshd@72-10.0.0.10:22-10.0.0.1:55758.service: Deactivated successfully. Feb 13 20:23:34.530687 systemd[1]: session-73.scope: Deactivated successfully. Feb 13 20:23:34.531295 systemd-logind[1420]: Session 73 logged out. Waiting for processes to exit. Feb 13 20:23:34.532070 systemd-logind[1420]: Removed session 73. Feb 13 20:23:37.525384 kubelet[2509]: E0213 20:23:37.525284 2509 kubelet.go:2900] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Feb 13 20:23:39.535932 systemd[1]: Started sshd@73-10.0.0.10:22-10.0.0.1:55762.service - OpenSSH per-connection server daemon (10.0.0.1:55762). Feb 13 20:23:39.572626 sshd[3893]: Accepted publickey for core from 10.0.0.1 port 55762 ssh2: RSA SHA256:LaFXDQ5kJ4LcD/Es1CLb+6ve3k3017804ZdZMiXaUQg Feb 13 20:23:39.573796 sshd[3893]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 20:23:39.577054 systemd-logind[1420]: New session 74 of user core. Feb 13 20:23:39.582563 systemd[1]: Started session-74.scope - Session 74 of User core. Feb 13 20:23:39.686581 sshd[3893]: pam_unix(sshd:session): session closed for user core Feb 13 20:23:39.689737 systemd[1]: sshd@73-10.0.0.10:22-10.0.0.1:55762.service: Deactivated successfully. Feb 13 20:23:39.691370 systemd[1]: session-74.scope: Deactivated successfully. Feb 13 20:23:39.692029 systemd-logind[1420]: Session 74 logged out. Waiting for processes to exit. Feb 13 20:23:39.693024 systemd-logind[1420]: Removed session 74. Feb 13 20:23:42.525866 kubelet[2509]: E0213 20:23:42.525815 2509 kubelet.go:2900] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Feb 13 20:23:44.701013 systemd[1]: Started sshd@74-10.0.0.10:22-10.0.0.1:58652.service - OpenSSH per-connection server daemon (10.0.0.1:58652). Feb 13 20:23:44.737358 sshd[3908]: Accepted publickey for core from 10.0.0.1 port 58652 ssh2: RSA SHA256:LaFXDQ5kJ4LcD/Es1CLb+6ve3k3017804ZdZMiXaUQg Feb 13 20:23:44.738531 sshd[3908]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 20:23:44.742628 systemd-logind[1420]: New session 75 of user core. Feb 13 20:23:44.747646 systemd[1]: Started session-75.scope - Session 75 of User core. Feb 13 20:23:44.852806 sshd[3908]: pam_unix(sshd:session): session closed for user core Feb 13 20:23:44.856003 systemd[1]: sshd@74-10.0.0.10:22-10.0.0.1:58652.service: Deactivated successfully. Feb 13 20:23:44.857682 systemd[1]: session-75.scope: Deactivated successfully. Feb 13 20:23:44.858227 systemd-logind[1420]: Session 75 logged out. Waiting for processes to exit. Feb 13 20:23:44.858900 systemd-logind[1420]: Removed session 75. 
Feb 13 20:23:47.418034 kubelet[2509]: E0213 20:23:47.417998 2509 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 20:23:47.418935 kubelet[2509]: E0213 20:23:47.418632 2509 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"install-cni-plugin\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/flannel/flannel-cni-plugin:v1.1.2\\\"\"" pod="kube-flannel/kube-flannel-ds-hvbks" podUID="db3e2d2a-fd5b-4cce-b6ee-04e217b474ca" Feb 13 20:23:47.526651 kubelet[2509]: E0213 20:23:47.526617 2509 kubelet.go:2900] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Feb 13 20:23:49.863811 systemd[1]: Started sshd@75-10.0.0.10:22-10.0.0.1:58668.service - OpenSSH per-connection server daemon (10.0.0.1:58668). Feb 13 20:23:49.900313 sshd[3925]: Accepted publickey for core from 10.0.0.1 port 58668 ssh2: RSA SHA256:LaFXDQ5kJ4LcD/Es1CLb+6ve3k3017804ZdZMiXaUQg Feb 13 20:23:49.901525 sshd[3925]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 20:23:49.904936 systemd-logind[1420]: New session 76 of user core. Feb 13 20:23:49.912633 systemd[1]: Started session-76.scope - Session 76 of User core. Feb 13 20:23:50.014598 sshd[3925]: pam_unix(sshd:session): session closed for user core Feb 13 20:23:50.017760 systemd[1]: sshd@75-10.0.0.10:22-10.0.0.1:58668.service: Deactivated successfully. Feb 13 20:23:50.019820 systemd[1]: session-76.scope: Deactivated successfully. Feb 13 20:23:50.020451 systemd-logind[1420]: Session 76 logged out. Waiting for processes to exit. Feb 13 20:23:50.021536 systemd-logind[1420]: Removed session 76. Feb 13 20:23:50.107755 update_engine[1424]: I20250213 20:23:50.107687 1424 prefs.cc:52] certificate-report-to-send-update not present in /var/lib/update_engine/prefs Feb 13 20:23:50.107755 update_engine[1424]: I20250213 20:23:50.107750 1424 prefs.cc:52] certificate-report-to-send-download not present in /var/lib/update_engine/prefs Feb 13 20:23:50.108055 update_engine[1424]: I20250213 20:23:50.107996 1424 prefs.cc:52] aleph-version not present in /var/lib/update_engine/prefs Feb 13 20:23:50.108390 update_engine[1424]: I20250213 20:23:50.108354 1424 omaha_request_params.cc:62] Current group set to lts Feb 13 20:23:50.108589 update_engine[1424]: I20250213 20:23:50.108557 1424 update_attempter.cc:499] Already updated boot flags. Skipping. Feb 13 20:23:50.108589 update_engine[1424]: I20250213 20:23:50.108585 1424 update_attempter.cc:643] Scheduling an action processor start. 
Feb 13 20:23:50.108649 update_engine[1424]: I20250213 20:23:50.108603 1424 action_processor.cc:36] ActionProcessor::StartProcessing: OmahaRequestAction Feb 13 20:23:50.108649 update_engine[1424]: I20250213 20:23:50.108629 1424 prefs.cc:52] previous-version not present in /var/lib/update_engine/prefs Feb 13 20:23:50.108709 locksmithd[1465]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_CHECKING_FOR_UPDATE" NewVersion=0.0.0 NewSize=0 Feb 13 20:23:50.108883 update_engine[1424]: I20250213 20:23:50.108686 1424 omaha_request_action.cc:271] Posting an Omaha request to disabled Feb 13 20:23:50.108883 update_engine[1424]: I20250213 20:23:50.108696 1424 omaha_request_action.cc:272] Request: Feb 13 20:23:50.108883 update_engine[1424]: I20250213 20:23:50.108702 1424 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Feb 13 20:23:50.109727 update_engine[1424]: I20250213 20:23:50.109694 1424 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Feb 13 20:23:50.109946 update_engine[1424]: I20250213 20:23:50.109915 1424 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. Feb 13 20:23:50.148592 update_engine[1424]: E20250213 20:23:50.148476 1424 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Feb 13 20:23:50.148592 update_engine[1424]: I20250213 20:23:50.148543 1424 libcurl_http_fetcher.cc:283] No HTTP response, retry 1 Feb 13 20:23:52.527997 kubelet[2509]: E0213 20:23:52.527951 2509 kubelet.go:2900] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Feb 13 20:23:55.024825 systemd[1]: Started sshd@76-10.0.0.10:22-10.0.0.1:33884.service - OpenSSH per-connection server daemon (10.0.0.1:33884). Feb 13 20:23:55.061494 sshd[3939]: Accepted publickey for core from 10.0.0.1 port 33884 ssh2: RSA SHA256:LaFXDQ5kJ4LcD/Es1CLb+6ve3k3017804ZdZMiXaUQg Feb 13 20:23:55.062641 sshd[3939]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 20:23:55.065872 systemd-logind[1420]: New session 77 of user core. Feb 13 20:23:55.075611 systemd[1]: Started session-77.scope - Session 77 of User core. Feb 13 20:23:55.178141 sshd[3939]: pam_unix(sshd:session): session closed for user core Feb 13 20:23:55.181293 systemd[1]: sshd@76-10.0.0.10:22-10.0.0.1:33884.service: Deactivated successfully. Feb 13 20:23:55.183996 systemd[1]: session-77.scope: Deactivated successfully. Feb 13 20:23:55.184813 systemd-logind[1420]: Session 77 logged out. Waiting for processes to exit. Feb 13 20:23:55.185756 systemd-logind[1420]: Removed session 77.
Feb 13 20:23:56.419026 kubelet[2509]: E0213 20:23:56.418941 2509 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 20:23:57.529526 kubelet[2509]: E0213 20:23:57.529472 2509 kubelet.go:2900] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Feb 13 20:23:59.418092 kubelet[2509]: E0213 20:23:59.418055 2509 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 20:23:59.419047 kubelet[2509]: E0213 20:23:59.418841 2509 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"install-cni-plugin\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/flannel/flannel-cni-plugin:v1.1.2\\\"\"" pod="kube-flannel/kube-flannel-ds-hvbks" podUID="db3e2d2a-fd5b-4cce-b6ee-04e217b474ca" Feb 13 20:24:00.113132 update_engine[1424]: I20250213 20:24:00.113025 1424 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Feb 13 20:24:00.113586 update_engine[1424]: I20250213 20:24:00.113332 1424 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Feb 13 20:24:00.113586 update_engine[1424]: I20250213 20:24:00.113516 1424 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. Feb 13 20:24:00.125060 update_engine[1424]: E20250213 20:24:00.125020 1424 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Feb 13 20:24:00.125141 update_engine[1424]: I20250213 20:24:00.125074 1424 libcurl_http_fetcher.cc:283] No HTTP response, retry 2 Feb 13 20:24:00.197082 systemd[1]: Started sshd@77-10.0.0.10:22-10.0.0.1:33900.service - OpenSSH per-connection server daemon (10.0.0.1:33900). Feb 13 20:24:00.233723 sshd[3954]: Accepted publickey for core from 10.0.0.1 port 33900 ssh2: RSA SHA256:LaFXDQ5kJ4LcD/Es1CLb+6ve3k3017804ZdZMiXaUQg Feb 13 20:24:00.234859 sshd[3954]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 20:24:00.238591 systemd-logind[1420]: New session 78 of user core. Feb 13 20:24:00.245619 systemd[1]: Started session-78.scope - Session 78 of User core. Feb 13 20:24:00.347198 sshd[3954]: pam_unix(sshd:session): session closed for user core Feb 13 20:24:00.359964 systemd[1]: sshd@77-10.0.0.10:22-10.0.0.1:33900.service: Deactivated successfully. Feb 13 20:24:00.361685 systemd[1]: session-78.scope: Deactivated successfully. Feb 13 20:24:00.362923 systemd-logind[1420]: Session 78 logged out. Waiting for processes to exit. Feb 13 20:24:00.364138 systemd[1]: Started sshd@78-10.0.0.10:22-10.0.0.1:33916.service - OpenSSH per-connection server daemon (10.0.0.1:33916). Feb 13 20:24:00.365411 systemd-logind[1420]: Removed session 78. Feb 13 20:24:00.400808 sshd[3969]: Accepted publickey for core from 10.0.0.1 port 33916 ssh2: RSA SHA256:LaFXDQ5kJ4LcD/Es1CLb+6ve3k3017804ZdZMiXaUQg Feb 13 20:24:00.401934 sshd[3969]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 20:24:00.405598 systemd-logind[1420]: New session 79 of user core. Feb 13 20:24:00.410585 systemd[1]: Started session-79.scope - Session 79 of User core. 
Feb 13 20:24:00.596046 sshd[3969]: pam_unix(sshd:session): session closed for user core Feb 13 20:24:00.607803 systemd[1]: sshd@78-10.0.0.10:22-10.0.0.1:33916.service: Deactivated successfully. Feb 13 20:24:00.610050 systemd[1]: session-79.scope: Deactivated successfully. Feb 13 20:24:00.611758 systemd-logind[1420]: Session 79 logged out. Waiting for processes to exit. Feb 13 20:24:00.613326 systemd[1]: Started sshd@79-10.0.0.10:22-10.0.0.1:33926.service - OpenSSH per-connection server daemon (10.0.0.1:33926). Feb 13 20:24:00.614505 systemd-logind[1420]: Removed session 79. Feb 13 20:24:00.650893 sshd[3981]: Accepted publickey for core from 10.0.0.1 port 33926 ssh2: RSA SHA256:LaFXDQ5kJ4LcD/Es1CLb+6ve3k3017804ZdZMiXaUQg Feb 13 20:24:00.652032 sshd[3981]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 20:24:00.655459 systemd-logind[1420]: New session 80 of user core. Feb 13 20:24:00.663579 systemd[1]: Started session-80.scope - Session 80 of User core. Feb 13 20:24:01.728273 sshd[3981]: pam_unix(sshd:session): session closed for user core Feb 13 20:24:01.739085 systemd[1]: sshd@79-10.0.0.10:22-10.0.0.1:33926.service: Deactivated successfully. Feb 13 20:24:01.743970 systemd[1]: session-80.scope: Deactivated successfully. Feb 13 20:24:01.746064 systemd-logind[1420]: Session 80 logged out. Waiting for processes to exit. Feb 13 20:24:01.755707 systemd[1]: Started sshd@80-10.0.0.10:22-10.0.0.1:33932.service - OpenSSH per-connection server daemon (10.0.0.1:33932). Feb 13 20:24:01.756612 systemd-logind[1420]: Removed session 80. Feb 13 20:24:01.790258 sshd[4004]: Accepted publickey for core from 10.0.0.1 port 33932 ssh2: RSA SHA256:LaFXDQ5kJ4LcD/Es1CLb+6ve3k3017804ZdZMiXaUQg Feb 13 20:24:01.791554 sshd[4004]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 20:24:01.795507 systemd-logind[1420]: New session 81 of user core. Feb 13 20:24:01.806582 systemd[1]: Started session-81.scope - Session 81 of User core. Feb 13 20:24:02.001758 sshd[4004]: pam_unix(sshd:session): session closed for user core Feb 13 20:24:02.008604 systemd[1]: sshd@80-10.0.0.10:22-10.0.0.1:33932.service: Deactivated successfully. Feb 13 20:24:02.009989 systemd[1]: session-81.scope: Deactivated successfully. Feb 13 20:24:02.011587 systemd-logind[1420]: Session 81 logged out. Waiting for processes to exit. Feb 13 20:24:02.018668 systemd[1]: Started sshd@81-10.0.0.10:22-10.0.0.1:33940.service - OpenSSH per-connection server daemon (10.0.0.1:33940). Feb 13 20:24:02.019813 systemd-logind[1420]: Removed session 81. Feb 13 20:24:02.051997 sshd[4017]: Accepted publickey for core from 10.0.0.1 port 33940 ssh2: RSA SHA256:LaFXDQ5kJ4LcD/Es1CLb+6ve3k3017804ZdZMiXaUQg Feb 13 20:24:02.053150 sshd[4017]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 20:24:02.056567 systemd-logind[1420]: New session 82 of user core. Feb 13 20:24:02.063625 systemd[1]: Started session-82.scope - Session 82 of User core. Feb 13 20:24:02.166040 sshd[4017]: pam_unix(sshd:session): session closed for user core Feb 13 20:24:02.169362 systemd[1]: sshd@81-10.0.0.10:22-10.0.0.1:33940.service: Deactivated successfully. Feb 13 20:24:02.170964 systemd[1]: session-82.scope: Deactivated successfully. Feb 13 20:24:02.171559 systemd-logind[1420]: Session 82 logged out. Waiting for processes to exit. Feb 13 20:24:02.172340 systemd-logind[1420]: Removed session 82. 
Feb 13 20:24:02.530493 kubelet[2509]: E0213 20:24:02.530452 2509 kubelet.go:2900] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Feb 13 20:24:07.176924 systemd[1]: Started sshd@82-10.0.0.10:22-10.0.0.1:37418.service - OpenSSH per-connection server daemon (10.0.0.1:37418). Feb 13 20:24:07.213506 sshd[4032]: Accepted publickey for core from 10.0.0.1 port 37418 ssh2: RSA SHA256:LaFXDQ5kJ4LcD/Es1CLb+6ve3k3017804ZdZMiXaUQg Feb 13 20:24:07.214642 sshd[4032]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 20:24:07.217937 systemd-logind[1420]: New session 83 of user core. Feb 13 20:24:07.229610 systemd[1]: Started session-83.scope - Session 83 of User core. Feb 13 20:24:07.332565 sshd[4032]: pam_unix(sshd:session): session closed for user core Feb 13 20:24:07.335752 systemd[1]: sshd@82-10.0.0.10:22-10.0.0.1:37418.service: Deactivated successfully. Feb 13 20:24:07.337734 systemd[1]: session-83.scope: Deactivated successfully. Feb 13 20:24:07.338369 systemd-logind[1420]: Session 83 logged out. Waiting for processes to exit. Feb 13 20:24:07.339132 systemd-logind[1420]: Removed session 83. Feb 13 20:24:07.531446 kubelet[2509]: E0213 20:24:07.531386 2509 kubelet.go:2900] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Feb 13 20:24:10.110585 update_engine[1424]: I20250213 20:24:10.110508 1424 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Feb 13 20:24:10.110926 update_engine[1424]: I20250213 20:24:10.110774 1424 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Feb 13 20:24:10.110952 update_engine[1424]: I20250213 20:24:10.110939 1424 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. Feb 13 20:24:10.116294 update_engine[1424]: E20250213 20:24:10.116250 1424 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Feb 13 20:24:10.116347 update_engine[1424]: I20250213 20:24:10.116304 1424 libcurl_http_fetcher.cc:283] No HTTP response, retry 3 Feb 13 20:24:10.418746 kubelet[2509]: E0213 20:24:10.418632 2509 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 20:24:10.419522 kubelet[2509]: E0213 20:24:10.419459 2509 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"install-cni-plugin\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/flannel/flannel-cni-plugin:v1.1.2\\\"\"" pod="kube-flannel/kube-flannel-ds-hvbks" podUID="db3e2d2a-fd5b-4cce-b6ee-04e217b474ca" Feb 13 20:24:12.343797 systemd[1]: Started sshd@83-10.0.0.10:22-10.0.0.1:37426.service - OpenSSH per-connection server daemon (10.0.0.1:37426). Feb 13 20:24:12.380544 sshd[4046]: Accepted publickey for core from 10.0.0.1 port 37426 ssh2: RSA SHA256:LaFXDQ5kJ4LcD/Es1CLb+6ve3k3017804ZdZMiXaUQg Feb 13 20:24:12.381698 sshd[4046]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 20:24:12.385166 systemd-logind[1420]: New session 84 of user core. Feb 13 20:24:12.395573 systemd[1]: Started session-84.scope - Session 84 of User core. 
Feb 13 20:24:12.497667 sshd[4046]: pam_unix(sshd:session): session closed for user core Feb 13 20:24:12.500831 systemd[1]: sshd@83-10.0.0.10:22-10.0.0.1:37426.service: Deactivated successfully. Feb 13 20:24:12.503990 systemd[1]: session-84.scope: Deactivated successfully. Feb 13 20:24:12.504543 systemd-logind[1420]: Session 84 logged out. Waiting for processes to exit. Feb 13 20:24:12.505294 systemd-logind[1420]: Removed session 84. Feb 13 20:24:12.532908 kubelet[2509]: E0213 20:24:12.532865 2509 kubelet.go:2900] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Feb 13 20:24:13.418805 kubelet[2509]: E0213 20:24:13.418770 2509 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 20:24:17.512562 systemd[1]: Started sshd@84-10.0.0.10:22-10.0.0.1:58352.service - OpenSSH per-connection server daemon (10.0.0.1:58352). Feb 13 20:24:17.534102 kubelet[2509]: E0213 20:24:17.534063 2509 kubelet.go:2900] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Feb 13 20:24:17.549077 sshd[4061]: Accepted publickey for core from 10.0.0.1 port 58352 ssh2: RSA SHA256:LaFXDQ5kJ4LcD/Es1CLb+6ve3k3017804ZdZMiXaUQg Feb 13 20:24:17.550219 sshd[4061]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 20:24:17.553415 systemd-logind[1420]: New session 85 of user core. Feb 13 20:24:17.559639 systemd[1]: Started session-85.scope - Session 85 of User core. Feb 13 20:24:17.661562 sshd[4061]: pam_unix(sshd:session): session closed for user core Feb 13 20:24:17.664792 systemd[1]: sshd@84-10.0.0.10:22-10.0.0.1:58352.service: Deactivated successfully. Feb 13 20:24:17.666355 systemd[1]: session-85.scope: Deactivated successfully. Feb 13 20:24:17.666984 systemd-logind[1420]: Session 85 logged out. Waiting for processes to exit. Feb 13 20:24:17.667792 systemd-logind[1420]: Removed session 85. Feb 13 20:24:20.110168 update_engine[1424]: I20250213 20:24:20.110081 1424 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Feb 13 20:24:20.110586 update_engine[1424]: I20250213 20:24:20.110345 1424 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Feb 13 20:24:20.110586 update_engine[1424]: I20250213 20:24:20.110548 1424 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. Feb 13 20:24:20.115500 update_engine[1424]: E20250213 20:24:20.115459 1424 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Feb 13 20:24:20.115545 update_engine[1424]: I20250213 20:24:20.115514 1424 libcurl_http_fetcher.cc:297] Transfer resulted in an error (0), 0 bytes downloaded Feb 13 20:24:20.115545 update_engine[1424]: I20250213 20:24:20.115525 1424 omaha_request_action.cc:617] Omaha request response: Feb 13 20:24:20.115624 update_engine[1424]: E20250213 20:24:20.115600 1424 omaha_request_action.cc:636] Omaha request network transfer failed. Feb 13 20:24:20.115671 update_engine[1424]: I20250213 20:24:20.115656 1424 action_processor.cc:68] ActionProcessor::ActionComplete: OmahaRequestAction action failed. Aborting processing. 
Feb 13 20:24:20.115695 update_engine[1424]: I20250213 20:24:20.115669 1424 action_processor.cc:73] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction Feb 13 20:24:20.115695 update_engine[1424]: I20250213 20:24:20.115674 1424 update_attempter.cc:306] Processing Done. Feb 13 20:24:20.115695 update_engine[1424]: E20250213 20:24:20.115689 1424 update_attempter.cc:619] Update failed. Feb 13 20:24:20.115695 update_engine[1424]: I20250213 20:24:20.115693 1424 utils.cc:600] Converting error code 2000 to kActionCodeOmahaErrorInHTTPResponse Feb 13 20:24:20.115765 update_engine[1424]: I20250213 20:24:20.115698 1424 payload_state.cc:97] Updating payload state for error code: 37 (kActionCodeOmahaErrorInHTTPResponse) Feb 13 20:24:20.115765 update_engine[1424]: I20250213 20:24:20.115705 1424 payload_state.cc:103] Ignoring failures until we get a valid Omaha response. Feb 13 20:24:20.115805 update_engine[1424]: I20250213 20:24:20.115765 1424 action_processor.cc:36] ActionProcessor::StartProcessing: OmahaRequestAction Feb 13 20:24:20.115805 update_engine[1424]: I20250213 20:24:20.115785 1424 omaha_request_action.cc:271] Posting an Omaha request to disabled Feb 13 20:24:20.115805 update_engine[1424]: I20250213 20:24:20.115792 1424 omaha_request_action.cc:272] Request: Feb 13 20:24:20.115805 update_engine[1424]: I20250213 20:24:20.115796 1424 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Feb 13 20:24:20.115986 update_engine[1424]: I20250213 20:24:20.115936 1424 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Feb 13 20:24:20.116048 locksmithd[1465]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_REPORTING_ERROR_EVENT" NewVersion=0.0.0 NewSize=0 Feb 13 20:24:20.116233 update_engine[1424]: I20250213 20:24:20.116052 1424 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. Feb 13 20:24:20.123636 update_engine[1424]: E20250213 20:24:20.123601 1424 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Feb 13 20:24:20.123687 update_engine[1424]: I20250213 20:24:20.123649 1424 libcurl_http_fetcher.cc:297] Transfer resulted in an error (0), 0 bytes downloaded Feb 13 20:24:20.123687 update_engine[1424]: I20250213 20:24:20.123656 1424 omaha_request_action.cc:617] Omaha request response: Feb 13 20:24:20.123687 update_engine[1424]: I20250213 20:24:20.123662 1424 action_processor.cc:65] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction Feb 13 20:24:20.123687 update_engine[1424]: I20250213 20:24:20.123667 1424 action_processor.cc:73] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction Feb 13 20:24:20.123687 update_engine[1424]: I20250213 20:24:20.123672 1424 update_attempter.cc:306] Processing Done. Feb 13 20:24:20.123687 update_engine[1424]: I20250213 20:24:20.123677 1424 update_attempter.cc:310] Error event sent.
Feb 13 20:24:20.123687 update_engine[1424]: I20250213 20:24:20.123684 1424 update_check_scheduler.cc:74] Next update check in 41m28s Feb 13 20:24:20.123954 locksmithd[1465]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_IDLE" NewVersion=0.0.0 NewSize=0 Feb 13 20:24:20.418386 kubelet[2509]: E0213 20:24:20.418282 2509 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 20:24:22.535398 kubelet[2509]: E0213 20:24:22.535354 2509 kubelet.go:2900] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Feb 13 20:24:22.671975 systemd[1]: Started sshd@85-10.0.0.10:22-10.0.0.1:55294.service - OpenSSH per-connection server daemon (10.0.0.1:55294). Feb 13 20:24:22.709165 sshd[4078]: Accepted publickey for core from 10.0.0.1 port 55294 ssh2: RSA SHA256:LaFXDQ5kJ4LcD/Es1CLb+6ve3k3017804ZdZMiXaUQg Feb 13 20:24:22.710465 sshd[4078]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 20:24:22.714635 systemd-logind[1420]: New session 86 of user core. Feb 13 20:24:22.725621 systemd[1]: Started session-86.scope - Session 86 of User core. Feb 13 20:24:22.826601 sshd[4078]: pam_unix(sshd:session): session closed for user core Feb 13 20:24:22.829321 systemd[1]: sshd@85-10.0.0.10:22-10.0.0.1:55294.service: Deactivated successfully. Feb 13 20:24:22.831214 systemd[1]: session-86.scope: Deactivated successfully. Feb 13 20:24:22.832634 systemd-logind[1420]: Session 86 logged out. Waiting for processes to exit. Feb 13 20:24:22.833780 systemd-logind[1420]: Removed session 86. Feb 13 20:24:24.418968 kubelet[2509]: E0213 20:24:24.418862 2509 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 20:24:24.419575 kubelet[2509]: E0213 20:24:24.419535 2509 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"install-cni-plugin\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/flannel/flannel-cni-plugin:v1.1.2\\\"\"" pod="kube-flannel/kube-flannel-ds-hvbks" podUID="db3e2d2a-fd5b-4cce-b6ee-04e217b474ca" Feb 13 20:24:27.536539 kubelet[2509]: E0213 20:24:27.536488 2509 kubelet.go:2900] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Feb 13 20:24:27.836955 systemd[1]: Started sshd@86-10.0.0.10:22-10.0.0.1:55304.service - OpenSSH per-connection server daemon (10.0.0.1:55304). Feb 13 20:24:27.873581 sshd[4094]: Accepted publickey for core from 10.0.0.1 port 55304 ssh2: RSA SHA256:LaFXDQ5kJ4LcD/Es1CLb+6ve3k3017804ZdZMiXaUQg Feb 13 20:24:27.874780 sshd[4094]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 20:24:27.878475 systemd-logind[1420]: New session 87 of user core. Feb 13 20:24:27.890670 systemd[1]: Started session-87.scope - Session 87 of User core. Feb 13 20:24:27.992183 sshd[4094]: pam_unix(sshd:session): session closed for user core Feb 13 20:24:27.995193 systemd[1]: sshd@86-10.0.0.10:22-10.0.0.1:55304.service: Deactivated successfully. Feb 13 20:24:27.997403 systemd[1]: session-87.scope: Deactivated successfully. Feb 13 20:24:27.998381 systemd-logind[1420]: Session 87 logged out. Waiting for processes to exit. 
Feb 13 20:24:27.999259 systemd-logind[1420]: Removed session 87. Feb 13 20:24:30.419343 kubelet[2509]: E0213 20:24:30.418860 2509 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 20:24:32.537306 kubelet[2509]: E0213 20:24:32.537258 2509 kubelet.go:2900] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Feb 13 20:24:33.005095 systemd[1]: Started sshd@87-10.0.0.10:22-10.0.0.1:44394.service - OpenSSH per-connection server daemon (10.0.0.1:44394). Feb 13 20:24:33.041900 sshd[4111]: Accepted publickey for core from 10.0.0.1 port 44394 ssh2: RSA SHA256:LaFXDQ5kJ4LcD/Es1CLb+6ve3k3017804ZdZMiXaUQg Feb 13 20:24:33.043055 sshd[4111]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 20:24:33.047154 systemd-logind[1420]: New session 88 of user core. Feb 13 20:24:33.053594 systemd[1]: Started session-88.scope - Session 88 of User core. Feb 13 20:24:33.154869 sshd[4111]: pam_unix(sshd:session): session closed for user core Feb 13 20:24:33.158736 systemd[1]: sshd@87-10.0.0.10:22-10.0.0.1:44394.service: Deactivated successfully. Feb 13 20:24:33.160944 systemd[1]: session-88.scope: Deactivated successfully. Feb 13 20:24:33.161520 systemd-logind[1420]: Session 88 logged out. Waiting for processes to exit. Feb 13 20:24:33.162312 systemd-logind[1420]: Removed session 88. Feb 13 20:24:37.538158 kubelet[2509]: E0213 20:24:37.538114 2509 kubelet.go:2900] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Feb 13 20:24:38.165764 systemd[1]: Started sshd@88-10.0.0.10:22-10.0.0.1:44398.service - OpenSSH per-connection server daemon (10.0.0.1:44398). Feb 13 20:24:38.202905 sshd[4126]: Accepted publickey for core from 10.0.0.1 port 44398 ssh2: RSA SHA256:LaFXDQ5kJ4LcD/Es1CLb+6ve3k3017804ZdZMiXaUQg Feb 13 20:24:38.204076 sshd[4126]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 20:24:38.207874 systemd-logind[1420]: New session 89 of user core. Feb 13 20:24:38.218580 systemd[1]: Started session-89.scope - Session 89 of User core. Feb 13 20:24:38.319978 sshd[4126]: pam_unix(sshd:session): session closed for user core Feb 13 20:24:38.323240 systemd[1]: sshd@88-10.0.0.10:22-10.0.0.1:44398.service: Deactivated successfully. Feb 13 20:24:38.325783 systemd[1]: session-89.scope: Deactivated successfully. Feb 13 20:24:38.327001 systemd-logind[1420]: Session 89 logged out. Waiting for processes to exit. Feb 13 20:24:38.327983 systemd-logind[1420]: Removed session 89. 
Feb 13 20:24:38.419674 kubelet[2509]: E0213 20:24:38.419574 2509 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 20:24:38.420465 kubelet[2509]: E0213 20:24:38.420393 2509 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"install-cni-plugin\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/flannel/flannel-cni-plugin:v1.1.2\\\"\"" pod="kube-flannel/kube-flannel-ds-hvbks" podUID="db3e2d2a-fd5b-4cce-b6ee-04e217b474ca" Feb 13 20:24:42.539417 kubelet[2509]: E0213 20:24:42.539352 2509 kubelet.go:2900] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Feb 13 20:24:43.334992 systemd[1]: Started sshd@89-10.0.0.10:22-10.0.0.1:43734.service - OpenSSH per-connection server daemon (10.0.0.1:43734). Feb 13 20:24:43.371558 sshd[4141]: Accepted publickey for core from 10.0.0.1 port 43734 ssh2: RSA SHA256:LaFXDQ5kJ4LcD/Es1CLb+6ve3k3017804ZdZMiXaUQg Feb 13 20:24:43.372770 sshd[4141]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 20:24:43.376004 systemd-logind[1420]: New session 90 of user core. Feb 13 20:24:43.381617 systemd[1]: Started session-90.scope - Session 90 of User core. Feb 13 20:24:43.482881 sshd[4141]: pam_unix(sshd:session): session closed for user core Feb 13 20:24:43.485383 systemd[1]: sshd@89-10.0.0.10:22-10.0.0.1:43734.service: Deactivated successfully. Feb 13 20:24:43.486901 systemd[1]: session-90.scope: Deactivated successfully. Feb 13 20:24:43.488114 systemd-logind[1420]: Session 90 logged out. Waiting for processes to exit. Feb 13 20:24:43.489209 systemd-logind[1420]: Removed session 90. Feb 13 20:24:47.540598 kubelet[2509]: E0213 20:24:47.540555 2509 kubelet.go:2900] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Feb 13 20:24:48.497787 systemd[1]: Started sshd@90-10.0.0.10:22-10.0.0.1:43748.service - OpenSSH per-connection server daemon (10.0.0.1:43748). Feb 13 20:24:48.534684 sshd[4157]: Accepted publickey for core from 10.0.0.1 port 43748 ssh2: RSA SHA256:LaFXDQ5kJ4LcD/Es1CLb+6ve3k3017804ZdZMiXaUQg Feb 13 20:24:48.535896 sshd[4157]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 20:24:48.539378 systemd-logind[1420]: New session 91 of user core. Feb 13 20:24:48.547595 systemd[1]: Started session-91.scope - Session 91 of User core. Feb 13 20:24:48.651298 sshd[4157]: pam_unix(sshd:session): session closed for user core Feb 13 20:24:48.654512 systemd[1]: sshd@90-10.0.0.10:22-10.0.0.1:43748.service: Deactivated successfully. Feb 13 20:24:48.656149 systemd[1]: session-91.scope: Deactivated successfully. Feb 13 20:24:48.656753 systemd-logind[1420]: Session 91 logged out. Waiting for processes to exit. Feb 13 20:24:48.657535 systemd-logind[1420]: Removed session 91. 
Feb 13 20:24:49.418479 kubelet[2509]: E0213 20:24:49.418278 2509 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 20:24:49.418993 kubelet[2509]: E0213 20:24:49.418872 2509 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"install-cni-plugin\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/flannel/flannel-cni-plugin:v1.1.2\\\"\"" pod="kube-flannel/kube-flannel-ds-hvbks" podUID="db3e2d2a-fd5b-4cce-b6ee-04e217b474ca" Feb 13 20:24:52.541526 kubelet[2509]: E0213 20:24:52.541491 2509 kubelet.go:2900] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Feb 13 20:24:53.662805 systemd[1]: Started sshd@91-10.0.0.10:22-10.0.0.1:44490.service - OpenSSH per-connection server daemon (10.0.0.1:44490). Feb 13 20:24:53.699489 sshd[4171]: Accepted publickey for core from 10.0.0.1 port 44490 ssh2: RSA SHA256:LaFXDQ5kJ4LcD/Es1CLb+6ve3k3017804ZdZMiXaUQg Feb 13 20:24:53.700683 sshd[4171]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 20:24:53.704200 systemd-logind[1420]: New session 92 of user core. Feb 13 20:24:53.713565 systemd[1]: Started session-92.scope - Session 92 of User core. Feb 13 20:24:53.816027 sshd[4171]: pam_unix(sshd:session): session closed for user core Feb 13 20:24:53.819215 systemd[1]: sshd@91-10.0.0.10:22-10.0.0.1:44490.service: Deactivated successfully. Feb 13 20:24:53.820836 systemd[1]: session-92.scope: Deactivated successfully. Feb 13 20:24:53.821402 systemd-logind[1420]: Session 92 logged out. Waiting for processes to exit. Feb 13 20:24:53.822144 systemd-logind[1420]: Removed session 92. Feb 13 20:24:57.543099 kubelet[2509]: E0213 20:24:57.543029 2509 kubelet.go:2900] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Feb 13 20:24:58.829763 systemd[1]: Started sshd@92-10.0.0.10:22-10.0.0.1:44498.service - OpenSSH per-connection server daemon (10.0.0.1:44498). Feb 13 20:24:58.866777 sshd[4185]: Accepted publickey for core from 10.0.0.1 port 44498 ssh2: RSA SHA256:LaFXDQ5kJ4LcD/Es1CLb+6ve3k3017804ZdZMiXaUQg Feb 13 20:24:58.867964 sshd[4185]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 20:24:58.871488 systemd-logind[1420]: New session 93 of user core. Feb 13 20:24:58.884644 systemd[1]: Started session-93.scope - Session 93 of User core. Feb 13 20:24:58.985474 sshd[4185]: pam_unix(sshd:session): session closed for user core Feb 13 20:24:58.988805 systemd[1]: sshd@92-10.0.0.10:22-10.0.0.1:44498.service: Deactivated successfully. Feb 13 20:24:58.991282 systemd[1]: session-93.scope: Deactivated successfully. Feb 13 20:24:58.992035 systemd-logind[1420]: Session 93 logged out. Waiting for processes to exit. Feb 13 20:24:58.993075 systemd-logind[1420]: Removed session 93. 
Feb 13 20:25:01.418594 kubelet[2509]: E0213 20:25:01.418560 2509 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 20:25:02.544166 kubelet[2509]: E0213 20:25:02.544122 2509 kubelet.go:2900] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Feb 13 20:25:03.995911 systemd[1]: Started sshd@93-10.0.0.10:22-10.0.0.1:54142.service - OpenSSH per-connection server daemon (10.0.0.1:54142). Feb 13 20:25:04.032740 sshd[4200]: Accepted publickey for core from 10.0.0.1 port 54142 ssh2: RSA SHA256:LaFXDQ5kJ4LcD/Es1CLb+6ve3k3017804ZdZMiXaUQg Feb 13 20:25:04.033955 sshd[4200]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 20:25:04.037919 systemd-logind[1420]: New session 94 of user core. Feb 13 20:25:04.046569 systemd[1]: Started session-94.scope - Session 94 of User core. Feb 13 20:25:04.148377 sshd[4200]: pam_unix(sshd:session): session closed for user core Feb 13 20:25:04.151645 systemd[1]: sshd@93-10.0.0.10:22-10.0.0.1:54142.service: Deactivated successfully. Feb 13 20:25:04.153252 systemd[1]: session-94.scope: Deactivated successfully. Feb 13 20:25:04.153822 systemd-logind[1420]: Session 94 logged out. Waiting for processes to exit. Feb 13 20:25:04.154839 systemd-logind[1420]: Removed session 94. Feb 13 20:25:04.418644 kubelet[2509]: E0213 20:25:04.418349 2509 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 20:25:04.419376 kubelet[2509]: E0213 20:25:04.419010 2509 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"install-cni-plugin\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/flannel/flannel-cni-plugin:v1.1.2\\\"\"" pod="kube-flannel/kube-flannel-ds-hvbks" podUID="db3e2d2a-fd5b-4cce-b6ee-04e217b474ca" Feb 13 20:25:07.545766 kubelet[2509]: E0213 20:25:07.545708 2509 kubelet.go:2900] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Feb 13 20:25:09.158967 systemd[1]: Started sshd@94-10.0.0.10:22-10.0.0.1:54154.service - OpenSSH per-connection server daemon (10.0.0.1:54154). Feb 13 20:25:09.195756 sshd[4214]: Accepted publickey for core from 10.0.0.1 port 54154 ssh2: RSA SHA256:LaFXDQ5kJ4LcD/Es1CLb+6ve3k3017804ZdZMiXaUQg Feb 13 20:25:09.196900 sshd[4214]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 20:25:09.200914 systemd-logind[1420]: New session 95 of user core. Feb 13 20:25:09.212558 systemd[1]: Started session-95.scope - Session 95 of User core. Feb 13 20:25:09.314011 sshd[4214]: pam_unix(sshd:session): session closed for user core Feb 13 20:25:09.317566 systemd[1]: sshd@94-10.0.0.10:22-10.0.0.1:54154.service: Deactivated successfully. Feb 13 20:25:09.319269 systemd[1]: session-95.scope: Deactivated successfully. Feb 13 20:25:09.320887 systemd-logind[1420]: Session 95 logged out. Waiting for processes to exit. Feb 13 20:25:09.322147 systemd-logind[1420]: Removed session 95. 
Feb 13 20:25:12.546379 kubelet[2509]: E0213 20:25:12.546337 2509 kubelet.go:2900] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Feb 13 20:25:14.324160 systemd[1]: Started sshd@95-10.0.0.10:22-10.0.0.1:44534.service - OpenSSH per-connection server daemon (10.0.0.1:44534). Feb 13 20:25:14.361441 sshd[4232]: Accepted publickey for core from 10.0.0.1 port 44534 ssh2: RSA SHA256:LaFXDQ5kJ4LcD/Es1CLb+6ve3k3017804ZdZMiXaUQg Feb 13 20:25:14.362714 sshd[4232]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 20:25:14.367103 systemd-logind[1420]: New session 96 of user core. Feb 13 20:25:14.376571 systemd[1]: Started session-96.scope - Session 96 of User core. Feb 13 20:25:14.478589 sshd[4232]: pam_unix(sshd:session): session closed for user core Feb 13 20:25:14.481714 systemd[1]: sshd@95-10.0.0.10:22-10.0.0.1:44534.service: Deactivated successfully. Feb 13 20:25:14.483884 systemd[1]: session-96.scope: Deactivated successfully. Feb 13 20:25:14.484633 systemd-logind[1420]: Session 96 logged out. Waiting for processes to exit. Feb 13 20:25:14.485502 systemd-logind[1420]: Removed session 96. Feb 13 20:25:16.418648 kubelet[2509]: E0213 20:25:16.418405 2509 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 20:25:16.419062 kubelet[2509]: E0213 20:25:16.419027 2509 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"install-cni-plugin\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/flannel/flannel-cni-plugin:v1.1.2\\\"\"" pod="kube-flannel/kube-flannel-ds-hvbks" podUID="db3e2d2a-fd5b-4cce-b6ee-04e217b474ca" Feb 13 20:25:17.547874 kubelet[2509]: E0213 20:25:17.547833 2509 kubelet.go:2900] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Feb 13 20:25:19.418656 kubelet[2509]: E0213 20:25:19.418621 2509 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 20:25:19.488763 systemd[1]: Started sshd@96-10.0.0.10:22-10.0.0.1:44538.service - OpenSSH per-connection server daemon (10.0.0.1:44538). Feb 13 20:25:19.525879 sshd[4248]: Accepted publickey for core from 10.0.0.1 port 44538 ssh2: RSA SHA256:LaFXDQ5kJ4LcD/Es1CLb+6ve3k3017804ZdZMiXaUQg Feb 13 20:25:19.527043 sshd[4248]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 20:25:19.530487 systemd-logind[1420]: New session 97 of user core. Feb 13 20:25:19.545557 systemd[1]: Started session-97.scope - Session 97 of User core. Feb 13 20:25:19.648605 sshd[4248]: pam_unix(sshd:session): session closed for user core Feb 13 20:25:19.651878 systemd[1]: sshd@96-10.0.0.10:22-10.0.0.1:44538.service: Deactivated successfully. Feb 13 20:25:19.653593 systemd[1]: session-97.scope: Deactivated successfully. Feb 13 20:25:19.654167 systemd-logind[1420]: Session 97 logged out. Waiting for processes to exit. Feb 13 20:25:19.654961 systemd-logind[1420]: Removed session 97. 
Feb 13 20:25:22.549253 kubelet[2509]: E0213 20:25:22.549209 2509 kubelet.go:2900] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Feb 13 20:25:24.658856 systemd[1]: Started sshd@97-10.0.0.10:22-10.0.0.1:33878.service - OpenSSH per-connection server daemon (10.0.0.1:33878). Feb 13 20:25:24.695541 sshd[4262]: Accepted publickey for core from 10.0.0.1 port 33878 ssh2: RSA SHA256:LaFXDQ5kJ4LcD/Es1CLb+6ve3k3017804ZdZMiXaUQg Feb 13 20:25:24.696679 sshd[4262]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 20:25:24.700372 systemd-logind[1420]: New session 98 of user core. Feb 13 20:25:24.706565 systemd[1]: Started session-98.scope - Session 98 of User core. Feb 13 20:25:24.808960 sshd[4262]: pam_unix(sshd:session): session closed for user core Feb 13 20:25:24.811908 systemd[1]: sshd@97-10.0.0.10:22-10.0.0.1:33878.service: Deactivated successfully. Feb 13 20:25:24.813585 systemd[1]: session-98.scope: Deactivated successfully. Feb 13 20:25:24.814902 systemd-logind[1420]: Session 98 logged out. Waiting for processes to exit. Feb 13 20:25:24.816243 systemd-logind[1420]: Removed session 98. Feb 13 20:25:27.550026 kubelet[2509]: E0213 20:25:27.549989 2509 kubelet.go:2900] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Feb 13 20:25:29.817838 systemd[1]: Started sshd@98-10.0.0.10:22-10.0.0.1:33882.service - OpenSSH per-connection server daemon (10.0.0.1:33882). Feb 13 20:25:29.854565 sshd[4277]: Accepted publickey for core from 10.0.0.1 port 33882 ssh2: RSA SHA256:LaFXDQ5kJ4LcD/Es1CLb+6ve3k3017804ZdZMiXaUQg Feb 13 20:25:29.855700 sshd[4277]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 20:25:29.859021 systemd-logind[1420]: New session 99 of user core. Feb 13 20:25:29.866598 systemd[1]: Started session-99.scope - Session 99 of User core. Feb 13 20:25:29.968314 sshd[4277]: pam_unix(sshd:session): session closed for user core Feb 13 20:25:29.971733 systemd[1]: sshd@98-10.0.0.10:22-10.0.0.1:33882.service: Deactivated successfully. Feb 13 20:25:29.974443 systemd[1]: session-99.scope: Deactivated successfully. Feb 13 20:25:29.975301 systemd-logind[1420]: Session 99 logged out. Waiting for processes to exit. Feb 13 20:25:29.976161 systemd-logind[1420]: Removed session 99. Feb 13 20:25:30.418819 kubelet[2509]: E0213 20:25:30.418784 2509 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 20:25:30.419761 kubelet[2509]: E0213 20:25:30.419342 2509 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"install-cni-plugin\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/flannel/flannel-cni-plugin:v1.1.2\\\"\"" pod="kube-flannel/kube-flannel-ds-hvbks" podUID="db3e2d2a-fd5b-4cce-b6ee-04e217b474ca" Feb 13 20:25:32.550908 kubelet[2509]: E0213 20:25:32.550847 2509 kubelet.go:2900] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Feb 13 20:25:34.979138 systemd[1]: Started sshd@99-10.0.0.10:22-10.0.0.1:60106.service - OpenSSH per-connection server daemon (10.0.0.1:60106). 
Feb 13 20:25:35.015892 sshd[4293]: Accepted publickey for core from 10.0.0.1 port 60106 ssh2: RSA SHA256:LaFXDQ5kJ4LcD/Es1CLb+6ve3k3017804ZdZMiXaUQg Feb 13 20:25:35.017078 sshd[4293]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 20:25:35.020581 systemd-logind[1420]: New session 100 of user core. Feb 13 20:25:35.032636 systemd[1]: Started session-100.scope - Session 100 of User core. Feb 13 20:25:35.137176 sshd[4293]: pam_unix(sshd:session): session closed for user core Feb 13 20:25:35.140449 systemd[1]: sshd@99-10.0.0.10:22-10.0.0.1:60106.service: Deactivated successfully. Feb 13 20:25:35.142870 systemd[1]: session-100.scope: Deactivated successfully. Feb 13 20:25:35.143625 systemd-logind[1420]: Session 100 logged out. Waiting for processes to exit. Feb 13 20:25:35.144515 systemd-logind[1420]: Removed session 100. Feb 13 20:25:37.552161 kubelet[2509]: E0213 20:25:37.552114 2509 kubelet.go:2900] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Feb 13 20:25:40.147873 systemd[1]: Started sshd@100-10.0.0.10:22-10.0.0.1:60114.service - OpenSSH per-connection server daemon (10.0.0.1:60114). Feb 13 20:25:40.185095 sshd[4308]: Accepted publickey for core from 10.0.0.1 port 60114 ssh2: RSA SHA256:LaFXDQ5kJ4LcD/Es1CLb+6ve3k3017804ZdZMiXaUQg Feb 13 20:25:40.186344 sshd[4308]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 20:25:40.189804 systemd-logind[1420]: New session 101 of user core. Feb 13 20:25:40.199569 systemd[1]: Started session-101.scope - Session 101 of User core. Feb 13 20:25:40.300492 sshd[4308]: pam_unix(sshd:session): session closed for user core Feb 13 20:25:40.304470 systemd[1]: sshd@100-10.0.0.10:22-10.0.0.1:60114.service: Deactivated successfully. Feb 13 20:25:40.307127 systemd[1]: session-101.scope: Deactivated successfully. Feb 13 20:25:40.307908 systemd-logind[1420]: Session 101 logged out. Waiting for processes to exit. Feb 13 20:25:40.308771 systemd-logind[1420]: Removed session 101. Feb 13 20:25:42.419445 kubelet[2509]: E0213 20:25:42.419389 2509 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 20:25:42.420029 kubelet[2509]: E0213 20:25:42.419979 2509 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"install-cni-plugin\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/flannel/flannel-cni-plugin:v1.1.2\\\"\"" pod="kube-flannel/kube-flannel-ds-hvbks" podUID="db3e2d2a-fd5b-4cce-b6ee-04e217b474ca" Feb 13 20:25:42.552723 kubelet[2509]: E0213 20:25:42.552690 2509 kubelet.go:2900] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Feb 13 20:25:45.310765 systemd[1]: Started sshd@101-10.0.0.10:22-10.0.0.1:41616.service - OpenSSH per-connection server daemon (10.0.0.1:41616). Feb 13 20:25:45.347889 sshd[4322]: Accepted publickey for core from 10.0.0.1 port 41616 ssh2: RSA SHA256:LaFXDQ5kJ4LcD/Es1CLb+6ve3k3017804ZdZMiXaUQg Feb 13 20:25:45.349553 sshd[4322]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 20:25:45.352926 systemd-logind[1420]: New session 102 of user core. Feb 13 20:25:45.363562 systemd[1]: Started session-102.scope - Session 102 of User core. 
Feb 13 20:25:45.465407 sshd[4322]: pam_unix(sshd:session): session closed for user core Feb 13 20:25:45.468364 systemd[1]: sshd@101-10.0.0.10:22-10.0.0.1:41616.service: Deactivated successfully. Feb 13 20:25:45.471086 systemd[1]: session-102.scope: Deactivated successfully. Feb 13 20:25:45.472092 systemd-logind[1420]: Session 102 logged out. Waiting for processes to exit. Feb 13 20:25:45.473021 systemd-logind[1420]: Removed session 102. Feb 13 20:25:47.554084 kubelet[2509]: E0213 20:25:47.554017 2509 kubelet.go:2900] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Feb 13 20:25:49.418616 kubelet[2509]: E0213 20:25:49.418578 2509 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 20:25:50.476120 systemd[1]: Started sshd@102-10.0.0.10:22-10.0.0.1:41626.service - OpenSSH per-connection server daemon (10.0.0.1:41626). Feb 13 20:25:50.512392 sshd[4338]: Accepted publickey for core from 10.0.0.1 port 41626 ssh2: RSA SHA256:LaFXDQ5kJ4LcD/Es1CLb+6ve3k3017804ZdZMiXaUQg Feb 13 20:25:50.513581 sshd[4338]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 20:25:50.517035 systemd-logind[1420]: New session 103 of user core. Feb 13 20:25:50.533569 systemd[1]: Started session-103.scope - Session 103 of User core. Feb 13 20:25:50.633200 sshd[4338]: pam_unix(sshd:session): session closed for user core Feb 13 20:25:50.636209 systemd[1]: sshd@102-10.0.0.10:22-10.0.0.1:41626.service: Deactivated successfully. Feb 13 20:25:50.638982 systemd[1]: session-103.scope: Deactivated successfully. Feb 13 20:25:50.640230 systemd-logind[1420]: Session 103 logged out. Waiting for processes to exit. Feb 13 20:25:50.641526 systemd-logind[1420]: Removed session 103. Feb 13 20:25:52.555526 kubelet[2509]: E0213 20:25:52.555471 2509 kubelet.go:2900] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Feb 13 20:25:55.418715 kubelet[2509]: E0213 20:25:55.418676 2509 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 20:25:55.644389 systemd[1]: Started sshd@103-10.0.0.10:22-10.0.0.1:40150.service - OpenSSH per-connection server daemon (10.0.0.1:40150). Feb 13 20:25:55.681876 sshd[4352]: Accepted publickey for core from 10.0.0.1 port 40150 ssh2: RSA SHA256:LaFXDQ5kJ4LcD/Es1CLb+6ve3k3017804ZdZMiXaUQg Feb 13 20:25:55.683073 sshd[4352]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 20:25:55.686944 systemd-logind[1420]: New session 104 of user core. Feb 13 20:25:55.695649 systemd[1]: Started session-104.scope - Session 104 of User core. Feb 13 20:25:55.796418 sshd[4352]: pam_unix(sshd:session): session closed for user core Feb 13 20:25:55.799417 systemd[1]: sshd@103-10.0.0.10:22-10.0.0.1:40150.service: Deactivated successfully. Feb 13 20:25:55.801112 systemd[1]: session-104.scope: Deactivated successfully. Feb 13 20:25:55.801762 systemd-logind[1420]: Session 104 logged out. Waiting for processes to exit. Feb 13 20:25:55.802523 systemd-logind[1420]: Removed session 104. 
Feb 13 20:25:57.418165 kubelet[2509]: E0213 20:25:57.418084 2509 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 20:25:57.418670 kubelet[2509]: E0213 20:25:57.418644 2509 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"install-cni-plugin\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/flannel/flannel-cni-plugin:v1.1.2\\\"\"" pod="kube-flannel/kube-flannel-ds-hvbks" podUID="db3e2d2a-fd5b-4cce-b6ee-04e217b474ca" Feb 13 20:25:57.556115 kubelet[2509]: E0213 20:25:57.556083 2509 kubelet.go:2900] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Feb 13 20:26:00.806914 systemd[1]: Started sshd@104-10.0.0.10:22-10.0.0.1:40158.service - OpenSSH per-connection server daemon (10.0.0.1:40158). Feb 13 20:26:00.843529 sshd[4367]: Accepted publickey for core from 10.0.0.1 port 40158 ssh2: RSA SHA256:LaFXDQ5kJ4LcD/Es1CLb+6ve3k3017804ZdZMiXaUQg Feb 13 20:26:00.844649 sshd[4367]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 20:26:00.847812 systemd-logind[1420]: New session 105 of user core. Feb 13 20:26:00.854559 systemd[1]: Started session-105.scope - Session 105 of User core. Feb 13 20:26:00.955622 sshd[4367]: pam_unix(sshd:session): session closed for user core Feb 13 20:26:00.958837 systemd-logind[1420]: Session 105 logged out. Waiting for processes to exit. Feb 13 20:26:00.959102 systemd[1]: sshd@104-10.0.0.10:22-10.0.0.1:40158.service: Deactivated successfully. Feb 13 20:26:00.960659 systemd[1]: session-105.scope: Deactivated successfully. Feb 13 20:26:00.961404 systemd-logind[1420]: Removed session 105. Feb 13 20:26:02.556809 kubelet[2509]: E0213 20:26:02.556770 2509 kubelet.go:2900] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Feb 13 20:26:05.969999 systemd[1]: Started sshd@105-10.0.0.10:22-10.0.0.1:46074.service - OpenSSH per-connection server daemon (10.0.0.1:46074). Feb 13 20:26:06.006626 sshd[4381]: Accepted publickey for core from 10.0.0.1 port 46074 ssh2: RSA SHA256:LaFXDQ5kJ4LcD/Es1CLb+6ve3k3017804ZdZMiXaUQg Feb 13 20:26:06.007778 sshd[4381]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 20:26:06.011668 systemd-logind[1420]: New session 106 of user core. Feb 13 20:26:06.018589 systemd[1]: Started session-106.scope - Session 106 of User core. Feb 13 20:26:06.119443 sshd[4381]: pam_unix(sshd:session): session closed for user core Feb 13 20:26:06.122534 systemd[1]: sshd@105-10.0.0.10:22-10.0.0.1:46074.service: Deactivated successfully. Feb 13 20:26:06.124926 systemd[1]: session-106.scope: Deactivated successfully. Feb 13 20:26:06.125777 systemd-logind[1420]: Session 106 logged out. Waiting for processes to exit. Feb 13 20:26:06.126552 systemd-logind[1420]: Removed session 106. Feb 13 20:26:07.558128 kubelet[2509]: E0213 20:26:07.558087 2509 kubelet.go:2900] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Feb 13 20:26:11.129833 systemd[1]: Started sshd@106-10.0.0.10:22-10.0.0.1:46082.service - OpenSSH per-connection server daemon (10.0.0.1:46082). 
Feb 13 20:26:11.166411 sshd[4396]: Accepted publickey for core from 10.0.0.1 port 46082 ssh2: RSA SHA256:LaFXDQ5kJ4LcD/Es1CLb+6ve3k3017804ZdZMiXaUQg
Feb 13 20:26:11.167725 sshd[4396]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 20:26:11.171780 systemd-logind[1420]: New session 107 of user core.
Feb 13 20:26:11.186575 systemd[1]: Started session-107.scope - Session 107 of User core.
Feb 13 20:26:11.289079 sshd[4396]: pam_unix(sshd:session): session closed for user core
Feb 13 20:26:11.292914 systemd[1]: sshd@106-10.0.0.10:22-10.0.0.1:46082.service: Deactivated successfully.
Feb 13 20:26:11.294411 systemd[1]: session-107.scope: Deactivated successfully.
Feb 13 20:26:11.295718 systemd-logind[1420]: Session 107 logged out. Waiting for processes to exit.
Feb 13 20:26:11.296703 systemd-logind[1420]: Removed session 107.
Feb 13 20:26:12.418388 kubelet[2509]: E0213 20:26:12.418163 2509 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 13 20:26:12.419129 kubelet[2509]: E0213 20:26:12.418947 2509 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"install-cni-plugin\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/flannel/flannel-cni-plugin:v1.1.2\\\"\"" pod="kube-flannel/kube-flannel-ds-hvbks" podUID="db3e2d2a-fd5b-4cce-b6ee-04e217b474ca"
Feb 13 20:26:12.558733 kubelet[2509]: E0213 20:26:12.558688 2509 kubelet.go:2900] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Feb 13 20:26:16.299777 systemd[1]: Started sshd@107-10.0.0.10:22-10.0.0.1:40898.service - OpenSSH per-connection server daemon (10.0.0.1:40898).
Feb 13 20:26:16.337226 sshd[4410]: Accepted publickey for core from 10.0.0.1 port 40898 ssh2: RSA SHA256:LaFXDQ5kJ4LcD/Es1CLb+6ve3k3017804ZdZMiXaUQg
Feb 13 20:26:16.338412 sshd[4410]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 20:26:16.341809 systemd-logind[1420]: New session 108 of user core.
Feb 13 20:26:16.348649 systemd[1]: Started session-108.scope - Session 108 of User core.
Feb 13 20:26:16.451680 sshd[4410]: pam_unix(sshd:session): session closed for user core
Feb 13 20:26:16.454901 systemd[1]: sshd@107-10.0.0.10:22-10.0.0.1:40898.service: Deactivated successfully.
Feb 13 20:26:16.457451 systemd[1]: session-108.scope: Deactivated successfully.
Feb 13 20:26:16.458208 systemd-logind[1420]: Session 108 logged out. Waiting for processes to exit.
Feb 13 20:26:16.459064 systemd-logind[1420]: Removed session 108.
Feb 13 20:26:17.559506 kubelet[2509]: E0213 20:26:17.559447 2509 kubelet.go:2900] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Feb 13 20:26:21.461932 systemd[1]: Started sshd@108-10.0.0.10:22-10.0.0.1:40906.service - OpenSSH per-connection server daemon (10.0.0.1:40906).
Feb 13 20:26:21.500136 sshd[4426]: Accepted publickey for core from 10.0.0.1 port 40906 ssh2: RSA SHA256:LaFXDQ5kJ4LcD/Es1CLb+6ve3k3017804ZdZMiXaUQg
Feb 13 20:26:21.501313 sshd[4426]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 20:26:21.504600 systemd-logind[1420]: New session 109 of user core.
Feb 13 20:26:21.524641 systemd[1]: Started session-109.scope - Session 109 of User core.
Feb 13 20:26:21.627997 sshd[4426]: pam_unix(sshd:session): session closed for user core
Feb 13 20:26:21.630964 systemd[1]: sshd@108-10.0.0.10:22-10.0.0.1:40906.service: Deactivated successfully.
Feb 13 20:26:21.632677 systemd[1]: session-109.scope: Deactivated successfully.
Feb 13 20:26:21.633991 systemd-logind[1420]: Session 109 logged out. Waiting for processes to exit.
Feb 13 20:26:21.634954 systemd-logind[1420]: Removed session 109.
Feb 13 20:26:22.419097 kubelet[2509]: E0213 20:26:22.419034 2509 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 13 20:26:22.560624 kubelet[2509]: E0213 20:26:22.560589 2509 kubelet.go:2900] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Feb 13 20:26:26.418765 kubelet[2509]: E0213 20:26:26.418726 2509 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 13 20:26:26.419400 kubelet[2509]: E0213 20:26:26.419353 2509 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"install-cni-plugin\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/flannel/flannel-cni-plugin:v1.1.2\\\"\"" pod="kube-flannel/kube-flannel-ds-hvbks" podUID="db3e2d2a-fd5b-4cce-b6ee-04e217b474ca"
Feb 13 20:26:26.639439 systemd[1]: Started sshd@109-10.0.0.10:22-10.0.0.1:38572.service - OpenSSH per-connection server daemon (10.0.0.1:38572).
Feb 13 20:26:26.676585 sshd[4441]: Accepted publickey for core from 10.0.0.1 port 38572 ssh2: RSA SHA256:LaFXDQ5kJ4LcD/Es1CLb+6ve3k3017804ZdZMiXaUQg
Feb 13 20:26:26.677673 sshd[4441]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 20:26:26.681772 systemd-logind[1420]: New session 110 of user core.
Feb 13 20:26:26.704583 systemd[1]: Started session-110.scope - Session 110 of User core.
Feb 13 20:26:26.805835 sshd[4441]: pam_unix(sshd:session): session closed for user core
Feb 13 20:26:26.808489 systemd[1]: sshd@109-10.0.0.10:22-10.0.0.1:38572.service: Deactivated successfully.
Feb 13 20:26:26.810615 systemd[1]: session-110.scope: Deactivated successfully.
Feb 13 20:26:26.811944 systemd-logind[1420]: Session 110 logged out. Waiting for processes to exit.
Feb 13 20:26:26.812975 systemd-logind[1420]: Removed session 110.
Feb 13 20:26:27.561488 kubelet[2509]: E0213 20:26:27.561453 2509 kubelet.go:2900] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Feb 13 20:26:31.817029 systemd[1]: Started sshd@110-10.0.0.10:22-10.0.0.1:38576.service - OpenSSH per-connection server daemon (10.0.0.1:38576).
Feb 13 20:26:31.853755 sshd[4455]: Accepted publickey for core from 10.0.0.1 port 38576 ssh2: RSA SHA256:LaFXDQ5kJ4LcD/Es1CLb+6ve3k3017804ZdZMiXaUQg
Feb 13 20:26:31.854846 sshd[4455]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 20:26:31.858481 systemd-logind[1420]: New session 111 of user core.
Feb 13 20:26:31.869617 systemd[1]: Started session-111.scope - Session 111 of User core.
Feb 13 20:26:31.971968 sshd[4455]: pam_unix(sshd:session): session closed for user core
Feb 13 20:26:31.974910 systemd[1]: sshd@110-10.0.0.10:22-10.0.0.1:38576.service: Deactivated successfully.
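
The recurring dns.go:153 warnings mean the resolver config kubelet reads lists more than the three nameservers it supports (mirroring glibc's classic MAXNS limit), so the extras are dropped and only 1.1.1.1 1.0.0.1 8.8.8.8 apply. A sketch that reproduces the check is below, assuming the node config is at kubelet's default --resolv-conf path /etc/resolv.conf; the path and the limit constant are the usual defaults, not taken from this log.

    # Sketch: flag a resolv.conf that exceeds the three-nameserver limit
    # that kubelet's dns.go warns about.
    MAXNS = 3  # glibc resolver limit, mirrored by kubelet

    def nameservers(path="/etc/resolv.conf"):  # assumed kubelet default --resolv-conf
        servers = []
        with open(path) as f:
            for line in f:
                parts = line.split()
                if len(parts) >= 2 and parts[0] == "nameserver":
                    servers.append(parts[1])
        return servers

    ns = nameservers()
    if len(ns) > MAXNS:
        print(f"{len(ns)} nameservers listed; only the first {MAXNS} apply:", *ns[:MAXNS])
    else:
        print(f"{len(ns)} nameservers listed:", *ns)
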
Feb 13 20:26:31.977233 systemd[1]: session-111.scope: Deactivated successfully.
Feb 13 20:26:31.977820 systemd-logind[1420]: Session 111 logged out. Waiting for processes to exit.
Feb 13 20:26:31.978576 systemd-logind[1420]: Removed session 111.
Feb 13 20:26:32.562914 kubelet[2509]: E0213 20:26:32.562859 2509 kubelet.go:2900] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Feb 13 20:26:36.981770 systemd[1]: Started sshd@111-10.0.0.10:22-10.0.0.1:32774.service - OpenSSH per-connection server daemon (10.0.0.1:32774).
Feb 13 20:26:37.019656 sshd[4471]: Accepted publickey for core from 10.0.0.1 port 32774 ssh2: RSA SHA256:LaFXDQ5kJ4LcD/Es1CLb+6ve3k3017804ZdZMiXaUQg
Feb 13 20:26:37.020964 sshd[4471]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 20:26:37.024639 systemd-logind[1420]: New session 112 of user core.
Feb 13 20:26:37.035643 systemd[1]: Started session-112.scope - Session 112 of User core.
Feb 13 20:26:37.137839 sshd[4471]: pam_unix(sshd:session): session closed for user core
Feb 13 20:26:37.140318 systemd[1]: sshd@111-10.0.0.10:22-10.0.0.1:32774.service: Deactivated successfully.
Feb 13 20:26:37.141854 systemd[1]: session-112.scope: Deactivated successfully.
Feb 13 20:26:37.143011 systemd-logind[1420]: Session 112 logged out. Waiting for processes to exit.
Feb 13 20:26:37.143771 systemd-logind[1420]: Removed session 112.
Feb 13 20:26:37.564210 kubelet[2509]: E0213 20:26:37.564109 2509 kubelet.go:2900] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Feb 13 20:26:40.418682 kubelet[2509]: E0213 20:26:40.418646 2509 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 13 20:26:40.419262 kubelet[2509]: E0213 20:26:40.419221 2509 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"install-cni-plugin\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/flannel/flannel-cni-plugin:v1.1.2\\\"\"" pod="kube-flannel/kube-flannel-ds-hvbks" podUID="db3e2d2a-fd5b-4cce-b6ee-04e217b474ca"
Feb 13 20:26:42.147807 systemd[1]: Started sshd@112-10.0.0.10:22-10.0.0.1:32776.service - OpenSSH per-connection server daemon (10.0.0.1:32776).
Feb 13 20:26:42.184785 sshd[4485]: Accepted publickey for core from 10.0.0.1 port 32776 ssh2: RSA SHA256:LaFXDQ5kJ4LcD/Es1CLb+6ve3k3017804ZdZMiXaUQg
Feb 13 20:26:42.185940 sshd[4485]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 20:26:42.189969 systemd-logind[1420]: New session 113 of user core.
Feb 13 20:26:42.199653 systemd[1]: Started session-113.scope - Session 113 of User core.
Feb 13 20:26:42.302127 sshd[4485]: pam_unix(sshd:session): session closed for user core
Feb 13 20:26:42.305109 systemd[1]: sshd@112-10.0.0.10:22-10.0.0.1:32776.service: Deactivated successfully.
Feb 13 20:26:42.306673 systemd[1]: session-113.scope: Deactivated successfully.
Feb 13 20:26:42.307210 systemd-logind[1420]: Session 113 logged out. Waiting for processes to exit.
Feb 13 20:26:42.307972 systemd-logind[1420]: Removed session 113.
Feb 13 20:26:42.565782 kubelet[2509]: E0213 20:26:42.565722 2509 kubelet.go:2900] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Feb 13 20:26:47.316731 systemd[1]: Started sshd@113-10.0.0.10:22-10.0.0.1:46056.service - OpenSSH per-connection server daemon (10.0.0.1:46056).
Feb 13 20:26:47.353307 sshd[4500]: Accepted publickey for core from 10.0.0.1 port 46056 ssh2: RSA SHA256:LaFXDQ5kJ4LcD/Es1CLb+6ve3k3017804ZdZMiXaUQg
Feb 13 20:26:47.354513 sshd[4500]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 20:26:47.357943 systemd-logind[1420]: New session 114 of user core.
Feb 13 20:26:47.364651 systemd[1]: Started session-114.scope - Session 114 of User core.
Feb 13 20:26:47.466689 sshd[4500]: pam_unix(sshd:session): session closed for user core
Feb 13 20:26:47.469698 systemd[1]: sshd@113-10.0.0.10:22-10.0.0.1:46056.service: Deactivated successfully.
Feb 13 20:26:47.471186 systemd[1]: session-114.scope: Deactivated successfully.
Feb 13 20:26:47.472608 systemd-logind[1420]: Session 114 logged out. Waiting for processes to exit.
Feb 13 20:26:47.473506 systemd-logind[1420]: Removed session 114.
Feb 13 20:26:47.567281 kubelet[2509]: E0213 20:26:47.567175 2509 kubelet.go:2900] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Feb 13 20:26:49.418739 kubelet[2509]: E0213 20:26:49.418650 2509 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 13 20:26:51.418496 kubelet[2509]: E0213 20:26:51.418466 2509 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 13 20:26:51.419215 kubelet[2509]: E0213 20:26:51.419172 2509 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"install-cni-plugin\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/flannel/flannel-cni-plugin:v1.1.2\\\"\"" pod="kube-flannel/kube-flannel-ds-hvbks" podUID="db3e2d2a-fd5b-4cce-b6ee-04e217b474ca"
Feb 13 20:26:52.476994 systemd[1]: Started sshd@114-10.0.0.10:22-10.0.0.1:38410.service - OpenSSH per-connection server daemon (10.0.0.1:38410).
Feb 13 20:26:52.514782 sshd[4517]: Accepted publickey for core from 10.0.0.1 port 38410 ssh2: RSA SHA256:LaFXDQ5kJ4LcD/Es1CLb+6ve3k3017804ZdZMiXaUQg
Feb 13 20:26:52.515940 sshd[4517]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 20:26:52.519362 systemd-logind[1420]: New session 115 of user core.
Feb 13 20:26:52.529573 systemd[1]: Started session-115.scope - Session 115 of User core.
Feb 13 20:26:52.568695 kubelet[2509]: E0213 20:26:52.568654 2509 kubelet.go:2900] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Feb 13 20:26:52.629920 sshd[4517]: pam_unix(sshd:session): session closed for user core
Feb 13 20:26:52.633015 systemd[1]: sshd@114-10.0.0.10:22-10.0.0.1:38410.service: Deactivated successfully.
Feb 13 20:26:52.635138 systemd[1]: session-115.scope: Deactivated successfully.
Feb 13 20:26:52.635944 systemd-logind[1420]: Session 115 logged out. Waiting for processes to exit.
Feb 13 20:26:52.637632 systemd-logind[1420]: Removed session 115.
Feb 13 20:26:57.570040 kubelet[2509]: E0213 20:26:57.569998 2509 kubelet.go:2900] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Feb 13 20:26:57.640067 systemd[1]: Started sshd@115-10.0.0.10:22-10.0.0.1:38414.service - OpenSSH per-connection server daemon (10.0.0.1:38414).
Feb 13 20:26:57.677195 sshd[4531]: Accepted publickey for core from 10.0.0.1 port 38414 ssh2: RSA SHA256:LaFXDQ5kJ4LcD/Es1CLb+6ve3k3017804ZdZMiXaUQg
Feb 13 20:26:57.678309 sshd[4531]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 20:26:57.682862 systemd-logind[1420]: New session 116 of user core.
Feb 13 20:26:57.693561 systemd[1]: Started session-116.scope - Session 116 of User core.
Feb 13 20:26:57.795305 sshd[4531]: pam_unix(sshd:session): session closed for user core
Feb 13 20:26:57.798267 systemd[1]: sshd@115-10.0.0.10:22-10.0.0.1:38414.service: Deactivated successfully.
Feb 13 20:26:57.799908 systemd[1]: session-116.scope: Deactivated successfully.
Feb 13 20:26:57.800385 systemd-logind[1420]: Session 116 logged out. Waiting for processes to exit.
Feb 13 20:26:57.801163 systemd-logind[1420]: Removed session 116.
Feb 13 20:27:02.571116 kubelet[2509]: E0213 20:27:02.571021 2509 kubelet.go:2900] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Feb 13 20:27:02.805745 systemd[1]: Started sshd@116-10.0.0.10:22-10.0.0.1:38146.service - OpenSSH per-connection server daemon (10.0.0.1:38146).
Feb 13 20:27:02.842268 sshd[4547]: Accepted publickey for core from 10.0.0.1 port 38146 ssh2: RSA SHA256:LaFXDQ5kJ4LcD/Es1CLb+6ve3k3017804ZdZMiXaUQg
Feb 13 20:27:02.843249 sshd[4547]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 20:27:02.846727 systemd-logind[1420]: New session 117 of user core.
Feb 13 20:27:02.858556 systemd[1]: Started session-117.scope - Session 117 of User core.
Feb 13 20:27:02.957737 sshd[4547]: pam_unix(sshd:session): session closed for user core
Feb 13 20:27:02.960758 systemd[1]: sshd@116-10.0.0.10:22-10.0.0.1:38146.service: Deactivated successfully.
Feb 13 20:27:02.963097 systemd[1]: session-117.scope: Deactivated successfully.
Feb 13 20:27:02.963957 systemd-logind[1420]: Session 117 logged out. Waiting for processes to exit.
Feb 13 20:27:02.964797 systemd-logind[1420]: Removed session 117.
Feb 13 20:27:03.418274 kubelet[2509]: E0213 20:27:03.418251 2509 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 13 20:27:03.418909 kubelet[2509]: E0213 20:27:03.418725 2509 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"install-cni-plugin\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/flannel/flannel-cni-plugin:v1.1.2\\\"\"" pod="kube-flannel/kube-flannel-ds-hvbks" podUID="db3e2d2a-fd5b-4cce-b6ee-04e217b474ca"