Feb 13 19:56:36.908040 kernel: Booting Linux on physical CPU 0x0000000000 [0x413fd0c1]
Feb 13 19:56:36.908063 kernel: Linux version 6.6.74-flatcar (build@pony-truck.infra.kinvolk.io) (aarch64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p17) 13.3.1 20240614, GNU ld (Gentoo 2.42 p3) 2.42.0) #1 SMP PREEMPT Thu Feb 13 18:13:29 -00 2025
Feb 13 19:56:36.908074 kernel: KASLR enabled
Feb 13 19:56:36.908079 kernel: efi: EFI v2.7 by EDK II
Feb 13 19:56:36.908086 kernel: efi: SMBIOS 3.0=0xdced0000 MEMATTR=0xdba86018 ACPI 2.0=0xd9710018 RNG=0xd971e498 MEMRESERVE=0xd9b43d18
Feb 13 19:56:36.908091 kernel: random: crng init done
Feb 13 19:56:36.908098 kernel: ACPI: Early table checksum verification disabled
Feb 13 19:56:36.908104 kernel: ACPI: RSDP 0x00000000D9710018 000024 (v02 BOCHS )
Feb 13 19:56:36.908111 kernel: ACPI: XSDT 0x00000000D971FE98 000064 (v01 BOCHS BXPC 00000001 01000013)
Feb 13 19:56:36.908118 kernel: ACPI: FACP 0x00000000D971FA98 000114 (v06 BOCHS BXPC 00000001 BXPC 00000001)
Feb 13 19:56:36.908124 kernel: ACPI: DSDT 0x00000000D9717518 0014A2 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Feb 13 19:56:36.908130 kernel: ACPI: APIC 0x00000000D971FC18 0001A8 (v04 BOCHS BXPC 00000001 BXPC 00000001)
Feb 13 19:56:36.908141 kernel: ACPI: PPTT 0x00000000D971D898 00009C (v02 BOCHS BXPC 00000001 BXPC 00000001)
Feb 13 19:56:36.908147 kernel: ACPI: GTDT 0x00000000D971E818 000060 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Feb 13 19:56:36.908155 kernel: ACPI: MCFG 0x00000000D971E918 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001)
Feb 13 19:56:36.908163 kernel: ACPI: SPCR 0x00000000D971FF98 000050 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Feb 13 19:56:36.908169 kernel: ACPI: DBG2 0x00000000D971E418 000057 (v00 BOCHS BXPC 00000001 BXPC 00000001)
Feb 13 19:56:36.908176 kernel: ACPI: IORT 0x00000000D971E718 000080 (v03 BOCHS BXPC 00000001 BXPC 00000001)
Feb 13 19:56:36.908182 kernel: ACPI: SPCR: console: pl011,mmio,0x9000000,9600
Feb 13 19:56:36.908189 kernel: NUMA: Failed to initialise from firmware
Feb 13 19:56:36.908195 kernel: NUMA: Faking a node at [mem 0x0000000040000000-0x00000000dcffffff]
Feb 13 19:56:36.908202 kernel: NUMA: NODE_DATA [mem 0xdc958800-0xdc95dfff]
Feb 13 19:56:36.908208 kernel: Zone ranges:
Feb 13 19:56:36.908214 kernel: DMA [mem 0x0000000040000000-0x00000000dcffffff]
Feb 13 19:56:36.908221 kernel: DMA32 empty
Feb 13 19:56:36.908228 kernel: Normal empty
Feb 13 19:56:36.908235 kernel: Movable zone start for each node
Feb 13 19:56:36.908241 kernel: Early memory node ranges
Feb 13 19:56:36.908247 kernel: node 0: [mem 0x0000000040000000-0x00000000d976ffff]
Feb 13 19:56:36.908254 kernel: node 0: [mem 0x00000000d9770000-0x00000000d9b3ffff]
Feb 13 19:56:36.908260 kernel: node 0: [mem 0x00000000d9b40000-0x00000000dce1ffff]
Feb 13 19:56:36.908266 kernel: node 0: [mem 0x00000000dce20000-0x00000000dceaffff]
Feb 13 19:56:36.908273 kernel: node 0: [mem 0x00000000dceb0000-0x00000000dcebffff]
Feb 13 19:56:36.908279 kernel: node 0: [mem 0x00000000dcec0000-0x00000000dcfdffff]
Feb 13 19:56:36.908286 kernel: node 0: [mem 0x00000000dcfe0000-0x00000000dcffffff]
Feb 13 19:56:36.908292 kernel: Initmem setup node 0 [mem 0x0000000040000000-0x00000000dcffffff]
Feb 13 19:56:36.908299 kernel: On node 0, zone DMA: 12288 pages in unavailable ranges
Feb 13 19:56:36.908306 kernel: psci: probing for conduit method from ACPI.
Feb 13 19:56:36.908312 kernel: psci: PSCIv1.1 detected in firmware.
Feb 13 19:56:36.908319 kernel: psci: Using standard PSCI v0.2 function IDs
Feb 13 19:56:36.908328 kernel: psci: Trusted OS migration not required
Feb 13 19:56:36.908335 kernel: psci: SMC Calling Convention v1.1
Feb 13 19:56:36.908342 kernel: smccc: KVM: hypervisor services detected (0x00000000 0x00000000 0x00000000 0x00000003)
Feb 13 19:56:36.908350 kernel: percpu: Embedded 31 pages/cpu s86696 r8192 d32088 u126976
Feb 13 19:56:36.908357 kernel: pcpu-alloc: s86696 r8192 d32088 u126976 alloc=31*4096
Feb 13 19:56:36.908364 kernel: pcpu-alloc: [0] 0 [0] 1 [0] 2 [0] 3
Feb 13 19:56:36.908371 kernel: Detected PIPT I-cache on CPU0
Feb 13 19:56:36.908383 kernel: CPU features: detected: GIC system register CPU interface
Feb 13 19:56:36.908390 kernel: CPU features: detected: Hardware dirty bit management
Feb 13 19:56:36.908397 kernel: CPU features: detected: Spectre-v4
Feb 13 19:56:36.908404 kernel: CPU features: detected: Spectre-BHB
Feb 13 19:56:36.908411 kernel: CPU features: kernel page table isolation forced ON by KASLR
Feb 13 19:56:36.908418 kernel: CPU features: detected: Kernel page table isolation (KPTI)
Feb 13 19:56:36.908426 kernel: CPU features: detected: ARM erratum 1418040
Feb 13 19:56:36.908433 kernel: CPU features: detected: SSBS not fully self-synchronizing
Feb 13 19:56:36.908440 kernel: alternatives: applying boot alternatives
Feb 13 19:56:36.908448 kernel: Kernel command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected acpi=force verity.usrhash=c15c751c06cfb933aa98417326b93d899c08a83ce060a940cd01082629c201a7
Feb 13 19:56:36.908455 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Feb 13 19:56:36.908462 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Feb 13 19:56:36.908469 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Feb 13 19:56:36.908475 kernel: Fallback order for Node 0: 0
Feb 13 19:56:36.908482 kernel: Built 1 zonelists, mobility grouping on. Total pages: 633024
Feb 13 19:56:36.908489 kernel: Policy zone: DMA
Feb 13 19:56:36.908496 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Feb 13 19:56:36.908504 kernel: software IO TLB: area num 4.
Feb 13 19:56:36.908510 kernel: software IO TLB: mapped [mem 0x00000000d2e00000-0x00000000d6e00000] (64MB)
Feb 13 19:56:36.908518 kernel: Memory: 2386532K/2572288K available (10240K kernel code, 2186K rwdata, 8096K rodata, 39360K init, 897K bss, 185756K reserved, 0K cma-reserved)
Feb 13 19:56:36.908524 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1
Feb 13 19:56:36.908531 kernel: rcu: Preemptible hierarchical RCU implementation.
Feb 13 19:56:36.908538 kernel: rcu: RCU event tracing is enabled.
Feb 13 19:56:36.908545 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4.
Feb 13 19:56:36.908552 kernel: Trampoline variant of Tasks RCU enabled.
Feb 13 19:56:36.908559 kernel: Tracing variant of Tasks RCU enabled.
Feb 13 19:56:36.908566 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
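Editor's note: the kernel command line above carries Flatcar's dm-verity parameters (mount.usr, verity.usr, verity.usrhash) alongside standard flags. As a minimal sketch of how such a line decomposes into parameters, the Python snippet below splits it into key=value pairs; it is an illustrative parser, not the kernel's or Flatcar's actual implementation (the real parser also handles quoting and double-quoted values).

```python
# Minimal sketch: split a kernel command line into switches and key=value
# pairs (illustrative only; the real kernel parser also handles quoting).
cmdline = (
    "BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr "
    "verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw "
    "mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 "
    "flatcar.first_boot=detected acpi=force "
    "verity.usrhash=c15c751c06cfb933aa98417326b93d899c08a83ce060a940cd01082629c201a7"
)

params = {}
for token in cmdline.split():
    key, sep, value = token.partition("=")  # split at the first '=' only
    params[key] = value if sep else True    # bare switches become True

assert params["root"] == "LABEL=ROOT"
assert len(params["verity.usrhash"]) == 64  # hex-encoded 256-bit root hash
```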
Feb 13 19:56:36.908573 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4
Feb 13 19:56:36.908579 kernel: NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0
Feb 13 19:56:36.908587 kernel: GICv3: 256 SPIs implemented
Feb 13 19:56:36.908594 kernel: GICv3: 0 Extended SPIs implemented
Feb 13 19:56:36.908601 kernel: Root IRQ handler: gic_handle_irq
Feb 13 19:56:36.908607 kernel: GICv3: GICv3 features: 16 PPIs, DirectLPI
Feb 13 19:56:36.908614 kernel: GICv3: CPU0: found redistributor 0 region 0:0x00000000080a0000
Feb 13 19:56:36.908621 kernel: ITS [mem 0x08080000-0x0809ffff]
Feb 13 19:56:36.908628 kernel: ITS@0x0000000008080000: allocated 8192 Devices @400c0000 (indirect, esz 8, psz 64K, shr 1)
Feb 13 19:56:36.908635 kernel: ITS@0x0000000008080000: allocated 8192 Interrupt Collections @400d0000 (flat, esz 8, psz 64K, shr 1)
Feb 13 19:56:36.908642 kernel: GICv3: using LPI property table @0x00000000400f0000
Feb 13 19:56:36.908648 kernel: GICv3: CPU0: using allocated LPI pending table @0x0000000040100000
Feb 13 19:56:36.908655 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Feb 13 19:56:36.908664 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Feb 13 19:56:36.908670 kernel: arch_timer: cp15 timer(s) running at 25.00MHz (virt).
Feb 13 19:56:36.908677 kernel: clocksource: arch_sys_counter: mask: 0xffffffffffffff max_cycles: 0x5c40939b5, max_idle_ns: 440795202646 ns
Feb 13 19:56:36.908684 kernel: sched_clock: 56 bits at 25MHz, resolution 40ns, wraps every 4398046511100ns
Feb 13 19:56:36.908691 kernel: arm-pv: using stolen time PV
Feb 13 19:56:36.908698 kernel: Console: colour dummy device 80x25
Feb 13 19:56:36.908705 kernel: ACPI: Core revision 20230628
Feb 13 19:56:36.908712 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 50.00 BogoMIPS (lpj=25000)
Feb 13 19:56:36.908719 kernel: pid_max: default: 32768 minimum: 301
Feb 13 19:56:36.908726 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity
Feb 13 19:56:36.908734 kernel: landlock: Up and running.
Feb 13 19:56:36.908741 kernel: SELinux: Initializing.
Feb 13 19:56:36.908748 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Feb 13 19:56:36.908755 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Feb 13 19:56:36.908762 kernel: RCU Tasks: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Feb 13 19:56:36.908769 kernel: RCU Tasks Trace: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Feb 13 19:56:36.908776 kernel: rcu: Hierarchical SRCU implementation.
Feb 13 19:56:36.908783 kernel: rcu: Max phase no-delay instances is 400.
Feb 13 19:56:36.908790 kernel: Platform MSI: ITS@0x8080000 domain created
Feb 13 19:56:36.908798 kernel: PCI/MSI: ITS@0x8080000 domain created
Feb 13 19:56:36.908805 kernel: Remapping and enabling EFI services.
Feb 13 19:56:36.908812 kernel: smp: Bringing up secondary CPUs ...
Feb 13 19:56:36.908819 kernel: Detected PIPT I-cache on CPU1
Feb 13 19:56:36.908826 kernel: GICv3: CPU1: found redistributor 1 region 0:0x00000000080c0000
Feb 13 19:56:36.908833 kernel: GICv3: CPU1: using allocated LPI pending table @0x0000000040110000
Feb 13 19:56:36.908840 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Feb 13 19:56:36.908847 kernel: CPU1: Booted secondary processor 0x0000000001 [0x413fd0c1]
Feb 13 19:56:36.908854 kernel: Detected PIPT I-cache on CPU2
Feb 13 19:56:36.908861 kernel: GICv3: CPU2: found redistributor 2 region 0:0x00000000080e0000
Feb 13 19:56:36.908869 kernel: GICv3: CPU2: using allocated LPI pending table @0x0000000040120000
Feb 13 19:56:36.908876 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Feb 13 19:56:36.908888 kernel: CPU2: Booted secondary processor 0x0000000002 [0x413fd0c1]
Feb 13 19:56:36.908896 kernel: Detected PIPT I-cache on CPU3
Feb 13 19:56:36.908903 kernel: GICv3: CPU3: found redistributor 3 region 0:0x0000000008100000
Feb 13 19:56:36.908911 kernel: GICv3: CPU3: using allocated LPI pending table @0x0000000040130000
Feb 13 19:56:36.908918 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Feb 13 19:56:36.908925 kernel: CPU3: Booted secondary processor 0x0000000003 [0x413fd0c1]
Feb 13 19:56:36.908933 kernel: smp: Brought up 1 node, 4 CPUs
Feb 13 19:56:36.908942 kernel: SMP: Total of 4 processors activated.
Feb 13 19:56:36.908949 kernel: CPU features: detected: 32-bit EL0 Support
Feb 13 19:56:36.908957 kernel: CPU features: detected: Data cache clean to the PoU not required for I/D coherence
Feb 13 19:56:36.908964 kernel: CPU features: detected: Common not Private translations
Feb 13 19:56:36.908971 kernel: CPU features: detected: CRC32 instructions
Feb 13 19:56:36.908978 kernel: CPU features: detected: Enhanced Virtualization Traps
Feb 13 19:56:36.908986 kernel: CPU features: detected: RCpc load-acquire (LDAPR)
Feb 13 19:56:36.908993 kernel: CPU features: detected: LSE atomic instructions
Feb 13 19:56:36.909001 kernel: CPU features: detected: Privileged Access Never
Feb 13 19:56:36.909009 kernel: CPU features: detected: RAS Extension Support
Feb 13 19:56:36.909075 kernel: CPU features: detected: Speculative Store Bypassing Safe (SSBS)
Feb 13 19:56:36.909086 kernel: CPU: All CPU(s) started at EL1
Feb 13 19:56:36.909093 kernel: alternatives: applying system-wide alternatives
Feb 13 19:56:36.909100 kernel: devtmpfs: initialized
Feb 13 19:56:36.909108 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Feb 13 19:56:36.909115 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear)
Feb 13 19:56:36.909123 kernel: pinctrl core: initialized pinctrl subsystem
Feb 13 19:56:36.909133 kernel: SMBIOS 3.0.0 present.
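Editor's note: each CPU reports the same identification value 0x413fd0c1 as it comes online. Assuming the standard MIDR_EL1 field layout from the Arm architecture manual, a short decode sketch (illustrative, not part of the boot flow) recovers the implementer, part number, and revision:

```python
# Decode a MIDR_EL1 value per the standard Arm field layout:
# [31:24] implementer, [23:20] variant, [19:16] architecture,
# [15:4] part number, [3:0] revision.
def decode_midr(midr: int) -> dict:
    return {
        "implementer":  (midr >> 24) & 0xFF,
        "variant":      (midr >> 20) & 0xF,
        "architecture": (midr >> 16) & 0xF,
        "partnum":      (midr >> 4)  & 0xFFF,
        "revision":     midr & 0xF,
    }

fields = decode_midr(0x413FD0C1)
assert fields["implementer"] == 0x41   # ASCII 'A' = Arm Ltd.
assert fields["partnum"] == 0xD0C      # Neoverse N1
print(f"r{fields['variant']}p{fields['revision']}")  # -> r3p1
```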
Feb 13 19:56:36.909140 kernel: DMI: QEMU KVM Virtual Machine, BIOS edk2-20230524-3.fc38 05/24/2023
Feb 13 19:56:36.909148 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Feb 13 19:56:36.909155 kernel: DMA: preallocated 512 KiB GFP_KERNEL pool for atomic allocations
Feb 13 19:56:36.909162 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations
Feb 13 19:56:36.909170 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations
Feb 13 19:56:36.909177 kernel: audit: initializing netlink subsys (disabled)
Feb 13 19:56:36.909184 kernel: audit: type=2000 audit(0.024:1): state=initialized audit_enabled=0 res=1
Feb 13 19:56:36.909191 kernel: thermal_sys: Registered thermal governor 'step_wise'
Feb 13 19:56:36.909200 kernel: cpuidle: using governor menu
Feb 13 19:56:36.909207 kernel: hw-breakpoint: found 6 breakpoint and 4 watchpoint registers.
Feb 13 19:56:36.909215 kernel: ASID allocator initialised with 32768 entries
Feb 13 19:56:36.909222 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Feb 13 19:56:36.909229 kernel: Serial: AMBA PL011 UART driver
Feb 13 19:56:36.909237 kernel: Modules: 2G module region forced by RANDOMIZE_MODULE_REGION_FULL
Feb 13 19:56:36.909244 kernel: Modules: 0 pages in range for non-PLT usage
Feb 13 19:56:36.909251 kernel: Modules: 509040 pages in range for PLT usage
Feb 13 19:56:36.909259 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Feb 13 19:56:36.909268 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 1.00 GiB page
Feb 13 19:56:36.909275 kernel: HugeTLB: registered 32.0 MiB page size, pre-allocated 0 pages
Feb 13 19:56:36.909282 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 32.0 MiB page
Feb 13 19:56:36.909289 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Feb 13 19:56:36.909297 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 2.00 MiB page
Feb 13 19:56:36.909304 kernel: HugeTLB: registered 64.0 KiB page size, pre-allocated 0 pages
Feb 13 19:56:36.909311 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 64.0 KiB page
Feb 13 19:56:36.909318 kernel: ACPI: Added _OSI(Module Device)
Feb 13 19:56:36.909326 kernel: ACPI: Added _OSI(Processor Device)
Feb 13 19:56:36.909334 kernel: ACPI: Added _OSI(3.0 _SCP Extensions)
Feb 13 19:56:36.909341 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Feb 13 19:56:36.909349 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Feb 13 19:56:36.909356 kernel: ACPI: Interpreter enabled
Feb 13 19:56:36.909364 kernel: ACPI: Using GIC for interrupt routing
Feb 13 19:56:36.909371 kernel: ACPI: MCFG table detected, 1 entries
Feb 13 19:56:36.909384 kernel: ARMH0011:00: ttyAMA0 at MMIO 0x9000000 (irq = 12, base_baud = 0) is a SBSA
Feb 13 19:56:36.909393 kernel: printk: console [ttyAMA0] enabled
Feb 13 19:56:36.909400 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Feb 13 19:56:36.909547 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Feb 13 19:56:36.909624 kernel: acpi PNP0A08:00: _OSC: platform does not support [LTR]
Feb 13 19:56:36.909690 kernel: acpi PNP0A08:00: _OSC: OS now controls [PCIeHotplug PME AER PCIeCapability]
Feb 13 19:56:36.909754 kernel: acpi PNP0A08:00: ECAM area [mem 0x4010000000-0x401fffffff] reserved by PNP0C02:00
Feb 13 19:56:36.909819 kernel: acpi PNP0A08:00: ECAM at [mem 0x4010000000-0x401fffffff] for [bus 00-ff]
Feb 13 19:56:36.909829 kernel: ACPI: Remapped I/O 0x000000003eff0000 to [io 0x0000-0xffff window]
Feb 13 19:56:36.909836 kernel: PCI host bridge to bus 0000:00
Feb 13 19:56:36.909908 kernel: pci_bus 0000:00: root bus resource [mem 0x10000000-0x3efeffff window]
Feb 13 19:56:36.909969 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0xffff window]
Feb 13 19:56:36.910041 kernel: pci_bus 0000:00: root bus resource [mem 0x8000000000-0xffffffffff window]
Feb 13 19:56:36.910102 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Feb 13 19:56:36.910181 kernel: pci 0000:00:00.0: [1b36:0008] type 00 class 0x060000
Feb 13 19:56:36.910259 kernel: pci 0000:00:01.0: [1af4:1005] type 00 class 0x00ff00
Feb 13 19:56:36.910327 kernel: pci 0000:00:01.0: reg 0x10: [io 0x0000-0x001f]
Feb 13 19:56:36.910407 kernel: pci 0000:00:01.0: reg 0x14: [mem 0x10000000-0x10000fff]
Feb 13 19:56:36.910475 kernel: pci 0000:00:01.0: reg 0x20: [mem 0x8000000000-0x8000003fff 64bit pref]
Feb 13 19:56:36.910541 kernel: pci 0000:00:01.0: BAR 4: assigned [mem 0x8000000000-0x8000003fff 64bit pref]
Feb 13 19:56:36.910607 kernel: pci 0000:00:01.0: BAR 1: assigned [mem 0x10000000-0x10000fff]
Feb 13 19:56:36.910673 kernel: pci 0000:00:01.0: BAR 0: assigned [io 0x1000-0x101f]
Feb 13 19:56:36.910732 kernel: pci_bus 0000:00: resource 4 [mem 0x10000000-0x3efeffff window]
Feb 13 19:56:36.910807 kernel: pci_bus 0000:00: resource 5 [io 0x0000-0xffff window]
Feb 13 19:56:36.910873 kernel: pci_bus 0000:00: resource 6 [mem 0x8000000000-0xffffffffff window]
Feb 13 19:56:36.910883 kernel: ACPI: PCI: Interrupt link GSI0 configured for IRQ 35
Feb 13 19:56:36.910890 kernel: ACPI: PCI: Interrupt link GSI1 configured for IRQ 36
Feb 13 19:56:36.910898 kernel: ACPI: PCI: Interrupt link GSI2 configured for IRQ 37
Feb 13 19:56:36.910906 kernel: ACPI: PCI: Interrupt link GSI3 configured for IRQ 38
Feb 13 19:56:36.910913 kernel: iommu: Default domain type: Translated
Feb 13 19:56:36.910920 kernel: iommu: DMA domain TLB invalidation policy: strict mode
Feb 13 19:56:36.910928 kernel: efivars: Registered efivars operations
Feb 13 19:56:36.910937 kernel: vgaarb: loaded
Feb 13 19:56:36.910944 kernel: clocksource: Switched to clocksource arch_sys_counter
Feb 13 19:56:36.910951 kernel: VFS: Disk quotas dquot_6.6.0
Feb 13 19:56:36.910959 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Feb 13 19:56:36.910966 kernel: pnp: PnP ACPI init
Feb 13 19:56:36.911082 kernel: system 00:00: [mem 0x4010000000-0x401fffffff window] could not be reserved
Feb 13 19:56:36.911094 kernel: pnp: PnP ACPI: found 1 devices
Feb 13 19:56:36.911101 kernel: NET: Registered PF_INET protocol family
Feb 13 19:56:36.911112 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Feb 13 19:56:36.911119 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Feb 13 19:56:36.911127 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Feb 13 19:56:36.911134 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Feb 13 19:56:36.911142 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
Feb 13 19:56:36.911149 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Feb 13 19:56:36.911156 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Feb 13 19:56:36.911164 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Feb 13 19:56:36.911171 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Feb 13 19:56:36.911180 kernel: PCI: CLS 0 bytes, default 64
Feb 13 19:56:36.911187 kernel: kvm [1]: HYP mode not available
Feb 13 19:56:36.911195 kernel: Initialise system trusted keyrings
Feb 13 19:56:36.911202 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Feb 13 19:56:36.911209 kernel: Key type asymmetric registered
Feb 13 19:56:36.911217 kernel: Asymmetric key parser 'x509' registered
Feb 13 19:56:36.911224 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250)
Feb 13 19:56:36.911231 kernel: io scheduler mq-deadline registered
Feb 13 19:56:36.911238 kernel: io scheduler kyber registered
Feb 13 19:56:36.911247 kernel: io scheduler bfq registered
Feb 13 19:56:36.911254 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0C:00/input/input0
Feb 13 19:56:36.911262 kernel: ACPI: button: Power Button [PWRB]
Feb 13 19:56:36.911269 kernel: ACPI: \_SB_.PCI0.GSI1: Enabled at IRQ 36
Feb 13 19:56:36.911338 kernel: virtio-pci 0000:00:01.0: enabling device (0005 -> 0007)
Feb 13 19:56:36.911348 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Feb 13 19:56:36.911356 kernel: thunder_xcv, ver 1.0
Feb 13 19:56:36.911363 kernel: thunder_bgx, ver 1.0
Feb 13 19:56:36.911370 kernel: nicpf, ver 1.0
Feb 13 19:56:36.911388 kernel: nicvf, ver 1.0
Feb 13 19:56:36.911466 kernel: rtc-efi rtc-efi.0: registered as rtc0
Feb 13 19:56:36.911528 kernel: rtc-efi rtc-efi.0: setting system clock to 2025-02-13T19:56:36 UTC (1739476596)
Feb 13 19:56:36.911538 kernel: hid: raw HID events driver (C) Jiri Kosina
Feb 13 19:56:36.911545 kernel: hw perfevents: enabled with armv8_pmuv3_0 PMU driver, 7 counters available
Feb 13 19:56:36.911553 kernel: watchdog: Delayed init of the lockup detector failed: -19
Feb 13 19:56:36.911560 kernel: watchdog: Hard watchdog permanently disabled
Feb 13 19:56:36.911567 kernel: NET: Registered PF_INET6 protocol family
Feb 13 19:56:36.911577 kernel: Segment Routing with IPv6
Feb 13 19:56:36.911584 kernel: In-situ OAM (IOAM) with IPv6
Feb 13 19:56:36.911591 kernel: NET: Registered PF_PACKET protocol family
Feb 13 19:56:36.911599 kernel: Key type dns_resolver registered
Feb 13 19:56:36.911606 kernel: registered taskstats version 1
Feb 13 19:56:36.911614 kernel: Loading compiled-in X.509 certificates
Feb 13 19:56:36.911621 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.74-flatcar: 8bd805622262697b24b0fa7c407ae82c4289ceec'
Feb 13 19:56:36.911628 kernel: Key type .fscrypt registered
Feb 13 19:56:36.911635 kernel: Key type fscrypt-provisioning registered
Feb 13 19:56:36.911650 kernel: ima: No TPM chip found, activating TPM-bypass!
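Editor's note: the host bridge above exposes PCIe config space through ECAM at 0x4010000000 for buses 00-ff (256 buses x 1 MiB each, exactly the 256 MiB window the log reserves). A small sketch of the standard ECAM address arithmetic, using the virtio device 0000:00:01.0 ([1af4:1005]) enumerated above as the example:

```python
# Standard PCIe ECAM layout: each function gets 4 KiB of config space at
# base + (bus << 20 | device << 15 | function << 12).
ECAM_BASE = 0x4010000000  # from "ECAM at [mem 0x4010000000-0x401fffffff]"

def ecam_addr(bus: int, dev: int, fn: int) -> int:
    assert bus < 256 and dev < 32 and fn < 8
    return ECAM_BASE + (bus << 20) + (dev << 15) + (fn << 12)

# The virtio device 0000:00:01.0 enumerated above:
print(hex(ecam_addr(0, 1, 0)))   # -> 0x4010008000

# The full window covers 256 buses * 1 MiB = 0x10000000 bytes,
# matching 0x4010000000..0x401fffffff.
assert ecam_addr(0xFF, 31, 7) + 0x1000 - ECAM_BASE == 0x10000000
```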
Feb 13 19:56:36.911657 kernel: ima: Allocated hash algorithm: sha1
Feb 13 19:56:36.911664 kernel: ima: No architecture policies found
Feb 13 19:56:36.911672 kernel: alg: No test for fips(ansi_cprng) (fips_ansi_cprng)
Feb 13 19:56:36.911679 kernel: clk: Disabling unused clocks
Feb 13 19:56:36.911686 kernel: Freeing unused kernel memory: 39360K
Feb 13 19:56:36.911693 kernel: Run /init as init process
Feb 13 19:56:36.911701 kernel: with arguments:
Feb 13 19:56:36.911708 kernel: /init
Feb 13 19:56:36.911716 kernel: with environment:
Feb 13 19:56:36.911724 kernel: HOME=/
Feb 13 19:56:36.911731 kernel: TERM=linux
Feb 13 19:56:36.911738 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
Feb 13 19:56:36.911747 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Feb 13 19:56:36.911757 systemd[1]: Detected virtualization kvm.
Feb 13 19:56:36.911765 systemd[1]: Detected architecture arm64.
Feb 13 19:56:36.911773 systemd[1]: Running in initrd.
Feb 13 19:56:36.911782 systemd[1]: No hostname configured, using default hostname.
Feb 13 19:56:36.911789 systemd[1]: Hostname set to <localhost>.
Feb 13 19:56:36.911797 systemd[1]: Initializing machine ID from VM UUID.
Feb 13 19:56:36.911805 systemd[1]: Queued start job for default target initrd.target.
Feb 13 19:56:36.911813 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Feb 13 19:56:36.911821 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Feb 13 19:56:36.911830 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Feb 13 19:56:36.911837 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Feb 13 19:56:36.911847 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Feb 13 19:56:36.911855 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Feb 13 19:56:36.911864 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Feb 13 19:56:36.911872 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Feb 13 19:56:36.911880 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Feb 13 19:56:36.911889 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Feb 13 19:56:36.911896 systemd[1]: Reached target paths.target - Path Units.
Feb 13 19:56:36.911905 systemd[1]: Reached target slices.target - Slice Units.
Feb 13 19:56:36.911913 systemd[1]: Reached target swap.target - Swaps.
Feb 13 19:56:36.911921 systemd[1]: Reached target timers.target - Timer Units.
Feb 13 19:56:36.911929 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Feb 13 19:56:36.911937 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Feb 13 19:56:36.911944 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Feb 13 19:56:36.911952 systemd[1]: Listening on systemd-journald.socket - Journal Socket.
Feb 13 19:56:36.911961 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Feb 13 19:56:36.911970 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Feb 13 19:56:36.911978 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Feb 13 19:56:36.911985 systemd[1]: Reached target sockets.target - Socket Units.
Feb 13 19:56:36.911993 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Feb 13 19:56:36.912001 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Feb 13 19:56:36.912009 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Feb 13 19:56:36.912026 systemd[1]: Starting systemd-fsck-usr.service...
Feb 13 19:56:36.912035 systemd[1]: Starting systemd-journald.service - Journal Service...
Feb 13 19:56:36.912043 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Feb 13 19:56:36.912054 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Feb 13 19:56:36.912061 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Feb 13 19:56:36.912069 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Feb 13 19:56:36.912077 systemd[1]: Finished systemd-fsck-usr.service.
Feb 13 19:56:36.912086 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Feb 13 19:56:36.912113 systemd-journald[238]: Collecting audit messages is disabled.
Feb 13 19:56:36.912132 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Feb 13 19:56:36.912140 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Feb 13 19:56:36.912151 systemd-journald[238]: Journal started
Feb 13 19:56:36.912180 systemd-journald[238]: Runtime Journal (/run/log/journal/a4422b575b9e4561b207b0b1fc458b2b) is 5.9M, max 47.3M, 41.4M free.
Feb 13 19:56:36.903172 systemd-modules-load[239]: Inserted module 'overlay'
Feb 13 19:56:36.916145 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Feb 13 19:56:36.916174 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Feb 13 19:56:36.918034 systemd[1]: Started systemd-journald.service - Journal Service.
Feb 13 19:56:36.922526 kernel: Bridge firewalling registered
Feb 13 19:56:36.922030 systemd-modules-load[239]: Inserted module 'br_netfilter'
Feb 13 19:56:36.922234 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Feb 13 19:56:36.924902 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Feb 13 19:56:36.926626 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Feb 13 19:56:36.932073 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Feb 13 19:56:36.933419 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Feb 13 19:56:36.937352 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Feb 13 19:56:36.940069 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Feb 13 19:56:36.942239 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Feb 13 19:56:36.943264 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Feb 13 19:56:36.948473 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Feb 13 19:56:36.957069 dracut-cmdline[274]: dracut-dracut-053
Feb 13 19:56:36.959586 dracut-cmdline[274]: Using kernel command line parameters: rd.driver.pre=btrfs BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected acpi=force verity.usrhash=c15c751c06cfb933aa98417326b93d899c08a83ce060a940cd01082629c201a7
Feb 13 19:56:36.977103 systemd-resolved[277]: Positive Trust Anchors:
Feb 13 19:56:36.977115 systemd-resolved[277]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Feb 13 19:56:36.977147 systemd-resolved[277]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Feb 13 19:56:36.981918 systemd-resolved[277]: Defaulting to hostname 'linux'.
Feb 13 19:56:36.982883 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Feb 13 19:56:36.986592 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Feb 13 19:56:37.025050 kernel: SCSI subsystem initialized
Feb 13 19:56:37.030035 kernel: Loading iSCSI transport class v2.0-870.
Feb 13 19:56:37.037044 kernel: iscsi: registered transport (tcp)
Feb 13 19:56:37.050121 kernel: iscsi: registered transport (qla4xxx)
Feb 13 19:56:37.050168 kernel: QLogic iSCSI HBA Driver
Feb 13 19:56:37.093375 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Feb 13 19:56:37.102181 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Feb 13 19:56:37.117445 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Feb 13 19:56:37.118076 kernel: device-mapper: uevent: version 1.0.3
Feb 13 19:56:37.118093 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com
Feb 13 19:56:37.168056 kernel: raid6: neonx8 gen() 15648 MB/s
Feb 13 19:56:37.185042 kernel: raid6: neonx4 gen() 15377 MB/s
Feb 13 19:56:37.202038 kernel: raid6: neonx2 gen() 13095 MB/s
Feb 13 19:56:37.219040 kernel: raid6: neonx1 gen() 10415 MB/s
Feb 13 19:56:37.236042 kernel: raid6: int64x8 gen() 6906 MB/s
Feb 13 19:56:37.253038 kernel: raid6: int64x4 gen() 7299 MB/s
Feb 13 19:56:37.270037 kernel: raid6: int64x2 gen() 6109 MB/s
Feb 13 19:56:37.287167 kernel: raid6: int64x1 gen() 4961 MB/s
Feb 13 19:56:37.287183 kernel: raid6: using algorithm neonx8 gen() 15648 MB/s
Feb 13 19:56:37.305127 kernel: raid6: .... xor() 11854 MB/s, rmw enabled
Feb 13 19:56:37.305156 kernel: raid6: using neon recovery algorithm
Feb 13 19:56:37.310043 kernel: xor: measuring software checksum speed
Feb 13 19:56:37.311235 kernel: 8regs : 17289 MB/sec
Feb 13 19:56:37.311248 kernel: 32regs : 19422 MB/sec
Feb 13 19:56:37.312498 kernel: arm64_neon : 26005 MB/sec
Feb 13 19:56:37.312510 kernel: xor: using function: arm64_neon (26005 MB/sec)
Feb 13 19:56:37.364050 kernel: Btrfs loaded, zoned=no, fsverity=no
Feb 13 19:56:37.375418 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
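Editor's note: the raid6 and xor lines show the kernel benchmarking every candidate implementation and keeping the fastest (neonx8 and arm64_neon here). A toy sketch of that selection step, seeded with the throughput figures reported in this log:

```python
# Toy version of the kernel's "benchmark, then pick the fastest" selection,
# using the raid6 gen() throughputs (MB/s) measured above.
raid6_gen = {
    "neonx8": 15648, "neonx4": 15377, "neonx2": 13095, "neonx1": 10415,
    "int64x8": 6906, "int64x4": 7299, "int64x2": 6109, "int64x1": 4961,
}
best = max(raid6_gen, key=raid6_gen.get)
print(f"raid6: using algorithm {best} gen() {raid6_gen[best]} MB/s")
# -> raid6: using algorithm neonx8 gen() 15648 MB/s
```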
Feb 13 19:56:37.386209 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Feb 13 19:56:37.397808 systemd-udevd[461]: Using default interface naming scheme 'v255'.
Feb 13 19:56:37.400999 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Feb 13 19:56:37.404569 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Feb 13 19:56:37.419457 dracut-pre-trigger[470]: rd.md=0: removing MD RAID activation
Feb 13 19:56:37.449076 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Feb 13 19:56:37.460203 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Feb 13 19:56:37.503219 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Feb 13 19:56:37.512201 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Feb 13 19:56:37.522288 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Feb 13 19:56:37.524788 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Feb 13 19:56:37.526582 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Feb 13 19:56:37.528967 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Feb 13 19:56:37.537189 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Feb 13 19:56:37.548146 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Feb 13 19:56:37.562061 kernel: virtio_blk virtio1: 1/0/0 default/read/poll queues
Feb 13 19:56:37.570482 kernel: virtio_blk virtio1: [vda] 19775488 512-byte logical blocks (10.1 GB/9.43 GiB)
Feb 13 19:56:37.570587 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Feb 13 19:56:37.570599 kernel: GPT:9289727 != 19775487
Feb 13 19:56:37.570608 kernel: GPT:Alternate GPT header not at the end of the disk.
Feb 13 19:56:37.570617 kernel: GPT:9289727 != 19775487
Feb 13 19:56:37.570626 kernel: GPT: Use GNU Parted to correct GPT errors.
Feb 13 19:56:37.570643 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Feb 13 19:56:37.571482 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Feb 13 19:56:37.571644 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Feb 13 19:56:37.575001 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Feb 13 19:56:37.577833 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Feb 13 19:56:37.577972 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Feb 13 19:56:37.580048 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Feb 13 19:56:37.591841 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/vda6 scanned by (udev-worker) (509)
Feb 13 19:56:37.591884 kernel: BTRFS: device fsid 4bb2b262-8ef2-48e3-80f4-24f9d7a85bf6 devid 1 transid 40 /dev/vda3 scanned by (udev-worker) (525)
Feb 13 19:56:37.594252 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Feb 13 19:56:37.607132 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT.
Feb 13 19:56:37.609427 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Feb 13 19:56:37.614306 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM.
Feb 13 19:56:37.621553 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
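Editor's note: the GPT warnings above are the expected signature of a grown disk image: the backup GPT header must sit on the disk's last LBA, but the primary header still records the position from when the image was smaller (Flatcar's disk-uuid.service repairs this a few lines later). A quick check of the numbers in the log, assuming 512-byte sectors as reported:

```python
# The backup GPT header belongs on the disk's last LBA.
SECTOR = 512
total_sectors = 19775488          # virtio_blk: [vda] 19775488 512-byte blocks
expected_alt_lba = total_sectors - 1
recorded_alt_lba = 9289727        # stale value from the primary header

assert expected_alt_lba == 19775487   # matches "GPT:9289727 != 19775487"

# The stale value implies the image was built at this size, then enlarged:
original_bytes = (recorded_alt_lba + 1) * SECTOR
print(f"{original_bytes / 2**30:.2f} GiB")   # -> 4.43 GiB original image
```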
Feb 13 19:56:37.625475 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132.
Feb 13 19:56:37.626669 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A.
Feb 13 19:56:37.641174 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Feb 13 19:56:37.642968 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Feb 13 19:56:37.648480 disk-uuid[551]: Primary Header is updated.
Feb 13 19:56:37.648480 disk-uuid[551]: Secondary Entries is updated.
Feb 13 19:56:37.648480 disk-uuid[551]: Secondary Header is updated.
Feb 13 19:56:37.652050 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Feb 13 19:56:37.662562 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Feb 13 19:56:38.663264 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Feb 13 19:56:38.664411 disk-uuid[553]: The operation has completed successfully.
Feb 13 19:56:38.687821 systemd[1]: disk-uuid.service: Deactivated successfully.
Feb 13 19:56:38.687915 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Feb 13 19:56:38.703235 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
Feb 13 19:56:38.705905 sh[576]: Success
Feb 13 19:56:38.719075 kernel: device-mapper: verity: sha256 using implementation "sha256-ce"
Feb 13 19:56:38.748958 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
Feb 13 19:56:38.763288 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
Feb 13 19:56:38.765598 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
Feb 13 19:56:38.774742 kernel: BTRFS info (device dm-0): first mount of filesystem 4bb2b262-8ef2-48e3-80f4-24f9d7a85bf6
Feb 13 19:56:38.774778 kernel: BTRFS info (device dm-0): using crc32c (crc32c-generic) checksum algorithm
Feb 13 19:56:38.774789 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead
Feb 13 19:56:38.776599 kernel: BTRFS info (device dm-0): disabling log replay at mount time
Feb 13 19:56:38.776615 kernel: BTRFS info (device dm-0): using free space tree
Feb 13 19:56:38.780353 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
Feb 13 19:56:38.781822 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Feb 13 19:56:38.786157 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Feb 13 19:56:38.787670 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Feb 13 19:56:38.796424 kernel: BTRFS info (device vda6): first mount of filesystem 896fb6d3-4143-43a6-a44b-ca1ce10817e1
Feb 13 19:56:38.796463 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm
Feb 13 19:56:38.796474 kernel: BTRFS info (device vda6): using free space tree
Feb 13 19:56:38.800335 kernel: BTRFS info (device vda6): auto enabling async discard
Feb 13 19:56:38.807600 systemd[1]: mnt-oem.mount: Deactivated successfully.
Feb 13 19:56:38.809731 kernel: BTRFS info (device vda6): last unmount of filesystem 896fb6d3-4143-43a6-a44b-ca1ce10817e1
Feb 13 19:56:38.814573 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Feb 13 19:56:38.824192 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
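Editor's note: verity-setup activates /dev/mapper/usr so that every read from the USR partition is checked against a Merkle tree whose root must equal the verity.usrhash value from the kernel command line. A heavily simplified, single-level sketch of that idea (real dm-verity uses a salted multi-level tree with an on-disk superblock, handled by the kernel and veritysetup, not by code like this):

```python
import hashlib

# Heavily simplified dm-verity idea: hash fixed-size data blocks, then hash
# the concatenated block hashes into a root that is compared against a
# trusted value (on Flatcar, verity.usrhash from the kernel command line).
BLOCK = 4096

def verity_root(data: bytes) -> str:
    blocks = [data[i:i + BLOCK] for i in range(0, len(data), BLOCK)]
    leaf_hashes = b"".join(hashlib.sha256(b).digest() for b in blocks)
    return hashlib.sha256(leaf_hashes).hexdigest()

image = b"\x00" * (4 * BLOCK)          # stand-in for the USR partition
trusted = verity_root(image)           # would come from the command line
assert verity_root(image) == trusted   # any bit flip would change the root
```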
Feb 13 19:56:38.888630 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Feb 13 19:56:38.900208 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Feb 13 19:56:38.924130 systemd-networkd[767]: lo: Link UP
Feb 13 19:56:38.924141 systemd-networkd[767]: lo: Gained carrier
Feb 13 19:56:38.924826 systemd-networkd[767]: Enumeration completed
Feb 13 19:56:38.926892 ignition[666]: Ignition 2.19.0
Feb 13 19:56:38.924918 systemd[1]: Started systemd-networkd.service - Network Configuration.
Feb 13 19:56:38.926898 ignition[666]: Stage: fetch-offline
Feb 13 19:56:38.925323 systemd-networkd[767]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Feb 13 19:56:38.926929 ignition[666]: no configs at "/usr/lib/ignition/base.d"
Feb 13 19:56:38.925327 systemd-networkd[767]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Feb 13 19:56:38.926937 ignition[666]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Feb 13 19:56:38.926200 systemd-networkd[767]: eth0: Link UP
Feb 13 19:56:38.927116 ignition[666]: parsed url from cmdline: ""
Feb 13 19:56:38.926203 systemd-networkd[767]: eth0: Gained carrier
Feb 13 19:56:38.927120 ignition[666]: no config URL provided
Feb 13 19:56:38.926209 systemd-networkd[767]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Feb 13 19:56:38.927124 ignition[666]: reading system config file "/usr/lib/ignition/user.ign"
Feb 13 19:56:38.926396 systemd[1]: Reached target network.target - Network.
Feb 13 19:56:38.927132 ignition[666]: no config at "/usr/lib/ignition/user.ign"
Feb 13 19:56:38.937059 systemd-networkd[767]: eth0: DHCPv4 address 10.0.0.134/16, gateway 10.0.0.1 acquired from 10.0.0.1
Feb 13 19:56:38.927153 ignition[666]: op(1): [started] loading QEMU firmware config module
Feb 13 19:56:38.927158 ignition[666]: op(1): executing: "modprobe" "qemu_fw_cfg"
Feb 13 19:56:38.936898 ignition[666]: op(1): [finished] loading QEMU firmware config module
Feb 13 19:56:38.965338 ignition[666]: parsing config with SHA512: 6aabfa2aaae507324b6e56f473ad99b9f66b744062232cb0cac331596e498007f1de7c04af09cc73a1a008dd16ed813fd64e0807f59b8d22321e137cd414047e
Feb 13 19:56:38.970889 unknown[666]: fetched base config from "system"
Feb 13 19:56:38.971009 unknown[666]: fetched user config from "qemu"
Feb 13 19:56:38.971562 ignition[666]: fetch-offline: fetch-offline passed
Feb 13 19:56:38.973209 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Feb 13 19:56:38.971631 ignition[666]: Ignition finished successfully
Feb 13 19:56:38.974831 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json).
Feb 13 19:56:38.985195 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Feb 13 19:56:38.995895 ignition[773]: Ignition 2.19.0
Feb 13 19:56:38.995907 ignition[773]: Stage: kargs
Feb 13 19:56:38.996100 ignition[773]: no configs at "/usr/lib/ignition/base.d"
Feb 13 19:56:38.996110 ignition[773]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Feb 13 19:56:38.996939 ignition[773]: kargs: kargs passed
Feb 13 19:56:38.999674 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Feb 13 19:56:38.996987 ignition[773]: Ignition finished successfully
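Editor's note: in the fetch-offline stage above, Ignition loads the user config over QEMU's fw_cfg interface and logs a SHA512 fingerprint of the bytes it is about to parse. The digest is plain SHA-512 over the raw config; a minimal sketch of reproducing such a fingerprint (the actual config contents are not shown in this log, so a placeholder stands in):

```python
import hashlib

# Ignition logs 'parsing config with SHA512: <digest>' before parsing.
# The digest is SHA-512 over the raw config bytes. The real config
# delivered via fw_cfg is not reproduced in this log, so this uses a
# placeholder document.
config_bytes = b'{"ignition": {"version": "3.4.0"}}'  # placeholder only
print(hashlib.sha512(config_bytes).hexdigest())
```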
Feb 13 19:56:39.009175 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Feb 13 19:56:39.018337 ignition[781]: Ignition 2.19.0
Feb 13 19:56:39.018347 ignition[781]: Stage: disks
Feb 13 19:56:39.018518 ignition[781]: no configs at "/usr/lib/ignition/base.d"
Feb 13 19:56:39.021241 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Feb 13 19:56:39.018528 ignition[781]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Feb 13 19:56:39.022386 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Feb 13 19:56:39.019340 ignition[781]: disks: disks passed
Feb 13 19:56:39.024081 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Feb 13 19:56:39.019393 ignition[781]: Ignition finished successfully
Feb 13 19:56:39.026095 systemd[1]: Reached target local-fs.target - Local File Systems.
Feb 13 19:56:39.027884 systemd[1]: Reached target sysinit.target - System Initialization.
Feb 13 19:56:39.029322 systemd[1]: Reached target basic.target - Basic System.
Feb 13 19:56:39.031890 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Feb 13 19:56:39.045056 systemd-fsck[791]: ROOT: clean, 14/553520 files, 52654/553472 blocks
Feb 13 19:56:39.048640 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Feb 13 19:56:39.062129 systemd[1]: Mounting sysroot.mount - /sysroot...
Feb 13 19:56:39.102050 kernel: EXT4-fs (vda9): mounted filesystem 9957d679-c6c4-49f4-b1b2-c3c1f3ba5699 r/w with ordered data mode. Quota mode: none.
Feb 13 19:56:39.102069 systemd[1]: Mounted sysroot.mount - /sysroot.
Feb 13 19:56:39.103313 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Feb 13 19:56:39.113099 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Feb 13 19:56:39.114887 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Feb 13 19:56:39.117290 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met.
Feb 13 19:56:39.117339 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Feb 13 19:56:39.123115 kernel: BTRFS: device label OEM devid 1 transid 13 /dev/vda6 scanned by mount (800)
Feb 13 19:56:39.117364 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Feb 13 19:56:39.127678 kernel: BTRFS info (device vda6): first mount of filesystem 896fb6d3-4143-43a6-a44b-ca1ce10817e1
Feb 13 19:56:39.127697 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm
Feb 13 19:56:39.127710 kernel: BTRFS info (device vda6): using free space tree
Feb 13 19:56:39.122052 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Feb 13 19:56:39.125840 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Feb 13 19:56:39.131478 kernel: BTRFS info (device vda6): auto enabling async discard
Feb 13 19:56:39.131743 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Feb 13 19:56:39.166427 initrd-setup-root[825]: cut: /sysroot/etc/passwd: No such file or directory
Feb 13 19:56:39.170660 initrd-setup-root[832]: cut: /sysroot/etc/group: No such file or directory
Feb 13 19:56:39.174715 initrd-setup-root[839]: cut: /sysroot/etc/shadow: No such file or directory
Feb 13 19:56:39.178694 initrd-setup-root[846]: cut: /sysroot/etc/gshadow: No such file or directory
Feb 13 19:56:39.243915 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Feb 13 19:56:39.251092 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Feb 13 19:56:39.252554 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Feb 13 19:56:39.259060 kernel: BTRFS info (device vda6): last unmount of filesystem 896fb6d3-4143-43a6-a44b-ca1ce10817e1
Feb 13 19:56:39.274074 ignition[915]: INFO : Ignition 2.19.0
Feb 13 19:56:39.274074 ignition[915]: INFO : Stage: mount
Feb 13 19:56:39.275674 ignition[915]: INFO : no configs at "/usr/lib/ignition/base.d"
Feb 13 19:56:39.275674 ignition[915]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Feb 13 19:56:39.275674 ignition[915]: INFO : mount: mount passed
Feb 13 19:56:39.275674 ignition[915]: INFO : Ignition finished successfully
Feb 13 19:56:39.275278 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Feb 13 19:56:39.277730 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Feb 13 19:56:39.287137 systemd[1]: Starting ignition-files.service - Ignition (files)...
Feb 13 19:56:39.773702 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Feb 13 19:56:39.782202 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Feb 13 19:56:39.788816 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/vda6 scanned by mount (927)
Feb 13 19:56:39.788850 kernel: BTRFS info (device vda6): first mount of filesystem 896fb6d3-4143-43a6-a44b-ca1ce10817e1
Feb 13 19:56:39.788861 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm
Feb 13 19:56:39.790452 kernel: BTRFS info (device vda6): using free space tree
Feb 13 19:56:39.793043 kernel: BTRFS info (device vda6): auto enabling async discard
Feb 13 19:56:39.793761 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Feb 13 19:56:39.811637 ignition[944]: INFO : Ignition 2.19.0
Feb 13 19:56:39.813735 ignition[944]: INFO : Stage: files
Feb 13 19:56:39.813735 ignition[944]: INFO : no configs at "/usr/lib/ignition/base.d"
Feb 13 19:56:39.813735 ignition[944]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Feb 13 19:56:39.813735 ignition[944]: DEBUG : files: compiled without relabeling support, skipping
Feb 13 19:56:39.817973 ignition[944]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Feb 13 19:56:39.817973 ignition[944]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Feb 13 19:56:39.817973 ignition[944]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Feb 13 19:56:39.817973 ignition[944]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Feb 13 19:56:39.817973 ignition[944]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Feb 13 19:56:39.817288 unknown[944]: wrote ssh authorized keys file for user: core
Feb 13 19:56:39.825464 ignition[944]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-arm64.tar.gz"
Feb 13 19:56:39.825464 ignition[944]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.13.2-linux-arm64.tar.gz: attempt #1
Feb 13 19:56:39.873585 ignition[944]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
Feb 13 19:56:40.044306 ignition[944]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-arm64.tar.gz"
Feb 13 19:56:40.044306 ignition[944]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh"
Feb 13 19:56:40.048206 ignition[944]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh"
Feb 13 19:56:40.048206 ignition[944]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml"
Feb 13 19:56:40.048206 ignition[944]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml"
Feb 13 19:56:40.048206 ignition[944]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Feb 13 19:56:40.048206 ignition[944]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Feb 13 19:56:40.048206 ignition[944]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Feb 13 19:56:40.048206 ignition[944]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Feb 13 19:56:40.048206 ignition[944]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf"
Feb 13 19:56:40.048206 ignition[944]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Feb 13 19:56:40.048206 ignition[944]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.30.1-arm64.raw"
Feb 13 19:56:40.048206 ignition[944]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.30.1-arm64.raw"
Feb 13 19:56:40.048206 ignition[944]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.30.1-arm64.raw"
Feb 13 19:56:40.048206 ignition[944]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.30.1-arm64.raw: attempt #1
Feb 13 19:56:40.272096 ignition[944]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK
Feb 13 19:56:40.464879 ignition[944]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.30.1-arm64.raw"
Feb 13 19:56:40.464879 ignition[944]: INFO : files: op(b): [started] processing unit "prepare-helm.service"
Feb 13 19:56:40.468496 ignition[944]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Feb 13 19:56:40.468496 ignition[944]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Feb 13 19:56:40.468496 ignition[944]: INFO : files: op(b): [finished] processing unit "prepare-helm.service"
Feb 13 19:56:40.468496 ignition[944]: INFO : files: op(d): [started] processing unit "coreos-metadata.service"
Feb 13 19:56:40.468496 ignition[944]: INFO : files: op(d): op(e): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Feb 13 19:56:40.468496 ignition[944]: INFO : files: op(d): op(e): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Feb 13 19:56:40.468496 ignition[944]: INFO : files: op(d): [finished] processing unit "coreos-metadata.service"
Feb 13 19:56:40.468496 ignition[944]: INFO : files: op(f): [started] setting preset to disabled for "coreos-metadata.service"
Feb 13 19:56:40.491608 ignition[944]: INFO : files: op(f): op(10): [started] removing enablement symlink(s) for "coreos-metadata.service"
Feb 13 19:56:40.495765 ignition[944]: INFO : files: op(f): op(10): [finished] removing enablement symlink(s) for "coreos-metadata.service"
Feb 13 19:56:40.495765 ignition[944]: INFO : files: op(f): [finished] setting preset to disabled for "coreos-metadata.service"
Feb 13 19:56:40.495765 ignition[944]: INFO : files: op(11): [started] setting preset to enabled for "prepare-helm.service"
Feb 13 19:56:40.495765 ignition[944]: INFO : files: op(11): [finished] setting preset to enabled for "prepare-helm.service"
Feb 13 19:56:40.502952 ignition[944]: INFO : files: createResultFile: createFiles: op(12): [started] writing file "/sysroot/etc/.ignition-result.json"
Feb 13 19:56:40.502952 ignition[944]: INFO : files: createResultFile: createFiles: op(12): [finished] writing file "/sysroot/etc/.ignition-result.json"
Feb 13 19:56:40.502952 ignition[944]: INFO : files: files passed
Feb 13 19:56:40.502952 ignition[944]: INFO : Ignition finished successfully
Feb 13 19:56:40.499077 systemd[1]: Finished ignition-files.service - Ignition (files).
Feb 13 19:56:40.514187 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Feb 13 19:56:40.516560 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Feb 13 19:56:40.517913 systemd[1]: ignition-quench.service: Deactivated successfully.
Feb 13 19:56:40.517990 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Feb 13 19:56:40.524007 initrd-setup-root-after-ignition[972]: grep: /sysroot/oem/oem-release: No such file or directory
Feb 13 19:56:40.527222 initrd-setup-root-after-ignition[974]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Feb 13 19:56:40.527222 initrd-setup-root-after-ignition[974]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Feb 13 19:56:40.530483 initrd-setup-root-after-ignition[978]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Feb 13 19:56:40.531241 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Feb 13 19:56:40.533520 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Feb 13 19:56:40.544157 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Feb 13 19:56:40.562046 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Feb 13 19:56:40.563043 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Feb 13 19:56:40.565265 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Feb 13 19:56:40.566282 systemd[1]: Reached target initrd.target - Initrd Default Target.
Feb 13 19:56:40.568010 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Feb 13 19:56:40.579206 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Feb 13 19:56:40.590126 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Feb 13 19:56:40.593214 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Feb 13 19:56:40.602920 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Feb 13 19:56:40.604163 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Feb 13 19:56:40.606213 systemd[1]: Stopped target timers.target - Timer Units.
Feb 13 19:56:40.607932 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Feb 13 19:56:40.608062 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Feb 13 19:56:40.610553 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Feb 13 19:56:40.612512 systemd[1]: Stopped target basic.target - Basic System.
Feb 13 19:56:40.614120 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Feb 13 19:56:40.615809 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Feb 13 19:56:40.617809 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Feb 13 19:56:40.619772 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Feb 13 19:56:40.621587 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
Feb 13 19:56:40.623502 systemd[1]: Stopped target sysinit.target - System Initialization.
Feb 13 19:56:40.625427 systemd[1]: Stopped target local-fs.target - Local File Systems.
Feb 13 19:56:40.627109 systemd[1]: Stopped target swap.target - Swaps.
Feb 13 19:56:40.628616 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Feb 13 19:56:40.628729 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
Feb 13 19:56:40.630992 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
Feb 13 19:56:40.632954 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Feb 13 19:56:40.634893 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Feb 13 19:56:40.638104 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Feb 13 19:56:40.639369 systemd[1]: dracut-initqueue.service: Deactivated successfully. Feb 13 19:56:40.639490 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Feb 13 19:56:40.642225 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Feb 13 19:56:40.642338 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Feb 13 19:56:40.644283 systemd[1]: Stopped target paths.target - Path Units. Feb 13 19:56:40.645829 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Feb 13 19:56:40.645922 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Feb 13 19:56:40.647943 systemd[1]: Stopped target slices.target - Slice Units. Feb 13 19:56:40.649510 systemd[1]: Stopped target sockets.target - Socket Units. Feb 13 19:56:40.651196 systemd[1]: iscsid.socket: Deactivated successfully. Feb 13 19:56:40.651287 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Feb 13 19:56:40.653325 systemd[1]: iscsiuio.socket: Deactivated successfully. Feb 13 19:56:40.653422 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Feb 13 19:56:40.654938 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Feb 13 19:56:40.655056 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Feb 13 19:56:40.656786 systemd[1]: ignition-files.service: Deactivated successfully. Feb 13 19:56:40.656892 systemd[1]: Stopped ignition-files.service - Ignition (files). Feb 13 19:56:40.665239 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Feb 13 19:56:40.666964 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Feb 13 19:56:40.667129 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Feb 13 19:56:40.669797 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Feb 13 19:56:40.670815 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Feb 13 19:56:40.670950 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Feb 13 19:56:40.672800 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Feb 13 19:56:40.676549 ignition[998]: INFO : Ignition 2.19.0 Feb 13 19:56:40.676549 ignition[998]: INFO : Stage: umount Feb 13 19:56:40.676549 ignition[998]: INFO : no configs at "/usr/lib/ignition/base.d" Feb 13 19:56:40.676549 ignition[998]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Feb 13 19:56:40.676549 ignition[998]: INFO : umount: umount passed Feb 13 19:56:40.676549 ignition[998]: INFO : Ignition finished successfully Feb 13 19:56:40.672893 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Feb 13 19:56:40.678152 systemd[1]: initrd-cleanup.service: Deactivated successfully. Feb 13 19:56:40.678246 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Feb 13 19:56:40.680125 systemd-networkd[767]: eth0: Gained IPv6LL Feb 13 19:56:40.680641 systemd[1]: ignition-mount.service: Deactivated successfully. Feb 13 19:56:40.681107 systemd[1]: Stopped ignition-mount.service - Ignition (mount). 
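Every entry in this teardown stretch follows the same shape (timestamp, source[pid], message), which makes the shutdown ordering easy to reconstruct mechanically. A small parser sketch matching the exact format of these lines:

```python
import re

# Parse journal lines of the form seen above, e.g.
#   "Feb 13 19:56:40.681107 systemd[1]: Stopped ignition-mount.service - ..."
LINE = re.compile(
    r"^(?P<ts>\w{3} +\d+ [\d:.]+) (?P<src>[\w.-]+)\[(?P<pid>\d+)\]: (?P<msg>.*)$"
)

sample = ("Feb 13 19:56:40.681107 systemd[1]: "
          "Stopped ignition-mount.service - Ignition (mount).")
m = LINE.match(sample)
if m:
    print(m.group("ts"), "|", f'{m.group("src")}[{m.group("pid")}]', "|", m.group("msg"))
```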
Feb 13 19:56:40.684193 systemd[1]: Stopped target network.target - Network. Feb 13 19:56:40.686036 systemd[1]: ignition-disks.service: Deactivated successfully. Feb 13 19:56:40.686092 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Feb 13 19:56:40.688183 systemd[1]: ignition-kargs.service: Deactivated successfully. Feb 13 19:56:40.688232 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Feb 13 19:56:40.690525 systemd[1]: ignition-setup.service: Deactivated successfully. Feb 13 19:56:40.690573 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Feb 13 19:56:40.691988 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Feb 13 19:56:40.692059 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Feb 13 19:56:40.693979 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Feb 13 19:56:40.695573 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Feb 13 19:56:40.698012 systemd[1]: sysroot-boot.mount: Deactivated successfully. Feb 13 19:56:40.703464 systemd[1]: systemd-resolved.service: Deactivated successfully. Feb 13 19:56:40.705055 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Feb 13 19:56:40.706069 systemd-networkd[767]: eth0: DHCPv6 lease lost Feb 13 19:56:40.707232 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Feb 13 19:56:40.707287 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Feb 13 19:56:40.708864 systemd[1]: systemd-networkd.service: Deactivated successfully. Feb 13 19:56:40.708967 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Feb 13 19:56:40.711290 systemd[1]: systemd-networkd.socket: Deactivated successfully. Feb 13 19:56:40.711345 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Feb 13 19:56:40.722187 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Feb 13 19:56:40.723067 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Feb 13 19:56:40.723127 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Feb 13 19:56:40.725116 systemd[1]: systemd-sysctl.service: Deactivated successfully. Feb 13 19:56:40.725159 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Feb 13 19:56:40.726949 systemd[1]: systemd-modules-load.service: Deactivated successfully. Feb 13 19:56:40.726991 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Feb 13 19:56:40.729219 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Feb 13 19:56:40.738230 systemd[1]: network-cleanup.service: Deactivated successfully. Feb 13 19:56:40.738347 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Feb 13 19:56:40.743119 systemd[1]: sysroot-boot.service: Deactivated successfully. Feb 13 19:56:40.743226 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Feb 13 19:56:40.744987 systemd[1]: initrd-setup-root.service: Deactivated successfully. Feb 13 19:56:40.745098 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Feb 13 19:56:40.749697 systemd[1]: systemd-udevd.service: Deactivated successfully. Feb 13 19:56:40.749840 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Feb 13 19:56:40.752061 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Feb 13 19:56:40.752101 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. 
Feb 13 19:56:40.753162 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Feb 13 19:56:40.753193 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Feb 13 19:56:40.755143 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Feb 13 19:56:40.755184 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Feb 13 19:56:40.757847 systemd[1]: dracut-cmdline.service: Deactivated successfully. Feb 13 19:56:40.757889 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Feb 13 19:56:40.760619 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Feb 13 19:56:40.760662 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Feb 13 19:56:40.776203 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Feb 13 19:56:40.777220 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Feb 13 19:56:40.777274 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Feb 13 19:56:40.779357 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Feb 13 19:56:40.779409 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Feb 13 19:56:40.781496 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Feb 13 19:56:40.781570 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Feb 13 19:56:40.783714 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Feb 13 19:56:40.786778 systemd[1]: Starting initrd-switch-root.service - Switch Root... Feb 13 19:56:40.795647 systemd[1]: Switching root. Feb 13 19:56:40.823085 systemd-journald[238]: Journal stopped Feb 13 19:56:41.533015 systemd-journald[238]: Received SIGTERM from PID 1 (systemd). Feb 13 19:56:41.533083 kernel: SELinux: policy capability network_peer_controls=1 Feb 13 19:56:41.533095 kernel: SELinux: policy capability open_perms=1 Feb 13 19:56:41.533105 kernel: SELinux: policy capability extended_socket_class=1 Feb 13 19:56:41.533115 kernel: SELinux: policy capability always_check_network=0 Feb 13 19:56:41.533125 kernel: SELinux: policy capability cgroup_seclabel=1 Feb 13 19:56:41.533138 kernel: SELinux: policy capability nnp_nosuid_transition=1 Feb 13 19:56:41.533148 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Feb 13 19:56:41.533158 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Feb 13 19:56:41.533169 kernel: audit: type=1403 audit(1739476600.958:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 Feb 13 19:56:41.533180 systemd[1]: Successfully loaded SELinux policy in 32.726ms. Feb 13 19:56:41.533200 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 9.180ms. Feb 13 19:56:41.533215 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Feb 13 19:56:41.533229 systemd[1]: Detected virtualization kvm. Feb 13 19:56:41.533240 systemd[1]: Detected architecture arm64. Feb 13 19:56:41.533252 systemd[1]: Detected first boot. Feb 13 19:56:41.533263 systemd[1]: Initializing machine ID from VM UUID. Feb 13 19:56:41.533274 zram_generator::config[1042]: No configuration found. Feb 13 19:56:41.533286 systemd[1]: Populated /etc with preset unit settings. 
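The feature string systemd 255 prints at startup encodes its compile-time options as +/- flags. A sketch that splits the exact string from the line above into enabled and disabled sets:

```python
# Compile-time feature flags copied verbatim from the systemd 255 line above.
features = ("+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT "
            "-GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN "
            "+IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT "
            "-QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK "
            "-XKBCOMMON +UTMP -SYSVINIT")
enabled = {f[1:] for f in features.split() if f.startswith("+")}
disabled = {f[1:] for f in features.split() if f.startswith("-")}
print(f"{len(enabled)} enabled, {len(disabled)} disabled")
print("SELinux built in:", "SELINUX" in enabled)  # matches the policy load above
```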
Feb 13 19:56:41.533297 systemd[1]: initrd-switch-root.service: Deactivated successfully. Feb 13 19:56:41.533307 systemd[1]: Stopped initrd-switch-root.service - Switch Root. Feb 13 19:56:41.533318 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Feb 13 19:56:41.533329 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Feb 13 19:56:41.533341 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Feb 13 19:56:41.533352 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Feb 13 19:56:41.533372 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Feb 13 19:56:41.533384 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Feb 13 19:56:41.533395 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Feb 13 19:56:41.533405 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Feb 13 19:56:41.533416 systemd[1]: Created slice user.slice - User and Session Slice. Feb 13 19:56:41.533426 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Feb 13 19:56:41.533437 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Feb 13 19:56:41.533450 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Feb 13 19:56:41.533460 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Feb 13 19:56:41.533473 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. Feb 13 19:56:41.533484 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Feb 13 19:56:41.533494 systemd[1]: Expecting device dev-ttyAMA0.device - /dev/ttyAMA0... Feb 13 19:56:41.533505 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Feb 13 19:56:41.533516 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. Feb 13 19:56:41.533530 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. Feb 13 19:56:41.533542 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. Feb 13 19:56:41.533552 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Feb 13 19:56:41.533563 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Feb 13 19:56:41.533574 systemd[1]: Reached target remote-fs.target - Remote File Systems. Feb 13 19:56:41.533585 systemd[1]: Reached target slices.target - Slice Units. Feb 13 19:56:41.533595 systemd[1]: Reached target swap.target - Swaps. Feb 13 19:56:41.533606 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Feb 13 19:56:41.533618 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Feb 13 19:56:41.533630 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Feb 13 19:56:41.533641 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Feb 13 19:56:41.533652 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Feb 13 19:56:41.533662 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Feb 13 19:56:41.533672 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... 
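Slice names such as system-serial\x2dgetty.slice show systemd's unit-name escaping: "-" separates path components, so a literal dash inside a component becomes \x2d. A simplified sketch of that escaping; the full rules are implemented by systemd-escape and also special-case things like a leading dot, which this sketch ignores:

```python
# Simplified version of systemd unit-name escaping as seen in the slice
# names above (serial-getty -> serial\x2dgetty). Not the complete algorithm.
def systemd_escape(component: str) -> str:
    out = []
    for ch in component:
        if ch.isalnum() or ch in ":_.":
            out.append(ch)
        else:
            out.append("\\x%02x" % ord(ch))
    return "".join(out)

print(systemd_escape("serial-getty"))  # serial\x2dgetty
print(systemd_escape("addon-config"))  # addon\x2dconfig
```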
Feb 13 19:56:41.533683 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Feb 13 19:56:41.533693 systemd[1]: Mounting media.mount - External Media Directory... Feb 13 19:56:41.533703 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Feb 13 19:56:41.533714 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Feb 13 19:56:41.533726 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... Feb 13 19:56:41.533737 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Feb 13 19:56:41.533749 systemd[1]: Reached target machines.target - Containers. Feb 13 19:56:41.533759 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Feb 13 19:56:41.533770 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Feb 13 19:56:41.533781 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Feb 13 19:56:41.533793 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Feb 13 19:56:41.533804 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Feb 13 19:56:41.533815 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Feb 13 19:56:41.533827 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Feb 13 19:56:41.533838 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Feb 13 19:56:41.533849 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Feb 13 19:56:41.533859 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Feb 13 19:56:41.533870 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Feb 13 19:56:41.533881 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. Feb 13 19:56:41.533891 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Feb 13 19:56:41.533901 systemd[1]: Stopped systemd-fsck-usr.service. Feb 13 19:56:41.533913 kernel: fuse: init (API version 7.39) Feb 13 19:56:41.533923 systemd[1]: Starting systemd-journald.service - Journal Service... Feb 13 19:56:41.533934 kernel: loop: module loaded Feb 13 19:56:41.533943 kernel: ACPI: bus type drm_connector registered Feb 13 19:56:41.533953 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Feb 13 19:56:41.533963 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Feb 13 19:56:41.533974 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Feb 13 19:56:41.533984 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Feb 13 19:56:41.533995 systemd[1]: verity-setup.service: Deactivated successfully. Feb 13 19:56:41.534006 systemd[1]: Stopped verity-setup.service. Feb 13 19:56:41.534026 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Feb 13 19:56:41.534038 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Feb 13 19:56:41.534049 systemd[1]: Mounted media.mount - External Media Directory. Feb 13 19:56:41.534059 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Feb 13 19:56:41.534089 systemd-journald[1113]: Collecting audit messages is disabled. 
Feb 13 19:56:41.534111 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Feb 13 19:56:41.534122 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Feb 13 19:56:41.534133 systemd-journald[1113]: Journal started Feb 13 19:56:41.534154 systemd-journald[1113]: Runtime Journal (/run/log/journal/a4422b575b9e4561b207b0b1fc458b2b) is 5.9M, max 47.3M, 41.4M free. Feb 13 19:56:41.326896 systemd[1]: Queued start job for default target multi-user.target. Feb 13 19:56:41.343949 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6. Feb 13 19:56:41.344316 systemd[1]: systemd-journald.service: Deactivated successfully. Feb 13 19:56:41.536063 systemd[1]: Started systemd-journald.service - Journal Service. Feb 13 19:56:41.536740 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Feb 13 19:56:41.538209 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Feb 13 19:56:41.539663 systemd[1]: modprobe@configfs.service: Deactivated successfully. Feb 13 19:56:41.539795 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Feb 13 19:56:41.541280 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Feb 13 19:56:41.541431 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Feb 13 19:56:41.542777 systemd[1]: modprobe@drm.service: Deactivated successfully. Feb 13 19:56:41.542911 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Feb 13 19:56:41.544250 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Feb 13 19:56:41.544390 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Feb 13 19:56:41.545982 systemd[1]: modprobe@fuse.service: Deactivated successfully. Feb 13 19:56:41.546146 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Feb 13 19:56:41.547469 systemd[1]: modprobe@loop.service: Deactivated successfully. Feb 13 19:56:41.547608 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Feb 13 19:56:41.548930 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Feb 13 19:56:41.550454 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Feb 13 19:56:41.551901 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Feb 13 19:56:41.564090 systemd[1]: Reached target network-pre.target - Preparation for Network. Feb 13 19:56:41.573175 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Feb 13 19:56:41.575179 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Feb 13 19:56:41.576257 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Feb 13 19:56:41.576298 systemd[1]: Reached target local-fs.target - Local File Systems. Feb 13 19:56:41.578302 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink). Feb 13 19:56:41.580499 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Feb 13 19:56:41.582533 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Feb 13 19:56:41.583617 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Feb 13 19:56:41.585126 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... 
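modprobe@configfs, modprobe@dm_mod, and the rest above are instances of the single modprobe@.service template, stamped out once per module name. A sketch of the equivalent manual invocations (needs a systemd host and root); that the template body runs roughly `modprobe -abq <module>` per instance is stated here as an assumption about upstream's unit, not something the log shows:

```python
import subprocess

# Each "modprobe@X.service" above is one instantiation of the same template.
for module in ["configfs", "dm_mod", "drm", "efi_pstore", "fuse", "loop"]:
    subprocess.run(["systemctl", "start", f"modprobe@{module}.service"],
                   check=False)
```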
Feb 13 19:56:41.586998 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Feb 13 19:56:41.588207 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Feb 13 19:56:41.592181 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Feb 13 19:56:41.593314 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Feb 13 19:56:41.594152 systemd-journald[1113]: Time spent on flushing to /var/log/journal/a4422b575b9e4561b207b0b1fc458b2b is 24.023ms for 851 entries. Feb 13 19:56:41.594152 systemd-journald[1113]: System Journal (/var/log/journal/a4422b575b9e4561b207b0b1fc458b2b) is 8.0M, max 195.6M, 187.6M free. Feb 13 19:56:41.626802 systemd-journald[1113]: Received client request to flush runtime journal. Feb 13 19:56:41.596563 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Feb 13 19:56:41.600269 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Feb 13 19:56:41.602921 systemd[1]: Starting systemd-sysusers.service - Create System Users... Feb 13 19:56:41.608067 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Feb 13 19:56:41.611419 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Feb 13 19:56:41.612755 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Feb 13 19:56:41.614270 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Feb 13 19:56:41.615882 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Feb 13 19:56:41.622487 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Feb 13 19:56:41.623886 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Feb 13 19:56:41.630206 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk... Feb 13 19:56:41.636447 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization... Feb 13 19:56:41.637500 kernel: loop0: detected capacity change from 0 to 114432 Feb 13 19:56:41.638815 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Feb 13 19:56:41.646709 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Feb 13 19:56:41.647404 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk. Feb 13 19:56:41.653436 systemd[1]: Finished systemd-sysusers.service - Create System Users. Feb 13 19:56:41.659068 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Feb 13 19:56:41.663346 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Feb 13 19:56:41.664900 udevadm[1165]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation.service, lvm2-activation-early.service not to pull it in. Feb 13 19:56:41.679542 systemd-tmpfiles[1173]: ACLs are not supported, ignoring. Feb 13 19:56:41.679562 systemd-tmpfiles[1173]: ACLs are not supported, ignoring. Feb 13 19:56:41.684424 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. 
Feb 13 19:56:41.688037 kernel: loop1: detected capacity change from 0 to 114328 Feb 13 19:56:41.724047 kernel: loop2: detected capacity change from 0 to 194096 Feb 13 19:56:41.755046 kernel: loop3: detected capacity change from 0 to 114432 Feb 13 19:56:41.759714 kernel: loop4: detected capacity change from 0 to 114328 Feb 13 19:56:41.764042 kernel: loop5: detected capacity change from 0 to 194096 Feb 13 19:56:41.767058 (sd-merge)[1180]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes'. Feb 13 19:56:41.767449 (sd-merge)[1180]: Merged extensions into '/usr'. Feb 13 19:56:41.772767 systemd[1]: Reloading requested from client PID 1153 ('systemd-sysext') (unit systemd-sysext.service)... Feb 13 19:56:41.772781 systemd[1]: Reloading... Feb 13 19:56:41.811048 zram_generator::config[1206]: No configuration found. Feb 13 19:56:41.870426 ldconfig[1148]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Feb 13 19:56:41.938255 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Feb 13 19:56:41.982865 systemd[1]: Reloading finished in 209 ms. Feb 13 19:56:42.010696 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Feb 13 19:56:42.012291 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Feb 13 19:56:42.028195 systemd[1]: Starting ensure-sysext.service... Feb 13 19:56:42.030153 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Feb 13 19:56:42.037382 systemd[1]: Reloading requested from client PID 1240 ('systemctl') (unit ensure-sysext.service)... Feb 13 19:56:42.037400 systemd[1]: Reloading... Feb 13 19:56:42.047660 systemd-tmpfiles[1241]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Feb 13 19:56:42.047919 systemd-tmpfiles[1241]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Feb 13 19:56:42.048569 systemd-tmpfiles[1241]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Feb 13 19:56:42.048783 systemd-tmpfiles[1241]: ACLs are not supported, ignoring. Feb 13 19:56:42.048833 systemd-tmpfiles[1241]: ACLs are not supported, ignoring. Feb 13 19:56:42.050921 systemd-tmpfiles[1241]: Detected autofs mount point /boot during canonicalization of boot. Feb 13 19:56:42.050936 systemd-tmpfiles[1241]: Skipping /boot Feb 13 19:56:42.057915 systemd-tmpfiles[1241]: Detected autofs mount point /boot during canonicalization of boot. Feb 13 19:56:42.057932 systemd-tmpfiles[1241]: Skipping /boot Feb 13 19:56:42.092049 zram_generator::config[1268]: No configuration found. Feb 13 19:56:42.180394 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Feb 13 19:56:42.224577 systemd[1]: Reloading finished in 186 ms. Feb 13 19:56:42.241158 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Feb 13 19:56:42.255428 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Feb 13 19:56:42.262940 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... 
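The (sd-merge) lines are systemd-sysext combining the containerd-flatcar, docker-flatcar, and kubernetes extension images into an overlay on /usr; the loop0..loop5 capacity changes just above are those images being attached. Inspecting or re-running the merge goes through the systemd-sysext tool; a minimal sketch, assuming a running systemd host:

```python
import subprocess

# Show the currently merged extensions (hierarchy, extensions, merge time).
print(subprocess.run(["systemd-sysext", "status"],
                     capture_output=True, text=True).stdout)
# After adding or replacing a *.raw under /etc/extensions, re-merge with:
#   subprocess.run(["systemd-sysext", "refresh"], check=True)
```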
Feb 13 19:56:42.265500 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Feb 13 19:56:42.267731 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Feb 13 19:56:42.271264 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Feb 13 19:56:42.275309 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Feb 13 19:56:42.279451 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Feb 13 19:56:42.284539 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Feb 13 19:56:42.285795 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Feb 13 19:56:42.289069 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Feb 13 19:56:42.295853 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Feb 13 19:56:42.297427 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Feb 13 19:56:42.299125 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Feb 13 19:56:42.299275 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Feb 13 19:56:42.301052 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Feb 13 19:56:42.301280 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Feb 13 19:56:42.302931 systemd[1]: modprobe@loop.service: Deactivated successfully. Feb 13 19:56:42.304910 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Feb 13 19:56:42.313588 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Feb 13 19:56:42.314858 systemd-udevd[1310]: Using default interface naming scheme 'v255'. Feb 13 19:56:42.316610 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Feb 13 19:56:42.319169 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Feb 13 19:56:42.325285 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Feb 13 19:56:42.329120 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Feb 13 19:56:42.331374 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Feb 13 19:56:42.332457 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Feb 13 19:56:42.333684 systemd[1]: Starting systemd-update-done.service - Update is Completed... Feb 13 19:56:42.338592 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Feb 13 19:56:42.341612 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Feb 13 19:56:42.343105 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Feb 13 19:56:42.344756 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Feb 13 19:56:42.344881 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Feb 13 19:56:42.348580 systemd[1]: modprobe@loop.service: Deactivated successfully. Feb 13 19:56:42.348742 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Feb 13 19:56:42.350494 systemd[1]: Finished systemd-update-done.service - Update is Completed. 
Feb 13 19:56:42.354580 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Feb 13 19:56:42.359156 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Feb 13 19:56:42.363858 augenrules[1343]: No rules Feb 13 19:56:42.370283 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Feb 13 19:56:42.374638 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Feb 13 19:56:42.379534 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Feb 13 19:56:42.382416 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Feb 13 19:56:42.383885 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Feb 13 19:56:42.386708 systemd[1]: Starting systemd-networkd.service - Network Configuration... Feb 13 19:56:42.390989 systemd[1]: Started systemd-userdbd.service - User Database Manager. Feb 13 19:56:42.392765 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Feb 13 19:56:42.396484 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Feb 13 19:56:42.398177 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Feb 13 19:56:42.398304 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Feb 13 19:56:42.399958 systemd[1]: modprobe@drm.service: Deactivated successfully. Feb 13 19:56:42.400098 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Feb 13 19:56:42.401495 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Feb 13 19:56:42.401610 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Feb 13 19:56:42.403616 systemd[1]: modprobe@loop.service: Deactivated successfully. Feb 13 19:56:42.403733 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Feb 13 19:56:42.411213 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 40 scanned by (udev-worker) (1358) Feb 13 19:56:42.409494 systemd[1]: Finished ensure-sysext.service. Feb 13 19:56:42.422083 systemd[1]: Condition check resulted in dev-ttyAMA0.device - /dev/ttyAMA0 being skipped. Feb 13 19:56:42.428305 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Feb 13 19:56:42.428374 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Feb 13 19:56:42.441181 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization... Feb 13 19:56:42.442555 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Feb 13 19:56:42.463099 systemd-resolved[1308]: Positive Trust Anchors: Feb 13 19:56:42.463117 systemd-resolved[1308]: . 
IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Feb 13 19:56:42.463149 systemd-resolved[1308]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Feb 13 19:56:42.470691 systemd-resolved[1308]: Defaulting to hostname 'linux'. Feb 13 19:56:42.475744 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Feb 13 19:56:42.477137 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Feb 13 19:56:42.482473 systemd-networkd[1374]: lo: Link UP Feb 13 19:56:42.482483 systemd-networkd[1374]: lo: Gained carrier Feb 13 19:56:42.483230 systemd-networkd[1374]: Enumeration completed Feb 13 19:56:42.483320 systemd[1]: Started systemd-networkd.service - Network Configuration. Feb 13 19:56:42.484492 systemd[1]: Reached target network.target - Network. Feb 13 19:56:42.486060 systemd-networkd[1374]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Feb 13 19:56:42.486070 systemd-networkd[1374]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Feb 13 19:56:42.486955 systemd-networkd[1374]: eth0: Link UP Feb 13 19:56:42.486970 systemd-networkd[1374]: eth0: Gained carrier Feb 13 19:56:42.486984 systemd-networkd[1374]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Feb 13 19:56:42.493195 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Feb 13 19:56:42.493555 systemd-networkd[1374]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Feb 13 19:56:42.495786 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Feb 13 19:56:42.497119 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization. Feb 13 19:56:42.499060 systemd[1]: Reached target time-set.target - System Time Set. Feb 13 19:56:42.501070 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Feb 13 19:56:42.502099 systemd-networkd[1374]: eth0: DHCPv4 address 10.0.0.134/16, gateway 10.0.0.1 acquired from 10.0.0.1 Feb 13 19:56:42.504044 systemd-timesyncd[1385]: Network configuration changed, trying to establish connection. Feb 13 19:56:42.504977 systemd-timesyncd[1385]: Contacted time server 10.0.0.1:123 (10.0.0.1). Feb 13 19:56:42.505043 systemd-timesyncd[1385]: Initial clock synchronization to Thu 2025-02-13 19:56:42.163019 UTC. Feb 13 19:56:42.533274 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Feb 13 19:56:42.534748 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Feb 13 19:56:42.545271 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization. Feb 13 19:56:42.548247 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes... 
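The positive trust anchor resolved loads is the root zone's DS record for the 2017 root key signing key. A sketch that unpacks the record's fields exactly as logged above; the field meanings follow DNSSEC conventions (algorithm 8 is RSASHA256, digest type 2 is SHA-256, key tag 20326 is KSK-2017):

```python
# The DS record systemd-resolved logs above, split into its fields.
ds = (". IN DS 20326 8 2 "
      "e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d")
owner, _cls, _rtype, key_tag, alg, digest_type, digest = ds.split()
print(f"owner={owner} key_tag={key_tag} (KSK-2017) "
      f"alg={alg} (RSASHA256) digest_type={digest_type} (SHA-256)")
print("digest:", digest)
```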
Feb 13 19:56:42.572058 lvm[1403]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Feb 13 19:56:42.573750 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Feb 13 19:56:42.608522 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes. Feb 13 19:56:42.609986 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Feb 13 19:56:42.611115 systemd[1]: Reached target sysinit.target - System Initialization. Feb 13 19:56:42.612256 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Feb 13 19:56:42.613474 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Feb 13 19:56:42.614865 systemd[1]: Started logrotate.timer - Daily rotation of log files. Feb 13 19:56:42.616051 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Feb 13 19:56:42.617253 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Feb 13 19:56:42.618550 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Feb 13 19:56:42.618585 systemd[1]: Reached target paths.target - Path Units. Feb 13 19:56:42.619471 systemd[1]: Reached target timers.target - Timer Units. Feb 13 19:56:42.621111 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Feb 13 19:56:42.623443 systemd[1]: Starting docker.socket - Docker Socket for the API... Feb 13 19:56:42.631906 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Feb 13 19:56:42.634152 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes... Feb 13 19:56:42.635699 systemd[1]: Listening on docker.socket - Docker Socket for the API. Feb 13 19:56:42.636878 systemd[1]: Reached target sockets.target - Socket Units. Feb 13 19:56:42.637825 systemd[1]: Reached target basic.target - Basic System. Feb 13 19:56:42.638786 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Feb 13 19:56:42.638817 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Feb 13 19:56:42.639712 systemd[1]: Starting containerd.service - containerd container runtime... Feb 13 19:56:42.644041 lvm[1410]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Feb 13 19:56:42.641669 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Feb 13 19:56:42.644652 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Feb 13 19:56:42.650181 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Feb 13 19:56:42.651248 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Feb 13 19:56:42.652232 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Feb 13 19:56:42.654184 jq[1413]: false Feb 13 19:56:42.656580 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Feb 13 19:56:42.659316 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Feb 13 19:56:42.662298 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Feb 13 19:56:42.666869 systemd[1]: Starting systemd-logind.service - User Login Management... 
Feb 13 19:56:42.670681 extend-filesystems[1414]: Found loop3 Feb 13 19:56:42.670681 extend-filesystems[1414]: Found loop4 Feb 13 19:56:42.670681 extend-filesystems[1414]: Found loop5 Feb 13 19:56:42.670681 extend-filesystems[1414]: Found vda Feb 13 19:56:42.670681 extend-filesystems[1414]: Found vda1 Feb 13 19:56:42.670681 extend-filesystems[1414]: Found vda2 Feb 13 19:56:42.670681 extend-filesystems[1414]: Found vda3 Feb 13 19:56:42.670681 extend-filesystems[1414]: Found usr Feb 13 19:56:42.670681 extend-filesystems[1414]: Found vda4 Feb 13 19:56:42.670681 extend-filesystems[1414]: Found vda6 Feb 13 19:56:42.670681 extend-filesystems[1414]: Found vda7 Feb 13 19:56:42.670681 extend-filesystems[1414]: Found vda9 Feb 13 19:56:42.670681 extend-filesystems[1414]: Checking size of /dev/vda9 Feb 13 19:56:42.713183 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 40 scanned by (udev-worker) (1356) Feb 13 19:56:42.713212 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 1864699 blocks Feb 13 19:56:42.674038 dbus-daemon[1412]: [system] SELinux support is enabled Feb 13 19:56:42.713561 extend-filesystems[1414]: Resized partition /dev/vda9 Feb 13 19:56:42.671855 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Feb 13 19:56:42.715980 extend-filesystems[1434]: resize2fs 1.47.1 (20-May-2024) Feb 13 19:56:42.672261 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Feb 13 19:56:42.718078 jq[1432]: true Feb 13 19:56:42.674161 systemd[1]: Starting update-engine.service - Update Engine... Feb 13 19:56:42.678772 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Feb 13 19:56:42.681157 systemd[1]: Started dbus.service - D-Bus System Message Bus. Feb 13 19:56:42.684793 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes. Feb 13 19:56:42.718573 jq[1438]: true Feb 13 19:56:42.689365 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Feb 13 19:56:42.689508 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Feb 13 19:56:42.689744 systemd[1]: motdgen.service: Deactivated successfully. Feb 13 19:56:42.689870 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Feb 13 19:56:42.694987 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Feb 13 19:56:42.695152 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Feb 13 19:56:42.724984 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Feb 13 19:56:42.725045 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Feb 13 19:56:42.726089 (ntainerd)[1446]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Feb 13 19:56:42.726878 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Feb 13 19:56:42.726931 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. 
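The EXT4-fs line above records an online grow of /dev/vda9 from 553472 to 1864699 blocks (the resize2fs report follows below, noting 4k blocks). The arithmetic, as a quick sketch:

```python
# Convert the block counts from the kernel line above into sizes.
BLOCK = 4096  # "(4k)" per the resize2fs output below
before, after = 553_472, 1_864_699
print(f"before: {before * BLOCK / 2**30:.2f} GiB")  # ~2.11 GiB
print(f"after:  {after * BLOCK / 2**30:.2f} GiB")   # ~7.11 GiB
# The grow itself is one call against the mounted filesystem, roughly:
#   subprocess.run(["resize2fs", "/dev/vda9"], check=True)
```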
Feb 13 19:56:42.736108 tar[1437]: linux-arm64/helm Feb 13 19:56:42.747028 systemd-logind[1425]: Watching system buttons on /dev/input/event0 (Power Button) Feb 13 19:56:42.748263 systemd-logind[1425]: New seat seat0. Feb 13 19:56:42.755869 systemd[1]: Started systemd-logind.service - User Login Management. Feb 13 19:56:42.763486 kernel: EXT4-fs (vda9): resized filesystem to 1864699 Feb 13 19:56:42.764422 update_engine[1428]: I20250213 19:56:42.764199 1428 main.cc:92] Flatcar Update Engine starting Feb 13 19:56:42.773128 update_engine[1428]: I20250213 19:56:42.767802 1428 update_check_scheduler.cc:74] Next update check in 9m35s Feb 13 19:56:42.767243 systemd[1]: Started update-engine.service - Update Engine. Feb 13 19:56:42.778302 extend-filesystems[1434]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required Feb 13 19:56:42.778302 extend-filesystems[1434]: old_desc_blocks = 1, new_desc_blocks = 1 Feb 13 19:56:42.778302 extend-filesystems[1434]: The filesystem on /dev/vda9 is now 1864699 (4k) blocks long. Feb 13 19:56:42.790103 extend-filesystems[1414]: Resized filesystem in /dev/vda9 Feb 13 19:56:42.780753 systemd[1]: Started locksmithd.service - Cluster reboot manager. Feb 13 19:56:42.785816 systemd[1]: extend-filesystems.service: Deactivated successfully. Feb 13 19:56:42.785959 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Feb 13 19:56:42.807963 bash[1466]: Updated "/home/core/.ssh/authorized_keys" Feb 13 19:56:42.808570 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Feb 13 19:56:42.810437 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met. Feb 13 19:56:42.828243 locksmithd[1465]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Feb 13 19:56:42.929378 containerd[1446]: time="2025-02-13T19:56:42.929274160Z" level=info msg="starting containerd" revision=174e0d1785eeda18dc2beba45e1d5a188771636b version=v1.7.21 Feb 13 19:56:42.954860 containerd[1446]: time="2025-02-13T19:56:42.954816920Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Feb 13 19:56:42.956518 containerd[1446]: time="2025-02-13T19:56:42.956229360Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.74-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Feb 13 19:56:42.956518 containerd[1446]: time="2025-02-13T19:56:42.956272240Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Feb 13 19:56:42.956518 containerd[1446]: time="2025-02-13T19:56:42.956288880Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Feb 13 19:56:42.956518 containerd[1446]: time="2025-02-13T19:56:42.956446640Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1 Feb 13 19:56:42.956518 containerd[1446]: time="2025-02-13T19:56:42.956464400Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1 Feb 13 19:56:42.956518 containerd[1446]: time="2025-02-13T19:56:42.956517120Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." 
error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1 Feb 13 19:56:42.956518 containerd[1446]: time="2025-02-13T19:56:42.956530080Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Feb 13 19:56:42.956703 containerd[1446]: time="2025-02-13T19:56:42.956673000Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Feb 13 19:56:42.956703 containerd[1446]: time="2025-02-13T19:56:42.956688440Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Feb 13 19:56:42.956703 containerd[1446]: time="2025-02-13T19:56:42.956700200Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1 Feb 13 19:56:42.956761 containerd[1446]: time="2025-02-13T19:56:42.956714000Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Feb 13 19:56:42.956823 containerd[1446]: time="2025-02-13T19:56:42.956782640Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Feb 13 19:56:42.957026 containerd[1446]: time="2025-02-13T19:56:42.956980880Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Feb 13 19:56:42.957121 containerd[1446]: time="2025-02-13T19:56:42.957095240Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Feb 13 19:56:42.957121 containerd[1446]: time="2025-02-13T19:56:42.957114720Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Feb 13 19:56:42.957206 containerd[1446]: time="2025-02-13T19:56:42.957191400Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 Feb 13 19:56:42.957246 containerd[1446]: time="2025-02-13T19:56:42.957234040Z" level=info msg="metadata content store policy set" policy=shared Feb 13 19:56:42.960889 containerd[1446]: time="2025-02-13T19:56:42.960767320Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Feb 13 19:56:42.960889 containerd[1446]: time="2025-02-13T19:56:42.960824440Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Feb 13 19:56:42.960889 containerd[1446]: time="2025-02-13T19:56:42.960843480Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1 Feb 13 19:56:42.960889 containerd[1446]: time="2025-02-13T19:56:42.960858960Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1 Feb 13 19:56:42.960889 containerd[1446]: time="2025-02-13T19:56:42.960871720Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Feb 13 19:56:42.961042 containerd[1446]: time="2025-02-13T19:56:42.960993720Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." 
type=io.containerd.monitor.v1 Feb 13 19:56:42.961390 containerd[1446]: time="2025-02-13T19:56:42.961366840Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Feb 13 19:56:42.961542 containerd[1446]: time="2025-02-13T19:56:42.961518680Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2 Feb 13 19:56:42.961568 containerd[1446]: time="2025-02-13T19:56:42.961542760Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1 Feb 13 19:56:42.961568 containerd[1446]: time="2025-02-13T19:56:42.961559000Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1 Feb 13 19:56:42.961633 containerd[1446]: time="2025-02-13T19:56:42.961619120Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Feb 13 19:56:42.961653 containerd[1446]: time="2025-02-13T19:56:42.961639000Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Feb 13 19:56:42.961674 containerd[1446]: time="2025-02-13T19:56:42.961651440Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Feb 13 19:56:42.961674 containerd[1446]: time="2025-02-13T19:56:42.961665280Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Feb 13 19:56:42.961706 containerd[1446]: time="2025-02-13T19:56:42.961678640Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Feb 13 19:56:42.961706 containerd[1446]: time="2025-02-13T19:56:42.961691040Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Feb 13 19:56:42.961706 containerd[1446]: time="2025-02-13T19:56:42.961703000Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Feb 13 19:56:42.961758 containerd[1446]: time="2025-02-13T19:56:42.961713760Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Feb 13 19:56:42.961822 containerd[1446]: time="2025-02-13T19:56:42.961780400Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Feb 13 19:56:42.961841 containerd[1446]: time="2025-02-13T19:56:42.961829840Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Feb 13 19:56:42.961862 containerd[1446]: time="2025-02-13T19:56:42.961842480Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Feb 13 19:56:42.961862 containerd[1446]: time="2025-02-13T19:56:42.961856360Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Feb 13 19:56:42.961899 containerd[1446]: time="2025-02-13T19:56:42.961868360Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Feb 13 19:56:42.961899 containerd[1446]: time="2025-02-13T19:56:42.961881920Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Feb 13 19:56:42.961944 containerd[1446]: time="2025-02-13T19:56:42.961893920Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." 
type=io.containerd.grpc.v1 Feb 13 19:56:42.961963 containerd[1446]: time="2025-02-13T19:56:42.961953040Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Feb 13 19:56:42.961981 containerd[1446]: time="2025-02-13T19:56:42.961967200Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1 Feb 13 19:56:42.962002 containerd[1446]: time="2025-02-13T19:56:42.961980920Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1 Feb 13 19:56:42.962002 containerd[1446]: time="2025-02-13T19:56:42.961996760Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Feb 13 19:56:42.962053 containerd[1446]: time="2025-02-13T19:56:42.962009400Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1 Feb 13 19:56:42.962053 containerd[1446]: time="2025-02-13T19:56:42.962038280Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Feb 13 19:56:42.962086 containerd[1446]: time="2025-02-13T19:56:42.962059920Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1 Feb 13 19:56:42.962137 containerd[1446]: time="2025-02-13T19:56:42.962123120Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1 Feb 13 19:56:42.962156 containerd[1446]: time="2025-02-13T19:56:42.962142760Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Feb 13 19:56:42.962180 containerd[1446]: time="2025-02-13T19:56:42.962154880Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Feb 13 19:56:42.962311 containerd[1446]: time="2025-02-13T19:56:42.962296120Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Feb 13 19:56:42.962343 containerd[1446]: time="2025-02-13T19:56:42.962319800Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1 Feb 13 19:56:42.962343 containerd[1446]: time="2025-02-13T19:56:42.962330320Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Feb 13 19:56:42.962387 containerd[1446]: time="2025-02-13T19:56:42.962341520Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1 Feb 13 19:56:42.962387 containerd[1446]: time="2025-02-13T19:56:42.962358800Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Feb 13 19:56:42.962387 containerd[1446]: time="2025-02-13T19:56:42.962373200Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1 Feb 13 19:56:42.962387 containerd[1446]: time="2025-02-13T19:56:42.962382800Z" level=info msg="NRI interface is disabled by configuration." Feb 13 19:56:42.962460 containerd[1446]: time="2025-02-13T19:56:42.962392760Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." 
type=io.containerd.grpc.v1 Feb 13 19:56:42.962858 containerd[1446]: time="2025-02-13T19:56:42.962786040Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Feb 13 19:56:42.962858 containerd[1446]: time="2025-02-13T19:56:42.962849120Z" level=info msg="Connect containerd service" Feb 13 19:56:42.962997 containerd[1446]: time="2025-02-13T19:56:42.962877360Z" level=info msg="using legacy CRI server" Feb 13 19:56:42.962997 containerd[1446]: time="2025-02-13T19:56:42.962884040Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Feb 13 19:56:42.963803 containerd[1446]: time="2025-02-13T19:56:42.963050080Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Feb 13 19:56:42.966817 containerd[1446]: time="2025-02-13T19:56:42.964686840Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Feb 13 19:56:42.966817 
containerd[1446]: time="2025-02-13T19:56:42.964864480Z" level=info msg="Start subscribing containerd event" Feb 13 19:56:42.966817 containerd[1446]: time="2025-02-13T19:56:42.964901040Z" level=info msg="Start recovering state" Feb 13 19:56:42.966817 containerd[1446]: time="2025-02-13T19:56:42.964975360Z" level=info msg="Start event monitor" Feb 13 19:56:42.966817 containerd[1446]: time="2025-02-13T19:56:42.964986560Z" level=info msg="Start snapshots syncer" Feb 13 19:56:42.966817 containerd[1446]: time="2025-02-13T19:56:42.964998880Z" level=info msg="Start cni network conf syncer for default" Feb 13 19:56:42.966817 containerd[1446]: time="2025-02-13T19:56:42.965007360Z" level=info msg="Start streaming server" Feb 13 19:56:42.966817 containerd[1446]: time="2025-02-13T19:56:42.965567040Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Feb 13 19:56:42.966817 containerd[1446]: time="2025-02-13T19:56:42.965613240Z" level=info msg=serving... address=/run/containerd/containerd.sock Feb 13 19:56:42.966817 containerd[1446]: time="2025-02-13T19:56:42.965662000Z" level=info msg="containerd successfully booted in 0.038217s" Feb 13 19:56:42.965787 systemd[1]: Started containerd.service - containerd container runtime. Feb 13 19:56:43.100301 tar[1437]: linux-arm64/LICENSE Feb 13 19:56:43.100391 tar[1437]: linux-arm64/README.md Feb 13 19:56:43.117095 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Feb 13 19:56:43.154231 sshd_keygen[1435]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Feb 13 19:56:43.172097 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Feb 13 19:56:43.184521 systemd[1]: Starting issuegen.service - Generate /run/issue... Feb 13 19:56:43.189251 systemd[1]: issuegen.service: Deactivated successfully. Feb 13 19:56:43.191067 systemd[1]: Finished issuegen.service - Generate /run/issue. Feb 13 19:56:43.193437 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Feb 13 19:56:43.203946 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Feb 13 19:56:43.206464 systemd[1]: Started getty@tty1.service - Getty on tty1. Feb 13 19:56:43.208328 systemd[1]: Started serial-getty@ttyAMA0.service - Serial Getty on ttyAMA0. Feb 13 19:56:43.209686 systemd[1]: Reached target getty.target - Login Prompts. Feb 13 19:56:44.135218 systemd-networkd[1374]: eth0: Gained IPv6LL Feb 13 19:56:44.137590 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Feb 13 19:56:44.139361 systemd[1]: Reached target network-online.target - Network is Online. Feb 13 19:56:44.153257 systemd[1]: Starting coreos-metadata.service - QEMU metadata agent... Feb 13 19:56:44.155491 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Feb 13 19:56:44.157451 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Feb 13 19:56:44.170857 systemd[1]: coreos-metadata.service: Deactivated successfully. Feb 13 19:56:44.171098 systemd[1]: Finished coreos-metadata.service - QEMU metadata agent. Feb 13 19:56:44.172514 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Feb 13 19:56:44.177368 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Feb 13 19:56:44.622680 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Feb 13 19:56:44.624327 systemd[1]: Reached target multi-user.target - Multi-User System. 
Feb 13 19:56:44.626179 systemd[1]: Startup finished in 568ms (kernel) + 4.254s (initrd) + 3.700s (userspace) = 8.523s. Feb 13 19:56:44.626216 (kubelet)[1524]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Feb 13 19:56:45.063078 kubelet[1524]: E0213 19:56:45.062949 1524 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Feb 13 19:56:45.065527 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Feb 13 19:56:45.065677 systemd[1]: kubelet.service: Failed with result 'exit-code'. Feb 13 19:56:49.632635 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Feb 13 19:56:49.633755 systemd[1]: Started sshd@0-10.0.0.134:22-10.0.0.1:60562.service - OpenSSH per-connection server daemon (10.0.0.1:60562). Feb 13 19:56:49.687702 sshd[1538]: Accepted publickey for core from 10.0.0.1 port 60562 ssh2: RSA SHA256:JGaeIbjf5IUSNUg1jnjkSVnSyX1OvNbTOClTMYH5eIk Feb 13 19:56:49.689405 sshd[1538]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 19:56:49.700640 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Feb 13 19:56:49.710291 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Feb 13 19:56:49.712077 systemd-logind[1425]: New session 1 of user core. Feb 13 19:56:49.718912 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Feb 13 19:56:49.720988 systemd[1]: Starting user@500.service - User Manager for UID 500... Feb 13 19:56:49.727091 (systemd)[1542]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Feb 13 19:56:49.800097 systemd[1542]: Queued start job for default target default.target. Feb 13 19:56:49.808863 systemd[1542]: Created slice app.slice - User Application Slice. Feb 13 19:56:49.808905 systemd[1542]: Reached target paths.target - Paths. Feb 13 19:56:49.808917 systemd[1542]: Reached target timers.target - Timers. Feb 13 19:56:49.810096 systemd[1542]: Starting dbus.socket - D-Bus User Message Bus Socket... Feb 13 19:56:49.818712 systemd[1542]: Listening on dbus.socket - D-Bus User Message Bus Socket. Feb 13 19:56:49.818770 systemd[1542]: Reached target sockets.target - Sockets. Feb 13 19:56:49.818782 systemd[1542]: Reached target basic.target - Basic System. Feb 13 19:56:49.818815 systemd[1542]: Reached target default.target - Main User Target. Feb 13 19:56:49.818839 systemd[1542]: Startup finished in 86ms. Feb 13 19:56:49.819061 systemd[1]: Started user@500.service - User Manager for UID 500. Feb 13 19:56:49.820307 systemd[1]: Started session-1.scope - Session 1 of User core. Feb 13 19:56:49.886615 systemd[1]: Started sshd@1-10.0.0.134:22-10.0.0.1:60576.service - OpenSSH per-connection server daemon (10.0.0.1:60576). Feb 13 19:56:49.925462 sshd[1553]: Accepted publickey for core from 10.0.0.1 port 60576 ssh2: RSA SHA256:JGaeIbjf5IUSNUg1jnjkSVnSyX1OvNbTOClTMYH5eIk Feb 13 19:56:49.926745 sshd[1553]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 19:56:49.930387 systemd-logind[1425]: New session 2 of user core. Feb 13 19:56:49.940192 systemd[1]: Started session-2.scope - Session 2 of User core. 
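[Editor's note] The "Accepted publickey for core ... SHA256:JGaeIbjf..." entries below can be matched to a key in authorized_keys by recomputing the fingerprint: OpenSSH hashes the raw base64-decoded key blob with SHA-256 and encodes the digest as unpadded base64. A small sketch (the authorized_keys path is an assumption for the "core" user seen in the log):

    import base64, hashlib

    def ssh_sha256_fingerprint(pubkey_line: str) -> str:
        # pubkey_line is one authorized_keys / *.pub line, e.g.
        # "ssh-rsa AAAAB3Nza... comment". The SHA256 fingerprint sshd logs
        # is the unpadded base64 of sha256(decoded key blob).
        blob = base64.b64decode(pubkey_line.split()[1])
        digest = hashlib.sha256(blob).digest()
        return "SHA256:" + base64.b64encode(digest).decode().rstrip("=")

    with open("/home/core/.ssh/authorized_keys") as f:   # path assumed
        for line in f:
            if line.strip() and not line.startswith("#"):
                print(ssh_sha256_fingerprint(line))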
Feb 13 19:56:49.990304 sshd[1553]: pam_unix(sshd:session): session closed for user core Feb 13 19:56:50.000426 systemd[1]: sshd@1-10.0.0.134:22-10.0.0.1:60576.service: Deactivated successfully. Feb 13 19:56:50.001806 systemd[1]: session-2.scope: Deactivated successfully. Feb 13 19:56:50.002978 systemd-logind[1425]: Session 2 logged out. Waiting for processes to exit. Feb 13 19:56:50.005063 systemd[1]: Started sshd@2-10.0.0.134:22-10.0.0.1:60578.service - OpenSSH per-connection server daemon (10.0.0.1:60578). Feb 13 19:56:50.005845 systemd-logind[1425]: Removed session 2. Feb 13 19:56:50.042538 sshd[1560]: Accepted publickey for core from 10.0.0.1 port 60578 ssh2: RSA SHA256:JGaeIbjf5IUSNUg1jnjkSVnSyX1OvNbTOClTMYH5eIk Feb 13 19:56:50.043724 sshd[1560]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 19:56:50.047144 systemd-logind[1425]: New session 3 of user core. Feb 13 19:56:50.054504 systemd[1]: Started session-3.scope - Session 3 of User core. Feb 13 19:56:50.101788 sshd[1560]: pam_unix(sshd:session): session closed for user core Feb 13 19:56:50.110324 systemd[1]: sshd@2-10.0.0.134:22-10.0.0.1:60578.service: Deactivated successfully. Feb 13 19:56:50.112303 systemd[1]: session-3.scope: Deactivated successfully. Feb 13 19:56:50.113872 systemd-logind[1425]: Session 3 logged out. Waiting for processes to exit. Feb 13 19:56:50.115711 systemd[1]: Started sshd@3-10.0.0.134:22-10.0.0.1:60590.service - OpenSSH per-connection server daemon (10.0.0.1:60590). Feb 13 19:56:50.116141 systemd-logind[1425]: Removed session 3. Feb 13 19:56:50.154110 sshd[1567]: Accepted publickey for core from 10.0.0.1 port 60590 ssh2: RSA SHA256:JGaeIbjf5IUSNUg1jnjkSVnSyX1OvNbTOClTMYH5eIk Feb 13 19:56:50.155478 sshd[1567]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 19:56:50.159073 systemd-logind[1425]: New session 4 of user core. Feb 13 19:56:50.174140 systemd[1]: Started session-4.scope - Session 4 of User core. Feb 13 19:56:50.224060 sshd[1567]: pam_unix(sshd:session): session closed for user core Feb 13 19:56:50.233284 systemd[1]: sshd@3-10.0.0.134:22-10.0.0.1:60590.service: Deactivated successfully. Feb 13 19:56:50.234614 systemd[1]: session-4.scope: Deactivated successfully. Feb 13 19:56:50.235737 systemd-logind[1425]: Session 4 logged out. Waiting for processes to exit. Feb 13 19:56:50.236837 systemd[1]: Started sshd@4-10.0.0.134:22-10.0.0.1:60594.service - OpenSSH per-connection server daemon (10.0.0.1:60594). Feb 13 19:56:50.237532 systemd-logind[1425]: Removed session 4. Feb 13 19:56:50.275292 sshd[1574]: Accepted publickey for core from 10.0.0.1 port 60594 ssh2: RSA SHA256:JGaeIbjf5IUSNUg1jnjkSVnSyX1OvNbTOClTMYH5eIk Feb 13 19:56:50.276475 sshd[1574]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 19:56:50.279857 systemd-logind[1425]: New session 5 of user core. Feb 13 19:56:50.292145 systemd[1]: Started session-5.scope - Session 5 of User core. Feb 13 19:56:50.349806 sudo[1577]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Feb 13 19:56:50.350089 sudo[1577]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Feb 13 19:56:50.661241 systemd[1]: Starting docker.service - Docker Application Container Engine... 
Feb 13 19:56:50.661472 (dockerd)[1595]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Feb 13 19:56:50.915071 dockerd[1595]: time="2025-02-13T19:56:50.914921785Z" level=info msg="Starting up" Feb 13 19:56:51.050282 dockerd[1595]: time="2025-02-13T19:56:51.050236111Z" level=info msg="Loading containers: start." Feb 13 19:56:51.138033 kernel: Initializing XFRM netlink socket Feb 13 19:56:51.194722 systemd-networkd[1374]: docker0: Link UP Feb 13 19:56:51.215264 dockerd[1595]: time="2025-02-13T19:56:51.215177657Z" level=info msg="Loading containers: done." Feb 13 19:56:51.227736 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck162617405-merged.mount: Deactivated successfully. Feb 13 19:56:51.227922 dockerd[1595]: time="2025-02-13T19:56:51.227863807Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Feb 13 19:56:51.227999 dockerd[1595]: time="2025-02-13T19:56:51.227962614Z" level=info msg="Docker daemon" commit=061aa95809be396a6b5542618d8a34b02a21ff77 containerd-snapshotter=false storage-driver=overlay2 version=26.1.0 Feb 13 19:56:51.228112 dockerd[1595]: time="2025-02-13T19:56:51.228072930Z" level=info msg="Daemon has completed initialization" Feb 13 19:56:51.252795 dockerd[1595]: time="2025-02-13T19:56:51.252578900Z" level=info msg="API listen on /run/docker.sock" Feb 13 19:56:51.253080 systemd[1]: Started docker.service - Docker Application Container Engine. Feb 13 19:56:51.873156 containerd[1446]: time="2025-02-13T19:56:51.873099761Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.30.10\"" Feb 13 19:56:52.592609 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2191936816.mount: Deactivated successfully. 
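[Editor's note] Once "API listen on /run/docker.sock" appears, the daemon can be probed directly over the Unix socket; GET /_ping is the Docker Engine API health endpoint and a healthy daemon answers 200 with body "OK". A minimal sketch, assuming the default socket path from the log:

    import socket

    s = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
    s.connect("/run/docker.sock")
    s.sendall(b"GET /_ping HTTP/1.0\r\nHost: docker\r\n\r\n")
    print(s.recv(4096).decode().splitlines()[0])   # expect a 200 status line
    s.close()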
Feb 13 19:56:54.490038 containerd[1446]: time="2025-02-13T19:56:54.489979629Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.30.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:56:54.490896 containerd[1446]: time="2025-02-13T19:56:54.490467542Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.30.10: active requests=0, bytes read=29865209" Feb 13 19:56:54.491557 containerd[1446]: time="2025-02-13T19:56:54.491526066Z" level=info msg="ImageCreate event name:\"sha256:deaeae5e8513d8c5921aee5b515f0fc2ac63b71dfe965318f71eb49468e74a4f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:56:54.494574 containerd[1446]: time="2025-02-13T19:56:54.494544837Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:63b2b4b4e9b5dcb5b1b6cec9d5f5f538791a40cd8cb273ef530e6d6535aa0b43\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:56:54.495802 containerd[1446]: time="2025-02-13T19:56:54.495662414Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.30.10\" with image id \"sha256:deaeae5e8513d8c5921aee5b515f0fc2ac63b71dfe965318f71eb49468e74a4f\", repo tag \"registry.k8s.io/kube-apiserver:v1.30.10\", repo digest \"registry.k8s.io/kube-apiserver@sha256:63b2b4b4e9b5dcb5b1b6cec9d5f5f538791a40cd8cb273ef530e6d6535aa0b43\", size \"29862007\" in 2.622509794s" Feb 13 19:56:54.495802 containerd[1446]: time="2025-02-13T19:56:54.495692951Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.30.10\" returns image reference \"sha256:deaeae5e8513d8c5921aee5b515f0fc2ac63b71dfe965318f71eb49468e74a4f\"" Feb 13 19:56:54.513871 containerd[1446]: time="2025-02-13T19:56:54.513817257Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.30.10\"" Feb 13 19:56:55.315972 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Feb 13 19:56:55.327182 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Feb 13 19:56:55.420210 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Feb 13 19:56:55.423783 (kubelet)[1818]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Feb 13 19:56:55.462453 kubelet[1818]: E0213 19:56:55.462356 1818 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Feb 13 19:56:55.465457 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Feb 13 19:56:55.465660 systemd[1]: kubelet.service: Failed with result 'exit-code'. 
Feb 13 19:56:56.587830 containerd[1446]: time="2025-02-13T19:56:56.587576141Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.30.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:56:56.588815 containerd[1446]: time="2025-02-13T19:56:56.588780641Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.30.10: active requests=0, bytes read=26898596" Feb 13 19:56:56.589722 containerd[1446]: time="2025-02-13T19:56:56.589299543Z" level=info msg="ImageCreate event name:\"sha256:e31753dd49b05da8fcb7deb26f2a5942a6747a0e6d4492f3dc8544123b97a3a2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:56:56.594971 containerd[1446]: time="2025-02-13T19:56:56.594934741Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:99b3336343ea48be24f1e64774825e9f8d5170bd2ed482ff336548eb824f5f58\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:56:56.596302 containerd[1446]: time="2025-02-13T19:56:56.596264372Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.30.10\" with image id \"sha256:e31753dd49b05da8fcb7deb26f2a5942a6747a0e6d4492f3dc8544123b97a3a2\", repo tag \"registry.k8s.io/kube-controller-manager:v1.30.10\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:99b3336343ea48be24f1e64774825e9f8d5170bd2ed482ff336548eb824f5f58\", size \"28302323\" in 2.08241081s" Feb 13 19:56:56.596302 containerd[1446]: time="2025-02-13T19:56:56.596300180Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.30.10\" returns image reference \"sha256:e31753dd49b05da8fcb7deb26f2a5942a6747a0e6d4492f3dc8544123b97a3a2\"" Feb 13 19:56:56.616236 containerd[1446]: time="2025-02-13T19:56:56.616193384Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.30.10\"" Feb 13 19:56:58.322289 containerd[1446]: time="2025-02-13T19:56:58.322230423Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.30.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:56:58.322816 containerd[1446]: time="2025-02-13T19:56:58.322780235Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.30.10: active requests=0, bytes read=16164936" Feb 13 19:56:58.323525 containerd[1446]: time="2025-02-13T19:56:58.323496799Z" level=info msg="ImageCreate event name:\"sha256:ea60c047fad7c01bf50f1f0259a4aeea2cc4401850d5a95802cc1d07d9021eb4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:56:58.326281 containerd[1446]: time="2025-02-13T19:56:58.326246850Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:cf7eb256192f1f51093fe278c209a9368f0675eb61ed01b148af47d2f21c002d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:56:58.327503 containerd[1446]: time="2025-02-13T19:56:58.327467650Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.30.10\" with image id \"sha256:ea60c047fad7c01bf50f1f0259a4aeea2cc4401850d5a95802cc1d07d9021eb4\", repo tag \"registry.k8s.io/kube-scheduler:v1.30.10\", repo digest \"registry.k8s.io/kube-scheduler@sha256:cf7eb256192f1f51093fe278c209a9368f0675eb61ed01b148af47d2f21c002d\", size \"17568681\" in 1.711233082s" Feb 13 19:56:58.327532 containerd[1446]: time="2025-02-13T19:56:58.327504994Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.30.10\" returns image reference \"sha256:ea60c047fad7c01bf50f1f0259a4aeea2cc4401850d5a95802cc1d07d9021eb4\"" Feb 13 19:56:58.345236 
containerd[1446]: time="2025-02-13T19:56:58.345122541Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.30.10\"" Feb 13 19:56:59.428922 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2338585615.mount: Deactivated successfully. Feb 13 19:56:59.855827 containerd[1446]: time="2025-02-13T19:56:59.855699192Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.30.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:56:59.856263 containerd[1446]: time="2025-02-13T19:56:59.856211196Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.30.10: active requests=0, bytes read=25663372" Feb 13 19:56:59.857075 containerd[1446]: time="2025-02-13T19:56:59.857038600Z" level=info msg="ImageCreate event name:\"sha256:fa8af75a6512774cc93242474a9841ace82a7d0646001149fc65d92a8bb0c00a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:56:59.858866 containerd[1446]: time="2025-02-13T19:56:59.858817539Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:d112e804e548fce28d9f1e3282c9ce54e374451e6a2c41b1ca9d7fca5d1fcc48\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:56:59.859698 containerd[1446]: time="2025-02-13T19:56:59.859666076Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.30.10\" with image id \"sha256:fa8af75a6512774cc93242474a9841ace82a7d0646001149fc65d92a8bb0c00a\", repo tag \"registry.k8s.io/kube-proxy:v1.30.10\", repo digest \"registry.k8s.io/kube-proxy@sha256:d112e804e548fce28d9f1e3282c9ce54e374451e6a2c41b1ca9d7fca5d1fcc48\", size \"25662389\" in 1.514506385s" Feb 13 19:56:59.859740 containerd[1446]: time="2025-02-13T19:56:59.859701934Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.30.10\" returns image reference \"sha256:fa8af75a6512774cc93242474a9841ace82a7d0646001149fc65d92a8bb0c00a\"" Feb 13 19:56:59.876908 containerd[1446]: time="2025-02-13T19:56:59.876879588Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\"" Feb 13 19:57:00.535324 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1690160432.mount: Deactivated successfully. 
Feb 13 19:57:01.255047 containerd[1446]: time="2025-02-13T19:57:01.254894295Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:57:01.255947 containerd[1446]: time="2025-02-13T19:57:01.255457352Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.1: active requests=0, bytes read=16485383" Feb 13 19:57:01.256555 containerd[1446]: time="2025-02-13T19:57:01.256524892Z" level=info msg="ImageCreate event name:\"sha256:2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:57:01.259469 containerd[1446]: time="2025-02-13T19:57:01.259438753Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:57:01.261126 containerd[1446]: time="2025-02-13T19:57:01.261048391Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.1\" with image id \"sha256:2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.1\", repo digest \"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\", size \"16482581\" in 1.384133133s" Feb 13 19:57:01.261126 containerd[1446]: time="2025-02-13T19:57:01.261086523Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\" returns image reference \"sha256:2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93\"" Feb 13 19:57:01.278667 containerd[1446]: time="2025-02-13T19:57:01.278637584Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\"" Feb 13 19:57:01.866270 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount293821437.mount: Deactivated successfully. 
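[Editor's note] Unit names like var-lib-containerd-tmpmounts-containerd\x2dmount293821437.mount come from systemd's path escaping: '/' separators become '-', so a literal '-' inside a path component must be encoded as \x2d. An approximate sketch of the rule (the full version is in systemd.unit(5)):

    def systemd_escape_path(path: str) -> str:
        # Approximation of `systemd-escape --path`: '/' separators turn
        # into '-', and any character outside [A-Za-z0-9:_.] inside a
        # component (including a literal '-') becomes a \xNN escape.
        def esc(component: str) -> str:
            return "".join(
                ch if ch.isalnum() or ch in ":_." else "\\x%02x" % ord(ch)
                for ch in component
            )
        return "-".join(esc(c) for c in path.strip("/").split("/"))

    print(systemd_escape_path(
        "/var/lib/containerd/tmpmounts/containerd-mount293821437"))
    # -> var-lib-containerd-tmpmounts-containerd\x2dmount293821437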
Feb 13 19:57:01.870497 containerd[1446]: time="2025-02-13T19:57:01.870454521Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:57:01.873873 containerd[1446]: time="2025-02-13T19:57:01.873830629Z" level=info msg="stop pulling image registry.k8s.io/pause:3.9: active requests=0, bytes read=268823" Feb 13 19:57:01.874570 containerd[1446]: time="2025-02-13T19:57:01.874529957Z" level=info msg="ImageCreate event name:\"sha256:829e9de338bd5fdd3f16f68f83a9fb288fbc8453e881e5d5cfd0f6f2ff72b43e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:57:01.876677 containerd[1446]: time="2025-02-13T19:57:01.876625392Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:57:01.877527 containerd[1446]: time="2025-02-13T19:57:01.877490477Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.9\" with image id \"sha256:829e9de338bd5fdd3f16f68f83a9fb288fbc8453e881e5d5cfd0f6f2ff72b43e\", repo tag \"registry.k8s.io/pause:3.9\", repo digest \"registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097\", size \"268051\" in 598.807549ms" Feb 13 19:57:01.877582 containerd[1446]: time="2025-02-13T19:57:01.877525262Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\" returns image reference \"sha256:829e9de338bd5fdd3f16f68f83a9fb288fbc8453e881e5d5cfd0f6f2ff72b43e\"" Feb 13 19:57:01.896531 containerd[1446]: time="2025-02-13T19:57:01.896501038Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.12-0\"" Feb 13 19:57:02.629282 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2371174412.mount: Deactivated successfully. Feb 13 19:57:05.611456 containerd[1446]: time="2025-02-13T19:57:05.611256108Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.12-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:57:05.612373 containerd[1446]: time="2025-02-13T19:57:05.612125974Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.12-0: active requests=0, bytes read=66191474" Feb 13 19:57:05.613099 containerd[1446]: time="2025-02-13T19:57:05.613063008Z" level=info msg="ImageCreate event name:\"sha256:014faa467e29798aeef733fe6d1a3b5e382688217b053ad23410e6cccd5d22fd\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:57:05.616533 containerd[1446]: time="2025-02-13T19:57:05.616481330Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:44a8e24dcbba3470ee1fee21d5e88d128c936e9b55d4bc51fbef8086f8ed123b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:57:05.618217 containerd[1446]: time="2025-02-13T19:57:05.618074315Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.12-0\" with image id \"sha256:014faa467e29798aeef733fe6d1a3b5e382688217b053ad23410e6cccd5d22fd\", repo tag \"registry.k8s.io/etcd:3.5.12-0\", repo digest \"registry.k8s.io/etcd@sha256:44a8e24dcbba3470ee1fee21d5e88d128c936e9b55d4bc51fbef8086f8ed123b\", size \"66189079\" in 3.721539035s" Feb 13 19:57:05.618217 containerd[1446]: time="2025-02-13T19:57:05.618114503Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.12-0\" returns image reference \"sha256:014faa467e29798aeef733fe6d1a3b5e382688217b053ad23410e6cccd5d22fd\"" Feb 13 19:57:05.648920 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. 
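[Editor's note] kubelet.service is crash-looping on a fixed cadence: "Failed with result 'exit-code'" at 19:56:55.465660 is followed by "Scheduled restart job, restart counter is at 2" at 19:57:05.648920. The gap is consistent with a restart delay of about ten seconds, though the unit file itself is not shown in this log, so the exact RestartSec= value is an assumption:

    from datetime import datetime

    fmt = "%H:%M:%S.%f"
    failed      = datetime.strptime("19:56:55.465660", fmt)
    rescheduled = datetime.strptime("19:57:05.648920", fmt)
    print((rescheduled - failed).total_seconds())   # 10.18326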
Feb 13 19:57:05.657245 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Feb 13 19:57:05.748307 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Feb 13 19:57:05.751864 (kubelet)[1986]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Feb 13 19:57:05.788009 kubelet[1986]: E0213 19:57:05.787961 1986 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Feb 13 19:57:05.790712 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Feb 13 19:57:05.790866 systemd[1]: kubelet.service: Failed with result 'exit-code'. Feb 13 19:57:09.657451 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Feb 13 19:57:09.667252 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Feb 13 19:57:09.682127 systemd[1]: Reloading requested from client PID 2066 ('systemctl') (unit session-5.scope)... Feb 13 19:57:09.682143 systemd[1]: Reloading... Feb 13 19:57:09.747125 zram_generator::config[2105]: No configuration found. Feb 13 19:57:09.865507 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Feb 13 19:57:09.931582 systemd[1]: Reloading finished in 249 ms. Feb 13 19:57:09.981857 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Feb 13 19:57:09.984190 systemd[1]: kubelet.service: Deactivated successfully. Feb 13 19:57:09.984374 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Feb 13 19:57:09.985754 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Feb 13 19:57:10.074219 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Feb 13 19:57:10.078061 (kubelet)[2152]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Feb 13 19:57:10.112912 kubelet[2152]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Feb 13 19:57:10.112912 kubelet[2152]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Feb 13 19:57:10.112912 kubelet[2152]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
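[Editor's note] The deprecation notices above point at the same file whose absence has been crashing the service: /var/lib/kubelet/config.yaml, normally generated by kubeadm init/join rather than written by hand. A hypothetical minimal KubeletConfiguration that would satisfy the loader; kind and apiVersion are the real schema identifiers, while the field values (a systemd cgroup driver to match the SystemdCgroup:true runc option in the containerd config dump, and the conventional static pod path) are illustrative assumptions:

    import os

    # Hypothetical minimal KubeletConfiguration; on a kubeadm cluster this
    # file is generated by kubeadm, not written by hand.
    CONFIG = "\n".join([
        "apiVersion: kubelet.config.k8s.io/v1beta1",
        "kind: KubeletConfiguration",
        "cgroupDriver: systemd",
        "staticPodPath: /etc/kubernetes/manifests",
    ]) + "\n"

    os.makedirs("/var/lib/kubelet", exist_ok=True)
    with open("/var/lib/kubelet/config.yaml", "w") as f:
        f.write(CONFIG)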
Feb 13 19:57:10.115291 kubelet[2152]: I0213 19:57:10.115246 2152 server.go:205] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Feb 13 19:57:11.075282 kubelet[2152]: I0213 19:57:11.074965 2152 server.go:484] "Kubelet version" kubeletVersion="v1.30.1" Feb 13 19:57:11.075282 kubelet[2152]: I0213 19:57:11.075008 2152 server.go:486] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Feb 13 19:57:11.075282 kubelet[2152]: I0213 19:57:11.075226 2152 server.go:927] "Client rotation is on, will bootstrap in background" Feb 13 19:57:11.119406 kubelet[2152]: E0213 19:57:11.119355 2152 certificate_manager.go:562] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post "https://10.0.0.134:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 10.0.0.134:6443: connect: connection refused Feb 13 19:57:11.120076 kubelet[2152]: I0213 19:57:11.119982 2152 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Feb 13 19:57:11.128845 kubelet[2152]: I0213 19:57:11.128797 2152 server.go:742] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Feb 13 19:57:11.129263 kubelet[2152]: I0213 19:57:11.129201 2152 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Feb 13 19:57:11.129681 kubelet[2152]: I0213 19:57:11.129236 2152 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null} Feb 13 19:57:11.129681 kubelet[2152]: I0213 19:57:11.129675 2152 topology_manager.go:138] "Creating topology manager with none policy" Feb 13 19:57:11.129799 kubelet[2152]: I0213 19:57:11.129773 2152 container_manager_linux.go:301] "Creating device plugin manager" Feb 13 19:57:11.129957 kubelet[2152]: I0213 19:57:11.129939 2152 state_mem.go:36] "Initialized new in-memory state store" Feb 13 
19:57:11.134631 kubelet[2152]: I0213 19:57:11.134603 2152 kubelet.go:400] "Attempting to sync node with API server" Feb 13 19:57:11.134631 kubelet[2152]: I0213 19:57:11.134632 2152 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests" Feb 13 19:57:11.134954 kubelet[2152]: I0213 19:57:11.134940 2152 kubelet.go:312] "Adding apiserver pod source" Feb 13 19:57:11.135720 kubelet[2152]: I0213 19:57:11.135029 2152 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Feb 13 19:57:11.135720 kubelet[2152]: W0213 19:57:11.135286 2152 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.134:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.134:6443: connect: connection refused Feb 13 19:57:11.135720 kubelet[2152]: E0213 19:57:11.135340 2152 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://10.0.0.134:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.134:6443: connect: connection refused Feb 13 19:57:11.135720 kubelet[2152]: W0213 19:57:11.135648 2152 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.134:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.0.0.134:6443: connect: connection refused Feb 13 19:57:11.135720 kubelet[2152]: E0213 19:57:11.135684 2152 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.0.0.134:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.0.0.134:6443: connect: connection refused Feb 13 19:57:11.135993 kubelet[2152]: I0213 19:57:11.135975 2152 kuberuntime_manager.go:261] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Feb 13 19:57:11.136384 kubelet[2152]: I0213 19:57:11.136357 2152 kubelet.go:815] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Feb 13 19:57:11.136483 kubelet[2152]: W0213 19:57:11.136465 2152 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. 
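[Editor's note] Every reflector warning above reduces to "dial tcp 10.0.0.134:6443: connect: connection refused": nothing listens on that port yet, because the kube-apiserver this kubelet talks to is itself a static pod that this same kubelet still has to start. A direct probe shows the same symptom:

    import socket

    try:
        socket.create_connection(("10.0.0.134", 6443), timeout=2).close()
        print("apiserver port open")
    except OSError as exc:
        print("connect failed:", exc)   # ECONNREFUSED until the static pod runs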
Feb 13 19:57:11.137325 kubelet[2152]: I0213 19:57:11.137301 2152 server.go:1264] "Started kubelet" Feb 13 19:57:11.138072 kubelet[2152]: I0213 19:57:11.137965 2152 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Feb 13 19:57:11.138072 kubelet[2152]: I0213 19:57:11.138044 2152 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Feb 13 19:57:11.138384 kubelet[2152]: I0213 19:57:11.138350 2152 server.go:227] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Feb 13 19:57:11.143495 kubelet[2152]: I0213 19:57:11.143221 2152 server.go:455] "Adding debug handlers to kubelet server" Feb 13 19:57:11.143699 kubelet[2152]: I0213 19:57:11.143677 2152 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Feb 13 19:57:11.144709 kubelet[2152]: E0213 19:57:11.144358 2152 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.134:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.134:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.1823dcc7a607e1b8 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-02-13 19:57:11.13728044 +0000 UTC m=+1.055946241,LastTimestamp:2025-02-13 19:57:11.13728044 +0000 UTC m=+1.055946241,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Feb 13 19:57:11.146195 kubelet[2152]: E0213 19:57:11.146063 2152 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" Feb 13 19:57:11.146309 kubelet[2152]: I0213 19:57:11.146295 2152 volume_manager.go:291] "Starting Kubelet Volume Manager" Feb 13 19:57:11.146389 kubelet[2152]: I0213 19:57:11.146377 2152 desired_state_of_world_populator.go:149] "Desired state populator starts to run" Feb 13 19:57:11.148099 kubelet[2152]: I0213 19:57:11.148066 2152 reconciler.go:26] "Reconciler: start to sync state" Feb 13 19:57:11.148422 kubelet[2152]: W0213 19:57:11.148366 2152 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.134:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.134:6443: connect: connection refused Feb 13 19:57:11.148422 kubelet[2152]: E0213 19:57:11.148419 2152 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://10.0.0.134:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.134:6443: connect: connection refused Feb 13 19:57:11.148500 kubelet[2152]: E0213 19:57:11.148446 2152 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.134:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.134:6443: connect: connection refused" interval="200ms" Feb 13 19:57:11.149848 kubelet[2152]: I0213 19:57:11.149380 2152 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Feb 13 19:57:11.150612 kubelet[2152]: I0213 19:57:11.150430 2152 factory.go:221] 
Registration of the containerd container factory successfully Feb 13 19:57:11.150612 kubelet[2152]: I0213 19:57:11.150450 2152 factory.go:221] Registration of the systemd container factory successfully Feb 13 19:57:11.151074 kubelet[2152]: E0213 19:57:11.150825 2152 kubelet.go:1467] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Feb 13 19:57:11.160164 kubelet[2152]: I0213 19:57:11.160114 2152 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Feb 13 19:57:11.161122 kubelet[2152]: I0213 19:57:11.161102 2152 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Feb 13 19:57:11.161122 kubelet[2152]: I0213 19:57:11.161125 2152 status_manager.go:217] "Starting to sync pod status with apiserver" Feb 13 19:57:11.161203 kubelet[2152]: I0213 19:57:11.161141 2152 kubelet.go:2337] "Starting kubelet main sync loop" Feb 13 19:57:11.161203 kubelet[2152]: E0213 19:57:11.161182 2152 kubelet.go:2361] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Feb 13 19:57:11.163297 kubelet[2152]: W0213 19:57:11.163253 2152 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.134:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.134:6443: connect: connection refused Feb 13 19:57:11.163568 kubelet[2152]: E0213 19:57:11.163541 2152 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://10.0.0.134:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.134:6443: connect: connection refused Feb 13 19:57:11.163684 kubelet[2152]: I0213 19:57:11.163650 2152 cpu_manager.go:214] "Starting CPU manager" policy="none" Feb 13 19:57:11.163684 kubelet[2152]: I0213 19:57:11.163662 2152 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Feb 13 19:57:11.163747 kubelet[2152]: I0213 19:57:11.163707 2152 state_mem.go:36] "Initialized new in-memory state store" Feb 13 19:57:11.165481 kubelet[2152]: I0213 19:57:11.165463 2152 policy_none.go:49] "None policy: Start" Feb 13 19:57:11.166114 kubelet[2152]: I0213 19:57:11.165991 2152 memory_manager.go:170] "Starting memorymanager" policy="None" Feb 13 19:57:11.166114 kubelet[2152]: I0213 19:57:11.166057 2152 state_mem.go:35] "Initializing new in-memory state store" Feb 13 19:57:11.171784 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Feb 13 19:57:11.186708 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Feb 13 19:57:11.189539 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. 
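[Editor's note] The slices created above encode the kubelet's QoS cgroup hierarchy: systemd treats "a-b.slice" as a child of "a.slice", so kubepods-burstable.slice and kubepods-besteffort.slice nest under kubepods.slice, with per-pod slices such as kubepods-burstable-pod<uid>.slice appearing below them later in this log. A naming sketch under those assumptions:

    def pod_slice(qos_class: str, pod_uid: str) -> str:
        # Guaranteed pods live directly under kubepods.slice; Burstable and
        # BestEffort pods get the intermediate slices created above. Real
        # pod UIDs containing '-' are further escaped by the systemd cgroup
        # driver; the static-pod hashes in this log have none.
        parent = "kubepods" if qos_class == "guaranteed" else f"kubepods-{qos_class}"
        return f"{parent}-pod{pod_uid}.slice"

    print(pod_slice("burstable", "6d11c729004b9d1886797a04e587067d"))
    # -> kubepods-burstable-pod6d11c729004b9d1886797a04e587067d.slice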
Feb 13 19:57:11.197836 kubelet[2152]: I0213 19:57:11.197812 2152 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Feb 13 19:57:11.198482 kubelet[2152]: I0213 19:57:11.198220 2152 container_log_manager.go:186] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Feb 13 19:57:11.198482 kubelet[2152]: I0213 19:57:11.198313 2152 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Feb 13 19:57:11.199996 kubelet[2152]: E0213 19:57:11.199978 2152 eviction_manager.go:282] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Feb 13 19:57:11.247922 kubelet[2152]: I0213 19:57:11.247893 2152 kubelet_node_status.go:73] "Attempting to register node" node="localhost" Feb 13 19:57:11.248497 kubelet[2152]: E0213 19:57:11.248465 2152 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.0.0.134:6443/api/v1/nodes\": dial tcp 10.0.0.134:6443: connect: connection refused" node="localhost" Feb 13 19:57:11.261717 kubelet[2152]: I0213 19:57:11.261677 2152 topology_manager.go:215] "Topology Admit Handler" podUID="6d11c729004b9d1886797a04e587067d" podNamespace="kube-system" podName="kube-apiserver-localhost" Feb 13 19:57:11.262732 kubelet[2152]: I0213 19:57:11.262686 2152 topology_manager.go:215] "Topology Admit Handler" podUID="dd3721fb1a67092819e35b40473f4063" podNamespace="kube-system" podName="kube-controller-manager-localhost" Feb 13 19:57:11.263876 kubelet[2152]: I0213 19:57:11.263673 2152 topology_manager.go:215] "Topology Admit Handler" podUID="8d610d6c43052dbc8df47eb68906a982" podNamespace="kube-system" podName="kube-scheduler-localhost" Feb 13 19:57:11.271179 systemd[1]: Created slice kubepods-burstable-pod6d11c729004b9d1886797a04e587067d.slice - libcontainer container kubepods-burstable-pod6d11c729004b9d1886797a04e587067d.slice. Feb 13 19:57:11.292984 systemd[1]: Created slice kubepods-burstable-poddd3721fb1a67092819e35b40473f4063.slice - libcontainer container kubepods-burstable-poddd3721fb1a67092819e35b40473f4063.slice. Feb 13 19:57:11.306729 systemd[1]: Created slice kubepods-burstable-pod8d610d6c43052dbc8df47eb68906a982.slice - libcontainer container kubepods-burstable-pod8d610d6c43052dbc8df47eb68906a982.slice. 
Feb 13 19:57:11.348390 kubelet[2152]: I0213 19:57:11.348269 2152 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/dd3721fb1a67092819e35b40473f4063-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"dd3721fb1a67092819e35b40473f4063\") " pod="kube-system/kube-controller-manager-localhost" Feb 13 19:57:11.348390 kubelet[2152]: I0213 19:57:11.348310 2152 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/dd3721fb1a67092819e35b40473f4063-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"dd3721fb1a67092819e35b40473f4063\") " pod="kube-system/kube-controller-manager-localhost" Feb 13 19:57:11.348390 kubelet[2152]: I0213 19:57:11.348333 2152 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/dd3721fb1a67092819e35b40473f4063-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"dd3721fb1a67092819e35b40473f4063\") " pod="kube-system/kube-controller-manager-localhost" Feb 13 19:57:11.348390 kubelet[2152]: I0213 19:57:11.348351 2152 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/8d610d6c43052dbc8df47eb68906a982-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"8d610d6c43052dbc8df47eb68906a982\") " pod="kube-system/kube-scheduler-localhost" Feb 13 19:57:11.348390 kubelet[2152]: I0213 19:57:11.348366 2152 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/6d11c729004b9d1886797a04e587067d-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"6d11c729004b9d1886797a04e587067d\") " pod="kube-system/kube-apiserver-localhost" Feb 13 19:57:11.348573 kubelet[2152]: I0213 19:57:11.348401 2152 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/6d11c729004b9d1886797a04e587067d-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"6d11c729004b9d1886797a04e587067d\") " pod="kube-system/kube-apiserver-localhost" Feb 13 19:57:11.348573 kubelet[2152]: I0213 19:57:11.348417 2152 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/6d11c729004b9d1886797a04e587067d-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"6d11c729004b9d1886797a04e587067d\") " pod="kube-system/kube-apiserver-localhost" Feb 13 19:57:11.348573 kubelet[2152]: I0213 19:57:11.348433 2152 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/dd3721fb1a67092819e35b40473f4063-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"dd3721fb1a67092819e35b40473f4063\") " pod="kube-system/kube-controller-manager-localhost" Feb 13 19:57:11.348573 kubelet[2152]: I0213 19:57:11.348447 2152 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/dd3721fb1a67092819e35b40473f4063-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"dd3721fb1a67092819e35b40473f4063\") " 
pod="kube-system/kube-controller-manager-localhost" Feb 13 19:57:11.349182 kubelet[2152]: E0213 19:57:11.348855 2152 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.134:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.134:6443: connect: connection refused" interval="400ms" Feb 13 19:57:11.450564 kubelet[2152]: I0213 19:57:11.450531 2152 kubelet_node_status.go:73] "Attempting to register node" node="localhost" Feb 13 19:57:11.450865 kubelet[2152]: E0213 19:57:11.450837 2152 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.0.0.134:6443/api/v1/nodes\": dial tcp 10.0.0.134:6443: connect: connection refused" node="localhost" Feb 13 19:57:11.591381 kubelet[2152]: E0213 19:57:11.591302 2152 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:57:11.592042 containerd[1446]: time="2025-02-13T19:57:11.591954645Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:6d11c729004b9d1886797a04e587067d,Namespace:kube-system,Attempt:0,}" Feb 13 19:57:11.605519 kubelet[2152]: E0213 19:57:11.605391 2152 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:57:11.606477 containerd[1446]: time="2025-02-13T19:57:11.606356754Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:dd3721fb1a67092819e35b40473f4063,Namespace:kube-system,Attempt:0,}" Feb 13 19:57:11.608844 kubelet[2152]: E0213 19:57:11.608648 2152 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:57:11.608977 containerd[1446]: time="2025-02-13T19:57:11.608946000Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:8d610d6c43052dbc8df47eb68906a982,Namespace:kube-system,Attempt:0,}" Feb 13 19:57:11.749461 kubelet[2152]: E0213 19:57:11.749422 2152 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.134:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.134:6443: connect: connection refused" interval="800ms" Feb 13 19:57:11.852061 kubelet[2152]: I0213 19:57:11.851981 2152 kubelet_node_status.go:73] "Attempting to register node" node="localhost" Feb 13 19:57:11.852334 kubelet[2152]: E0213 19:57:11.852297 2152 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.0.0.134:6443/api/v1/nodes\": dial tcp 10.0.0.134:6443: connect: connection refused" node="localhost" Feb 13 19:57:12.055159 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3233770598.mount: Deactivated successfully. 
Feb 13 19:57:12.059193 containerd[1446]: time="2025-02-13T19:57:12.059144879Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Feb 13 19:57:12.061145 containerd[1446]: time="2025-02-13T19:57:12.061111449Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=269175" Feb 13 19:57:12.061921 containerd[1446]: time="2025-02-13T19:57:12.061885360Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Feb 13 19:57:12.062670 containerd[1446]: time="2025-02-13T19:57:12.062640488Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Feb 13 19:57:12.063338 containerd[1446]: time="2025-02-13T19:57:12.063303578Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Feb 13 19:57:12.063963 containerd[1446]: time="2025-02-13T19:57:12.063930340Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Feb 13 19:57:12.064541 containerd[1446]: time="2025-02-13T19:57:12.064505029Z" level=info msg="ImageCreate event name:\"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Feb 13 19:57:12.068822 containerd[1446]: time="2025-02-13T19:57:12.068777586Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Feb 13 19:57:12.069661 containerd[1446]: time="2025-02-13T19:57:12.069631186Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 460.630281ms" Feb 13 19:57:12.070887 containerd[1446]: time="2025-02-13T19:57:12.070851101Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 478.71544ms" Feb 13 19:57:12.072064 containerd[1446]: time="2025-02-13T19:57:12.072035806Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 465.593457ms" Feb 13 19:57:12.093607 kubelet[2152]: W0213 19:57:12.093557 2152 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.134:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.134:6443: connect: connection refused Feb 13 19:57:12.093696 
kubelet[2152]: E0213 19:57:12.093619 2152 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://10.0.0.134:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.134:6443: connect: connection refused Feb 13 19:57:12.195470 kubelet[2152]: W0213 19:57:12.195399 2152 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.134:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.134:6443: connect: connection refused Feb 13 19:57:12.195470 kubelet[2152]: E0213 19:57:12.195466 2152 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://10.0.0.134:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.134:6443: connect: connection refused Feb 13 19:57:12.213174 containerd[1446]: time="2025-02-13T19:57:12.212982647Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 19:57:12.213174 containerd[1446]: time="2025-02-13T19:57:12.213095626Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 19:57:12.213174 containerd[1446]: time="2025-02-13T19:57:12.213107975Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 19:57:12.213174 containerd[1446]: time="2025-02-13T19:57:12.212881817Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 19:57:12.213174 containerd[1446]: time="2025-02-13T19:57:12.212948078Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 19:57:12.213174 containerd[1446]: time="2025-02-13T19:57:12.212964743Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 19:57:12.213862 containerd[1446]: time="2025-02-13T19:57:12.213666079Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 19:57:12.213919 containerd[1446]: time="2025-02-13T19:57:12.213851514Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 19:57:12.214531 containerd[1446]: time="2025-02-13T19:57:12.214384999Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 19:57:12.214681 containerd[1446]: time="2025-02-13T19:57:12.214390634Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 19:57:12.214681 containerd[1446]: time="2025-02-13T19:57:12.214411216Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 19:57:12.214681 containerd[1446]: time="2025-02-13T19:57:12.214553489Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 19:57:12.245204 systemd[1]: Started cri-containerd-05897be2e1116aabcf5791c4749bda792eec1920f806d76346534d7dbf3f4d7c.scope - libcontainer container 05897be2e1116aabcf5791c4749bda792eec1920f806d76346534d7dbf3f4d7c. Feb 13 19:57:12.246557 systemd[1]: Started cri-containerd-471a303034da0ebf7bc970902b8dfb662d1642fba307d4130165ab75fde7d42c.scope - libcontainer container 471a303034da0ebf7bc970902b8dfb662d1642fba307d4130165ab75fde7d42c. Feb 13 19:57:12.247660 systemd[1]: Started cri-containerd-a285d85539020309d89623c35bbe8d4fd658ff4716166d53fa0370d3dd2ff69a.scope - libcontainer container a285d85539020309d89623c35bbe8d4fd658ff4716166d53fa0370d3dd2ff69a. Feb 13 19:57:12.275775 containerd[1446]: time="2025-02-13T19:57:12.275538454Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:6d11c729004b9d1886797a04e587067d,Namespace:kube-system,Attempt:0,} returns sandbox id \"05897be2e1116aabcf5791c4749bda792eec1920f806d76346534d7dbf3f4d7c\"" Feb 13 19:57:12.277601 kubelet[2152]: E0213 19:57:12.277466 2152 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:57:12.280522 containerd[1446]: time="2025-02-13T19:57:12.280424425Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:dd3721fb1a67092819e35b40473f4063,Namespace:kube-system,Attempt:0,} returns sandbox id \"a285d85539020309d89623c35bbe8d4fd658ff4716166d53fa0370d3dd2ff69a\"" Feb 13 19:57:12.280522 containerd[1446]: time="2025-02-13T19:57:12.280486850Z" level=info msg="CreateContainer within sandbox \"05897be2e1116aabcf5791c4749bda792eec1920f806d76346534d7dbf3f4d7c\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Feb 13 19:57:12.281609 kubelet[2152]: E0213 19:57:12.281478 2152 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:57:12.283725 containerd[1446]: time="2025-02-13T19:57:12.283646598Z" level=info msg="CreateContainer within sandbox \"a285d85539020309d89623c35bbe8d4fd658ff4716166d53fa0370d3dd2ff69a\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Feb 13 19:57:12.285738 containerd[1446]: time="2025-02-13T19:57:12.285710201Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:8d610d6c43052dbc8df47eb68906a982,Namespace:kube-system,Attempt:0,} returns sandbox id \"471a303034da0ebf7bc970902b8dfb662d1642fba307d4130165ab75fde7d42c\"" Feb 13 19:57:12.286329 kubelet[2152]: E0213 19:57:12.286310 2152 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:57:12.288633 containerd[1446]: time="2025-02-13T19:57:12.288588639Z" level=info msg="CreateContainer within sandbox \"471a303034da0ebf7bc970902b8dfb662d1642fba307d4130165ab75fde7d42c\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Feb 13 19:57:12.300108 containerd[1446]: time="2025-02-13T19:57:12.300062068Z" level=info msg="CreateContainer within sandbox \"a285d85539020309d89623c35bbe8d4fd658ff4716166d53fa0370d3dd2ff69a\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"c272c7be9bd72f5845976d79fe0f70a001fc0742389ae4d517ca586093c65126\"" Feb 
13 19:57:12.300638 containerd[1446]: time="2025-02-13T19:57:12.300606304Z" level=info msg="StartContainer for \"c272c7be9bd72f5845976d79fe0f70a001fc0742389ae4d517ca586093c65126\"" Feb 13 19:57:12.303541 containerd[1446]: time="2025-02-13T19:57:12.303301705Z" level=info msg="CreateContainer within sandbox \"05897be2e1116aabcf5791c4749bda792eec1920f806d76346534d7dbf3f4d7c\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"e0605117c64eac430c4e1b8f8490257975346a0eba8a204fdaaee33ab5f26f42\"" Feb 13 19:57:12.303768 containerd[1446]: time="2025-02-13T19:57:12.303722611Z" level=info msg="StartContainer for \"e0605117c64eac430c4e1b8f8490257975346a0eba8a204fdaaee33ab5f26f42\"" Feb 13 19:57:12.311224 containerd[1446]: time="2025-02-13T19:57:12.310172470Z" level=info msg="CreateContainer within sandbox \"471a303034da0ebf7bc970902b8dfb662d1642fba307d4130165ab75fde7d42c\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"af5fc4ef56ad449c3773c91f3c43e2d788c5318bab6f339d7c548a8d2243b09f\"" Feb 13 19:57:12.311224 containerd[1446]: time="2025-02-13T19:57:12.310581946Z" level=info msg="StartContainer for \"af5fc4ef56ad449c3773c91f3c43e2d788c5318bab6f339d7c548a8d2243b09f\"" Feb 13 19:57:12.325165 systemd[1]: Started cri-containerd-c272c7be9bd72f5845976d79fe0f70a001fc0742389ae4d517ca586093c65126.scope - libcontainer container c272c7be9bd72f5845976d79fe0f70a001fc0742389ae4d517ca586093c65126. Feb 13 19:57:12.328082 systemd[1]: Started cri-containerd-e0605117c64eac430c4e1b8f8490257975346a0eba8a204fdaaee33ab5f26f42.scope - libcontainer container e0605117c64eac430c4e1b8f8490257975346a0eba8a204fdaaee33ab5f26f42. Feb 13 19:57:12.332419 systemd[1]: Started cri-containerd-af5fc4ef56ad449c3773c91f3c43e2d788c5318bab6f339d7c548a8d2243b09f.scope - libcontainer container af5fc4ef56ad449c3773c91f3c43e2d788c5318bab6f339d7c548a8d2243b09f. 
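The sandbox and container ids above come from the standard CRI sequence the kubelet drives against containerd: RunPodSandbox, then CreateContainer inside the returned sandbox, then StartContainer. A trimmed Go sketch of that sequence using the public CRI client stubs; the socket path and the minimal configs are assumptions for illustration, not the kubelet's actual code, and a real CreateContainer also needs image and command fields:

package main

import (
    "context"
    "log"

    "google.golang.org/grpc"
    "google.golang.org/grpc/credentials/insecure"
    runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
)

func main() {
    conn, err := grpc.Dial("unix:///run/containerd/containerd.sock",
        grpc.WithTransportCredentials(insecure.NewCredentials()))
    if err != nil {
        log.Fatal(err)
    }
    defer conn.Close()
    rt := runtimeapi.NewRuntimeServiceClient(conn)
    ctx := context.Background()

    // 1. RunPodSandbox: returns the long sandbox id seen in the log.
    sandboxCfg := &runtimeapi.PodSandboxConfig{
        Metadata: &runtimeapi.PodSandboxMetadata{
            Name:      "kube-apiserver-localhost",
            Namespace: "kube-system",
            Uid:       "6d11c729004b9d1886797a04e587067d",
            Attempt:   0,
        },
    }
    sb, err := rt.RunPodSandbox(ctx, &runtimeapi.RunPodSandboxRequest{Config: sandboxCfg})
    if err != nil {
        log.Fatal(err)
    }

    // 2. CreateContainer within that sandbox: returns the container id.
    ctr, err := rt.CreateContainer(ctx, &runtimeapi.CreateContainerRequest{
        PodSandboxId:  sb.PodSandboxId,
        Config:        &runtimeapi.ContainerConfig{Metadata: &runtimeapi.ContainerMetadata{Name: "kube-apiserver"}},
        SandboxConfig: sandboxCfg,
    })
    if err != nil {
        log.Fatal(err)
    }

    // 3. StartContainer: the "StartContainer ... returns successfully" entries.
    if _, err := rt.StartContainer(ctx, &runtimeapi.StartContainerRequest{ContainerId: ctr.ContainerId}); err != nil {
        log.Fatal(err)
    }
}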
Feb 13 19:57:12.360407 containerd[1446]: time="2025-02-13T19:57:12.360353810Z" level=info msg="StartContainer for \"c272c7be9bd72f5845976d79fe0f70a001fc0742389ae4d517ca586093c65126\" returns successfully" Feb 13 19:57:12.385014 containerd[1446]: time="2025-02-13T19:57:12.384162741Z" level=info msg="StartContainer for \"e0605117c64eac430c4e1b8f8490257975346a0eba8a204fdaaee33ab5f26f42\" returns successfully" Feb 13 19:57:12.385014 containerd[1446]: time="2025-02-13T19:57:12.384215094Z" level=info msg="StartContainer for \"af5fc4ef56ad449c3773c91f3c43e2d788c5318bab6f339d7c548a8d2243b09f\" returns successfully" Feb 13 19:57:12.428099 kubelet[2152]: W0213 19:57:12.428046 2152 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.134:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.134:6443: connect: connection refused Feb 13 19:57:12.428172 kubelet[2152]: E0213 19:57:12.428108 2152 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://10.0.0.134:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.134:6443: connect: connection refused Feb 13 19:57:12.461724 kubelet[2152]: W0213 19:57:12.461628 2152 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.134:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.0.0.134:6443: connect: connection refused Feb 13 19:57:12.461724 kubelet[2152]: E0213 19:57:12.461688 2152 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.0.0.134:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.0.0.134:6443: connect: connection refused Feb 13 19:57:12.550457 kubelet[2152]: E0213 19:57:12.550411 2152 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.134:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.134:6443: connect: connection refused" interval="1.6s" Feb 13 19:57:12.653632 kubelet[2152]: I0213 19:57:12.653360 2152 kubelet_node_status.go:73] "Attempting to register node" node="localhost" Feb 13 19:57:13.168511 kubelet[2152]: E0213 19:57:13.168484 2152 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:57:13.170046 kubelet[2152]: E0213 19:57:13.169680 2152 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:57:13.170929 kubelet[2152]: E0213 19:57:13.170907 2152 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:57:14.112108 kubelet[2152]: I0213 19:57:14.112062 2152 kubelet_node_status.go:76] "Successfully registered node" node="localhost" Feb 13 19:57:14.137636 kubelet[2152]: I0213 19:57:14.137567 2152 apiserver.go:52] "Watching apiserver" Feb 13 19:57:14.146831 kubelet[2152]: I0213 19:57:14.146805 2152 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world" Feb 13 19:57:14.186817 kubelet[2152]: E0213 19:57:14.185291 2152 kubelet.go:1928] "Failed creating a mirror pod for" err="pods 
\"kube-apiserver-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-localhost" Feb 13 19:57:14.186817 kubelet[2152]: E0213 19:57:14.185746 2152 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:57:16.038090 systemd[1]: Reloading requested from client PID 2434 ('systemctl') (unit session-5.scope)... Feb 13 19:57:16.038107 systemd[1]: Reloading... Feb 13 19:57:16.104056 zram_generator::config[2476]: No configuration found. Feb 13 19:57:16.192154 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Feb 13 19:57:16.269003 systemd[1]: Reloading finished in 230 ms. Feb 13 19:57:16.300718 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Feb 13 19:57:16.314248 systemd[1]: kubelet.service: Deactivated successfully. Feb 13 19:57:16.314437 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Feb 13 19:57:16.314481 systemd[1]: kubelet.service: Consumed 1.384s CPU time, 115.7M memory peak, 0B memory swap peak. Feb 13 19:57:16.329271 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Feb 13 19:57:16.419075 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Feb 13 19:57:16.422693 (kubelet)[2515]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Feb 13 19:57:16.463715 kubelet[2515]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Feb 13 19:57:16.463715 kubelet[2515]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Feb 13 19:57:16.463715 kubelet[2515]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Feb 13 19:57:16.463715 kubelet[2515]: I0213 19:57:16.463329 2515 server.go:205] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Feb 13 19:57:16.467942 kubelet[2515]: I0213 19:57:16.467892 2515 server.go:484] "Kubelet version" kubeletVersion="v1.30.1" Feb 13 19:57:16.467942 kubelet[2515]: I0213 19:57:16.467916 2515 server.go:486] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Feb 13 19:57:16.468095 kubelet[2515]: I0213 19:57:16.468071 2515 server.go:927] "Client rotation is on, will bootstrap in background" Feb 13 19:57:16.469437 kubelet[2515]: I0213 19:57:16.469413 2515 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Feb 13 19:57:16.470583 kubelet[2515]: I0213 19:57:16.470562 2515 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Feb 13 19:57:16.475450 kubelet[2515]: I0213 19:57:16.475417 2515 server.go:742] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Feb 13 19:57:16.476047 kubelet[2515]: I0213 19:57:16.475721 2515 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Feb 13 19:57:16.476047 kubelet[2515]: I0213 19:57:16.475749 2515 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null} Feb 13 19:57:16.476047 kubelet[2515]: I0213 19:57:16.475892 2515 topology_manager.go:138] "Creating topology manager with none policy" Feb 13 19:57:16.476047 kubelet[2515]: I0213 19:57:16.475900 2515 container_manager_linux.go:301] "Creating device plugin manager" Feb 13 19:57:16.476047 kubelet[2515]: I0213 19:57:16.475935 2515 state_mem.go:36] "Initialized new in-memory state store" Feb 13 19:57:16.476299 kubelet[2515]: I0213 19:57:16.476282 2515 kubelet.go:400] "Attempting to sync node with API server" Feb 13 19:57:16.476357 kubelet[2515]: I0213 19:57:16.476348 2515 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests" Feb 13 19:57:16.476439 kubelet[2515]: I0213 19:57:16.476429 2515 kubelet.go:312] "Adding apiserver pod source" Feb 13 19:57:16.476503 kubelet[2515]: I0213 19:57:16.476493 2515 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Feb 13 19:57:16.477863 kubelet[2515]: I0213 19:57:16.477839 2515 kuberuntime_manager.go:261] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Feb 13 19:57:16.477999 kubelet[2515]: I0213 19:57:16.477984 2515 kubelet.go:815] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Feb 13 19:57:16.479132 kubelet[2515]: I0213 19:57:16.478355 2515 server.go:1264] "Started kubelet" Feb 13 19:57:16.479635 kubelet[2515]: I0213 19:57:16.479610 2515 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Feb 13 19:57:16.480956 kubelet[2515]: I0213 19:57:16.480928 2515 volume_manager.go:291] "Starting Kubelet Volume Manager" Feb 13 19:57:16.481381 kubelet[2515]: I0213 19:57:16.481359 2515 desired_state_of_world_populator.go:149] "Desired 
state populator starts to run" Feb 13 19:57:16.481587 kubelet[2515]: I0213 19:57:16.481573 2515 reconciler.go:26] "Reconciler: start to sync state" Feb 13 19:57:16.483496 kubelet[2515]: I0213 19:57:16.483458 2515 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Feb 13 19:57:16.484605 kubelet[2515]: I0213 19:57:16.484574 2515 server.go:455] "Adding debug handlers to kubelet server" Feb 13 19:57:16.487285 kubelet[2515]: I0213 19:57:16.486768 2515 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Feb 13 19:57:16.487285 kubelet[2515]: I0213 19:57:16.487039 2515 server.go:227] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Feb 13 19:57:16.491479 kubelet[2515]: I0213 19:57:16.491454 2515 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Feb 13 19:57:16.492783 kubelet[2515]: I0213 19:57:16.492764 2515 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Feb 13 19:57:16.492886 kubelet[2515]: I0213 19:57:16.492875 2515 status_manager.go:217] "Starting to sync pod status with apiserver" Feb 13 19:57:16.492954 kubelet[2515]: I0213 19:57:16.492945 2515 kubelet.go:2337] "Starting kubelet main sync loop" Feb 13 19:57:16.493128 kubelet[2515]: E0213 19:57:16.493101 2515 kubelet.go:2361] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Feb 13 19:57:16.494606 kubelet[2515]: I0213 19:57:16.494578 2515 factory.go:221] Registration of the containerd container factory successfully Feb 13 19:57:16.494606 kubelet[2515]: I0213 19:57:16.494597 2515 factory.go:221] Registration of the systemd container factory successfully Feb 13 19:57:16.495284 kubelet[2515]: I0213 19:57:16.495243 2515 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Feb 13 19:57:16.501701 kubelet[2515]: E0213 19:57:16.501664 2515 kubelet.go:1467] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Feb 13 19:57:16.531567 kubelet[2515]: I0213 19:57:16.531529 2515 cpu_manager.go:214] "Starting CPU manager" policy="none" Feb 13 19:57:16.531567 kubelet[2515]: I0213 19:57:16.531548 2515 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Feb 13 19:57:16.531567 kubelet[2515]: I0213 19:57:16.531567 2515 state_mem.go:36] "Initialized new in-memory state store" Feb 13 19:57:16.531738 kubelet[2515]: I0213 19:57:16.531718 2515 state_mem.go:88] "Updated default CPUSet" cpuSet="" Feb 13 19:57:16.531773 kubelet[2515]: I0213 19:57:16.531735 2515 state_mem.go:96] "Updated CPUSet assignments" assignments={} Feb 13 19:57:16.531773 kubelet[2515]: I0213 19:57:16.531753 2515 policy_none.go:49] "None policy: Start" Feb 13 19:57:16.532496 kubelet[2515]: I0213 19:57:16.532464 2515 memory_manager.go:170] "Starting memorymanager" policy="None" Feb 13 19:57:16.532496 kubelet[2515]: I0213 19:57:16.532492 2515 state_mem.go:35] "Initializing new in-memory state store" Feb 13 19:57:16.532625 kubelet[2515]: I0213 19:57:16.532610 2515 state_mem.go:75] "Updated machine memory state" Feb 13 19:57:16.538174 kubelet[2515]: I0213 19:57:16.538146 2515 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Feb 13 19:57:16.538546 kubelet[2515]: I0213 19:57:16.538381 2515 container_log_manager.go:186] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Feb 13 19:57:16.538546 kubelet[2515]: I0213 19:57:16.538473 2515 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Feb 13 19:57:16.585447 kubelet[2515]: I0213 19:57:16.585358 2515 kubelet_node_status.go:73] "Attempting to register node" node="localhost" Feb 13 19:57:16.591040 kubelet[2515]: I0213 19:57:16.590948 2515 kubelet_node_status.go:112] "Node was previously registered" node="localhost" Feb 13 19:57:16.591040 kubelet[2515]: I0213 19:57:16.591032 2515 kubelet_node_status.go:76] "Successfully registered node" node="localhost" Feb 13 19:57:16.593863 kubelet[2515]: I0213 19:57:16.593835 2515 topology_manager.go:215] "Topology Admit Handler" podUID="6d11c729004b9d1886797a04e587067d" podNamespace="kube-system" podName="kube-apiserver-localhost" Feb 13 19:57:16.593946 kubelet[2515]: I0213 19:57:16.593930 2515 topology_manager.go:215] "Topology Admit Handler" podUID="dd3721fb1a67092819e35b40473f4063" podNamespace="kube-system" podName="kube-controller-manager-localhost" Feb 13 19:57:16.593977 kubelet[2515]: I0213 19:57:16.593970 2515 topology_manager.go:215] "Topology Admit Handler" podUID="8d610d6c43052dbc8df47eb68906a982" podNamespace="kube-system" podName="kube-scheduler-localhost" Feb 13 19:57:16.682153 kubelet[2515]: I0213 19:57:16.682109 2515 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/6d11c729004b9d1886797a04e587067d-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"6d11c729004b9d1886797a04e587067d\") " pod="kube-system/kube-apiserver-localhost" Feb 13 19:57:16.682153 kubelet[2515]: I0213 19:57:16.682147 2515 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/6d11c729004b9d1886797a04e587067d-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"6d11c729004b9d1886797a04e587067d\") " pod="kube-system/kube-apiserver-localhost" Feb 13 19:57:16.682325 kubelet[2515]: I0213 
19:57:16.682166 2515 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/6d11c729004b9d1886797a04e587067d-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"6d11c729004b9d1886797a04e587067d\") " pod="kube-system/kube-apiserver-localhost" Feb 13 19:57:16.682325 kubelet[2515]: I0213 19:57:16.682188 2515 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/dd3721fb1a67092819e35b40473f4063-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"dd3721fb1a67092819e35b40473f4063\") " pod="kube-system/kube-controller-manager-localhost" Feb 13 19:57:16.682325 kubelet[2515]: I0213 19:57:16.682202 2515 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/dd3721fb1a67092819e35b40473f4063-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"dd3721fb1a67092819e35b40473f4063\") " pod="kube-system/kube-controller-manager-localhost" Feb 13 19:57:16.682325 kubelet[2515]: I0213 19:57:16.682226 2515 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/dd3721fb1a67092819e35b40473f4063-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"dd3721fb1a67092819e35b40473f4063\") " pod="kube-system/kube-controller-manager-localhost" Feb 13 19:57:16.682325 kubelet[2515]: I0213 19:57:16.682240 2515 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/dd3721fb1a67092819e35b40473f4063-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"dd3721fb1a67092819e35b40473f4063\") " pod="kube-system/kube-controller-manager-localhost" Feb 13 19:57:16.682444 kubelet[2515]: I0213 19:57:16.682255 2515 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/dd3721fb1a67092819e35b40473f4063-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"dd3721fb1a67092819e35b40473f4063\") " pod="kube-system/kube-controller-manager-localhost" Feb 13 19:57:16.682444 kubelet[2515]: I0213 19:57:16.682271 2515 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/8d610d6c43052dbc8df47eb68906a982-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"8d610d6c43052dbc8df47eb68906a982\") " pod="kube-system/kube-scheduler-localhost" Feb 13 19:57:16.900629 kubelet[2515]: E0213 19:57:16.899833 2515 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:57:16.900629 kubelet[2515]: E0213 19:57:16.900282 2515 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:57:16.900737 kubelet[2515]: E0213 19:57:16.900683 2515 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:57:17.477299 kubelet[2515]: I0213 
19:57:17.477256 2515 apiserver.go:52] "Watching apiserver" Feb 13 19:57:17.482131 kubelet[2515]: I0213 19:57:17.482079 2515 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world" Feb 13 19:57:17.513307 kubelet[2515]: E0213 19:57:17.513015 2515 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:57:17.513307 kubelet[2515]: E0213 19:57:17.513126 2515 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:57:17.514332 kubelet[2515]: E0213 19:57:17.514273 2515 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:57:17.529439 kubelet[2515]: I0213 19:57:17.529384 2515 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=1.529370165 podStartE2EDuration="1.529370165s" podCreationTimestamp="2025-02-13 19:57:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-02-13 19:57:17.529095725 +0000 UTC m=+1.103260041" watchObservedRunningTime="2025-02-13 19:57:17.529370165 +0000 UTC m=+1.103534481" Feb 13 19:57:17.536225 kubelet[2515]: I0213 19:57:17.536129 2515 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=1.53611769 podStartE2EDuration="1.53611769s" podCreationTimestamp="2025-02-13 19:57:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-02-13 19:57:17.536088442 +0000 UTC m=+1.110252758" watchObservedRunningTime="2025-02-13 19:57:17.53611769 +0000 UTC m=+1.110282006" Feb 13 19:57:17.542893 kubelet[2515]: I0213 19:57:17.542612 2515 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=1.5426016580000002 podStartE2EDuration="1.542601658s" podCreationTimestamp="2025-02-13 19:57:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-02-13 19:57:17.542599858 +0000 UTC m=+1.116764174" watchObservedRunningTime="2025-02-13 19:57:17.542601658 +0000 UTC m=+1.116765974" Feb 13 19:57:17.880762 sudo[1577]: pam_unix(sudo:session): session closed for user root Feb 13 19:57:17.882271 sshd[1574]: pam_unix(sshd:session): session closed for user core Feb 13 19:57:17.884736 systemd[1]: sshd@4-10.0.0.134:22-10.0.0.1:60594.service: Deactivated successfully. Feb 13 19:57:17.886338 systemd[1]: session-5.scope: Deactivated successfully. Feb 13 19:57:17.886509 systemd[1]: session-5.scope: Consumed 5.444s CPU time, 190.2M memory peak, 0B memory swap peak. Feb 13 19:57:17.887584 systemd-logind[1425]: Session 5 logged out. Waiting for processes to exit. Feb 13 19:57:17.889395 systemd-logind[1425]: Removed session 5. 
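The pod_startup_latency_tracker durations above are just watchObservedRunningTime minus podCreationTimestamp; the pull timestamps are zero values because these are static pods, so the SLO and E2E durations coincide. A quick Go check using the kube-scheduler timestamps copied from the log, with the monotonic m=+ suffixes stripped before parsing:

package main

import (
    "fmt"
    "time"
)

func main() {
    const layout = "2006-01-02 15:04:05.999999999 -0700 MST"
    created, err := time.Parse(layout, "2025-02-13 19:57:16 +0000 UTC")
    if err != nil {
        panic(err)
    }
    running, err := time.Parse(layout, "2025-02-13 19:57:17.529370165 +0000 UTC")
    if err != nil {
        panic(err)
    }
    // Prints 1.529370165s, the podStartSLOduration logged for kube-scheduler-localhost.
    fmt.Println(running.Sub(created))
}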
Feb 13 19:57:18.514775 kubelet[2515]: E0213 19:57:18.514675 2515 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:57:18.515614 kubelet[2515]: E0213 19:57:18.515566 2515 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:57:20.556699 kubelet[2515]: E0213 19:57:20.556603 2515 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:57:25.763061 kubelet[2515]: E0213 19:57:25.762880 2515 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:57:26.526880 kubelet[2515]: E0213 19:57:26.526818 2515 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:57:27.726410 kubelet[2515]: E0213 19:57:27.726371 2515 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:57:28.103903 update_engine[1428]: I20250213 19:57:28.103253 1428 update_attempter.cc:509] Updating boot flags... Feb 13 19:57:28.125058 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 40 scanned by (udev-worker) (2591) Feb 13 19:57:28.152226 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 40 scanned by (udev-worker) (2592) Feb 13 19:57:28.171059 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 40 scanned by (udev-worker) (2592) Feb 13 19:57:30.564761 kubelet[2515]: E0213 19:57:30.564712 2515 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:57:31.532292 kubelet[2515]: E0213 19:57:31.532261 2515 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:57:31.808835 kubelet[2515]: I0213 19:57:31.808736 2515 kuberuntime_manager.go:1523] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Feb 13 19:57:31.809188 containerd[1446]: time="2025-02-13T19:57:31.809139025Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Feb 13 19:57:31.809574 kubelet[2515]: I0213 19:57:31.809336 2515 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Feb 13 19:57:32.433111 kubelet[2515]: I0213 19:57:32.432813 2515 topology_manager.go:215] "Topology Admit Handler" podUID="b648e46f-00e3-48e2-b66b-8b225a71ad3d" podNamespace="kube-system" podName="kube-proxy-xgfcx" Feb 13 19:57:32.439034 kubelet[2515]: I0213 19:57:32.437502 2515 topology_manager.go:215] "Topology Admit Handler" podUID="25643ab7-6101-403b-ac20-77f1fa4c78ee" podNamespace="kube-flannel" podName="kube-flannel-ds-42jkn" Feb 13 19:57:32.447767 systemd[1]: Created slice kubepods-besteffort-podb648e46f_00e3_48e2_b66b_8b225a71ad3d.slice - libcontainer container kubepods-besteffort-podb648e46f_00e3_48e2_b66b_8b225a71ad3d.slice. 
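The recurring dns.go:153 errors through this whole section are the kubelet capping resolv.conf at three nameservers (the glibc limit); the "applied nameserver line" it logs keeps the first three entries and drops the rest. A small Go sketch of that truncation; appliedNameservers is a hypothetical helper, and the four-entry resolv.conf is assumed from the applied line in the log:

package main

import (
    "bufio"
    "fmt"
    "strings"
)

// appliedNameservers mirrors the cap the kubelet applies: resolv.conf may
// list any number of nameservers, but only the first three are kept.
func appliedNameservers(resolvConf string) []string {
    var ns []string
    sc := bufio.NewScanner(strings.NewReader(resolvConf))
    for sc.Scan() {
        fields := strings.Fields(sc.Text())
        if len(fields) >= 2 && fields[0] == "nameserver" {
            ns = append(ns, fields[1])
        }
    }
    if len(ns) > 3 {
        ns = ns[:3] // the dropped entries are what the E0213 dns.go:153 lines complain about
    }
    return ns
}

func main() {
    conf := "nameserver 1.1.1.1\nnameserver 1.0.0.1\nnameserver 8.8.8.8\nnameserver 8.8.4.4\n"
    fmt.Println(appliedNameservers(conf)) // [1.1.1.1 1.0.0.1 8.8.8.8], the applied line in the log
}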
Feb 13 19:57:32.458929 systemd[1]: Created slice kubepods-burstable-pod25643ab7_6101_403b_ac20_77f1fa4c78ee.slice - libcontainer container kubepods-burstable-pod25643ab7_6101_403b_ac20_77f1fa4c78ee.slice. Feb 13 19:57:32.485059 kubelet[2515]: I0213 19:57:32.484956 2515 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/b648e46f-00e3-48e2-b66b-8b225a71ad3d-kube-proxy\") pod \"kube-proxy-xgfcx\" (UID: \"b648e46f-00e3-48e2-b66b-8b225a71ad3d\") " pod="kube-system/kube-proxy-xgfcx" Feb 13 19:57:32.485059 kubelet[2515]: I0213 19:57:32.484994 2515 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xtmnx\" (UniqueName: \"kubernetes.io/projected/b648e46f-00e3-48e2-b66b-8b225a71ad3d-kube-api-access-xtmnx\") pod \"kube-proxy-xgfcx\" (UID: \"b648e46f-00e3-48e2-b66b-8b225a71ad3d\") " pod="kube-system/kube-proxy-xgfcx" Feb 13 19:57:32.485059 kubelet[2515]: I0213 19:57:32.485012 2515 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flannel-cfg\" (UniqueName: \"kubernetes.io/configmap/25643ab7-6101-403b-ac20-77f1fa4c78ee-flannel-cfg\") pod \"kube-flannel-ds-42jkn\" (UID: \"25643ab7-6101-403b-ac20-77f1fa4c78ee\") " pod="kube-flannel/kube-flannel-ds-42jkn" Feb 13 19:57:32.485059 kubelet[2515]: I0213 19:57:32.485039 2515 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jdsq8\" (UniqueName: \"kubernetes.io/projected/25643ab7-6101-403b-ac20-77f1fa4c78ee-kube-api-access-jdsq8\") pod \"kube-flannel-ds-42jkn\" (UID: \"25643ab7-6101-403b-ac20-77f1fa4c78ee\") " pod="kube-flannel/kube-flannel-ds-42jkn" Feb 13 19:57:32.485059 kubelet[2515]: I0213 19:57:32.485056 2515 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/b648e46f-00e3-48e2-b66b-8b225a71ad3d-xtables-lock\") pod \"kube-proxy-xgfcx\" (UID: \"b648e46f-00e3-48e2-b66b-8b225a71ad3d\") " pod="kube-system/kube-proxy-xgfcx" Feb 13 19:57:32.485234 kubelet[2515]: I0213 19:57:32.485070 2515 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run\" (UniqueName: \"kubernetes.io/host-path/25643ab7-6101-403b-ac20-77f1fa4c78ee-run\") pod \"kube-flannel-ds-42jkn\" (UID: \"25643ab7-6101-403b-ac20-77f1fa4c78ee\") " pod="kube-flannel/kube-flannel-ds-42jkn" Feb 13 19:57:32.485234 kubelet[2515]: I0213 19:57:32.485085 2515 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni\" (UniqueName: \"kubernetes.io/host-path/25643ab7-6101-403b-ac20-77f1fa4c78ee-cni\") pod \"kube-flannel-ds-42jkn\" (UID: \"25643ab7-6101-403b-ac20-77f1fa4c78ee\") " pod="kube-flannel/kube-flannel-ds-42jkn" Feb 13 19:57:32.485234 kubelet[2515]: I0213 19:57:32.485100 2515 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/25643ab7-6101-403b-ac20-77f1fa4c78ee-xtables-lock\") pod \"kube-flannel-ds-42jkn\" (UID: \"25643ab7-6101-403b-ac20-77f1fa4c78ee\") " pod="kube-flannel/kube-flannel-ds-42jkn" Feb 13 19:57:32.485234 kubelet[2515]: I0213 19:57:32.485115 2515 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/b648e46f-00e3-48e2-b66b-8b225a71ad3d-lib-modules\") 
pod \"kube-proxy-xgfcx\" (UID: \"b648e46f-00e3-48e2-b66b-8b225a71ad3d\") " pod="kube-system/kube-proxy-xgfcx" Feb 13 19:57:32.485234 kubelet[2515]: I0213 19:57:32.485134 2515 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-plugin\" (UniqueName: \"kubernetes.io/host-path/25643ab7-6101-403b-ac20-77f1fa4c78ee-cni-plugin\") pod \"kube-flannel-ds-42jkn\" (UID: \"25643ab7-6101-403b-ac20-77f1fa4c78ee\") " pod="kube-flannel/kube-flannel-ds-42jkn" Feb 13 19:57:32.756359 kubelet[2515]: E0213 19:57:32.756329 2515 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:57:32.757104 containerd[1446]: time="2025-02-13T19:57:32.756811770Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-xgfcx,Uid:b648e46f-00e3-48e2-b66b-8b225a71ad3d,Namespace:kube-system,Attempt:0,}" Feb 13 19:57:32.761958 kubelet[2515]: E0213 19:57:32.761745 2515 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:57:32.762511 containerd[1446]: time="2025-02-13T19:57:32.762299860Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-flannel-ds-42jkn,Uid:25643ab7-6101-403b-ac20-77f1fa4c78ee,Namespace:kube-flannel,Attempt:0,}" Feb 13 19:57:32.777337 containerd[1446]: time="2025-02-13T19:57:32.777126514Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 19:57:32.777337 containerd[1446]: time="2025-02-13T19:57:32.777171040Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 19:57:32.777337 containerd[1446]: time="2025-02-13T19:57:32.777190082Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 19:57:32.777337 containerd[1446]: time="2025-02-13T19:57:32.777265372Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 19:57:32.790359 containerd[1446]: time="2025-02-13T19:57:32.790064036Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 19:57:32.790359 containerd[1446]: time="2025-02-13T19:57:32.790146807Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 19:57:32.790359 containerd[1446]: time="2025-02-13T19:57:32.790177451Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 19:57:32.790511 containerd[1446]: time="2025-02-13T19:57:32.790347394Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 19:57:32.794177 systemd[1]: Started cri-containerd-dee3689788a2517ffe358df8eefeecc7fc0612f8eba4cb2c6029eb80918d6e6e.scope - libcontainer container dee3689788a2517ffe358df8eefeecc7fc0612f8eba4cb2c6029eb80918d6e6e. Feb 13 19:57:32.813173 systemd[1]: Started cri-containerd-b622c636f86ea0f6b16e654b1476cd1559ddb092a43d2800d1315fcb7882656e.scope - libcontainer container b622c636f86ea0f6b16e654b1476cd1559ddb092a43d2800d1315fcb7882656e. 
Feb 13 19:57:32.813592 containerd[1446]: time="2025-02-13T19:57:32.813555243Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-xgfcx,Uid:b648e46f-00e3-48e2-b66b-8b225a71ad3d,Namespace:kube-system,Attempt:0,} returns sandbox id \"dee3689788a2517ffe358df8eefeecc7fc0612f8eba4cb2c6029eb80918d6e6e\"" Feb 13 19:57:32.815087 kubelet[2515]: E0213 19:57:32.814780 2515 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:57:32.818004 containerd[1446]: time="2025-02-13T19:57:32.817963590Z" level=info msg="CreateContainer within sandbox \"dee3689788a2517ffe358df8eefeecc7fc0612f8eba4cb2c6029eb80918d6e6e\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Feb 13 19:57:32.831991 containerd[1446]: time="2025-02-13T19:57:32.831597925Z" level=info msg="CreateContainer within sandbox \"dee3689788a2517ffe358df8eefeecc7fc0612f8eba4cb2c6029eb80918d6e6e\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"188842dda9cdc80753c0fe5444c82fa2934f629713058219e6e0f174b426c3d2\"" Feb 13 19:57:32.832167 containerd[1446]: time="2025-02-13T19:57:32.832112393Z" level=info msg="StartContainer for \"188842dda9cdc80753c0fe5444c82fa2934f629713058219e6e0f174b426c3d2\"" Feb 13 19:57:32.842092 containerd[1446]: time="2025-02-13T19:57:32.842010511Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-flannel-ds-42jkn,Uid:25643ab7-6101-403b-ac20-77f1fa4c78ee,Namespace:kube-flannel,Attempt:0,} returns sandbox id \"b622c636f86ea0f6b16e654b1476cd1559ddb092a43d2800d1315fcb7882656e\"" Feb 13 19:57:32.843110 kubelet[2515]: E0213 19:57:32.842783 2515 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:57:32.844100 containerd[1446]: time="2025-02-13T19:57:32.843919525Z" level=info msg="PullImage \"docker.io/flannel/flannel-cni-plugin:v1.1.2\"" Feb 13 19:57:32.857196 systemd[1]: Started cri-containerd-188842dda9cdc80753c0fe5444c82fa2934f629713058219e6e0f174b426c3d2.scope - libcontainer container 188842dda9cdc80753c0fe5444c82fa2934f629713058219e6e0f174b426c3d2. Feb 13 19:57:32.879252 containerd[1446]: time="2025-02-13T19:57:32.879176298Z" level=info msg="StartContainer for \"188842dda9cdc80753c0fe5444c82fa2934f629713058219e6e0f174b426c3d2\" returns successfully" Feb 13 19:57:33.536126 kubelet[2515]: E0213 19:57:33.536099 2515 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:57:33.543311 kubelet[2515]: I0213 19:57:33.543203 2515 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-xgfcx" podStartSLOduration=1.5431858040000002 podStartE2EDuration="1.543185804s" podCreationTimestamp="2025-02-13 19:57:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-02-13 19:57:33.542800435 +0000 UTC m=+17.116964751" watchObservedRunningTime="2025-02-13 19:57:33.543185804 +0000 UTC m=+17.117350200" Feb 13 19:57:34.278247 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3674357661.mount: Deactivated successfully. 
Feb 13 19:57:34.304521 containerd[1446]: time="2025-02-13T19:57:34.304477702Z" level=info msg="ImageCreate event name:\"docker.io/flannel/flannel-cni-plugin:v1.1.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:57:34.305374 containerd[1446]: time="2025-02-13T19:57:34.305215832Z" level=info msg="stop pulling image docker.io/flannel/flannel-cni-plugin:v1.1.2: active requests=0, bytes read=3673532" Feb 13 19:57:34.306252 containerd[1446]: time="2025-02-13T19:57:34.306219394Z" level=info msg="ImageCreate event name:\"sha256:b45062ceea496fc421523388cb91166abc7715a15c2e2cbab4e6f8c9d5dc0ab8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:57:34.308356 containerd[1446]: time="2025-02-13T19:57:34.308299486Z" level=info msg="ImageCreate event name:\"docker.io/flannel/flannel-cni-plugin@sha256:bf4b62b131666d040f35a327d906ee5a3418280b68a88d9b9c7e828057210443\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:57:34.309236 containerd[1446]: time="2025-02-13T19:57:34.309157430Z" level=info msg="Pulled image \"docker.io/flannel/flannel-cni-plugin:v1.1.2\" with image id \"sha256:b45062ceea496fc421523388cb91166abc7715a15c2e2cbab4e6f8c9d5dc0ab8\", repo tag \"docker.io/flannel/flannel-cni-plugin:v1.1.2\", repo digest \"docker.io/flannel/flannel-cni-plugin@sha256:bf4b62b131666d040f35a327d906ee5a3418280b68a88d9b9c7e828057210443\", size \"3662650\" in 1.465206301s" Feb 13 19:57:34.309236 containerd[1446]: time="2025-02-13T19:57:34.309191514Z" level=info msg="PullImage \"docker.io/flannel/flannel-cni-plugin:v1.1.2\" returns image reference \"sha256:b45062ceea496fc421523388cb91166abc7715a15c2e2cbab4e6f8c9d5dc0ab8\"" Feb 13 19:57:34.311114 containerd[1446]: time="2025-02-13T19:57:34.311085024Z" level=info msg="CreateContainer within sandbox \"b622c636f86ea0f6b16e654b1476cd1559ddb092a43d2800d1315fcb7882656e\" for container &ContainerMetadata{Name:install-cni-plugin,Attempt:0,}" Feb 13 19:57:34.320983 containerd[1446]: time="2025-02-13T19:57:34.320939340Z" level=info msg="CreateContainer within sandbox \"b622c636f86ea0f6b16e654b1476cd1559ddb092a43d2800d1315fcb7882656e\" for &ContainerMetadata{Name:install-cni-plugin,Attempt:0,} returns container id \"2f2b35095be6714fd359f5463fc9e43c34994e6838aea8fe55227ca58dee616e\"" Feb 13 19:57:34.321299 containerd[1446]: time="2025-02-13T19:57:34.321276141Z" level=info msg="StartContainer for \"2f2b35095be6714fd359f5463fc9e43c34994e6838aea8fe55227ca58dee616e\"" Feb 13 19:57:34.344170 systemd[1]: Started cri-containerd-2f2b35095be6714fd359f5463fc9e43c34994e6838aea8fe55227ca58dee616e.scope - libcontainer container 2f2b35095be6714fd359f5463fc9e43c34994e6838aea8fe55227ca58dee616e. Feb 13 19:57:34.362697 containerd[1446]: time="2025-02-13T19:57:34.362049091Z" level=info msg="StartContainer for \"2f2b35095be6714fd359f5463fc9e43c34994e6838aea8fe55227ca58dee616e\" returns successfully" Feb 13 19:57:34.366764 systemd[1]: cri-containerd-2f2b35095be6714fd359f5463fc9e43c34994e6838aea8fe55227ca58dee616e.scope: Deactivated successfully. 
Feb 13 19:57:34.411951 containerd[1446]: time="2025-02-13T19:57:34.411855256Z" level=info msg="shim disconnected" id=2f2b35095be6714fd359f5463fc9e43c34994e6838aea8fe55227ca58dee616e namespace=k8s.io Feb 13 19:57:34.411951 containerd[1446]: time="2025-02-13T19:57:34.411930825Z" level=warning msg="cleaning up after shim disconnected" id=2f2b35095be6714fd359f5463fc9e43c34994e6838aea8fe55227ca58dee616e namespace=k8s.io Feb 13 19:57:34.411951 containerd[1446]: time="2025-02-13T19:57:34.411951908Z" level=info msg="cleaning up dead shim" namespace=k8s.io Feb 13 19:57:34.538052 kubelet[2515]: E0213 19:57:34.537823 2515 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:57:34.538772 containerd[1446]: time="2025-02-13T19:57:34.538606522Z" level=info msg="PullImage \"docker.io/flannel/flannel:v0.22.0\"" Feb 13 19:57:34.596250 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-2f2b35095be6714fd359f5463fc9e43c34994e6838aea8fe55227ca58dee616e-rootfs.mount: Deactivated successfully. Feb 13 19:57:35.643471 containerd[1446]: time="2025-02-13T19:57:35.643420784Z" level=error msg="PullImage \"docker.io/flannel/flannel:v0.22.0\" failed" error="failed to pull and unpack image \"docker.io/flannel/flannel:v0.22.0\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/flannel/flannel/manifests/sha256:5f83f1243057458e27249157394e3859cf31cc075354af150d497f2ebc8b54db: 429 Too Many Requests - Server message: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit" Feb 13 19:57:35.644238 containerd[1446]: time="2025-02-13T19:57:35.643494753Z" level=info msg="stop pulling image docker.io/flannel/flannel:v0.22.0: active requests=0, bytes read=11052" Feb 13 19:57:35.644272 kubelet[2515]: E0213 19:57:35.643599 2515 remote_image.go:180] "PullImage from image service failed" err="rpc error: code = Unknown desc = failed to pull and unpack image \"docker.io/flannel/flannel:v0.22.0\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/flannel/flannel/manifests/sha256:5f83f1243057458e27249157394e3859cf31cc075354af150d497f2ebc8b54db: 429 Too Many Requests - Server message: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit" image="docker.io/flannel/flannel:v0.22.0" Feb 13 19:57:35.644272 kubelet[2515]: E0213 19:57:35.643640 2515 kuberuntime_image.go:55] "Failed to pull image" err="failed to pull and unpack image \"docker.io/flannel/flannel:v0.22.0\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/flannel/flannel/manifests/sha256:5f83f1243057458e27249157394e3859cf31cc075354af150d497f2ebc8b54db: 429 Too Many Requests - Server message: toomanyrequests: You have reached your pull rate limit. 
You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit" image="docker.io/flannel/flannel:v0.22.0" Feb 13 19:57:35.644494 kubelet[2515]: E0213 19:57:35.643812 2515 kuberuntime_manager.go:1256] init container &Container{Name:install-cni,Image:docker.io/flannel/flannel:v0.22.0,Command:[cp],Args:[-f /etc/kube-flannel/cni-conf.json /etc/cni/net.d/10-flannel.conflist],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:cni,ReadOnly:false,MountPath:/etc/cni/net.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:flannel-cfg,ReadOnly:false,MountPath:/etc/kube-flannel/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-jdsq8,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod kube-flannel-ds-42jkn_kube-flannel(25643ab7-6101-403b-ac20-77f1fa4c78ee): ErrImagePull: failed to pull and unpack image "docker.io/flannel/flannel:v0.22.0": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/flannel/flannel/manifests/sha256:5f83f1243057458e27249157394e3859cf31cc075354af150d497f2ebc8b54db: 429 Too Many Requests - Server message: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit Feb 13 19:57:35.644546 kubelet[2515]: E0213 19:57:35.643850 2515 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"install-cni\" with ErrImagePull: \"failed to pull and unpack image \\\"docker.io/flannel/flannel:v0.22.0\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/flannel/flannel/manifests/sha256:5f83f1243057458e27249157394e3859cf31cc075354af150d497f2ebc8b54db: 429 Too Many Requests - Server message: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit\"" pod="kube-flannel/kube-flannel-ds-42jkn" podUID="25643ab7-6101-403b-ac20-77f1fa4c78ee" Feb 13 19:57:36.540901 kubelet[2515]: E0213 19:57:36.540852 2515 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:57:36.542705 kubelet[2515]: E0213 19:57:36.541278 2515 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"install-cni\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/flannel/flannel:v0.22.0\\\"\"" pod="kube-flannel/kube-flannel-ds-42jkn" podUID="25643ab7-6101-403b-ac20-77f1fa4c78ee" Feb 13 19:57:41.663264 systemd[1]: Started sshd@5-10.0.0.134:22-10.0.0.1:36730.service - OpenSSH per-connection server daemon (10.0.0.1:36730). 
Feb 13 19:57:41.711093 sshd[2903]: Accepted publickey for core from 10.0.0.1 port 36730 ssh2: RSA SHA256:JGaeIbjf5IUSNUg1jnjkSVnSyX1OvNbTOClTMYH5eIk
Feb 13 19:57:41.712680 sshd[2903]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 19:57:41.716297 systemd-logind[1425]: New session 6 of user core.
Feb 13 19:57:41.728180 systemd[1]: Started session-6.scope - Session 6 of User core.
Feb 13 19:57:41.856758 sshd[2903]: pam_unix(sshd:session): session closed for user core
Feb 13 19:57:41.859742 systemd[1]: sshd@5-10.0.0.134:22-10.0.0.1:36730.service: Deactivated successfully.
Feb 13 19:57:41.861780 systemd[1]: session-6.scope: Deactivated successfully.
Feb 13 19:57:41.863319 systemd-logind[1425]: Session 6 logged out. Waiting for processes to exit.
Feb 13 19:57:41.864431 systemd-logind[1425]: Removed session 6.
Feb 13 19:57:46.867577 systemd[1]: Started sshd@6-10.0.0.134:22-10.0.0.1:53088.service - OpenSSH per-connection server daemon (10.0.0.1:53088).
Feb 13 19:57:46.907295 sshd[2919]: Accepted publickey for core from 10.0.0.1 port 53088 ssh2: RSA SHA256:JGaeIbjf5IUSNUg1jnjkSVnSyX1OvNbTOClTMYH5eIk
Feb 13 19:57:46.908496 sshd[2919]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 19:57:46.912163 systemd-logind[1425]: New session 7 of user core.
Feb 13 19:57:46.922167 systemd[1]: Started session-7.scope - Session 7 of User core.
Feb 13 19:57:47.026708 sshd[2919]: pam_unix(sshd:session): session closed for user core
Feb 13 19:57:47.029880 systemd[1]: sshd@6-10.0.0.134:22-10.0.0.1:53088.service: Deactivated successfully.
Feb 13 19:57:47.031664 systemd[1]: session-7.scope: Deactivated successfully.
Feb 13 19:57:47.032327 systemd-logind[1425]: Session 7 logged out. Waiting for processes to exit.
Feb 13 19:57:47.033117 systemd-logind[1425]: Removed session 7.
Feb 13 19:57:49.493544 kubelet[2515]: E0213 19:57:49.493500 2515 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 13 19:57:49.494362 containerd[1446]: time="2025-02-13T19:57:49.494325615Z" level=info msg="PullImage \"docker.io/flannel/flannel:v0.22.0\""
Feb 13 19:57:50.610100 containerd[1446]: time="2025-02-13T19:57:50.610037682Z" level=error msg="PullImage \"docker.io/flannel/flannel:v0.22.0\" failed" error="failed to pull and unpack image \"docker.io/flannel/flannel:v0.22.0\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/flannel/flannel/manifests/sha256:5f83f1243057458e27249157394e3859cf31cc075354af150d497f2ebc8b54db: 429 Too Many Requests - Server message: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit"
Feb 13 19:57:50.610449 containerd[1446]: time="2025-02-13T19:57:50.610123008Z" level=info msg="stop pulling image docker.io/flannel/flannel:v0.22.0: active requests=0, bytes read=11054"
Feb 13 19:57:50.610486 kubelet[2515]: E0213 19:57:50.610213 2515 remote_image.go:180] "PullImage from image service failed" err="rpc error: code = Unknown desc = failed to pull and unpack image \"docker.io/flannel/flannel:v0.22.0\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/flannel/flannel/manifests/sha256:5f83f1243057458e27249157394e3859cf31cc075354af150d497f2ebc8b54db: 429 Too Many Requests - Server message: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit" image="docker.io/flannel/flannel:v0.22.0"
Feb 13 19:57:50.610486 kubelet[2515]: E0213 19:57:50.610250 2515 kuberuntime_image.go:55] "Failed to pull image" err="failed to pull and unpack image \"docker.io/flannel/flannel:v0.22.0\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/flannel/flannel/manifests/sha256:5f83f1243057458e27249157394e3859cf31cc075354af150d497f2ebc8b54db: 429 Too Many Requests - Server message: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit" image="docker.io/flannel/flannel:v0.22.0"
Feb 13 19:57:50.610701 kubelet[2515]: E0213 19:57:50.610324 2515 kuberuntime_manager.go:1256] init container &Container{Name:install-cni,Image:docker.io/flannel/flannel:v0.22.0,Command:[cp],Args:[-f /etc/kube-flannel/cni-conf.json /etc/cni/net.d/10-flannel.conflist],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:cni,ReadOnly:false,MountPath:/etc/cni/net.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:flannel-cfg,ReadOnly:false,MountPath:/etc/kube-flannel/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-jdsq8,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod kube-flannel-ds-42jkn_kube-flannel(25643ab7-6101-403b-ac20-77f1fa4c78ee): ErrImagePull: failed to pull and unpack image "docker.io/flannel/flannel:v0.22.0": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/flannel/flannel/manifests/sha256:5f83f1243057458e27249157394e3859cf31cc075354af150d497f2ebc8b54db: 429 Too Many Requests - Server message: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit
Feb 13 19:57:50.610754 kubelet[2515]: E0213 19:57:50.610350 2515 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"install-cni\" with ErrImagePull: \"failed to pull and unpack image \\\"docker.io/flannel/flannel:v0.22.0\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/flannel/flannel/manifests/sha256:5f83f1243057458e27249157394e3859cf31cc075354af150d497f2ebc8b54db: 429 Too Many Requests - Server message: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit\"" pod="kube-flannel/kube-flannel-ds-42jkn" podUID="25643ab7-6101-403b-ac20-77f1fa4c78ee"
Feb 13 19:57:52.037357 systemd[1]: Started sshd@7-10.0.0.134:22-10.0.0.1:53094.service - OpenSSH per-connection server daemon (10.0.0.1:53094).
Feb 13 19:57:52.076586 sshd[2934]: Accepted publickey for core from 10.0.0.1 port 53094 ssh2: RSA SHA256:JGaeIbjf5IUSNUg1jnjkSVnSyX1OvNbTOClTMYH5eIk
Feb 13 19:57:52.077850 sshd[2934]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 19:57:52.081124 systemd-logind[1425]: New session 8 of user core.
Feb 13 19:57:52.091162 systemd[1]: Started session-8.scope - Session 8 of User core.
Feb 13 19:57:52.194486 sshd[2934]: pam_unix(sshd:session): session closed for user core
Feb 13 19:57:52.197465 systemd[1]: sshd@7-10.0.0.134:22-10.0.0.1:53094.service: Deactivated successfully.
Feb 13 19:57:52.199201 systemd[1]: session-8.scope: Deactivated successfully.
Feb 13 19:57:52.199715 systemd-logind[1425]: Session 8 logged out. Waiting for processes to exit.
Feb 13 19:57:52.200441 systemd-logind[1425]: Removed session 8.
Feb 13 19:57:57.204562 systemd[1]: Started sshd@8-10.0.0.134:22-10.0.0.1:40116.service - OpenSSH per-connection server daemon (10.0.0.1:40116).
Feb 13 19:57:57.244588 sshd[2950]: Accepted publickey for core from 10.0.0.1 port 40116 ssh2: RSA SHA256:JGaeIbjf5IUSNUg1jnjkSVnSyX1OvNbTOClTMYH5eIk
Feb 13 19:57:57.245767 sshd[2950]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 19:57:57.249520 systemd-logind[1425]: New session 9 of user core.
Feb 13 19:57:57.257170 systemd[1]: Started session-9.scope - Session 9 of User core.
Feb 13 19:57:57.360090 sshd[2950]: pam_unix(sshd:session): session closed for user core
Feb 13 19:57:57.363266 systemd[1]: sshd@8-10.0.0.134:22-10.0.0.1:40116.service: Deactivated successfully.
Feb 13 19:57:57.364995 systemd[1]: session-9.scope: Deactivated successfully.
Feb 13 19:57:57.365603 systemd-logind[1425]: Session 9 logged out. Waiting for processes to exit.
Feb 13 19:57:57.366377 systemd-logind[1425]: Removed session 9.
Feb 13 19:58:01.493679 kubelet[2515]: E0213 19:58:01.493614 2515 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 13 19:58:01.494524 kubelet[2515]: E0213 19:58:01.494489 2515 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"install-cni\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/flannel/flannel:v0.22.0\\\"\"" pod="kube-flannel/kube-flannel-ds-42jkn" podUID="25643ab7-6101-403b-ac20-77f1fa4c78ee"
Feb 13 19:58:02.370804 systemd[1]: Started sshd@9-10.0.0.134:22-10.0.0.1:40128.service - OpenSSH per-connection server daemon (10.0.0.1:40128).
Feb 13 19:58:02.411517 sshd[2965]: Accepted publickey for core from 10.0.0.1 port 40128 ssh2: RSA SHA256:JGaeIbjf5IUSNUg1jnjkSVnSyX1OvNbTOClTMYH5eIk
Feb 13 19:58:02.412946 sshd[2965]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 19:58:02.416544 systemd-logind[1425]: New session 10 of user core.
Feb 13 19:58:02.429225 systemd[1]: Started session-10.scope - Session 10 of User core.
Feb 13 19:58:02.533263 sshd[2965]: pam_unix(sshd:session): session closed for user core
Feb 13 19:58:02.536423 systemd[1]: sshd@9-10.0.0.134:22-10.0.0.1:40128.service: Deactivated successfully.
Feb 13 19:58:02.538201 systemd[1]: session-10.scope: Deactivated successfully.
Feb 13 19:58:02.538793 systemd-logind[1425]: Session 10 logged out. Waiting for processes to exit.
Feb 13 19:58:02.539559 systemd-logind[1425]: Removed session 10.
Feb 13 19:58:07.544548 systemd[1]: Started sshd@10-10.0.0.134:22-10.0.0.1:38832.service - OpenSSH per-connection server daemon (10.0.0.1:38832).
Feb 13 19:58:07.583311 sshd[2983]: Accepted publickey for core from 10.0.0.1 port 38832 ssh2: RSA SHA256:JGaeIbjf5IUSNUg1jnjkSVnSyX1OvNbTOClTMYH5eIk
Feb 13 19:58:07.584526 sshd[2983]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 19:58:07.588277 systemd-logind[1425]: New session 11 of user core.
Feb 13 19:58:07.599209 systemd[1]: Started session-11.scope - Session 11 of User core.
Feb 13 19:58:07.703103 sshd[2983]: pam_unix(sshd:session): session closed for user core
Feb 13 19:58:07.705932 systemd[1]: sshd@10-10.0.0.134:22-10.0.0.1:38832.service: Deactivated successfully.
Feb 13 19:58:07.708192 systemd[1]: session-11.scope: Deactivated successfully.
Feb 13 19:58:07.709067 systemd-logind[1425]: Session 11 logged out. Waiting for processes to exit.
Feb 13 19:58:07.709884 systemd-logind[1425]: Removed session 11.
Feb 13 19:58:12.493762 kubelet[2515]: E0213 19:58:12.493617 2515 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 13 19:58:12.495584 containerd[1446]: time="2025-02-13T19:58:12.495297496Z" level=info msg="PullImage \"docker.io/flannel/flannel:v0.22.0\""
Feb 13 19:58:12.713668 systemd[1]: Started sshd@11-10.0.0.134:22-10.0.0.1:45300.service - OpenSSH per-connection server daemon (10.0.0.1:45300).
Feb 13 19:58:12.754077 sshd[2998]: Accepted publickey for core from 10.0.0.1 port 45300 ssh2: RSA SHA256:JGaeIbjf5IUSNUg1jnjkSVnSyX1OvNbTOClTMYH5eIk
Feb 13 19:58:12.755168 sshd[2998]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 19:58:12.763103 systemd-logind[1425]: New session 12 of user core.
Feb 13 19:58:12.775274 systemd[1]: Started session-12.scope - Session 12 of User core.
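The recurring dns.go:153 "Nameserver limits exceeded" lines are kubelet warning that the node's /etc/resolv.conf lists more nameservers than the resolver will actually use, so it truncates to the first three (here 1.1.1.1, 1.0.0.1, 8.8.8.8). A rough Python equivalent of that check, assuming the same limit of 3 (glibc's classic MAXNS, which kubelet's validation mirrors); this is an illustration, not kubelet's actual implementation:

```python
from pathlib import Path

MAX_NAMESERVERS = 3  # assumed limit, matching glibc's MAXNS

def effective_nameservers(resolv_conf: str = "/etc/resolv.conf") -> list:
    """Return the nameservers that will actually be applied.

    Anything past the third nameserver is ignored, so we warn and
    report the truncated "applied nameserver line", like the log above.
    """
    servers = []
    for line in Path(resolv_conf).read_text().splitlines():
        fields = line.split()
        if len(fields) >= 2 and fields[0] == "nameserver":
            servers.append(fields[1])
    if len(servers) > MAX_NAMESERVERS:
        print("Nameserver limits exceeded, applied nameserver line is:",
              " ".join(servers[:MAX_NAMESERVERS]))
    return servers[:MAX_NAMESERVERS]
```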
Feb 13 19:58:12.882809 sshd[2998]: pam_unix(sshd:session): session closed for user core
Feb 13 19:58:12.886267 systemd[1]: sshd@11-10.0.0.134:22-10.0.0.1:45300.service: Deactivated successfully.
Feb 13 19:58:12.887925 systemd[1]: session-12.scope: Deactivated successfully.
Feb 13 19:58:12.888622 systemd-logind[1425]: Session 12 logged out. Waiting for processes to exit.
Feb 13 19:58:12.889426 systemd-logind[1425]: Removed session 12.
Feb 13 19:58:13.617968 containerd[1446]: time="2025-02-13T19:58:13.617909195Z" level=error msg="PullImage \"docker.io/flannel/flannel:v0.22.0\" failed" error="failed to pull and unpack image \"docker.io/flannel/flannel:v0.22.0\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/flannel/flannel/manifests/sha256:5f83f1243057458e27249157394e3859cf31cc075354af150d497f2ebc8b54db: 429 Too Many Requests - Server message: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit"
Feb 13 19:58:13.618334 containerd[1446]: time="2025-02-13T19:58:13.617969918Z" level=info msg="stop pulling image docker.io/flannel/flannel:v0.22.0: active requests=0, bytes read=11053"
Feb 13 19:58:13.618378 kubelet[2515]: E0213 19:58:13.618098 2515 remote_image.go:180] "PullImage from image service failed" err="rpc error: code = Unknown desc = failed to pull and unpack image \"docker.io/flannel/flannel:v0.22.0\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/flannel/flannel/manifests/sha256:5f83f1243057458e27249157394e3859cf31cc075354af150d497f2ebc8b54db: 429 Too Many Requests - Server message: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit" image="docker.io/flannel/flannel:v0.22.0"
Feb 13 19:58:13.618378 kubelet[2515]: E0213 19:58:13.618137 2515 kuberuntime_image.go:55] "Failed to pull image" err="failed to pull and unpack image \"docker.io/flannel/flannel:v0.22.0\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/flannel/flannel/manifests/sha256:5f83f1243057458e27249157394e3859cf31cc075354af150d497f2ebc8b54db: 429 Too Many Requests - Server message: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit" image="docker.io/flannel/flannel:v0.22.0"
Feb 13 19:58:13.618672 kubelet[2515]: E0213 19:58:13.618216 2515 kuberuntime_manager.go:1256] init container &Container{Name:install-cni,Image:docker.io/flannel/flannel:v0.22.0,Command:[cp],Args:[-f /etc/kube-flannel/cni-conf.json /etc/cni/net.d/10-flannel.conflist],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:cni,ReadOnly:false,MountPath:/etc/cni/net.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:flannel-cfg,ReadOnly:false,MountPath:/etc/kube-flannel/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-jdsq8,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod kube-flannel-ds-42jkn_kube-flannel(25643ab7-6101-403b-ac20-77f1fa4c78ee): ErrImagePull: failed to pull and unpack image "docker.io/flannel/flannel:v0.22.0": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/flannel/flannel/manifests/sha256:5f83f1243057458e27249157394e3859cf31cc075354af150d497f2ebc8b54db: 429 Too Many Requests - Server message: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit
Feb 13 19:58:13.618754 kubelet[2515]: E0213 19:58:13.618249 2515 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"install-cni\" with ErrImagePull: \"failed to pull and unpack image \\\"docker.io/flannel/flannel:v0.22.0\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/flannel/flannel/manifests/sha256:5f83f1243057458e27249157394e3859cf31cc075354af150d497f2ebc8b54db: 429 Too Many Requests - Server message: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit\"" pod="kube-flannel/kube-flannel-ds-42jkn" podUID="25643ab7-6101-403b-ac20-77f1fa4c78ee"
Feb 13 19:58:17.897715 systemd[1]: Started sshd@12-10.0.0.134:22-10.0.0.1:45302.service - OpenSSH per-connection server daemon (10.0.0.1:45302).
Feb 13 19:58:17.937695 sshd[3015]: Accepted publickey for core from 10.0.0.1 port 45302 ssh2: RSA SHA256:JGaeIbjf5IUSNUg1jnjkSVnSyX1OvNbTOClTMYH5eIk
Feb 13 19:58:17.939003 sshd[3015]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 19:58:17.942583 systemd-logind[1425]: New session 13 of user core.
Feb 13 19:58:17.952158 systemd[1]: Started session-13.scope - Session 13 of User core.
Feb 13 19:58:18.055749 sshd[3015]: pam_unix(sshd:session): session closed for user core
Feb 13 19:58:18.058599 systemd[1]: sshd@12-10.0.0.134:22-10.0.0.1:45302.service: Deactivated successfully.
Feb 13 19:58:18.060155 systemd[1]: session-13.scope: Deactivated successfully.
Feb 13 19:58:18.061356 systemd-logind[1425]: Session 13 logged out. Waiting for processes to exit.
Feb 13 19:58:18.062154 systemd-logind[1425]: Removed session 13.
Feb 13 19:58:23.066536 systemd[1]: Started sshd@13-10.0.0.134:22-10.0.0.1:52312.service - OpenSSH per-connection server daemon (10.0.0.1:52312).
Feb 13 19:58:23.106170 sshd[3031]: Accepted publickey for core from 10.0.0.1 port 52312 ssh2: RSA SHA256:JGaeIbjf5IUSNUg1jnjkSVnSyX1OvNbTOClTMYH5eIk
Feb 13 19:58:23.107418 sshd[3031]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 19:58:23.110944 systemd-logind[1425]: New session 14 of user core.
Feb 13 19:58:23.117173 systemd[1]: Started session-14.scope - Session 14 of User core.
Feb 13 19:58:23.220997 sshd[3031]: pam_unix(sshd:session): session closed for user core
Feb 13 19:58:23.224311 systemd[1]: sshd@13-10.0.0.134:22-10.0.0.1:52312.service: Deactivated successfully.
Feb 13 19:58:23.227517 systemd[1]: session-14.scope: Deactivated successfully.
Feb 13 19:58:23.228160 systemd-logind[1425]: Session 14 logged out. Waiting for processes to exit.
Feb 13 19:58:23.228942 systemd-logind[1425]: Removed session 14.
Feb 13 19:58:26.494514 kubelet[2515]: E0213 19:58:26.494475 2515 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 13 19:58:26.496814 kubelet[2515]: E0213 19:58:26.496204 2515 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"install-cni\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/flannel/flannel:v0.22.0\\\"\"" pod="kube-flannel/kube-flannel-ds-42jkn" podUID="25643ab7-6101-403b-ac20-77f1fa4c78ee"
Feb 13 19:58:28.233568 systemd[1]: Started sshd@14-10.0.0.134:22-10.0.0.1:52322.service - OpenSSH per-connection server daemon (10.0.0.1:52322).
Feb 13 19:58:28.272704 sshd[3046]: Accepted publickey for core from 10.0.0.1 port 52322 ssh2: RSA SHA256:JGaeIbjf5IUSNUg1jnjkSVnSyX1OvNbTOClTMYH5eIk
Feb 13 19:58:28.273896 sshd[3046]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 19:58:28.277217 systemd-logind[1425]: New session 15 of user core.
Feb 13 19:58:28.287162 systemd[1]: Started session-15.scope - Session 15 of User core.
Feb 13 19:58:28.390528 sshd[3046]: pam_unix(sshd:session): session closed for user core
Feb 13 19:58:28.393702 systemd[1]: sshd@14-10.0.0.134:22-10.0.0.1:52322.service: Deactivated successfully.
Feb 13 19:58:28.395313 systemd[1]: session-15.scope: Deactivated successfully.
Feb 13 19:58:28.397567 systemd-logind[1425]: Session 15 logged out. Waiting for processes to exit.
Feb 13 19:58:28.398338 systemd-logind[1425]: Removed session 15.
Feb 13 19:58:33.400707 systemd[1]: Started sshd@15-10.0.0.134:22-10.0.0.1:41614.service - OpenSSH per-connection server daemon (10.0.0.1:41614).
Feb 13 19:58:33.439703 sshd[3064]: Accepted publickey for core from 10.0.0.1 port 41614 ssh2: RSA SHA256:JGaeIbjf5IUSNUg1jnjkSVnSyX1OvNbTOClTMYH5eIk
Feb 13 19:58:33.440936 sshd[3064]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 19:58:33.445028 systemd-logind[1425]: New session 16 of user core.
Feb 13 19:58:33.459173 systemd[1]: Started session-16.scope - Session 16 of User core.
Feb 13 19:58:33.564057 sshd[3064]: pam_unix(sshd:session): session closed for user core
Feb 13 19:58:33.567776 systemd[1]: sshd@15-10.0.0.134:22-10.0.0.1:41614.service: Deactivated successfully.
Feb 13 19:58:33.569367 systemd[1]: session-16.scope: Deactivated successfully.
Feb 13 19:58:33.569950 systemd-logind[1425]: Session 16 logged out. Waiting for processes to exit.
Feb 13 19:58:33.570762 systemd-logind[1425]: Removed session 16.
Feb 13 19:58:35.493783 kubelet[2515]: E0213 19:58:35.493732 2515 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 13 19:58:38.574820 systemd[1]: Started sshd@16-10.0.0.134:22-10.0.0.1:41618.service - OpenSSH per-connection server daemon (10.0.0.1:41618).
Feb 13 19:58:38.613657 sshd[3079]: Accepted publickey for core from 10.0.0.1 port 41618 ssh2: RSA SHA256:JGaeIbjf5IUSNUg1jnjkSVnSyX1OvNbTOClTMYH5eIk
Feb 13 19:58:38.614870 sshd[3079]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 19:58:38.618908 systemd-logind[1425]: New session 17 of user core.
Feb 13 19:58:38.627187 systemd[1]: Started session-17.scope - Session 17 of User core.
Feb 13 19:58:38.730334 sshd[3079]: pam_unix(sshd:session): session closed for user core
Feb 13 19:58:38.733068 systemd[1]: sshd@16-10.0.0.134:22-10.0.0.1:41618.service: Deactivated successfully.
Feb 13 19:58:38.734644 systemd[1]: session-17.scope: Deactivated successfully.
Feb 13 19:58:38.735939 systemd-logind[1425]: Session 17 logged out. Waiting for processes to exit.
Feb 13 19:58:38.736880 systemd-logind[1425]: Removed session 17.
Feb 13 19:58:40.494800 kubelet[2515]: E0213 19:58:40.494498 2515 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 13 19:58:40.495170 kubelet[2515]: E0213 19:58:40.495152 2515 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"install-cni\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/flannel/flannel:v0.22.0\\\"\"" pod="kube-flannel/kube-flannel-ds-42jkn" podUID="25643ab7-6101-403b-ac20-77f1fa4c78ee"
Feb 13 19:58:43.740598 systemd[1]: Started sshd@17-10.0.0.134:22-10.0.0.1:58270.service - OpenSSH per-connection server daemon (10.0.0.1:58270).
Feb 13 19:58:43.781133 sshd[3094]: Accepted publickey for core from 10.0.0.1 port 58270 ssh2: RSA SHA256:JGaeIbjf5IUSNUg1jnjkSVnSyX1OvNbTOClTMYH5eIk
Feb 13 19:58:43.781938 sshd[3094]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 19:58:43.785809 systemd-logind[1425]: New session 18 of user core.
Feb 13 19:58:43.798152 systemd[1]: Started session-18.scope - Session 18 of User core.
Feb 13 19:58:43.901012 sshd[3094]: pam_unix(sshd:session): session closed for user core
Feb 13 19:58:43.903281 systemd[1]: sshd@17-10.0.0.134:22-10.0.0.1:58270.service: Deactivated successfully.
Feb 13 19:58:43.904809 systemd[1]: session-18.scope: Deactivated successfully.
Feb 13 19:58:43.906173 systemd-logind[1425]: Session 18 logged out. Waiting for processes to exit.
Feb 13 19:58:43.907338 systemd-logind[1425]: Removed session 18.
Feb 13 19:58:48.910556 systemd[1]: Started sshd@18-10.0.0.134:22-10.0.0.1:58282.service - OpenSSH per-connection server daemon (10.0.0.1:58282).
Feb 13 19:58:48.951953 sshd[3110]: Accepted publickey for core from 10.0.0.1 port 58282 ssh2: RSA SHA256:JGaeIbjf5IUSNUg1jnjkSVnSyX1OvNbTOClTMYH5eIk
Feb 13 19:58:48.952351 sshd[3110]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 19:58:48.956353 systemd-logind[1425]: New session 19 of user core.
Feb 13 19:58:48.965156 systemd[1]: Started session-19.scope - Session 19 of User core.
Feb 13 19:58:49.067235 sshd[3110]: pam_unix(sshd:session): session closed for user core
Feb 13 19:58:49.070481 systemd[1]: sshd@18-10.0.0.134:22-10.0.0.1:58282.service: Deactivated successfully.
Feb 13 19:58:49.072038 systemd[1]: session-19.scope: Deactivated successfully.
Feb 13 19:58:49.073649 systemd-logind[1425]: Session 19 logged out. Waiting for processes to exit.
Feb 13 19:58:49.074539 systemd-logind[1425]: Removed session 19.
Feb 13 19:58:53.496199 kubelet[2515]: E0213 19:58:53.494292 2515 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 13 19:58:54.079493 systemd[1]: Started sshd@19-10.0.0.134:22-10.0.0.1:37478.service - OpenSSH per-connection server daemon (10.0.0.1:37478).
Feb 13 19:58:54.118450 sshd[3125]: Accepted publickey for core from 10.0.0.1 port 37478 ssh2: RSA SHA256:JGaeIbjf5IUSNUg1jnjkSVnSyX1OvNbTOClTMYH5eIk
Feb 13 19:58:54.119604 sshd[3125]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 19:58:54.123318 systemd-logind[1425]: New session 20 of user core.
Feb 13 19:58:54.144166 systemd[1]: Started session-20.scope - Session 20 of User core.
Feb 13 19:58:54.251368 sshd[3125]: pam_unix(sshd:session): session closed for user core
Feb 13 19:58:54.254454 systemd[1]: sshd@19-10.0.0.134:22-10.0.0.1:37478.service: Deactivated successfully.
Feb 13 19:58:54.256130 systemd[1]: session-20.scope: Deactivated successfully.
Feb 13 19:58:54.256689 systemd-logind[1425]: Session 20 logged out. Waiting for processes to exit.
Feb 13 19:58:54.257696 systemd-logind[1425]: Removed session 20.
Feb 13 19:58:54.494506 kubelet[2515]: E0213 19:58:54.494419 2515 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 13 19:58:55.494164 kubelet[2515]: E0213 19:58:55.494134 2515 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 13 19:58:55.494795 containerd[1446]: time="2025-02-13T19:58:55.494765486Z" level=info msg="PullImage \"docker.io/flannel/flannel:v0.22.0\""
Feb 13 19:58:56.624423 containerd[1446]: time="2025-02-13T19:58:56.624323279Z" level=error msg="PullImage \"docker.io/flannel/flannel:v0.22.0\" failed" error="failed to pull and unpack image \"docker.io/flannel/flannel:v0.22.0\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/flannel/flannel/manifests/sha256:5f83f1243057458e27249157394e3859cf31cc075354af150d497f2ebc8b54db: 429 Too Many Requests - Server message: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit"
Feb 13 19:58:56.624423 containerd[1446]: time="2025-02-13T19:58:56.624398720Z" level=info msg="stop pulling image docker.io/flannel/flannel:v0.22.0: active requests=0, bytes read=11054"
Feb 13 19:58:56.624819 kubelet[2515]: E0213 19:58:56.624509 2515 remote_image.go:180] "PullImage from image service failed" err="rpc error: code = Unknown desc = failed to pull and unpack image \"docker.io/flannel/flannel:v0.22.0\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/flannel/flannel/manifests/sha256:5f83f1243057458e27249157394e3859cf31cc075354af150d497f2ebc8b54db: 429 Too Many Requests - Server message: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit" image="docker.io/flannel/flannel:v0.22.0"
Feb 13 19:58:56.624819 kubelet[2515]: E0213 19:58:56.624552 2515 kuberuntime_image.go:55] "Failed to pull image" err="failed to pull and unpack image \"docker.io/flannel/flannel:v0.22.0\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/flannel/flannel/manifests/sha256:5f83f1243057458e27249157394e3859cf31cc075354af150d497f2ebc8b54db: 429 Too Many Requests - Server message: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit" image="docker.io/flannel/flannel:v0.22.0"
Feb 13 19:58:56.625073 kubelet[2515]: E0213 19:58:56.624640 2515 kuberuntime_manager.go:1256] init container &Container{Name:install-cni,Image:docker.io/flannel/flannel:v0.22.0,Command:[cp],Args:[-f /etc/kube-flannel/cni-conf.json /etc/cni/net.d/10-flannel.conflist],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:cni,ReadOnly:false,MountPath:/etc/cni/net.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:flannel-cfg,ReadOnly:false,MountPath:/etc/kube-flannel/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-jdsq8,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod kube-flannel-ds-42jkn_kube-flannel(25643ab7-6101-403b-ac20-77f1fa4c78ee): ErrImagePull: failed to pull and unpack image "docker.io/flannel/flannel:v0.22.0": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/flannel/flannel/manifests/sha256:5f83f1243057458e27249157394e3859cf31cc075354af150d497f2ebc8b54db: 429 Too Many Requests - Server message: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit
Feb 13 19:58:56.625128 kubelet[2515]: E0213 19:58:56.624668 2515 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"install-cni\" with ErrImagePull: \"failed to pull and unpack image \\\"docker.io/flannel/flannel:v0.22.0\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/flannel/flannel/manifests/sha256:5f83f1243057458e27249157394e3859cf31cc075354af150d497f2ebc8b54db: 429 Too Many Requests - Server message: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit\"" pod="kube-flannel/kube-flannel-ds-42jkn" podUID="25643ab7-6101-403b-ac20-77f1fa4c78ee"
Feb 13 19:58:59.261334 systemd[1]: Started sshd@20-10.0.0.134:22-10.0.0.1:37480.service - OpenSSH per-connection server daemon (10.0.0.1:37480).
Feb 13 19:58:59.302043 sshd[3140]: Accepted publickey for core from 10.0.0.1 port 37480 ssh2: RSA SHA256:JGaeIbjf5IUSNUg1jnjkSVnSyX1OvNbTOClTMYH5eIk
Feb 13 19:58:59.303274 sshd[3140]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 19:58:59.307223 systemd-logind[1425]: New session 21 of user core.
Feb 13 19:58:59.320152 systemd[1]: Started session-21.scope - Session 21 of User core.
Feb 13 19:58:59.424869 sshd[3140]: pam_unix(sshd:session): session closed for user core
Feb 13 19:58:59.428131 systemd[1]: sshd@20-10.0.0.134:22-10.0.0.1:37480.service: Deactivated successfully.
Feb 13 19:58:59.429730 systemd[1]: session-21.scope: Deactivated successfully.
Feb 13 19:58:59.431080 systemd-logind[1425]: Session 21 logged out. Waiting for processes to exit.
Feb 13 19:58:59.432135 systemd-logind[1425]: Removed session 21.
Feb 13 19:59:00.494771 kubelet[2515]: E0213 19:59:00.494395 2515 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 13 19:59:04.435874 systemd[1]: Started sshd@21-10.0.0.134:22-10.0.0.1:50960.service - OpenSSH per-connection server daemon (10.0.0.1:50960).
Feb 13 19:59:04.474603 sshd[3157]: Accepted publickey for core from 10.0.0.1 port 50960 ssh2: RSA SHA256:JGaeIbjf5IUSNUg1jnjkSVnSyX1OvNbTOClTMYH5eIk
Feb 13 19:59:04.475808 sshd[3157]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 19:59:04.479319 systemd-logind[1425]: New session 22 of user core.
Feb 13 19:59:04.490170 systemd[1]: Started session-22.scope - Session 22 of User core.
Feb 13 19:59:04.594058 sshd[3157]: pam_unix(sshd:session): session closed for user core
Feb 13 19:59:04.597731 systemd[1]: sshd@21-10.0.0.134:22-10.0.0.1:50960.service: Deactivated successfully.
Feb 13 19:59:04.600553 systemd[1]: session-22.scope: Deactivated successfully.
Feb 13 19:59:04.601144 systemd-logind[1425]: Session 22 logged out. Waiting for processes to exit.
Feb 13 19:59:04.602105 systemd-logind[1425]: Removed session 22.
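Every pull attempt above dies on Docker Hub's anonymous pull quota (HTTP 429 from registry-1.docker.io). Docker documents a way to inspect the remaining quota without consuming it: fetch an anonymous token scoped to the special ratelimitpreview/test repository, then issue a HEAD request against its manifest and read the ratelimit-* response headers (HEAD requests are documented as not counting toward the limit). A sketch using only the Python standard library; the endpoint and header names follow Docker's published docs and may evolve:

```python
import json
import urllib.request

# Anonymous token for the rate-limit-preview repository Docker
# provides specifically for checking pull quota.
TOKEN_URL = ("https://auth.docker.io/token"
             "?service=registry.docker.io"
             "&scope=repository:ratelimitpreview/test:pull")

with urllib.request.urlopen(TOKEN_URL) as resp:
    token = json.load(resp)["token"]

# HEAD on the manifest returns the quota headers without being
# counted as a pull (only GETs on manifests are counted).
req = urllib.request.Request(
    "https://registry-1.docker.io/v2/ratelimitpreview/test/manifests/latest",
    method="HEAD",
    headers={"Authorization": f"Bearer {token}"},
)
with urllib.request.urlopen(req) as resp:
    print("ratelimit-limit:    ", resp.headers.get("ratelimit-limit"))
    print("ratelimit-remaining:", resp.headers.get("ratelimit-remaining"))
```

Authenticating the node's pulls (an imagePullSecret for the flannel DaemonSet, or registry credentials/mirror configuration in containerd) is the remedy the 429 message itself points to.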
Feb 13 19:59:09.494346 kubelet[2515]: E0213 19:59:09.494309 2515 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 13 19:59:09.495369 kubelet[2515]: E0213 19:59:09.494871 2515 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"install-cni\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/flannel/flannel:v0.22.0\\\"\"" pod="kube-flannel/kube-flannel-ds-42jkn" podUID="25643ab7-6101-403b-ac20-77f1fa4c78ee"
Feb 13 19:59:09.604735 systemd[1]: Started sshd@22-10.0.0.134:22-10.0.0.1:50976.service - OpenSSH per-connection server daemon (10.0.0.1:50976).
Feb 13 19:59:09.643703 sshd[3175]: Accepted publickey for core from 10.0.0.1 port 50976 ssh2: RSA SHA256:JGaeIbjf5IUSNUg1jnjkSVnSyX1OvNbTOClTMYH5eIk
Feb 13 19:59:09.644916 sshd[3175]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 19:59:09.648392 systemd-logind[1425]: New session 23 of user core.
Feb 13 19:59:09.659174 systemd[1]: Started session-23.scope - Session 23 of User core.
Feb 13 19:59:09.765269 sshd[3175]: pam_unix(sshd:session): session closed for user core
Feb 13 19:59:09.768138 systemd[1]: sshd@22-10.0.0.134:22-10.0.0.1:50976.service: Deactivated successfully.
Feb 13 19:59:09.769715 systemd[1]: session-23.scope: Deactivated successfully.
Feb 13 19:59:09.771487 systemd-logind[1425]: Session 23 logged out. Waiting for processes to exit.
Feb 13 19:59:09.772578 systemd-logind[1425]: Removed session 23.
Feb 13 19:59:14.781517 systemd[1]: Started sshd@23-10.0.0.134:22-10.0.0.1:60586.service - OpenSSH per-connection server daemon (10.0.0.1:60586).
Feb 13 19:59:14.821485 sshd[3191]: Accepted publickey for core from 10.0.0.1 port 60586 ssh2: RSA SHA256:JGaeIbjf5IUSNUg1jnjkSVnSyX1OvNbTOClTMYH5eIk
Feb 13 19:59:14.822769 sshd[3191]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 19:59:14.826383 systemd-logind[1425]: New session 24 of user core.
Feb 13 19:59:14.836196 systemd[1]: Started session-24.scope - Session 24 of User core.
Feb 13 19:59:14.942934 sshd[3191]: pam_unix(sshd:session): session closed for user core
Feb 13 19:59:14.945461 systemd[1]: sshd@23-10.0.0.134:22-10.0.0.1:60586.service: Deactivated successfully.
Feb 13 19:59:14.947114 systemd[1]: session-24.scope: Deactivated successfully.
Feb 13 19:59:14.948897 systemd-logind[1425]: Session 24 logged out. Waiting for processes to exit.
Feb 13 19:59:14.950655 systemd-logind[1425]: Removed session 24.
Feb 13 19:59:16.566405 kubelet[2515]: E0213 19:59:16.566364 2515 kubelet_node_status.go:456] "Node not becoming ready in time after startup"
Feb 13 19:59:19.953943 systemd[1]: Started sshd@24-10.0.0.134:22-10.0.0.1:60588.service - OpenSSH per-connection server daemon (10.0.0.1:60588).
Feb 13 19:59:19.993180 sshd[3208]: Accepted publickey for core from 10.0.0.1 port 60588 ssh2: RSA SHA256:JGaeIbjf5IUSNUg1jnjkSVnSyX1OvNbTOClTMYH5eIk
Feb 13 19:59:19.994405 sshd[3208]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 19:59:19.998080 systemd-logind[1425]: New session 25 of user core.
Feb 13 19:59:20.018277 systemd[1]: Started session-25.scope - Session 25 of User core.
Feb 13 19:59:20.121455 sshd[3208]: pam_unix(sshd:session): session closed for user core
Feb 13 19:59:20.124602 systemd[1]: sshd@24-10.0.0.134:22-10.0.0.1:60588.service: Deactivated successfully.
Feb 13 19:59:20.126741 systemd[1]: session-25.scope: Deactivated successfully.
Feb 13 19:59:20.128548 systemd-logind[1425]: Session 25 logged out. Waiting for processes to exit.
Feb 13 19:59:20.129354 systemd-logind[1425]: Removed session 25.
Feb 13 19:59:21.561340 kubelet[2515]: E0213 19:59:21.561158 2515 kubelet.go:2900] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Feb 13 19:59:22.494112 kubelet[2515]: E0213 19:59:22.494053 2515 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 13 19:59:22.494751 kubelet[2515]: E0213 19:59:22.494693 2515 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"install-cni\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/flannel/flannel:v0.22.0\\\"\"" pod="kube-flannel/kube-flannel-ds-42jkn" podUID="25643ab7-6101-403b-ac20-77f1fa4c78ee"
Feb 13 19:59:25.132780 systemd[1]: Started sshd@25-10.0.0.134:22-10.0.0.1:36762.service - OpenSSH per-connection server daemon (10.0.0.1:36762).
Feb 13 19:59:25.172112 sshd[3224]: Accepted publickey for core from 10.0.0.1 port 36762 ssh2: RSA SHA256:JGaeIbjf5IUSNUg1jnjkSVnSyX1OvNbTOClTMYH5eIk
Feb 13 19:59:25.173341 sshd[3224]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 19:59:25.177094 systemd-logind[1425]: New session 26 of user core.
Feb 13 19:59:25.183177 systemd[1]: Started session-26.scope - Session 26 of User core.
Feb 13 19:59:25.286172 sshd[3224]: pam_unix(sshd:session): session closed for user core
Feb 13 19:59:25.289336 systemd[1]: sshd@25-10.0.0.134:22-10.0.0.1:36762.service: Deactivated successfully.
Feb 13 19:59:25.291478 systemd[1]: session-26.scope: Deactivated successfully.
Feb 13 19:59:25.292041 systemd-logind[1425]: Session 26 logged out. Waiting for processes to exit.
Feb 13 19:59:25.292894 systemd-logind[1425]: Removed session 26.
Feb 13 19:59:26.562550 kubelet[2515]: E0213 19:59:26.562491 2515 kubelet.go:2900] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Feb 13 19:59:30.296684 systemd[1]: Started sshd@26-10.0.0.134:22-10.0.0.1:36774.service - OpenSSH per-connection server daemon (10.0.0.1:36774).
Feb 13 19:59:30.335365 sshd[3240]: Accepted publickey for core from 10.0.0.1 port 36774 ssh2: RSA SHA256:JGaeIbjf5IUSNUg1jnjkSVnSyX1OvNbTOClTMYH5eIk
Feb 13 19:59:30.336560 sshd[3240]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 19:59:30.340675 systemd-logind[1425]: New session 27 of user core.
Feb 13 19:59:30.354173 systemd[1]: Started session-27.scope - Session 27 of User core.
Feb 13 19:59:30.482467 sshd[3240]: pam_unix(sshd:session): session closed for user core
Feb 13 19:59:30.486799 systemd[1]: sshd@26-10.0.0.134:22-10.0.0.1:36774.service: Deactivated successfully.
Feb 13 19:59:30.489987 systemd[1]: session-27.scope: Deactivated successfully.
Feb 13 19:59:30.490724 systemd-logind[1425]: Session 27 logged out. Waiting for processes to exit.
Feb 13 19:59:30.491483 systemd-logind[1425]: Removed session 27.
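The "cni plugin not initialized" errors that begin here follow directly from the failed pulls: the install-cni init container (whose spec is dumped in the errors above) never runs, so its single task, copying /etc/kube-flannel/cni-conf.json to /etc/cni/net.d/10-flannel.conflist, never happens, kubelet finds no CNI config, and the node never reports Ready. For illustration only, that one step rendered in Python; the real DaemonSet does it with `cp` inside the container, exactly as the Command/Args in the dump show:

```python
import shutil
from pathlib import Path

# Paths taken verbatim from the install-cni container spec in the log.
SRC = Path("/etc/kube-flannel/cni-conf.json")
DST = Path("/etc/cni/net.d/10-flannel.conflist")

def install_cni_config() -> None:
    """Equivalent of the init container's `cp -f` step.

    Until some process drops a conflist into /etc/cni/net.d, kubelet
    keeps logging "cni plugin not initialized", as seen above.
    """
    DST.parent.mkdir(parents=True, exist_ok=True)
    shutil.copyfile(SRC, DST)  # overwrite if present, like cp -f

if __name__ == "__main__":
    install_cni_config()
```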
Feb 13 19:59:31.563889 kubelet[2515]: E0213 19:59:31.563848 2515 kubelet.go:2900] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Feb 13 19:59:34.494744 kubelet[2515]: E0213 19:59:34.494551 2515 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 13 19:59:34.495427 kubelet[2515]: E0213 19:59:34.495390 2515 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"install-cni\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/flannel/flannel:v0.22.0\\\"\"" pod="kube-flannel/kube-flannel-ds-42jkn" podUID="25643ab7-6101-403b-ac20-77f1fa4c78ee"
Feb 13 19:59:35.496527 systemd[1]: Started sshd@27-10.0.0.134:22-10.0.0.1:39336.service - OpenSSH per-connection server daemon (10.0.0.1:39336).
Feb 13 19:59:35.539551 sshd[3259]: Accepted publickey for core from 10.0.0.1 port 39336 ssh2: RSA SHA256:JGaeIbjf5IUSNUg1jnjkSVnSyX1OvNbTOClTMYH5eIk
Feb 13 19:59:35.540816 sshd[3259]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 19:59:35.544594 systemd-logind[1425]: New session 28 of user core.
Feb 13 19:59:35.551171 systemd[1]: Started session-28.scope - Session 28 of User core.
Feb 13 19:59:35.656935 sshd[3259]: pam_unix(sshd:session): session closed for user core
Feb 13 19:59:35.659490 systemd[1]: sshd@27-10.0.0.134:22-10.0.0.1:39336.service: Deactivated successfully.
Feb 13 19:59:35.661141 systemd[1]: session-28.scope: Deactivated successfully.
Feb 13 19:59:35.662464 systemd-logind[1425]: Session 28 logged out. Waiting for processes to exit.
Feb 13 19:59:35.663548 systemd-logind[1425]: Removed session 28.
Feb 13 19:59:36.565173 kubelet[2515]: E0213 19:59:36.565121 2515 kubelet.go:2900] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Feb 13 19:59:40.667860 systemd[1]: Started sshd@28-10.0.0.134:22-10.0.0.1:39350.service - OpenSSH per-connection server daemon (10.0.0.1:39350).
Feb 13 19:59:40.707571 sshd[3274]: Accepted publickey for core from 10.0.0.1 port 39350 ssh2: RSA SHA256:JGaeIbjf5IUSNUg1jnjkSVnSyX1OvNbTOClTMYH5eIk
Feb 13 19:59:40.708915 sshd[3274]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 19:59:40.713209 systemd-logind[1425]: New session 29 of user core.
Feb 13 19:59:40.719200 systemd[1]: Started session-29.scope - Session 29 of User core.
Feb 13 19:59:40.825827 sshd[3274]: pam_unix(sshd:session): session closed for user core
Feb 13 19:59:40.828391 systemd[1]: sshd@28-10.0.0.134:22-10.0.0.1:39350.service: Deactivated successfully.
Feb 13 19:59:40.830161 systemd[1]: session-29.scope: Deactivated successfully.
Feb 13 19:59:40.831540 systemd-logind[1425]: Session 29 logged out. Waiting for processes to exit.
Feb 13 19:59:40.832718 systemd-logind[1425]: Removed session 29.
Feb 13 19:59:41.566145 kubelet[2515]: E0213 19:59:41.566057 2515 kubelet.go:2900] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Feb 13 19:59:45.837190 systemd[1]: Started sshd@29-10.0.0.134:22-10.0.0.1:58068.service - OpenSSH per-connection server daemon (10.0.0.1:58068).
Feb 13 19:59:45.876908 sshd[3292]: Accepted publickey for core from 10.0.0.1 port 58068 ssh2: RSA SHA256:JGaeIbjf5IUSNUg1jnjkSVnSyX1OvNbTOClTMYH5eIk
Feb 13 19:59:45.878150 sshd[3292]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 19:59:45.885299 systemd-logind[1425]: New session 30 of user core.
Feb 13 19:59:45.890246 systemd[1]: Started session-30.scope - Session 30 of User core.
Feb 13 19:59:45.999029 sshd[3292]: pam_unix(sshd:session): session closed for user core
Feb 13 19:59:46.001745 systemd[1]: sshd@29-10.0.0.134:22-10.0.0.1:58068.service: Deactivated successfully.
Feb 13 19:59:46.003437 systemd[1]: session-30.scope: Deactivated successfully.
Feb 13 19:59:46.004734 systemd-logind[1425]: Session 30 logged out. Waiting for processes to exit.
Feb 13 19:59:46.005717 systemd-logind[1425]: Removed session 30.
Feb 13 19:59:46.494629 kubelet[2515]: E0213 19:59:46.494590 2515 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 13 19:59:46.495659 kubelet[2515]: E0213 19:59:46.495588 2515 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"install-cni\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/flannel/flannel:v0.22.0\\\"\"" pod="kube-flannel/kube-flannel-ds-42jkn" podUID="25643ab7-6101-403b-ac20-77f1fa4c78ee"
Feb 13 19:59:46.567106 kubelet[2515]: E0213 19:59:46.567060 2515 kubelet.go:2900] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Feb 13 19:59:48.494348 kubelet[2515]: E0213 19:59:48.494278 2515 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 13 19:59:51.015174 systemd[1]: Started sshd@30-10.0.0.134:22-10.0.0.1:58082.service - OpenSSH per-connection server daemon (10.0.0.1:58082).
Feb 13 19:59:51.056579 sshd[3309]: Accepted publickey for core from 10.0.0.1 port 58082 ssh2: RSA SHA256:JGaeIbjf5IUSNUg1jnjkSVnSyX1OvNbTOClTMYH5eIk
Feb 13 19:59:51.057884 sshd[3309]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 19:59:51.061964 systemd-logind[1425]: New session 31 of user core.
Feb 13 19:59:51.073166 systemd[1]: Started session-31.scope - Session 31 of User core.
Feb 13 19:59:51.180375 sshd[3309]: pam_unix(sshd:session): session closed for user core
Feb 13 19:59:51.183764 systemd[1]: sshd@30-10.0.0.134:22-10.0.0.1:58082.service: Deactivated successfully.
Feb 13 19:59:51.185452 systemd[1]: session-31.scope: Deactivated successfully.
Feb 13 19:59:51.186662 systemd-logind[1425]: Session 31 logged out. Waiting for processes to exit.
Feb 13 19:59:51.187485 systemd-logind[1425]: Removed session 31.
Feb 13 19:59:51.567823 kubelet[2515]: E0213 19:59:51.567769 2515 kubelet.go:2900] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Feb 13 19:59:56.191814 systemd[1]: Started sshd@31-10.0.0.134:22-10.0.0.1:46768.service - OpenSSH per-connection server daemon (10.0.0.1:46768).
Feb 13 19:59:56.231719 sshd[3324]: Accepted publickey for core from 10.0.0.1 port 46768 ssh2: RSA SHA256:JGaeIbjf5IUSNUg1jnjkSVnSyX1OvNbTOClTMYH5eIk
Feb 13 19:59:56.233010 sshd[3324]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 19:59:56.236766 systemd-logind[1425]: New session 32 of user core.
Feb 13 19:59:56.249185 systemd[1]: Started session-32.scope - Session 32 of User core.
Feb 13 19:59:56.352234 sshd[3324]: pam_unix(sshd:session): session closed for user core
Feb 13 19:59:56.354731 systemd[1]: sshd@31-10.0.0.134:22-10.0.0.1:46768.service: Deactivated successfully.
Feb 13 19:59:56.357198 systemd-logind[1425]: Session 32 logged out. Waiting for processes to exit.
Feb 13 19:59:56.357299 systemd[1]: session-32.scope: Deactivated successfully.
Feb 13 19:59:56.358089 systemd-logind[1425]: Removed session 32.
Feb 13 19:59:56.494525 kubelet[2515]: E0213 19:59:56.494271 2515 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 13 19:59:56.569768 kubelet[2515]: E0213 19:59:56.569727 2515 kubelet.go:2900] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Feb 13 19:59:57.494142 kubelet[2515]: E0213 19:59:57.494102 2515 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 13 19:59:58.494623 kubelet[2515]: E0213 19:59:58.494586 2515 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 13 19:59:58.495580 kubelet[2515]: E0213 19:59:58.495117 2515 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"install-cni\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/flannel/flannel:v0.22.0\\\"\"" pod="kube-flannel/kube-flannel-ds-42jkn" podUID="25643ab7-6101-403b-ac20-77f1fa4c78ee"
Feb 13 20:00:01.364707 systemd[1]: Started sshd@32-10.0.0.134:22-10.0.0.1:46782.service - OpenSSH per-connection server daemon (10.0.0.1:46782).
Feb 13 20:00:01.403533 sshd[3340]: Accepted publickey for core from 10.0.0.1 port 46782 ssh2: RSA SHA256:JGaeIbjf5IUSNUg1jnjkSVnSyX1OvNbTOClTMYH5eIk
Feb 13 20:00:01.404798 sshd[3340]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 20:00:01.408329 systemd-logind[1425]: New session 33 of user core.
Feb 13 20:00:01.418231 systemd[1]: Started session-33.scope - Session 33 of User core.
Feb 13 20:00:01.522919 sshd[3340]: pam_unix(sshd:session): session closed for user core
Feb 13 20:00:01.526314 systemd[1]: sshd@32-10.0.0.134:22-10.0.0.1:46782.service: Deactivated successfully.
Feb 13 20:00:01.527965 systemd[1]: session-33.scope: Deactivated successfully.
Feb 13 20:00:01.529711 systemd-logind[1425]: Session 33 logged out. Waiting for processes to exit.
Feb 13 20:00:01.530565 systemd-logind[1425]: Removed session 33.
Feb 13 20:00:01.571494 kubelet[2515]: E0213 20:00:01.571438 2515 kubelet.go:2900] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Feb 13 20:00:06.533778 systemd[1]: Started sshd@33-10.0.0.134:22-10.0.0.1:47392.service - OpenSSH per-connection server daemon (10.0.0.1:47392).
Feb 13 20:00:06.572566 kubelet[2515]: E0213 20:00:06.572300 2515 kubelet.go:2900] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Feb 13 20:00:06.573798 sshd[3359]: Accepted publickey for core from 10.0.0.1 port 47392 ssh2: RSA SHA256:JGaeIbjf5IUSNUg1jnjkSVnSyX1OvNbTOClTMYH5eIk
Feb 13 20:00:06.574095 sshd[3359]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 20:00:06.578662 systemd-logind[1425]: New session 34 of user core.
Feb 13 20:00:06.589201 systemd[1]: Started session-34.scope - Session 34 of User core.
Feb 13 20:00:06.698042 sshd[3359]: pam_unix(sshd:session): session closed for user core
Feb 13 20:00:06.700401 systemd[1]: sshd@33-10.0.0.134:22-10.0.0.1:47392.service: Deactivated successfully.
Feb 13 20:00:06.702160 systemd[1]: session-34.scope: Deactivated successfully.
Feb 13 20:00:06.703438 systemd-logind[1425]: Session 34 logged out. Waiting for processes to exit.
Feb 13 20:00:06.704437 systemd-logind[1425]: Removed session 34.
Feb 13 20:00:10.494517 kubelet[2515]: E0213 20:00:10.494485 2515 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 13 20:00:10.495389 kubelet[2515]: E0213 20:00:10.495157 2515 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"install-cni\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/flannel/flannel:v0.22.0\\\"\"" pod="kube-flannel/kube-flannel-ds-42jkn" podUID="25643ab7-6101-403b-ac20-77f1fa4c78ee"
Feb 13 20:00:11.573742 kubelet[2515]: E0213 20:00:11.573687 2515 kubelet.go:2900] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Feb 13 20:00:11.710745 systemd[1]: Started sshd@34-10.0.0.134:22-10.0.0.1:47400.service - OpenSSH per-connection server daemon (10.0.0.1:47400).
Feb 13 20:00:11.749857 sshd[3375]: Accepted publickey for core from 10.0.0.1 port 47400 ssh2: RSA SHA256:JGaeIbjf5IUSNUg1jnjkSVnSyX1OvNbTOClTMYH5eIk
Feb 13 20:00:11.751070 sshd[3375]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 20:00:11.754998 systemd-logind[1425]: New session 35 of user core.
Feb 13 20:00:11.764168 systemd[1]: Started session-35.scope - Session 35 of User core.
Feb 13 20:00:11.870866 sshd[3375]: pam_unix(sshd:session): session closed for user core
Feb 13 20:00:11.874110 systemd[1]: sshd@34-10.0.0.134:22-10.0.0.1:47400.service: Deactivated successfully.
Feb 13 20:00:11.876933 systemd[1]: session-35.scope: Deactivated successfully.
Feb 13 20:00:11.878177 systemd-logind[1425]: Session 35 logged out. Waiting for processes to exit.
Feb 13 20:00:11.879574 systemd-logind[1425]: Removed session 35.
Feb 13 20:00:16.574646 kubelet[2515]: E0213 20:00:16.574610 2515 kubelet.go:2900] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Feb 13 20:00:16.881953 systemd[1]: Started sshd@35-10.0.0.134:22-10.0.0.1:51416.service - OpenSSH per-connection server daemon (10.0.0.1:51416).
Feb 13 20:00:16.921407 sshd[3394]: Accepted publickey for core from 10.0.0.1 port 51416 ssh2: RSA SHA256:JGaeIbjf5IUSNUg1jnjkSVnSyX1OvNbTOClTMYH5eIk
Feb 13 20:00:16.922676 sshd[3394]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 20:00:16.926583 systemd-logind[1425]: New session 36 of user core.
Feb 13 20:00:16.936260 systemd[1]: Started session-36.scope - Session 36 of User core.
Feb 13 20:00:17.042495 sshd[3394]: pam_unix(sshd:session): session closed for user core
Feb 13 20:00:17.045805 systemd[1]: sshd@35-10.0.0.134:22-10.0.0.1:51416.service: Deactivated successfully.
Feb 13 20:00:17.047409 systemd[1]: session-36.scope: Deactivated successfully.
Feb 13 20:00:17.049507 systemd-logind[1425]: Session 36 logged out. Waiting for processes to exit.
Feb 13 20:00:17.050622 systemd-logind[1425]: Removed session 36.
Feb 13 20:00:21.576747 kubelet[2515]: E0213 20:00:21.576670 2515 kubelet.go:2900] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Feb 13 20:00:22.052491 systemd[1]: Started sshd@36-10.0.0.134:22-10.0.0.1:51430.service - OpenSSH per-connection server daemon (10.0.0.1:51430).
Feb 13 20:00:22.091140 sshd[3410]: Accepted publickey for core from 10.0.0.1 port 51430 ssh2: RSA SHA256:JGaeIbjf5IUSNUg1jnjkSVnSyX1OvNbTOClTMYH5eIk
Feb 13 20:00:22.092339 sshd[3410]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 20:00:22.096357 systemd-logind[1425]: New session 37 of user core.
Feb 13 20:00:22.103157 systemd[1]: Started session-37.scope - Session 37 of User core.
Feb 13 20:00:22.208316 sshd[3410]: pam_unix(sshd:session): session closed for user core
Feb 13 20:00:22.211649 systemd[1]: sshd@36-10.0.0.134:22-10.0.0.1:51430.service: Deactivated successfully.
Feb 13 20:00:22.213826 systemd[1]: session-37.scope: Deactivated successfully.
Feb 13 20:00:22.214554 systemd-logind[1425]: Session 37 logged out. Waiting for processes to exit.
Feb 13 20:00:22.215355 systemd-logind[1425]: Removed session 37.
Feb 13 20:00:24.494438 kubelet[2515]: E0213 20:00:24.494244 2515 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 13 20:00:24.495519 containerd[1446]: time="2025-02-13T20:00:24.495314664Z" level=info msg="PullImage \"docker.io/flannel/flannel:v0.22.0\""
Feb 13 20:00:25.825038 containerd[1446]: time="2025-02-13T20:00:25.824967830Z" level=error msg="PullImage \"docker.io/flannel/flannel:v0.22.0\" failed" error="failed to pull and unpack image \"docker.io/flannel/flannel:v0.22.0\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/flannel/flannel/manifests/sha256:de969d65cb6e570ce2ff6f069132707b6da0e6d2adb43116d43743e2c31fd773: 429 Too Many Requests - Server message: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit"
Feb 13 20:00:25.825446 containerd[1446]: time="2025-02-13T20:00:25.825050632Z" level=info msg="stop pulling image docker.io/flannel/flannel:v0.22.0: active requests=0, bytes read=13094"
Feb 13 20:00:25.825480 kubelet[2515]: E0213 20:00:25.825216 2515 remote_image.go:180] "PullImage from image service failed" err="rpc error: code = Unknown desc = failed to pull and unpack image \"docker.io/flannel/flannel:v0.22.0\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/flannel/flannel/manifests/sha256:de969d65cb6e570ce2ff6f069132707b6da0e6d2adb43116d43743e2c31fd773: 429 Too Many Requests - Server message: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit" image="docker.io/flannel/flannel:v0.22.0"
Feb 13 20:00:25.825480 kubelet[2515]: E0213 20:00:25.825257 2515 kuberuntime_image.go:55] "Failed to pull image" err="failed to pull and unpack image \"docker.io/flannel/flannel:v0.22.0\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/flannel/flannel/manifests/sha256:de969d65cb6e570ce2ff6f069132707b6da0e6d2adb43116d43743e2c31fd773: 429 Too Many Requests - Server message: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit" image="docker.io/flannel/flannel:v0.22.0"
Feb 13 20:00:25.825735 kubelet[2515]: E0213 20:00:25.825371 2515 kuberuntime_manager.go:1256] init container &Container{Name:install-cni,Image:docker.io/flannel/flannel:v0.22.0,Command:[cp],Args:[-f /etc/kube-flannel/cni-conf.json /etc/cni/net.d/10-flannel.conflist],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:cni,ReadOnly:false,MountPath:/etc/cni/net.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:flannel-cfg,ReadOnly:false,MountPath:/etc/kube-flannel/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-jdsq8,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod kube-flannel-ds-42jkn_kube-flannel(25643ab7-6101-403b-ac20-77f1fa4c78ee): ErrImagePull: failed to pull and unpack image "docker.io/flannel/flannel:v0.22.0": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/flannel/flannel/manifests/sha256:de969d65cb6e570ce2ff6f069132707b6da0e6d2adb43116d43743e2c31fd773: 429 Too Many Requests - Server message: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit
Feb 13 20:00:25.825790 kubelet[2515]: E0213 20:00:25.825404 2515 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"install-cni\" with ErrImagePull: \"failed to pull and unpack image \\\"docker.io/flannel/flannel:v0.22.0\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/flannel/flannel/manifests/sha256:de969d65cb6e570ce2ff6f069132707b6da0e6d2adb43116d43743e2c31fd773: 429 Too Many Requests - Server message: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit\"" pod="kube-flannel/kube-flannel-ds-42jkn" podUID="25643ab7-6101-403b-ac20-77f1fa4c78ee"
Feb 13 20:00:26.578032 kubelet[2515]: E0213 20:00:26.577982 2515 kubelet.go:2900] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Feb 13 20:00:27.219692 systemd[1]: Started sshd@37-10.0.0.134:22-10.0.0.1:58076.service - OpenSSH per-connection server daemon (10.0.0.1:58076).
Feb 13 20:00:27.258871 sshd[3426]: Accepted publickey for core from 10.0.0.1 port 58076 ssh2: RSA SHA256:JGaeIbjf5IUSNUg1jnjkSVnSyX1OvNbTOClTMYH5eIk
Feb 13 20:00:27.260086 sshd[3426]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 20:00:27.264134 systemd-logind[1425]: New session 38 of user core.
Feb 13 20:00:27.277235 systemd[1]: Started session-38.scope - Session 38 of User core.
Feb 13 20:00:27.382001 sshd[3426]: pam_unix(sshd:session): session closed for user core
Feb 13 20:00:27.385291 systemd[1]: sshd@37-10.0.0.134:22-10.0.0.1:58076.service: Deactivated successfully.
Feb 13 20:00:27.386951 systemd[1]: session-38.scope: Deactivated successfully.
Feb 13 20:00:27.387720 systemd-logind[1425]: Session 38 logged out. Waiting for processes to exit.
Feb 13 20:00:27.388804 systemd-logind[1425]: Removed session 38.
Feb 13 20:00:30.494167 kubelet[2515]: E0213 20:00:30.494071 2515 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 13 20:00:31.579298 kubelet[2515]: E0213 20:00:31.579244 2515 kubelet.go:2900] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Feb 13 20:00:32.392574 systemd[1]: Started sshd@38-10.0.0.134:22-10.0.0.1:58086.service - OpenSSH per-connection server daemon (10.0.0.1:58086).
Feb 13 20:00:32.431314 sshd[3442]: Accepted publickey for core from 10.0.0.1 port 58086 ssh2: RSA SHA256:JGaeIbjf5IUSNUg1jnjkSVnSyX1OvNbTOClTMYH5eIk
Feb 13 20:00:32.432496 sshd[3442]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 20:00:32.436321 systemd-logind[1425]: New session 39 of user core.
Feb 13 20:00:32.442166 systemd[1]: Started session-39.scope - Session 39 of User core.
Feb 13 20:00:32.546362 sshd[3442]: pam_unix(sshd:session): session closed for user core
Feb 13 20:00:32.549395 systemd[1]: sshd@38-10.0.0.134:22-10.0.0.1:58086.service: Deactivated successfully.
Feb 13 20:00:32.552383 systemd[1]: session-39.scope: Deactivated successfully.
Feb 13 20:00:32.552938 systemd-logind[1425]: Session 39 logged out. Waiting for processes to exit.
Feb 13 20:00:32.553966 systemd-logind[1425]: Removed session 39. Feb 13 20:00:36.580308 kubelet[2515]: E0213 20:00:36.580257 2515 kubelet.go:2900] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Feb 13 20:00:37.561626 systemd[1]: Started sshd@39-10.0.0.134:22-10.0.0.1:53510.service - OpenSSH per-connection server daemon (10.0.0.1:53510). Feb 13 20:00:37.600197 sshd[3460]: Accepted publickey for core from 10.0.0.1 port 53510 ssh2: RSA SHA256:JGaeIbjf5IUSNUg1jnjkSVnSyX1OvNbTOClTMYH5eIk Feb 13 20:00:37.601351 sshd[3460]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 20:00:37.605242 systemd-logind[1425]: New session 40 of user core. Feb 13 20:00:37.613154 systemd[1]: Started session-40.scope - Session 40 of User core. Feb 13 20:00:37.729516 sshd[3460]: pam_unix(sshd:session): session closed for user core Feb 13 20:00:37.732442 systemd-logind[1425]: Session 40 logged out. Waiting for processes to exit. Feb 13 20:00:37.732600 systemd[1]: sshd@39-10.0.0.134:22-10.0.0.1:53510.service: Deactivated successfully. Feb 13 20:00:37.735403 systemd[1]: session-40.scope: Deactivated successfully. Feb 13 20:00:37.736735 systemd-logind[1425]: Removed session 40. Feb 13 20:00:38.494379 kubelet[2515]: E0213 20:00:38.494098 2515 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 20:00:38.495110 kubelet[2515]: E0213 20:00:38.494887 2515 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"install-cni\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/flannel/flannel:v0.22.0\\\"\"" pod="kube-flannel/kube-flannel-ds-42jkn" podUID="25643ab7-6101-403b-ac20-77f1fa4c78ee" Feb 13 20:00:41.581377 kubelet[2515]: E0213 20:00:41.581309 2515 kubelet.go:2900] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Feb 13 20:00:42.740647 systemd[1]: Started sshd@40-10.0.0.134:22-10.0.0.1:60870.service - OpenSSH per-connection server daemon (10.0.0.1:60870). Feb 13 20:00:42.781450 sshd[3475]: Accepted publickey for core from 10.0.0.1 port 60870 ssh2: RSA SHA256:JGaeIbjf5IUSNUg1jnjkSVnSyX1OvNbTOClTMYH5eIk Feb 13 20:00:42.782721 sshd[3475]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 20:00:42.786294 systemd-logind[1425]: New session 41 of user core. Feb 13 20:00:42.796231 systemd[1]: Started session-41.scope - Session 41 of User core. Feb 13 20:00:42.902573 sshd[3475]: pam_unix(sshd:session): session closed for user core Feb 13 20:00:42.911584 systemd[1]: sshd@40-10.0.0.134:22-10.0.0.1:60870.service: Deactivated successfully. Feb 13 20:00:42.913200 systemd[1]: session-41.scope: Deactivated successfully. Feb 13 20:00:42.914381 systemd-logind[1425]: Session 41 logged out. Waiting for processes to exit. Feb 13 20:00:42.923434 systemd[1]: Started sshd@41-10.0.0.134:22-10.0.0.1:60884.service - OpenSSH per-connection server daemon (10.0.0.1:60884). Feb 13 20:00:42.924394 systemd-logind[1425]: Removed session 41. 
Feb 13 20:00:42.958505 sshd[3491]: Accepted publickey for core from 10.0.0.1 port 60884 ssh2: RSA SHA256:JGaeIbjf5IUSNUg1jnjkSVnSyX1OvNbTOClTMYH5eIk Feb 13 20:00:42.959663 sshd[3491]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 20:00:42.963098 systemd-logind[1425]: New session 42 of user core. Feb 13 20:00:42.968174 systemd[1]: Started session-42.scope - Session 42 of User core. Feb 13 20:00:43.105521 sshd[3491]: pam_unix(sshd:session): session closed for user core Feb 13 20:00:43.116970 systemd[1]: sshd@41-10.0.0.134:22-10.0.0.1:60884.service: Deactivated successfully. Feb 13 20:00:43.120515 systemd[1]: session-42.scope: Deactivated successfully. Feb 13 20:00:43.122071 systemd-logind[1425]: Session 42 logged out. Waiting for processes to exit. Feb 13 20:00:43.127665 systemd[1]: Started sshd@42-10.0.0.134:22-10.0.0.1:60888.service - OpenSSH per-connection server daemon (10.0.0.1:60888). Feb 13 20:00:43.129268 systemd-logind[1425]: Removed session 42. Feb 13 20:00:43.183207 sshd[3503]: Accepted publickey for core from 10.0.0.1 port 60888 ssh2: RSA SHA256:JGaeIbjf5IUSNUg1jnjkSVnSyX1OvNbTOClTMYH5eIk Feb 13 20:00:43.184397 sshd[3503]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 20:00:43.188260 systemd-logind[1425]: New session 43 of user core. Feb 13 20:00:43.205163 systemd[1]: Started session-43.scope - Session 43 of User core. Feb 13 20:00:43.313719 sshd[3503]: pam_unix(sshd:session): session closed for user core Feb 13 20:00:43.317542 systemd[1]: sshd@42-10.0.0.134:22-10.0.0.1:60888.service: Deactivated successfully. Feb 13 20:00:43.320128 systemd[1]: session-43.scope: Deactivated successfully. Feb 13 20:00:43.320909 systemd-logind[1425]: Session 43 logged out. Waiting for processes to exit. Feb 13 20:00:43.321824 systemd-logind[1425]: Removed session 43. Feb 13 20:00:46.582472 kubelet[2515]: E0213 20:00:46.582422 2515 kubelet.go:2900] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Feb 13 20:00:48.327493 systemd[1]: Started sshd@43-10.0.0.134:22-10.0.0.1:60898.service - OpenSSH per-connection server daemon (10.0.0.1:60898). Feb 13 20:00:48.373533 sshd[3517]: Accepted publickey for core from 10.0.0.1 port 60898 ssh2: RSA SHA256:JGaeIbjf5IUSNUg1jnjkSVnSyX1OvNbTOClTMYH5eIk Feb 13 20:00:48.374744 sshd[3517]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 20:00:48.383409 systemd-logind[1425]: New session 44 of user core. Feb 13 20:00:48.394244 systemd[1]: Started session-44.scope - Session 44 of User core. Feb 13 20:00:48.499305 sshd[3517]: pam_unix(sshd:session): session closed for user core Feb 13 20:00:48.502478 systemd[1]: sshd@43-10.0.0.134:22-10.0.0.1:60898.service: Deactivated successfully. Feb 13 20:00:48.504567 systemd[1]: session-44.scope: Deactivated successfully. Feb 13 20:00:48.505254 systemd-logind[1425]: Session 44 logged out. Waiting for processes to exit. Feb 13 20:00:48.506087 systemd-logind[1425]: Removed session 44. 
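The recurring "Container runtime network not ready ... cni plugin not initialized" entries are downstream of that pull failure: per the container spec dumped above, the install-cni init container only runs cp -f /etc/kube-flannel/cni-conf.json /etc/cni/net.d/10-flannel.conflist, so until the flannel image can be pulled, no CNI config ever lands in /etc/cni/net.d and the kubelet keeps reporting the node network as not ready. For reference, the file it would install is shaped like the stock kube-flannel cni-conf.json, sketched here (exact fields vary by manifest version):

    {
      "name": "cbr0",
      "cniVersion": "0.3.1",
      "plugins": [
        {
          "type": "flannel",
          "delegate": { "hairpinMode": true, "isDefaultGateway": true }
        },
        {
          "type": "portmap",
          "capabilities": { "portMappings": true }
        }
      ]
    }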
Feb 13 20:00:51.583064 kubelet[2515]: E0213 20:00:51.583000 2515 kubelet.go:2900] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Feb 13 20:00:52.494242 kubelet[2515]: E0213 20:00:52.494206 2515 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 20:00:52.494794 kubelet[2515]: E0213 20:00:52.494764 2515 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"install-cni\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/flannel/flannel:v0.22.0\\\"\"" pod="kube-flannel/kube-flannel-ds-42jkn" podUID="25643ab7-6101-403b-ac20-77f1fa4c78ee" Feb 13 20:00:53.518305 systemd[1]: Started sshd@44-10.0.0.134:22-10.0.0.1:57208.service - OpenSSH per-connection server daemon (10.0.0.1:57208). Feb 13 20:00:53.557932 sshd[3532]: Accepted publickey for core from 10.0.0.1 port 57208 ssh2: RSA SHA256:JGaeIbjf5IUSNUg1jnjkSVnSyX1OvNbTOClTMYH5eIk Feb 13 20:00:53.559333 sshd[3532]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 20:00:53.563190 systemd-logind[1425]: New session 45 of user core. Feb 13 20:00:53.581194 systemd[1]: Started session-45.scope - Session 45 of User core. Feb 13 20:00:53.686083 sshd[3532]: pam_unix(sshd:session): session closed for user core Feb 13 20:00:53.689145 systemd[1]: sshd@44-10.0.0.134:22-10.0.0.1:57208.service: Deactivated successfully. Feb 13 20:00:53.690791 systemd[1]: session-45.scope: Deactivated successfully. Feb 13 20:00:53.691421 systemd-logind[1425]: Session 45 logged out. Waiting for processes to exit. Feb 13 20:00:53.692278 systemd-logind[1425]: Removed session 45. Feb 13 20:00:56.583579 kubelet[2515]: E0213 20:00:56.583520 2515 kubelet.go:2900] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Feb 13 20:00:58.696537 systemd[1]: Started sshd@45-10.0.0.134:22-10.0.0.1:57220.service - OpenSSH per-connection server daemon (10.0.0.1:57220). Feb 13 20:00:58.735441 sshd[3547]: Accepted publickey for core from 10.0.0.1 port 57220 ssh2: RSA SHA256:JGaeIbjf5IUSNUg1jnjkSVnSyX1OvNbTOClTMYH5eIk Feb 13 20:00:58.736561 sshd[3547]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 20:00:58.739977 systemd-logind[1425]: New session 46 of user core. Feb 13 20:00:58.751223 systemd[1]: Started session-46.scope - Session 46 of User core. Feb 13 20:00:58.853795 sshd[3547]: pam_unix(sshd:session): session closed for user core Feb 13 20:00:58.856998 systemd[1]: sshd@45-10.0.0.134:22-10.0.0.1:57220.service: Deactivated successfully. Feb 13 20:00:58.858779 systemd[1]: session-46.scope: Deactivated successfully. Feb 13 20:00:58.859398 systemd-logind[1425]: Session 46 logged out. Waiting for processes to exit. Feb 13 20:00:58.860136 systemd-logind[1425]: Removed session 46. Feb 13 20:01:01.584721 kubelet[2515]: E0213 20:01:01.584683 2515 kubelet.go:2900] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Feb 13 20:01:03.864583 systemd[1]: Started sshd@46-10.0.0.134:22-10.0.0.1:44294.service - OpenSSH per-connection server daemon (10.0.0.1:44294). 
Feb 13 20:01:03.903125 sshd[3563]: Accepted publickey for core from 10.0.0.1 port 44294 ssh2: RSA SHA256:JGaeIbjf5IUSNUg1jnjkSVnSyX1OvNbTOClTMYH5eIk Feb 13 20:01:03.904306 sshd[3563]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 20:01:03.908079 systemd-logind[1425]: New session 47 of user core. Feb 13 20:01:03.917169 systemd[1]: Started session-47.scope - Session 47 of User core. Feb 13 20:01:04.020792 sshd[3563]: pam_unix(sshd:session): session closed for user core Feb 13 20:01:04.024673 systemd[1]: sshd@46-10.0.0.134:22-10.0.0.1:44294.service: Deactivated successfully. Feb 13 20:01:04.026379 systemd[1]: session-47.scope: Deactivated successfully. Feb 13 20:01:04.027059 systemd-logind[1425]: Session 47 logged out. Waiting for processes to exit. Feb 13 20:01:04.027859 systemd-logind[1425]: Removed session 47. Feb 13 20:01:06.585277 kubelet[2515]: E0213 20:01:06.585174 2515 kubelet.go:2900] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Feb 13 20:01:07.493579 kubelet[2515]: E0213 20:01:07.493544 2515 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 20:01:07.494291 kubelet[2515]: E0213 20:01:07.494095 2515 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"install-cni\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/flannel/flannel:v0.22.0\\\"\"" pod="kube-flannel/kube-flannel-ds-42jkn" podUID="25643ab7-6101-403b-ac20-77f1fa4c78ee" Feb 13 20:01:09.031385 systemd[1]: Started sshd@47-10.0.0.134:22-10.0.0.1:44298.service - OpenSSH per-connection server daemon (10.0.0.1:44298). Feb 13 20:01:09.105242 sshd[3577]: Accepted publickey for core from 10.0.0.1 port 44298 ssh2: RSA SHA256:JGaeIbjf5IUSNUg1jnjkSVnSyX1OvNbTOClTMYH5eIk Feb 13 20:01:09.106536 sshd[3577]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 20:01:09.111273 systemd-logind[1425]: New session 48 of user core. Feb 13 20:01:09.120193 systemd[1]: Started session-48.scope - Session 48 of User core. Feb 13 20:01:09.232699 sshd[3577]: pam_unix(sshd:session): session closed for user core Feb 13 20:01:09.236401 systemd[1]: sshd@47-10.0.0.134:22-10.0.0.1:44298.service: Deactivated successfully. Feb 13 20:01:09.237985 systemd[1]: session-48.scope: Deactivated successfully. Feb 13 20:01:09.238780 systemd-logind[1425]: Session 48 logged out. Waiting for processes to exit. Feb 13 20:01:09.239800 systemd-logind[1425]: Removed session 48. Feb 13 20:01:10.494550 kubelet[2515]: E0213 20:01:10.494506 2515 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 20:01:11.587091 kubelet[2515]: E0213 20:01:11.586850 2515 kubelet.go:2900] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Feb 13 20:01:14.243857 systemd[1]: Started sshd@48-10.0.0.134:22-10.0.0.1:53628.service - OpenSSH per-connection server daemon (10.0.0.1:53628). 
Feb 13 20:01:14.283775 sshd[3592]: Accepted publickey for core from 10.0.0.1 port 53628 ssh2: RSA SHA256:JGaeIbjf5IUSNUg1jnjkSVnSyX1OvNbTOClTMYH5eIk Feb 13 20:01:14.285174 sshd[3592]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 20:01:14.289107 systemd-logind[1425]: New session 49 of user core. Feb 13 20:01:14.298192 systemd[1]: Started session-49.scope - Session 49 of User core. Feb 13 20:01:14.404747 sshd[3592]: pam_unix(sshd:session): session closed for user core Feb 13 20:01:14.408642 systemd[1]: sshd@48-10.0.0.134:22-10.0.0.1:53628.service: Deactivated successfully. Feb 13 20:01:14.410424 systemd[1]: session-49.scope: Deactivated successfully. Feb 13 20:01:14.412238 systemd-logind[1425]: Session 49 logged out. Waiting for processes to exit. Feb 13 20:01:14.414610 systemd-logind[1425]: Removed session 49. Feb 13 20:01:15.494550 kubelet[2515]: E0213 20:01:15.494511 2515 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 20:01:16.588141 kubelet[2515]: E0213 20:01:16.588099 2515 kubelet.go:2900] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Feb 13 20:01:19.416040 systemd[1]: Started sshd@49-10.0.0.134:22-10.0.0.1:53636.service - OpenSSH per-connection server daemon (10.0.0.1:53636). Feb 13 20:01:19.454988 sshd[3609]: Accepted publickey for core from 10.0.0.1 port 53636 ssh2: RSA SHA256:JGaeIbjf5IUSNUg1jnjkSVnSyX1OvNbTOClTMYH5eIk Feb 13 20:01:19.456267 sshd[3609]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 20:01:19.459605 systemd-logind[1425]: New session 50 of user core. Feb 13 20:01:19.476158 systemd[1]: Started session-50.scope - Session 50 of User core. Feb 13 20:01:19.578849 sshd[3609]: pam_unix(sshd:session): session closed for user core Feb 13 20:01:19.581844 systemd[1]: sshd@49-10.0.0.134:22-10.0.0.1:53636.service: Deactivated successfully. Feb 13 20:01:19.583414 systemd[1]: session-50.scope: Deactivated successfully. Feb 13 20:01:19.584652 systemd-logind[1425]: Session 50 logged out. Waiting for processes to exit. Feb 13 20:01:19.585421 systemd-logind[1425]: Removed session 50. 
Feb 13 20:01:20.493711 kubelet[2515]: E0213 20:01:20.493670 2515 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 20:01:21.494257 kubelet[2515]: E0213 20:01:21.494215 2515 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 20:01:21.495186 kubelet[2515]: E0213 20:01:21.494957 2515 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"install-cni\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/flannel/flannel:v0.22.0\\\"\"" pod="kube-flannel/kube-flannel-ds-42jkn" podUID="25643ab7-6101-403b-ac20-77f1fa4c78ee" Feb 13 20:01:21.589341 kubelet[2515]: E0213 20:01:21.589317 2515 kubelet.go:2900] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Feb 13 20:01:24.593710 systemd[1]: Started sshd@50-10.0.0.134:22-10.0.0.1:40350.service - OpenSSH per-connection server daemon (10.0.0.1:40350). Feb 13 20:01:24.633702 sshd[3624]: Accepted publickey for core from 10.0.0.1 port 40350 ssh2: RSA SHA256:JGaeIbjf5IUSNUg1jnjkSVnSyX1OvNbTOClTMYH5eIk Feb 13 20:01:24.634874 sshd[3624]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 20:01:24.638886 systemd-logind[1425]: New session 51 of user core. Feb 13 20:01:24.650193 systemd[1]: Started session-51.scope - Session 51 of User core. Feb 13 20:01:24.757921 sshd[3624]: pam_unix(sshd:session): session closed for user core Feb 13 20:01:24.761136 systemd[1]: sshd@50-10.0.0.134:22-10.0.0.1:40350.service: Deactivated successfully. Feb 13 20:01:24.763502 systemd[1]: session-51.scope: Deactivated successfully. Feb 13 20:01:24.764264 systemd-logind[1425]: Session 51 logged out. Waiting for processes to exit. Feb 13 20:01:24.765295 systemd-logind[1425]: Removed session 51. Feb 13 20:01:26.590647 kubelet[2515]: E0213 20:01:26.590612 2515 kubelet.go:2900] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Feb 13 20:01:29.768881 systemd[1]: Started sshd@51-10.0.0.134:22-10.0.0.1:40360.service - OpenSSH per-connection server daemon (10.0.0.1:40360). Feb 13 20:01:29.809389 sshd[3639]: Accepted publickey for core from 10.0.0.1 port 40360 ssh2: RSA SHA256:JGaeIbjf5IUSNUg1jnjkSVnSyX1OvNbTOClTMYH5eIk Feb 13 20:01:29.811183 sshd[3639]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 20:01:29.816434 systemd-logind[1425]: New session 52 of user core. Feb 13 20:01:29.822172 systemd[1]: Started session-52.scope - Session 52 of User core. Feb 13 20:01:29.931585 sshd[3639]: pam_unix(sshd:session): session closed for user core Feb 13 20:01:29.935237 systemd[1]: sshd@51-10.0.0.134:22-10.0.0.1:40360.service: Deactivated successfully. Feb 13 20:01:29.937435 systemd[1]: session-52.scope: Deactivated successfully. Feb 13 20:01:29.938984 systemd-logind[1425]: Session 52 logged out. Waiting for processes to exit. Feb 13 20:01:29.940585 systemd-logind[1425]: Removed session 52. 
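The dns.go "Nameserver limits exceeded" warnings are independent of the image pull problem: the resolv.conf the kubelet hands to pods lists more than three nameservers, and since glibc resolvers only consult three, the kubelet keeps the first three (here 1.1.1.1, 1.0.0.1 and 8.8.8.8) and drops the rest. Trimming the source file silences the warning; a sketch, assuming the kubelet is reading the host's /etc/resolv.conf:

    # /etc/resolv.conf - keep at most three nameserver lines
    nameserver 1.1.1.1
    nameserver 1.0.0.1
    nameserver 8.8.8.8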
Feb 13 20:01:31.592795 kubelet[2515]: E0213 20:01:31.592748 2515 kubelet.go:2900] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Feb 13 20:01:34.944597 systemd[1]: Started sshd@52-10.0.0.134:22-10.0.0.1:39724.service - OpenSSH per-connection server daemon (10.0.0.1:39724). Feb 13 20:01:34.983927 sshd[3655]: Accepted publickey for core from 10.0.0.1 port 39724 ssh2: RSA SHA256:JGaeIbjf5IUSNUg1jnjkSVnSyX1OvNbTOClTMYH5eIk Feb 13 20:01:34.985128 sshd[3655]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 20:01:34.989000 systemd-logind[1425]: New session 53 of user core. Feb 13 20:01:34.996163 systemd[1]: Started session-53.scope - Session 53 of User core. Feb 13 20:01:35.101063 sshd[3655]: pam_unix(sshd:session): session closed for user core Feb 13 20:01:35.104067 systemd[1]: sshd@52-10.0.0.134:22-10.0.0.1:39724.service: Deactivated successfully. Feb 13 20:01:35.106383 systemd[1]: session-53.scope: Deactivated successfully. Feb 13 20:01:35.107124 systemd-logind[1425]: Session 53 logged out. Waiting for processes to exit. Feb 13 20:01:35.108236 systemd-logind[1425]: Removed session 53. Feb 13 20:01:35.494062 kubelet[2515]: E0213 20:01:35.493939 2515 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 20:01:35.494676 kubelet[2515]: E0213 20:01:35.494632 2515 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"install-cni\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/flannel/flannel:v0.22.0\\\"\"" pod="kube-flannel/kube-flannel-ds-42jkn" podUID="25643ab7-6101-403b-ac20-77f1fa4c78ee" Feb 13 20:01:36.593620 kubelet[2515]: E0213 20:01:36.593566 2515 kubelet.go:2900] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Feb 13 20:01:40.111662 systemd[1]: Started sshd@53-10.0.0.134:22-10.0.0.1:39728.service - OpenSSH per-connection server daemon (10.0.0.1:39728). Feb 13 20:01:40.152297 sshd[3670]: Accepted publickey for core from 10.0.0.1 port 39728 ssh2: RSA SHA256:JGaeIbjf5IUSNUg1jnjkSVnSyX1OvNbTOClTMYH5eIk Feb 13 20:01:40.153942 sshd[3670]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 20:01:40.159489 systemd-logind[1425]: New session 54 of user core. Feb 13 20:01:40.167236 systemd[1]: Started session-54.scope - Session 54 of User core. Feb 13 20:01:40.272302 sshd[3670]: pam_unix(sshd:session): session closed for user core Feb 13 20:01:40.275547 systemd[1]: sshd@53-10.0.0.134:22-10.0.0.1:39728.service: Deactivated successfully. Feb 13 20:01:40.277307 systemd[1]: session-54.scope: Deactivated successfully. Feb 13 20:01:40.279322 systemd-logind[1425]: Session 54 logged out. Waiting for processes to exit. Feb 13 20:01:40.280232 systemd-logind[1425]: Removed session 54. Feb 13 20:01:41.595314 kubelet[2515]: E0213 20:01:41.595165 2515 kubelet.go:2900] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Feb 13 20:01:45.286608 systemd[1]: Started sshd@54-10.0.0.134:22-10.0.0.1:54628.service - OpenSSH per-connection server daemon (10.0.0.1:54628). 
Feb 13 20:01:45.325425 sshd[3685]: Accepted publickey for core from 10.0.0.1 port 54628 ssh2: RSA SHA256:JGaeIbjf5IUSNUg1jnjkSVnSyX1OvNbTOClTMYH5eIk Feb 13 20:01:45.326845 sshd[3685]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 20:01:45.331462 systemd-logind[1425]: New session 55 of user core. Feb 13 20:01:45.337268 systemd[1]: Started session-55.scope - Session 55 of User core. Feb 13 20:01:45.440413 sshd[3685]: pam_unix(sshd:session): session closed for user core Feb 13 20:01:45.443568 systemd[1]: sshd@54-10.0.0.134:22-10.0.0.1:54628.service: Deactivated successfully. Feb 13 20:01:45.445261 systemd[1]: session-55.scope: Deactivated successfully. Feb 13 20:01:45.445827 systemd-logind[1425]: Session 55 logged out. Waiting for processes to exit. Feb 13 20:01:45.446897 systemd-logind[1425]: Removed session 55. Feb 13 20:01:46.494073 kubelet[2515]: E0213 20:01:46.493976 2515 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 20:01:46.596486 kubelet[2515]: E0213 20:01:46.596444 2515 kubelet.go:2900] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Feb 13 20:01:48.493646 kubelet[2515]: E0213 20:01:48.493540 2515 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 20:01:48.494389 kubelet[2515]: E0213 20:01:48.494113 2515 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"install-cni\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/flannel/flannel:v0.22.0\\\"\"" pod="kube-flannel/kube-flannel-ds-42jkn" podUID="25643ab7-6101-403b-ac20-77f1fa4c78ee" Feb 13 20:01:50.455061 systemd[1]: Started sshd@55-10.0.0.134:22-10.0.0.1:54636.service - OpenSSH per-connection server daemon (10.0.0.1:54636). Feb 13 20:01:50.495294 sshd[3699]: Accepted publickey for core from 10.0.0.1 port 54636 ssh2: RSA SHA256:JGaeIbjf5IUSNUg1jnjkSVnSyX1OvNbTOClTMYH5eIk Feb 13 20:01:50.496827 sshd[3699]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 20:01:50.500660 systemd-logind[1425]: New session 56 of user core. Feb 13 20:01:50.512165 systemd[1]: Started session-56.scope - Session 56 of User core. Feb 13 20:01:50.617557 sshd[3699]: pam_unix(sshd:session): session closed for user core Feb 13 20:01:50.620662 systemd[1]: sshd@55-10.0.0.134:22-10.0.0.1:54636.service: Deactivated successfully. Feb 13 20:01:50.622294 systemd[1]: session-56.scope: Deactivated successfully. Feb 13 20:01:50.622823 systemd-logind[1425]: Session 56 logged out. Waiting for processes to exit. Feb 13 20:01:50.623543 systemd-logind[1425]: Removed session 56. Feb 13 20:01:51.598451 kubelet[2515]: E0213 20:01:51.598397 2515 kubelet.go:2900] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Feb 13 20:01:55.628583 systemd[1]: Started sshd@56-10.0.0.134:22-10.0.0.1:55638.service - OpenSSH per-connection server daemon (10.0.0.1:55638). 
Feb 13 20:01:55.668357 sshd[3713]: Accepted publickey for core from 10.0.0.1 port 55638 ssh2: RSA SHA256:JGaeIbjf5IUSNUg1jnjkSVnSyX1OvNbTOClTMYH5eIk Feb 13 20:01:55.669495 sshd[3713]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 20:01:55.673978 systemd-logind[1425]: New session 57 of user core. Feb 13 20:01:55.684172 systemd[1]: Started session-57.scope - Session 57 of User core. Feb 13 20:01:55.788736 sshd[3713]: pam_unix(sshd:session): session closed for user core Feb 13 20:01:55.791392 systemd[1]: sshd@56-10.0.0.134:22-10.0.0.1:55638.service: Deactivated successfully. Feb 13 20:01:55.793118 systemd[1]: session-57.scope: Deactivated successfully. Feb 13 20:01:55.794249 systemd-logind[1425]: Session 57 logged out. Waiting for processes to exit. Feb 13 20:01:55.795047 systemd-logind[1425]: Removed session 57. Feb 13 20:01:56.599730 kubelet[2515]: E0213 20:01:56.599677 2515 kubelet.go:2900] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Feb 13 20:02:00.497528 kubelet[2515]: E0213 20:02:00.493515 2515 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 20:02:00.497528 kubelet[2515]: E0213 20:02:00.494330 2515 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"install-cni\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/flannel/flannel:v0.22.0\\\"\"" pod="kube-flannel/kube-flannel-ds-42jkn" podUID="25643ab7-6101-403b-ac20-77f1fa4c78ee" Feb 13 20:02:00.815326 systemd[1]: Started sshd@57-10.0.0.134:22-10.0.0.1:55652.service - OpenSSH per-connection server daemon (10.0.0.1:55652). Feb 13 20:02:00.850570 sshd[3727]: Accepted publickey for core from 10.0.0.1 port 55652 ssh2: RSA SHA256:JGaeIbjf5IUSNUg1jnjkSVnSyX1OvNbTOClTMYH5eIk Feb 13 20:02:00.851771 sshd[3727]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 20:02:00.858074 systemd-logind[1425]: New session 58 of user core. Feb 13 20:02:00.870257 systemd[1]: Started session-58.scope - Session 58 of User core. Feb 13 20:02:00.973209 sshd[3727]: pam_unix(sshd:session): session closed for user core Feb 13 20:02:00.976183 systemd[1]: sshd@57-10.0.0.134:22-10.0.0.1:55652.service: Deactivated successfully. Feb 13 20:02:00.977872 systemd[1]: session-58.scope: Deactivated successfully. Feb 13 20:02:00.978525 systemd-logind[1425]: Session 58 logged out. Waiting for processes to exit. Feb 13 20:02:00.979327 systemd-logind[1425]: Removed session 58. Feb 13 20:02:01.600938 kubelet[2515]: E0213 20:02:01.600896 2515 kubelet.go:2900] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Feb 13 20:02:05.984846 systemd[1]: Started sshd@58-10.0.0.134:22-10.0.0.1:35854.service - OpenSSH per-connection server daemon (10.0.0.1:35854). Feb 13 20:02:06.026473 sshd[3743]: Accepted publickey for core from 10.0.0.1 port 35854 ssh2: RSA SHA256:JGaeIbjf5IUSNUg1jnjkSVnSyX1OvNbTOClTMYH5eIk Feb 13 20:02:06.028051 sshd[3743]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 20:02:06.031820 systemd-logind[1425]: New session 59 of user core. Feb 13 20:02:06.043178 systemd[1]: Started session-59.scope - Session 59 of User core. 
Feb 13 20:02:06.151103 sshd[3743]: pam_unix(sshd:session): session closed for user core Feb 13 20:02:06.153658 systemd[1]: sshd@58-10.0.0.134:22-10.0.0.1:35854.service: Deactivated successfully. Feb 13 20:02:06.155283 systemd[1]: session-59.scope: Deactivated successfully. Feb 13 20:02:06.156713 systemd-logind[1425]: Session 59 logged out. Waiting for processes to exit. Feb 13 20:02:06.157747 systemd-logind[1425]: Removed session 59. Feb 13 20:02:06.602291 kubelet[2515]: E0213 20:02:06.602240 2515 kubelet.go:2900] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Feb 13 20:02:11.162552 systemd[1]: Started sshd@59-10.0.0.134:22-10.0.0.1:35866.service - OpenSSH per-connection server daemon (10.0.0.1:35866). Feb 13 20:02:11.201930 sshd[3758]: Accepted publickey for core from 10.0.0.1 port 35866 ssh2: RSA SHA256:JGaeIbjf5IUSNUg1jnjkSVnSyX1OvNbTOClTMYH5eIk Feb 13 20:02:11.203239 sshd[3758]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 20:02:11.208171 systemd-logind[1425]: New session 60 of user core. Feb 13 20:02:11.216283 systemd[1]: Started session-60.scope - Session 60 of User core. Feb 13 20:02:11.321986 sshd[3758]: pam_unix(sshd:session): session closed for user core Feb 13 20:02:11.325764 systemd[1]: sshd@59-10.0.0.134:22-10.0.0.1:35866.service: Deactivated successfully. Feb 13 20:02:11.328202 systemd[1]: session-60.scope: Deactivated successfully. Feb 13 20:02:11.329094 systemd-logind[1425]: Session 60 logged out. Waiting for processes to exit. Feb 13 20:02:11.330188 systemd-logind[1425]: Removed session 60. Feb 13 20:02:11.494415 kubelet[2515]: E0213 20:02:11.494388 2515 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 20:02:11.495245 kubelet[2515]: E0213 20:02:11.495220 2515 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"install-cni\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/flannel/flannel:v0.22.0\\\"\"" pod="kube-flannel/kube-flannel-ds-42jkn" podUID="25643ab7-6101-403b-ac20-77f1fa4c78ee" Feb 13 20:02:11.603253 kubelet[2515]: E0213 20:02:11.603197 2515 kubelet.go:2900] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Feb 13 20:02:16.336807 systemd[1]: Started sshd@60-10.0.0.134:22-10.0.0.1:45460.service - OpenSSH per-connection server daemon (10.0.0.1:45460). Feb 13 20:02:16.375587 sshd[3773]: Accepted publickey for core from 10.0.0.1 port 45460 ssh2: RSA SHA256:JGaeIbjf5IUSNUg1jnjkSVnSyX1OvNbTOClTMYH5eIk Feb 13 20:02:16.376769 sshd[3773]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 20:02:16.380798 systemd-logind[1425]: New session 61 of user core. Feb 13 20:02:16.394172 systemd[1]: Started session-61.scope - Session 61 of User core. Feb 13 20:02:16.504816 sshd[3773]: pam_unix(sshd:session): session closed for user core Feb 13 20:02:16.508036 systemd[1]: session-61.scope: Deactivated successfully. Feb 13 20:02:16.509282 systemd[1]: sshd@60-10.0.0.134:22-10.0.0.1:45460.service: Deactivated successfully. Feb 13 20:02:16.511113 systemd-logind[1425]: Session 61 logged out. Waiting for processes to exit. Feb 13 20:02:16.512480 systemd-logind[1425]: Removed session 61. 
Feb 13 20:02:16.604938 kubelet[2515]: E0213 20:02:16.604784 2515 kubelet.go:2900] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Feb 13 20:02:21.517686 systemd[1]: Started sshd@61-10.0.0.134:22-10.0.0.1:45470.service - OpenSSH per-connection server daemon (10.0.0.1:45470). Feb 13 20:02:21.556985 sshd[3790]: Accepted publickey for core from 10.0.0.1 port 45470 ssh2: RSA SHA256:JGaeIbjf5IUSNUg1jnjkSVnSyX1OvNbTOClTMYH5eIk Feb 13 20:02:21.558905 sshd[3790]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 20:02:21.566294 systemd-logind[1425]: New session 62 of user core. Feb 13 20:02:21.579206 systemd[1]: Started session-62.scope - Session 62 of User core. Feb 13 20:02:21.606575 kubelet[2515]: E0213 20:02:21.606507 2515 kubelet.go:2900] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Feb 13 20:02:21.699548 sshd[3790]: pam_unix(sshd:session): session closed for user core Feb 13 20:02:21.702877 systemd[1]: sshd@61-10.0.0.134:22-10.0.0.1:45470.service: Deactivated successfully. Feb 13 20:02:21.704536 systemd[1]: session-62.scope: Deactivated successfully. Feb 13 20:02:21.705983 systemd-logind[1425]: Session 62 logged out. Waiting for processes to exit. Feb 13 20:02:21.706867 systemd-logind[1425]: Removed session 62. Feb 13 20:02:22.494629 kubelet[2515]: E0213 20:02:22.494387 2515 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 20:02:22.497764 kubelet[2515]: E0213 20:02:22.497270 2515 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"install-cni\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/flannel/flannel:v0.22.0\\\"\"" pod="kube-flannel/kube-flannel-ds-42jkn" podUID="25643ab7-6101-403b-ac20-77f1fa4c78ee" Feb 13 20:02:26.494629 kubelet[2515]: E0213 20:02:26.494594 2515 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 20:02:26.607917 kubelet[2515]: E0213 20:02:26.607871 2515 kubelet.go:2900] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Feb 13 20:02:26.710617 systemd[1]: Started sshd@62-10.0.0.134:22-10.0.0.1:43774.service - OpenSSH per-connection server daemon (10.0.0.1:43774). Feb 13 20:02:26.749580 sshd[3805]: Accepted publickey for core from 10.0.0.1 port 43774 ssh2: RSA SHA256:JGaeIbjf5IUSNUg1jnjkSVnSyX1OvNbTOClTMYH5eIk Feb 13 20:02:26.750685 sshd[3805]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 20:02:26.754642 systemd-logind[1425]: New session 63 of user core. Feb 13 20:02:26.765168 systemd[1]: Started session-63.scope - Session 63 of User core. Feb 13 20:02:26.874138 sshd[3805]: pam_unix(sshd:session): session closed for user core Feb 13 20:02:26.877997 systemd[1]: sshd@62-10.0.0.134:22-10.0.0.1:43774.service: Deactivated successfully. Feb 13 20:02:26.879577 systemd[1]: session-63.scope: Deactivated successfully. Feb 13 20:02:26.880165 systemd-logind[1425]: Session 63 logged out. Waiting for processes to exit. 
Feb 13 20:02:26.880950 systemd-logind[1425]: Removed session 63. Feb 13 20:02:31.494757 kubelet[2515]: E0213 20:02:31.494714 2515 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 20:02:31.609068 kubelet[2515]: E0213 20:02:31.608995 2515 kubelet.go:2900] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Feb 13 20:02:31.884648 systemd[1]: Started sshd@63-10.0.0.134:22-10.0.0.1:43788.service - OpenSSH per-connection server daemon (10.0.0.1:43788). Feb 13 20:02:31.924034 sshd[3821]: Accepted publickey for core from 10.0.0.1 port 43788 ssh2: RSA SHA256:JGaeIbjf5IUSNUg1jnjkSVnSyX1OvNbTOClTMYH5eIk Feb 13 20:02:31.925344 sshd[3821]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 20:02:31.929400 systemd-logind[1425]: New session 64 of user core. Feb 13 20:02:31.941181 systemd[1]: Started session-64.scope - Session 64 of User core. Feb 13 20:02:32.048772 sshd[3821]: pam_unix(sshd:session): session closed for user core Feb 13 20:02:32.052056 systemd[1]: sshd@63-10.0.0.134:22-10.0.0.1:43788.service: Deactivated successfully. Feb 13 20:02:32.054288 systemd[1]: session-64.scope: Deactivated successfully. Feb 13 20:02:32.055006 systemd-logind[1425]: Session 64 logged out. Waiting for processes to exit. Feb 13 20:02:32.056600 systemd-logind[1425]: Removed session 64. Feb 13 20:02:36.494630 kubelet[2515]: E0213 20:02:36.494529 2515 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 20:02:36.609776 kubelet[2515]: E0213 20:02:36.609736 2515 kubelet.go:2900] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Feb 13 20:02:37.062193 systemd[1]: Started sshd@64-10.0.0.134:22-10.0.0.1:46580.service - OpenSSH per-connection server daemon (10.0.0.1:46580). Feb 13 20:02:37.100897 sshd[3838]: Accepted publickey for core from 10.0.0.1 port 46580 ssh2: RSA SHA256:JGaeIbjf5IUSNUg1jnjkSVnSyX1OvNbTOClTMYH5eIk Feb 13 20:02:37.102076 sshd[3838]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 20:02:37.105456 systemd-logind[1425]: New session 65 of user core. Feb 13 20:02:37.117189 systemd[1]: Started session-65.scope - Session 65 of User core. Feb 13 20:02:37.241662 sshd[3838]: pam_unix(sshd:session): session closed for user core Feb 13 20:02:37.244208 systemd[1]: sshd@64-10.0.0.134:22-10.0.0.1:46580.service: Deactivated successfully. Feb 13 20:02:37.245860 systemd[1]: session-65.scope: Deactivated successfully. Feb 13 20:02:37.247233 systemd-logind[1425]: Session 65 logged out. Waiting for processes to exit. Feb 13 20:02:37.248489 systemd-logind[1425]: Removed session 65. 
Feb 13 20:02:37.494014 kubelet[2515]: E0213 20:02:37.493561 2515 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 20:02:37.494462 kubelet[2515]: E0213 20:02:37.494425 2515 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"install-cni\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/flannel/flannel:v0.22.0\\\"\"" pod="kube-flannel/kube-flannel-ds-42jkn" podUID="25643ab7-6101-403b-ac20-77f1fa4c78ee" Feb 13 20:02:41.610739 kubelet[2515]: E0213 20:02:41.610656 2515 kubelet.go:2900] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Feb 13 20:02:42.253649 systemd[1]: Started sshd@65-10.0.0.134:22-10.0.0.1:46582.service - OpenSSH per-connection server daemon (10.0.0.1:46582). Feb 13 20:02:42.296030 sshd[3853]: Accepted publickey for core from 10.0.0.1 port 46582 ssh2: RSA SHA256:JGaeIbjf5IUSNUg1jnjkSVnSyX1OvNbTOClTMYH5eIk Feb 13 20:02:42.297348 sshd[3853]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 20:02:42.301517 systemd-logind[1425]: New session 66 of user core. Feb 13 20:02:42.309236 systemd[1]: Started session-66.scope - Session 66 of User core. Feb 13 20:02:42.432097 sshd[3853]: pam_unix(sshd:session): session closed for user core Feb 13 20:02:42.435955 systemd[1]: sshd@65-10.0.0.134:22-10.0.0.1:46582.service: Deactivated successfully. Feb 13 20:02:42.437777 systemd[1]: session-66.scope: Deactivated successfully. Feb 13 20:02:42.438422 systemd-logind[1425]: Session 66 logged out. Waiting for processes to exit. Feb 13 20:02:42.439660 systemd-logind[1425]: Removed session 66. Feb 13 20:02:46.611342 kubelet[2515]: E0213 20:02:46.611265 2515 kubelet.go:2900] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Feb 13 20:02:47.442672 systemd[1]: Started sshd@66-10.0.0.134:22-10.0.0.1:51100.service - OpenSSH per-connection server daemon (10.0.0.1:51100). Feb 13 20:02:47.481520 sshd[3868]: Accepted publickey for core from 10.0.0.1 port 51100 ssh2: RSA SHA256:JGaeIbjf5IUSNUg1jnjkSVnSyX1OvNbTOClTMYH5eIk Feb 13 20:02:47.482630 sshd[3868]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 20:02:47.486367 systemd-logind[1425]: New session 67 of user core. Feb 13 20:02:47.497191 systemd[1]: Started session-67.scope - Session 67 of User core. Feb 13 20:02:47.606084 sshd[3868]: pam_unix(sshd:session): session closed for user core Feb 13 20:02:47.608632 systemd[1]: sshd@66-10.0.0.134:22-10.0.0.1:51100.service: Deactivated successfully. Feb 13 20:02:47.611289 systemd[1]: session-67.scope: Deactivated successfully. Feb 13 20:02:47.613531 systemd-logind[1425]: Session 67 logged out. Waiting for processes to exit. Feb 13 20:02:47.615007 systemd-logind[1425]: Removed session 67. 
Feb 13 20:02:48.495137 kubelet[2515]: E0213 20:02:48.494907 2515 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 20:02:48.495950 kubelet[2515]: E0213 20:02:48.495884 2515 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"install-cni\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/flannel/flannel:v0.22.0\\\"\"" pod="kube-flannel/kube-flannel-ds-42jkn" podUID="25643ab7-6101-403b-ac20-77f1fa4c78ee" Feb 13 20:02:51.613210 kubelet[2515]: E0213 20:02:51.612895 2515 kubelet.go:2900] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Feb 13 20:02:52.617600 systemd[1]: Started sshd@67-10.0.0.134:22-10.0.0.1:53076.service - OpenSSH per-connection server daemon (10.0.0.1:53076). Feb 13 20:02:52.660194 sshd[3882]: Accepted publickey for core from 10.0.0.1 port 53076 ssh2: RSA SHA256:JGaeIbjf5IUSNUg1jnjkSVnSyX1OvNbTOClTMYH5eIk Feb 13 20:02:52.661937 sshd[3882]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 20:02:52.667729 systemd-logind[1425]: New session 68 of user core. Feb 13 20:02:52.675212 systemd[1]: Started session-68.scope - Session 68 of User core. Feb 13 20:02:52.789788 sshd[3882]: pam_unix(sshd:session): session closed for user core Feb 13 20:02:52.793138 systemd[1]: sshd@67-10.0.0.134:22-10.0.0.1:53076.service: Deactivated successfully. Feb 13 20:02:52.795014 systemd[1]: session-68.scope: Deactivated successfully. Feb 13 20:02:52.797557 systemd-logind[1425]: Session 68 logged out. Waiting for processes to exit. Feb 13 20:02:52.798490 systemd-logind[1425]: Removed session 68. Feb 13 20:02:56.613762 kubelet[2515]: E0213 20:02:56.613647 2515 kubelet.go:2900] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Feb 13 20:02:57.800747 systemd[1]: Started sshd@68-10.0.0.134:22-10.0.0.1:53090.service - OpenSSH per-connection server daemon (10.0.0.1:53090). Feb 13 20:02:57.840166 sshd[3896]: Accepted publickey for core from 10.0.0.1 port 53090 ssh2: RSA SHA256:JGaeIbjf5IUSNUg1jnjkSVnSyX1OvNbTOClTMYH5eIk Feb 13 20:02:57.841531 sshd[3896]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 20:02:57.845325 systemd-logind[1425]: New session 69 of user core. Feb 13 20:02:57.857387 systemd[1]: Started session-69.scope - Session 69 of User core. Feb 13 20:02:57.968253 sshd[3896]: pam_unix(sshd:session): session closed for user core Feb 13 20:02:57.971697 systemd[1]: sshd@68-10.0.0.134:22-10.0.0.1:53090.service: Deactivated successfully. Feb 13 20:02:57.974153 systemd[1]: session-69.scope: Deactivated successfully. Feb 13 20:02:57.974864 systemd-logind[1425]: Session 69 logged out. Waiting for processes to exit. Feb 13 20:02:57.975663 systemd-logind[1425]: Removed session 69. 
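By this point the failed pulls have settled into ImagePullBackOff, the kubelet state in which retry intervals double between pull attempts up to a cap (five minutes by default), which is why the journal shows periodic "Back-off pulling image" entries rather than fresh registry errors. The pod's event stream summarizes the same history more compactly than the journal; hypothetical invocations, assuming kubectl access to this cluster:

    kubectl -n kube-flannel describe pod kube-flannel-ds-42jkn
    kubectl -n kube-flannel get events \
      --field-selector involvedObject.name=kube-flannel-ds-42jkn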
Feb 13 20:02:59.494411 kubelet[2515]: E0213 20:02:59.494223 2515 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 20:02:59.494826 kubelet[2515]: E0213 20:02:59.494775 2515 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"install-cni\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/flannel/flannel:v0.22.0\\\"\"" pod="kube-flannel/kube-flannel-ds-42jkn" podUID="25643ab7-6101-403b-ac20-77f1fa4c78ee" Feb 13 20:03:01.615174 kubelet[2515]: E0213 20:03:01.615136 2515 kubelet.go:2900] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Feb 13 20:03:02.979618 systemd[1]: Started sshd@69-10.0.0.134:22-10.0.0.1:60382.service - OpenSSH per-connection server daemon (10.0.0.1:60382). Feb 13 20:03:03.020297 sshd[3911]: Accepted publickey for core from 10.0.0.1 port 60382 ssh2: RSA SHA256:JGaeIbjf5IUSNUg1jnjkSVnSyX1OvNbTOClTMYH5eIk Feb 13 20:03:03.021308 sshd[3911]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 20:03:03.024975 systemd-logind[1425]: New session 70 of user core. Feb 13 20:03:03.034193 systemd[1]: Started session-70.scope - Session 70 of User core. Feb 13 20:03:03.141068 sshd[3911]: pam_unix(sshd:session): session closed for user core Feb 13 20:03:03.144260 systemd[1]: sshd@69-10.0.0.134:22-10.0.0.1:60382.service: Deactivated successfully. Feb 13 20:03:03.147667 systemd[1]: session-70.scope: Deactivated successfully. Feb 13 20:03:03.148715 systemd-logind[1425]: Session 70 logged out. Waiting for processes to exit. Feb 13 20:03:03.149895 systemd-logind[1425]: Removed session 70. Feb 13 20:03:03.494336 kubelet[2515]: E0213 20:03:03.494301 2515 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 20:03:06.617338 kubelet[2515]: E0213 20:03:06.617300 2515 kubelet.go:2900] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Feb 13 20:03:08.155828 systemd[1]: Started sshd@70-10.0.0.134:22-10.0.0.1:60398.service - OpenSSH per-connection server daemon (10.0.0.1:60398). Feb 13 20:03:08.195193 sshd[3927]: Accepted publickey for core from 10.0.0.1 port 60398 ssh2: RSA SHA256:JGaeIbjf5IUSNUg1jnjkSVnSyX1OvNbTOClTMYH5eIk Feb 13 20:03:08.196396 sshd[3927]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 20:03:08.200286 systemd-logind[1425]: New session 71 of user core. Feb 13 20:03:08.214181 systemd[1]: Started session-71.scope - Session 71 of User core. Feb 13 20:03:08.322681 sshd[3927]: pam_unix(sshd:session): session closed for user core Feb 13 20:03:08.325138 systemd[1]: sshd@70-10.0.0.134:22-10.0.0.1:60398.service: Deactivated successfully. Feb 13 20:03:08.326692 systemd[1]: session-71.scope: Deactivated successfully. Feb 13 20:03:08.328397 systemd-logind[1425]: Session 71 logged out. Waiting for processes to exit. Feb 13 20:03:08.329337 systemd-logind[1425]: Removed session 71. 
Feb 13 20:03:11.618803 kubelet[2515]: E0213 20:03:11.618736 2515 kubelet.go:2900] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Feb 13 20:03:13.337747 systemd[1]: Started sshd@71-10.0.0.134:22-10.0.0.1:33914.service - OpenSSH per-connection server daemon (10.0.0.1:33914). Feb 13 20:03:13.376761 sshd[3942]: Accepted publickey for core from 10.0.0.1 port 33914 ssh2: RSA SHA256:JGaeIbjf5IUSNUg1jnjkSVnSyX1OvNbTOClTMYH5eIk Feb 13 20:03:13.378108 sshd[3942]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 20:03:13.381822 systemd-logind[1425]: New session 72 of user core. Feb 13 20:03:13.393212 systemd[1]: Started session-72.scope - Session 72 of User core. Feb 13 20:03:13.494035 kubelet[2515]: E0213 20:03:13.493659 2515 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 20:03:13.494903 containerd[1446]: time="2025-02-13T20:03:13.494722383Z" level=info msg="PullImage \"docker.io/flannel/flannel:v0.22.0\"" Feb 13 20:03:13.505106 sshd[3942]: pam_unix(sshd:session): session closed for user core Feb 13 20:03:13.508371 systemd[1]: sshd@71-10.0.0.134:22-10.0.0.1:33914.service: Deactivated successfully. Feb 13 20:03:13.509937 systemd[1]: session-72.scope: Deactivated successfully. Feb 13 20:03:13.511868 systemd-logind[1425]: Session 72 logged out. Waiting for processes to exit. Feb 13 20:03:13.513336 systemd-logind[1425]: Removed session 72. Feb 13 20:03:14.802944 containerd[1446]: time="2025-02-13T20:03:14.802880636Z" level=error msg="PullImage \"docker.io/flannel/flannel:v0.22.0\" failed" error="failed to pull and unpack image \"docker.io/flannel/flannel:v0.22.0\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/flannel/flannel/manifests/sha256:de969d65cb6e570ce2ff6f069132707b6da0e6d2adb43116d43743e2c31fd773: 429 Too Many Requests - Server message: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit" Feb 13 20:03:14.803610 containerd[1446]: time="2025-02-13T20:03:14.802961877Z" level=info msg="stop pulling image docker.io/flannel/flannel:v0.22.0: active requests=0, bytes read=13094" Feb 13 20:03:14.803657 kubelet[2515]: E0213 20:03:14.803101 2515 remote_image.go:180] "PullImage from image service failed" err="rpc error: code = Unknown desc = failed to pull and unpack image \"docker.io/flannel/flannel:v0.22.0\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/flannel/flannel/manifests/sha256:de969d65cb6e570ce2ff6f069132707b6da0e6d2adb43116d43743e2c31fd773: 429 Too Many Requests - Server message: toomanyrequests: You have reached your pull rate limit. 
You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit" image="docker.io/flannel/flannel:v0.22.0" Feb 13 20:03:14.803657 kubelet[2515]: E0213 20:03:14.803142 2515 kuberuntime_image.go:55] "Failed to pull image" err="failed to pull and unpack image \"docker.io/flannel/flannel:v0.22.0\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/flannel/flannel/manifests/sha256:de969d65cb6e570ce2ff6f069132707b6da0e6d2adb43116d43743e2c31fd773: 429 Too Many Requests - Server message: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit" image="docker.io/flannel/flannel:v0.22.0" Feb 13 20:03:14.803900 kubelet[2515]: E0213 20:03:14.803234 2515 kuberuntime_manager.go:1256] init container &Container{Name:install-cni,Image:docker.io/flannel/flannel:v0.22.0,Command:[cp],Args:[-f /etc/kube-flannel/cni-conf.json /etc/cni/net.d/10-flannel.conflist],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:cni,ReadOnly:false,MountPath:/etc/cni/net.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:flannel-cfg,ReadOnly:false,MountPath:/etc/kube-flannel/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-jdsq8,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod kube-flannel-ds-42jkn_kube-flannel(25643ab7-6101-403b-ac20-77f1fa4c78ee): ErrImagePull: failed to pull and unpack image "docker.io/flannel/flannel:v0.22.0": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/flannel/flannel/manifests/sha256:de969d65cb6e570ce2ff6f069132707b6da0e6d2adb43116d43743e2c31fd773: 429 Too Many Requests - Server message: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit Feb 13 20:03:14.803961 kubelet[2515]: E0213 20:03:14.803262 2515 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"install-cni\" with ErrImagePull: \"failed to pull and unpack image \\\"docker.io/flannel/flannel:v0.22.0\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/flannel/flannel/manifests/sha256:de969d65cb6e570ce2ff6f069132707b6da0e6d2adb43116d43743e2c31fd773: 429 Too Many Requests - Server message: toomanyrequests: You have reached your pull rate limit. 
You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit\"" pod="kube-flannel/kube-flannel-ds-42jkn" podUID="25643ab7-6101-403b-ac20-77f1fa4c78ee" Feb 13 20:03:16.620476 kubelet[2515]: E0213 20:03:16.620421 2515 kubelet.go:2900] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Feb 13 20:03:18.520424 systemd[1]: Started sshd@72-10.0.0.134:22-10.0.0.1:33928.service - OpenSSH per-connection server daemon (10.0.0.1:33928). Feb 13 20:03:18.559695 sshd[3958]: Accepted publickey for core from 10.0.0.1 port 33928 ssh2: RSA SHA256:JGaeIbjf5IUSNUg1jnjkSVnSyX1OvNbTOClTMYH5eIk Feb 13 20:03:18.561154 sshd[3958]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 20:03:18.565088 systemd-logind[1425]: New session 73 of user core. Feb 13 20:03:18.577237 systemd[1]: Started session-73.scope - Session 73 of User core. Feb 13 20:03:18.686066 sshd[3958]: pam_unix(sshd:session): session closed for user core Feb 13 20:03:18.690318 systemd[1]: sshd@72-10.0.0.134:22-10.0.0.1:33928.service: Deactivated successfully. Feb 13 20:03:18.692585 systemd[1]: session-73.scope: Deactivated successfully. Feb 13 20:03:18.693676 systemd-logind[1425]: Session 73 logged out. Waiting for processes to exit. Feb 13 20:03:18.694574 systemd-logind[1425]: Removed session 73. Feb 13 20:03:21.622162 kubelet[2515]: E0213 20:03:21.622113 2515 kubelet.go:2900] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Feb 13 20:03:23.697138 systemd[1]: Started sshd@73-10.0.0.134:22-10.0.0.1:33578.service - OpenSSH per-connection server daemon (10.0.0.1:33578). Feb 13 20:03:23.736374 sshd[3972]: Accepted publickey for core from 10.0.0.1 port 33578 ssh2: RSA SHA256:JGaeIbjf5IUSNUg1jnjkSVnSyX1OvNbTOClTMYH5eIk Feb 13 20:03:23.737587 sshd[3972]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 20:03:23.741874 systemd-logind[1425]: New session 74 of user core. Feb 13 20:03:23.755177 systemd[1]: Started session-74.scope - Session 74 of User core. Feb 13 20:03:23.861732 sshd[3972]: pam_unix(sshd:session): session closed for user core Feb 13 20:03:23.864848 systemd[1]: sshd@73-10.0.0.134:22-10.0.0.1:33578.service: Deactivated successfully. Feb 13 20:03:23.867125 systemd[1]: session-74.scope: Deactivated successfully. Feb 13 20:03:23.868553 systemd-logind[1425]: Session 74 logged out. Waiting for processes to exit. Feb 13 20:03:23.869341 systemd-logind[1425]: Removed session 74. 
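[Annotation] The pull failure above is Docker Hub's anonymous pull rate limit: registry-1.docker.io answers the manifest request with 429 Too Many Requests, containerd surfaces it as ErrImagePull, and the kubelet then re-queues the pull under exponential backoff (the ImagePullBackOff entries that follow). A minimal sketch of that schedule, assuming the kubelet's usual defaults of a 10-second initial delay that doubles per attempt and caps at 5 minutes:

```go
// Sketch of kubelet-style image-pull backoff after an ErrImagePull.
// The 10s initial delay, doubling factor, and 5m cap are assumed
// defaults, not values read from this node's configuration.
package main

import (
	"fmt"
	"time"
)

func pullBackoff(retry int) time.Duration {
	d := 10 * time.Second
	for i := 0; i < retry; i++ {
		d *= 2
		if d > 5*time.Minute {
			return 5 * time.Minute
		}
	}
	return d
}

func main() {
	for retry := 0; retry <= 6; retry++ {
		fmt.Printf("retry %d: wait %s before pulling again\n", retry, pullBackoff(retry))
	}
}
```

Note that the repeated "Error syncing pod, skipping ... ImagePullBackOff" lines below are the pod sync loop reporting that the backoff window is still open; they are not fresh pull attempts against the registry.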
Feb 13 20:03:26.494138 kubelet[2515]: E0213 20:03:26.494088 2515 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 20:03:26.495317 kubelet[2515]: E0213 20:03:26.495040 2515 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"install-cni\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/flannel/flannel:v0.22.0\\\"\"" pod="kube-flannel/kube-flannel-ds-42jkn" podUID="25643ab7-6101-403b-ac20-77f1fa4c78ee" Feb 13 20:03:26.623743 kubelet[2515]: E0213 20:03:26.623698 2515 kubelet.go:2900] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Feb 13 20:03:28.871689 systemd[1]: Started sshd@74-10.0.0.134:22-10.0.0.1:33590.service - OpenSSH per-connection server daemon (10.0.0.1:33590). Feb 13 20:03:28.911034 sshd[3987]: Accepted publickey for core from 10.0.0.1 port 33590 ssh2: RSA SHA256:JGaeIbjf5IUSNUg1jnjkSVnSyX1OvNbTOClTMYH5eIk Feb 13 20:03:28.912506 sshd[3987]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 20:03:28.916769 systemd-logind[1425]: New session 75 of user core. Feb 13 20:03:28.932256 systemd[1]: Started session-75.scope - Session 75 of User core. Feb 13 20:03:29.048320 sshd[3987]: pam_unix(sshd:session): session closed for user core Feb 13 20:03:29.051799 systemd[1]: sshd@74-10.0.0.134:22-10.0.0.1:33590.service: Deactivated successfully. Feb 13 20:03:29.053476 systemd[1]: session-75.scope: Deactivated successfully. Feb 13 20:03:29.054495 systemd-logind[1425]: Session 75 logged out. Waiting for processes to exit. Feb 13 20:03:29.056258 systemd-logind[1425]: Removed session 75. Feb 13 20:03:31.625114 kubelet[2515]: E0213 20:03:31.625068 2515 kubelet.go:2900] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Feb 13 20:03:34.063650 systemd[1]: Started sshd@75-10.0.0.134:22-10.0.0.1:59200.service - OpenSSH per-connection server daemon (10.0.0.1:59200). Feb 13 20:03:34.102507 sshd[4004]: Accepted publickey for core from 10.0.0.1 port 59200 ssh2: RSA SHA256:JGaeIbjf5IUSNUg1jnjkSVnSyX1OvNbTOClTMYH5eIk Feb 13 20:03:34.103716 sshd[4004]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 20:03:34.107748 systemd-logind[1425]: New session 76 of user core. Feb 13 20:03:34.118834 systemd[1]: Started session-76.scope - Session 76 of User core. Feb 13 20:03:34.226368 sshd[4004]: pam_unix(sshd:session): session closed for user core Feb 13 20:03:34.229560 systemd[1]: sshd@75-10.0.0.134:22-10.0.0.1:59200.service: Deactivated successfully. Feb 13 20:03:34.231210 systemd[1]: session-76.scope: Deactivated successfully. Feb 13 20:03:34.231817 systemd-logind[1425]: Session 76 logged out. Waiting for processes to exit. Feb 13 20:03:34.232826 systemd-logind[1425]: Removed session 76. Feb 13 20:03:36.626070 kubelet[2515]: E0213 20:03:36.626011 2515 kubelet.go:2900] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Feb 13 20:03:39.236687 systemd[1]: Started sshd@76-10.0.0.134:22-10.0.0.1:59216.service - OpenSSH per-connection server daemon (10.0.0.1:59216). 
Feb 13 20:03:39.276281 sshd[4018]: Accepted publickey for core from 10.0.0.1 port 59216 ssh2: RSA SHA256:JGaeIbjf5IUSNUg1jnjkSVnSyX1OvNbTOClTMYH5eIk Feb 13 20:03:39.277625 sshd[4018]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 20:03:39.281674 systemd-logind[1425]: New session 77 of user core. Feb 13 20:03:39.289195 systemd[1]: Started session-77.scope - Session 77 of User core. Feb 13 20:03:39.396428 sshd[4018]: pam_unix(sshd:session): session closed for user core Feb 13 20:03:39.400275 systemd[1]: sshd@76-10.0.0.134:22-10.0.0.1:59216.service: Deactivated successfully. Feb 13 20:03:39.401914 systemd[1]: session-77.scope: Deactivated successfully. Feb 13 20:03:39.403917 systemd-logind[1425]: Session 77 logged out. Waiting for processes to exit. Feb 13 20:03:39.404876 systemd-logind[1425]: Removed session 77. Feb 13 20:03:41.493914 kubelet[2515]: E0213 20:03:41.493861 2515 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 20:03:41.495113 kubelet[2515]: E0213 20:03:41.494464 2515 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"install-cni\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/flannel/flannel:v0.22.0\\\"\"" pod="kube-flannel/kube-flannel-ds-42jkn" podUID="25643ab7-6101-403b-ac20-77f1fa4c78ee" Feb 13 20:03:41.626837 kubelet[2515]: E0213 20:03:41.626797 2515 kubelet.go:2900] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Feb 13 20:03:42.493897 kubelet[2515]: E0213 20:03:42.493868 2515 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 20:03:44.411129 systemd[1]: Started sshd@77-10.0.0.134:22-10.0.0.1:58008.service - OpenSSH per-connection server daemon (10.0.0.1:58008). Feb 13 20:03:44.450170 sshd[4033]: Accepted publickey for core from 10.0.0.1 port 58008 ssh2: RSA SHA256:JGaeIbjf5IUSNUg1jnjkSVnSyX1OvNbTOClTMYH5eIk Feb 13 20:03:44.451424 sshd[4033]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 20:03:44.455665 systemd-logind[1425]: New session 78 of user core. Feb 13 20:03:44.466177 systemd[1]: Started session-78.scope - Session 78 of User core. Feb 13 20:03:44.573261 sshd[4033]: pam_unix(sshd:session): session closed for user core Feb 13 20:03:44.585618 systemd[1]: sshd@77-10.0.0.134:22-10.0.0.1:58008.service: Deactivated successfully. Feb 13 20:03:44.588319 systemd[1]: session-78.scope: Deactivated successfully. Feb 13 20:03:44.590814 systemd-logind[1425]: Session 78 logged out. Waiting for processes to exit. Feb 13 20:03:44.598600 systemd[1]: Started sshd@78-10.0.0.134:22-10.0.0.1:58024.service - OpenSSH per-connection server daemon (10.0.0.1:58024). Feb 13 20:03:44.600079 systemd-logind[1425]: Removed session 78. Feb 13 20:03:44.633296 sshd[4047]: Accepted publickey for core from 10.0.0.1 port 58024 ssh2: RSA SHA256:JGaeIbjf5IUSNUg1jnjkSVnSyX1OvNbTOClTMYH5eIk Feb 13 20:03:44.634539 sshd[4047]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 20:03:44.638170 systemd-logind[1425]: New session 79 of user core. Feb 13 20:03:44.650219 systemd[1]: Started session-79.scope - Session 79 of User core. 
Feb 13 20:03:44.825404 sshd[4047]: pam_unix(sshd:session): session closed for user core Feb 13 20:03:44.834564 systemd[1]: sshd@78-10.0.0.134:22-10.0.0.1:58024.service: Deactivated successfully. Feb 13 20:03:44.837449 systemd[1]: session-79.scope: Deactivated successfully. Feb 13 20:03:44.838785 systemd-logind[1425]: Session 79 logged out. Waiting for processes to exit. Feb 13 20:03:44.840104 systemd[1]: Started sshd@79-10.0.0.134:22-10.0.0.1:58028.service - OpenSSH per-connection server daemon (10.0.0.1:58028). Feb 13 20:03:44.840817 systemd-logind[1425]: Removed session 79. Feb 13 20:03:44.879030 sshd[4059]: Accepted publickey for core from 10.0.0.1 port 58028 ssh2: RSA SHA256:JGaeIbjf5IUSNUg1jnjkSVnSyX1OvNbTOClTMYH5eIk Feb 13 20:03:44.880285 sshd[4059]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 20:03:44.884235 systemd-logind[1425]: New session 80 of user core. Feb 13 20:03:44.895202 systemd[1]: Started session-80.scope - Session 80 of User core. Feb 13 20:03:45.905338 sshd[4059]: pam_unix(sshd:session): session closed for user core Feb 13 20:03:45.916570 systemd[1]: sshd@79-10.0.0.134:22-10.0.0.1:58028.service: Deactivated successfully. Feb 13 20:03:45.918413 systemd[1]: session-80.scope: Deactivated successfully. Feb 13 20:03:45.919871 systemd-logind[1425]: Session 80 logged out. Waiting for processes to exit. Feb 13 20:03:45.932279 systemd[1]: Started sshd@80-10.0.0.134:22-10.0.0.1:58036.service - OpenSSH per-connection server daemon (10.0.0.1:58036). Feb 13 20:03:45.933223 systemd-logind[1425]: Removed session 80. Feb 13 20:03:45.969084 sshd[4080]: Accepted publickey for core from 10.0.0.1 port 58036 ssh2: RSA SHA256:JGaeIbjf5IUSNUg1jnjkSVnSyX1OvNbTOClTMYH5eIk Feb 13 20:03:45.970415 sshd[4080]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 20:03:45.974770 systemd-logind[1425]: New session 81 of user core. Feb 13 20:03:45.989167 systemd[1]: Started session-81.scope - Session 81 of User core. Feb 13 20:03:46.196895 sshd[4080]: pam_unix(sshd:session): session closed for user core Feb 13 20:03:46.206872 systemd[1]: sshd@80-10.0.0.134:22-10.0.0.1:58036.service: Deactivated successfully. Feb 13 20:03:46.208857 systemd[1]: session-81.scope: Deactivated successfully. Feb 13 20:03:46.210289 systemd-logind[1425]: Session 81 logged out. Waiting for processes to exit. Feb 13 20:03:46.212413 systemd[1]: Started sshd@81-10.0.0.134:22-10.0.0.1:58046.service - OpenSSH per-connection server daemon (10.0.0.1:58046). Feb 13 20:03:46.213405 systemd-logind[1425]: Removed session 81. Feb 13 20:03:46.252273 sshd[4094]: Accepted publickey for core from 10.0.0.1 port 58046 ssh2: RSA SHA256:JGaeIbjf5IUSNUg1jnjkSVnSyX1OvNbTOClTMYH5eIk Feb 13 20:03:46.253576 sshd[4094]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 20:03:46.257277 systemd-logind[1425]: New session 82 of user core. Feb 13 20:03:46.264193 systemd[1]: Started session-82.scope - Session 82 of User core. Feb 13 20:03:46.370304 sshd[4094]: pam_unix(sshd:session): session closed for user core Feb 13 20:03:46.373799 systemd[1]: sshd@81-10.0.0.134:22-10.0.0.1:58046.service: Deactivated successfully. Feb 13 20:03:46.375482 systemd[1]: session-82.scope: Deactivated successfully. Feb 13 20:03:46.377581 systemd-logind[1425]: Session 82 logged out. Waiting for processes to exit. Feb 13 20:03:46.378679 systemd-logind[1425]: Removed session 82. 
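[Annotation] The numbered sshd@N-10.0.0.134:22-10.0.0.1:PORT.service units throughout this stretch are systemd's naming for socket-activated, per-connection services (Accept=yes on the ssh socket unit): each TCP connection gets its own short-lived unit, which is why every session is bracketed by a Started/Deactivated pair. A minimal sketch of the accept-per-connection pattern, with the port and handler as stand-ins:

```go
// Sketch of the accept-per-connection model behind the sshd@... units:
// one listener, one fresh handler per TCP connection. Port 2222 and the
// trivial handler are placeholders; the real units run sshd on port 22.
package main

import (
	"fmt"
	"net"
)

func main() {
	ln, err := net.Listen("tcp", "127.0.0.1:2222")
	if err != nil {
		panic(err)
	}
	for n := 1; ; n++ {
		conn, err := ln.Accept()
		if err != nil {
			continue
		}
		// systemd would spawn sshd@<n>-<local>:<port>-<peer>:<port>.service here.
		fmt.Printf("connection %d from %s: starting per-connection handler\n", n, conn.RemoteAddr())
		go conn.Close() // handler exits -> unit is deactivated
	}
}
```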
Feb 13 20:03:46.627583 kubelet[2515]: E0213 20:03:46.627537 2515 kubelet.go:2900] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Feb 13 20:03:50.494742 kubelet[2515]: E0213 20:03:50.494712 2515 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 20:03:51.381930 systemd[1]: Started sshd@82-10.0.0.134:22-10.0.0.1:58056.service - OpenSSH per-connection server daemon (10.0.0.1:58056). Feb 13 20:03:51.420756 sshd[4108]: Accepted publickey for core from 10.0.0.1 port 58056 ssh2: RSA SHA256:JGaeIbjf5IUSNUg1jnjkSVnSyX1OvNbTOClTMYH5eIk Feb 13 20:03:51.421967 sshd[4108]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 20:03:51.425461 systemd-logind[1425]: New session 83 of user core. Feb 13 20:03:51.436156 systemd[1]: Started session-83.scope - Session 83 of User core. Feb 13 20:03:51.542173 sshd[4108]: pam_unix(sshd:session): session closed for user core Feb 13 20:03:51.545607 systemd[1]: sshd@82-10.0.0.134:22-10.0.0.1:58056.service: Deactivated successfully. Feb 13 20:03:51.548714 systemd[1]: session-83.scope: Deactivated successfully. Feb 13 20:03:51.549437 systemd-logind[1425]: Session 83 logged out. Waiting for processes to exit. Feb 13 20:03:51.550223 systemd-logind[1425]: Removed session 83. Feb 13 20:03:51.628949 kubelet[2515]: E0213 20:03:51.628897 2515 kubelet.go:2900] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Feb 13 20:03:55.494155 kubelet[2515]: E0213 20:03:55.494126 2515 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 20:03:55.494982 kubelet[2515]: E0213 20:03:55.494761 2515 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"install-cni\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/flannel/flannel:v0.22.0\\\"\"" pod="kube-flannel/kube-flannel-ds-42jkn" podUID="25643ab7-6101-403b-ac20-77f1fa4c78ee" Feb 13 20:03:56.552548 systemd[1]: Started sshd@83-10.0.0.134:22-10.0.0.1:38732.service - OpenSSH per-connection server daemon (10.0.0.1:38732). Feb 13 20:03:56.595313 sshd[4122]: Accepted publickey for core from 10.0.0.1 port 38732 ssh2: RSA SHA256:JGaeIbjf5IUSNUg1jnjkSVnSyX1OvNbTOClTMYH5eIk Feb 13 20:03:56.596743 sshd[4122]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 20:03:56.602349 systemd-logind[1425]: New session 84 of user core. Feb 13 20:03:56.608211 systemd[1]: Started session-84.scope - Session 84 of User core. Feb 13 20:03:56.630518 kubelet[2515]: E0213 20:03:56.630473 2515 kubelet.go:2900] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Feb 13 20:03:56.715702 sshd[4122]: pam_unix(sshd:session): session closed for user core Feb 13 20:03:56.719859 systemd[1]: sshd@83-10.0.0.134:22-10.0.0.1:38732.service: Deactivated successfully. Feb 13 20:03:56.722411 systemd[1]: session-84.scope: Deactivated successfully. Feb 13 20:03:56.724586 systemd-logind[1425]: Session 84 logged out. Waiting for processes to exit. 
Feb 13 20:03:56.725385 systemd-logind[1425]: Removed session 84. Feb 13 20:04:01.494876 kubelet[2515]: E0213 20:04:01.494415 2515 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 20:04:01.631688 kubelet[2515]: E0213 20:04:01.631636 2515 kubelet.go:2900] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Feb 13 20:04:01.726681 systemd[1]: Started sshd@84-10.0.0.134:22-10.0.0.1:38748.service - OpenSSH per-connection server daemon (10.0.0.1:38748). Feb 13 20:04:01.765675 sshd[4137]: Accepted publickey for core from 10.0.0.1 port 38748 ssh2: RSA SHA256:JGaeIbjf5IUSNUg1jnjkSVnSyX1OvNbTOClTMYH5eIk Feb 13 20:04:01.766886 sshd[4137]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 20:04:01.770463 systemd-logind[1425]: New session 85 of user core. Feb 13 20:04:01.785194 systemd[1]: Started session-85.scope - Session 85 of User core. Feb 13 20:04:01.894174 sshd[4137]: pam_unix(sshd:session): session closed for user core Feb 13 20:04:01.897722 systemd[1]: sshd@84-10.0.0.134:22-10.0.0.1:38748.service: Deactivated successfully. Feb 13 20:04:01.899713 systemd[1]: session-85.scope: Deactivated successfully. Feb 13 20:04:01.900471 systemd-logind[1425]: Session 85 logged out. Waiting for processes to exit. Feb 13 20:04:01.901271 systemd-logind[1425]: Removed session 85. Feb 13 20:04:06.633160 kubelet[2515]: E0213 20:04:06.633103 2515 kubelet.go:2900] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Feb 13 20:04:06.907562 systemd[1]: Started sshd@85-10.0.0.134:22-10.0.0.1:47220.service - OpenSSH per-connection server daemon (10.0.0.1:47220). Feb 13 20:04:06.946816 sshd[4153]: Accepted publickey for core from 10.0.0.1 port 47220 ssh2: RSA SHA256:JGaeIbjf5IUSNUg1jnjkSVnSyX1OvNbTOClTMYH5eIk Feb 13 20:04:06.948331 sshd[4153]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 20:04:06.953295 systemd-logind[1425]: New session 86 of user core. Feb 13 20:04:06.969192 systemd[1]: Started session-86.scope - Session 86 of User core. Feb 13 20:04:07.074260 sshd[4153]: pam_unix(sshd:session): session closed for user core Feb 13 20:04:07.077787 systemd[1]: sshd@85-10.0.0.134:22-10.0.0.1:47220.service: Deactivated successfully. Feb 13 20:04:07.079369 systemd[1]: session-86.scope: Deactivated successfully. Feb 13 20:04:07.081223 systemd-logind[1425]: Session 86 logged out. Waiting for processes to exit. Feb 13 20:04:07.082035 systemd-logind[1425]: Removed session 86. 
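[Annotation] The recurring "Container runtime network not ready ... cni plugin not initialized" entry is a direct consequence of the failed install-cni init container above: until a network config such as 10-flannel.conflist lands in /etc/cni/net.d, the kubelet keeps the node's network unready. A minimal sketch of that readiness condition, assuming the default CNI config directory named in the pod spec from the log:

```go
// Sketch of the CNI readiness condition: NetworkReady stays false while
// /etc/cni/net.d holds no network config. The directory and the
// 10-flannel.conflist name come from the install-cni spec in the log.
package main

import (
	"fmt"
	"os"
	"path/filepath"
)

func main() {
	confs, _ := filepath.Glob("/etc/cni/net.d/*.conflist")
	if len(confs) == 0 {
		fmt.Println("NetworkReady=false: cni plugin not initialized")
		os.Exit(1)
	}
	fmt.Println("NetworkReady=true, configs:", confs)
}
```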
Feb 13 20:04:09.494080 kubelet[2515]: E0213 20:04:09.494007 2515 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 20:04:09.494611 kubelet[2515]: E0213 20:04:09.494565 2515 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"install-cni\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/flannel/flannel:v0.22.0\\\"\"" pod="kube-flannel/kube-flannel-ds-42jkn" podUID="25643ab7-6101-403b-ac20-77f1fa4c78ee" Feb 13 20:04:11.634656 kubelet[2515]: E0213 20:04:11.634569 2515 kubelet.go:2900] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Feb 13 20:04:12.085688 systemd[1]: Started sshd@86-10.0.0.134:22-10.0.0.1:47232.service - OpenSSH per-connection server daemon (10.0.0.1:47232). Feb 13 20:04:12.124412 sshd[4167]: Accepted publickey for core from 10.0.0.1 port 47232 ssh2: RSA SHA256:JGaeIbjf5IUSNUg1jnjkSVnSyX1OvNbTOClTMYH5eIk Feb 13 20:04:12.125635 sshd[4167]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 20:04:12.128952 systemd-logind[1425]: New session 87 of user core. Feb 13 20:04:12.134164 systemd[1]: Started session-87.scope - Session 87 of User core. Feb 13 20:04:12.240393 sshd[4167]: pam_unix(sshd:session): session closed for user core Feb 13 20:04:12.243591 systemd[1]: sshd@86-10.0.0.134:22-10.0.0.1:47232.service: Deactivated successfully. Feb 13 20:04:12.245404 systemd[1]: session-87.scope: Deactivated successfully. Feb 13 20:04:12.246172 systemd-logind[1425]: Session 87 logged out. Waiting for processes to exit. Feb 13 20:04:12.247085 systemd-logind[1425]: Removed session 87. Feb 13 20:04:16.635854 kubelet[2515]: E0213 20:04:16.635804 2515 kubelet.go:2900] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Feb 13 20:04:17.250539 systemd[1]: Started sshd@87-10.0.0.134:22-10.0.0.1:40700.service - OpenSSH per-connection server daemon (10.0.0.1:40700). Feb 13 20:04:17.289582 sshd[4183]: Accepted publickey for core from 10.0.0.1 port 40700 ssh2: RSA SHA256:JGaeIbjf5IUSNUg1jnjkSVnSyX1OvNbTOClTMYH5eIk Feb 13 20:04:17.290741 sshd[4183]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 20:04:17.294298 systemd-logind[1425]: New session 88 of user core. Feb 13 20:04:17.304159 systemd[1]: Started session-88.scope - Session 88 of User core. Feb 13 20:04:17.408533 sshd[4183]: pam_unix(sshd:session): session closed for user core Feb 13 20:04:17.411466 systemd[1]: sshd@87-10.0.0.134:22-10.0.0.1:40700.service: Deactivated successfully. Feb 13 20:04:17.413055 systemd[1]: session-88.scope: Deactivated successfully. Feb 13 20:04:17.413619 systemd-logind[1425]: Session 88 logged out. Waiting for processes to exit. Feb 13 20:04:17.414739 systemd-logind[1425]: Removed session 88. 
Feb 13 20:04:20.494337 kubelet[2515]: E0213 20:04:20.494306 2515 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 20:04:20.495301 kubelet[2515]: E0213 20:04:20.494977 2515 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"install-cni\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/flannel/flannel:v0.22.0\\\"\"" pod="kube-flannel/kube-flannel-ds-42jkn" podUID="25643ab7-6101-403b-ac20-77f1fa4c78ee" Feb 13 20:04:21.637222 kubelet[2515]: E0213 20:04:21.637175 2515 kubelet.go:2900] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Feb 13 20:04:22.423650 systemd[1]: Started sshd@88-10.0.0.134:22-10.0.0.1:40712.service - OpenSSH per-connection server daemon (10.0.0.1:40712). Feb 13 20:04:22.463459 sshd[4197]: Accepted publickey for core from 10.0.0.1 port 40712 ssh2: RSA SHA256:JGaeIbjf5IUSNUg1jnjkSVnSyX1OvNbTOClTMYH5eIk Feb 13 20:04:22.464451 sshd[4197]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 20:04:22.468071 systemd-logind[1425]: New session 89 of user core. Feb 13 20:04:22.478170 systemd[1]: Started session-89.scope - Session 89 of User core. Feb 13 20:04:22.494883 kubelet[2515]: E0213 20:04:22.494565 2515 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 20:04:22.582683 sshd[4197]: pam_unix(sshd:session): session closed for user core Feb 13 20:04:22.585783 systemd[1]: sshd@88-10.0.0.134:22-10.0.0.1:40712.service: Deactivated successfully. Feb 13 20:04:22.587973 systemd[1]: session-89.scope: Deactivated successfully. Feb 13 20:04:22.588871 systemd-logind[1425]: Session 89 logged out. Waiting for processes to exit. Feb 13 20:04:22.589623 systemd-logind[1425]: Removed session 89. Feb 13 20:04:26.638024 kubelet[2515]: E0213 20:04:26.637946 2515 kubelet.go:2900] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Feb 13 20:04:27.601626 systemd[1]: Started sshd@89-10.0.0.134:22-10.0.0.1:53184.service - OpenSSH per-connection server daemon (10.0.0.1:53184). Feb 13 20:04:27.674551 sshd[4211]: Accepted publickey for core from 10.0.0.1 port 53184 ssh2: RSA SHA256:JGaeIbjf5IUSNUg1jnjkSVnSyX1OvNbTOClTMYH5eIk Feb 13 20:04:27.675774 sshd[4211]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 20:04:27.683032 systemd-logind[1425]: New session 90 of user core. Feb 13 20:04:27.689171 systemd[1]: Started session-90.scope - Session 90 of User core. Feb 13 20:04:27.808366 sshd[4211]: pam_unix(sshd:session): session closed for user core Feb 13 20:04:27.810936 systemd[1]: sshd@89-10.0.0.134:22-10.0.0.1:53184.service: Deactivated successfully. Feb 13 20:04:27.812516 systemd[1]: session-90.scope: Deactivated successfully. Feb 13 20:04:27.814482 systemd-logind[1425]: Session 90 logged out. Waiting for processes to exit. Feb 13 20:04:27.815448 systemd-logind[1425]: Removed session 90. 
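[Annotation] The "Nameserver limits exceeded" warnings refer to the resolver's three-nameserver ceiling (glibc's MAXNS): the host's resolv.conf lists more than three servers, so the kubelet applies only the first three (1.1.1.1 1.0.0.1 8.8.8.8) and drops the rest. A minimal sketch of that truncation; the fourth address is a hypothetical extra entry, not taken from the log:

```go
// Sketch of the three-nameserver limit behind the dns.go warning.
// 9.9.9.9 is a made-up fourth entry standing in for whatever extra
// server this host's resolv.conf actually carries.
package main

import "fmt"

const maxNameservers = 3 // glibc MAXNS

func main() {
	configured := []string{"1.1.1.1", "1.0.0.1", "8.8.8.8", "9.9.9.9"}
	applied := configured
	if len(configured) > maxNameservers {
		applied = configured[:maxNameservers]
		fmt.Println("omitted:", configured[maxNameservers:])
	}
	fmt.Println("applied nameserver line:", applied)
}
```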
Feb 13 20:04:31.639243 kubelet[2515]: E0213 20:04:31.639178 2515 kubelet.go:2900] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Feb 13 20:04:32.818709 systemd[1]: Started sshd@90-10.0.0.134:22-10.0.0.1:40704.service - OpenSSH per-connection server daemon (10.0.0.1:40704). Feb 13 20:04:32.857666 sshd[4228]: Accepted publickey for core from 10.0.0.1 port 40704 ssh2: RSA SHA256:JGaeIbjf5IUSNUg1jnjkSVnSyX1OvNbTOClTMYH5eIk Feb 13 20:04:32.858841 sshd[4228]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 20:04:32.862635 systemd-logind[1425]: New session 91 of user core. Feb 13 20:04:32.873160 systemd[1]: Started session-91.scope - Session 91 of User core. Feb 13 20:04:32.982687 sshd[4228]: pam_unix(sshd:session): session closed for user core Feb 13 20:04:32.986244 systemd[1]: sshd@90-10.0.0.134:22-10.0.0.1:40704.service: Deactivated successfully. Feb 13 20:04:32.988084 systemd[1]: session-91.scope: Deactivated successfully. Feb 13 20:04:32.990492 systemd-logind[1425]: Session 91 logged out. Waiting for processes to exit. Feb 13 20:04:32.991261 systemd-logind[1425]: Removed session 91. Feb 13 20:04:34.494790 kubelet[2515]: E0213 20:04:34.494605 2515 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 20:04:34.495403 kubelet[2515]: E0213 20:04:34.495291 2515 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"install-cni\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/flannel/flannel:v0.22.0\\\"\"" pod="kube-flannel/kube-flannel-ds-42jkn" podUID="25643ab7-6101-403b-ac20-77f1fa4c78ee" Feb 13 20:04:36.640562 kubelet[2515]: E0213 20:04:36.640503 2515 kubelet.go:2900] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Feb 13 20:04:38.000675 systemd[1]: Started sshd@91-10.0.0.134:22-10.0.0.1:40708.service - OpenSSH per-connection server daemon (10.0.0.1:40708). Feb 13 20:04:38.039346 sshd[4245]: Accepted publickey for core from 10.0.0.1 port 40708 ssh2: RSA SHA256:JGaeIbjf5IUSNUg1jnjkSVnSyX1OvNbTOClTMYH5eIk Feb 13 20:04:38.040499 sshd[4245]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 20:04:38.043860 systemd-logind[1425]: New session 92 of user core. Feb 13 20:04:38.055256 systemd[1]: Started session-92.scope - Session 92 of User core. Feb 13 20:04:38.158669 sshd[4245]: pam_unix(sshd:session): session closed for user core Feb 13 20:04:38.162045 systemd[1]: sshd@91-10.0.0.134:22-10.0.0.1:40708.service: Deactivated successfully. Feb 13 20:04:38.165583 systemd[1]: session-92.scope: Deactivated successfully. Feb 13 20:04:38.166321 systemd-logind[1425]: Session 92 logged out. Waiting for processes to exit. Feb 13 20:04:38.167041 systemd-logind[1425]: Removed session 92. Feb 13 20:04:41.641653 kubelet[2515]: E0213 20:04:41.641595 2515 kubelet.go:2900] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Feb 13 20:04:43.169822 systemd[1]: Started sshd@92-10.0.0.134:22-10.0.0.1:56332.service - OpenSSH per-connection server daemon (10.0.0.1:56332). 
Feb 13 20:04:43.209267 sshd[4260]: Accepted publickey for core from 10.0.0.1 port 56332 ssh2: RSA SHA256:JGaeIbjf5IUSNUg1jnjkSVnSyX1OvNbTOClTMYH5eIk Feb 13 20:04:43.210632 sshd[4260]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 20:04:43.215245 systemd-logind[1425]: New session 93 of user core. Feb 13 20:04:43.222341 systemd[1]: Started session-93.scope - Session 93 of User core. Feb 13 20:04:43.336819 sshd[4260]: pam_unix(sshd:session): session closed for user core Feb 13 20:04:43.340639 systemd[1]: sshd@92-10.0.0.134:22-10.0.0.1:56332.service: Deactivated successfully. Feb 13 20:04:43.342348 systemd[1]: session-93.scope: Deactivated successfully. Feb 13 20:04:43.344183 systemd-logind[1425]: Session 93 logged out. Waiting for processes to exit. Feb 13 20:04:43.346073 systemd-logind[1425]: Removed session 93. Feb 13 20:04:46.494749 kubelet[2515]: E0213 20:04:46.494574 2515 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 20:04:46.495263 kubelet[2515]: E0213 20:04:46.495208 2515 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"install-cni\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/flannel/flannel:v0.22.0\\\"\"" pod="kube-flannel/kube-flannel-ds-42jkn" podUID="25643ab7-6101-403b-ac20-77f1fa4c78ee" Feb 13 20:04:46.642197 kubelet[2515]: E0213 20:04:46.642164 2515 kubelet.go:2900] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Feb 13 20:04:48.347763 systemd[1]: Started sshd@93-10.0.0.134:22-10.0.0.1:56338.service - OpenSSH per-connection server daemon (10.0.0.1:56338). Feb 13 20:04:48.387237 sshd[4275]: Accepted publickey for core from 10.0.0.1 port 56338 ssh2: RSA SHA256:JGaeIbjf5IUSNUg1jnjkSVnSyX1OvNbTOClTMYH5eIk Feb 13 20:04:48.388490 sshd[4275]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 20:04:48.392516 systemd-logind[1425]: New session 94 of user core. Feb 13 20:04:48.404173 systemd[1]: Started session-94.scope - Session 94 of User core. Feb 13 20:04:48.508313 sshd[4275]: pam_unix(sshd:session): session closed for user core Feb 13 20:04:48.512128 systemd[1]: sshd@93-10.0.0.134:22-10.0.0.1:56338.service: Deactivated successfully. Feb 13 20:04:48.513879 systemd[1]: session-94.scope: Deactivated successfully. Feb 13 20:04:48.514573 systemd-logind[1425]: Session 94 logged out. Waiting for processes to exit. Feb 13 20:04:48.515556 systemd-logind[1425]: Removed session 94. Feb 13 20:04:51.643313 kubelet[2515]: E0213 20:04:51.643269 2515 kubelet.go:2900] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Feb 13 20:04:53.518767 systemd[1]: Started sshd@94-10.0.0.134:22-10.0.0.1:37686.service - OpenSSH per-connection server daemon (10.0.0.1:37686). Feb 13 20:04:53.558864 sshd[4291]: Accepted publickey for core from 10.0.0.1 port 37686 ssh2: RSA SHA256:JGaeIbjf5IUSNUg1jnjkSVnSyX1OvNbTOClTMYH5eIk Feb 13 20:04:53.560132 sshd[4291]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 20:04:53.564079 systemd-logind[1425]: New session 95 of user core. Feb 13 20:04:53.570169 systemd[1]: Started session-95.scope - Session 95 of User core. 
Feb 13 20:04:53.673952 sshd[4291]: pam_unix(sshd:session): session closed for user core Feb 13 20:04:53.677450 systemd[1]: sshd@94-10.0.0.134:22-10.0.0.1:37686.service: Deactivated successfully. Feb 13 20:04:53.679214 systemd[1]: session-95.scope: Deactivated successfully. Feb 13 20:04:53.679830 systemd-logind[1425]: Session 95 logged out. Waiting for processes to exit. Feb 13 20:04:53.680603 systemd-logind[1425]: Removed session 95. Feb 13 20:04:56.644434 kubelet[2515]: E0213 20:04:56.644399 2515 kubelet.go:2900] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Feb 13 20:04:58.684647 systemd[1]: Started sshd@95-10.0.0.134:22-10.0.0.1:37688.service - OpenSSH per-connection server daemon (10.0.0.1:37688). Feb 13 20:04:58.724483 sshd[4306]: Accepted publickey for core from 10.0.0.1 port 37688 ssh2: RSA SHA256:JGaeIbjf5IUSNUg1jnjkSVnSyX1OvNbTOClTMYH5eIk Feb 13 20:04:58.725631 sshd[4306]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 20:04:58.729648 systemd-logind[1425]: New session 96 of user core. Feb 13 20:04:58.736238 systemd[1]: Started session-96.scope - Session 96 of User core. Feb 13 20:04:58.839845 sshd[4306]: pam_unix(sshd:session): session closed for user core Feb 13 20:04:58.843284 systemd[1]: sshd@95-10.0.0.134:22-10.0.0.1:37688.service: Deactivated successfully. Feb 13 20:04:58.845274 systemd[1]: session-96.scope: Deactivated successfully. Feb 13 20:04:58.847517 systemd-logind[1425]: Session 96 logged out. Waiting for processes to exit. Feb 13 20:04:58.848484 systemd-logind[1425]: Removed session 96. Feb 13 20:05:00.494500 kubelet[2515]: E0213 20:05:00.494316 2515 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 20:05:00.495011 kubelet[2515]: E0213 20:05:00.494970 2515 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"install-cni\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/flannel/flannel:v0.22.0\\\"\"" pod="kube-flannel/kube-flannel-ds-42jkn" podUID="25643ab7-6101-403b-ac20-77f1fa4c78ee" Feb 13 20:05:01.645504 kubelet[2515]: E0213 20:05:01.645459 2515 kubelet.go:2900] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Feb 13 20:05:03.850756 systemd[1]: Started sshd@96-10.0.0.134:22-10.0.0.1:36340.service - OpenSSH per-connection server daemon (10.0.0.1:36340). Feb 13 20:05:03.889689 sshd[4324]: Accepted publickey for core from 10.0.0.1 port 36340 ssh2: RSA SHA256:JGaeIbjf5IUSNUg1jnjkSVnSyX1OvNbTOClTMYH5eIk Feb 13 20:05:03.890869 sshd[4324]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 20:05:03.894506 systemd-logind[1425]: New session 97 of user core. Feb 13 20:05:03.905282 systemd[1]: Started session-97.scope - Session 97 of User core. Feb 13 20:05:04.008135 sshd[4324]: pam_unix(sshd:session): session closed for user core Feb 13 20:05:04.011560 systemd[1]: sshd@96-10.0.0.134:22-10.0.0.1:36340.service: Deactivated successfully. Feb 13 20:05:04.014251 systemd[1]: session-97.scope: Deactivated successfully. Feb 13 20:05:04.015041 systemd-logind[1425]: Session 97 logged out. Waiting for processes to exit. Feb 13 20:05:04.015795 systemd-logind[1425]: Removed session 97. 
Feb 13 20:05:06.646422 kubelet[2515]: E0213 20:05:06.646373 2515 kubelet.go:2900] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Feb 13 20:05:09.018756 systemd[1]: Started sshd@97-10.0.0.134:22-10.0.0.1:36342.service - OpenSSH per-connection server daemon (10.0.0.1:36342). Feb 13 20:05:09.057734 sshd[4338]: Accepted publickey for core from 10.0.0.1 port 36342 ssh2: RSA SHA256:JGaeIbjf5IUSNUg1jnjkSVnSyX1OvNbTOClTMYH5eIk Feb 13 20:05:09.058862 sshd[4338]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 20:05:09.062623 systemd-logind[1425]: New session 98 of user core. Feb 13 20:05:09.072245 systemd[1]: Started session-98.scope - Session 98 of User core. Feb 13 20:05:09.174589 sshd[4338]: pam_unix(sshd:session): session closed for user core Feb 13 20:05:09.177831 systemd[1]: sshd@97-10.0.0.134:22-10.0.0.1:36342.service: Deactivated successfully. Feb 13 20:05:09.179884 systemd[1]: session-98.scope: Deactivated successfully. Feb 13 20:05:09.180811 systemd-logind[1425]: Session 98 logged out. Waiting for processes to exit. Feb 13 20:05:09.181623 systemd-logind[1425]: Removed session 98. Feb 13 20:05:09.494033 kubelet[2515]: E0213 20:05:09.493926 2515 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 20:05:10.494671 kubelet[2515]: E0213 20:05:10.494640 2515 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 20:05:11.493717 kubelet[2515]: E0213 20:05:11.493678 2515 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 20:05:11.647518 kubelet[2515]: E0213 20:05:11.647442 2515 kubelet.go:2900] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Feb 13 20:05:13.494245 kubelet[2515]: E0213 20:05:13.494204 2515 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 20:05:13.494816 kubelet[2515]: E0213 20:05:13.494789 2515 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"install-cni\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/flannel/flannel:v0.22.0\\\"\"" pod="kube-flannel/kube-flannel-ds-42jkn" podUID="25643ab7-6101-403b-ac20-77f1fa4c78ee" Feb 13 20:05:14.188347 systemd[1]: Started sshd@98-10.0.0.134:22-10.0.0.1:46358.service - OpenSSH per-connection server daemon (10.0.0.1:46358). Feb 13 20:05:14.227191 sshd[4352]: Accepted publickey for core from 10.0.0.1 port 46358 ssh2: RSA SHA256:JGaeIbjf5IUSNUg1jnjkSVnSyX1OvNbTOClTMYH5eIk Feb 13 20:05:14.228390 sshd[4352]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 20:05:14.232380 systemd-logind[1425]: New session 99 of user core. Feb 13 20:05:14.239145 systemd[1]: Started session-99.scope - Session 99 of User core. 
Feb 13 20:05:14.340244 sshd[4352]: pam_unix(sshd:session): session closed for user core Feb 13 20:05:14.343847 systemd[1]: sshd@98-10.0.0.134:22-10.0.0.1:46358.service: Deactivated successfully. Feb 13 20:05:14.345481 systemd[1]: session-99.scope: Deactivated successfully. Feb 13 20:05:14.346205 systemd-logind[1425]: Session 99 logged out. Waiting for processes to exit. Feb 13 20:05:14.347000 systemd-logind[1425]: Removed session 99. Feb 13 20:05:16.649746 kubelet[2515]: E0213 20:05:16.649691 2515 kubelet.go:2900] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Feb 13 20:05:19.350682 systemd[1]: Started sshd@99-10.0.0.134:22-10.0.0.1:46372.service - OpenSSH per-connection server daemon (10.0.0.1:46372). Feb 13 20:05:19.389909 sshd[4368]: Accepted publickey for core from 10.0.0.1 port 46372 ssh2: RSA SHA256:JGaeIbjf5IUSNUg1jnjkSVnSyX1OvNbTOClTMYH5eIk Feb 13 20:05:19.391115 sshd[4368]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 20:05:19.394479 systemd-logind[1425]: New session 100 of user core. Feb 13 20:05:19.408227 systemd[1]: Started session-100.scope - Session 100 of User core. Feb 13 20:05:19.513650 sshd[4368]: pam_unix(sshd:session): session closed for user core Feb 13 20:05:19.516671 systemd[1]: sshd@99-10.0.0.134:22-10.0.0.1:46372.service: Deactivated successfully. Feb 13 20:05:19.518181 systemd[1]: session-100.scope: Deactivated successfully. Feb 13 20:05:19.519572 systemd-logind[1425]: Session 100 logged out. Waiting for processes to exit. Feb 13 20:05:19.520318 systemd-logind[1425]: Removed session 100. Feb 13 20:05:21.650673 kubelet[2515]: E0213 20:05:21.650600 2515 kubelet.go:2900] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Feb 13 20:05:24.494396 kubelet[2515]: E0213 20:05:24.494223 2515 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 20:05:24.494922 kubelet[2515]: E0213 20:05:24.494708 2515 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"install-cni\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/flannel/flannel:v0.22.0\\\"\"" pod="kube-flannel/kube-flannel-ds-42jkn" podUID="25643ab7-6101-403b-ac20-77f1fa4c78ee" Feb 13 20:05:24.524612 systemd[1]: Started sshd@100-10.0.0.134:22-10.0.0.1:40970.service - OpenSSH per-connection server daemon (10.0.0.1:40970). Feb 13 20:05:24.563481 sshd[4383]: Accepted publickey for core from 10.0.0.1 port 40970 ssh2: RSA SHA256:JGaeIbjf5IUSNUg1jnjkSVnSyX1OvNbTOClTMYH5eIk Feb 13 20:05:24.564674 sshd[4383]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 20:05:24.568704 systemd-logind[1425]: New session 101 of user core. Feb 13 20:05:24.579169 systemd[1]: Started session-101.scope - Session 101 of User core. Feb 13 20:05:24.682485 sshd[4383]: pam_unix(sshd:session): session closed for user core Feb 13 20:05:24.685597 systemd[1]: sshd@100-10.0.0.134:22-10.0.0.1:40970.service: Deactivated successfully. Feb 13 20:05:24.687354 systemd[1]: session-101.scope: Deactivated successfully. Feb 13 20:05:24.688008 systemd-logind[1425]: Session 101 logged out. Waiting for processes to exit. Feb 13 20:05:24.689128 systemd-logind[1425]: Removed session 101. 
Feb 13 20:05:26.652289 kubelet[2515]: E0213 20:05:26.652247 2515 kubelet.go:2900] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Feb 13 20:05:27.494414 kubelet[2515]: E0213 20:05:27.494378 2515 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 20:05:29.692664 systemd[1]: Started sshd@101-10.0.0.134:22-10.0.0.1:40982.service - OpenSSH per-connection server daemon (10.0.0.1:40982). Feb 13 20:05:29.732079 sshd[4398]: Accepted publickey for core from 10.0.0.1 port 40982 ssh2: RSA SHA256:JGaeIbjf5IUSNUg1jnjkSVnSyX1OvNbTOClTMYH5eIk Feb 13 20:05:29.732752 sshd[4398]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 20:05:29.736670 systemd-logind[1425]: New session 102 of user core. Feb 13 20:05:29.742167 systemd[1]: Started session-102.scope - Session 102 of User core. Feb 13 20:05:29.845458 sshd[4398]: pam_unix(sshd:session): session closed for user core Feb 13 20:05:29.848904 systemd[1]: sshd@101-10.0.0.134:22-10.0.0.1:40982.service: Deactivated successfully. Feb 13 20:05:29.850715 systemd[1]: session-102.scope: Deactivated successfully. Feb 13 20:05:29.852165 systemd-logind[1425]: Session 102 logged out. Waiting for processes to exit. Feb 13 20:05:29.853057 systemd-logind[1425]: Removed session 102. Feb 13 20:05:31.653983 kubelet[2515]: E0213 20:05:31.653930 2515 kubelet.go:2900] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Feb 13 20:05:34.856615 systemd[1]: Started sshd@102-10.0.0.134:22-10.0.0.1:60552.service - OpenSSH per-connection server daemon (10.0.0.1:60552). Feb 13 20:05:34.896378 sshd[4414]: Accepted publickey for core from 10.0.0.1 port 60552 ssh2: RSA SHA256:JGaeIbjf5IUSNUg1jnjkSVnSyX1OvNbTOClTMYH5eIk Feb 13 20:05:34.897599 sshd[4414]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 20:05:34.901148 systemd-logind[1425]: New session 103 of user core. Feb 13 20:05:34.913180 systemd[1]: Started session-103.scope - Session 103 of User core. Feb 13 20:05:35.018274 sshd[4414]: pam_unix(sshd:session): session closed for user core Feb 13 20:05:35.021769 systemd[1]: sshd@102-10.0.0.134:22-10.0.0.1:60552.service: Deactivated successfully. Feb 13 20:05:35.023375 systemd[1]: session-103.scope: Deactivated successfully. Feb 13 20:05:35.024003 systemd-logind[1425]: Session 103 logged out. Waiting for processes to exit. Feb 13 20:05:35.026511 systemd-logind[1425]: Removed session 103. 
Feb 13 20:05:36.493841 kubelet[2515]: E0213 20:05:36.493642 2515 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 20:05:36.494403 kubelet[2515]: E0213 20:05:36.494344 2515 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"install-cni\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/flannel/flannel:v0.22.0\\\"\"" pod="kube-flannel/kube-flannel-ds-42jkn" podUID="25643ab7-6101-403b-ac20-77f1fa4c78ee" Feb 13 20:05:36.654822 kubelet[2515]: E0213 20:05:36.654783 2515 kubelet.go:2900] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Feb 13 20:05:40.029620 systemd[1]: Started sshd@103-10.0.0.134:22-10.0.0.1:60560.service - OpenSSH per-connection server daemon (10.0.0.1:60560). Feb 13 20:05:40.068196 sshd[4428]: Accepted publickey for core from 10.0.0.1 port 60560 ssh2: RSA SHA256:JGaeIbjf5IUSNUg1jnjkSVnSyX1OvNbTOClTMYH5eIk Feb 13 20:05:40.069377 sshd[4428]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 20:05:40.072798 systemd-logind[1425]: New session 104 of user core. Feb 13 20:05:40.082174 systemd[1]: Started session-104.scope - Session 104 of User core. Feb 13 20:05:40.183189 sshd[4428]: pam_unix(sshd:session): session closed for user core Feb 13 20:05:40.186551 systemd[1]: sshd@103-10.0.0.134:22-10.0.0.1:60560.service: Deactivated successfully. Feb 13 20:05:40.188202 systemd[1]: session-104.scope: Deactivated successfully. Feb 13 20:05:40.188774 systemd-logind[1425]: Session 104 logged out. Waiting for processes to exit. Feb 13 20:05:40.189500 systemd-logind[1425]: Removed session 104. Feb 13 20:05:41.656508 kubelet[2515]: E0213 20:05:41.656415 2515 kubelet.go:2900] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Feb 13 20:05:45.193615 systemd[1]: Started sshd@104-10.0.0.134:22-10.0.0.1:40002.service - OpenSSH per-connection server daemon (10.0.0.1:40002). Feb 13 20:05:45.232360 sshd[4443]: Accepted publickey for core from 10.0.0.1 port 40002 ssh2: RSA SHA256:JGaeIbjf5IUSNUg1jnjkSVnSyX1OvNbTOClTMYH5eIk Feb 13 20:05:45.233542 sshd[4443]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 20:05:45.237159 systemd-logind[1425]: New session 105 of user core. Feb 13 20:05:45.248236 systemd[1]: Started session-105.scope - Session 105 of User core. Feb 13 20:05:45.348784 sshd[4443]: pam_unix(sshd:session): session closed for user core Feb 13 20:05:45.351357 systemd-logind[1425]: Session 105 logged out. Waiting for processes to exit. Feb 13 20:05:45.351505 systemd[1]: sshd@104-10.0.0.134:22-10.0.0.1:40002.service: Deactivated successfully. Feb 13 20:05:45.352961 systemd[1]: session-105.scope: Deactivated successfully. Feb 13 20:05:45.354408 systemd-logind[1425]: Removed session 105. Feb 13 20:05:46.657907 kubelet[2515]: E0213 20:05:46.657861 2515 kubelet.go:2900] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Feb 13 20:05:50.363648 systemd[1]: Started sshd@105-10.0.0.134:22-10.0.0.1:40004.service - OpenSSH per-connection server daemon (10.0.0.1:40004). 
Feb 13 20:05:50.402430 sshd[4457]: Accepted publickey for core from 10.0.0.1 port 40004 ssh2: RSA SHA256:JGaeIbjf5IUSNUg1jnjkSVnSyX1OvNbTOClTMYH5eIk Feb 13 20:05:50.403636 sshd[4457]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 20:05:50.407361 systemd-logind[1425]: New session 106 of user core. Feb 13 20:05:50.417161 systemd[1]: Started session-106.scope - Session 106 of User core. Feb 13 20:05:50.521480 sshd[4457]: pam_unix(sshd:session): session closed for user core Feb 13 20:05:50.524612 systemd[1]: sshd@105-10.0.0.134:22-10.0.0.1:40004.service: Deactivated successfully. Feb 13 20:05:50.527365 systemd[1]: session-106.scope: Deactivated successfully. Feb 13 20:05:50.528484 systemd-logind[1425]: Session 106 logged out. Waiting for processes to exit. Feb 13 20:05:50.529320 systemd-logind[1425]: Removed session 106. Feb 13 20:05:51.493575 kubelet[2515]: E0213 20:05:51.493521 2515 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 20:05:51.494344 kubelet[2515]: E0213 20:05:51.494132 2515 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"install-cni\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/flannel/flannel:v0.22.0\\\"\"" pod="kube-flannel/kube-flannel-ds-42jkn" podUID="25643ab7-6101-403b-ac20-77f1fa4c78ee" Feb 13 20:05:51.659404 kubelet[2515]: E0213 20:05:51.659354 2515 kubelet.go:2900] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Feb 13 20:05:55.533535 systemd[1]: Started sshd@106-10.0.0.134:22-10.0.0.1:36582.service - OpenSSH per-connection server daemon (10.0.0.1:36582). Feb 13 20:05:55.573995 sshd[4471]: Accepted publickey for core from 10.0.0.1 port 36582 ssh2: RSA SHA256:JGaeIbjf5IUSNUg1jnjkSVnSyX1OvNbTOClTMYH5eIk Feb 13 20:05:55.574514 sshd[4471]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 20:05:55.578181 systemd-logind[1425]: New session 107 of user core. Feb 13 20:05:55.592231 systemd[1]: Started session-107.scope - Session 107 of User core. Feb 13 20:05:55.697782 sshd[4471]: pam_unix(sshd:session): session closed for user core Feb 13 20:05:55.701024 systemd-logind[1425]: Session 107 logged out. Waiting for processes to exit. Feb 13 20:05:55.701331 systemd[1]: sshd@106-10.0.0.134:22-10.0.0.1:36582.service: Deactivated successfully. Feb 13 20:05:55.702973 systemd[1]: session-107.scope: Deactivated successfully. Feb 13 20:05:55.703798 systemd-logind[1425]: Removed session 107. Feb 13 20:05:56.660442 kubelet[2515]: E0213 20:05:56.660359 2515 kubelet.go:2900] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Feb 13 20:06:00.708656 systemd[1]: Started sshd@107-10.0.0.134:22-10.0.0.1:36594.service - OpenSSH per-connection server daemon (10.0.0.1:36594). Feb 13 20:06:00.747791 sshd[4486]: Accepted publickey for core from 10.0.0.1 port 36594 ssh2: RSA SHA256:JGaeIbjf5IUSNUg1jnjkSVnSyX1OvNbTOClTMYH5eIk Feb 13 20:06:00.748935 sshd[4486]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 20:06:00.752576 systemd-logind[1425]: New session 108 of user core. Feb 13 20:06:00.764161 systemd[1]: Started session-108.scope - Session 108 of User core. 
Feb 13 20:06:00.863723 sshd[4486]: pam_unix(sshd:session): session closed for user core Feb 13 20:06:00.866981 systemd[1]: sshd@107-10.0.0.134:22-10.0.0.1:36594.service: Deactivated successfully. Feb 13 20:06:00.868657 systemd[1]: session-108.scope: Deactivated successfully. Feb 13 20:06:00.869256 systemd-logind[1425]: Session 108 logged out. Waiting for processes to exit. Feb 13 20:06:00.870121 systemd-logind[1425]: Removed session 108. Feb 13 20:06:01.661067 kubelet[2515]: E0213 20:06:01.661012 2515 kubelet.go:2900] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Feb 13 20:06:05.877779 systemd[1]: Started sshd@108-10.0.0.134:22-10.0.0.1:52638.service - OpenSSH per-connection server daemon (10.0.0.1:52638). Feb 13 20:06:05.917945 sshd[4502]: Accepted publickey for core from 10.0.0.1 port 52638 ssh2: RSA SHA256:JGaeIbjf5IUSNUg1jnjkSVnSyX1OvNbTOClTMYH5eIk Feb 13 20:06:05.919184 sshd[4502]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 20:06:05.923032 systemd-logind[1425]: New session 109 of user core. Feb 13 20:06:05.939213 systemd[1]: Started session-109.scope - Session 109 of User core. Feb 13 20:06:06.042901 sshd[4502]: pam_unix(sshd:session): session closed for user core Feb 13 20:06:06.046197 systemd[1]: sshd@108-10.0.0.134:22-10.0.0.1:52638.service: Deactivated successfully. Feb 13 20:06:06.047958 systemd[1]: session-109.scope: Deactivated successfully. Feb 13 20:06:06.049635 systemd-logind[1425]: Session 109 logged out. Waiting for processes to exit. Feb 13 20:06:06.050581 systemd-logind[1425]: Removed session 109. Feb 13 20:06:06.493639 kubelet[2515]: E0213 20:06:06.493607 2515 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 20:06:06.494473 kubelet[2515]: E0213 20:06:06.494244 2515 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"install-cni\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/flannel/flannel:v0.22.0\\\"\"" pod="kube-flannel/kube-flannel-ds-42jkn" podUID="25643ab7-6101-403b-ac20-77f1fa4c78ee" Feb 13 20:06:06.662471 kubelet[2515]: E0213 20:06:06.662427 2515 kubelet.go:2900] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Feb 13 20:06:11.053758 systemd[1]: Started sshd@109-10.0.0.134:22-10.0.0.1:52654.service - OpenSSH per-connection server daemon (10.0.0.1:52654). Feb 13 20:06:11.095841 sshd[4516]: Accepted publickey for core from 10.0.0.1 port 52654 ssh2: RSA SHA256:JGaeIbjf5IUSNUg1jnjkSVnSyX1OvNbTOClTMYH5eIk Feb 13 20:06:11.097045 sshd[4516]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 20:06:11.100800 systemd-logind[1425]: New session 110 of user core. Feb 13 20:06:11.113180 systemd[1]: Started session-110.scope - Session 110 of User core. Feb 13 20:06:11.216793 sshd[4516]: pam_unix(sshd:session): session closed for user core Feb 13 20:06:11.219879 systemd[1]: sshd@109-10.0.0.134:22-10.0.0.1:52654.service: Deactivated successfully. Feb 13 20:06:11.222590 systemd[1]: session-110.scope: Deactivated successfully. Feb 13 20:06:11.224715 systemd-logind[1425]: Session 110 logged out. Waiting for processes to exit. Feb 13 20:06:11.225552 systemd-logind[1425]: Removed session 110. 
Feb 13 20:06:11.663777 kubelet[2515]: E0213 20:06:11.663736 2515 kubelet.go:2900] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Feb 13 20:06:14.494729 kubelet[2515]: E0213 20:06:14.494688 2515 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 20:06:16.227484 systemd[1]: Started sshd@110-10.0.0.134:22-10.0.0.1:36054.service - OpenSSH per-connection server daemon (10.0.0.1:36054). Feb 13 20:06:16.266539 sshd[4530]: Accepted publickey for core from 10.0.0.1 port 36054 ssh2: RSA SHA256:JGaeIbjf5IUSNUg1jnjkSVnSyX1OvNbTOClTMYH5eIk Feb 13 20:06:16.267743 sshd[4530]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 20:06:16.271229 systemd-logind[1425]: New session 111 of user core. Feb 13 20:06:16.285178 systemd[1]: Started session-111.scope - Session 111 of User core. Feb 13 20:06:16.388702 sshd[4530]: pam_unix(sshd:session): session closed for user core Feb 13 20:06:16.391757 systemd[1]: sshd@110-10.0.0.134:22-10.0.0.1:36054.service: Deactivated successfully. Feb 13 20:06:16.393347 systemd[1]: session-111.scope: Deactivated successfully. Feb 13 20:06:16.393890 systemd-logind[1425]: Session 111 logged out. Waiting for processes to exit. Feb 13 20:06:16.394660 systemd-logind[1425]: Removed session 111. Feb 13 20:06:16.665625 kubelet[2515]: E0213 20:06:16.665448 2515 kubelet.go:2900] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Feb 13 20:06:18.105424 update_engine[1428]: I20250213 20:06:18.105355 1428 prefs.cc:52] certificate-report-to-send-update not present in /var/lib/update_engine/prefs Feb 13 20:06:18.105424 update_engine[1428]: I20250213 20:06:18.105412 1428 prefs.cc:52] certificate-report-to-send-download not present in /var/lib/update_engine/prefs Feb 13 20:06:18.105781 update_engine[1428]: I20250213 20:06:18.105651 1428 prefs.cc:52] aleph-version not present in /var/lib/update_engine/prefs Feb 13 20:06:18.106141 update_engine[1428]: I20250213 20:06:18.106040 1428 omaha_request_params.cc:62] Current group set to lts Feb 13 20:06:18.106141 update_engine[1428]: I20250213 20:06:18.106137 1428 update_attempter.cc:499] Already updated boot flags. Skipping. Feb 13 20:06:18.106220 update_engine[1428]: I20250213 20:06:18.106148 1428 update_attempter.cc:643] Scheduling an action processor start. 
Feb 13 20:06:18.106220 update_engine[1428]: I20250213 20:06:18.106164 1428 action_processor.cc:36] ActionProcessor::StartProcessing: OmahaRequestAction
Feb 13 20:06:18.106220 update_engine[1428]: I20250213 20:06:18.106191 1428 prefs.cc:52] previous-version not present in /var/lib/update_engine/prefs
Feb 13 20:06:18.106292 update_engine[1428]: I20250213 20:06:18.106238 1428 omaha_request_action.cc:271] Posting an Omaha request to disabled
Feb 13 20:06:18.106292 update_engine[1428]: I20250213 20:06:18.106246 1428 omaha_request_action.cc:272] Request:
Feb 13 20:06:18.106292 update_engine[1428]:
Feb 13 20:06:18.106292 update_engine[1428]:
Feb 13 20:06:18.106292 update_engine[1428]:
Feb 13 20:06:18.106292 update_engine[1428]:
Feb 13 20:06:18.106292 update_engine[1428]:
Feb 13 20:06:18.106292 update_engine[1428]:
Feb 13 20:06:18.106292 update_engine[1428]:
Feb 13 20:06:18.106292 update_engine[1428]:
Feb 13 20:06:18.106292 update_engine[1428]: I20250213 20:06:18.106252 1428 libcurl_http_fetcher.cc:47] Starting/Resuming transfer
Feb 13 20:06:18.106485 locksmithd[1465]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_CHECKING_FOR_UPDATE" NewVersion=0.0.0 NewSize=0
Feb 13 20:06:18.109886 update_engine[1428]: I20250213 20:06:18.109850 1428 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP
Feb 13 20:06:18.110158 update_engine[1428]: I20250213 20:06:18.110118 1428 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds.
Feb 13 20:06:18.114673 update_engine[1428]: E20250213 20:06:18.114624 1428 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled
Feb 13 20:06:18.114755 update_engine[1428]: I20250213 20:06:18.114703 1428 libcurl_http_fetcher.cc:283] No HTTP response, retry 1
Feb 13 20:06:19.494495 kubelet[2515]: E0213 20:06:19.494311 2515 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 13 20:06:19.494937 kubelet[2515]: E0213 20:06:19.494869 2515 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"install-cni\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/flannel/flannel:v0.22.0\\\"\"" pod="kube-flannel/kube-flannel-ds-42jkn" podUID="25643ab7-6101-403b-ac20-77f1fa4c78ee"
Feb 13 20:06:21.404145 systemd[1]: Started sshd@111-10.0.0.134:22-10.0.0.1:36062.service - OpenSSH per-connection server daemon (10.0.0.1:36062).
Feb 13 20:06:21.443208 sshd[4546]: Accepted publickey for core from 10.0.0.1 port 36062 ssh2: RSA SHA256:JGaeIbjf5IUSNUg1jnjkSVnSyX1OvNbTOClTMYH5eIk
Feb 13 20:06:21.444449 sshd[4546]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 20:06:21.448110 systemd-logind[1425]: New session 112 of user core.
Feb 13 20:06:21.458191 systemd[1]: Started session-112.scope - Session 112 of User core.
Feb 13 20:06:21.559699 sshd[4546]: pam_unix(sshd:session): session closed for user core
Feb 13 20:06:21.562909 systemd[1]: sshd@111-10.0.0.134:22-10.0.0.1:36062.service: Deactivated successfully.
Feb 13 20:06:21.566178 systemd[1]: session-112.scope: Deactivated successfully.
Feb 13 20:06:21.566776 systemd-logind[1425]: Session 112 logged out. Waiting for processes to exit.
Feb 13 20:06:21.567538 systemd-logind[1425]: Removed session 112.
Feb 13 20:06:21.666869 kubelet[2515]: E0213 20:06:21.666744 2515 kubelet.go:2900] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Feb 13 20:06:26.570587 systemd[1]: Started sshd@112-10.0.0.134:22-10.0.0.1:59738.service - OpenSSH per-connection server daemon (10.0.0.1:59738).
Feb 13 20:06:26.609116 sshd[4560]: Accepted publickey for core from 10.0.0.1 port 59738 ssh2: RSA SHA256:JGaeIbjf5IUSNUg1jnjkSVnSyX1OvNbTOClTMYH5eIk
Feb 13 20:06:26.610273 sshd[4560]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 20:06:26.613988 systemd-logind[1425]: New session 113 of user core.
Feb 13 20:06:26.626265 systemd[1]: Started session-113.scope - Session 113 of User core.
Feb 13 20:06:26.667546 kubelet[2515]: E0213 20:06:26.667494 2515 kubelet.go:2900] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Feb 13 20:06:26.727141 sshd[4560]: pam_unix(sshd:session): session closed for user core
Feb 13 20:06:26.730260 systemd[1]: sshd@112-10.0.0.134:22-10.0.0.1:59738.service: Deactivated successfully.
Feb 13 20:06:26.732736 systemd[1]: session-113.scope: Deactivated successfully.
Feb 13 20:06:26.733484 systemd-logind[1425]: Session 113 logged out. Waiting for processes to exit.
Feb 13 20:06:26.734335 systemd-logind[1425]: Removed session 113.
Feb 13 20:06:28.104717 update_engine[1428]: I20250213 20:06:28.104604 1428 libcurl_http_fetcher.cc:47] Starting/Resuming transfer
Feb 13 20:06:28.105084 update_engine[1428]: I20250213 20:06:28.104903 1428 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP
Feb 13 20:06:28.105126 update_engine[1428]: I20250213 20:06:28.105090 1428 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds.
Feb 13 20:06:28.108735 update_engine[1428]: E20250213 20:06:28.108698 1428 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled
Feb 13 20:06:28.108782 update_engine[1428]: I20250213 20:06:28.108755 1428 libcurl_http_fetcher.cc:283] No HTTP response, retry 2
Feb 13 20:06:31.668213 kubelet[2515]: E0213 20:06:31.668161 2515 kubelet.go:2900] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Feb 13 20:06:31.737564 systemd[1]: Started sshd@113-10.0.0.134:22-10.0.0.1:59750.service - OpenSSH per-connection server daemon (10.0.0.1:59750).
Feb 13 20:06:31.776603 sshd[4574]: Accepted publickey for core from 10.0.0.1 port 59750 ssh2: RSA SHA256:JGaeIbjf5IUSNUg1jnjkSVnSyX1OvNbTOClTMYH5eIk
Feb 13 20:06:31.777741 sshd[4574]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 20:06:31.781466 systemd-logind[1425]: New session 114 of user core.
Feb 13 20:06:31.793224 systemd[1]: Started session-114.scope - Session 114 of User core.
Feb 13 20:06:31.893417 sshd[4574]: pam_unix(sshd:session): session closed for user core
Feb 13 20:06:31.895934 systemd[1]: sshd@113-10.0.0.134:22-10.0.0.1:59750.service: Deactivated successfully.
Feb 13 20:06:31.897458 systemd[1]: session-114.scope: Deactivated successfully.
Feb 13 20:06:31.898633 systemd-logind[1425]: Session 114 logged out. Waiting for processes to exit.
Feb 13 20:06:31.899449 systemd-logind[1425]: Removed session 114.
Feb 13 20:06:33.494243 kubelet[2515]: E0213 20:06:33.494144 2515 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 13 20:06:33.494691 kubelet[2515]: E0213 20:06:33.494489 2515 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 13 20:06:33.495032 kubelet[2515]: E0213 20:06:33.494996 2515 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"install-cni\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/flannel/flannel:v0.22.0\\\"\"" pod="kube-flannel/kube-flannel-ds-42jkn" podUID="25643ab7-6101-403b-ac20-77f1fa4c78ee"
Feb 13 20:06:36.494057 kubelet[2515]: E0213 20:06:36.493863 2515 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 13 20:06:36.668839 kubelet[2515]: E0213 20:06:36.668806 2515 kubelet.go:2900] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Feb 13 20:06:36.904545 systemd[1]: Started sshd@114-10.0.0.134:22-10.0.0.1:56366.service - OpenSSH per-connection server daemon (10.0.0.1:56366).
Feb 13 20:06:36.943212 sshd[4591]: Accepted publickey for core from 10.0.0.1 port 56366 ssh2: RSA SHA256:JGaeIbjf5IUSNUg1jnjkSVnSyX1OvNbTOClTMYH5eIk
Feb 13 20:06:36.944409 sshd[4591]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 20:06:36.948403 systemd-logind[1425]: New session 115 of user core.
Feb 13 20:06:36.954160 systemd[1]: Started session-115.scope - Session 115 of User core.
Feb 13 20:06:37.054819 sshd[4591]: pam_unix(sshd:session): session closed for user core
Feb 13 20:06:37.057822 systemd[1]: sshd@114-10.0.0.134:22-10.0.0.1:56366.service: Deactivated successfully.
Feb 13 20:06:37.059665 systemd[1]: session-115.scope: Deactivated successfully.
Feb 13 20:06:37.061508 systemd-logind[1425]: Session 115 logged out. Waiting for processes to exit.
Feb 13 20:06:37.062339 systemd-logind[1425]: Removed session 115.
Feb 13 20:06:37.493762 kubelet[2515]: E0213 20:06:37.493696 2515 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 13 20:06:38.105071 update_engine[1428]: I20250213 20:06:38.104725 1428 libcurl_http_fetcher.cc:47] Starting/Resuming transfer
Feb 13 20:06:38.105071 update_engine[1428]: I20250213 20:06:38.105010 1428 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP
Feb 13 20:06:38.105413 update_engine[1428]: I20250213 20:06:38.105208 1428 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds.
Feb 13 20:06:38.113238 update_engine[1428]: E20250213 20:06:38.113190 1428 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled
Feb 13 20:06:38.113291 update_engine[1428]: I20250213 20:06:38.113242 1428 libcurl_http_fetcher.cc:283] No HTTP response, retry 3
Feb 13 20:06:41.669784 kubelet[2515]: E0213 20:06:41.669678 2515 kubelet.go:2900] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Feb 13 20:06:42.065860 systemd[1]: Started sshd@115-10.0.0.134:22-10.0.0.1:56382.service - OpenSSH per-connection server daemon (10.0.0.1:56382).
Feb 13 20:06:42.104421 sshd[4606]: Accepted publickey for core from 10.0.0.1 port 56382 ssh2: RSA SHA256:JGaeIbjf5IUSNUg1jnjkSVnSyX1OvNbTOClTMYH5eIk
Feb 13 20:06:42.105569 sshd[4606]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 20:06:42.109006 systemd-logind[1425]: New session 116 of user core.
Feb 13 20:06:42.118174 systemd[1]: Started session-116.scope - Session 116 of User core.
Feb 13 20:06:42.219685 sshd[4606]: pam_unix(sshd:session): session closed for user core
Feb 13 20:06:42.223108 systemd[1]: sshd@115-10.0.0.134:22-10.0.0.1:56382.service: Deactivated successfully.
Feb 13 20:06:42.225589 systemd[1]: session-116.scope: Deactivated successfully.
Feb 13 20:06:42.226431 systemd-logind[1425]: Session 116 logged out. Waiting for processes to exit.
Feb 13 20:06:42.227301 systemd-logind[1425]: Removed session 116.
Feb 13 20:06:46.493608 kubelet[2515]: E0213 20:06:46.493498 2515 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 13 20:06:46.494105 kubelet[2515]: E0213 20:06:46.494062 2515 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"install-cni\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/flannel/flannel:v0.22.0\\\"\"" pod="kube-flannel/kube-flannel-ds-42jkn" podUID="25643ab7-6101-403b-ac20-77f1fa4c78ee"
Feb 13 20:06:46.670365 kubelet[2515]: E0213 20:06:46.670283 2515 kubelet.go:2900] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Feb 13 20:06:47.230600 systemd[1]: Started sshd@116-10.0.0.134:22-10.0.0.1:53594.service - OpenSSH per-connection server daemon (10.0.0.1:53594).
Feb 13 20:06:47.269201 sshd[4620]: Accepted publickey for core from 10.0.0.1 port 53594 ssh2: RSA SHA256:JGaeIbjf5IUSNUg1jnjkSVnSyX1OvNbTOClTMYH5eIk
Feb 13 20:06:47.270423 sshd[4620]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 20:06:47.273692 systemd-logind[1425]: New session 117 of user core.
Feb 13 20:06:47.289166 systemd[1]: Started session-117.scope - Session 117 of User core.
Feb 13 20:06:47.390826 sshd[4620]: pam_unix(sshd:session): session closed for user core
Feb 13 20:06:47.393919 systemd[1]: sshd@116-10.0.0.134:22-10.0.0.1:53594.service: Deactivated successfully.
Feb 13 20:06:47.396291 systemd[1]: session-117.scope: Deactivated successfully.
Feb 13 20:06:47.397312 systemd-logind[1425]: Session 117 logged out. Waiting for processes to exit.
Feb 13 20:06:47.398175 systemd-logind[1425]: Removed session 117.
Feb 13 20:06:48.104601 update_engine[1428]: I20250213 20:06:48.104465 1428 libcurl_http_fetcher.cc:47] Starting/Resuming transfer
Feb 13 20:06:48.104907 update_engine[1428]: I20250213 20:06:48.104707 1428 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP
Feb 13 20:06:48.104907 update_engine[1428]: I20250213 20:06:48.104859 1428 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds.
Feb 13 20:06:48.111245 update_engine[1428]: E20250213 20:06:48.111207 1428 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled
Feb 13 20:06:48.111304 update_engine[1428]: I20250213 20:06:48.111256 1428 libcurl_http_fetcher.cc:297] Transfer resulted in an error (0), 0 bytes downloaded
Feb 13 20:06:48.111304 update_engine[1428]: I20250213 20:06:48.111265 1428 omaha_request_action.cc:617] Omaha request response:
Feb 13 20:06:48.111343 update_engine[1428]: E20250213 20:06:48.111333 1428 omaha_request_action.cc:636] Omaha request network transfer failed.
Feb 13 20:06:48.111361 update_engine[1428]: I20250213 20:06:48.111351 1428 action_processor.cc:68] ActionProcessor::ActionComplete: OmahaRequestAction action failed. Aborting processing.
Feb 13 20:06:48.111361 update_engine[1428]: I20250213 20:06:48.111356 1428 action_processor.cc:73] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction
Feb 13 20:06:48.111405 update_engine[1428]: I20250213 20:06:48.111361 1428 update_attempter.cc:306] Processing Done.
Feb 13 20:06:48.111405 update_engine[1428]: E20250213 20:06:48.111373 1428 update_attempter.cc:619] Update failed.
Feb 13 20:06:48.111405 update_engine[1428]: I20250213 20:06:48.111378 1428 utils.cc:600] Converting error code 2000 to kActionCodeOmahaErrorInHTTPResponse
Feb 13 20:06:48.111405 update_engine[1428]: I20250213 20:06:48.111383 1428 payload_state.cc:97] Updating payload state for error code: 37 (kActionCodeOmahaErrorInHTTPResponse)
Feb 13 20:06:48.111405 update_engine[1428]: I20250213 20:06:48.111388 1428 payload_state.cc:103] Ignoring failures until we get a valid Omaha response.
Feb 13 20:06:48.111496 update_engine[1428]: I20250213 20:06:48.111451 1428 action_processor.cc:36] ActionProcessor::StartProcessing: OmahaRequestAction
Feb 13 20:06:48.111496 update_engine[1428]: I20250213 20:06:48.111470 1428 omaha_request_action.cc:271] Posting an Omaha request to disabled
Feb 13 20:06:48.111496 update_engine[1428]: I20250213 20:06:48.111475 1428 omaha_request_action.cc:272] Request:
Feb 13 20:06:48.111496 update_engine[1428]:
Feb 13 20:06:48.111496 update_engine[1428]:
Feb 13 20:06:48.111496 update_engine[1428]:
Feb 13 20:06:48.111496 update_engine[1428]:
Feb 13 20:06:48.111496 update_engine[1428]:
Feb 13 20:06:48.111496 update_engine[1428]:
Feb 13 20:06:48.111496 update_engine[1428]: I20250213 20:06:48.111480 1428 libcurl_http_fetcher.cc:47] Starting/Resuming transfer
Feb 13 20:06:48.111686 update_engine[1428]: I20250213 20:06:48.111615 1428 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP
Feb 13 20:06:48.111773 update_engine[1428]: I20250213 20:06:48.111737 1428 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds.
Feb 13 20:06:48.111861 locksmithd[1465]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_REPORTING_ERROR_EVENT" NewVersion=0.0.0 NewSize=0
Feb 13 20:06:48.153135 update_engine[1428]: E20250213 20:06:48.153098 1428 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled
Feb 13 20:06:48.153184 update_engine[1428]: I20250213 20:06:48.153144 1428 libcurl_http_fetcher.cc:297] Transfer resulted in an error (0), 0 bytes downloaded
Feb 13 20:06:48.153184 update_engine[1428]: I20250213 20:06:48.153152 1428 omaha_request_action.cc:617] Omaha request response:
Feb 13 20:06:48.153184 update_engine[1428]: I20250213 20:06:48.153157 1428 action_processor.cc:65] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction
Feb 13 20:06:48.153184 update_engine[1428]: I20250213 20:06:48.153162 1428 action_processor.cc:73] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction
Feb 13 20:06:48.153184 update_engine[1428]: I20250213 20:06:48.153167 1428 update_attempter.cc:306] Processing Done.
Feb 13 20:06:48.153184 update_engine[1428]: I20250213 20:06:48.153172 1428 update_attempter.cc:310] Error event sent.
Feb 13 20:06:48.153184 update_engine[1428]: I20250213 20:06:48.153179 1428 update_check_scheduler.cc:74] Next update check in 48m58s
Feb 13 20:06:48.153409 locksmithd[1465]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_IDLE" NewVersion=0.0.0 NewSize=0