Feb 13 20:38:57.962206 kernel: Booting Linux on physical CPU 0x0000000000 [0x413fd0c1]
Feb 13 20:38:57.962236 kernel: Linux version 6.6.74-flatcar (build@pony-truck.infra.kinvolk.io) (aarch64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p17) 13.3.1 20240614, GNU ld (Gentoo 2.42 p3) 2.42.0) #1 SMP PREEMPT Thu Feb 13 18:13:29 -00 2025
Feb 13 20:38:57.962246 kernel: KASLR enabled
Feb 13 20:38:57.962251 kernel: efi: EFI v2.7 by EDK II
Feb 13 20:38:57.962257 kernel: efi: SMBIOS 3.0=0xdced0000 MEMATTR=0xdba86018 ACPI 2.0=0xd9710018 RNG=0xd971e498 MEMRESERVE=0xd9b43d18
Feb 13 20:38:57.962263 kernel: random: crng init done
Feb 13 20:38:57.962270 kernel: ACPI: Early table checksum verification disabled
Feb 13 20:38:57.962276 kernel: ACPI: RSDP 0x00000000D9710018 000024 (v02 BOCHS )
Feb 13 20:38:57.962282 kernel: ACPI: XSDT 0x00000000D971FE98 000064 (v01 BOCHS BXPC 00000001 01000013)
Feb 13 20:38:57.962290 kernel: ACPI: FACP 0x00000000D971FA98 000114 (v06 BOCHS BXPC 00000001 BXPC 00000001)
Feb 13 20:38:57.962296 kernel: ACPI: DSDT 0x00000000D9717518 0014A2 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Feb 13 20:38:57.962302 kernel: ACPI: APIC 0x00000000D971FC18 0001A8 (v04 BOCHS BXPC 00000001 BXPC 00000001)
Feb 13 20:38:57.962308 kernel: ACPI: PPTT 0x00000000D971D898 00009C (v02 BOCHS BXPC 00000001 BXPC 00000001)
Feb 13 20:38:57.962314 kernel: ACPI: GTDT 0x00000000D971E818 000060 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Feb 13 20:38:57.962321 kernel: ACPI: MCFG 0x00000000D971E918 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001)
Feb 13 20:38:57.962329 kernel: ACPI: SPCR 0x00000000D971FF98 000050 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Feb 13 20:38:57.962336 kernel: ACPI: DBG2 0x00000000D971E418 000057 (v00 BOCHS BXPC 00000001 BXPC 00000001)
Feb 13 20:38:57.962342 kernel: ACPI: IORT 0x00000000D971E718 000080 (v03 BOCHS BXPC 00000001 BXPC 00000001)
Feb 13 20:38:57.962348 kernel: ACPI: SPCR: console: pl011,mmio,0x9000000,9600
Feb 13 20:38:57.962355 kernel: NUMA: Failed to initialise from firmware
Feb 13 20:38:57.962361 kernel: NUMA: Faking a node at [mem 0x0000000040000000-0x00000000dcffffff]
Feb 13 20:38:57.962368 kernel: NUMA: NODE_DATA [mem 0xdc957800-0xdc95cfff]
Feb 13 20:38:57.962374 kernel: Zone ranges:
Feb 13 20:38:57.962380 kernel: DMA [mem 0x0000000040000000-0x00000000dcffffff]
Feb 13 20:38:57.962390 kernel: DMA32 empty
Feb 13 20:38:57.962398 kernel: Normal empty
Feb 13 20:38:57.962404 kernel: Movable zone start for each node
Feb 13 20:38:57.962410 kernel: Early memory node ranges
Feb 13 20:38:57.962417 kernel: node 0: [mem 0x0000000040000000-0x00000000d976ffff]
Feb 13 20:38:57.962423 kernel: node 0: [mem 0x00000000d9770000-0x00000000d9b3ffff]
Feb 13 20:38:57.962429 kernel: node 0: [mem 0x00000000d9b40000-0x00000000dce1ffff]
Feb 13 20:38:57.962436 kernel: node 0: [mem 0x00000000dce20000-0x00000000dceaffff]
Feb 13 20:38:57.962442 kernel: node 0: [mem 0x00000000dceb0000-0x00000000dcebffff]
Feb 13 20:38:57.962448 kernel: node 0: [mem 0x00000000dcec0000-0x00000000dcfdffff]
Feb 13 20:38:57.962454 kernel: node 0: [mem 0x00000000dcfe0000-0x00000000dcffffff]
Feb 13 20:38:57.962461 kernel: Initmem setup node 0 [mem 0x0000000040000000-0x00000000dcffffff]
Feb 13 20:38:57.962467 kernel: On node 0, zone DMA: 12288 pages in unavailable ranges
Feb 13 20:38:57.962475 kernel: psci: probing for conduit method from ACPI.
Feb 13 20:38:57.962481 kernel: psci: PSCIv1.1 detected in firmware.
Feb 13 20:38:57.962488 kernel: psci: Using standard PSCI v0.2 function IDs
Feb 13 20:38:57.962497 kernel: psci: Trusted OS migration not required
Feb 13 20:38:57.962504 kernel: psci: SMC Calling Convention v1.1
Feb 13 20:38:57.962511 kernel: smccc: KVM: hypervisor services detected (0x00000000 0x00000000 0x00000000 0x00000003)
Feb 13 20:38:57.962519 kernel: percpu: Embedded 31 pages/cpu s86696 r8192 d32088 u126976
Feb 13 20:38:57.962526 kernel: pcpu-alloc: s86696 r8192 d32088 u126976 alloc=31*4096
Feb 13 20:38:57.962532 kernel: pcpu-alloc: [0] 0 [0] 1 [0] 2 [0] 3
Feb 13 20:38:57.962539 kernel: Detected PIPT I-cache on CPU0
Feb 13 20:38:57.962546 kernel: CPU features: detected: GIC system register CPU interface
Feb 13 20:38:57.962553 kernel: CPU features: detected: Hardware dirty bit management
Feb 13 20:38:57.962559 kernel: CPU features: detected: Spectre-v4
Feb 13 20:38:57.962566 kernel: CPU features: detected: Spectre-BHB
Feb 13 20:38:57.962572 kernel: CPU features: kernel page table isolation forced ON by KASLR
Feb 13 20:38:57.962580 kernel: CPU features: detected: Kernel page table isolation (KPTI)
Feb 13 20:38:57.962588 kernel: CPU features: detected: ARM erratum 1418040
Feb 13 20:38:57.962594 kernel: CPU features: detected: SSBS not fully self-synchronizing
Feb 13 20:38:57.962601 kernel: alternatives: applying boot alternatives
Feb 13 20:38:57.962609 kernel: Kernel command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected acpi=force verity.usrhash=c15c751c06cfb933aa98417326b93d899c08a83ce060a940cd01082629c201a7
Feb 13 20:38:57.962616 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Feb 13 20:38:57.962623 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Feb 13 20:38:57.962630 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Feb 13 20:38:57.962636 kernel: Fallback order for Node 0: 0
Feb 13 20:38:57.962655 kernel: Built 1 zonelists, mobility grouping on. Total pages: 633024
Feb 13 20:38:57.962661 kernel: Policy zone: DMA
Feb 13 20:38:57.962668 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Feb 13 20:38:57.962676 kernel: software IO TLB: area num 4.
Feb 13 20:38:57.962683 kernel: software IO TLB: mapped [mem 0x00000000d2e00000-0x00000000d6e00000] (64MB)
Feb 13 20:38:57.962690 kernel: Memory: 2386528K/2572288K available (10240K kernel code, 2186K rwdata, 8096K rodata, 39360K init, 897K bss, 185760K reserved, 0K cma-reserved)
Feb 13 20:38:57.962697 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1
Feb 13 20:38:57.962703 kernel: rcu: Preemptible hierarchical RCU implementation.
Feb 13 20:38:57.962711 kernel: rcu: RCU event tracing is enabled.
Feb 13 20:38:57.962718 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4.
Feb 13 20:38:57.962725 kernel: Trampoline variant of Tasks RCU enabled.
Feb 13 20:38:57.962732 kernel: Tracing variant of Tasks RCU enabled.
Feb 13 20:38:57.962742 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
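
The verity.usrhash value in the command line above is the dm-verity root hash the initrd later uses to authenticate the /usr partition. For reference, the exact string the bootloader passed stays visible from userspace; a minimal check on the booted node (expected output is simply the command line logged above):

    # Print the kernel command line as the kernel received it
    cat /proc/cmdline
    # -> BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-... verity.usrhash=c15c751c...
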
Feb 13 20:38:57.962749 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4
Feb 13 20:38:57.962756 kernel: NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0
Feb 13 20:38:57.962764 kernel: GICv3: 256 SPIs implemented
Feb 13 20:38:57.962771 kernel: GICv3: 0 Extended SPIs implemented
Feb 13 20:38:57.962777 kernel: Root IRQ handler: gic_handle_irq
Feb 13 20:38:57.962784 kernel: GICv3: GICv3 features: 16 PPIs, DirectLPI
Feb 13 20:38:57.962791 kernel: GICv3: CPU0: found redistributor 0 region 0:0x00000000080a0000
Feb 13 20:38:57.962797 kernel: ITS [mem 0x08080000-0x0809ffff]
Feb 13 20:38:57.962804 kernel: ITS@0x0000000008080000: allocated 8192 Devices @400c0000 (indirect, esz 8, psz 64K, shr 1)
Feb 13 20:38:57.962811 kernel: ITS@0x0000000008080000: allocated 8192 Interrupt Collections @400d0000 (flat, esz 8, psz 64K, shr 1)
Feb 13 20:38:57.962818 kernel: GICv3: using LPI property table @0x00000000400f0000
Feb 13 20:38:57.962825 kernel: GICv3: CPU0: using allocated LPI pending table @0x0000000040100000
Feb 13 20:38:57.962831 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Feb 13 20:38:57.962840 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Feb 13 20:38:57.962846 kernel: arch_timer: cp15 timer(s) running at 25.00MHz (virt).
Feb 13 20:38:57.962853 kernel: clocksource: arch_sys_counter: mask: 0xffffffffffffff max_cycles: 0x5c40939b5, max_idle_ns: 440795202646 ns
Feb 13 20:38:57.962860 kernel: sched_clock: 56 bits at 25MHz, resolution 40ns, wraps every 4398046511100ns
Feb 13 20:38:57.962867 kernel: arm-pv: using stolen time PV
Feb 13 20:38:57.962874 kernel: Console: colour dummy device 80x25
Feb 13 20:38:57.962890 kernel: ACPI: Core revision 20230628
Feb 13 20:38:57.962898 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 50.00 BogoMIPS (lpj=25000)
Feb 13 20:38:57.962905 kernel: pid_max: default: 32768 minimum: 301
Feb 13 20:38:57.962912 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity
Feb 13 20:38:57.962921 kernel: landlock: Up and running.
Feb 13 20:38:57.962927 kernel: SELinux: Initializing.
Feb 13 20:38:57.962934 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Feb 13 20:38:57.962941 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Feb 13 20:38:57.962948 kernel: RCU Tasks: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Feb 13 20:38:57.962955 kernel: RCU Tasks Trace: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Feb 13 20:38:57.962962 kernel: rcu: Hierarchical SRCU implementation.
Feb 13 20:38:57.962969 kernel: rcu: Max phase no-delay instances is 400.
Feb 13 20:38:57.962976 kernel: Platform MSI: ITS@0x8080000 domain created
Feb 13 20:38:57.962984 kernel: PCI/MSI: ITS@0x8080000 domain created
Feb 13 20:38:57.962991 kernel: Remapping and enabling EFI services.
Feb 13 20:38:57.962998 kernel: smp: Bringing up secondary CPUs ...
Feb 13 20:38:57.963005 kernel: Detected PIPT I-cache on CPU1
Feb 13 20:38:57.963012 kernel: GICv3: CPU1: found redistributor 1 region 0:0x00000000080c0000
Feb 13 20:38:57.963019 kernel: GICv3: CPU1: using allocated LPI pending table @0x0000000040110000
Feb 13 20:38:57.963030 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Feb 13 20:38:57.963037 kernel: CPU1: Booted secondary processor 0x0000000001 [0x413fd0c1]
Feb 13 20:38:57.963044 kernel: Detected PIPT I-cache on CPU2
Feb 13 20:38:57.963051 kernel: GICv3: CPU2: found redistributor 2 region 0:0x00000000080e0000
Feb 13 20:38:57.963059 kernel: GICv3: CPU2: using allocated LPI pending table @0x0000000040120000
Feb 13 20:38:57.963067 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Feb 13 20:38:57.963078 kernel: CPU2: Booted secondary processor 0x0000000002 [0x413fd0c1]
Feb 13 20:38:57.963086 kernel: Detected PIPT I-cache on CPU3
Feb 13 20:38:57.963094 kernel: GICv3: CPU3: found redistributor 3 region 0:0x0000000008100000
Feb 13 20:38:57.963101 kernel: GICv3: CPU3: using allocated LPI pending table @0x0000000040130000
Feb 13 20:38:57.963108 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Feb 13 20:38:57.963115 kernel: CPU3: Booted secondary processor 0x0000000003 [0x413fd0c1]
Feb 13 20:38:57.963122 kernel: smp: Brought up 1 node, 4 CPUs
Feb 13 20:38:57.963131 kernel: SMP: Total of 4 processors activated.
Feb 13 20:38:57.963138 kernel: CPU features: detected: 32-bit EL0 Support
Feb 13 20:38:57.963145 kernel: CPU features: detected: Data cache clean to the PoU not required for I/D coherence
Feb 13 20:38:57.963152 kernel: CPU features: detected: Common not Private translations
Feb 13 20:38:57.963160 kernel: CPU features: detected: CRC32 instructions
Feb 13 20:38:57.963167 kernel: CPU features: detected: Enhanced Virtualization Traps
Feb 13 20:38:57.963174 kernel: CPU features: detected: RCpc load-acquire (LDAPR)
Feb 13 20:38:57.963181 kernel: CPU features: detected: LSE atomic instructions
Feb 13 20:38:57.963190 kernel: CPU features: detected: Privileged Access Never
Feb 13 20:38:57.963197 kernel: CPU features: detected: RAS Extension Support
Feb 13 20:38:57.963204 kernel: CPU features: detected: Speculative Store Bypassing Safe (SSBS)
Feb 13 20:38:57.963214 kernel: CPU: All CPU(s) started at EL1
Feb 13 20:38:57.963222 kernel: alternatives: applying system-wide alternatives
Feb 13 20:38:57.963229 kernel: devtmpfs: initialized
Feb 13 20:38:57.963236 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Feb 13 20:38:57.963244 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear)
Feb 13 20:38:57.963251 kernel: pinctrl core: initialized pinctrl subsystem
Feb 13 20:38:57.963260 kernel: SMBIOS 3.0.0 present.
Feb 13 20:38:57.963268 kernel: DMI: QEMU KVM Virtual Machine, BIOS edk2-20230524-3.fc38 05/24/2023
Feb 13 20:38:57.963275 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Feb 13 20:38:57.963282 kernel: DMA: preallocated 512 KiB GFP_KERNEL pool for atomic allocations
Feb 13 20:38:57.963290 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations
Feb 13 20:38:57.963297 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations
Feb 13 20:38:57.963304 kernel: audit: initializing netlink subsys (disabled)
Feb 13 20:38:57.963312 kernel: audit: type=2000 audit(0.027:1): state=initialized audit_enabled=0 res=1
Feb 13 20:38:57.963319 kernel: thermal_sys: Registered thermal governor 'step_wise'
Feb 13 20:38:57.963327 kernel: cpuidle: using governor menu
Feb 13 20:38:57.963334 kernel: hw-breakpoint: found 6 breakpoint and 4 watchpoint registers.
Feb 13 20:38:57.963342 kernel: ASID allocator initialised with 32768 entries
Feb 13 20:38:57.963349 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Feb 13 20:38:57.963356 kernel: Serial: AMBA PL011 UART driver
Feb 13 20:38:57.963363 kernel: Modules: 2G module region forced by RANDOMIZE_MODULE_REGION_FULL
Feb 13 20:38:57.963370 kernel: Modules: 0 pages in range for non-PLT usage
Feb 13 20:38:57.963378 kernel: Modules: 509040 pages in range for PLT usage
Feb 13 20:38:57.963385 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Feb 13 20:38:57.963397 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 1.00 GiB page
Feb 13 20:38:57.963404 kernel: HugeTLB: registered 32.0 MiB page size, pre-allocated 0 pages
Feb 13 20:38:57.963411 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 32.0 MiB page
Feb 13 20:38:57.963418 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Feb 13 20:38:57.963426 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 2.00 MiB page
Feb 13 20:38:57.963433 kernel: HugeTLB: registered 64.0 KiB page size, pre-allocated 0 pages
Feb 13 20:38:57.963440 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 64.0 KiB page
Feb 13 20:38:57.963447 kernel: ACPI: Added _OSI(Module Device)
Feb 13 20:38:57.963454 kernel: ACPI: Added _OSI(Processor Device)
Feb 13 20:38:57.963463 kernel: ACPI: Added _OSI(3.0 _SCP Extensions)
Feb 13 20:38:57.963470 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Feb 13 20:38:57.963477 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Feb 13 20:38:57.963484 kernel: ACPI: Interpreter enabled
Feb 13 20:38:57.963491 kernel: ACPI: Using GIC for interrupt routing
Feb 13 20:38:57.963498 kernel: ACPI: MCFG table detected, 1 entries
Feb 13 20:38:57.963506 kernel: ARMH0011:00: ttyAMA0 at MMIO 0x9000000 (irq = 12, base_baud = 0) is a SBSA
Feb 13 20:38:57.963513 kernel: printk: console [ttyAMA0] enabled
Feb 13 20:38:57.963520 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Feb 13 20:38:57.963655 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Feb 13 20:38:57.963728 kernel: acpi PNP0A08:00: _OSC: platform does not support [LTR]
Feb 13 20:38:57.963792 kernel: acpi PNP0A08:00: _OSC: OS now controls [PCIeHotplug PME AER PCIeCapability]
Feb 13 20:38:57.963855 kernel: acpi PNP0A08:00: ECAM area [mem 0x4010000000-0x401fffffff] reserved by PNP0C02:00
Feb 13 20:38:57.963960 kernel: acpi PNP0A08:00: ECAM at [mem 0x4010000000-0x401fffffff] for [bus 00-ff]
Feb 13 20:38:57.963971 kernel: ACPI: Remapped I/O 0x000000003eff0000 to [io 0x0000-0xffff window]
Feb 13 20:38:57.963979 kernel: PCI host bridge to bus 0000:00
Feb 13 20:38:57.964056 kernel: pci_bus 0000:00: root bus resource [mem 0x10000000-0x3efeffff window]
Feb 13 20:38:57.964114 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0xffff window]
Feb 13 20:38:57.964172 kernel: pci_bus 0000:00: root bus resource [mem 0x8000000000-0xffffffffff window]
Feb 13 20:38:57.964240 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Feb 13 20:38:57.964317 kernel: pci 0000:00:00.0: [1b36:0008] type 00 class 0x060000
Feb 13 20:38:57.964395 kernel: pci 0000:00:01.0: [1af4:1005] type 00 class 0x00ff00
Feb 13 20:38:57.964466 kernel: pci 0000:00:01.0: reg 0x10: [io 0x0000-0x001f]
Feb 13 20:38:57.964529 kernel: pci 0000:00:01.0: reg 0x14: [mem 0x10000000-0x10000fff]
Feb 13 20:38:57.964594 kernel: pci 0000:00:01.0: reg 0x20: [mem 0x8000000000-0x8000003fff 64bit pref]
Feb 13 20:38:57.964659 kernel: pci 0000:00:01.0: BAR 4: assigned [mem 0x8000000000-0x8000003fff 64bit pref]
Feb 13 20:38:57.964722 kernel: pci 0000:00:01.0: BAR 1: assigned [mem 0x10000000-0x10000fff]
Feb 13 20:38:57.964786 kernel: pci 0000:00:01.0: BAR 0: assigned [io 0x1000-0x101f]
Feb 13 20:38:57.964844 kernel: pci_bus 0000:00: resource 4 [mem 0x10000000-0x3efeffff window]
Feb 13 20:38:57.964921 kernel: pci_bus 0000:00: resource 5 [io 0x0000-0xffff window]
Feb 13 20:38:57.964984 kernel: pci_bus 0000:00: resource 6 [mem 0x8000000000-0xffffffffff window]
Feb 13 20:38:57.964993 kernel: ACPI: PCI: Interrupt link GSI0 configured for IRQ 35
Feb 13 20:38:57.965001 kernel: ACPI: PCI: Interrupt link GSI1 configured for IRQ 36
Feb 13 20:38:57.965008 kernel: ACPI: PCI: Interrupt link GSI2 configured for IRQ 37
Feb 13 20:38:57.965016 kernel: ACPI: PCI: Interrupt link GSI3 configured for IRQ 38
Feb 13 20:38:57.965023 kernel: iommu: Default domain type: Translated
Feb 13 20:38:57.965030 kernel: iommu: DMA domain TLB invalidation policy: strict mode
Feb 13 20:38:57.965038 kernel: efivars: Registered efivars operations
Feb 13 20:38:57.965047 kernel: vgaarb: loaded
Feb 13 20:38:57.965054 kernel: clocksource: Switched to clocksource arch_sys_counter
Feb 13 20:38:57.965061 kernel: VFS: Disk quotas dquot_6.6.0
Feb 13 20:38:57.965069 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Feb 13 20:38:57.965076 kernel: pnp: PnP ACPI init
Feb 13 20:38:57.965146 kernel: system 00:00: [mem 0x4010000000-0x401fffffff window] could not be reserved
Feb 13 20:38:57.965156 kernel: pnp: PnP ACPI: found 1 devices
Feb 13 20:38:57.965163 kernel: NET: Registered PF_INET protocol family
Feb 13 20:38:57.965173 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Feb 13 20:38:57.965180 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Feb 13 20:38:57.965188 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Feb 13 20:38:57.965195 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Feb 13 20:38:57.965203 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
Feb 13 20:38:57.965210 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Feb 13 20:38:57.965225 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Feb 13 20:38:57.965232 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Feb 13 20:38:57.965240 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Feb 13 20:38:57.965249 kernel: PCI: CLS 0 bytes, default 64
Feb 13 20:38:57.965257 kernel: kvm [1]: HYP mode not available
Feb 13 20:38:57.965264 kernel: Initialise system trusted keyrings
Feb 13 20:38:57.965271 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Feb 13 20:38:57.965278 kernel: Key type asymmetric registered
Feb 13 20:38:57.965285 kernel: Asymmetric key parser 'x509' registered
Feb 13 20:38:57.965293 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250)
Feb 13 20:38:57.965300 kernel: io scheduler mq-deadline registered
Feb 13 20:38:57.965307 kernel: io scheduler kyber registered
Feb 13 20:38:57.965316 kernel: io scheduler bfq registered
Feb 13 20:38:57.965324 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0C:00/input/input0
Feb 13 20:38:57.965331 kernel: ACPI: button: Power Button [PWRB]
Feb 13 20:38:57.965338 kernel: ACPI: \_SB_.PCI0.GSI1: Enabled at IRQ 36
Feb 13 20:38:57.965417 kernel: virtio-pci 0000:00:01.0: enabling device (0005 -> 0007)
Feb 13 20:38:57.965428 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Feb 13 20:38:57.965435 kernel: thunder_xcv, ver 1.0
Feb 13 20:38:57.965442 kernel: thunder_bgx, ver 1.0
Feb 13 20:38:57.965449 kernel: nicpf, ver 1.0
Feb 13 20:38:57.965459 kernel: nicvf, ver 1.0
Feb 13 20:38:57.965541 kernel: rtc-efi rtc-efi.0: registered as rtc0
Feb 13 20:38:57.965604 kernel: rtc-efi rtc-efi.0: setting system clock to 2025-02-13T20:38:57 UTC (1739479137)
Feb 13 20:38:57.965614 kernel: hid: raw HID events driver (C) Jiri Kosina
Feb 13 20:38:57.965622 kernel: hw perfevents: enabled with armv8_pmuv3_0 PMU driver, 7 counters available
Feb 13 20:38:57.965630 kernel: watchdog: Delayed init of the lockup detector failed: -19
Feb 13 20:38:57.965637 kernel: watchdog: Hard watchdog permanently disabled
Feb 13 20:38:57.965644 kernel: NET: Registered PF_INET6 protocol family
Feb 13 20:38:57.965654 kernel: Segment Routing with IPv6
Feb 13 20:38:57.965661 kernel: In-situ OAM (IOAM) with IPv6
Feb 13 20:38:57.965668 kernel: NET: Registered PF_PACKET protocol family
Feb 13 20:38:57.965675 kernel: Key type dns_resolver registered
Feb 13 20:38:57.965682 kernel: registered taskstats version 1
Feb 13 20:38:57.965690 kernel: Loading compiled-in X.509 certificates
Feb 13 20:38:57.965697 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.74-flatcar: 8bd805622262697b24b0fa7c407ae82c4289ceec'
Feb 13 20:38:57.965704 kernel: Key type .fscrypt registered
Feb 13 20:38:57.965712 kernel: Key type fscrypt-provisioning registered
Feb 13 20:38:57.965721 kernel: ima: No TPM chip found, activating TPM-bypass!
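
The rtc-efi line above reports the same instant twice, once as a calendar date and once as a Unix epoch; the two are easy to cross-check with GNU date (the epoch value is taken verbatim from the log):

    # Convert the logged epoch seconds back to UTC
    date -u -d @1739479137
    # -> Thu Feb 13 20:38:57 UTC 2025
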
Feb 13 20:38:57.965728 kernel: ima: Allocated hash algorithm: sha1
Feb 13 20:38:57.965735 kernel: ima: No architecture policies found
Feb 13 20:38:57.965742 kernel: alg: No test for fips(ansi_cprng) (fips_ansi_cprng)
Feb 13 20:38:57.965750 kernel: clk: Disabling unused clocks
Feb 13 20:38:57.965757 kernel: Freeing unused kernel memory: 39360K
Feb 13 20:38:57.965765 kernel: Run /init as init process
Feb 13 20:38:57.965772 kernel: with arguments:
Feb 13 20:38:57.965779 kernel: /init
Feb 13 20:38:57.965788 kernel: with environment:
Feb 13 20:38:57.965795 kernel: HOME=/
Feb 13 20:38:57.965802 kernel: TERM=linux
Feb 13 20:38:57.965809 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
Feb 13 20:38:57.965819 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Feb 13 20:38:57.965828 systemd[1]: Detected virtualization kvm.
Feb 13 20:38:57.965836 systemd[1]: Detected architecture arm64.
Feb 13 20:38:57.965844 systemd[1]: Running in initrd.
Feb 13 20:38:57.965853 systemd[1]: No hostname configured, using default hostname.
Feb 13 20:38:57.965861 systemd[1]: Hostname set to .
Feb 13 20:38:57.965869 systemd[1]: Initializing machine ID from VM UUID.
Feb 13 20:38:57.965877 systemd[1]: Queued start job for default target initrd.target.
Feb 13 20:38:57.965903 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Feb 13 20:38:57.965911 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Feb 13 20:38:57.965920 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Feb 13 20:38:57.965928 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Feb 13 20:38:57.965938 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Feb 13 20:38:57.965946 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Feb 13 20:38:57.965955 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Feb 13 20:38:57.965963 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Feb 13 20:38:57.965972 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Feb 13 20:38:57.965980 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Feb 13 20:38:57.965989 systemd[1]: Reached target paths.target - Path Units.
Feb 13 20:38:57.965997 systemd[1]: Reached target slices.target - Slice Units.
Feb 13 20:38:57.966005 systemd[1]: Reached target swap.target - Swaps.
Feb 13 20:38:57.966014 systemd[1]: Reached target timers.target - Timer Units.
Feb 13 20:38:57.966022 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Feb 13 20:38:57.966034 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Feb 13 20:38:57.966042 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Feb 13 20:38:57.966061 systemd[1]: Listening on systemd-journald.socket - Journal Socket.
Feb 13 20:38:57.966070 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Feb 13 20:38:57.966080 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Feb 13 20:38:57.966089 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Feb 13 20:38:57.966097 systemd[1]: Reached target sockets.target - Socket Units.
Feb 13 20:38:57.966105 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Feb 13 20:38:57.966113 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Feb 13 20:38:57.966121 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Feb 13 20:38:57.966129 systemd[1]: Starting systemd-fsck-usr.service...
Feb 13 20:38:57.966137 systemd[1]: Starting systemd-journald.service - Journal Service...
Feb 13 20:38:57.966145 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Feb 13 20:38:57.966155 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Feb 13 20:38:57.966163 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Feb 13 20:38:57.966171 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Feb 13 20:38:57.966180 systemd[1]: Finished systemd-fsck-usr.service.
Feb 13 20:38:57.966188 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Feb 13 20:38:57.966198 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Feb 13 20:38:57.966228 systemd-journald[238]: Collecting audit messages is disabled.
Feb 13 20:38:57.966247 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Feb 13 20:38:57.966258 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Feb 13 20:38:57.966267 systemd-journald[238]: Journal started
Feb 13 20:38:57.966286 systemd-journald[238]: Runtime Journal (/run/log/journal/fab23239f92440b785b1096b3697c77f) is 5.9M, max 47.3M, 41.4M free.
Feb 13 20:38:57.958369 systemd-modules-load[239]: Inserted module 'overlay'
Feb 13 20:38:57.970931 systemd[1]: Started systemd-journald.service - Journal Service.
Feb 13 20:38:57.970967 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Feb 13 20:38:57.973845 systemd-modules-load[239]: Inserted module 'br_netfilter'
Feb 13 20:38:57.974801 kernel: Bridge firewalling registered
Feb 13 20:38:57.975594 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Feb 13 20:38:57.984047 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Feb 13 20:38:57.987190 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Feb 13 20:38:57.989979 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Feb 13 20:38:57.991368 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Feb 13 20:38:57.994686 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Feb 13 20:38:57.997198 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Feb 13 20:38:58.002862 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Feb 13 20:38:58.006018 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Feb 13 20:38:58.011775 dracut-cmdline[272]: dracut-dracut-053
Feb 13 20:38:58.016931 dracut-cmdline[272]: Using kernel command line parameters: rd.driver.pre=btrfs BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected acpi=force verity.usrhash=c15c751c06cfb933aa98417326b93d899c08a83ce060a940cd01082629c201a7
Feb 13 20:38:58.015046 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Feb 13 20:38:58.047698 systemd-resolved[280]: Positive Trust Anchors:
Feb 13 20:38:58.047716 systemd-resolved[280]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Feb 13 20:38:58.047748 systemd-resolved[280]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Feb 13 20:38:58.052652 systemd-resolved[280]: Defaulting to hostname 'linux'.
Feb 13 20:38:58.053754 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Feb 13 20:38:58.057774 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Feb 13 20:38:58.098907 kernel: SCSI subsystem initialized
Feb 13 20:38:58.102902 kernel: Loading iSCSI transport class v2.0-870.
Feb 13 20:38:58.110908 kernel: iscsi: registered transport (tcp)
Feb 13 20:38:58.125930 kernel: iscsi: registered transport (qla4xxx)
Feb 13 20:38:58.125969 kernel: QLogic iSCSI HBA Driver
Feb 13 20:38:58.170493 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Feb 13 20:38:58.183025 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Feb 13 20:38:58.201577 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Feb 13 20:38:58.201631 kernel: device-mapper: uevent: version 1.0.3
Feb 13 20:38:58.201649 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com
Feb 13 20:38:58.246915 kernel: raid6: neonx8 gen() 15773 MB/s
Feb 13 20:38:58.263899 kernel: raid6: neonx4 gen() 15654 MB/s
Feb 13 20:38:58.280910 kernel: raid6: neonx2 gen() 13264 MB/s
Feb 13 20:38:58.297900 kernel: raid6: neonx1 gen() 10453 MB/s
Feb 13 20:38:58.314897 kernel: raid6: int64x8 gen() 6958 MB/s
Feb 13 20:38:58.331900 kernel: raid6: int64x4 gen() 7335 MB/s
Feb 13 20:38:58.348900 kernel: raid6: int64x2 gen() 6123 MB/s
Feb 13 20:38:58.365992 kernel: raid6: int64x1 gen() 5052 MB/s
Feb 13 20:38:58.366021 kernel: raid6: using algorithm neonx8 gen() 15773 MB/s
Feb 13 20:38:58.384021 kernel: raid6: .... xor() 11898 MB/s, rmw enabled
Feb 13 20:38:58.384037 kernel: raid6: using neon recovery algorithm
Feb 13 20:38:58.388898 kernel: xor: measuring software checksum speed
Feb 13 20:38:58.390139 kernel: 8regs : 17529 MB/sec
Feb 13 20:38:58.390151 kernel: 32regs : 19669 MB/sec
Feb 13 20:38:58.391392 kernel: arm64_neon : 25299 MB/sec
Feb 13 20:38:58.391415 kernel: xor: using function: arm64_neon (25299 MB/sec)
Feb 13 20:38:58.446903 kernel: Btrfs loaded, zoned=no, fsverity=no
Feb 13 20:38:58.457804 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Feb 13 20:38:58.472069 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Feb 13 20:38:58.484282 systemd-udevd[461]: Using default interface naming scheme 'v255'.
Feb 13 20:38:58.487482 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Feb 13 20:38:58.496119 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Feb 13 20:38:58.508823 dracut-pre-trigger[469]: rd.md=0: removing MD RAID activation
Feb 13 20:38:58.534728 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Feb 13 20:38:58.547029 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Feb 13 20:38:58.587976 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Feb 13 20:38:58.596056 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Feb 13 20:38:58.610628 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Feb 13 20:38:58.614063 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Feb 13 20:38:58.616116 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Feb 13 20:38:58.618252 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Feb 13 20:38:58.626513 kernel: virtio_blk virtio1: 1/0/0 default/read/poll queues
Feb 13 20:38:58.637822 kernel: virtio_blk virtio1: [vda] 19775488 512-byte logical blocks (10.1 GB/9.43 GiB)
Feb 13 20:38:58.637954 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Feb 13 20:38:58.637967 kernel: GPT:9289727 != 19775487
Feb 13 20:38:58.637976 kernel: GPT:Alternate GPT header not at the end of the disk.
Feb 13 20:38:58.637992 kernel: GPT:9289727 != 19775487
Feb 13 20:38:58.638001 kernel: GPT: Use GNU Parted to correct GPT errors.
Feb 13 20:38:58.638012 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Feb 13 20:38:58.627063 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Feb 13 20:38:58.638584 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Feb 13 20:38:58.638714 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Feb 13 20:38:58.641303 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Feb 13 20:38:58.642514 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Feb 13 20:38:58.642737 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Feb 13 20:38:58.644895 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Feb 13 20:38:58.655395 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Feb 13 20:38:58.658136 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
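
The GPT complaints above ("9289727 != 19775487") are the usual signature of a disk image written to a larger target: the backup GPT header still sits at the last LBA of the original image rather than at the end of the 19775488-sector device. Flatcar repairs this itself on first boot (the disk-uuid.service activity below); a manual fix on a generic system would look roughly like this, assuming the gdisk tools are installed:

    # Move the backup GPT header/entries to the true end of the disk
    sgdisk -e /dev/vda
    # Ask the kernel to re-read the partition table
    partprobe /dev/vda
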
Feb 13 20:38:58.664906 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/vda6 scanned by (udev-worker) (509)
Feb 13 20:38:58.667899 kernel: BTRFS: device fsid 4bb2b262-8ef2-48e3-80f4-24f9d7a85bf6 devid 1 transid 40 /dev/vda3 scanned by (udev-worker) (516)
Feb 13 20:38:58.670767 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Feb 13 20:38:58.676022 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT.
Feb 13 20:38:58.683586 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM.
Feb 13 20:38:58.688312 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Feb 13 20:38:58.692398 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132.
Feb 13 20:38:58.693759 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A.
Feb 13 20:38:58.707042 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Feb 13 20:38:58.708947 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Feb 13 20:38:58.716240 disk-uuid[550]: Primary Header is updated.
Feb 13 20:38:58.716240 disk-uuid[550]: Secondary Entries is updated.
Feb 13 20:38:58.716240 disk-uuid[550]: Secondary Header is updated.
Feb 13 20:38:58.723997 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Feb 13 20:38:58.725280 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Feb 13 20:38:59.731910 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Feb 13 20:38:59.732542 disk-uuid[553]: The operation has completed successfully.
Feb 13 20:38:59.752381 systemd[1]: disk-uuid.service: Deactivated successfully.
Feb 13 20:38:59.752482 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Feb 13 20:38:59.775086 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
Feb 13 20:38:59.779471 sh[573]: Success
Feb 13 20:38:59.791910 kernel: device-mapper: verity: sha256 using implementation "sha256-ce"
Feb 13 20:38:59.822557 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
Feb 13 20:38:59.836455 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
Feb 13 20:38:59.839922 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
Feb 13 20:38:59.848488 kernel: BTRFS info (device dm-0): first mount of filesystem 4bb2b262-8ef2-48e3-80f4-24f9d7a85bf6
Feb 13 20:38:59.848533 kernel: BTRFS info (device dm-0): using crc32c (crc32c-generic) checksum algorithm
Feb 13 20:38:59.849642 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead
Feb 13 20:38:59.850477 kernel: BTRFS info (device dm-0): disabling log replay at mount time
Feb 13 20:38:59.850493 kernel: BTRFS info (device dm-0): using free space tree
Feb 13 20:38:59.854374 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
Feb 13 20:38:59.855809 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Feb 13 20:38:59.863046 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Feb 13 20:38:59.864669 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
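
verity-setup.service is what produces the /dev/mapper/usr device named in mount.usr= on the command line, binding the USR-A partition to the root hash from verity.usrhash. A sketch of the equivalent manual veritysetup call (the device paths are placeholders, and Flatcar's unit additionally supplies the offsets of the hash area, which lives on the same partition as the data):

    # Open a dm-verity mapping named "usr"; reads through /dev/mapper/usr
    # are then verified against the Merkle tree rooted at this hash.
    veritysetup open <data-device> usr <hash-device> \
        c15c751c06cfb933aa98417326b93d899c08a83ce060a940cd01082629c201a7
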
Feb 13 20:38:59.872431 kernel: BTRFS info (device vda6): first mount of filesystem 896fb6d3-4143-43a6-a44b-ca1ce10817e1
Feb 13 20:38:59.872474 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm
Feb 13 20:38:59.872486 kernel: BTRFS info (device vda6): using free space tree
Feb 13 20:38:59.875903 kernel: BTRFS info (device vda6): auto enabling async discard
Feb 13 20:38:59.888292 systemd[1]: mnt-oem.mount: Deactivated successfully.
Feb 13 20:38:59.889670 kernel: BTRFS info (device vda6): last unmount of filesystem 896fb6d3-4143-43a6-a44b-ca1ce10817e1
Feb 13 20:38:59.895289 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Feb 13 20:38:59.904078 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Feb 13 20:38:59.979413 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Feb 13 20:39:00.006073 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Feb 13 20:39:00.044228 systemd-networkd[760]: lo: Link UP
Feb 13 20:39:00.044238 systemd-networkd[760]: lo: Gained carrier
Feb 13 20:39:00.044933 systemd-networkd[760]: Enumeration completed
Feb 13 20:39:00.045228 systemd[1]: Started systemd-networkd.service - Network Configuration.
Feb 13 20:39:00.045561 systemd-networkd[760]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Feb 13 20:39:00.045564 systemd-networkd[760]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Feb 13 20:39:00.046464 systemd[1]: Reached target network.target - Network.
Feb 13 20:39:00.048217 systemd-networkd[760]: eth0: Link UP
Feb 13 20:39:00.048220 systemd-networkd[760]: eth0: Gained carrier
Feb 13 20:39:00.048228 systemd-networkd[760]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Feb 13 20:39:00.063359 ignition[666]: Ignition 2.19.0
Feb 13 20:39:00.063371 ignition[666]: Stage: fetch-offline
Feb 13 20:39:00.063425 ignition[666]: no configs at "/usr/lib/ignition/base.d"
Feb 13 20:39:00.063434 ignition[666]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Feb 13 20:39:00.063599 ignition[666]: parsed url from cmdline: ""
Feb 13 20:39:00.063603 ignition[666]: no config URL provided
Feb 13 20:39:00.063608 ignition[666]: reading system config file "/usr/lib/ignition/user.ign"
Feb 13 20:39:00.063617 ignition[666]: no config at "/usr/lib/ignition/user.ign"
Feb 13 20:39:00.063641 ignition[666]: op(1): [started] loading QEMU firmware config module
Feb 13 20:39:00.063646 ignition[666]: op(1): executing: "modprobe" "qemu_fw_cfg"
Feb 13 20:39:00.073944 systemd-networkd[760]: eth0: DHCPv4 address 10.0.0.6/16, gateway 10.0.0.1 acquired from 10.0.0.1
Feb 13 20:39:00.076284 ignition[666]: op(1): [finished] loading QEMU firmware config module
Feb 13 20:39:00.076310 ignition[666]: QEMU firmware config was not found. Ignoring...
Feb 13 20:39:00.098756 ignition[666]: parsing config with SHA512: bd287783c9e4dae766887618075422e45b87fcc7ee4eec04048600a6f2676bb8415778f83df73495f8c4bc43a1d8ad73655c713d7710c4afbed93a3cfe85178e
Feb 13 20:39:00.104475 unknown[666]: fetched base config from "system"
Feb 13 20:39:00.104492 unknown[666]: fetched user config from "qemu"
Feb 13 20:39:00.105070 ignition[666]: fetch-offline: fetch-offline passed
Feb 13 20:39:00.106661 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
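
op(1) above is Ignition probing QEMU's fw_cfg interface for a machine-supplied config (the qemu_fw_cfg module exposes it to the guest). Handing a config to a Flatcar VM this way looks roughly like the following, per the Flatcar documentation (the file name is an example):

    # Attach an Ignition config via fw_cfg when launching the VM
    qemu-system-aarch64 ... \
        -fw_cfg name=opt/org.flatcar-linux/config,file=./config.ign
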
Feb 13 20:39:00.105160 ignition[666]: Ignition finished successfully
Feb 13 20:39:00.108703 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json).
Feb 13 20:39:00.117034 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Feb 13 20:39:00.127700 ignition[772]: Ignition 2.19.0
Feb 13 20:39:00.127710 ignition[772]: Stage: kargs
Feb 13 20:39:00.127934 ignition[772]: no configs at "/usr/lib/ignition/base.d"
Feb 13 20:39:00.127948 ignition[772]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Feb 13 20:39:00.130610 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Feb 13 20:39:00.128778 ignition[772]: kargs: kargs passed
Feb 13 20:39:00.128821 ignition[772]: Ignition finished successfully
Feb 13 20:39:00.140085 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Feb 13 20:39:00.149251 ignition[780]: Ignition 2.19.0
Feb 13 20:39:00.149263 ignition[780]: Stage: disks
Feb 13 20:39:00.149433 ignition[780]: no configs at "/usr/lib/ignition/base.d"
Feb 13 20:39:00.149443 ignition[780]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Feb 13 20:39:00.150289 ignition[780]: disks: disks passed
Feb 13 20:39:00.152351 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Feb 13 20:39:00.150336 ignition[780]: Ignition finished successfully
Feb 13 20:39:00.155090 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Feb 13 20:39:00.156568 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Feb 13 20:39:00.158621 systemd[1]: Reached target local-fs.target - Local File Systems.
Feb 13 20:39:00.160635 systemd[1]: Reached target sysinit.target - System Initialization.
Feb 13 20:39:00.162453 systemd[1]: Reached target basic.target - Basic System.
Feb 13 20:39:00.164972 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Feb 13 20:39:00.178283 systemd-fsck[790]: ROOT: clean, 14/553520 files, 52654/553472 blocks
Feb 13 20:39:00.182262 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Feb 13 20:39:00.202021 systemd[1]: Mounting sysroot.mount - /sysroot...
Feb 13 20:39:00.243906 kernel: EXT4-fs (vda9): mounted filesystem 9957d679-c6c4-49f4-b1b2-c3c1f3ba5699 r/w with ordered data mode. Quota mode: none.
Feb 13 20:39:00.244092 systemd[1]: Mounted sysroot.mount - /sysroot.
Feb 13 20:39:00.245371 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Feb 13 20:39:00.258970 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Feb 13 20:39:00.260703 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Feb 13 20:39:00.262178 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met.
Feb 13 20:39:00.262230 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Feb 13 20:39:00.271503 kernel: BTRFS: device label OEM devid 1 transid 13 /dev/vda6 scanned by mount (798)
Feb 13 20:39:00.271527 kernel: BTRFS info (device vda6): first mount of filesystem 896fb6d3-4143-43a6-a44b-ca1ce10817e1
Feb 13 20:39:00.271538 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm
Feb 13 20:39:00.271547 kernel: BTRFS info (device vda6): using free space tree
Feb 13 20:39:00.262253 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Feb 13 20:39:00.266579 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Feb 13 20:39:00.276414 kernel: BTRFS info (device vda6): auto enabling async discard
Feb 13 20:39:00.268550 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Feb 13 20:39:00.278899 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Feb 13 20:39:00.313082 initrd-setup-root[822]: cut: /sysroot/etc/passwd: No such file or directory
Feb 13 20:39:00.317585 initrd-setup-root[829]: cut: /sysroot/etc/group: No such file or directory
Feb 13 20:39:00.320596 initrd-setup-root[836]: cut: /sysroot/etc/shadow: No such file or directory
Feb 13 20:39:00.323693 initrd-setup-root[843]: cut: /sysroot/etc/gshadow: No such file or directory
Feb 13 20:39:00.399987 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Feb 13 20:39:00.411006 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Feb 13 20:39:00.413410 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Feb 13 20:39:00.417927 kernel: BTRFS info (device vda6): last unmount of filesystem 896fb6d3-4143-43a6-a44b-ca1ce10817e1
Feb 13 20:39:00.432411 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Feb 13 20:39:00.434307 ignition[912]: INFO : Ignition 2.19.0
Feb 13 20:39:00.434307 ignition[912]: INFO : Stage: mount
Feb 13 20:39:00.434307 ignition[912]: INFO : no configs at "/usr/lib/ignition/base.d"
Feb 13 20:39:00.434307 ignition[912]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Feb 13 20:39:00.437930 ignition[912]: INFO : mount: mount passed
Feb 13 20:39:00.437930 ignition[912]: INFO : Ignition finished successfully
Feb 13 20:39:00.437275 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Feb 13 20:39:00.447004 systemd[1]: Starting ignition-files.service - Ignition (files)...
Feb 13 20:39:00.847315 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Feb 13 20:39:00.857129 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Feb 13 20:39:00.863801 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/vda6 scanned by mount (923)
Feb 13 20:39:00.863836 kernel: BTRFS info (device vda6): first mount of filesystem 896fb6d3-4143-43a6-a44b-ca1ce10817e1
Feb 13 20:39:00.863847 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm
Feb 13 20:39:00.864787 kernel: BTRFS info (device vda6): using free space tree
Feb 13 20:39:00.867891 kernel: BTRFS info (device vda6): auto enabling async discard
Feb 13 20:39:00.868760 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Feb 13 20:39:00.884463 ignition[940]: INFO : Ignition 2.19.0
Feb 13 20:39:00.884463 ignition[940]: INFO : Stage: files
Feb 13 20:39:00.886099 ignition[940]: INFO : no configs at "/usr/lib/ignition/base.d"
Feb 13 20:39:00.886099 ignition[940]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Feb 13 20:39:00.886099 ignition[940]: DEBUG : files: compiled without relabeling support, skipping
Feb 13 20:39:00.889515 ignition[940]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Feb 13 20:39:00.889515 ignition[940]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Feb 13 20:39:00.892274 ignition[940]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Feb 13 20:39:00.892274 ignition[940]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Feb 13 20:39:00.892274 ignition[940]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Feb 13 20:39:00.891688 unknown[940]: wrote ssh authorized keys file for user: core
Feb 13 20:39:00.897494 ignition[940]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-arm64.tar.gz"
Feb 13 20:39:00.897494 ignition[940]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.13.2-linux-arm64.tar.gz: attempt #1
Feb 13 20:39:00.964259 ignition[940]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
Feb 13 20:39:01.150716 ignition[940]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-arm64.tar.gz"
Feb 13 20:39:01.150716 ignition[940]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh"
Feb 13 20:39:01.154798 ignition[940]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh"
Feb 13 20:39:01.154798 ignition[940]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml"
Feb 13 20:39:01.154798 ignition[940]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml"
Feb 13 20:39:01.154798 ignition[940]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Feb 13 20:39:01.154798 ignition[940]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Feb 13 20:39:01.154798 ignition[940]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Feb 13 20:39:01.154798 ignition[940]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Feb 13 20:39:01.154798 ignition[940]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf"
Feb 13 20:39:01.154798 ignition[940]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Feb 13 20:39:01.154798 ignition[940]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.0-arm64.raw"
"/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.0-arm64.raw" Feb 13 20:39:01.154798 ignition[940]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.0-arm64.raw" Feb 13 20:39:01.154798 ignition[940]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.31.0-arm64.raw: attempt #1 Feb 13 20:39:01.236038 systemd-networkd[760]: eth0: Gained IPv6LL Feb 13 20:39:01.475532 ignition[940]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK Feb 13 20:39:01.713866 ignition[940]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.0-arm64.raw" Feb 13 20:39:01.713866 ignition[940]: INFO : files: op(b): [started] processing unit "prepare-helm.service" Feb 13 20:39:01.717404 ignition[940]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Feb 13 20:39:01.717404 ignition[940]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Feb 13 20:39:01.717404 ignition[940]: INFO : files: op(b): [finished] processing unit "prepare-helm.service" Feb 13 20:39:01.717404 ignition[940]: INFO : files: op(d): [started] processing unit "coreos-metadata.service" Feb 13 20:39:01.717404 ignition[940]: INFO : files: op(d): op(e): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Feb 13 20:39:01.717404 ignition[940]: INFO : files: op(d): op(e): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Feb 13 20:39:01.717404 ignition[940]: INFO : files: op(d): [finished] processing unit "coreos-metadata.service" Feb 13 20:39:01.717404 ignition[940]: INFO : files: op(f): [started] setting preset to disabled for "coreos-metadata.service" Feb 13 20:39:01.740154 ignition[940]: INFO : files: op(f): op(10): [started] removing enablement symlink(s) for "coreos-metadata.service" Feb 13 20:39:01.743808 ignition[940]: INFO : files: op(f): op(10): [finished] removing enablement symlink(s) for "coreos-metadata.service" Feb 13 20:39:01.746454 ignition[940]: INFO : files: op(f): [finished] setting preset to disabled for "coreos-metadata.service" Feb 13 20:39:01.746454 ignition[940]: INFO : files: op(11): [started] setting preset to enabled for "prepare-helm.service" Feb 13 20:39:01.746454 ignition[940]: INFO : files: op(11): [finished] setting preset to enabled for "prepare-helm.service" Feb 13 20:39:01.746454 ignition[940]: INFO : files: createResultFile: createFiles: op(12): [started] writing file "/sysroot/etc/.ignition-result.json" Feb 13 20:39:01.746454 ignition[940]: INFO : files: createResultFile: createFiles: op(12): [finished] writing file "/sysroot/etc/.ignition-result.json" Feb 13 20:39:01.746454 ignition[940]: INFO : files: files passed Feb 13 20:39:01.746454 ignition[940]: INFO : Ignition finished successfully Feb 13 20:39:01.746851 systemd[1]: Finished ignition-files.service - Ignition (files). Feb 13 20:39:01.762033 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Feb 13 20:39:01.764027 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... 
Feb 13 20:39:01.767672 systemd[1]: ignition-quench.service: Deactivated successfully.
Feb 13 20:39:01.767797 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Feb 13 20:39:01.771561 initrd-setup-root-after-ignition[969]: grep: /sysroot/oem/oem-release: No such file or directory
Feb 13 20:39:01.773734 initrd-setup-root-after-ignition[971]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Feb 13 20:39:01.773734 initrd-setup-root-after-ignition[971]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Feb 13 20:39:01.776966 initrd-setup-root-after-ignition[975]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Feb 13 20:39:01.776828 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Feb 13 20:39:01.778461 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Feb 13 20:39:01.789043 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Feb 13 20:39:01.809707 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Feb 13 20:39:01.809826 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Feb 13 20:39:01.812290 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Feb 13 20:39:01.814189 systemd[1]: Reached target initrd.target - Initrd Default Target.
Feb 13 20:39:01.816084 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Feb 13 20:39:01.816906 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Feb 13 20:39:01.832944 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Feb 13 20:39:01.846067 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Feb 13 20:39:01.854282 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Feb 13 20:39:01.855612 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Feb 13 20:39:01.857797 systemd[1]: Stopped target timers.target - Timer Units.
Feb 13 20:39:01.859704 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Feb 13 20:39:01.859843 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Feb 13 20:39:01.862500 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Feb 13 20:39:01.864654 systemd[1]: Stopped target basic.target - Basic System.
Feb 13 20:39:01.866410 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Feb 13 20:39:01.868125 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Feb 13 20:39:01.870114 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Feb 13 20:39:01.872170 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Feb 13 20:39:01.874118 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
Feb 13 20:39:01.876274 systemd[1]: Stopped target sysinit.target - System Initialization.
Feb 13 20:39:01.878347 systemd[1]: Stopped target local-fs.target - Local File Systems.
Feb 13 20:39:01.880166 systemd[1]: Stopped target swap.target - Swaps.
Feb 13 20:39:01.881799 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Feb 13 20:39:01.881953 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
Feb 13 20:39:01.884410 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
Feb 13 20:39:01.886443 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Feb 13 20:39:01.888499 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Feb 13 20:39:01.892933 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Feb 13 20:39:01.894279 systemd[1]: dracut-initqueue.service: Deactivated successfully. Feb 13 20:39:01.894399 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Feb 13 20:39:01.897401 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Feb 13 20:39:01.897528 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Feb 13 20:39:01.899685 systemd[1]: Stopped target paths.target - Path Units. Feb 13 20:39:01.901378 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Feb 13 20:39:01.905941 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Feb 13 20:39:01.907360 systemd[1]: Stopped target slices.target - Slice Units. Feb 13 20:39:01.909597 systemd[1]: Stopped target sockets.target - Socket Units. Feb 13 20:39:01.911325 systemd[1]: iscsid.socket: Deactivated successfully. Feb 13 20:39:01.911425 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Feb 13 20:39:01.913099 systemd[1]: iscsiuio.socket: Deactivated successfully. Feb 13 20:39:01.913197 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Feb 13 20:39:01.914825 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Feb 13 20:39:01.914956 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Feb 13 20:39:01.916853 systemd[1]: ignition-files.service: Deactivated successfully. Feb 13 20:39:01.916970 systemd[1]: Stopped ignition-files.service - Ignition (files). Feb 13 20:39:01.929154 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Feb 13 20:39:01.930980 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Feb 13 20:39:01.931129 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Feb 13 20:39:01.934406 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Feb 13 20:39:01.935387 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Feb 13 20:39:01.935526 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Feb 13 20:39:01.937621 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Feb 13 20:39:01.937732 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Feb 13 20:39:01.944390 systemd[1]: initrd-cleanup.service: Deactivated successfully. Feb 13 20:39:01.945437 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Feb 13 20:39:01.948724 systemd[1]: sysroot-boot.mount: Deactivated successfully. Feb 13 20:39:01.950038 ignition[995]: INFO : Ignition 2.19.0 Feb 13 20:39:01.950038 ignition[995]: INFO : Stage: umount Feb 13 20:39:01.950038 ignition[995]: INFO : no configs at "/usr/lib/ignition/base.d" Feb 13 20:39:01.950038 ignition[995]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Feb 13 20:39:01.950038 ignition[995]: INFO : umount: umount passed Feb 13 20:39:01.950038 ignition[995]: INFO : Ignition finished successfully Feb 13 20:39:01.950316 systemd[1]: ignition-mount.service: Deactivated successfully. Feb 13 20:39:01.951011 systemd[1]: Stopped ignition-mount.service - Ignition (mount). 
Feb 13 20:39:01.953207 systemd[1]: Stopped target network.target - Network. Feb 13 20:39:01.954535 systemd[1]: ignition-disks.service: Deactivated successfully. Feb 13 20:39:01.954612 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Feb 13 20:39:01.955690 systemd[1]: ignition-kargs.service: Deactivated successfully. Feb 13 20:39:01.955739 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Feb 13 20:39:01.957871 systemd[1]: ignition-setup.service: Deactivated successfully. Feb 13 20:39:01.957937 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Feb 13 20:39:01.961948 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Feb 13 20:39:01.962007 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Feb 13 20:39:01.964138 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Feb 13 20:39:01.965792 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Feb 13 20:39:01.973930 systemd-networkd[760]: eth0: DHCPv6 lease lost Feb 13 20:39:01.976373 systemd[1]: systemd-networkd.service: Deactivated successfully. Feb 13 20:39:01.976522 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Feb 13 20:39:01.978494 systemd[1]: systemd-resolved.service: Deactivated successfully. Feb 13 20:39:01.978612 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Feb 13 20:39:01.983522 systemd[1]: systemd-networkd.socket: Deactivated successfully. Feb 13 20:39:01.983610 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Feb 13 20:39:01.993986 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Feb 13 20:39:01.994946 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Feb 13 20:39:01.995016 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Feb 13 20:39:01.997124 systemd[1]: systemd-sysctl.service: Deactivated successfully. Feb 13 20:39:01.997181 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Feb 13 20:39:01.999121 systemd[1]: systemd-modules-load.service: Deactivated successfully. Feb 13 20:39:01.999166 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Feb 13 20:39:02.001481 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Feb 13 20:39:02.001528 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Feb 13 20:39:02.003468 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Feb 13 20:39:02.014119 systemd[1]: network-cleanup.service: Deactivated successfully. Feb 13 20:39:02.014252 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Feb 13 20:39:02.020716 systemd[1]: sysroot-boot.service: Deactivated successfully. Feb 13 20:39:02.021796 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Feb 13 20:39:02.022930 systemd[1]: initrd-setup-root.service: Deactivated successfully. Feb 13 20:39:02.022986 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Feb 13 20:39:02.025224 systemd[1]: systemd-udevd.service: Deactivated successfully. Feb 13 20:39:02.025355 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Feb 13 20:39:02.027637 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Feb 13 20:39:02.027705 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. 
Feb 13 20:39:02.029199 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Feb 13 20:39:02.029239 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Feb 13 20:39:02.031024 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Feb 13 20:39:02.031076 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Feb 13 20:39:02.034341 systemd[1]: dracut-cmdline.service: Deactivated successfully. Feb 13 20:39:02.034388 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Feb 13 20:39:02.037226 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Feb 13 20:39:02.037279 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Feb 13 20:39:02.048067 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Feb 13 20:39:02.049162 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Feb 13 20:39:02.049239 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Feb 13 20:39:02.051422 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Feb 13 20:39:02.051471 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Feb 13 20:39:02.056488 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Feb 13 20:39:02.057941 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Feb 13 20:39:02.059316 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Feb 13 20:39:02.061875 systemd[1]: Starting initrd-switch-root.service - Switch Root... Feb 13 20:39:02.071004 systemd[1]: Switching root. Feb 13 20:39:02.103103 systemd-journald[238]: Journal stopped Feb 13 20:39:02.848480 systemd-journald[238]: Received SIGTERM from PID 1 (systemd). Feb 13 20:39:02.848536 kernel: SELinux: policy capability network_peer_controls=1 Feb 13 20:39:02.848548 kernel: SELinux: policy capability open_perms=1 Feb 13 20:39:02.848558 kernel: SELinux: policy capability extended_socket_class=1 Feb 13 20:39:02.848570 kernel: SELinux: policy capability always_check_network=0 Feb 13 20:39:02.848580 kernel: SELinux: policy capability cgroup_seclabel=1 Feb 13 20:39:02.848589 kernel: SELinux: policy capability nnp_nosuid_transition=1 Feb 13 20:39:02.848599 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Feb 13 20:39:02.848609 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Feb 13 20:39:02.848618 kernel: audit: type=1403 audit(1739479142.285:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 Feb 13 20:39:02.848629 systemd[1]: Successfully loaded SELinux policy in 33.341ms. Feb 13 20:39:02.848649 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 9.728ms. Feb 13 20:39:02.848663 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Feb 13 20:39:02.848674 systemd[1]: Detected virtualization kvm. Feb 13 20:39:02.848684 systemd[1]: Detected architecture arm64. Feb 13 20:39:02.848695 systemd[1]: Detected first boot. Feb 13 20:39:02.848705 systemd[1]: Initializing machine ID from VM UUID. Feb 13 20:39:02.848716 zram_generator::config[1040]: No configuration found. Feb 13 20:39:02.848731 systemd[1]: Populated /etc with preset unit settings. 
Feb 13 20:39:02.848741 systemd[1]: initrd-switch-root.service: Deactivated successfully. Feb 13 20:39:02.848751 systemd[1]: Stopped initrd-switch-root.service - Switch Root. Feb 13 20:39:02.848763 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Feb 13 20:39:02.848774 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Feb 13 20:39:02.848785 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Feb 13 20:39:02.848795 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Feb 13 20:39:02.848806 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Feb 13 20:39:02.848817 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Feb 13 20:39:02.848827 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Feb 13 20:39:02.848838 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Feb 13 20:39:02.848850 systemd[1]: Created slice user.slice - User and Session Slice. Feb 13 20:39:02.848860 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Feb 13 20:39:02.848871 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Feb 13 20:39:02.848896 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Feb 13 20:39:02.848908 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Feb 13 20:39:02.848919 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. Feb 13 20:39:02.848929 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Feb 13 20:39:02.848940 systemd[1]: Expecting device dev-ttyAMA0.device - /dev/ttyAMA0... Feb 13 20:39:02.848957 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Feb 13 20:39:02.848972 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. Feb 13 20:39:02.848982 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. Feb 13 20:39:02.848993 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. Feb 13 20:39:02.849004 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Feb 13 20:39:02.849015 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Feb 13 20:39:02.849026 systemd[1]: Reached target remote-fs.target - Remote File Systems. Feb 13 20:39:02.849036 systemd[1]: Reached target slices.target - Slice Units. Feb 13 20:39:02.849046 systemd[1]: Reached target swap.target - Swaps. Feb 13 20:39:02.849059 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Feb 13 20:39:02.849070 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Feb 13 20:39:02.849080 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Feb 13 20:39:02.849090 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Feb 13 20:39:02.849101 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Feb 13 20:39:02.849111 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Feb 13 20:39:02.849122 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... 
Feb 13 20:39:02.849133 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Feb 13 20:39:02.849143 systemd[1]: Mounting media.mount - External Media Directory... Feb 13 20:39:02.849155 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Feb 13 20:39:02.849171 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Feb 13 20:39:02.849183 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... Feb 13 20:39:02.849194 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Feb 13 20:39:02.849205 systemd[1]: Reached target machines.target - Containers. Feb 13 20:39:02.849216 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Feb 13 20:39:02.849226 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Feb 13 20:39:02.849237 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Feb 13 20:39:02.849248 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Feb 13 20:39:02.849260 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Feb 13 20:39:02.849271 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Feb 13 20:39:02.849283 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Feb 13 20:39:02.849299 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Feb 13 20:39:02.849310 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Feb 13 20:39:02.849321 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Feb 13 20:39:02.849331 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Feb 13 20:39:02.849342 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. Feb 13 20:39:02.849354 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Feb 13 20:39:02.849364 systemd[1]: Stopped systemd-fsck-usr.service. Feb 13 20:39:02.849375 kernel: fuse: init (API version 7.39) Feb 13 20:39:02.849384 kernel: loop: module loaded Feb 13 20:39:02.849394 systemd[1]: Starting systemd-journald.service - Journal Service... Feb 13 20:39:02.849406 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Feb 13 20:39:02.849416 kernel: ACPI: bus type drm_connector registered Feb 13 20:39:02.849427 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Feb 13 20:39:02.849437 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Feb 13 20:39:02.849450 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Feb 13 20:39:02.849487 systemd-journald[1108]: Collecting audit messages is disabled. Feb 13 20:39:02.849513 systemd[1]: verity-setup.service: Deactivated successfully. Feb 13 20:39:02.849527 systemd[1]: Stopped verity-setup.service. Feb 13 20:39:02.849541 systemd-journald[1108]: Journal started Feb 13 20:39:02.849561 systemd-journald[1108]: Runtime Journal (/run/log/journal/fab23239f92440b785b1096b3697c77f) is 5.9M, max 47.3M, 41.4M free. Feb 13 20:39:02.633841 systemd[1]: Queued start job for default target multi-user.target. 
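The modprobe@configfs, modprobe@dm_mod, modprobe@drm, modprobe@efi_pstore, modprobe@fuse and modprobe@loop jobs above are all instances of systemd's modprobe@.service template, essentially a oneshot wrapper around modprobe. Abridged from upstream systemd; the unit shipped in this image may differ slightly:

    [Unit]
    Description=Load Kernel Module %i
    DefaultDependencies=no
    Before=sysinit.target
    ConditionCapability=CAP_SYS_MODULE

    [Service]
    Type=oneshot
    ExecStart=-/sbin/modprobe -abq %i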
Feb 13 20:39:02.648924 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6. Feb 13 20:39:02.649316 systemd[1]: systemd-journald.service: Deactivated successfully. Feb 13 20:39:02.854329 systemd[1]: Started systemd-journald.service - Journal Service. Feb 13 20:39:02.855006 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Feb 13 20:39:02.856318 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Feb 13 20:39:02.857578 systemd[1]: Mounted media.mount - External Media Directory. Feb 13 20:39:02.858807 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Feb 13 20:39:02.860140 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Feb 13 20:39:02.861534 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Feb 13 20:39:02.864037 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Feb 13 20:39:02.866912 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Feb 13 20:39:02.868490 systemd[1]: modprobe@configfs.service: Deactivated successfully. Feb 13 20:39:02.868636 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Feb 13 20:39:02.870174 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Feb 13 20:39:02.870330 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Feb 13 20:39:02.873283 systemd[1]: modprobe@drm.service: Deactivated successfully. Feb 13 20:39:02.873438 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Feb 13 20:39:02.874829 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Feb 13 20:39:02.875022 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Feb 13 20:39:02.877274 systemd[1]: modprobe@fuse.service: Deactivated successfully. Feb 13 20:39:02.877416 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Feb 13 20:39:02.879012 systemd[1]: modprobe@loop.service: Deactivated successfully. Feb 13 20:39:02.879961 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Feb 13 20:39:02.881448 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Feb 13 20:39:02.882996 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Feb 13 20:39:02.885968 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Feb 13 20:39:02.898793 systemd[1]: Reached target network-pre.target - Preparation for Network. Feb 13 20:39:02.911016 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Feb 13 20:39:02.913226 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Feb 13 20:39:02.914387 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Feb 13 20:39:02.914432 systemd[1]: Reached target local-fs.target - Local File Systems. Feb 13 20:39:02.916474 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink). Feb 13 20:39:02.918943 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Feb 13 20:39:02.921118 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Feb 13 20:39:02.922249 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. 
Feb 13 20:39:02.923848 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Feb 13 20:39:02.926192 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Feb 13 20:39:02.927537 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Feb 13 20:39:02.932105 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Feb 13 20:39:02.933937 systemd-journald[1108]: Time spent on flushing to /var/log/journal/fab23239f92440b785b1096b3697c77f is 24.898ms for 851 entries. Feb 13 20:39:02.933937 systemd-journald[1108]: System Journal (/var/log/journal/fab23239f92440b785b1096b3697c77f) is 8.0M, max 195.6M, 187.6M free. Feb 13 20:39:02.969754 systemd-journald[1108]: Received client request to flush runtime journal. Feb 13 20:39:02.969798 kernel: loop0: detected capacity change from 0 to 114328 Feb 13 20:39:02.935255 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Feb 13 20:39:02.937276 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Feb 13 20:39:02.943525 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Feb 13 20:39:02.947172 systemd[1]: Starting systemd-sysusers.service - Create System Users... Feb 13 20:39:02.953927 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Feb 13 20:39:02.955540 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Feb 13 20:39:02.960707 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Feb 13 20:39:02.967092 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Feb 13 20:39:02.973122 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Feb 13 20:39:02.975051 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Feb 13 20:39:02.976931 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Feb 13 20:39:02.977289 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Feb 13 20:39:02.984821 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Feb 13 20:39:02.996181 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk... Feb 13 20:39:03.002791 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization... Feb 13 20:39:03.003933 kernel: loop1: detected capacity change from 0 to 114432 Feb 13 20:39:03.004504 systemd[1]: Finished systemd-sysusers.service - Create System Users. Feb 13 20:39:03.020149 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Feb 13 20:39:03.022252 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Feb 13 20:39:03.023499 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk. Feb 13 20:39:03.035606 udevadm[1167]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation-early.service, lvm2-activation.service not to pull it in. Feb 13 20:39:03.051791 systemd-tmpfiles[1170]: ACLs are not supported, ignoring. Feb 13 20:39:03.051807 systemd-tmpfiles[1170]: ACLs are not supported, ignoring. Feb 13 20:39:03.056239 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. 
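The journal caps printed above (system journal max 195.6M here, runtime journal max 47.3M earlier) are journald's computed defaults, derived as a fraction of the backing filesystem. They can be pinned in /etc/systemd/journald.conf; the values below are illustrative, not this host's settings:

    [Journal]
    RuntimeMaxUse=48M
    SystemMaxUse=200M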
Feb 13 20:39:03.058927 kernel: loop2: detected capacity change from 0 to 189592 Feb 13 20:39:03.100900 kernel: loop3: detected capacity change from 0 to 114328 Feb 13 20:39:03.105911 kernel: loop4: detected capacity change from 0 to 114432 Feb 13 20:39:03.110031 kernel: loop5: detected capacity change from 0 to 189592 Feb 13 20:39:03.113916 (sd-merge)[1177]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes'. Feb 13 20:39:03.114293 (sd-merge)[1177]: Merged extensions into '/usr'. Feb 13 20:39:03.117697 systemd[1]: Reloading requested from client PID 1151 ('systemd-sysext') (unit systemd-sysext.service)... Feb 13 20:39:03.117716 systemd[1]: Reloading... Feb 13 20:39:03.170904 zram_generator::config[1203]: No configuration found. Feb 13 20:39:03.206348 ldconfig[1146]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Feb 13 20:39:03.274362 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Feb 13 20:39:03.310371 systemd[1]: Reloading finished in 192 ms. Feb 13 20:39:03.341425 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Feb 13 20:39:03.344912 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Feb 13 20:39:03.364105 systemd[1]: Starting ensure-sysext.service... Feb 13 20:39:03.366615 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Feb 13 20:39:03.372876 systemd[1]: Reloading requested from client PID 1238 ('systemctl') (unit ensure-sysext.service)... Feb 13 20:39:03.372911 systemd[1]: Reloading... Feb 13 20:39:03.387702 systemd-tmpfiles[1239]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Feb 13 20:39:03.388004 systemd-tmpfiles[1239]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Feb 13 20:39:03.388683 systemd-tmpfiles[1239]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Feb 13 20:39:03.388936 systemd-tmpfiles[1239]: ACLs are not supported, ignoring. Feb 13 20:39:03.388996 systemd-tmpfiles[1239]: ACLs are not supported, ignoring. Feb 13 20:39:03.391173 systemd-tmpfiles[1239]: Detected autofs mount point /boot during canonicalization of boot. Feb 13 20:39:03.391186 systemd-tmpfiles[1239]: Skipping /boot Feb 13 20:39:03.402186 systemd-tmpfiles[1239]: Detected autofs mount point /boot during canonicalization of boot. Feb 13 20:39:03.402199 systemd-tmpfiles[1239]: Skipping /boot Feb 13 20:39:03.420916 zram_generator::config[1266]: No configuration found. Feb 13 20:39:03.508984 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Feb 13 20:39:03.545562 systemd[1]: Reloading finished in 172 ms. Feb 13 20:39:03.566047 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Feb 13 20:39:03.574290 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Feb 13 20:39:03.582303 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Feb 13 20:39:03.584834 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... 
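The (sd-merge) lines show systemd-sysext overlaying the three extension images onto /usr, followed by the reload that activates their units. On the running host the same state can be inspected and re-applied with the standard systemd-sysext verbs (paths and extension names here are taken from the log):

    ls /etc/extensions        # kubernetes.raw -> /opt/extensions/kubernetes/kubernetes-v1.31.0-arm64.raw
    systemd-sysext status     # hierarchy /usr merged with containerd-flatcar, docker-flatcar, kubernetes
    systemd-sysext refresh    # unmerge and remerge after adding or removing an image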
Feb 13 20:39:03.587562 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Feb 13 20:39:03.593314 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Feb 13 20:39:03.603351 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Feb 13 20:39:03.613870 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Feb 13 20:39:03.616909 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Feb 13 20:39:03.624856 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Feb 13 20:39:03.634215 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Feb 13 20:39:03.639206 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Feb 13 20:39:03.644129 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Feb 13 20:39:03.645936 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Feb 13 20:39:03.649992 systemd[1]: Starting systemd-update-done.service - Update is Completed... Feb 13 20:39:03.659200 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Feb 13 20:39:03.664010 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Feb 13 20:39:03.664120 systemd-udevd[1308]: Using default interface naming scheme 'v255'. Feb 13 20:39:03.666050 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Feb 13 20:39:03.666202 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Feb 13 20:39:03.667989 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Feb 13 20:39:03.668119 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Feb 13 20:39:03.669994 systemd[1]: modprobe@loop.service: Deactivated successfully. Feb 13 20:39:03.670120 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Feb 13 20:39:03.671808 systemd[1]: Finished systemd-update-done.service - Update is Completed. Feb 13 20:39:03.681990 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Feb 13 20:39:03.685346 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Feb 13 20:39:03.697322 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Feb 13 20:39:03.700314 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Feb 13 20:39:03.708254 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Feb 13 20:39:03.709411 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Feb 13 20:39:03.711430 systemd[1]: Starting systemd-networkd.service - Network Configuration... Feb 13 20:39:03.712722 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Feb 13 20:39:03.713819 systemd[1]: Started systemd-userdbd.service - User Database Manager. Feb 13 20:39:03.720806 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Feb 13 20:39:03.720991 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. 
Feb 13 20:39:03.721496 augenrules[1357]: No rules Feb 13 20:39:03.726013 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Feb 13 20:39:03.729673 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Feb 13 20:39:03.731401 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Feb 13 20:39:03.731559 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Feb 13 20:39:03.733477 systemd[1]: modprobe@loop.service: Deactivated successfully. Feb 13 20:39:03.733608 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Feb 13 20:39:03.766940 systemd[1]: Finished ensure-sysext.service. Feb 13 20:39:03.769954 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 40 scanned by (udev-worker) (1346) Feb 13 20:39:03.771064 systemd[1]: Condition check resulted in dev-ttyAMA0.device - /dev/ttyAMA0 being skipped. Feb 13 20:39:03.777737 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Feb 13 20:39:03.780097 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Feb 13 20:39:03.782789 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Feb 13 20:39:03.785250 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Feb 13 20:39:03.789192 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Feb 13 20:39:03.790325 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Feb 13 20:39:03.792080 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization... Feb 13 20:39:03.794473 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Feb 13 20:39:03.794898 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Feb 13 20:39:03.795330 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Feb 13 20:39:03.798742 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Feb 13 20:39:03.802823 systemd[1]: modprobe@drm.service: Deactivated successfully. Feb 13 20:39:03.804935 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Feb 13 20:39:03.807388 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Feb 13 20:39:03.807552 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Feb 13 20:39:03.809226 systemd[1]: modprobe@loop.service: Deactivated successfully. Feb 13 20:39:03.809385 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Feb 13 20:39:03.823854 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Feb 13 20:39:03.825088 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Feb 13 20:39:03.825213 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Feb 13 20:39:03.860779 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Feb 13 20:39:03.881312 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... 
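augenrules reporting "No rules" means /etc/audit/rules.d/ contributes nothing, so audit-rules.service finishes with an empty ruleset. A hypothetical drop-in, in standard auditctl syntax, that would populate it:

    # /etc/audit/rules.d/10-example.rules (hypothetical)
    -w /etc/ssh/sshd_config -p wa -k sshd-config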
Feb 13 20:39:03.883853 systemd-networkd[1356]: lo: Link UP Feb 13 20:39:03.883867 systemd-networkd[1356]: lo: Gained carrier Feb 13 20:39:03.893149 systemd-networkd[1356]: Enumeration completed Feb 13 20:39:03.893312 systemd[1]: Started systemd-networkd.service - Network Configuration. Feb 13 20:39:03.896777 systemd-resolved[1306]: Positive Trust Anchors: Feb 13 20:39:03.896797 systemd-resolved[1306]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Feb 13 20:39:03.896830 systemd-resolved[1306]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Feb 13 20:39:03.899093 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Feb 13 20:39:03.900814 systemd-networkd[1356]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Feb 13 20:39:03.900818 systemd-networkd[1356]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Feb 13 20:39:03.901667 systemd-networkd[1356]: eth0: Link UP Feb 13 20:39:03.901679 systemd-networkd[1356]: eth0: Gained carrier Feb 13 20:39:03.901692 systemd-networkd[1356]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Feb 13 20:39:03.916684 systemd-resolved[1306]: Defaulting to hostname 'linux'. Feb 13 20:39:03.918991 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization. Feb 13 20:39:03.922969 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization. Feb 13 20:39:03.924530 systemd[1]: Reached target time-set.target - System Time Set. Feb 13 20:39:03.927328 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes... Feb 13 20:39:03.933252 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Feb 13 20:39:03.934699 systemd[1]: Reached target network.target - Network. Feb 13 20:39:03.935661 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Feb 13 20:39:03.937962 systemd-networkd[1356]: eth0: DHCPv4 address 10.0.0.6/16, gateway 10.0.0.1 acquired from 10.0.0.1 Feb 13 20:39:03.938936 systemd-timesyncd[1382]: Network configuration changed, trying to establish connection. Feb 13 20:39:03.475404 systemd-timesyncd[1382]: Contacted time server 10.0.0.1:123 (10.0.0.1). Feb 13 20:39:03.486420 systemd-journald[1108]: Time jumped backwards, rotating. Feb 13 20:39:03.475457 systemd-timesyncd[1382]: Initial clock synchronization to Thu 2025-02-13 20:39:03.475252 UTC. Feb 13 20:39:03.482413 systemd-resolved[1306]: Clock change detected. Flushing caches. Feb 13 20:39:03.501024 lvm[1399]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Feb 13 20:39:03.509997 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Feb 13 20:39:03.537113 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes. 
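eth0 is configured by /usr/lib/systemd/network/zz-default.network, Flatcar's lowest-priority catch-all unit, which is why networkd warns about matching on a potentially unpredictable interface name. Its core is roughly the following (a reconstruction of the stock unit, not copied from this image):

    [Match]
    Name=*

    [Network]
    DHCP=yes

The DHCPv4 lease logged above (10.0.0.6/16, gateway 10.0.0.1) arrives through this unit, and the resulting clock sync from 10.0.0.1 is what makes the journal timestamps jump backwards just afterwards.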
Feb 13 20:39:03.538758 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Feb 13 20:39:03.539914 systemd[1]: Reached target sysinit.target - System Initialization. Feb 13 20:39:03.541118 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Feb 13 20:39:03.542438 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Feb 13 20:39:03.543828 systemd[1]: Started logrotate.timer - Daily rotation of log files. Feb 13 20:39:03.545022 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Feb 13 20:39:03.546352 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Feb 13 20:39:03.547616 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Feb 13 20:39:03.547656 systemd[1]: Reached target paths.target - Path Units. Feb 13 20:39:03.548543 systemd[1]: Reached target timers.target - Timer Units. Feb 13 20:39:03.550625 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Feb 13 20:39:03.553166 systemd[1]: Starting docker.socket - Docker Socket for the API... Feb 13 20:39:03.564449 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Feb 13 20:39:03.566825 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes... Feb 13 20:39:03.568425 systemd[1]: Listening on docker.socket - Docker Socket for the API. Feb 13 20:39:03.569614 systemd[1]: Reached target sockets.target - Socket Units. Feb 13 20:39:03.570565 systemd[1]: Reached target basic.target - Basic System. Feb 13 20:39:03.571494 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Feb 13 20:39:03.571528 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Feb 13 20:39:03.572486 systemd[1]: Starting containerd.service - containerd container runtime... Feb 13 20:39:03.574495 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Feb 13 20:39:03.577438 lvm[1409]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Feb 13 20:39:03.579255 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Feb 13 20:39:03.585519 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Feb 13 20:39:03.586514 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Feb 13 20:39:03.587752 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Feb 13 20:39:03.590589 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Feb 13 20:39:03.594429 jq[1412]: false Feb 13 20:39:03.593269 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Feb 13 20:39:03.597478 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... 
Feb 13 20:39:03.597672 extend-filesystems[1413]: Found loop3 Feb 13 20:39:03.597672 extend-filesystems[1413]: Found loop4 Feb 13 20:39:03.597672 extend-filesystems[1413]: Found loop5 Feb 13 20:39:03.605395 extend-filesystems[1413]: Found vda Feb 13 20:39:03.605395 extend-filesystems[1413]: Found vda1 Feb 13 20:39:03.605395 extend-filesystems[1413]: Found vda2 Feb 13 20:39:03.605395 extend-filesystems[1413]: Found vda3 Feb 13 20:39:03.605395 extend-filesystems[1413]: Found usr Feb 13 20:39:03.605395 extend-filesystems[1413]: Found vda4 Feb 13 20:39:03.605395 extend-filesystems[1413]: Found vda6 Feb 13 20:39:03.605395 extend-filesystems[1413]: Found vda7 Feb 13 20:39:03.605395 extend-filesystems[1413]: Found vda9 Feb 13 20:39:03.605395 extend-filesystems[1413]: Checking size of /dev/vda9 Feb 13 20:39:03.602723 systemd[1]: Starting systemd-logind.service - User Login Management... Feb 13 20:39:03.620902 dbus-daemon[1411]: [system] SELinux support is enabled Feb 13 20:39:03.622725 extend-filesystems[1413]: Resized partition /dev/vda9 Feb 13 20:39:03.610886 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Feb 13 20:39:03.611333 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Feb 13 20:39:03.614200 systemd[1]: Starting update-engine.service - Update Engine... Feb 13 20:39:03.616572 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Feb 13 20:39:03.619153 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes. Feb 13 20:39:03.621721 systemd[1]: Started dbus.service - D-Bus System Message Bus. Feb 13 20:39:03.625370 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Feb 13 20:39:03.625810 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Feb 13 20:39:03.628168 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Feb 13 20:39:03.630376 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Feb 13 20:39:03.635923 jq[1427]: true Feb 13 20:39:03.636190 extend-filesystems[1434]: resize2fs 1.47.1 (20-May-2024) Feb 13 20:39:03.641494 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 40 scanned by (udev-worker) (1346) Feb 13 20:39:03.641531 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 1864699 blocks Feb 13 20:39:03.648666 (ntainerd)[1441]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Feb 13 20:39:03.651157 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Feb 13 20:39:03.651190 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Feb 13 20:39:03.654537 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Feb 13 20:39:03.654562 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. 
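extend-filesystems enumerates the block devices above and then grows the root filesystem online with the resize2fs 1.47.1 it just announced; the completion messages follow below (553472 to 1864699 4k blocks, roughly 2.1 GiB to 7.1 GiB). Done by hand the equivalent would be the pair below; the partition-grow step is an assumption, since the log only shows the filesystem resize:

    growpart /dev/vda 9    # assumed step: grow partition 9 to fill the disk first
    resize2fs /dev/vda9    # online ext4 grow; / stays mounted throughout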
Feb 13 20:39:03.662729 kernel: EXT4-fs (vda9): resized filesystem to 1864699 Feb 13 20:39:03.662830 systemd[1]: motdgen.service: Deactivated successfully. Feb 13 20:39:03.667383 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Feb 13 20:39:03.673530 extend-filesystems[1434]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required Feb 13 20:39:03.673530 extend-filesystems[1434]: old_desc_blocks = 1, new_desc_blocks = 1 Feb 13 20:39:03.673530 extend-filesystems[1434]: The filesystem on /dev/vda9 is now 1864699 (4k) blocks long. Feb 13 20:39:03.684673 extend-filesystems[1413]: Resized filesystem in /dev/vda9 Feb 13 20:39:03.689414 update_engine[1425]: I20250213 20:39:03.682669 1425 main.cc:92] Flatcar Update Engine starting Feb 13 20:39:03.675166 systemd[1]: extend-filesystems.service: Deactivated successfully. Feb 13 20:39:03.675344 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Feb 13 20:39:03.678342 systemd-logind[1423]: Watching system buttons on /dev/input/event0 (Power Button) Feb 13 20:39:03.678669 systemd-logind[1423]: New seat seat0. Feb 13 20:39:03.679195 systemd[1]: Started systemd-logind.service - User Login Management. Feb 13 20:39:03.692854 jq[1439]: true Feb 13 20:39:03.696932 update_engine[1425]: I20250213 20:39:03.696868 1425 update_check_scheduler.cc:74] Next update check in 8m3s Feb 13 20:39:03.697010 systemd[1]: Started update-engine.service - Update Engine. Feb 13 20:39:03.702728 systemd[1]: Started locksmithd.service - Cluster reboot manager. Feb 13 20:39:03.704808 tar[1435]: linux-arm64/helm Feb 13 20:39:03.744943 bash[1468]: Updated "/home/core/.ssh/authorized_keys" Feb 13 20:39:03.745954 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Feb 13 20:39:03.749975 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met. Feb 13 20:39:03.787492 locksmithd[1453]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Feb 13 20:39:03.867013 containerd[1441]: time="2025-02-13T20:39:03.866650268Z" level=info msg="starting containerd" revision=174e0d1785eeda18dc2beba45e1d5a188771636b version=v1.7.21 Feb 13 20:39:03.890746 containerd[1441]: time="2025-02-13T20:39:03.890618868Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Feb 13 20:39:03.893195 containerd[1441]: time="2025-02-13T20:39:03.892996628Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.74-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Feb 13 20:39:03.893195 containerd[1441]: time="2025-02-13T20:39:03.893040068Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Feb 13 20:39:03.893195 containerd[1441]: time="2025-02-13T20:39:03.893059108Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Feb 13 20:39:03.893301 containerd[1441]: time="2025-02-13T20:39:03.893202148Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1 Feb 13 20:39:03.893301 containerd[1441]: time="2025-02-13T20:39:03.893218628Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." 
type=io.containerd.snapshotter.v1 Feb 13 20:39:03.893301 containerd[1441]: time="2025-02-13T20:39:03.893286428Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1 Feb 13 20:39:03.893388 containerd[1441]: time="2025-02-13T20:39:03.893298388Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Feb 13 20:39:03.893523 containerd[1441]: time="2025-02-13T20:39:03.893490948Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Feb 13 20:39:03.893523 containerd[1441]: time="2025-02-13T20:39:03.893511908Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Feb 13 20:39:03.893564 containerd[1441]: time="2025-02-13T20:39:03.893524508Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1 Feb 13 20:39:03.893564 containerd[1441]: time="2025-02-13T20:39:03.893534428Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Feb 13 20:39:03.893646 containerd[1441]: time="2025-02-13T20:39:03.893626348Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Feb 13 20:39:03.893871 containerd[1441]: time="2025-02-13T20:39:03.893842988Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Feb 13 20:39:03.893970 containerd[1441]: time="2025-02-13T20:39:03.893952268Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Feb 13 20:39:03.893994 containerd[1441]: time="2025-02-13T20:39:03.893969108Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Feb 13 20:39:03.894077 containerd[1441]: time="2025-02-13T20:39:03.894060548Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 Feb 13 20:39:03.894117 containerd[1441]: time="2025-02-13T20:39:03.894105988Z" level=info msg="metadata content store policy set" policy=shared Feb 13 20:39:03.897297 containerd[1441]: time="2025-02-13T20:39:03.897264148Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Feb 13 20:39:03.897342 containerd[1441]: time="2025-02-13T20:39:03.897320228Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Feb 13 20:39:03.897342 containerd[1441]: time="2025-02-13T20:39:03.897336548Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1 Feb 13 20:39:03.897375 containerd[1441]: time="2025-02-13T20:39:03.897351508Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1 Feb 13 20:39:03.897375 containerd[1441]: time="2025-02-13T20:39:03.897366108Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." 
type=io.containerd.runtime.v1 Feb 13 20:39:03.897520 containerd[1441]: time="2025-02-13T20:39:03.897490268Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Feb 13 20:39:03.897722 containerd[1441]: time="2025-02-13T20:39:03.897700628Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Feb 13 20:39:03.897832 containerd[1441]: time="2025-02-13T20:39:03.897807948Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2 Feb 13 20:39:03.897832 containerd[1441]: time="2025-02-13T20:39:03.897828588Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1 Feb 13 20:39:03.897885 containerd[1441]: time="2025-02-13T20:39:03.897841628Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1 Feb 13 20:39:03.897885 containerd[1441]: time="2025-02-13T20:39:03.897855588Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Feb 13 20:39:03.897885 containerd[1441]: time="2025-02-13T20:39:03.897867668Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Feb 13 20:39:03.897885 containerd[1441]: time="2025-02-13T20:39:03.897880188Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Feb 13 20:39:03.897948 containerd[1441]: time="2025-02-13T20:39:03.897893868Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Feb 13 20:39:03.897948 containerd[1441]: time="2025-02-13T20:39:03.897912508Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Feb 13 20:39:03.897948 containerd[1441]: time="2025-02-13T20:39:03.897925828Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Feb 13 20:39:03.897948 containerd[1441]: time="2025-02-13T20:39:03.897937748Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Feb 13 20:39:03.897948 containerd[1441]: time="2025-02-13T20:39:03.897948508Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Feb 13 20:39:03.898027 containerd[1441]: time="2025-02-13T20:39:03.897969268Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Feb 13 20:39:03.898027 containerd[1441]: time="2025-02-13T20:39:03.897982828Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Feb 13 20:39:03.898027 containerd[1441]: time="2025-02-13T20:39:03.897995548Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Feb 13 20:39:03.898027 containerd[1441]: time="2025-02-13T20:39:03.898006948Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Feb 13 20:39:03.898027 containerd[1441]: time="2025-02-13T20:39:03.898018948Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Feb 13 20:39:03.898109 containerd[1441]: time="2025-02-13T20:39:03.898035308Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." 
type=io.containerd.grpc.v1 Feb 13 20:39:03.898109 containerd[1441]: time="2025-02-13T20:39:03.898048468Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Feb 13 20:39:03.898109 containerd[1441]: time="2025-02-13T20:39:03.898060588Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Feb 13 20:39:03.898109 containerd[1441]: time="2025-02-13T20:39:03.898073748Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1 Feb 13 20:39:03.898109 containerd[1441]: time="2025-02-13T20:39:03.898091468Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1 Feb 13 20:39:03.898109 containerd[1441]: time="2025-02-13T20:39:03.898104788Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Feb 13 20:39:03.898202 containerd[1441]: time="2025-02-13T20:39:03.898116588Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1 Feb 13 20:39:03.898202 containerd[1441]: time="2025-02-13T20:39:03.898132548Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Feb 13 20:39:03.898202 containerd[1441]: time="2025-02-13T20:39:03.898151508Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1 Feb 13 20:39:03.898202 containerd[1441]: time="2025-02-13T20:39:03.898170268Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1 Feb 13 20:39:03.898202 containerd[1441]: time="2025-02-13T20:39:03.898182348Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Feb 13 20:39:03.898202 containerd[1441]: time="2025-02-13T20:39:03.898192588Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Feb 13 20:39:03.898855 containerd[1441]: time="2025-02-13T20:39:03.898829668Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Feb 13 20:39:03.898883 containerd[1441]: time="2025-02-13T20:39:03.898862708Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1 Feb 13 20:39:03.898883 containerd[1441]: time="2025-02-13T20:39:03.898874188Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Feb 13 20:39:03.898928 containerd[1441]: time="2025-02-13T20:39:03.898888508Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1 Feb 13 20:39:03.898928 containerd[1441]: time="2025-02-13T20:39:03.898899068Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Feb 13 20:39:03.898928 containerd[1441]: time="2025-02-13T20:39:03.898922628Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1 Feb 13 20:39:03.898983 containerd[1441]: time="2025-02-13T20:39:03.898932148Z" level=info msg="NRI interface is disabled by configuration." Feb 13 20:39:03.898983 containerd[1441]: time="2025-02-13T20:39:03.898944388Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." 
type=io.containerd.grpc.v1 Feb 13 20:39:03.899256 containerd[1441]: time="2025-02-13T20:39:03.899199028Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Feb 13 20:39:03.899256 containerd[1441]: time="2025-02-13T20:39:03.899258908Z" level=info msg="Connect containerd service" Feb 13 20:39:03.899417 containerd[1441]: time="2025-02-13T20:39:03.899290188Z" level=info msg="using legacy CRI server" Feb 13 20:39:03.899417 containerd[1441]: time="2025-02-13T20:39:03.899297028Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Feb 13 20:39:03.899417 containerd[1441]: time="2025-02-13T20:39:03.899395068Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Feb 13 20:39:03.900004 containerd[1441]: time="2025-02-13T20:39:03.899975428Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Feb 13 20:39:03.900444 
containerd[1441]: time="2025-02-13T20:39:03.900332468Z" level=info msg="Start subscribing containerd event" Feb 13 20:39:03.900444 containerd[1441]: time="2025-02-13T20:39:03.900399508Z" level=info msg="Start recovering state" Feb 13 20:39:03.900516 containerd[1441]: time="2025-02-13T20:39:03.900496668Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Feb 13 20:39:03.900739 containerd[1441]: time="2025-02-13T20:39:03.900539508Z" level=info msg=serving... address=/run/containerd/containerd.sock Feb 13 20:39:03.900871 containerd[1441]: time="2025-02-13T20:39:03.900850708Z" level=info msg="Start event monitor" Feb 13 20:39:03.900994 containerd[1441]: time="2025-02-13T20:39:03.900978948Z" level=info msg="Start snapshots syncer" Feb 13 20:39:03.901340 containerd[1441]: time="2025-02-13T20:39:03.901046028Z" level=info msg="Start cni network conf syncer for default" Feb 13 20:39:03.901340 containerd[1441]: time="2025-02-13T20:39:03.901060348Z" level=info msg="Start streaming server" Feb 13 20:39:03.901340 containerd[1441]: time="2025-02-13T20:39:03.901193988Z" level=info msg="containerd successfully booted in 0.036589s" Feb 13 20:39:03.901439 systemd[1]: Started containerd.service - containerd container runtime. Feb 13 20:39:04.041054 tar[1435]: linux-arm64/LICENSE Feb 13 20:39:04.041232 tar[1435]: linux-arm64/README.md Feb 13 20:39:04.057855 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Feb 13 20:39:04.060385 sshd_keygen[1442]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Feb 13 20:39:04.080366 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Feb 13 20:39:04.093586 systemd[1]: Starting issuegen.service - Generate /run/issue... Feb 13 20:39:04.098987 systemd[1]: issuegen.service: Deactivated successfully. Feb 13 20:39:04.100334 systemd[1]: Finished issuegen.service - Generate /run/issue. Feb 13 20:39:04.102990 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Feb 13 20:39:04.115414 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Feb 13 20:39:04.130677 systemd[1]: Started getty@tty1.service - Getty on tty1. Feb 13 20:39:04.132841 systemd[1]: Started serial-getty@ttyAMA0.service - Serial Getty on ttyAMA0. Feb 13 20:39:04.134098 systemd[1]: Reached target getty.target - Login Prompts. Feb 13 20:39:04.995436 systemd-networkd[1356]: eth0: Gained IPv6LL Feb 13 20:39:04.997889 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Feb 13 20:39:04.999755 systemd[1]: Reached target network-online.target - Network is Online. Feb 13 20:39:05.011582 systemd[1]: Starting coreos-metadata.service - QEMU metadata agent... Feb 13 20:39:05.014050 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Feb 13 20:39:05.016171 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Feb 13 20:39:05.031724 systemd[1]: coreos-metadata.service: Deactivated successfully. Feb 13 20:39:05.031914 systemd[1]: Finished coreos-metadata.service - QEMU metadata agent. Feb 13 20:39:05.033775 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Feb 13 20:39:05.035926 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Feb 13 20:39:05.494037 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Feb 13 20:39:05.495642 systemd[1]: Reached target multi-user.target - Multi-User System. 
Feb 13 20:39:05.497531 (kubelet)[1525]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Feb 13 20:39:05.497867 systemd[1]: Startup finished in 592ms (kernel) + 4.550s (initrd) + 3.711s (userspace) = 8.854s. Feb 13 20:39:05.925345 kubelet[1525]: E0213 20:39:05.925242 1525 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Feb 13 20:39:05.927889 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Feb 13 20:39:05.928031 systemd[1]: kubelet.service: Failed with result 'exit-code'. Feb 13 20:39:10.554921 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Feb 13 20:39:10.556012 systemd[1]: Started sshd@0-10.0.0.6:22-10.0.0.1:53008.service - OpenSSH per-connection server daemon (10.0.0.1:53008). Feb 13 20:39:10.609998 sshd[1538]: Accepted publickey for core from 10.0.0.1 port 53008 ssh2: RSA SHA256:ijJ+MGEzuaViQlbMobaPQLogVrdZNUxIS0COzLzMHAQ Feb 13 20:39:10.612090 sshd[1538]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 20:39:10.631072 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Feb 13 20:39:10.639552 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Feb 13 20:39:10.641441 systemd-logind[1423]: New session 1 of user core. Feb 13 20:39:10.648542 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Feb 13 20:39:10.650855 systemd[1]: Starting user@500.service - User Manager for UID 500... Feb 13 20:39:10.657187 (systemd)[1542]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Feb 13 20:39:10.734594 systemd[1542]: Queued start job for default target default.target. Feb 13 20:39:10.743240 systemd[1542]: Created slice app.slice - User Application Slice. Feb 13 20:39:10.743282 systemd[1542]: Reached target paths.target - Paths. Feb 13 20:39:10.743294 systemd[1542]: Reached target timers.target - Timers. Feb 13 20:39:10.744566 systemd[1542]: Starting dbus.socket - D-Bus User Message Bus Socket... Feb 13 20:39:10.754336 systemd[1542]: Listening on dbus.socket - D-Bus User Message Bus Socket. Feb 13 20:39:10.754400 systemd[1542]: Reached target sockets.target - Sockets. Feb 13 20:39:10.754411 systemd[1542]: Reached target basic.target - Basic System. Feb 13 20:39:10.754450 systemd[1542]: Reached target default.target - Main User Target. Feb 13 20:39:10.754476 systemd[1542]: Startup finished in 92ms. Feb 13 20:39:10.754730 systemd[1]: Started user@500.service - User Manager for UID 500. Feb 13 20:39:10.756110 systemd[1]: Started session-1.scope - Session 1 of User core. Feb 13 20:39:10.819011 systemd[1]: Started sshd@1-10.0.0.6:22-10.0.0.1:53014.service - OpenSSH per-connection server daemon (10.0.0.1:53014). Feb 13 20:39:10.855411 sshd[1553]: Accepted publickey for core from 10.0.0.1 port 53014 ssh2: RSA SHA256:ijJ+MGEzuaViQlbMobaPQLogVrdZNUxIS0COzLzMHAQ Feb 13 20:39:10.856635 sshd[1553]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 20:39:10.860478 systemd-logind[1423]: New session 2 of user core. Feb 13 20:39:10.871466 systemd[1]: Started session-2.scope - Session 2 of User core. 
Feb 13 20:39:10.923524 sshd[1553]: pam_unix(sshd:session): session closed for user core Feb 13 20:39:10.934993 systemd[1]: sshd@1-10.0.0.6:22-10.0.0.1:53014.service: Deactivated successfully. Feb 13 20:39:10.936564 systemd[1]: session-2.scope: Deactivated successfully. Feb 13 20:39:10.939498 systemd-logind[1423]: Session 2 logged out. Waiting for processes to exit. Feb 13 20:39:10.940656 systemd[1]: Started sshd@2-10.0.0.6:22-10.0.0.1:53030.service - OpenSSH per-connection server daemon (10.0.0.1:53030). Feb 13 20:39:10.942647 systemd-logind[1423]: Removed session 2. Feb 13 20:39:10.977230 sshd[1560]: Accepted publickey for core from 10.0.0.1 port 53030 ssh2: RSA SHA256:ijJ+MGEzuaViQlbMobaPQLogVrdZNUxIS0COzLzMHAQ Feb 13 20:39:10.978766 sshd[1560]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 20:39:10.982383 systemd-logind[1423]: New session 3 of user core. Feb 13 20:39:10.989474 systemd[1]: Started session-3.scope - Session 3 of User core. Feb 13 20:39:11.037974 sshd[1560]: pam_unix(sshd:session): session closed for user core Feb 13 20:39:11.044767 systemd[1]: sshd@2-10.0.0.6:22-10.0.0.1:53030.service: Deactivated successfully. Feb 13 20:39:11.048025 systemd[1]: session-3.scope: Deactivated successfully. Feb 13 20:39:11.049718 systemd-logind[1423]: Session 3 logged out. Waiting for processes to exit. Feb 13 20:39:11.051452 systemd[1]: Started sshd@3-10.0.0.6:22-10.0.0.1:53038.service - OpenSSH per-connection server daemon (10.0.0.1:53038). Feb 13 20:39:11.052425 systemd-logind[1423]: Removed session 3. Feb 13 20:39:11.088171 sshd[1567]: Accepted publickey for core from 10.0.0.1 port 53038 ssh2: RSA SHA256:ijJ+MGEzuaViQlbMobaPQLogVrdZNUxIS0COzLzMHAQ Feb 13 20:39:11.089816 sshd[1567]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 20:39:11.094443 systemd-logind[1423]: New session 4 of user core. Feb 13 20:39:11.099451 systemd[1]: Started session-4.scope - Session 4 of User core. Feb 13 20:39:11.151399 sshd[1567]: pam_unix(sshd:session): session closed for user core Feb 13 20:39:11.174595 systemd[1]: sshd@3-10.0.0.6:22-10.0.0.1:53038.service: Deactivated successfully. Feb 13 20:39:11.175901 systemd[1]: session-4.scope: Deactivated successfully. Feb 13 20:39:11.178460 systemd-logind[1423]: Session 4 logged out. Waiting for processes to exit. Feb 13 20:39:11.179523 systemd[1]: Started sshd@4-10.0.0.6:22-10.0.0.1:53044.service - OpenSSH per-connection server daemon (10.0.0.1:53044). Feb 13 20:39:11.180237 systemd-logind[1423]: Removed session 4. Feb 13 20:39:11.215225 sshd[1574]: Accepted publickey for core from 10.0.0.1 port 53044 ssh2: RSA SHA256:ijJ+MGEzuaViQlbMobaPQLogVrdZNUxIS0COzLzMHAQ Feb 13 20:39:11.216550 sshd[1574]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 20:39:11.220174 systemd-logind[1423]: New session 5 of user core. Feb 13 20:39:11.232499 systemd[1]: Started session-5.scope - Session 5 of User core. Feb 13 20:39:11.296161 sudo[1577]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Feb 13 20:39:11.298335 sudo[1577]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Feb 13 20:39:11.616549 systemd[1]: Starting docker.service - Docker Application Container Engine... 
Feb 13 20:39:11.616646 (dockerd)[1594]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Feb 13 20:39:11.911980 dockerd[1594]: time="2025-02-13T20:39:11.911861268Z" level=info msg="Starting up" Feb 13 20:39:12.047329 dockerd[1594]: time="2025-02-13T20:39:12.047278228Z" level=info msg="Loading containers: start." Feb 13 20:39:12.126339 kernel: Initializing XFRM netlink socket Feb 13 20:39:12.195320 systemd-networkd[1356]: docker0: Link UP Feb 13 20:39:12.211574 dockerd[1594]: time="2025-02-13T20:39:12.211526708Z" level=info msg="Loading containers: done." Feb 13 20:39:12.224505 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck1982762816-merged.mount: Deactivated successfully. Feb 13 20:39:12.225961 dockerd[1594]: time="2025-02-13T20:39:12.225913188Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Feb 13 20:39:12.226023 dockerd[1594]: time="2025-02-13T20:39:12.226010988Z" level=info msg="Docker daemon" commit=061aa95809be396a6b5542618d8a34b02a21ff77 containerd-snapshotter=false storage-driver=overlay2 version=26.1.0 Feb 13 20:39:12.226133 dockerd[1594]: time="2025-02-13T20:39:12.226105388Z" level=info msg="Daemon has completed initialization" Feb 13 20:39:12.251623 dockerd[1594]: time="2025-02-13T20:39:12.251488028Z" level=info msg="API listen on /run/docker.sock" Feb 13 20:39:12.251893 systemd[1]: Started docker.service - Docker Application Container Engine. Feb 13 20:39:13.043937 containerd[1441]: time="2025-02-13T20:39:13.043886428Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.31.6\"" Feb 13 20:39:13.594332 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount845602195.mount: Deactivated successfully. 
Feb 13 20:39:15.063083 containerd[1441]: time="2025-02-13T20:39:15.063011108Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.31.6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 20:39:15.063414 containerd[1441]: time="2025-02-13T20:39:15.063350548Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.31.6: active requests=0, bytes read=25620377" Feb 13 20:39:15.064298 containerd[1441]: time="2025-02-13T20:39:15.064256548Z" level=info msg="ImageCreate event name:\"sha256:873e20495ccf3b2111d7cfe509e724c7bdee53e5b192c926f15beb8e2a71fc8d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 20:39:15.067210 containerd[1441]: time="2025-02-13T20:39:15.067157268Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:be0a2d815793b0408d921a50b82759e654cf1bba718cac480498391926902905\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 20:39:15.068469 containerd[1441]: time="2025-02-13T20:39:15.068431148Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.31.6\" with image id \"sha256:873e20495ccf3b2111d7cfe509e724c7bdee53e5b192c926f15beb8e2a71fc8d\", repo tag \"registry.k8s.io/kube-apiserver:v1.31.6\", repo digest \"registry.k8s.io/kube-apiserver@sha256:be0a2d815793b0408d921a50b82759e654cf1bba718cac480498391926902905\", size \"25617175\" in 2.0245006s" Feb 13 20:39:15.068508 containerd[1441]: time="2025-02-13T20:39:15.068466708Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.31.6\" returns image reference \"sha256:873e20495ccf3b2111d7cfe509e724c7bdee53e5b192c926f15beb8e2a71fc8d\"" Feb 13 20:39:15.069163 containerd[1441]: time="2025-02-13T20:39:15.069126548Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.31.6\"" Feb 13 20:39:16.178463 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Feb 13 20:39:16.188488 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Feb 13 20:39:16.284598 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Feb 13 20:39:16.288251 (kubelet)[1805]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Feb 13 20:39:16.322731 kubelet[1805]: E0213 20:39:16.322647 1805 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Feb 13 20:39:16.325484 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Feb 13 20:39:16.325627 systemd[1]: kubelet.service: Failed with result 'exit-code'. 
Feb 13 20:39:17.333437 containerd[1441]: time="2025-02-13T20:39:17.333390468Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.31.6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 20:39:17.333901 containerd[1441]: time="2025-02-13T20:39:17.333871708Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.31.6: active requests=0, bytes read=22471775" Feb 13 20:39:17.334793 containerd[1441]: time="2025-02-13T20:39:17.334746988Z" level=info msg="ImageCreate event name:\"sha256:389ff6452ae41e3e5a43db694d848bf66adb834513164d04c90e8a52f7fb17e0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 20:39:17.337499 containerd[1441]: time="2025-02-13T20:39:17.337460748Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:63166e537a82855ac9b54ffa8b510429fe799ed9b062bf6b788b74e1d5995d12\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 20:39:17.339579 containerd[1441]: time="2025-02-13T20:39:17.339539828Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.31.6\" with image id \"sha256:389ff6452ae41e3e5a43db694d848bf66adb834513164d04c90e8a52f7fb17e0\", repo tag \"registry.k8s.io/kube-controller-manager:v1.31.6\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:63166e537a82855ac9b54ffa8b510429fe799ed9b062bf6b788b74e1d5995d12\", size \"23875502\" in 2.2703766s" Feb 13 20:39:17.339579 containerd[1441]: time="2025-02-13T20:39:17.339576508Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.31.6\" returns image reference \"sha256:389ff6452ae41e3e5a43db694d848bf66adb834513164d04c90e8a52f7fb17e0\"" Feb 13 20:39:17.340200 containerd[1441]: time="2025-02-13T20:39:17.340170828Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.31.6\"" Feb 13 20:39:19.113123 containerd[1441]: time="2025-02-13T20:39:19.113072348Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.31.6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 20:39:19.113685 containerd[1441]: time="2025-02-13T20:39:19.113648988Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.31.6: active requests=0, bytes read=17024542" Feb 13 20:39:19.114392 containerd[1441]: time="2025-02-13T20:39:19.114366028Z" level=info msg="ImageCreate event name:\"sha256:e0b799edb30ee638812cfdec1befcd2728c87f3344cb0c00121ba1284e6c9f19\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 20:39:19.117309 containerd[1441]: time="2025-02-13T20:39:19.117263628Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:8a64af33c57346355dc3cc6f9225dbe771da30e2f427e802ce2340ec3b5dd9b5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 20:39:19.118456 containerd[1441]: time="2025-02-13T20:39:19.118426228Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.31.6\" with image id \"sha256:e0b799edb30ee638812cfdec1befcd2728c87f3344cb0c00121ba1284e6c9f19\", repo tag \"registry.k8s.io/kube-scheduler:v1.31.6\", repo digest \"registry.k8s.io/kube-scheduler@sha256:8a64af33c57346355dc3cc6f9225dbe771da30e2f427e802ce2340ec3b5dd9b5\", size \"18428287\" in 1.77821584s" Feb 13 20:39:19.118499 containerd[1441]: time="2025-02-13T20:39:19.118459068Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.31.6\" returns image reference \"sha256:e0b799edb30ee638812cfdec1befcd2728c87f3344cb0c00121ba1284e6c9f19\"" Feb 13 20:39:19.119053 
containerd[1441]: time="2025-02-13T20:39:19.118897708Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.6\"" Feb 13 20:39:20.106351 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1527744995.mount: Deactivated successfully. Feb 13 20:39:20.816759 containerd[1441]: time="2025-02-13T20:39:20.816576148Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.31.6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 20:39:20.817537 containerd[1441]: time="2025-02-13T20:39:20.817333628Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.31.6: active requests=0, bytes read=26769258" Feb 13 20:39:20.818329 containerd[1441]: time="2025-02-13T20:39:20.818256348Z" level=info msg="ImageCreate event name:\"sha256:dc056e81c1f77e8e42df4198221b86ec1562514cb649244b847d9dc91c52b534\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 20:39:20.820218 containerd[1441]: time="2025-02-13T20:39:20.820163428Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:e72a4bc769f10b56ffdfe2cdb21d84d49d9bc194b3658648207998a5bd924b72\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 20:39:20.821321 containerd[1441]: time="2025-02-13T20:39:20.821226788Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.31.6\" with image id \"sha256:dc056e81c1f77e8e42df4198221b86ec1562514cb649244b847d9dc91c52b534\", repo tag \"registry.k8s.io/kube-proxy:v1.31.6\", repo digest \"registry.k8s.io/kube-proxy@sha256:e72a4bc769f10b56ffdfe2cdb21d84d49d9bc194b3658648207998a5bd924b72\", size \"26768275\" in 1.70229696s" Feb 13 20:39:20.821321 containerd[1441]: time="2025-02-13T20:39:20.821258908Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.6\" returns image reference \"sha256:dc056e81c1f77e8e42df4198221b86ec1562514cb649244b847d9dc91c52b534\"" Feb 13 20:39:20.821889 containerd[1441]: time="2025-02-13T20:39:20.821730828Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\"" Feb 13 20:39:21.313958 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3950992092.mount: Deactivated successfully. 
Feb 13 20:39:22.043571 containerd[1441]: time="2025-02-13T20:39:22.043511668Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 20:39:22.047343 containerd[1441]: time="2025-02-13T20:39:22.047293348Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.1: active requests=0, bytes read=16485383" Feb 13 20:39:22.049997 containerd[1441]: time="2025-02-13T20:39:22.049955068Z" level=info msg="ImageCreate event name:\"sha256:2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 20:39:22.054892 containerd[1441]: time="2025-02-13T20:39:22.054854108Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 20:39:22.056101 containerd[1441]: time="2025-02-13T20:39:22.056051508Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.1\" with image id \"sha256:2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.1\", repo digest \"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\", size \"16482581\" in 1.23429s" Feb 13 20:39:22.056101 containerd[1441]: time="2025-02-13T20:39:22.056090988Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\" returns image reference \"sha256:2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93\"" Feb 13 20:39:22.056616 containerd[1441]: time="2025-02-13T20:39:22.056584268Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" Feb 13 20:39:22.493899 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3749619291.mount: Deactivated successfully. 
Feb 13 20:39:22.497736 containerd[1441]: time="2025-02-13T20:39:22.496909708Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 20:39:22.497736 containerd[1441]: time="2025-02-13T20:39:22.497471388Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=268705" Feb 13 20:39:22.498189 containerd[1441]: time="2025-02-13T20:39:22.498160308Z" level=info msg="ImageCreate event name:\"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 20:39:22.500182 containerd[1441]: time="2025-02-13T20:39:22.500153788Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 20:39:22.500836 containerd[1441]: time="2025-02-13T20:39:22.500807428Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"267933\" in 444.19012ms" Feb 13 20:39:22.500902 containerd[1441]: time="2025-02-13T20:39:22.500838628Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\"" Feb 13 20:39:22.501838 containerd[1441]: time="2025-02-13T20:39:22.501813628Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.15-0\"" Feb 13 20:39:23.134964 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount992582078.mount: Deactivated successfully. Feb 13 20:39:26.267915 containerd[1441]: time="2025-02-13T20:39:26.267868108Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.15-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 20:39:26.271291 containerd[1441]: time="2025-02-13T20:39:26.271248668Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.15-0: active requests=0, bytes read=66406427" Feb 13 20:39:26.271994 containerd[1441]: time="2025-02-13T20:39:26.271962708Z" level=info msg="ImageCreate event name:\"sha256:27e3830e1402783674d8b594038967deea9d51f0d91b34c93c8f39d2f68af7da\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 20:39:26.275597 containerd[1441]: time="2025-02-13T20:39:26.275554628Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 20:39:26.277156 containerd[1441]: time="2025-02-13T20:39:26.277117028Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.15-0\" with image id \"sha256:27e3830e1402783674d8b594038967deea9d51f0d91b34c93c8f39d2f68af7da\", repo tag \"registry.k8s.io/etcd:3.5.15-0\", repo digest \"registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a\", size \"66535646\" in 3.77514832s" Feb 13 20:39:26.277249 containerd[1441]: time="2025-02-13T20:39:26.277231268Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.15-0\" returns image reference \"sha256:27e3830e1402783674d8b594038967deea9d51f0d91b34c93c8f39d2f68af7da\"" Feb 13 20:39:26.575933 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. 
Feb 13 20:39:26.587561 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Feb 13 20:39:26.674058 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Feb 13 20:39:26.677106 (kubelet)[1959]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Feb 13 20:39:26.715224 kubelet[1959]: E0213 20:39:26.715177 1959 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Feb 13 20:39:26.717796 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Feb 13 20:39:26.717933 systemd[1]: kubelet.service: Failed with result 'exit-code'. Feb 13 20:39:30.423850 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Feb 13 20:39:30.435610 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Feb 13 20:39:30.454525 systemd[1]: Reloading requested from client PID 1978 ('systemctl') (unit session-5.scope)... Feb 13 20:39:30.454541 systemd[1]: Reloading... Feb 13 20:39:30.521379 zram_generator::config[2016]: No configuration found. Feb 13 20:39:30.606132 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Feb 13 20:39:30.658627 systemd[1]: Reloading finished in 203 ms. Feb 13 20:39:30.696877 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Feb 13 20:39:30.699427 systemd[1]: kubelet.service: Deactivated successfully. Feb 13 20:39:30.699608 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Feb 13 20:39:30.700976 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Feb 13 20:39:30.787172 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Feb 13 20:39:30.790647 (kubelet)[2065]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Feb 13 20:39:30.825162 kubelet[2065]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Feb 13 20:39:30.825162 kubelet[2065]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Feb 13 20:39:30.825162 kubelet[2065]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Feb 13 20:39:30.825497 kubelet[2065]: I0213 20:39:30.825212 2065 server.go:206] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Feb 13 20:39:32.015190 kubelet[2065]: I0213 20:39:32.015143 2065 server.go:486] "Kubelet version" kubeletVersion="v1.31.0" Feb 13 20:39:32.015190 kubelet[2065]: I0213 20:39:32.015178 2065 server.go:488] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Feb 13 20:39:32.015554 kubelet[2065]: I0213 20:39:32.015450 2065 server.go:929] "Client rotation is on, will bootstrap in background" Feb 13 20:39:32.056714 kubelet[2065]: E0213 20:39:32.056674 2065 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://10.0.0.6:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.6:6443: connect: connection refused" logger="UnhandledError" Feb 13 20:39:32.058664 kubelet[2065]: I0213 20:39:32.058622 2065 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Feb 13 20:39:32.065581 kubelet[2065]: E0213 20:39:32.065547 2065 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Feb 13 20:39:32.065581 kubelet[2065]: I0213 20:39:32.065579 2065 server.go:1403] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Feb 13 20:39:32.070856 kubelet[2065]: I0213 20:39:32.070818 2065 server.go:744] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Feb 13 20:39:32.071719 kubelet[2065]: I0213 20:39:32.071691 2065 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority" Feb 13 20:39:32.071878 kubelet[2065]: I0213 20:39:32.071837 2065 container_manager_linux.go:264] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Feb 13 20:39:32.072047 kubelet[2065]: I0213 20:39:32.071872 2065 container_manager_linux.go:269] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Feb 13 20:39:32.072178 kubelet[2065]: I0213 20:39:32.072168 2065 topology_manager.go:138] "Creating topology manager with none policy" Feb 13 20:39:32.072203 kubelet[2065]: I0213 20:39:32.072179 2065 container_manager_linux.go:300] "Creating device plugin manager" Feb 13 20:39:32.072389 kubelet[2065]: I0213 20:39:32.072377 2065 state_mem.go:36] "Initialized new in-memory state store" Feb 13 20:39:32.075777 kubelet[2065]: I0213 20:39:32.075742 2065 kubelet.go:408] "Attempting to sync node with API server" Feb 13 20:39:32.075777 kubelet[2065]: I0213 20:39:32.075775 2065 kubelet.go:303] "Adding static pod path" path="/etc/kubernetes/manifests" Feb 13 20:39:32.076490 kubelet[2065]: I0213 20:39:32.075860 2065 kubelet.go:314] "Adding apiserver pod source" Feb 13 20:39:32.076490 kubelet[2065]: I0213 20:39:32.075888 2065 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Feb 13 20:39:32.077966 kubelet[2065]: W0213 20:39:32.077831 2065 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.6:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.0.0.6:6443: connect: connection refused Feb 13 20:39:32.077966 kubelet[2065]: E0213 20:39:32.077906 2065 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get 
\"https://10.0.0.6:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.6:6443: connect: connection refused" logger="UnhandledError" Feb 13 20:39:32.077966 kubelet[2065]: I0213 20:39:32.077859 2065 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Feb 13 20:39:32.078140 kubelet[2065]: W0213 20:39:32.077993 2065 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.6:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.6:6443: connect: connection refused Feb 13 20:39:32.078140 kubelet[2065]: E0213 20:39:32.078037 2065 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.0.0.6:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.6:6443: connect: connection refused" logger="UnhandledError" Feb 13 20:39:32.079913 kubelet[2065]: I0213 20:39:32.079888 2065 kubelet.go:837] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Feb 13 20:39:32.080757 kubelet[2065]: W0213 20:39:32.080729 2065 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Feb 13 20:39:32.081587 kubelet[2065]: I0213 20:39:32.081560 2065 server.go:1269] "Started kubelet" Feb 13 20:39:32.081836 kubelet[2065]: I0213 20:39:32.081803 2065 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Feb 13 20:39:32.084041 kubelet[2065]: I0213 20:39:32.083796 2065 server.go:460] "Adding debug handlers to kubelet server" Feb 13 20:39:32.087040 kubelet[2065]: I0213 20:39:32.084543 2065 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Feb 13 20:39:32.087040 kubelet[2065]: I0213 20:39:32.085484 2065 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Feb 13 20:39:32.087040 kubelet[2065]: I0213 20:39:32.086753 2065 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Feb 13 20:39:32.087040 kubelet[2065]: I0213 20:39:32.086945 2065 volume_manager.go:289] "Starting Kubelet Volume Manager" Feb 13 20:39:32.087040 kubelet[2065]: I0213 20:39:32.086960 2065 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Feb 13 20:39:32.087040 kubelet[2065]: I0213 20:39:32.087029 2065 desired_state_of_world_populator.go:146] "Desired state populator starts to run" Feb 13 20:39:32.087205 kubelet[2065]: I0213 20:39:32.087093 2065 reconciler.go:26] "Reconciler: start to sync state" Feb 13 20:39:32.088028 kubelet[2065]: E0213 20:39:32.087408 2065 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.6:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.6:6443: connect: connection refused" interval="200ms" Feb 13 20:39:32.088028 kubelet[2065]: W0213 20:39:32.087410 2065 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.6:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.6:6443: connect: connection refused Feb 13 20:39:32.088028 kubelet[2065]: E0213 20:39:32.087459 2065 
reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.0.0.6:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.6:6443: connect: connection refused" logger="UnhandledError" Feb 13 20:39:32.088028 kubelet[2065]: E0213 20:39:32.087589 2065 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Feb 13 20:39:32.088192 kubelet[2065]: I0213 20:39:32.088029 2065 factory.go:221] Registration of the systemd container factory successfully Feb 13 20:39:32.088192 kubelet[2065]: I0213 20:39:32.088143 2065 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Feb 13 20:39:32.089063 kubelet[2065]: E0213 20:39:32.089041 2065 kubelet.go:1478] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Feb 13 20:39:32.089496 kubelet[2065]: I0213 20:39:32.089475 2065 factory.go:221] Registration of the containerd container factory successfully Feb 13 20:39:32.090006 kubelet[2065]: E0213 20:39:32.088505 2065 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.6:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.6:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.1823df17421c42bc default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-02-13 20:39:32.081533628 +0000 UTC m=+1.287886841,LastTimestamp:2025-02-13 20:39:32.081533628 +0000 UTC m=+1.287886841,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Feb 13 20:39:32.098671 kubelet[2065]: I0213 20:39:32.098610 2065 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Feb 13 20:39:32.100267 kubelet[2065]: I0213 20:39:32.100244 2065 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Feb 13 20:39:32.100267 kubelet[2065]: I0213 20:39:32.100269 2065 status_manager.go:217] "Starting to sync pod status with apiserver" Feb 13 20:39:32.100410 kubelet[2065]: I0213 20:39:32.100285 2065 kubelet.go:2321] "Starting kubelet main sync loop" Feb 13 20:39:32.100410 kubelet[2065]: E0213 20:39:32.100353 2065 kubelet.go:2345] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Feb 13 20:39:32.100758 kubelet[2065]: W0213 20:39:32.100724 2065 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.6:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.6:6443: connect: connection refused Feb 13 20:39:32.100821 kubelet[2065]: E0213 20:39:32.100766 2065 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.0.0.6:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.6:6443: connect: connection refused" logger="UnhandledError" Feb 13 20:39:32.101756 kubelet[2065]: I0213 20:39:32.101725 2065 cpu_manager.go:214] "Starting CPU manager" policy="none" Feb 13 20:39:32.101756 kubelet[2065]: I0213 20:39:32.101743 2065 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Feb 13 20:39:32.101756 kubelet[2065]: I0213 20:39:32.101759 2065 state_mem.go:36] "Initialized new in-memory state store" Feb 13 20:39:32.187884 kubelet[2065]: E0213 20:39:32.187841 2065 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Feb 13 20:39:32.201097 kubelet[2065]: E0213 20:39:32.201075 2065 kubelet.go:2345] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Feb 13 20:39:32.238066 kubelet[2065]: I0213 20:39:32.238037 2065 policy_none.go:49] "None policy: Start" Feb 13 20:39:32.238849 kubelet[2065]: I0213 20:39:32.238784 2065 memory_manager.go:170] "Starting memorymanager" policy="None" Feb 13 20:39:32.238849 kubelet[2065]: I0213 20:39:32.238841 2065 state_mem.go:35] "Initializing new in-memory state store" Feb 13 20:39:32.244683 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Feb 13 20:39:32.259520 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Feb 13 20:39:32.262132 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. 
Feb 13 20:39:32.276991 kubelet[2065]: I0213 20:39:32.276912 2065 manager.go:510] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Feb 13 20:39:32.277217 kubelet[2065]: I0213 20:39:32.277081 2065 eviction_manager.go:189] "Eviction manager: starting control loop" Feb 13 20:39:32.277217 kubelet[2065]: I0213 20:39:32.277118 2065 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Feb 13 20:39:32.277976 kubelet[2065]: I0213 20:39:32.277328 2065 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Feb 13 20:39:32.279051 kubelet[2065]: E0213 20:39:32.279031 2065 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Feb 13 20:39:32.288490 kubelet[2065]: E0213 20:39:32.288461 2065 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.6:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.6:6443: connect: connection refused" interval="400ms" Feb 13 20:39:32.378813 kubelet[2065]: I0213 20:39:32.378784 2065 kubelet_node_status.go:72] "Attempting to register node" node="localhost" Feb 13 20:39:32.379180 kubelet[2065]: E0213 20:39:32.379140 2065 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.0.0.6:6443/api/v1/nodes\": dial tcp 10.0.0.6:6443: connect: connection refused" node="localhost" Feb 13 20:39:32.409744 systemd[1]: Created slice kubepods-burstable-pod04cca2c455deeb5da380812dcab224d8.slice - libcontainer container kubepods-burstable-pod04cca2c455deeb5da380812dcab224d8.slice. Feb 13 20:39:32.437082 systemd[1]: Created slice kubepods-burstable-podabdde6388dc30f20066225ecf995099e.slice - libcontainer container kubepods-burstable-podabdde6388dc30f20066225ecf995099e.slice. Feb 13 20:39:32.450648 systemd[1]: Created slice kubepods-burstable-pod98eb2295280bc6da80e83f7636be329c.slice - libcontainer container kubepods-burstable-pod98eb2295280bc6da80e83f7636be329c.slice. 
Feb 13 20:39:32.489110 kubelet[2065]: I0213 20:39:32.489048 2065 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/04cca2c455deeb5da380812dcab224d8-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"04cca2c455deeb5da380812dcab224d8\") " pod="kube-system/kube-scheduler-localhost" Feb 13 20:39:32.489110 kubelet[2065]: I0213 20:39:32.489087 2065 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/abdde6388dc30f20066225ecf995099e-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"abdde6388dc30f20066225ecf995099e\") " pod="kube-system/kube-apiserver-localhost" Feb 13 20:39:32.489110 kubelet[2065]: I0213 20:39:32.489108 2065 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/abdde6388dc30f20066225ecf995099e-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"abdde6388dc30f20066225ecf995099e\") " pod="kube-system/kube-apiserver-localhost" Feb 13 20:39:32.489230 kubelet[2065]: I0213 20:39:32.489125 2065 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/98eb2295280bc6da80e83f7636be329c-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"98eb2295280bc6da80e83f7636be329c\") " pod="kube-system/kube-controller-manager-localhost" Feb 13 20:39:32.489230 kubelet[2065]: I0213 20:39:32.489143 2065 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/abdde6388dc30f20066225ecf995099e-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"abdde6388dc30f20066225ecf995099e\") " pod="kube-system/kube-apiserver-localhost" Feb 13 20:39:32.489230 kubelet[2065]: I0213 20:39:32.489158 2065 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/98eb2295280bc6da80e83f7636be329c-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"98eb2295280bc6da80e83f7636be329c\") " pod="kube-system/kube-controller-manager-localhost" Feb 13 20:39:32.489230 kubelet[2065]: I0213 20:39:32.489182 2065 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/98eb2295280bc6da80e83f7636be329c-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"98eb2295280bc6da80e83f7636be329c\") " pod="kube-system/kube-controller-manager-localhost" Feb 13 20:39:32.489230 kubelet[2065]: I0213 20:39:32.489197 2065 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/98eb2295280bc6da80e83f7636be329c-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"98eb2295280bc6da80e83f7636be329c\") " pod="kube-system/kube-controller-manager-localhost" Feb 13 20:39:32.489353 kubelet[2065]: I0213 20:39:32.489211 2065 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/98eb2295280bc6da80e83f7636be329c-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"98eb2295280bc6da80e83f7636be329c\") " 
pod="kube-system/kube-controller-manager-localhost" Feb 13 20:39:32.580249 kubelet[2065]: I0213 20:39:32.580172 2065 kubelet_node_status.go:72] "Attempting to register node" node="localhost" Feb 13 20:39:32.580721 kubelet[2065]: E0213 20:39:32.580512 2065 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.0.0.6:6443/api/v1/nodes\": dial tcp 10.0.0.6:6443: connect: connection refused" node="localhost" Feb 13 20:39:32.689628 kubelet[2065]: E0213 20:39:32.689579 2065 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.6:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.6:6443: connect: connection refused" interval="800ms" Feb 13 20:39:32.735232 kubelet[2065]: E0213 20:39:32.735139 2065 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 20:39:32.735877 containerd[1441]: time="2025-02-13T20:39:32.735749508Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:04cca2c455deeb5da380812dcab224d8,Namespace:kube-system,Attempt:0,}" Feb 13 20:39:32.739246 kubelet[2065]: E0213 20:39:32.739204 2065 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 20:39:32.739603 containerd[1441]: time="2025-02-13T20:39:32.739568268Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:abdde6388dc30f20066225ecf995099e,Namespace:kube-system,Attempt:0,}" Feb 13 20:39:32.753100 kubelet[2065]: E0213 20:39:32.753070 2065 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 20:39:32.753482 containerd[1441]: time="2025-02-13T20:39:32.753450068Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:98eb2295280bc6da80e83f7636be329c,Namespace:kube-system,Attempt:0,}" Feb 13 20:39:32.980009 kubelet[2065]: W0213 20:39:32.979874 2065 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.6:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.0.0.6:6443: connect: connection refused Feb 13 20:39:32.980009 kubelet[2065]: E0213 20:39:32.979949 2065 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.0.0.6:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.6:6443: connect: connection refused" logger="UnhandledError" Feb 13 20:39:32.982044 kubelet[2065]: I0213 20:39:32.982014 2065 kubelet_node_status.go:72] "Attempting to register node" node="localhost" Feb 13 20:39:32.982331 kubelet[2065]: E0213 20:39:32.982283 2065 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.0.0.6:6443/api/v1/nodes\": dial tcp 10.0.0.6:6443: connect: connection refused" node="localhost" Feb 13 20:39:33.063495 kubelet[2065]: W0213 20:39:33.063433 2065 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.6:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 
10.0.0.6:6443: connect: connection refused Feb 13 20:39:33.063495 kubelet[2065]: E0213 20:39:33.063502 2065 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.0.0.6:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.6:6443: connect: connection refused" logger="UnhandledError" Feb 13 20:39:33.077162 kubelet[2065]: W0213 20:39:33.077083 2065 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.6:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.6:6443: connect: connection refused Feb 13 20:39:33.077162 kubelet[2065]: E0213 20:39:33.077126 2065 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.0.0.6:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.6:6443: connect: connection refused" logger="UnhandledError" Feb 13 20:39:33.202683 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount647040553.mount: Deactivated successfully. Feb 13 20:39:33.207973 containerd[1441]: time="2025-02-13T20:39:33.207916548Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Feb 13 20:39:33.209907 containerd[1441]: time="2025-02-13T20:39:33.209870868Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Feb 13 20:39:33.211246 containerd[1441]: time="2025-02-13T20:39:33.211211668Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=269175" Feb 13 20:39:33.211856 containerd[1441]: time="2025-02-13T20:39:33.211826748Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Feb 13 20:39:33.213046 containerd[1441]: time="2025-02-13T20:39:33.213011508Z" level=info msg="ImageCreate event name:\"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Feb 13 20:39:33.214111 containerd[1441]: time="2025-02-13T20:39:33.214079628Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Feb 13 20:39:33.214111 containerd[1441]: time="2025-02-13T20:39:33.214098708Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Feb 13 20:39:33.216355 containerd[1441]: time="2025-02-13T20:39:33.216279788Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Feb 13 20:39:33.218665 containerd[1441]: time="2025-02-13T20:39:33.218635268Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest 
\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 482.80204ms" Feb 13 20:39:33.221271 containerd[1441]: time="2025-02-13T20:39:33.221226628Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 467.72476ms" Feb 13 20:39:33.221743 containerd[1441]: time="2025-02-13T20:39:33.221705308Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 482.0762ms" Feb 13 20:39:33.403730 containerd[1441]: time="2025-02-13T20:39:33.403199668Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 20:39:33.403730 containerd[1441]: time="2025-02-13T20:39:33.403264388Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 20:39:33.403730 containerd[1441]: time="2025-02-13T20:39:33.403277308Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 20:39:33.403730 containerd[1441]: time="2025-02-13T20:39:33.403391188Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 20:39:33.404228 containerd[1441]: time="2025-02-13T20:39:33.404013428Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 20:39:33.404228 containerd[1441]: time="2025-02-13T20:39:33.404061788Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 20:39:33.404228 containerd[1441]: time="2025-02-13T20:39:33.404084748Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 20:39:33.404228 containerd[1441]: time="2025-02-13T20:39:33.403800508Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 20:39:33.404228 containerd[1441]: time="2025-02-13T20:39:33.403862428Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 20:39:33.404228 containerd[1441]: time="2025-02-13T20:39:33.403878948Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 20:39:33.404228 containerd[1441]: time="2025-02-13T20:39:33.403955868Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 20:39:33.404458 containerd[1441]: time="2025-02-13T20:39:33.404169028Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 20:39:33.421496 systemd[1]: Started cri-containerd-2898e0a002979a8cdf937eae227d9de5da45dc2f05e7bf46185f2057e5295bec.scope - libcontainer container 2898e0a002979a8cdf937eae227d9de5da45dc2f05e7bf46185f2057e5295bec. Feb 13 20:39:33.426553 systemd[1]: Started cri-containerd-733c6fb91543a55c8552e6c6970f049d37f951a0e5ec0de5233c272340d013c0.scope - libcontainer container 733c6fb91543a55c8552e6c6970f049d37f951a0e5ec0de5233c272340d013c0. Feb 13 20:39:33.427926 systemd[1]: Started cri-containerd-9909786cedca48deda9a13223ca46f669254458f5c071033fca8e4e3abef9623.scope - libcontainer container 9909786cedca48deda9a13223ca46f669254458f5c071033fca8e4e3abef9623. Feb 13 20:39:33.455620 containerd[1441]: time="2025-02-13T20:39:33.455423748Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:98eb2295280bc6da80e83f7636be329c,Namespace:kube-system,Attempt:0,} returns sandbox id \"733c6fb91543a55c8552e6c6970f049d37f951a0e5ec0de5233c272340d013c0\"" Feb 13 20:39:33.456629 kubelet[2065]: E0213 20:39:33.456605 2065 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 20:39:33.458401 containerd[1441]: time="2025-02-13T20:39:33.458362428Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:04cca2c455deeb5da380812dcab224d8,Namespace:kube-system,Attempt:0,} returns sandbox id \"2898e0a002979a8cdf937eae227d9de5da45dc2f05e7bf46185f2057e5295bec\"" Feb 13 20:39:33.459092 kubelet[2065]: E0213 20:39:33.459072 2065 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 20:39:33.461404 containerd[1441]: time="2025-02-13T20:39:33.461266388Z" level=info msg="CreateContainer within sandbox \"733c6fb91543a55c8552e6c6970f049d37f951a0e5ec0de5233c272340d013c0\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Feb 13 20:39:33.461531 containerd[1441]: time="2025-02-13T20:39:33.461382268Z" level=info msg="CreateContainer within sandbox \"2898e0a002979a8cdf937eae227d9de5da45dc2f05e7bf46185f2057e5295bec\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Feb 13 20:39:33.464103 containerd[1441]: time="2025-02-13T20:39:33.464049148Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:abdde6388dc30f20066225ecf995099e,Namespace:kube-system,Attempt:0,} returns sandbox id \"9909786cedca48deda9a13223ca46f669254458f5c071033fca8e4e3abef9623\"" Feb 13 20:39:33.465447 kubelet[2065]: E0213 20:39:33.465116 2065 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 20:39:33.466999 containerd[1441]: time="2025-02-13T20:39:33.466968748Z" level=info msg="CreateContainer within sandbox \"9909786cedca48deda9a13223ca46f669254458f5c071033fca8e4e3abef9623\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Feb 13 20:39:33.480192 containerd[1441]: time="2025-02-13T20:39:33.480140468Z" level=info msg="CreateContainer within sandbox \"733c6fb91543a55c8552e6c6970f049d37f951a0e5ec0de5233c272340d013c0\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"7f5522d6a3a07cd5f2f8de713b779197cd06cd8c020ba3b75390c03644c702b5\"" Feb 
13 20:39:33.481031 containerd[1441]: time="2025-02-13T20:39:33.480854708Z" level=info msg="StartContainer for \"7f5522d6a3a07cd5f2f8de713b779197cd06cd8c020ba3b75390c03644c702b5\"" Feb 13 20:39:33.481275 kubelet[2065]: W0213 20:39:33.481221 2065 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.6:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.6:6443: connect: connection refused Feb 13 20:39:33.481368 kubelet[2065]: E0213 20:39:33.481296 2065 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.0.0.6:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.6:6443: connect: connection refused" logger="UnhandledError" Feb 13 20:39:33.482780 containerd[1441]: time="2025-02-13T20:39:33.482742228Z" level=info msg="CreateContainer within sandbox \"2898e0a002979a8cdf937eae227d9de5da45dc2f05e7bf46185f2057e5295bec\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"63fb31c27825e6e7dae2fa346d057c36fb51f8f3741f228c11686354c0a6ce48\"" Feb 13 20:39:33.483146 containerd[1441]: time="2025-02-13T20:39:33.483122828Z" level=info msg="StartContainer for \"63fb31c27825e6e7dae2fa346d057c36fb51f8f3741f228c11686354c0a6ce48\"" Feb 13 20:39:33.485378 containerd[1441]: time="2025-02-13T20:39:33.485314908Z" level=info msg="CreateContainer within sandbox \"9909786cedca48deda9a13223ca46f669254458f5c071033fca8e4e3abef9623\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"dcd6cf1f204e92f0bb8b42c7c5170daa8cddbafccac07c9f2fa9bf271c12f388\"" Feb 13 20:39:33.486383 containerd[1441]: time="2025-02-13T20:39:33.485670148Z" level=info msg="StartContainer for \"dcd6cf1f204e92f0bb8b42c7c5170daa8cddbafccac07c9f2fa9bf271c12f388\"" Feb 13 20:39:33.490386 kubelet[2065]: E0213 20:39:33.490302 2065 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.6:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.6:6443: connect: connection refused" interval="1.6s" Feb 13 20:39:33.513493 systemd[1]: Started cri-containerd-7f5522d6a3a07cd5f2f8de713b779197cd06cd8c020ba3b75390c03644c702b5.scope - libcontainer container 7f5522d6a3a07cd5f2f8de713b779197cd06cd8c020ba3b75390c03644c702b5. Feb 13 20:39:33.517461 systemd[1]: Started cri-containerd-63fb31c27825e6e7dae2fa346d057c36fb51f8f3741f228c11686354c0a6ce48.scope - libcontainer container 63fb31c27825e6e7dae2fa346d057c36fb51f8f3741f228c11686354c0a6ce48. Feb 13 20:39:33.519106 systemd[1]: Started cri-containerd-dcd6cf1f204e92f0bb8b42c7c5170daa8cddbafccac07c9f2fa9bf271c12f388.scope - libcontainer container dcd6cf1f204e92f0bb8b42c7c5170daa8cddbafccac07c9f2fa9bf271c12f388. 
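The lease failures above show kubelet's standard retry pattern while the API server is unreachable: the reported retry interval doubles from 800ms to 1.6s. A minimal sketch of that exponential back-off pattern in Python, assuming an 800ms initial delay and a 7s cap (illustrative values consistent with this log, not read from kubelet's source):

import time

def retry_with_backoff(op, initial=0.8, factor=2.0, cap=7.0, attempts=8):
    # Retry op(), doubling the wait after each failure, as the lease
    # controller does above (interval=800ms, then 1.6s). The initial
    # delay and cap are assumptions for illustration.
    delay = initial
    for _ in range(attempts):
        try:
            return op()
        except ConnectionError:
            time.sleep(delay)
            delay = min(delay * factor, cap)
    raise RuntimeError(f"gave up after {attempts} attempts")

Once the API server container started above begins listening on 10.0.0.6:6443, the next attempt succeeds and the loop exits.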
Feb 13 20:39:33.612743 containerd[1441]: time="2025-02-13T20:39:33.612677148Z" level=info msg="StartContainer for \"7f5522d6a3a07cd5f2f8de713b779197cd06cd8c020ba3b75390c03644c702b5\" returns successfully" Feb 13 20:39:33.613046 containerd[1441]: time="2025-02-13T20:39:33.612707188Z" level=info msg="StartContainer for \"63fb31c27825e6e7dae2fa346d057c36fb51f8f3741f228c11686354c0a6ce48\" returns successfully" Feb 13 20:39:33.613046 containerd[1441]: time="2025-02-13T20:39:33.612787868Z" level=info msg="StartContainer for \"dcd6cf1f204e92f0bb8b42c7c5170daa8cddbafccac07c9f2fa9bf271c12f388\" returns successfully" Feb 13 20:39:33.711187 kubelet[2065]: E0213 20:39:33.711013 2065 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.6:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.6:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.1823df17421c42bc default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-02-13 20:39:32.081533628 +0000 UTC m=+1.287886841,LastTimestamp:2025-02-13 20:39:32.081533628 +0000 UTC m=+1.287886841,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Feb 13 20:39:33.784907 kubelet[2065]: I0213 20:39:33.784638 2065 kubelet_node_status.go:72] "Attempting to register node" node="localhost" Feb 13 20:39:33.785153 kubelet[2065]: E0213 20:39:33.785114 2065 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.0.0.6:6443/api/v1/nodes\": dial tcp 10.0.0.6:6443: connect: connection refused" node="localhost" Feb 13 20:39:34.108446 kubelet[2065]: E0213 20:39:34.108342 2065 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 20:39:34.110722 kubelet[2065]: E0213 20:39:34.110704 2065 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 20:39:34.112998 kubelet[2065]: E0213 20:39:34.112980 2065 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 20:39:35.117797 kubelet[2065]: E0213 20:39:35.117762 2065 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 20:39:35.152915 kubelet[2065]: E0213 20:39:35.152882 2065 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"localhost\" not found" node="localhost" Feb 13 20:39:35.387945 kubelet[2065]: I0213 20:39:35.387008 2065 kubelet_node_status.go:72] "Attempting to register node" node="localhost" Feb 13 20:39:35.398905 kubelet[2065]: I0213 20:39:35.398861 2065 kubelet_node_status.go:75] "Successfully registered node" node="localhost" Feb 13 20:39:35.398905 kubelet[2065]: E0213 20:39:35.398902 2065 kubelet_node_status.go:535] "Error updating node status, will retry" err="error getting node \"localhost\": node \"localhost\" not found" Feb 13 20:39:35.407958 kubelet[2065]: 
E0213 20:39:35.407924 2065 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Feb 13 20:39:35.508290 kubelet[2065]: E0213 20:39:35.508226 2065 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Feb 13 20:39:35.608625 kubelet[2065]: E0213 20:39:35.608568 2065 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Feb 13 20:39:35.709538 kubelet[2065]: E0213 20:39:35.709104 2065 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Feb 13 20:39:35.810279 kubelet[2065]: E0213 20:39:35.810218 2065 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Feb 13 20:39:35.910338 kubelet[2065]: E0213 20:39:35.910284 2065 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Feb 13 20:39:36.010946 kubelet[2065]: E0213 20:39:36.010839 2065 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Feb 13 20:39:36.077757 kubelet[2065]: I0213 20:39:36.077677 2065 apiserver.go:52] "Watching apiserver" Feb 13 20:39:36.088039 kubelet[2065]: I0213 20:39:36.087975 2065 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world" Feb 13 20:39:37.245010 systemd[1]: Reloading requested from client PID 2348 ('systemctl') (unit session-5.scope)... Feb 13 20:39:37.245026 systemd[1]: Reloading... Feb 13 20:39:37.305483 zram_generator::config[2390]: No configuration found. Feb 13 20:39:37.384721 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Feb 13 20:39:37.447591 systemd[1]: Reloading finished in 202 ms. Feb 13 20:39:37.479993 kubelet[2065]: I0213 20:39:37.479958 2065 dynamic_cafile_content.go:174] "Shutting down controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Feb 13 20:39:37.480145 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Feb 13 20:39:37.489169 systemd[1]: kubelet.service: Deactivated successfully. Feb 13 20:39:37.489485 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Feb 13 20:39:37.489537 systemd[1]: kubelet.service: Consumed 1.616s CPU time, 119.4M memory peak, 0B memory swap peak. Feb 13 20:39:37.497642 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Feb 13 20:39:37.586204 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Feb 13 20:39:37.589945 (kubelet)[2429]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Feb 13 20:39:37.620849 kubelet[2429]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Feb 13 20:39:37.620849 kubelet[2429]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Feb 13 20:39:37.620849 kubelet[2429]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. 
See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Feb 13 20:39:37.621156 kubelet[2429]: I0213 20:39:37.620896 2429 server.go:206] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Feb 13 20:39:37.626488 kubelet[2429]: I0213 20:39:37.626118 2429 server.go:486] "Kubelet version" kubeletVersion="v1.31.0" Feb 13 20:39:37.626488 kubelet[2429]: I0213 20:39:37.626143 2429 server.go:488] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Feb 13 20:39:37.626488 kubelet[2429]: I0213 20:39:37.626357 2429 server.go:929] "Client rotation is on, will bootstrap in background" Feb 13 20:39:37.627790 kubelet[2429]: I0213 20:39:37.627772 2429 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Feb 13 20:39:37.629898 kubelet[2429]: I0213 20:39:37.629869 2429 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Feb 13 20:39:37.635060 kubelet[2429]: E0213 20:39:37.635011 2429 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Feb 13 20:39:37.635060 kubelet[2429]: I0213 20:39:37.635043 2429 server.go:1403] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Feb 13 20:39:37.638491 kubelet[2429]: I0213 20:39:37.637623 2429 server.go:744] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Feb 13 20:39:37.638491 kubelet[2429]: I0213 20:39:37.637752 2429 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority" Feb 13 20:39:37.638491 kubelet[2429]: I0213 20:39:37.637842 2429 container_manager_linux.go:264] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Feb 13 20:39:37.638491 kubelet[2429]: I0213 20:39:37.637866 2429 container_manager_linux.go:269] "Creating Container Manager object based on Node Config" 
nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Feb 13 20:39:37.638707 kubelet[2429]: I0213 20:39:37.638119 2429 topology_manager.go:138] "Creating topology manager with none policy" Feb 13 20:39:37.638707 kubelet[2429]: I0213 20:39:37.638130 2429 container_manager_linux.go:300] "Creating device plugin manager" Feb 13 20:39:37.638707 kubelet[2429]: I0213 20:39:37.638159 2429 state_mem.go:36] "Initialized new in-memory state store" Feb 13 20:39:37.638707 kubelet[2429]: I0213 20:39:37.638253 2429 kubelet.go:408] "Attempting to sync node with API server" Feb 13 20:39:37.638707 kubelet[2429]: I0213 20:39:37.638277 2429 kubelet.go:303] "Adding static pod path" path="/etc/kubernetes/manifests" Feb 13 20:39:37.638707 kubelet[2429]: I0213 20:39:37.638300 2429 kubelet.go:314] "Adding apiserver pod source" Feb 13 20:39:37.638707 kubelet[2429]: I0213 20:39:37.638330 2429 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Feb 13 20:39:37.643386 kubelet[2429]: I0213 20:39:37.643362 2429 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Feb 13 20:39:37.644511 kubelet[2429]: I0213 20:39:37.644018 2429 kubelet.go:837] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Feb 13 20:39:37.644906 kubelet[2429]: I0213 20:39:37.644887 2429 server.go:1269] "Started kubelet" Feb 13 20:39:37.645586 kubelet[2429]: I0213 20:39:37.644946 2429 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Feb 13 20:39:37.647238 kubelet[2429]: I0213 20:39:37.645894 2429 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Feb 13 20:39:37.650317 kubelet[2429]: I0213 20:39:37.647907 2429 server.go:460] "Adding debug handlers to kubelet server" Feb 13 20:39:37.650317 kubelet[2429]: I0213 20:39:37.648726 2429 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Feb 13 20:39:37.651623 kubelet[2429]: I0213 20:39:37.651598 2429 
fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Feb 13 20:39:37.654806 kubelet[2429]: I0213 20:39:37.651605 2429 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Feb 13 20:39:37.661114 kubelet[2429]: I0213 20:39:37.661092 2429 volume_manager.go:289] "Starting Kubelet Volume Manager" Feb 13 20:39:37.661341 kubelet[2429]: E0213 20:39:37.661317 2429 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Feb 13 20:39:37.661995 kubelet[2429]: E0213 20:39:37.661973 2429 kubelet.go:1478] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Feb 13 20:39:37.662938 kubelet[2429]: I0213 20:39:37.662910 2429 desired_state_of_world_populator.go:146] "Desired state populator starts to run" Feb 13 20:39:37.663068 kubelet[2429]: I0213 20:39:37.663054 2429 reconciler.go:26] "Reconciler: start to sync state" Feb 13 20:39:37.665805 kubelet[2429]: I0213 20:39:37.665775 2429 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Feb 13 20:39:37.666676 kubelet[2429]: I0213 20:39:37.666651 2429 factory.go:221] Registration of the containerd container factory successfully Feb 13 20:39:37.666676 kubelet[2429]: I0213 20:39:37.666671 2429 factory.go:221] Registration of the systemd container factory successfully Feb 13 20:39:37.666768 kubelet[2429]: I0213 20:39:37.666746 2429 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Feb 13 20:39:37.669441 kubelet[2429]: I0213 20:39:37.669417 2429 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Feb 13 20:39:37.669441 kubelet[2429]: I0213 20:39:37.669441 2429 status_manager.go:217] "Starting to sync pod status with apiserver" Feb 13 20:39:37.669525 kubelet[2429]: I0213 20:39:37.669457 2429 kubelet.go:2321] "Starting kubelet main sync loop" Feb 13 20:39:37.669525 kubelet[2429]: E0213 20:39:37.669500 2429 kubelet.go:2345] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Feb 13 20:39:37.698907 kubelet[2429]: I0213 20:39:37.698878 2429 cpu_manager.go:214] "Starting CPU manager" policy="none" Feb 13 20:39:37.698907 kubelet[2429]: I0213 20:39:37.698900 2429 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Feb 13 20:39:37.699033 kubelet[2429]: I0213 20:39:37.698921 2429 state_mem.go:36] "Initialized new in-memory state store" Feb 13 20:39:37.699086 kubelet[2429]: I0213 20:39:37.699068 2429 state_mem.go:88] "Updated default CPUSet" cpuSet="" Feb 13 20:39:37.699107 kubelet[2429]: I0213 20:39:37.699085 2429 state_mem.go:96] "Updated CPUSet assignments" assignments={} Feb 13 20:39:37.699107 kubelet[2429]: I0213 20:39:37.699104 2429 policy_none.go:49] "None policy: Start" Feb 13 20:39:37.699755 kubelet[2429]: I0213 20:39:37.699739 2429 memory_manager.go:170] "Starting memorymanager" policy="None" Feb 13 20:39:37.699797 kubelet[2429]: I0213 20:39:37.699764 2429 state_mem.go:35] "Initializing new in-memory state store" Feb 13 20:39:37.699951 kubelet[2429]: I0213 20:39:37.699935 2429 state_mem.go:75] "Updated machine memory state" Feb 13 20:39:37.703687 kubelet[2429]: I0213 20:39:37.703524 2429 manager.go:510] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Feb 13 20:39:37.703687 kubelet[2429]: I0213 20:39:37.703683 2429 eviction_manager.go:189] "Eviction manager: starting control loop" Feb 13 20:39:37.703765 kubelet[2429]: I0213 20:39:37.703694 2429 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Feb 13 20:39:37.704094 kubelet[2429]: I0213 20:39:37.703866 2429 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Feb 13 20:39:37.808075 kubelet[2429]: I0213 20:39:37.807899 2429 kubelet_node_status.go:72] "Attempting to register node" node="localhost" Feb 13 20:39:37.814577 kubelet[2429]: I0213 20:39:37.814544 2429 kubelet_node_status.go:111] "Node was previously registered" node="localhost" Feb 13 20:39:37.814703 kubelet[2429]: I0213 20:39:37.814632 2429 kubelet_node_status.go:75] "Successfully registered node" node="localhost" Feb 13 20:39:37.864635 kubelet[2429]: I0213 20:39:37.864597 2429 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/98eb2295280bc6da80e83f7636be329c-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"98eb2295280bc6da80e83f7636be329c\") " pod="kube-system/kube-controller-manager-localhost" Feb 13 20:39:37.864635 kubelet[2429]: I0213 20:39:37.864636 2429 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/98eb2295280bc6da80e83f7636be329c-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"98eb2295280bc6da80e83f7636be329c\") " pod="kube-system/kube-controller-manager-localhost" Feb 13 20:39:37.864789 kubelet[2429]: I0213 20:39:37.864657 2429 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/98eb2295280bc6da80e83f7636be329c-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"98eb2295280bc6da80e83f7636be329c\") " pod="kube-system/kube-controller-manager-localhost" Feb 13 20:39:37.864789 kubelet[2429]: I0213 20:39:37.864674 2429 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/04cca2c455deeb5da380812dcab224d8-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"04cca2c455deeb5da380812dcab224d8\") " pod="kube-system/kube-scheduler-localhost" Feb 13 20:39:37.864789 kubelet[2429]: I0213 20:39:37.864689 2429 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/abdde6388dc30f20066225ecf995099e-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"abdde6388dc30f20066225ecf995099e\") " pod="kube-system/kube-apiserver-localhost" Feb 13 20:39:37.864789 kubelet[2429]: I0213 20:39:37.864707 2429 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/abdde6388dc30f20066225ecf995099e-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"abdde6388dc30f20066225ecf995099e\") " pod="kube-system/kube-apiserver-localhost" Feb 13 20:39:37.864789 kubelet[2429]: I0213 20:39:37.864724 2429 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/98eb2295280bc6da80e83f7636be329c-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"98eb2295280bc6da80e83f7636be329c\") " pod="kube-system/kube-controller-manager-localhost" Feb 13 20:39:37.864936 kubelet[2429]: I0213 20:39:37.864738 2429 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/abdde6388dc30f20066225ecf995099e-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"abdde6388dc30f20066225ecf995099e\") " pod="kube-system/kube-apiserver-localhost" Feb 13 20:39:37.864936 kubelet[2429]: I0213 20:39:37.864755 2429 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/98eb2295280bc6da80e83f7636be329c-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"98eb2295280bc6da80e83f7636be329c\") " pod="kube-system/kube-controller-manager-localhost" Feb 13 20:39:38.078134 kubelet[2429]: E0213 20:39:38.078032 2429 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 20:39:38.078225 kubelet[2429]: E0213 20:39:38.078193 2429 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 20:39:38.078840 kubelet[2429]: E0213 20:39:38.078416 2429 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 20:39:38.640243 kubelet[2429]: I0213 20:39:38.639914 2429 apiserver.go:52] "Watching apiserver" Feb 13 20:39:38.663200 kubelet[2429]: I0213 
20:39:38.663159 2429 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world" Feb 13 20:39:38.679947 kubelet[2429]: E0213 20:39:38.679849 2429 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 20:39:38.679947 kubelet[2429]: E0213 20:39:38.679895 2429 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 20:39:38.680445 kubelet[2429]: E0213 20:39:38.680418 2429 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 20:39:38.705481 kubelet[2429]: I0213 20:39:38.705425 2429 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=1.705411008 podStartE2EDuration="1.705411008s" podCreationTimestamp="2025-02-13 20:39:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-02-13 20:39:38.705276807 +0000 UTC m=+1.112447421" watchObservedRunningTime="2025-02-13 20:39:38.705411008 +0000 UTC m=+1.112581582" Feb 13 20:39:38.705606 kubelet[2429]: I0213 20:39:38.705567 2429 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=1.70555357 podStartE2EDuration="1.70555357s" podCreationTimestamp="2025-02-13 20:39:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-02-13 20:39:38.698505207 +0000 UTC m=+1.105675781" watchObservedRunningTime="2025-02-13 20:39:38.70555357 +0000 UTC m=+1.112724144" Feb 13 20:39:38.720329 kubelet[2429]: I0213 20:39:38.720245 2429 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=1.7202320229999999 podStartE2EDuration="1.720232023s" podCreationTimestamp="2025-02-13 20:39:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-02-13 20:39:38.712225969 +0000 UTC m=+1.119396543" watchObservedRunningTime="2025-02-13 20:39:38.720232023 +0000 UTC m=+1.127402597" Feb 13 20:39:39.092364 sudo[1577]: pam_unix(sudo:session): session closed for user root Feb 13 20:39:39.094421 sshd[1574]: pam_unix(sshd:session): session closed for user core Feb 13 20:39:39.097755 systemd[1]: sshd@4-10.0.0.6:22-10.0.0.1:53044.service: Deactivated successfully. Feb 13 20:39:39.099610 systemd[1]: session-5.scope: Deactivated successfully. Feb 13 20:39:39.099831 systemd[1]: session-5.scope: Consumed 5.391s CPU time, 154.0M memory peak, 0B memory swap peak. Feb 13 20:39:39.100565 systemd-logind[1423]: Session 5 logged out. Waiting for processes to exit. Feb 13 20:39:39.102361 systemd-logind[1423]: Removed session 5. 
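The recurring dns.go:153 warnings throughout this log fire because the node's resolv.conf lists more nameservers than the resolver limit kubelet enforces; entries past the limit are dropped, leaving the applied line "1.1.1.1 1.0.0.1 8.8.8.8". A minimal sketch of that check, assuming the conventional three-server limit (matching glibc's MAXNS) and the default /etc/resolv.conf path:

def applied_nameservers(path="/etc/resolv.conf", limit=3):
    # Collect nameserver entries and keep only the first `limit`,
    # warning about the rest -- the behaviour the dns.go:153 lines report.
    servers = [line.split()[1] for line in open(path)
               if line.startswith("nameserver") and len(line.split()) >= 2]
    if len(servers) > limit:
        print(f"nameserver limits exceeded; applying: {' '.join(servers[:limit])}")
    return servers[:limit]

Trimming the host's resolv.conf to three servers would silence the warning.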
Feb 13 20:39:39.681567 kubelet[2429]: E0213 20:39:39.681531 2429 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 20:39:43.792108 kubelet[2429]: E0213 20:39:43.792067 2429 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 20:39:44.244752 kubelet[2429]: E0213 20:39:44.244644 2429 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 20:39:44.480020 kubelet[2429]: I0213 20:39:44.479794 2429 kuberuntime_manager.go:1633] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Feb 13 20:39:44.481648 containerd[1441]: time="2025-02-13T20:39:44.481550563Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Feb 13 20:39:44.482074 kubelet[2429]: I0213 20:39:44.481725 2429 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Feb 13 20:39:44.598073 systemd[1]: Created slice kubepods-besteffort-pod7e48a5ac_0a87_4b18_b42b_3c7bb374e45c.slice - libcontainer container kubepods-besteffort-pod7e48a5ac_0a87_4b18_b42b_3c7bb374e45c.slice. Feb 13 20:39:44.607843 kubelet[2429]: I0213 20:39:44.607110 2429 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/7e48a5ac-0a87-4b18-b42b-3c7bb374e45c-kube-proxy\") pod \"kube-proxy-b5zrg\" (UID: \"7e48a5ac-0a87-4b18-b42b-3c7bb374e45c\") " pod="kube-system/kube-proxy-b5zrg" Feb 13 20:39:44.607843 kubelet[2429]: I0213 20:39:44.607148 2429 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-plugin\" (UniqueName: \"kubernetes.io/host-path/40748397-e6cc-4e80-aa93-47714f7f3a4c-cni-plugin\") pod \"kube-flannel-ds-2bqj5\" (UID: \"40748397-e6cc-4e80-aa93-47714f7f3a4c\") " pod="kube-flannel/kube-flannel-ds-2bqj5" Feb 13 20:39:44.607843 kubelet[2429]: I0213 20:39:44.607165 2429 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/40748397-e6cc-4e80-aa93-47714f7f3a4c-xtables-lock\") pod \"kube-flannel-ds-2bqj5\" (UID: \"40748397-e6cc-4e80-aa93-47714f7f3a4c\") " pod="kube-flannel/kube-flannel-ds-2bqj5" Feb 13 20:39:44.607843 kubelet[2429]: I0213 20:39:44.607179 2429 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/7e48a5ac-0a87-4b18-b42b-3c7bb374e45c-xtables-lock\") pod \"kube-proxy-b5zrg\" (UID: \"7e48a5ac-0a87-4b18-b42b-3c7bb374e45c\") " pod="kube-system/kube-proxy-b5zrg" Feb 13 20:39:44.607843 kubelet[2429]: I0213 20:39:44.607194 2429 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qhtgr\" (UniqueName: \"kubernetes.io/projected/7e48a5ac-0a87-4b18-b42b-3c7bb374e45c-kube-api-access-qhtgr\") pod \"kube-proxy-b5zrg\" (UID: \"7e48a5ac-0a87-4b18-b42b-3c7bb374e45c\") " pod="kube-system/kube-proxy-b5zrg" Feb 13 20:39:44.608620 kubelet[2429]: I0213 20:39:44.607694 2429 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni\" (UniqueName: 
\"kubernetes.io/host-path/40748397-e6cc-4e80-aa93-47714f7f3a4c-cni\") pod \"kube-flannel-ds-2bqj5\" (UID: \"40748397-e6cc-4e80-aa93-47714f7f3a4c\") " pod="kube-flannel/kube-flannel-ds-2bqj5" Feb 13 20:39:44.608620 kubelet[2429]: I0213 20:39:44.607721 2429 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flannel-cfg\" (UniqueName: \"kubernetes.io/configmap/40748397-e6cc-4e80-aa93-47714f7f3a4c-flannel-cfg\") pod \"kube-flannel-ds-2bqj5\" (UID: \"40748397-e6cc-4e80-aa93-47714f7f3a4c\") " pod="kube-flannel/kube-flannel-ds-2bqj5" Feb 13 20:39:44.608620 kubelet[2429]: I0213 20:39:44.608380 2429 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-78wb5\" (UniqueName: \"kubernetes.io/projected/40748397-e6cc-4e80-aa93-47714f7f3a4c-kube-api-access-78wb5\") pod \"kube-flannel-ds-2bqj5\" (UID: \"40748397-e6cc-4e80-aa93-47714f7f3a4c\") " pod="kube-flannel/kube-flannel-ds-2bqj5" Feb 13 20:39:44.608620 kubelet[2429]: I0213 20:39:44.608409 2429 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/7e48a5ac-0a87-4b18-b42b-3c7bb374e45c-lib-modules\") pod \"kube-proxy-b5zrg\" (UID: \"7e48a5ac-0a87-4b18-b42b-3c7bb374e45c\") " pod="kube-system/kube-proxy-b5zrg" Feb 13 20:39:44.608620 kubelet[2429]: I0213 20:39:44.608430 2429 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run\" (UniqueName: \"kubernetes.io/host-path/40748397-e6cc-4e80-aa93-47714f7f3a4c-run\") pod \"kube-flannel-ds-2bqj5\" (UID: \"40748397-e6cc-4e80-aa93-47714f7f3a4c\") " pod="kube-flannel/kube-flannel-ds-2bqj5" Feb 13 20:39:44.625399 systemd[1]: Created slice kubepods-burstable-pod40748397_e6cc_4e80_aa93_47714f7f3a4c.slice - libcontainer container kubepods-burstable-pod40748397_e6cc_4e80_aa93_47714f7f3a4c.slice. Feb 13 20:39:44.687164 kubelet[2429]: E0213 20:39:44.687125 2429 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 20:39:44.687646 kubelet[2429]: E0213 20:39:44.687400 2429 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 20:39:44.715757 kubelet[2429]: E0213 20:39:44.715730 2429 projected.go:288] Couldn't get configMap kube-flannel/kube-root-ca.crt: configmap "kube-root-ca.crt" not found Feb 13 20:39:44.715757 kubelet[2429]: E0213 20:39:44.715757 2429 projected.go:194] Error preparing data for projected volume kube-api-access-78wb5 for pod kube-flannel/kube-flannel-ds-2bqj5: configmap "kube-root-ca.crt" not found Feb 13 20:39:44.715854 kubelet[2429]: E0213 20:39:44.715783 2429 projected.go:288] Couldn't get configMap kube-system/kube-root-ca.crt: configmap "kube-root-ca.crt" not found Feb 13 20:39:44.715854 kubelet[2429]: E0213 20:39:44.715807 2429 projected.go:194] Error preparing data for projected volume kube-api-access-qhtgr for pod kube-system/kube-proxy-b5zrg: configmap "kube-root-ca.crt" not found Feb 13 20:39:44.715854 kubelet[2429]: E0213 20:39:44.715811 2429 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/40748397-e6cc-4e80-aa93-47714f7f3a4c-kube-api-access-78wb5 podName:40748397-e6cc-4e80-aa93-47714f7f3a4c nodeName:}" failed. 
No retries permitted until 2025-02-13 20:39:45.215784835 +0000 UTC m=+7.622955369 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-78wb5" (UniqueName: "kubernetes.io/projected/40748397-e6cc-4e80-aa93-47714f7f3a4c-kube-api-access-78wb5") pod "kube-flannel-ds-2bqj5" (UID: "40748397-e6cc-4e80-aa93-47714f7f3a4c") : configmap "kube-root-ca.crt" not found Feb 13 20:39:44.715854 kubelet[2429]: E0213 20:39:44.715840 2429 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/7e48a5ac-0a87-4b18-b42b-3c7bb374e45c-kube-api-access-qhtgr podName:7e48a5ac-0a87-4b18-b42b-3c7bb374e45c nodeName:}" failed. No retries permitted until 2025-02-13 20:39:45.215828036 +0000 UTC m=+7.622998610 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-qhtgr" (UniqueName: "kubernetes.io/projected/7e48a5ac-0a87-4b18-b42b-3c7bb374e45c-kube-api-access-qhtgr") pod "kube-proxy-b5zrg" (UID: "7e48a5ac-0a87-4b18-b42b-3c7bb374e45c") : configmap "kube-root-ca.crt" not found Feb 13 20:39:45.524360 kubelet[2429]: E0213 20:39:45.524151 2429 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 20:39:45.526776 containerd[1441]: time="2025-02-13T20:39:45.526718135Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-b5zrg,Uid:7e48a5ac-0a87-4b18-b42b-3c7bb374e45c,Namespace:kube-system,Attempt:0,}" Feb 13 20:39:45.527382 kubelet[2429]: E0213 20:39:45.527062 2429 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 20:39:45.527570 containerd[1441]: time="2025-02-13T20:39:45.527534821Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-flannel-ds-2bqj5,Uid:40748397-e6cc-4e80-aa93-47714f7f3a4c,Namespace:kube-flannel,Attempt:0,}" Feb 13 20:39:45.560476 containerd[1441]: time="2025-02-13T20:39:45.560394868Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 20:39:45.560476 containerd[1441]: time="2025-02-13T20:39:45.560448548Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 20:39:45.560476 containerd[1441]: time="2025-02-13T20:39:45.560459428Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 20:39:45.560664 containerd[1441]: time="2025-02-13T20:39:45.560528829Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 20:39:45.561817 containerd[1441]: time="2025-02-13T20:39:45.561587756Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 20:39:45.561817 containerd[1441]: time="2025-02-13T20:39:45.561624277Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 20:39:45.561817 containerd[1441]: time="2025-02-13T20:39:45.561634917Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 20:39:45.561817 containerd[1441]: time="2025-02-13T20:39:45.561690837Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 20:39:45.582515 systemd[1]: Started cri-containerd-df1c5d25cbf1cc6b95d1bd8ac91cedd048509f2a17962e4d85736e7efdf08a70.scope - libcontainer container df1c5d25cbf1cc6b95d1bd8ac91cedd048509f2a17962e4d85736e7efdf08a70. Feb 13 20:39:45.585663 systemd[1]: Started cri-containerd-ed85a526e667b48f955bab2a38e3d0b346d5eebc7e3eb6423d7639a8b8191029.scope - libcontainer container ed85a526e667b48f955bab2a38e3d0b346d5eebc7e3eb6423d7639a8b8191029. Feb 13 20:39:45.604835 containerd[1441]: time="2025-02-13T20:39:45.604733800Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-b5zrg,Uid:7e48a5ac-0a87-4b18-b42b-3c7bb374e45c,Namespace:kube-system,Attempt:0,} returns sandbox id \"ed85a526e667b48f955bab2a38e3d0b346d5eebc7e3eb6423d7639a8b8191029\"" Feb 13 20:39:45.605385 kubelet[2429]: E0213 20:39:45.605350 2429 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 20:39:45.610056 containerd[1441]: time="2025-02-13T20:39:45.610019399Z" level=info msg="CreateContainer within sandbox \"ed85a526e667b48f955bab2a38e3d0b346d5eebc7e3eb6423d7639a8b8191029\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Feb 13 20:39:45.613088 containerd[1441]: time="2025-02-13T20:39:45.613010222Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-flannel-ds-2bqj5,Uid:40748397-e6cc-4e80-aa93-47714f7f3a4c,Namespace:kube-flannel,Attempt:0,} returns sandbox id \"df1c5d25cbf1cc6b95d1bd8ac91cedd048509f2a17962e4d85736e7efdf08a70\"" Feb 13 20:39:45.613697 kubelet[2429]: E0213 20:39:45.613581 2429 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 20:39:45.615393 containerd[1441]: time="2025-02-13T20:39:45.615297639Z" level=info msg="PullImage \"docker.io/flannel/flannel-cni-plugin:v1.1.2\"" Feb 13 20:39:45.623335 containerd[1441]: time="2025-02-13T20:39:45.623238858Z" level=info msg="CreateContainer within sandbox \"ed85a526e667b48f955bab2a38e3d0b346d5eebc7e3eb6423d7639a8b8191029\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"9f15ba41e6a0b282cc1874cfba7f1d3139499065146f53ab85d3f4016e2d654a\"" Feb 13 20:39:45.623860 containerd[1441]: time="2025-02-13T20:39:45.623823943Z" level=info msg="StartContainer for \"9f15ba41e6a0b282cc1874cfba7f1d3139499065146f53ab85d3f4016e2d654a\"" Feb 13 20:39:45.648499 systemd[1]: Started cri-containerd-9f15ba41e6a0b282cc1874cfba7f1d3139499065146f53ab85d3f4016e2d654a.scope - libcontainer container 9f15ba41e6a0b282cc1874cfba7f1d3139499065146f53ab85d3f4016e2d654a. 
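Each pod above goes through the same containerd lifecycle: RunPodSandbox returns a 64-hex sandbox id, CreateContainer returns a container id, and StartContainer reports success. A small parser that groups those events by id can make the sequence easier to follow; the regexes below are tailored to the exact message formats in this journal and are a convenience sketch, not part of any Kubernetes tooling:

import re
from collections import defaultdict

EVENTS = {
    "sandbox": re.compile(r'returns sandbox id \\?"([0-9a-f]{64})'),
    "created": re.compile(r'returns container id \\?"([0-9a-f]{64})'),
    "started": re.compile(r'StartContainer for \\?"([0-9a-f]{64})\\?" returns successfully'),
}

def trace(journal_lines):
    # Map each container/sandbox id to the ordered events seen for it.
    timeline = defaultdict(list)
    for line in journal_lines:
        for kind, pat in EVENTS.items():
            m = pat.search(line)
            if m:
                timeline[m.group(1)].append(kind)
    return dict(timeline)

Fed this section, it yields for example 9f15ba41... -> ['created', 'started'] for the kube-proxy container.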
Feb 13 20:39:45.674556 containerd[1441]: time="2025-02-13T20:39:45.674242201Z" level=info msg="StartContainer for \"9f15ba41e6a0b282cc1874cfba7f1d3139499065146f53ab85d3f4016e2d654a\" returns successfully" Feb 13 20:39:45.691523 kubelet[2429]: E0213 20:39:45.691495 2429 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 20:39:45.692474 kubelet[2429]: E0213 20:39:45.692010 2429 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 20:39:45.701102 kubelet[2429]: I0213 20:39:45.701044 2429 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-b5zrg" podStartSLOduration=1.701031881 podStartE2EDuration="1.701031881s" podCreationTimestamp="2025-02-13 20:39:44 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-02-13 20:39:45.70079776 +0000 UTC m=+8.107968334" watchObservedRunningTime="2025-02-13 20:39:45.701031881 +0000 UTC m=+8.108202455" Feb 13 20:39:46.779086 containerd[1441]: time="2025-02-13T20:39:46.779001436Z" level=error msg="PullImage \"docker.io/flannel/flannel-cni-plugin:v1.1.2\" failed" error="failed to pull and unpack image \"docker.io/flannel/flannel-cni-plugin:v1.1.2\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/flannel/flannel-cni-plugin/manifests/sha256:bf4b62b131666d040f35a327d906ee5a3418280b68a88d9b9c7e828057210443: 429 Too Many Requests - Server message: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit" Feb 13 20:39:46.779540 containerd[1441]: time="2025-02-13T20:39:46.779078836Z" level=info msg="stop pulling image docker.io/flannel/flannel-cni-plugin:v1.1.2: active requests=0, bytes read=11110" Feb 13 20:39:46.779575 kubelet[2429]: E0213 20:39:46.779246 2429 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = failed to pull and unpack image \"docker.io/flannel/flannel-cni-plugin:v1.1.2\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/flannel/flannel-cni-plugin/manifests/sha256:bf4b62b131666d040f35a327d906ee5a3418280b68a88d9b9c7e828057210443: 429 Too Many Requests - Server message: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit" image="docker.io/flannel/flannel-cni-plugin:v1.1.2" Feb 13 20:39:46.779575 kubelet[2429]: E0213 20:39:46.779328 2429 kuberuntime_image.go:55] "Failed to pull image" err="failed to pull and unpack image \"docker.io/flannel/flannel-cni-plugin:v1.1.2\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/flannel/flannel-cni-plugin/manifests/sha256:bf4b62b131666d040f35a327d906ee5a3418280b68a88d9b9c7e828057210443: 429 Too Many Requests - Server message: toomanyrequests: You have reached your pull rate limit. 
You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit" image="docker.io/flannel/flannel-cni-plugin:v1.1.2" Feb 13 20:39:46.779837 kubelet[2429]: E0213 20:39:46.779447 2429 kuberuntime_manager.go:1272] "Unhandled Error" err="init container &Container{Name:install-cni-plugin,Image:docker.io/flannel/flannel-cni-plugin:v1.1.2,Command:[cp],Args:[-f /flannel /opt/cni/bin/flannel],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:cni-plugin,ReadOnly:false,MountPath:/opt/cni/bin,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-78wb5,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod kube-flannel-ds-2bqj5_kube-flannel(40748397-e6cc-4e80-aa93-47714f7f3a4c): ErrImagePull: failed to pull and unpack image \"docker.io/flannel/flannel-cni-plugin:v1.1.2\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/flannel/flannel-cni-plugin/manifests/sha256:bf4b62b131666d040f35a327d906ee5a3418280b68a88d9b9c7e828057210443: 429 Too Many Requests - Server message: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit" logger="UnhandledError" Feb 13 20:39:46.780727 kubelet[2429]: E0213 20:39:46.780662 2429 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"install-cni-plugin\" with ErrImagePull: \"failed to pull and unpack image \\\"docker.io/flannel/flannel-cni-plugin:v1.1.2\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/flannel/flannel-cni-plugin/manifests/sha256:bf4b62b131666d040f35a327d906ee5a3418280b68a88d9b9c7e828057210443: 429 Too Many Requests - Server message: toomanyrequests: You have reached your pull rate limit. 
You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit\"" pod="kube-flannel/kube-flannel-ds-2bqj5" podUID="40748397-e6cc-4e80-aa93-47714f7f3a4c" Feb 13 20:39:46.998293 kubelet[2429]: E0213 20:39:46.995363 2429 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 20:39:47.694974 kubelet[2429]: E0213 20:39:47.694926 2429 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 20:39:47.696206 kubelet[2429]: E0213 20:39:47.695197 2429 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 20:39:47.696973 kubelet[2429]: E0213 20:39:47.696368 2429 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"install-cni-plugin\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/flannel/flannel-cni-plugin:v1.1.2\\\"\"" pod="kube-flannel/kube-flannel-ds-2bqj5" podUID="40748397-e6cc-4e80-aa93-47714f7f3a4c" Feb 13 20:39:48.646669 update_engine[1425]: I20250213 20:39:48.646595 1425 update_attempter.cc:509] Updating boot flags... Feb 13 20:39:48.673512 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 40 scanned by (udev-worker) (2750) Feb 13 20:39:48.723335 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 40 scanned by (udev-worker) (2751) Feb 13 20:39:59.670560 kubelet[2429]: E0213 20:39:59.670445 2429 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 20:39:59.671785 containerd[1441]: time="2025-02-13T20:39:59.671744817Z" level=info msg="PullImage \"docker.io/flannel/flannel-cni-plugin:v1.1.2\"" Feb 13 20:40:00.778441 containerd[1441]: time="2025-02-13T20:40:00.778384190Z" level=error msg="PullImage \"docker.io/flannel/flannel-cni-plugin:v1.1.2\" failed" error="failed to pull and unpack image \"docker.io/flannel/flannel-cni-plugin:v1.1.2\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/flannel/flannel-cni-plugin/manifests/sha256:bf4b62b131666d040f35a327d906ee5a3418280b68a88d9b9c7e828057210443: 429 Too Many Requests - Server message: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit" Feb 13 20:40:00.778842 containerd[1441]: time="2025-02-13T20:40:00.778469390Z" level=info msg="stop pulling image docker.io/flannel/flannel-cni-plugin:v1.1.2: active requests=0, bytes read=11110" Feb 13 20:40:00.778878 kubelet[2429]: E0213 20:40:00.778625 2429 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = failed to pull and unpack image \"docker.io/flannel/flannel-cni-plugin:v1.1.2\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/flannel/flannel-cni-plugin/manifests/sha256:bf4b62b131666d040f35a327d906ee5a3418280b68a88d9b9c7e828057210443: 429 Too Many Requests - Server message: toomanyrequests: You have reached your pull rate limit. 
You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit" image="docker.io/flannel/flannel-cni-plugin:v1.1.2" Feb 13 20:40:00.778878 kubelet[2429]: E0213 20:40:00.778678 2429 kuberuntime_image.go:55] "Failed to pull image" err="failed to pull and unpack image \"docker.io/flannel/flannel-cni-plugin:v1.1.2\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/flannel/flannel-cni-plugin/manifests/sha256:bf4b62b131666d040f35a327d906ee5a3418280b68a88d9b9c7e828057210443: 429 Too Many Requests - Server message: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit" image="docker.io/flannel/flannel-cni-plugin:v1.1.2" Feb 13 20:40:00.779297 kubelet[2429]: E0213 20:40:00.778761 2429 kuberuntime_manager.go:1272] "Unhandled Error" err="init container &Container{Name:install-cni-plugin,Image:docker.io/flannel/flannel-cni-plugin:v1.1.2,Command:[cp],Args:[-f /flannel /opt/cni/bin/flannel],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:cni-plugin,ReadOnly:false,MountPath:/opt/cni/bin,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-78wb5,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod kube-flannel-ds-2bqj5_kube-flannel(40748397-e6cc-4e80-aa93-47714f7f3a4c): ErrImagePull: failed to pull and unpack image \"docker.io/flannel/flannel-cni-plugin:v1.1.2\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/flannel/flannel-cni-plugin/manifests/sha256:bf4b62b131666d040f35a327d906ee5a3418280b68a88d9b9c7e828057210443: 429 Too Many Requests - Server message: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit" logger="UnhandledError" Feb 13 20:40:00.780274 kubelet[2429]: E0213 20:40:00.780228 2429 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"install-cni-plugin\" with ErrImagePull: \"failed to pull and unpack image \\\"docker.io/flannel/flannel-cni-plugin:v1.1.2\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/flannel/flannel-cni-plugin/manifests/sha256:bf4b62b131666d040f35a327d906ee5a3418280b68a88d9b9c7e828057210443: 429 Too Many Requests - Server message: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit\"" pod="kube-flannel/kube-flannel-ds-2bqj5" podUID="40748397-e6cc-4e80-aa93-47714f7f3a4c" Feb 13 20:40:04.352739 systemd[1]: Started sshd@5-10.0.0.6:22-10.0.0.1:58420.service - OpenSSH per-connection server daemon (10.0.0.1:58420). 
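The 429 responses above are Docker Hub's anonymous pull rate limit, and the registry's own message names the fix: authenticate. One way to do that for this workload is a docker-registry pull secret attached to the pod template. A sketch, assuming the pod kube-flannel-ds-2bqj5 belongs to a DaemonSet named kube-flannel-ds in the kube-flannel namespace, with <user> and <token> as placeholders for real Docker Hub credentials:

# Create registry credentials in the namespace the failing pod runs in.
kubectl create secret docker-registry regcred \
  --namespace kube-flannel \
  --docker-server=https://index.docker.io/v1/ \
  --docker-username='<user>' \
  --docker-password='<token>'

# Attach the secret to the DaemonSet's pod template so kubelet pulls as an
# authenticated user instead of anonymously.
kubectl --namespace kube-flannel patch daemonset kube-flannel-ds \
  --type merge \
  -p '{"spec":{"template":{"spec":{"imagePullSecrets":[{"name":"regcred"}]}}}}'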
Feb 13 20:40:04.388807 sshd[2758]: Accepted publickey for core from 10.0.0.1 port 58420 ssh2: RSA SHA256:ijJ+MGEzuaViQlbMobaPQLogVrdZNUxIS0COzLzMHAQ Feb 13 20:40:04.389947 sshd[2758]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 20:40:04.393870 systemd-logind[1423]: New session 6 of user core. Feb 13 20:40:04.405455 systemd[1]: Started session-6.scope - Session 6 of User core. Feb 13 20:40:04.516584 sshd[2758]: pam_unix(sshd:session): session closed for user core Feb 13 20:40:04.520069 systemd[1]: sshd@5-10.0.0.6:22-10.0.0.1:58420.service: Deactivated successfully. Feb 13 20:40:04.522115 systemd[1]: session-6.scope: Deactivated successfully. Feb 13 20:40:04.524050 systemd-logind[1423]: Session 6 logged out. Waiting for processes to exit. Feb 13 20:40:04.524949 systemd-logind[1423]: Removed session 6. Feb 13 20:40:09.526922 systemd[1]: Started sshd@6-10.0.0.6:22-10.0.0.1:58430.service - OpenSSH per-connection server daemon (10.0.0.1:58430). Feb 13 20:40:09.562635 sshd[2773]: Accepted publickey for core from 10.0.0.1 port 58430 ssh2: RSA SHA256:ijJ+MGEzuaViQlbMobaPQLogVrdZNUxIS0COzLzMHAQ Feb 13 20:40:09.563909 sshd[2773]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 20:40:09.567748 systemd-logind[1423]: New session 7 of user core. Feb 13 20:40:09.577445 systemd[1]: Started session-7.scope - Session 7 of User core. Feb 13 20:40:09.680536 sshd[2773]: pam_unix(sshd:session): session closed for user core Feb 13 20:40:09.682999 systemd[1]: sshd@6-10.0.0.6:22-10.0.0.1:58430.service: Deactivated successfully. Feb 13 20:40:09.684546 systemd[1]: session-7.scope: Deactivated successfully. Feb 13 20:40:09.685810 systemd-logind[1423]: Session 7 logged out. Waiting for processes to exit. Feb 13 20:40:09.686716 systemd-logind[1423]: Removed session 7. Feb 13 20:40:12.670198 kubelet[2429]: E0213 20:40:12.670156 2429 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 20:40:12.670784 kubelet[2429]: E0213 20:40:12.670747 2429 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"install-cni-plugin\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/flannel/flannel-cni-plugin:v1.1.2\\\"\"" pod="kube-flannel/kube-flannel-ds-2bqj5" podUID="40748397-e6cc-4e80-aa93-47714f7f3a4c" Feb 13 20:40:14.690882 systemd[1]: Started sshd@7-10.0.0.6:22-10.0.0.1:48406.service - OpenSSH per-connection server daemon (10.0.0.1:48406). Feb 13 20:40:14.726624 sshd[2788]: Accepted publickey for core from 10.0.0.1 port 48406 ssh2: RSA SHA256:ijJ+MGEzuaViQlbMobaPQLogVrdZNUxIS0COzLzMHAQ Feb 13 20:40:14.727778 sshd[2788]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 20:40:14.730981 systemd-logind[1423]: New session 8 of user core. Feb 13 20:40:14.740455 systemd[1]: Started session-8.scope - Session 8 of User core. Feb 13 20:40:14.849649 sshd[2788]: pam_unix(sshd:session): session closed for user core Feb 13 20:40:14.852215 systemd[1]: sshd@7-10.0.0.6:22-10.0.0.1:48406.service: Deactivated successfully. Feb 13 20:40:14.853806 systemd[1]: session-8.scope: Deactivated successfully. Feb 13 20:40:14.855054 systemd-logind[1423]: Session 8 logged out. Waiting for processes to exit. Feb 13 20:40:14.857793 systemd-logind[1423]: Removed session 8. 
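The repeated "Nameserver limits exceeded" warnings are kubelet noting that the node's resolv.conf lists more nameservers than it can project into pods: the glibc resolver honors at most three, so the extras are dropped and only the applied line "1.1.1.1 1.0.0.1 8.8.8.8" is used. A quick check on the node (the path is whatever kubelet's --resolv-conf points at; on systemd-resolved hosts this is commonly /run/systemd/resolve/resolv.conf rather than /etc/resolv.conf):

# More than 3 nameserver entries in the file kubelet reads triggers the
# warning above; trimming the list to 3 silences it.
grep -c '^nameserver' /run/systemd/resolve/resolv.conf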
Feb 13 20:40:19.863916 systemd[1]: Started sshd@8-10.0.0.6:22-10.0.0.1:48420.service - OpenSSH per-connection server daemon (10.0.0.1:48420). Feb 13 20:40:19.899795 sshd[2806]: Accepted publickey for core from 10.0.0.1 port 48420 ssh2: RSA SHA256:ijJ+MGEzuaViQlbMobaPQLogVrdZNUxIS0COzLzMHAQ Feb 13 20:40:19.900935 sshd[2806]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 20:40:19.906728 systemd-logind[1423]: New session 9 of user core. Feb 13 20:40:19.913514 systemd[1]: Started session-9.scope - Session 9 of User core. Feb 13 20:40:20.017233 sshd[2806]: pam_unix(sshd:session): session closed for user core Feb 13 20:40:20.019499 systemd[1]: sshd@8-10.0.0.6:22-10.0.0.1:48420.service: Deactivated successfully. Feb 13 20:40:20.021224 systemd[1]: session-9.scope: Deactivated successfully. Feb 13 20:40:20.024522 systemd-logind[1423]: Session 9 logged out. Waiting for processes to exit. Feb 13 20:40:20.025578 systemd-logind[1423]: Removed session 9. Feb 13 20:40:23.671582 kubelet[2429]: E0213 20:40:23.670539 2429 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 20:40:23.671971 containerd[1441]: time="2025-02-13T20:40:23.671883919Z" level=info msg="PullImage \"docker.io/flannel/flannel-cni-plugin:v1.1.2\"" Feb 13 20:40:24.771227 containerd[1441]: time="2025-02-13T20:40:24.771143037Z" level=error msg="PullImage \"docker.io/flannel/flannel-cni-plugin:v1.1.2\" failed" error="failed to pull and unpack image \"docker.io/flannel/flannel-cni-plugin:v1.1.2\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/flannel/flannel-cni-plugin/manifests/sha256:bf4b62b131666d040f35a327d906ee5a3418280b68a88d9b9c7e828057210443: 429 Too Many Requests - Server message: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit" Feb 13 20:40:24.771628 containerd[1441]: time="2025-02-13T20:40:24.771233598Z" level=info msg="stop pulling image docker.io/flannel/flannel-cni-plugin:v1.1.2: active requests=0, bytes read=11110" Feb 13 20:40:24.771663 kubelet[2429]: E0213 20:40:24.771359 2429 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = failed to pull and unpack image \"docker.io/flannel/flannel-cni-plugin:v1.1.2\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/flannel/flannel-cni-plugin/manifests/sha256:bf4b62b131666d040f35a327d906ee5a3418280b68a88d9b9c7e828057210443: 429 Too Many Requests - Server message: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit" image="docker.io/flannel/flannel-cni-plugin:v1.1.2" Feb 13 20:40:24.771663 kubelet[2429]: E0213 20:40:24.771405 2429 kuberuntime_image.go:55] "Failed to pull image" err="failed to pull and unpack image \"docker.io/flannel/flannel-cni-plugin:v1.1.2\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/flannel/flannel-cni-plugin/manifests/sha256:bf4b62b131666d040f35a327d906ee5a3418280b68a88d9b9c7e828057210443: 429 Too Many Requests - Server message: toomanyrequests: You have reached your pull rate limit. 
You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit" image="docker.io/flannel/flannel-cni-plugin:v1.1.2" Feb 13 20:40:24.772090 kubelet[2429]: E0213 20:40:24.771494 2429 kuberuntime_manager.go:1272] "Unhandled Error" err="init container &Container{Name:install-cni-plugin,Image:docker.io/flannel/flannel-cni-plugin:v1.1.2,Command:[cp],Args:[-f /flannel /opt/cni/bin/flannel],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:cni-plugin,ReadOnly:false,MountPath:/opt/cni/bin,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-78wb5,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod kube-flannel-ds-2bqj5_kube-flannel(40748397-e6cc-4e80-aa93-47714f7f3a4c): ErrImagePull: failed to pull and unpack image \"docker.io/flannel/flannel-cni-plugin:v1.1.2\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/flannel/flannel-cni-plugin/manifests/sha256:bf4b62b131666d040f35a327d906ee5a3418280b68a88d9b9c7e828057210443: 429 Too Many Requests - Server message: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit" logger="UnhandledError" Feb 13 20:40:24.772673 kubelet[2429]: E0213 20:40:24.772637 2429 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"install-cni-plugin\" with ErrImagePull: \"failed to pull and unpack image \\\"docker.io/flannel/flannel-cni-plugin:v1.1.2\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/flannel/flannel-cni-plugin/manifests/sha256:bf4b62b131666d040f35a327d906ee5a3418280b68a88d9b9c7e828057210443: 429 Too Many Requests - Server message: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit\"" pod="kube-flannel/kube-flannel-ds-2bqj5" podUID="40748397-e6cc-4e80-aa93-47714f7f3a4c" Feb 13 20:40:25.026999 systemd[1]: Started sshd@9-10.0.0.6:22-10.0.0.1:39572.service - OpenSSH per-connection server daemon (10.0.0.1:39572). Feb 13 20:40:25.062783 sshd[2821]: Accepted publickey for core from 10.0.0.1 port 39572 ssh2: RSA SHA256:ijJ+MGEzuaViQlbMobaPQLogVrdZNUxIS0COzLzMHAQ Feb 13 20:40:25.063997 sshd[2821]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 20:40:25.068880 systemd-logind[1423]: New session 10 of user core. Feb 13 20:40:25.077454 systemd[1]: Started session-10.scope - Session 10 of User core. Feb 13 20:40:25.182876 sshd[2821]: pam_unix(sshd:session): session closed for user core Feb 13 20:40:25.186170 systemd[1]: sshd@9-10.0.0.6:22-10.0.0.1:39572.service: Deactivated successfully. Feb 13 20:40:25.188101 systemd[1]: session-10.scope: Deactivated successfully. 
Feb 13 20:40:25.188855 systemd-logind[1423]: Session 10 logged out. Waiting for processes to exit. Feb 13 20:40:25.191461 systemd-logind[1423]: Removed session 10. Feb 13 20:40:30.202965 systemd[1]: Started sshd@10-10.0.0.6:22-10.0.0.1:39574.service - OpenSSH per-connection server daemon (10.0.0.1:39574). Feb 13 20:40:30.239109 sshd[2836]: Accepted publickey for core from 10.0.0.1 port 39574 ssh2: RSA SHA256:ijJ+MGEzuaViQlbMobaPQLogVrdZNUxIS0COzLzMHAQ Feb 13 20:40:30.240271 sshd[2836]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 20:40:30.244052 systemd-logind[1423]: New session 11 of user core. Feb 13 20:40:30.252522 systemd[1]: Started session-11.scope - Session 11 of User core. Feb 13 20:40:30.356902 sshd[2836]: pam_unix(sshd:session): session closed for user core Feb 13 20:40:30.360028 systemd[1]: sshd@10-10.0.0.6:22-10.0.0.1:39574.service: Deactivated successfully. Feb 13 20:40:30.361745 systemd[1]: session-11.scope: Deactivated successfully. Feb 13 20:40:30.362349 systemd-logind[1423]: Session 11 logged out. Waiting for processes to exit. Feb 13 20:40:30.363276 systemd-logind[1423]: Removed session 11. Feb 13 20:40:35.370795 systemd[1]: Started sshd@11-10.0.0.6:22-10.0.0.1:53000.service - OpenSSH per-connection server daemon (10.0.0.1:53000). Feb 13 20:40:35.407054 sshd[2851]: Accepted publickey for core from 10.0.0.1 port 53000 ssh2: RSA SHA256:ijJ+MGEzuaViQlbMobaPQLogVrdZNUxIS0COzLzMHAQ Feb 13 20:40:35.408295 sshd[2851]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 20:40:35.412160 systemd-logind[1423]: New session 12 of user core. Feb 13 20:40:35.421445 systemd[1]: Started session-12.scope - Session 12 of User core. Feb 13 20:40:35.528805 sshd[2851]: pam_unix(sshd:session): session closed for user core Feb 13 20:40:35.531973 systemd[1]: sshd@11-10.0.0.6:22-10.0.0.1:53000.service: Deactivated successfully. Feb 13 20:40:35.533698 systemd[1]: session-12.scope: Deactivated successfully. Feb 13 20:40:35.534296 systemd-logind[1423]: Session 12 logged out. Waiting for processes to exit. Feb 13 20:40:35.535187 systemd-logind[1423]: Removed session 12. Feb 13 20:40:39.670585 kubelet[2429]: E0213 20:40:39.670496 2429 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 20:40:39.671894 kubelet[2429]: E0213 20:40:39.671812 2429 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"install-cni-plugin\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/flannel/flannel-cni-plugin:v1.1.2\\\"\"" pod="kube-flannel/kube-flannel-ds-2bqj5" podUID="40748397-e6cc-4e80-aa93-47714f7f3a4c" Feb 13 20:40:40.538827 systemd[1]: Started sshd@12-10.0.0.6:22-10.0.0.1:53014.service - OpenSSH per-connection server daemon (10.0.0.1:53014). Feb 13 20:40:40.575177 sshd[2868]: Accepted publickey for core from 10.0.0.1 port 53014 ssh2: RSA SHA256:ijJ+MGEzuaViQlbMobaPQLogVrdZNUxIS0COzLzMHAQ Feb 13 20:40:40.576391 sshd[2868]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 20:40:40.580192 systemd-logind[1423]: New session 13 of user core. Feb 13 20:40:40.590457 systemd[1]: Started session-13.scope - Session 13 of User core. Feb 13 20:40:40.695656 sshd[2868]: pam_unix(sshd:session): session closed for user core Feb 13 20:40:40.698874 systemd[1]: sshd@12-10.0.0.6:22-10.0.0.1:53014.service: Deactivated successfully. 
Feb 13 20:40:40.700599 systemd[1]: session-13.scope: Deactivated successfully. Feb 13 20:40:40.701258 systemd-logind[1423]: Session 13 logged out. Waiting for processes to exit. Feb 13 20:40:40.702318 systemd-logind[1423]: Removed session 13. Feb 13 20:40:45.705981 systemd[1]: Started sshd@13-10.0.0.6:22-10.0.0.1:41048.service - OpenSSH per-connection server daemon (10.0.0.1:41048). Feb 13 20:40:45.742024 sshd[2883]: Accepted publickey for core from 10.0.0.1 port 41048 ssh2: RSA SHA256:ijJ+MGEzuaViQlbMobaPQLogVrdZNUxIS0COzLzMHAQ Feb 13 20:40:45.743563 sshd[2883]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 20:40:45.747047 systemd-logind[1423]: New session 14 of user core. Feb 13 20:40:45.759446 systemd[1]: Started session-14.scope - Session 14 of User core. Feb 13 20:40:45.864176 sshd[2883]: pam_unix(sshd:session): session closed for user core Feb 13 20:40:45.868022 systemd[1]: sshd@13-10.0.0.6:22-10.0.0.1:41048.service: Deactivated successfully. Feb 13 20:40:45.870161 systemd[1]: session-14.scope: Deactivated successfully. Feb 13 20:40:45.870867 systemd-logind[1423]: Session 14 logged out. Waiting for processes to exit. Feb 13 20:40:45.871724 systemd-logind[1423]: Removed session 14. Feb 13 20:40:50.670274 kubelet[2429]: E0213 20:40:50.670096 2429 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 20:40:50.671200 kubelet[2429]: E0213 20:40:50.670597 2429 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"install-cni-plugin\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/flannel/flannel-cni-plugin:v1.1.2\\\"\"" pod="kube-flannel/kube-flannel-ds-2bqj5" podUID="40748397-e6cc-4e80-aa93-47714f7f3a4c" Feb 13 20:40:50.874780 systemd[1]: Started sshd@14-10.0.0.6:22-10.0.0.1:41062.service - OpenSSH per-connection server daemon (10.0.0.1:41062). Feb 13 20:40:50.911105 sshd[2900]: Accepted publickey for core from 10.0.0.1 port 41062 ssh2: RSA SHA256:ijJ+MGEzuaViQlbMobaPQLogVrdZNUxIS0COzLzMHAQ Feb 13 20:40:50.912327 sshd[2900]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 20:40:50.916380 systemd-logind[1423]: New session 15 of user core. Feb 13 20:40:50.926484 systemd[1]: Started session-15.scope - Session 15 of User core. Feb 13 20:40:51.029102 sshd[2900]: pam_unix(sshd:session): session closed for user core Feb 13 20:40:51.032338 systemd[1]: sshd@14-10.0.0.6:22-10.0.0.1:41062.service: Deactivated successfully. Feb 13 20:40:51.034095 systemd[1]: session-15.scope: Deactivated successfully. Feb 13 20:40:51.034855 systemd-logind[1423]: Session 15 logged out. Waiting for processes to exit. Feb 13 20:40:51.035594 systemd-logind[1423]: Removed session 15. Feb 13 20:40:56.039875 systemd[1]: Started sshd@15-10.0.0.6:22-10.0.0.1:43846.service - OpenSSH per-connection server daemon (10.0.0.1:43846). Feb 13 20:40:56.075832 sshd[2916]: Accepted publickey for core from 10.0.0.1 port 43846 ssh2: RSA SHA256:ijJ+MGEzuaViQlbMobaPQLogVrdZNUxIS0COzLzMHAQ Feb 13 20:40:56.077006 sshd[2916]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 20:40:56.080335 systemd-logind[1423]: New session 16 of user core. Feb 13 20:40:56.085505 systemd[1]: Started session-16.scope - Session 16 of User core. 
Feb 13 20:40:56.189621 sshd[2916]: pam_unix(sshd:session): session closed for user core Feb 13 20:40:56.192674 systemd[1]: sshd@15-10.0.0.6:22-10.0.0.1:43846.service: Deactivated successfully. Feb 13 20:40:56.194419 systemd[1]: session-16.scope: Deactivated successfully. Feb 13 20:40:56.195031 systemd-logind[1423]: Session 16 logged out. Waiting for processes to exit. Feb 13 20:40:56.195954 systemd-logind[1423]: Removed session 16. Feb 13 20:40:56.670609 kubelet[2429]: E0213 20:40:56.670576 2429 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 20:41:00.670442 kubelet[2429]: E0213 20:41:00.670400 2429 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 20:41:01.201912 systemd[1]: Started sshd@16-10.0.0.6:22-10.0.0.1:43860.service - OpenSSH per-connection server daemon (10.0.0.1:43860). Feb 13 20:41:01.237918 sshd[2932]: Accepted publickey for core from 10.0.0.1 port 43860 ssh2: RSA SHA256:ijJ+MGEzuaViQlbMobaPQLogVrdZNUxIS0COzLzMHAQ Feb 13 20:41:01.239071 sshd[2932]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 20:41:01.242758 systemd-logind[1423]: New session 17 of user core. Feb 13 20:41:01.260466 systemd[1]: Started session-17.scope - Session 17 of User core. Feb 13 20:41:01.368612 sshd[2932]: pam_unix(sshd:session): session closed for user core Feb 13 20:41:01.371595 systemd[1]: sshd@16-10.0.0.6:22-10.0.0.1:43860.service: Deactivated successfully. Feb 13 20:41:01.373403 systemd[1]: session-17.scope: Deactivated successfully. Feb 13 20:41:01.374014 systemd-logind[1423]: Session 17 logged out. Waiting for processes to exit. Feb 13 20:41:01.374873 systemd-logind[1423]: Removed session 17. Feb 13 20:41:05.670344 kubelet[2429]: E0213 20:41:05.670268 2429 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 20:41:05.671240 containerd[1441]: time="2025-02-13T20:41:05.671203947Z" level=info msg="PullImage \"docker.io/flannel/flannel-cni-plugin:v1.1.2\"" Feb 13 20:41:06.378004 systemd[1]: Started sshd@17-10.0.0.6:22-10.0.0.1:53968.service - OpenSSH per-connection server daemon (10.0.0.1:53968). Feb 13 20:41:06.414656 sshd[2947]: Accepted publickey for core from 10.0.0.1 port 53968 ssh2: RSA SHA256:ijJ+MGEzuaViQlbMobaPQLogVrdZNUxIS0COzLzMHAQ Feb 13 20:41:06.415801 sshd[2947]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 20:41:06.419652 systemd-logind[1423]: New session 18 of user core. Feb 13 20:41:06.429451 systemd[1]: Started session-18.scope - Session 18 of User core. Feb 13 20:41:06.533033 sshd[2947]: pam_unix(sshd:session): session closed for user core Feb 13 20:41:06.535622 systemd[1]: sshd@17-10.0.0.6:22-10.0.0.1:53968.service: Deactivated successfully. Feb 13 20:41:06.537374 systemd[1]: session-18.scope: Deactivated successfully. Feb 13 20:41:06.538708 systemd-logind[1423]: Session 18 logged out. Waiting for processes to exit. Feb 13 20:41:06.539724 systemd-logind[1423]: Removed session 18. 
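The sshd@N-10.0.0.6:22-10.0.0.1:PORT.service units that start and stop around each session are systemd's per-connection SSH pattern: a listening sshd.socket with Accept=yes spawns one templated service instance per TCP connection, so every Started/Deactivated pair in this log is one short-lived login. To observe the same on the host:

# The listening socket that accepts connections and spawns instances.
systemctl status sshd.socket
# Any per-connection sshd instances currently alive.
systemctl list-units 'sshd@*'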
Feb 13 20:41:06.787335 containerd[1441]: time="2025-02-13T20:41:06.787257202Z" level=error msg="PullImage \"docker.io/flannel/flannel-cni-plugin:v1.1.2\" failed" error="failed to pull and unpack image \"docker.io/flannel/flannel-cni-plugin:v1.1.2\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/flannel/flannel-cni-plugin/manifests/sha256:bf4b62b131666d040f35a327d906ee5a3418280b68a88d9b9c7e828057210443: 429 Too Many Requests - Server message: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit" Feb 13 20:41:06.787716 containerd[1441]: time="2025-02-13T20:41:06.787341723Z" level=info msg="stop pulling image docker.io/flannel/flannel-cni-plugin:v1.1.2: active requests=0, bytes read=11110" Feb 13 20:41:06.787747 kubelet[2429]: E0213 20:41:06.787504 2429 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = failed to pull and unpack image \"docker.io/flannel/flannel-cni-plugin:v1.1.2\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/flannel/flannel-cni-plugin/manifests/sha256:bf4b62b131666d040f35a327d906ee5a3418280b68a88d9b9c7e828057210443: 429 Too Many Requests - Server message: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit" image="docker.io/flannel/flannel-cni-plugin:v1.1.2" Feb 13 20:41:06.787747 kubelet[2429]: E0213 20:41:06.787557 2429 kuberuntime_image.go:55] "Failed to pull image" err="failed to pull and unpack image \"docker.io/flannel/flannel-cni-plugin:v1.1.2\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/flannel/flannel-cni-plugin/manifests/sha256:bf4b62b131666d040f35a327d906ee5a3418280b68a88d9b9c7e828057210443: 429 Too Many Requests - Server message: toomanyrequests: You have reached your pull rate limit. 
You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit" image="docker.io/flannel/flannel-cni-plugin:v1.1.2" Feb 13 20:41:06.787967 kubelet[2429]: E0213 20:41:06.787653 2429 kuberuntime_manager.go:1272] "Unhandled Error" err="init container &Container{Name:install-cni-plugin,Image:docker.io/flannel/flannel-cni-plugin:v1.1.2,Command:[cp],Args:[-f /flannel /opt/cni/bin/flannel],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:cni-plugin,ReadOnly:false,MountPath:/opt/cni/bin,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-78wb5,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod kube-flannel-ds-2bqj5_kube-flannel(40748397-e6cc-4e80-aa93-47714f7f3a4c): ErrImagePull: failed to pull and unpack image \"docker.io/flannel/flannel-cni-plugin:v1.1.2\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/flannel/flannel-cni-plugin/manifests/sha256:bf4b62b131666d040f35a327d906ee5a3418280b68a88d9b9c7e828057210443: 429 Too Many Requests - Server message: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit" logger="UnhandledError" Feb 13 20:41:06.788882 kubelet[2429]: E0213 20:41:06.788845 2429 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"install-cni-plugin\" with ErrImagePull: \"failed to pull and unpack image \\\"docker.io/flannel/flannel-cni-plugin:v1.1.2\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/flannel/flannel-cni-plugin/manifests/sha256:bf4b62b131666d040f35a327d906ee5a3418280b68a88d9b9c7e828057210443: 429 Too Many Requests - Server message: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit\"" pod="kube-flannel/kube-flannel-ds-2bqj5" podUID="40748397-e6cc-4e80-aa93-47714f7f3a4c" Feb 13 20:41:08.670799 kubelet[2429]: E0213 20:41:08.670760 2429 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 20:41:11.544006 systemd[1]: Started sshd@18-10.0.0.6:22-10.0.0.1:53980.service - OpenSSH per-connection server daemon (10.0.0.1:53980). Feb 13 20:41:11.579973 sshd[2962]: Accepted publickey for core from 10.0.0.1 port 53980 ssh2: RSA SHA256:ijJ+MGEzuaViQlbMobaPQLogVrdZNUxIS0COzLzMHAQ Feb 13 20:41:11.581185 sshd[2962]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 20:41:11.585046 systemd-logind[1423]: New session 19 of user core. Feb 13 20:41:11.596467 systemd[1]: Started session-19.scope - Session 19 of User core. 
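Authenticating is one fix; the other common one is to stop hitting registry-1.docker.io directly. containerd (1.5 and later) can route docker.io pulls through a mirror via per-host configuration. A sketch, assuming the CRI registry config_path is enabled and mirror.example.com stands in for a real mirror or pull-through cache:

mkdir -p /etc/containerd/certs.d/docker.io
cat >/etc/containerd/certs.d/docker.io/hosts.toml <<'EOF'
# Try the mirror first; fall back to Docker Hub for anything it lacks.
server = "https://registry-1.docker.io"

[host."https://mirror.example.com"]
  capabilities = ["pull", "resolve"]
EOF

# This layout only takes effect if containerd's CRI registry config points
# at it, e.g. in /etc/containerd/config.toml:
#   [plugins."io.containerd.grpc.v1.cri".registry]
#     config_path = "/etc/containerd/certs.d"
systemctl restart containerd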
Feb 13 20:41:11.704529 sshd[2962]: pam_unix(sshd:session): session closed for user core Feb 13 20:41:11.707578 systemd[1]: sshd@18-10.0.0.6:22-10.0.0.1:53980.service: Deactivated successfully. Feb 13 20:41:11.710796 systemd[1]: session-19.scope: Deactivated successfully. Feb 13 20:41:11.711456 systemd-logind[1423]: Session 19 logged out. Waiting for processes to exit. Feb 13 20:41:11.712427 systemd-logind[1423]: Removed session 19. Feb 13 20:41:15.670813 kubelet[2429]: E0213 20:41:15.670775 2429 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 20:41:16.714747 systemd[1]: Started sshd@19-10.0.0.6:22-10.0.0.1:40126.service - OpenSSH per-connection server daemon (10.0.0.1:40126). Feb 13 20:41:16.750605 sshd[2981]: Accepted publickey for core from 10.0.0.1 port 40126 ssh2: RSA SHA256:ijJ+MGEzuaViQlbMobaPQLogVrdZNUxIS0COzLzMHAQ Feb 13 20:41:16.751802 sshd[2981]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 20:41:16.755131 systemd-logind[1423]: New session 20 of user core. Feb 13 20:41:16.771552 systemd[1]: Started session-20.scope - Session 20 of User core. Feb 13 20:41:16.875524 sshd[2981]: pam_unix(sshd:session): session closed for user core Feb 13 20:41:16.878608 systemd[1]: sshd@19-10.0.0.6:22-10.0.0.1:40126.service: Deactivated successfully. Feb 13 20:41:16.880130 systemd[1]: session-20.scope: Deactivated successfully. Feb 13 20:41:16.881756 systemd-logind[1423]: Session 20 logged out. Waiting for processes to exit. Feb 13 20:41:16.882676 systemd-logind[1423]: Removed session 20. Feb 13 20:41:18.670971 kubelet[2429]: E0213 20:41:18.670776 2429 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 20:41:18.671490 kubelet[2429]: E0213 20:41:18.671449 2429 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"install-cni-plugin\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/flannel/flannel-cni-plugin:v1.1.2\\\"\"" pod="kube-flannel/kube-flannel-ds-2bqj5" podUID="40748397-e6cc-4e80-aa93-47714f7f3a4c" Feb 13 20:41:21.891037 systemd[1]: Started sshd@20-10.0.0.6:22-10.0.0.1:40138.service - OpenSSH per-connection server daemon (10.0.0.1:40138). Feb 13 20:41:21.929434 sshd[2996]: Accepted publickey for core from 10.0.0.1 port 40138 ssh2: RSA SHA256:ijJ+MGEzuaViQlbMobaPQLogVrdZNUxIS0COzLzMHAQ Feb 13 20:41:21.930618 sshd[2996]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 20:41:21.934313 systemd-logind[1423]: New session 21 of user core. Feb 13 20:41:21.943496 systemd[1]: Started session-21.scope - Session 21 of User core. Feb 13 20:41:22.051006 sshd[2996]: pam_unix(sshd:session): session closed for user core Feb 13 20:41:22.053462 systemd[1]: session-21.scope: Deactivated successfully. Feb 13 20:41:22.054693 systemd[1]: sshd@20-10.0.0.6:22-10.0.0.1:40138.service: Deactivated successfully. Feb 13 20:41:22.057931 systemd-logind[1423]: Session 21 logged out. Waiting for processes to exit. Feb 13 20:41:22.058831 systemd-logind[1423]: Removed session 21. Feb 13 20:41:27.061828 systemd[1]: Started sshd@21-10.0.0.6:22-10.0.0.1:52392.service - OpenSSH per-connection server daemon (10.0.0.1:52392). 
Feb 13 20:41:27.097567 sshd[3012]: Accepted publickey for core from 10.0.0.1 port 52392 ssh2: RSA SHA256:ijJ+MGEzuaViQlbMobaPQLogVrdZNUxIS0COzLzMHAQ Feb 13 20:41:27.098717 sshd[3012]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 20:41:27.102280 systemd-logind[1423]: New session 22 of user core. Feb 13 20:41:27.108521 systemd[1]: Started session-22.scope - Session 22 of User core. Feb 13 20:41:27.211914 sshd[3012]: pam_unix(sshd:session): session closed for user core Feb 13 20:41:27.215139 systemd[1]: sshd@21-10.0.0.6:22-10.0.0.1:52392.service: Deactivated successfully. Feb 13 20:41:27.216720 systemd[1]: session-22.scope: Deactivated successfully. Feb 13 20:41:27.217598 systemd-logind[1423]: Session 22 logged out. Waiting for processes to exit. Feb 13 20:41:27.218693 systemd-logind[1423]: Removed session 22. Feb 13 20:41:32.222930 systemd[1]: Started sshd@22-10.0.0.6:22-10.0.0.1:52406.service - OpenSSH per-connection server daemon (10.0.0.1:52406). Feb 13 20:41:32.259098 sshd[3027]: Accepted publickey for core from 10.0.0.1 port 52406 ssh2: RSA SHA256:ijJ+MGEzuaViQlbMobaPQLogVrdZNUxIS0COzLzMHAQ Feb 13 20:41:32.260286 sshd[3027]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 20:41:32.264042 systemd-logind[1423]: New session 23 of user core. Feb 13 20:41:32.270453 systemd[1]: Started session-23.scope - Session 23 of User core. Feb 13 20:41:32.374505 sshd[3027]: pam_unix(sshd:session): session closed for user core Feb 13 20:41:32.377771 systemd[1]: sshd@22-10.0.0.6:22-10.0.0.1:52406.service: Deactivated successfully. Feb 13 20:41:32.379384 systemd[1]: session-23.scope: Deactivated successfully. Feb 13 20:41:32.381836 systemd-logind[1423]: Session 23 logged out. Waiting for processes to exit. Feb 13 20:41:32.382844 systemd-logind[1423]: Removed session 23. Feb 13 20:41:33.670927 kubelet[2429]: E0213 20:41:33.670817 2429 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 20:41:33.671522 kubelet[2429]: E0213 20:41:33.671491 2429 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"install-cni-plugin\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/flannel/flannel-cni-plugin:v1.1.2\\\"\"" pod="kube-flannel/kube-flannel-ds-2bqj5" podUID="40748397-e6cc-4e80-aa93-47714f7f3a4c" Feb 13 20:41:37.384895 systemd[1]: Started sshd@23-10.0.0.6:22-10.0.0.1:38020.service - OpenSSH per-connection server daemon (10.0.0.1:38020). Feb 13 20:41:37.421129 sshd[3042]: Accepted publickey for core from 10.0.0.1 port 38020 ssh2: RSA SHA256:ijJ+MGEzuaViQlbMobaPQLogVrdZNUxIS0COzLzMHAQ Feb 13 20:41:37.422501 sshd[3042]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 20:41:37.426330 systemd-logind[1423]: New session 24 of user core. Feb 13 20:41:37.436966 systemd[1]: Started session-24.scope - Session 24 of User core. Feb 13 20:41:37.540568 sshd[3042]: pam_unix(sshd:session): session closed for user core Feb 13 20:41:37.543731 systemd[1]: sshd@23-10.0.0.6:22-10.0.0.1:38020.service: Deactivated successfully. Feb 13 20:41:37.547384 systemd[1]: session-24.scope: Deactivated successfully. Feb 13 20:41:37.547986 systemd-logind[1423]: Session 24 logged out. Waiting for processes to exit. Feb 13 20:41:37.548852 systemd-logind[1423]: Removed session 24. 
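The pod_workers ImagePullBackOff entries between pull attempts are kubelet's image-pull backoff: after each ErrImagePull the retry delay roughly doubles (10s, 20s, 40s, ...) up to a five-minute cap, which matches the widening gaps between the PullImage attempts at 20:39:59, 20:40:23, and 20:41:05 in this log. The same history is visible from the API side:

# Show the pull/backoff event stream for the stuck pod.
kubectl --namespace kube-flannel describe pod kube-flannel-ds-2bqj5 | sed -n '/^Events:/,$p'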
Feb 13 20:41:37.703452 kubelet[2429]: E0213 20:41:37.703342 2429 kubelet_node_status.go:447] "Node not becoming ready in time after startup" Feb 13 20:41:37.734128 kubelet[2429]: E0213 20:41:37.734071 2429 kubelet.go:2901] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Feb 13 20:41:42.555005 systemd[1]: Started sshd@24-10.0.0.6:22-10.0.0.1:46654.service - OpenSSH per-connection server daemon (10.0.0.1:46654). Feb 13 20:41:42.592403 sshd[3060]: Accepted publickey for core from 10.0.0.1 port 46654 ssh2: RSA SHA256:ijJ+MGEzuaViQlbMobaPQLogVrdZNUxIS0COzLzMHAQ Feb 13 20:41:42.593539 sshd[3060]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 20:41:42.597256 systemd-logind[1423]: New session 25 of user core. Feb 13 20:41:42.610460 systemd[1]: Started session-25.scope - Session 25 of User core. Feb 13 20:41:42.714914 sshd[3060]: pam_unix(sshd:session): session closed for user core Feb 13 20:41:42.717301 systemd[1]: sshd@24-10.0.0.6:22-10.0.0.1:46654.service: Deactivated successfully. Feb 13 20:41:42.719766 systemd[1]: session-25.scope: Deactivated successfully. Feb 13 20:41:42.721068 systemd-logind[1423]: Session 25 logged out. Waiting for processes to exit. Feb 13 20:41:42.722049 systemd-logind[1423]: Removed session 25. Feb 13 20:41:42.735187 kubelet[2429]: E0213 20:41:42.735130 2429 kubelet.go:2901] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Feb 13 20:41:47.670772 kubelet[2429]: E0213 20:41:47.670730 2429 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 20:41:47.671969 kubelet[2429]: E0213 20:41:47.671545 2429 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"install-cni-plugin\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/flannel/flannel-cni-plugin:v1.1.2\\\"\"" pod="kube-flannel/kube-flannel-ds-2bqj5" podUID="40748397-e6cc-4e80-aa93-47714f7f3a4c" Feb 13 20:41:47.728971 systemd[1]: Started sshd@25-10.0.0.6:22-10.0.0.1:46670.service - OpenSSH per-connection server daemon (10.0.0.1:46670). Feb 13 20:41:47.736626 kubelet[2429]: E0213 20:41:47.736518 2429 kubelet.go:2901] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Feb 13 20:41:47.764998 sshd[3077]: Accepted publickey for core from 10.0.0.1 port 46670 ssh2: RSA SHA256:ijJ+MGEzuaViQlbMobaPQLogVrdZNUxIS0COzLzMHAQ Feb 13 20:41:47.766180 sshd[3077]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 20:41:47.769759 systemd-logind[1423]: New session 26 of user core. Feb 13 20:41:47.791447 systemd[1]: Started session-26.scope - Session 26 of User core. Feb 13 20:41:47.899527 sshd[3077]: pam_unix(sshd:session): session closed for user core Feb 13 20:41:47.902710 systemd[1]: sshd@25-10.0.0.6:22-10.0.0.1:46670.service: Deactivated successfully. Feb 13 20:41:47.904843 systemd[1]: session-26.scope: Deactivated successfully. Feb 13 20:41:47.905733 systemd-logind[1423]: Session 26 logged out. Waiting for processes to exit. Feb 13 20:41:47.907374 systemd-logind[1423]: Removed session 26. 
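"Container runtime network not ready ... cni plugin not initialized" and "Node not becoming ready in time after startup" are downstream of the same pull failure: the install-cni-plugin init container (the Command:[cp] Args:[-f /flannel /opt/cni/bin/flannel] dump above) never runs, so no flannel binary lands in /opt/cni/bin, no network config gets written to /etc/cni/net.d, and kubelet keeps the node NotReady. Two directory listings on the node tell the story:

# No flannel binary here means the init container never completed.
ls -l /opt/cni/bin
# No *.conflist here means the CNI plugin has nothing to initialize from.
ls -l /etc/cni/net.d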
Feb 13 20:41:52.738291 kubelet[2429]: E0213 20:41:52.738237 2429 kubelet.go:2901] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Feb 13 20:41:52.915739 systemd[1]: Started sshd@26-10.0.0.6:22-10.0.0.1:56980.service - OpenSSH per-connection server daemon (10.0.0.1:56980). Feb 13 20:41:52.952133 sshd[3092]: Accepted publickey for core from 10.0.0.1 port 56980 ssh2: RSA SHA256:ijJ+MGEzuaViQlbMobaPQLogVrdZNUxIS0COzLzMHAQ Feb 13 20:41:52.953275 sshd[3092]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 20:41:52.956728 systemd-logind[1423]: New session 27 of user core. Feb 13 20:41:52.961451 systemd[1]: Started session-27.scope - Session 27 of User core. Feb 13 20:41:53.066001 sshd[3092]: pam_unix(sshd:session): session closed for user core Feb 13 20:41:53.069123 systemd[1]: sshd@26-10.0.0.6:22-10.0.0.1:56980.service: Deactivated successfully. Feb 13 20:41:53.070749 systemd[1]: session-27.scope: Deactivated successfully. Feb 13 20:41:53.071349 systemd-logind[1423]: Session 27 logged out. Waiting for processes to exit. Feb 13 20:41:53.072295 systemd-logind[1423]: Removed session 27. Feb 13 20:41:57.739675 kubelet[2429]: E0213 20:41:57.739630 2429 kubelet.go:2901] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Feb 13 20:41:58.075800 systemd[1]: Started sshd@27-10.0.0.6:22-10.0.0.1:56986.service - OpenSSH per-connection server daemon (10.0.0.1:56986). Feb 13 20:41:58.111589 sshd[3107]: Accepted publickey for core from 10.0.0.1 port 56986 ssh2: RSA SHA256:ijJ+MGEzuaViQlbMobaPQLogVrdZNUxIS0COzLzMHAQ Feb 13 20:41:58.112802 sshd[3107]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 20:41:58.116517 systemd-logind[1423]: New session 28 of user core. Feb 13 20:41:58.127440 systemd[1]: Started session-28.scope - Session 28 of User core. Feb 13 20:41:58.233180 sshd[3107]: pam_unix(sshd:session): session closed for user core Feb 13 20:41:58.236471 systemd[1]: sshd@27-10.0.0.6:22-10.0.0.1:56986.service: Deactivated successfully. Feb 13 20:41:58.239696 systemd[1]: session-28.scope: Deactivated successfully. Feb 13 20:41:58.240334 systemd-logind[1423]: Session 28 logged out. Waiting for processes to exit. Feb 13 20:41:58.241258 systemd-logind[1423]: Removed session 28. 
Feb 13 20:42:01.671047 kubelet[2429]: E0213 20:42:01.670778 2429 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 20:42:01.671047 kubelet[2429]: E0213 20:42:01.670885 2429 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 20:42:01.672147 kubelet[2429]: E0213 20:42:01.671551 2429 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"install-cni-plugin\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/flannel/flannel-cni-plugin:v1.1.2\\\"\"" pod="kube-flannel/kube-flannel-ds-2bqj5" podUID="40748397-e6cc-4e80-aa93-47714f7f3a4c" Feb 13 20:42:02.741125 kubelet[2429]: E0213 20:42:02.741028 2429 kubelet.go:2901] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Feb 13 20:42:03.244986 systemd[1]: Started sshd@28-10.0.0.6:22-10.0.0.1:45022.service - OpenSSH per-connection server daemon (10.0.0.1:45022). Feb 13 20:42:03.281054 sshd[3124]: Accepted publickey for core from 10.0.0.1 port 45022 ssh2: RSA SHA256:ijJ+MGEzuaViQlbMobaPQLogVrdZNUxIS0COzLzMHAQ Feb 13 20:42:03.282231 sshd[3124]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 20:42:03.286117 systemd-logind[1423]: New session 29 of user core. Feb 13 20:42:03.294445 systemd[1]: Started session-29.scope - Session 29 of User core. Feb 13 20:42:03.398521 sshd[3124]: pam_unix(sshd:session): session closed for user core Feb 13 20:42:03.402041 systemd[1]: sshd@28-10.0.0.6:22-10.0.0.1:45022.service: Deactivated successfully. Feb 13 20:42:03.403686 systemd[1]: session-29.scope: Deactivated successfully. Feb 13 20:42:03.404377 systemd-logind[1423]: Session 29 logged out. Waiting for processes to exit. Feb 13 20:42:03.405146 systemd-logind[1423]: Removed session 29. Feb 13 20:42:07.741642 kubelet[2429]: E0213 20:42:07.741593 2429 kubelet.go:2901] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Feb 13 20:42:08.408865 systemd[1]: Started sshd@29-10.0.0.6:22-10.0.0.1:45028.service - OpenSSH per-connection server daemon (10.0.0.1:45028). Feb 13 20:42:08.445128 sshd[3139]: Accepted publickey for core from 10.0.0.1 port 45028 ssh2: RSA SHA256:ijJ+MGEzuaViQlbMobaPQLogVrdZNUxIS0COzLzMHAQ Feb 13 20:42:08.446337 sshd[3139]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 20:42:08.450635 systemd-logind[1423]: New session 30 of user core. Feb 13 20:42:08.465432 systemd[1]: Started session-30.scope - Session 30 of User core. Feb 13 20:42:08.569514 sshd[3139]: pam_unix(sshd:session): session closed for user core Feb 13 20:42:08.572832 systemd-logind[1423]: Session 30 logged out. Waiting for processes to exit. Feb 13 20:42:08.573162 systemd[1]: sshd@29-10.0.0.6:22-10.0.0.1:45028.service: Deactivated successfully. Feb 13 20:42:08.574789 systemd[1]: session-30.scope: Deactivated successfully. Feb 13 20:42:08.575575 systemd-logind[1423]: Removed session 30. 
Feb 13 20:42:12.670325 kubelet[2429]: E0213 20:42:12.670270 2429 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 20:42:12.671194 kubelet[2429]: E0213 20:42:12.670937 2429 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"install-cni-plugin\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/flannel/flannel-cni-plugin:v1.1.2\\\"\"" pod="kube-flannel/kube-flannel-ds-2bqj5" podUID="40748397-e6cc-4e80-aa93-47714f7f3a4c" Feb 13 20:42:12.742939 kubelet[2429]: E0213 20:42:12.742901 2429 kubelet.go:2901] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Feb 13 20:42:13.580921 systemd[1]: Started sshd@30-10.0.0.6:22-10.0.0.1:54876.service - OpenSSH per-connection server daemon (10.0.0.1:54876). Feb 13 20:42:13.616750 sshd[3157]: Accepted publickey for core from 10.0.0.1 port 54876 ssh2: RSA SHA256:ijJ+MGEzuaViQlbMobaPQLogVrdZNUxIS0COzLzMHAQ Feb 13 20:42:13.617906 sshd[3157]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 20:42:13.621526 systemd-logind[1423]: New session 31 of user core. Feb 13 20:42:13.635439 systemd[1]: Started session-31.scope - Session 31 of User core. Feb 13 20:42:13.738876 sshd[3157]: pam_unix(sshd:session): session closed for user core Feb 13 20:42:13.742111 systemd[1]: sshd@30-10.0.0.6:22-10.0.0.1:54876.service: Deactivated successfully. Feb 13 20:42:13.743797 systemd[1]: session-31.scope: Deactivated successfully. Feb 13 20:42:13.744347 systemd-logind[1423]: Session 31 logged out. Waiting for processes to exit. Feb 13 20:42:13.745170 systemd-logind[1423]: Removed session 31. Feb 13 20:42:14.670480 kubelet[2429]: E0213 20:42:14.670441 2429 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 20:42:17.743556 kubelet[2429]: E0213 20:42:17.743506 2429 kubelet.go:2901] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Feb 13 20:42:18.749891 systemd[1]: Started sshd@31-10.0.0.6:22-10.0.0.1:54878.service - OpenSSH per-connection server daemon (10.0.0.1:54878). Feb 13 20:42:18.785509 sshd[3175]: Accepted publickey for core from 10.0.0.1 port 54878 ssh2: RSA SHA256:ijJ+MGEzuaViQlbMobaPQLogVrdZNUxIS0COzLzMHAQ Feb 13 20:42:18.786637 sshd[3175]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 20:42:18.789940 systemd-logind[1423]: New session 32 of user core. Feb 13 20:42:18.799441 systemd[1]: Started session-32.scope - Session 32 of User core. Feb 13 20:42:18.903411 sshd[3175]: pam_unix(sshd:session): session closed for user core Feb 13 20:42:18.906534 systemd[1]: sshd@31-10.0.0.6:22-10.0.0.1:54878.service: Deactivated successfully. Feb 13 20:42:18.908224 systemd[1]: session-32.scope: Deactivated successfully. Feb 13 20:42:18.910878 systemd-logind[1423]: Session 32 logged out. Waiting for processes to exit. Feb 13 20:42:18.911802 systemd-logind[1423]: Removed session 32. 
Feb 13 20:42:22.744388 kubelet[2429]: E0213 20:42:22.744344 2429 kubelet.go:2901] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Feb 13 20:42:23.921833 systemd[1]: Started sshd@32-10.0.0.6:22-10.0.0.1:35142.service - OpenSSH per-connection server daemon (10.0.0.1:35142). Feb 13 20:42:23.957295 sshd[3190]: Accepted publickey for core from 10.0.0.1 port 35142 ssh2: RSA SHA256:ijJ+MGEzuaViQlbMobaPQLogVrdZNUxIS0COzLzMHAQ Feb 13 20:42:23.958486 sshd[3190]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 20:42:23.962208 systemd-logind[1423]: New session 33 of user core. Feb 13 20:42:23.973447 systemd[1]: Started session-33.scope - Session 33 of User core. Feb 13 20:42:24.076972 sshd[3190]: pam_unix(sshd:session): session closed for user core Feb 13 20:42:24.080225 systemd[1]: sshd@32-10.0.0.6:22-10.0.0.1:35142.service: Deactivated successfully. Feb 13 20:42:24.082722 systemd[1]: session-33.scope: Deactivated successfully. Feb 13 20:42:24.083402 systemd-logind[1423]: Session 33 logged out. Waiting for processes to exit. Feb 13 20:42:24.084226 systemd-logind[1423]: Removed session 33. Feb 13 20:42:26.670609 kubelet[2429]: E0213 20:42:26.670571 2429 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 20:42:26.671477 kubelet[2429]: E0213 20:42:26.671225 2429 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"install-cni-plugin\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/flannel/flannel-cni-plugin:v1.1.2\\\"\"" pod="kube-flannel/kube-flannel-ds-2bqj5" podUID="40748397-e6cc-4e80-aa93-47714f7f3a4c" Feb 13 20:42:27.745128 kubelet[2429]: E0213 20:42:27.745089 2429 kubelet.go:2901] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Feb 13 20:42:29.087846 systemd[1]: Started sshd@33-10.0.0.6:22-10.0.0.1:35150.service - OpenSSH per-connection server daemon (10.0.0.1:35150). Feb 13 20:42:29.123440 sshd[3205]: Accepted publickey for core from 10.0.0.1 port 35150 ssh2: RSA SHA256:ijJ+MGEzuaViQlbMobaPQLogVrdZNUxIS0COzLzMHAQ Feb 13 20:42:29.124522 sshd[3205]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 20:42:29.127799 systemd-logind[1423]: New session 34 of user core. Feb 13 20:42:29.141429 systemd[1]: Started session-34.scope - Session 34 of User core. Feb 13 20:42:29.246088 sshd[3205]: pam_unix(sshd:session): session closed for user core Feb 13 20:42:29.248682 systemd[1]: session-34.scope: Deactivated successfully. Feb 13 20:42:29.249943 systemd[1]: sshd@33-10.0.0.6:22-10.0.0.1:35150.service: Deactivated successfully. Feb 13 20:42:29.251742 systemd-logind[1423]: Session 34 logged out. Waiting for processes to exit. Feb 13 20:42:29.252559 systemd-logind[1423]: Removed session 34. 
Feb 13 20:42:32.747168 kubelet[2429]: E0213 20:42:32.747108 2429 kubelet.go:2901] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Feb 13 20:42:33.670820 kubelet[2429]: E0213 20:42:33.670789 2429 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 20:42:34.256798 systemd[1]: Started sshd@34-10.0.0.6:22-10.0.0.1:38720.service - OpenSSH per-connection server daemon (10.0.0.1:38720). Feb 13 20:42:34.292832 sshd[3221]: Accepted publickey for core from 10.0.0.1 port 38720 ssh2: RSA SHA256:ijJ+MGEzuaViQlbMobaPQLogVrdZNUxIS0COzLzMHAQ Feb 13 20:42:34.293974 sshd[3221]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 20:42:34.297683 systemd-logind[1423]: New session 35 of user core. Feb 13 20:42:34.311451 systemd[1]: Started session-35.scope - Session 35 of User core. Feb 13 20:42:34.414544 sshd[3221]: pam_unix(sshd:session): session closed for user core Feb 13 20:42:34.417761 systemd[1]: sshd@34-10.0.0.6:22-10.0.0.1:38720.service: Deactivated successfully. Feb 13 20:42:34.420396 systemd[1]: session-35.scope: Deactivated successfully. Feb 13 20:42:34.421088 systemd-logind[1423]: Session 35 logged out. Waiting for processes to exit. Feb 13 20:42:34.421882 systemd-logind[1423]: Removed session 35. Feb 13 20:42:37.671502 kubelet[2429]: E0213 20:42:37.671461 2429 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 20:42:37.672965 containerd[1441]: time="2025-02-13T20:42:37.672910764Z" level=info msg="PullImage \"docker.io/flannel/flannel-cni-plugin:v1.1.2\"" Feb 13 20:42:37.748441 kubelet[2429]: E0213 20:42:37.748412 2429 kubelet.go:2901] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Feb 13 20:42:38.781813 containerd[1441]: time="2025-02-13T20:42:38.781732887Z" level=error msg="PullImage \"docker.io/flannel/flannel-cni-plugin:v1.1.2\" failed" error="failed to pull and unpack image \"docker.io/flannel/flannel-cni-plugin:v1.1.2\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/flannel/flannel-cni-plugin/manifests/sha256:bf4b62b131666d040f35a327d906ee5a3418280b68a88d9b9c7e828057210443: 429 Too Many Requests - Server message: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit" Feb 13 20:42:38.782143 containerd[1441]: time="2025-02-13T20:42:38.781811567Z" level=info msg="stop pulling image docker.io/flannel/flannel-cni-plugin:v1.1.2: active requests=0, bytes read=11110" Feb 13 20:42:38.782185 kubelet[2429]: E0213 20:42:38.781935 2429 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = failed to pull and unpack image \"docker.io/flannel/flannel-cni-plugin:v1.1.2\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/flannel/flannel-cni-plugin/manifests/sha256:bf4b62b131666d040f35a327d906ee5a3418280b68a88d9b9c7e828057210443: 429 Too Many Requests - Server message: toomanyrequests: You have reached your pull rate limit. 
You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit" image="docker.io/flannel/flannel-cni-plugin:v1.1.2" Feb 13 20:42:38.782185 kubelet[2429]: E0213 20:42:38.781979 2429 kuberuntime_image.go:55] "Failed to pull image" err="failed to pull and unpack image \"docker.io/flannel/flannel-cni-plugin:v1.1.2\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/flannel/flannel-cni-plugin/manifests/sha256:bf4b62b131666d040f35a327d906ee5a3418280b68a88d9b9c7e828057210443: 429 Too Many Requests - Server message: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit" image="docker.io/flannel/flannel-cni-plugin:v1.1.2" Feb 13 20:42:38.782436 kubelet[2429]: E0213 20:42:38.782097 2429 kuberuntime_manager.go:1272] "Unhandled Error" err="init container &Container{Name:install-cni-plugin,Image:docker.io/flannel/flannel-cni-plugin:v1.1.2,Command:[cp],Args:[-f /flannel /opt/cni/bin/flannel],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:cni-plugin,ReadOnly:false,MountPath:/opt/cni/bin,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-78wb5,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod kube-flannel-ds-2bqj5_kube-flannel(40748397-e6cc-4e80-aa93-47714f7f3a4c): ErrImagePull: failed to pull and unpack image \"docker.io/flannel/flannel-cni-plugin:v1.1.2\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/flannel/flannel-cni-plugin/manifests/sha256:bf4b62b131666d040f35a327d906ee5a3418280b68a88d9b9c7e828057210443: 429 Too Many Requests - Server message: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit" logger="UnhandledError" Feb 13 20:42:38.783402 kubelet[2429]: E0213 20:42:38.783361 2429 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"install-cni-plugin\" with ErrImagePull: \"failed to pull and unpack image \\\"docker.io/flannel/flannel-cni-plugin:v1.1.2\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/flannel/flannel-cni-plugin/manifests/sha256:bf4b62b131666d040f35a327d906ee5a3418280b68a88d9b9c7e828057210443: 429 Too Many Requests - Server message: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit\"" pod="kube-flannel/kube-flannel-ds-2bqj5" podUID="40748397-e6cc-4e80-aa93-47714f7f3a4c" Feb 13 20:42:39.425853 systemd[1]: Started sshd@35-10.0.0.6:22-10.0.0.1:38726.service - OpenSSH per-connection server daemon (10.0.0.1:38726). 
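As a one-off unblock, the image can also be pulled by hand with credentials directly through containerd; since the container spec above uses ImagePullPolicy:IfNotPresent, kubelet will use the local copy once it exists instead of going back to the registry. '<user>:<token>' is a placeholder for real Docker Hub credentials:

# Pull into containerd's k8s.io namespace (where the CRI stores images),
# authenticating to avoid the anonymous rate limit.
ctr -n k8s.io images pull --user '<user>:<token>' docker.io/flannel/flannel-cni-plugin:v1.1.2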
Feb 13 20:42:39.461695 sshd[3239]: Accepted publickey for core from 10.0.0.1 port 38726 ssh2: RSA SHA256:ijJ+MGEzuaViQlbMobaPQLogVrdZNUxIS0COzLzMHAQ Feb 13 20:42:39.462831 sshd[3239]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 20:42:39.466045 systemd-logind[1423]: New session 36 of user core. Feb 13 20:42:39.473442 systemd[1]: Started session-36.scope - Session 36 of User core. Feb 13 20:42:39.575144 sshd[3239]: pam_unix(sshd:session): session closed for user core Feb 13 20:42:39.578280 systemd[1]: sshd@35-10.0.0.6:22-10.0.0.1:38726.service: Deactivated successfully. Feb 13 20:42:39.579845 systemd[1]: session-36.scope: Deactivated successfully. Feb 13 20:42:39.580494 systemd-logind[1423]: Session 36 logged out. Waiting for processes to exit. Feb 13 20:42:39.581291 systemd-logind[1423]: Removed session 36. Feb 13 20:42:42.750174 kubelet[2429]: E0213 20:42:42.750120 2429 kubelet.go:2901] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Feb 13 20:42:43.670514 kubelet[2429]: E0213 20:42:43.670405 2429 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 20:42:44.585719 systemd[1]: Started sshd@36-10.0.0.6:22-10.0.0.1:33840.service - OpenSSH per-connection server daemon (10.0.0.1:33840). Feb 13 20:42:44.621745 sshd[3254]: Accepted publickey for core from 10.0.0.1 port 33840 ssh2: RSA SHA256:ijJ+MGEzuaViQlbMobaPQLogVrdZNUxIS0COzLzMHAQ Feb 13 20:42:44.622947 sshd[3254]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 20:42:44.626359 systemd-logind[1423]: New session 37 of user core. Feb 13 20:42:44.640436 systemd[1]: Started session-37.scope - Session 37 of User core. Feb 13 20:42:44.742722 sshd[3254]: pam_unix(sshd:session): session closed for user core Feb 13 20:42:44.745727 systemd[1]: sshd@36-10.0.0.6:22-10.0.0.1:33840.service: Deactivated successfully. Feb 13 20:42:44.748467 systemd[1]: session-37.scope: Deactivated successfully. Feb 13 20:42:44.749144 systemd-logind[1423]: Session 37 logged out. Waiting for processes to exit. Feb 13 20:42:44.750021 systemd-logind[1423]: Removed session 37. Feb 13 20:42:47.750820 kubelet[2429]: E0213 20:42:47.750779 2429 kubelet.go:2901] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Feb 13 20:42:49.756836 systemd[1]: Started sshd@37-10.0.0.6:22-10.0.0.1:33852.service - OpenSSH per-connection server daemon (10.0.0.1:33852). Feb 13 20:42:49.792508 sshd[3271]: Accepted publickey for core from 10.0.0.1 port 33852 ssh2: RSA SHA256:ijJ+MGEzuaViQlbMobaPQLogVrdZNUxIS0COzLzMHAQ Feb 13 20:42:49.793629 sshd[3271]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 20:42:49.797774 systemd-logind[1423]: New session 38 of user core. Feb 13 20:42:49.810451 systemd[1]: Started session-38.scope - Session 38 of User core. Feb 13 20:42:49.918498 sshd[3271]: pam_unix(sshd:session): session closed for user core Feb 13 20:42:49.921032 systemd[1]: sshd@37-10.0.0.6:22-10.0.0.1:33852.service: Deactivated successfully. Feb 13 20:42:49.922639 systemd[1]: session-38.scope: Deactivated successfully. Feb 13 20:42:49.923935 systemd-logind[1423]: Session 38 logged out. Waiting for processes to exit. 
Feb 13 20:42:49.924923 systemd-logind[1423]: Removed session 38. Feb 13 20:42:51.670593 kubelet[2429]: E0213 20:42:51.670515 2429 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 20:42:51.671072 kubelet[2429]: E0213 20:42:51.671016 2429 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"install-cni-plugin\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/flannel/flannel-cni-plugin:v1.1.2\\\"\"" pod="kube-flannel/kube-flannel-ds-2bqj5" podUID="40748397-e6cc-4e80-aa93-47714f7f3a4c" Feb 13 20:42:52.752395 kubelet[2429]: E0213 20:42:52.752332 2429 kubelet.go:2901] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Feb 13 20:42:54.929168 systemd[1]: Started sshd@38-10.0.0.6:22-10.0.0.1:54008.service - OpenSSH per-connection server daemon (10.0.0.1:54008). Feb 13 20:42:54.964778 sshd[3286]: Accepted publickey for core from 10.0.0.1 port 54008 ssh2: RSA SHA256:ijJ+MGEzuaViQlbMobaPQLogVrdZNUxIS0COzLzMHAQ Feb 13 20:42:54.965925 sshd[3286]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 20:42:54.969282 systemd-logind[1423]: New session 39 of user core. Feb 13 20:42:54.979457 systemd[1]: Started session-39.scope - Session 39 of User core. Feb 13 20:42:55.081209 sshd[3286]: pam_unix(sshd:session): session closed for user core Feb 13 20:42:55.084350 systemd[1]: sshd@38-10.0.0.6:22-10.0.0.1:54008.service: Deactivated successfully. Feb 13 20:42:55.086666 systemd[1]: session-39.scope: Deactivated successfully. Feb 13 20:42:55.087212 systemd-logind[1423]: Session 39 logged out. Waiting for processes to exit. Feb 13 20:42:55.087972 systemd-logind[1423]: Removed session 39. Feb 13 20:42:57.753706 kubelet[2429]: E0213 20:42:57.753645 2429 kubelet.go:2901] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Feb 13 20:43:00.093018 systemd[1]: Started sshd@39-10.0.0.6:22-10.0.0.1:54018.service - OpenSSH per-connection server daemon (10.0.0.1:54018). Feb 13 20:43:00.128677 sshd[3301]: Accepted publickey for core from 10.0.0.1 port 54018 ssh2: RSA SHA256:ijJ+MGEzuaViQlbMobaPQLogVrdZNUxIS0COzLzMHAQ Feb 13 20:43:00.129801 sshd[3301]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 20:43:00.133361 systemd-logind[1423]: New session 40 of user core. Feb 13 20:43:00.141515 systemd[1]: Started session-40.scope - Session 40 of User core. Feb 13 20:43:00.245121 sshd[3301]: pam_unix(sshd:session): session closed for user core Feb 13 20:43:00.248154 systemd[1]: sshd@39-10.0.0.6:22-10.0.0.1:54018.service: Deactivated successfully. Feb 13 20:43:00.249951 systemd[1]: session-40.scope: Deactivated successfully. Feb 13 20:43:00.250548 systemd-logind[1423]: Session 40 logged out. Waiting for processes to exit. Feb 13 20:43:00.251302 systemd-logind[1423]: Removed session 40. 
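The recurring dns.go:153 warning is the kubelet noticing that the host's resolv.conf lists more nameservers than the resolver limit of three; it keeps the first three (here 1.1.1.1 1.0.0.1 8.8.8.8) and drops the rest. A small sketch of the same check (the path and the limit of three are standard glibc/kubelet behaviour, assumed here):

```python
# Minimal sketch of the check behind kubelet's "Nameserver limits exceeded"
# warning: at most three nameserver entries are honoured; extras are dropped.
# The resolv.conf path is an assumption for a typical Linux host.
MAX_NAMESERVERS = 3

def check_resolv_conf(path="/etc/resolv.conf"):
    nameservers = []
    with open(path) as f:
        for line in f:
            fields = line.split()
            if len(fields) >= 2 and fields[0] == "nameserver":
                nameservers.append(fields[1])
    if len(nameservers) > MAX_NAMESERVERS:
        kept = nameservers[:MAX_NAMESERVERS]
        print(f"warning: {len(nameservers)} nameservers configured, "
              f"only the first {MAX_NAMESERVERS} are applied: {' '.join(kept)}")
    return nameservers

if __name__ == "__main__":
    check_resolv_conf()
```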
Feb 13 20:43:02.755256 kubelet[2429]: E0213 20:43:02.755219 2429 kubelet.go:2901] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Feb 13 20:43:05.255817 systemd[1]: Started sshd@40-10.0.0.6:22-10.0.0.1:47792.service - OpenSSH per-connection server daemon (10.0.0.1:47792). Feb 13 20:43:05.291840 sshd[3316]: Accepted publickey for core from 10.0.0.1 port 47792 ssh2: RSA SHA256:ijJ+MGEzuaViQlbMobaPQLogVrdZNUxIS0COzLzMHAQ Feb 13 20:43:05.292981 sshd[3316]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 20:43:05.296923 systemd-logind[1423]: New session 41 of user core. Feb 13 20:43:05.302455 systemd[1]: Started session-41.scope - Session 41 of User core. Feb 13 20:43:05.407738 sshd[3316]: pam_unix(sshd:session): session closed for user core Feb 13 20:43:05.419813 systemd[1]: sshd@40-10.0.0.6:22-10.0.0.1:47792.service: Deactivated successfully. Feb 13 20:43:05.422492 systemd[1]: session-41.scope: Deactivated successfully. Feb 13 20:43:05.424045 systemd-logind[1423]: Session 41 logged out. Waiting for processes to exit. Feb 13 20:43:05.432707 systemd[1]: Started sshd@41-10.0.0.6:22-10.0.0.1:47802.service - OpenSSH per-connection server daemon (10.0.0.1:47802). Feb 13 20:43:05.433641 systemd-logind[1423]: Removed session 41. Feb 13 20:43:05.465302 sshd[3332]: Accepted publickey for core from 10.0.0.1 port 47802 ssh2: RSA SHA256:ijJ+MGEzuaViQlbMobaPQLogVrdZNUxIS0COzLzMHAQ Feb 13 20:43:05.466517 sshd[3332]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 20:43:05.470437 systemd-logind[1423]: New session 42 of user core. Feb 13 20:43:05.476437 systemd[1]: Started session-42.scope - Session 42 of User core. Feb 13 20:43:05.608974 sshd[3332]: pam_unix(sshd:session): session closed for user core Feb 13 20:43:05.620143 systemd[1]: sshd@41-10.0.0.6:22-10.0.0.1:47802.service: Deactivated successfully. Feb 13 20:43:05.624921 systemd[1]: session-42.scope: Deactivated successfully. Feb 13 20:43:05.626409 systemd-logind[1423]: Session 42 logged out. Waiting for processes to exit. Feb 13 20:43:05.632565 systemd[1]: Started sshd@42-10.0.0.6:22-10.0.0.1:47812.service - OpenSSH per-connection server daemon (10.0.0.1:47812). Feb 13 20:43:05.635658 systemd-logind[1423]: Removed session 42. Feb 13 20:43:05.665192 sshd[3344]: Accepted publickey for core from 10.0.0.1 port 47812 ssh2: RSA SHA256:ijJ+MGEzuaViQlbMobaPQLogVrdZNUxIS0COzLzMHAQ Feb 13 20:43:05.666465 sshd[3344]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 20:43:05.670647 kubelet[2429]: E0213 20:43:05.670348 2429 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 20:43:05.671236 systemd-logind[1423]: New session 43 of user core. Feb 13 20:43:05.671984 kubelet[2429]: E0213 20:43:05.671509 2429 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"install-cni-plugin\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/flannel/flannel-cni-plugin:v1.1.2\\\"\"" pod="kube-flannel/kube-flannel-ds-2bqj5" podUID="40748397-e6cc-4e80-aa93-47714f7f3a4c" Feb 13 20:43:05.678509 systemd[1]: Started session-43.scope - Session 43 of User core. 
Feb 13 20:43:05.784848 sshd[3344]: pam_unix(sshd:session): session closed for user core Feb 13 20:43:05.787957 systemd[1]: sshd@42-10.0.0.6:22-10.0.0.1:47812.service: Deactivated successfully. Feb 13 20:43:05.789741 systemd[1]: session-43.scope: Deactivated successfully. Feb 13 20:43:05.790536 systemd-logind[1423]: Session 43 logged out. Waiting for processes to exit. Feb 13 20:43:05.791446 systemd-logind[1423]: Removed session 43. Feb 13 20:43:07.756123 kubelet[2429]: E0213 20:43:07.756076 2429 kubelet.go:2901] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Feb 13 20:43:10.798805 systemd[1]: Started sshd@43-10.0.0.6:22-10.0.0.1:47816.service - OpenSSH per-connection server daemon (10.0.0.1:47816). Feb 13 20:43:10.835535 sshd[3358]: Accepted publickey for core from 10.0.0.1 port 47816 ssh2: RSA SHA256:ijJ+MGEzuaViQlbMobaPQLogVrdZNUxIS0COzLzMHAQ Feb 13 20:43:10.836643 sshd[3358]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 20:43:10.839911 systemd-logind[1423]: New session 44 of user core. Feb 13 20:43:10.851437 systemd[1]: Started session-44.scope - Session 44 of User core. Feb 13 20:43:10.957412 sshd[3358]: pam_unix(sshd:session): session closed for user core Feb 13 20:43:10.960632 systemd[1]: sshd@43-10.0.0.6:22-10.0.0.1:47816.service: Deactivated successfully. Feb 13 20:43:10.962323 systemd[1]: session-44.scope: Deactivated successfully. Feb 13 20:43:10.963535 systemd-logind[1423]: Session 44 logged out. Waiting for processes to exit. Feb 13 20:43:10.964315 systemd-logind[1423]: Removed session 44. Feb 13 20:43:12.757272 kubelet[2429]: E0213 20:43:12.757215 2429 kubelet.go:2901] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Feb 13 20:43:15.967790 systemd[1]: Started sshd@44-10.0.0.6:22-10.0.0.1:34788.service - OpenSSH per-connection server daemon (10.0.0.1:34788). Feb 13 20:43:16.003542 sshd[3374]: Accepted publickey for core from 10.0.0.1 port 34788 ssh2: RSA SHA256:ijJ+MGEzuaViQlbMobaPQLogVrdZNUxIS0COzLzMHAQ Feb 13 20:43:16.004687 sshd[3374]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 20:43:16.007912 systemd-logind[1423]: New session 45 of user core. Feb 13 20:43:16.016436 systemd[1]: Started session-45.scope - Session 45 of User core. Feb 13 20:43:16.118218 sshd[3374]: pam_unix(sshd:session): session closed for user core Feb 13 20:43:16.121825 systemd[1]: sshd@44-10.0.0.6:22-10.0.0.1:34788.service: Deactivated successfully. Feb 13 20:43:16.123427 systemd[1]: session-45.scope: Deactivated successfully. Feb 13 20:43:16.123959 systemd-logind[1423]: Session 45 logged out. Waiting for processes to exit. Feb 13 20:43:16.124690 systemd-logind[1423]: Removed session 45. 
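Concretely, "cni plugin not initialized" means the standard CNI directories are still empty on disk: the init container's whole job is `cp -f /flannel /opt/cni/bin/flannel` (see the container spec dumped in the Unhandled Error above), and until the image pull succeeds nothing lands in /opt/cni/bin or /etc/cni/net.d. A sketch that inspects those locations (the paths are the containerd/kubelet defaults, an assumption for this host):

```python
# Minimal sketch of what "cni plugin not initialized" looks like on disk:
# the default CNI config and binary directories stay empty until flannel's
# init container has run. Paths are containerd/kubelet defaults (assumed).
import os

CNI_CONF_DIR = "/etc/cni/net.d"
CNI_BIN_DIR = "/opt/cni/bin"

def cni_ready():
    confs = ([f for f in os.listdir(CNI_CONF_DIR)
              if f.endswith((".conf", ".conflist", ".json"))]
             if os.path.isdir(CNI_CONF_DIR) else [])
    plugins = os.listdir(CNI_BIN_DIR) if os.path.isdir(CNI_BIN_DIR) else []
    print(f"{CNI_CONF_DIR}: {confs or 'no network config'}")
    print(f"{CNI_BIN_DIR}: {plugins or 'no plugin binaries'}")
    return bool(confs) and "flannel" in plugins

if __name__ == "__main__":
    print("CNI ready:", cni_ready())
```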
Feb 13 20:43:17.670928 kubelet[2429]: E0213 20:43:17.670898 2429 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 20:43:17.671722 kubelet[2429]: E0213 20:43:17.671694 2429 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"install-cni-plugin\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/flannel/flannel-cni-plugin:v1.1.2\\\"\"" pod="kube-flannel/kube-flannel-ds-2bqj5" podUID="40748397-e6cc-4e80-aa93-47714f7f3a4c" Feb 13 20:43:17.758031 kubelet[2429]: E0213 20:43:17.757995 2429 kubelet.go:2901] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Feb 13 20:43:21.128821 systemd[1]: Started sshd@45-10.0.0.6:22-10.0.0.1:34804.service - OpenSSH per-connection server daemon (10.0.0.1:34804). Feb 13 20:43:21.164885 sshd[3388]: Accepted publickey for core from 10.0.0.1 port 34804 ssh2: RSA SHA256:ijJ+MGEzuaViQlbMobaPQLogVrdZNUxIS0COzLzMHAQ Feb 13 20:43:21.166037 sshd[3388]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 20:43:21.169528 systemd-logind[1423]: New session 46 of user core. Feb 13 20:43:21.180444 systemd[1]: Started session-46.scope - Session 46 of User core. Feb 13 20:43:21.282551 sshd[3388]: pam_unix(sshd:session): session closed for user core Feb 13 20:43:21.285821 systemd[1]: sshd@45-10.0.0.6:22-10.0.0.1:34804.service: Deactivated successfully. Feb 13 20:43:21.287371 systemd[1]: session-46.scope: Deactivated successfully. Feb 13 20:43:21.287976 systemd-logind[1423]: Session 46 logged out. Waiting for processes to exit. Feb 13 20:43:21.288893 systemd-logind[1423]: Removed session 46. Feb 13 20:43:22.758977 kubelet[2429]: E0213 20:43:22.758937 2429 kubelet.go:2901] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Feb 13 20:43:23.670790 kubelet[2429]: E0213 20:43:23.670745 2429 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 20:43:26.292868 systemd[1]: Started sshd@46-10.0.0.6:22-10.0.0.1:36182.service - OpenSSH per-connection server daemon (10.0.0.1:36182). Feb 13 20:43:26.328603 sshd[3402]: Accepted publickey for core from 10.0.0.1 port 36182 ssh2: RSA SHA256:ijJ+MGEzuaViQlbMobaPQLogVrdZNUxIS0COzLzMHAQ Feb 13 20:43:26.329734 sshd[3402]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 20:43:26.333609 systemd-logind[1423]: New session 47 of user core. Feb 13 20:43:26.345555 systemd[1]: Started session-47.scope - Session 47 of User core. Feb 13 20:43:26.449703 sshd[3402]: pam_unix(sshd:session): session closed for user core Feb 13 20:43:26.452857 systemd[1]: sshd@46-10.0.0.6:22-10.0.0.1:36182.service: Deactivated successfully. Feb 13 20:43:26.454442 systemd[1]: session-47.scope: Deactivated successfully. Feb 13 20:43:26.455839 systemd-logind[1423]: Session 47 logged out. Waiting for processes to exit. Feb 13 20:43:26.456816 systemd-logind[1423]: Removed session 47. 
Feb 13 20:43:27.760446 kubelet[2429]: E0213 20:43:27.760365 2429 kubelet.go:2901] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Feb 13 20:43:28.670600 kubelet[2429]: E0213 20:43:28.670541 2429 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 20:43:28.671251 kubelet[2429]: E0213 20:43:28.671208 2429 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"install-cni-plugin\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/flannel/flannel-cni-plugin:v1.1.2\\\"\"" pod="kube-flannel/kube-flannel-ds-2bqj5" podUID="40748397-e6cc-4e80-aa93-47714f7f3a4c" Feb 13 20:43:31.463890 systemd[1]: Started sshd@47-10.0.0.6:22-10.0.0.1:36186.service - OpenSSH per-connection server daemon (10.0.0.1:36186). Feb 13 20:43:31.500038 sshd[3416]: Accepted publickey for core from 10.0.0.1 port 36186 ssh2: RSA SHA256:ijJ+MGEzuaViQlbMobaPQLogVrdZNUxIS0COzLzMHAQ Feb 13 20:43:31.501193 sshd[3416]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 20:43:31.504837 systemd-logind[1423]: New session 48 of user core. Feb 13 20:43:31.514515 systemd[1]: Started session-48.scope - Session 48 of User core. Feb 13 20:43:31.619283 sshd[3416]: pam_unix(sshd:session): session closed for user core Feb 13 20:43:31.622620 systemd[1]: sshd@47-10.0.0.6:22-10.0.0.1:36186.service: Deactivated successfully. Feb 13 20:43:31.624503 systemd[1]: session-48.scope: Deactivated successfully. Feb 13 20:43:31.625166 systemd-logind[1423]: Session 48 logged out. Waiting for processes to exit. Feb 13 20:43:31.625903 systemd-logind[1423]: Removed session 48. Feb 13 20:43:32.761606 kubelet[2429]: E0213 20:43:32.761529 2429 kubelet.go:2901] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Feb 13 20:43:36.629827 systemd[1]: Started sshd@48-10.0.0.6:22-10.0.0.1:50398.service - OpenSSH per-connection server daemon (10.0.0.1:50398). Feb 13 20:43:36.665544 sshd[3430]: Accepted publickey for core from 10.0.0.1 port 50398 ssh2: RSA SHA256:ijJ+MGEzuaViQlbMobaPQLogVrdZNUxIS0COzLzMHAQ Feb 13 20:43:36.666686 sshd[3430]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 20:43:36.670372 systemd-logind[1423]: New session 49 of user core. Feb 13 20:43:36.685489 systemd[1]: Started session-49.scope - Session 49 of User core. Feb 13 20:43:36.788550 sshd[3430]: pam_unix(sshd:session): session closed for user core Feb 13 20:43:36.791899 systemd[1]: sshd@48-10.0.0.6:22-10.0.0.1:50398.service: Deactivated successfully. Feb 13 20:43:36.793470 systemd[1]: session-49.scope: Deactivated successfully. Feb 13 20:43:36.793996 systemd-logind[1423]: Session 49 logged out. Waiting for processes to exit. Feb 13 20:43:36.794730 systemd-logind[1423]: Removed session 49. 
Feb 13 20:43:37.762861 kubelet[2429]: E0213 20:43:37.762824 2429 kubelet.go:2901] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Feb 13 20:43:39.670337 kubelet[2429]: E0213 20:43:39.670254 2429 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 20:43:39.670952 kubelet[2429]: E0213 20:43:39.670910 2429 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"install-cni-plugin\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/flannel/flannel-cni-plugin:v1.1.2\\\"\"" pod="kube-flannel/kube-flannel-ds-2bqj5" podUID="40748397-e6cc-4e80-aa93-47714f7f3a4c" Feb 13 20:43:40.670790 kubelet[2429]: E0213 20:43:40.670752 2429 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 20:43:41.670183 kubelet[2429]: E0213 20:43:41.670055 2429 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 20:43:41.803800 systemd[1]: Started sshd@49-10.0.0.6:22-10.0.0.1:50400.service - OpenSSH per-connection server daemon (10.0.0.1:50400). Feb 13 20:43:41.839755 sshd[3446]: Accepted publickey for core from 10.0.0.1 port 50400 ssh2: RSA SHA256:ijJ+MGEzuaViQlbMobaPQLogVrdZNUxIS0COzLzMHAQ Feb 13 20:43:41.840945 sshd[3446]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 20:43:41.844847 systemd-logind[1423]: New session 50 of user core. Feb 13 20:43:41.854439 systemd[1]: Started session-50.scope - Session 50 of User core. Feb 13 20:43:41.962761 sshd[3446]: pam_unix(sshd:session): session closed for user core Feb 13 20:43:41.966169 systemd[1]: sshd@49-10.0.0.6:22-10.0.0.1:50400.service: Deactivated successfully. Feb 13 20:43:41.968965 systemd[1]: session-50.scope: Deactivated successfully. Feb 13 20:43:41.969635 systemd-logind[1423]: Session 50 logged out. Waiting for processes to exit. Feb 13 20:43:41.970514 systemd-logind[1423]: Removed session 50. Feb 13 20:43:42.763743 kubelet[2429]: E0213 20:43:42.763686 2429 kubelet.go:2901] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Feb 13 20:43:46.974822 systemd[1]: Started sshd@50-10.0.0.6:22-10.0.0.1:51834.service - OpenSSH per-connection server daemon (10.0.0.1:51834). Feb 13 20:43:47.010394 sshd[3464]: Accepted publickey for core from 10.0.0.1 port 51834 ssh2: RSA SHA256:ijJ+MGEzuaViQlbMobaPQLogVrdZNUxIS0COzLzMHAQ Feb 13 20:43:47.011517 sshd[3464]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 20:43:47.015218 systemd-logind[1423]: New session 51 of user core. Feb 13 20:43:47.026459 systemd[1]: Started session-51.scope - Session 51 of User core. Feb 13 20:43:47.131016 sshd[3464]: pam_unix(sshd:session): session closed for user core Feb 13 20:43:47.134213 systemd[1]: sshd@50-10.0.0.6:22-10.0.0.1:51834.service: Deactivated successfully. Feb 13 20:43:47.136433 systemd[1]: session-51.scope: Deactivated successfully. Feb 13 20:43:47.137356 systemd-logind[1423]: Session 51 logged out. Waiting for processes to exit. 
Feb 13 20:43:47.138332 systemd-logind[1423]: Removed session 51. Feb 13 20:43:47.764440 kubelet[2429]: E0213 20:43:47.764394 2429 kubelet.go:2901] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Feb 13 20:43:50.670145 kubelet[2429]: E0213 20:43:50.670113 2429 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 20:43:50.670904 kubelet[2429]: E0213 20:43:50.670795 2429 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"install-cni-plugin\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/flannel/flannel-cni-plugin:v1.1.2\\\"\"" pod="kube-flannel/kube-flannel-ds-2bqj5" podUID="40748397-e6cc-4e80-aa93-47714f7f3a4c" Feb 13 20:43:52.142043 systemd[1]: Started sshd@51-10.0.0.6:22-10.0.0.1:51848.service - OpenSSH per-connection server daemon (10.0.0.1:51848). Feb 13 20:43:52.178060 sshd[3479]: Accepted publickey for core from 10.0.0.1 port 51848 ssh2: RSA SHA256:ijJ+MGEzuaViQlbMobaPQLogVrdZNUxIS0COzLzMHAQ Feb 13 20:43:52.179161 sshd[3479]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 20:43:52.182751 systemd-logind[1423]: New session 52 of user core. Feb 13 20:43:52.190490 systemd[1]: Started session-52.scope - Session 52 of User core. Feb 13 20:43:52.294507 sshd[3479]: pam_unix(sshd:session): session closed for user core Feb 13 20:43:52.297909 systemd[1]: sshd@51-10.0.0.6:22-10.0.0.1:51848.service: Deactivated successfully. Feb 13 20:43:52.299578 systemd[1]: session-52.scope: Deactivated successfully. Feb 13 20:43:52.300233 systemd-logind[1423]: Session 52 logged out. Waiting for processes to exit. Feb 13 20:43:52.301447 systemd-logind[1423]: Removed session 52. Feb 13 20:43:52.765401 kubelet[2429]: E0213 20:43:52.765362 2429 kubelet.go:2901] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Feb 13 20:43:57.304900 systemd[1]: Started sshd@52-10.0.0.6:22-10.0.0.1:53510.service - OpenSSH per-connection server daemon (10.0.0.1:53510). Feb 13 20:43:57.340524 sshd[3494]: Accepted publickey for core from 10.0.0.1 port 53510 ssh2: RSA SHA256:ijJ+MGEzuaViQlbMobaPQLogVrdZNUxIS0COzLzMHAQ Feb 13 20:43:57.341649 sshd[3494]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 20:43:57.345375 systemd-logind[1423]: New session 53 of user core. Feb 13 20:43:57.356430 systemd[1]: Started session-53.scope - Session 53 of User core. Feb 13 20:43:57.458641 sshd[3494]: pam_unix(sshd:session): session closed for user core Feb 13 20:43:57.461775 systemd[1]: sshd@52-10.0.0.6:22-10.0.0.1:53510.service: Deactivated successfully. Feb 13 20:43:57.463373 systemd[1]: session-53.scope: Deactivated successfully. Feb 13 20:43:57.463884 systemd-logind[1423]: Session 53 logged out. Waiting for processes to exit. Feb 13 20:43:57.464607 systemd-logind[1423]: Removed session 53. 
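Between full pull attempts, the pod worker only logs ImagePullBackOff: the kubelet retries failed pulls with an exponential back-off, doubling from roughly ten seconds up to a five-minute cap (those constants match the documented kubelet defaults but are treated here as an assumption). A sketch of that schedule:

```python
# Minimal sketch of the exponential back-off pattern behind the repeated
# ImagePullBackOff entries: the delay doubles from an initial period up to
# a cap. 10s doubling to 300s reflects the documented kubelet defaults
# (assumed, not read from this node's configuration).
def backoff_schedule(initial=10.0, factor=2.0, cap=300.0, attempts=8):
    delay = initial
    for attempt in range(1, attempts + 1):
        yield attempt, min(delay, cap)
        delay *= factor

for attempt, delay in backoff_schedule():
    print(f"attempt {attempt}: wait {delay:.0f}s before retrying the pull")
```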
Feb 13 20:43:57.766557 kubelet[2429]: E0213 20:43:57.766446 2429 kubelet.go:2901] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Feb 13 20:44:02.469735 systemd[1]: Started sshd@53-10.0.0.6:22-10.0.0.1:42076.service - OpenSSH per-connection server daemon (10.0.0.1:42076). Feb 13 20:44:02.505690 sshd[3510]: Accepted publickey for core from 10.0.0.1 port 42076 ssh2: RSA SHA256:ijJ+MGEzuaViQlbMobaPQLogVrdZNUxIS0COzLzMHAQ Feb 13 20:44:02.506894 sshd[3510]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 20:44:02.510630 systemd-logind[1423]: New session 54 of user core. Feb 13 20:44:02.521442 systemd[1]: Started session-54.scope - Session 54 of User core. Feb 13 20:44:02.623524 sshd[3510]: pam_unix(sshd:session): session closed for user core Feb 13 20:44:02.626776 systemd[1]: sshd@53-10.0.0.6:22-10.0.0.1:42076.service: Deactivated successfully. Feb 13 20:44:02.628433 systemd[1]: session-54.scope: Deactivated successfully. Feb 13 20:44:02.630028 systemd-logind[1423]: Session 54 logged out. Waiting for processes to exit. Feb 13 20:44:02.631275 systemd-logind[1423]: Removed session 54. Feb 13 20:44:02.767410 kubelet[2429]: E0213 20:44:02.767288 2429 kubelet.go:2901] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Feb 13 20:44:04.670191 kubelet[2429]: E0213 20:44:04.670150 2429 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 20:44:04.670916 kubelet[2429]: E0213 20:44:04.670700 2429 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"install-cni-plugin\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/flannel/flannel-cni-plugin:v1.1.2\\\"\"" pod="kube-flannel/kube-flannel-ds-2bqj5" podUID="40748397-e6cc-4e80-aa93-47714f7f3a4c" Feb 13 20:44:05.670239 kubelet[2429]: E0213 20:44:05.670201 2429 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 20:44:07.634214 systemd[1]: Started sshd@54-10.0.0.6:22-10.0.0.1:42082.service - OpenSSH per-connection server daemon (10.0.0.1:42082). Feb 13 20:44:07.670387 sshd[3525]: Accepted publickey for core from 10.0.0.1 port 42082 ssh2: RSA SHA256:ijJ+MGEzuaViQlbMobaPQLogVrdZNUxIS0COzLzMHAQ Feb 13 20:44:07.672714 sshd[3525]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 20:44:07.677437 systemd-logind[1423]: New session 55 of user core. Feb 13 20:44:07.692504 systemd[1]: Started session-55.scope - Session 55 of User core. Feb 13 20:44:07.768184 kubelet[2429]: E0213 20:44:07.768141 2429 kubelet.go:2901] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Feb 13 20:44:07.795094 sshd[3525]: pam_unix(sshd:session): session closed for user core Feb 13 20:44:07.798233 systemd[1]: sshd@54-10.0.0.6:22-10.0.0.1:42082.service: Deactivated successfully. Feb 13 20:44:07.800525 systemd[1]: session-55.scope: Deactivated successfully. Feb 13 20:44:07.802504 systemd-logind[1423]: Session 55 logged out. Waiting for processes to exit. 
Feb 13 20:44:07.803423 systemd-logind[1423]: Removed session 55. Feb 13 20:44:12.769106 kubelet[2429]: E0213 20:44:12.769057 2429 kubelet.go:2901] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Feb 13 20:44:12.805788 systemd[1]: Started sshd@55-10.0.0.6:22-10.0.0.1:36288.service - OpenSSH per-connection server daemon (10.0.0.1:36288). Feb 13 20:44:12.841880 sshd[3539]: Accepted publickey for core from 10.0.0.1 port 36288 ssh2: RSA SHA256:ijJ+MGEzuaViQlbMobaPQLogVrdZNUxIS0COzLzMHAQ Feb 13 20:44:12.842977 sshd[3539]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 20:44:12.846815 systemd-logind[1423]: New session 56 of user core. Feb 13 20:44:12.858437 systemd[1]: Started session-56.scope - Session 56 of User core. Feb 13 20:44:12.960251 sshd[3539]: pam_unix(sshd:session): session closed for user core Feb 13 20:44:12.963612 systemd[1]: sshd@55-10.0.0.6:22-10.0.0.1:36288.service: Deactivated successfully. Feb 13 20:44:12.966110 systemd[1]: session-56.scope: Deactivated successfully. Feb 13 20:44:12.967032 systemd-logind[1423]: Session 56 logged out. Waiting for processes to exit. Feb 13 20:44:12.967881 systemd-logind[1423]: Removed session 56. Feb 13 20:44:16.670786 kubelet[2429]: E0213 20:44:16.670740 2429 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 20:44:16.671590 kubelet[2429]: E0213 20:44:16.671566 2429 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"install-cni-plugin\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/flannel/flannel-cni-plugin:v1.1.2\\\"\"" pod="kube-flannel/kube-flannel-ds-2bqj5" podUID="40748397-e6cc-4e80-aa93-47714f7f3a4c" Feb 13 20:44:17.770456 kubelet[2429]: E0213 20:44:17.770425 2429 kubelet.go:2901] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Feb 13 20:44:17.970739 systemd[1]: Started sshd@56-10.0.0.6:22-10.0.0.1:36302.service - OpenSSH per-connection server daemon (10.0.0.1:36302). Feb 13 20:44:18.007268 sshd[3556]: Accepted publickey for core from 10.0.0.1 port 36302 ssh2: RSA SHA256:ijJ+MGEzuaViQlbMobaPQLogVrdZNUxIS0COzLzMHAQ Feb 13 20:44:18.008485 sshd[3556]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 20:44:18.012084 systemd-logind[1423]: New session 57 of user core. Feb 13 20:44:18.018430 systemd[1]: Started session-57.scope - Session 57 of User core. Feb 13 20:44:18.119716 sshd[3556]: pam_unix(sshd:session): session closed for user core Feb 13 20:44:18.122728 systemd[1]: sshd@56-10.0.0.6:22-10.0.0.1:36302.service: Deactivated successfully. Feb 13 20:44:18.124454 systemd[1]: session-57.scope: Deactivated successfully. Feb 13 20:44:18.124996 systemd-logind[1423]: Session 57 logged out. Waiting for processes to exit. Feb 13 20:44:18.125638 systemd-logind[1423]: Removed session 57. 
Feb 13 20:44:22.772063 kubelet[2429]: E0213 20:44:22.772027 2429 kubelet.go:2901] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Feb 13 20:44:23.130762 systemd[1]: Started sshd@57-10.0.0.6:22-10.0.0.1:39898.service - OpenSSH per-connection server daemon (10.0.0.1:39898). Feb 13 20:44:23.166529 sshd[3571]: Accepted publickey for core from 10.0.0.1 port 39898 ssh2: RSA SHA256:ijJ+MGEzuaViQlbMobaPQLogVrdZNUxIS0COzLzMHAQ Feb 13 20:44:23.167692 sshd[3571]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 20:44:23.171482 systemd-logind[1423]: New session 58 of user core. Feb 13 20:44:23.177428 systemd[1]: Started session-58.scope - Session 58 of User core. Feb 13 20:44:23.278161 sshd[3571]: pam_unix(sshd:session): session closed for user core Feb 13 20:44:23.281454 systemd[1]: sshd@57-10.0.0.6:22-10.0.0.1:39898.service: Deactivated successfully. Feb 13 20:44:23.283097 systemd[1]: session-58.scope: Deactivated successfully. Feb 13 20:44:23.283658 systemd-logind[1423]: Session 58 logged out. Waiting for processes to exit. Feb 13 20:44:23.284509 systemd-logind[1423]: Removed session 58. Feb 13 20:44:27.670863 kubelet[2429]: E0213 20:44:27.670820 2429 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 20:44:27.671988 kubelet[2429]: E0213 20:44:27.671601 2429 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"install-cni-plugin\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/flannel/flannel-cni-plugin:v1.1.2\\\"\"" pod="kube-flannel/kube-flannel-ds-2bqj5" podUID="40748397-e6cc-4e80-aa93-47714f7f3a4c" Feb 13 20:44:27.773696 kubelet[2429]: E0213 20:44:27.773618 2429 kubelet.go:2901] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Feb 13 20:44:28.289880 systemd[1]: Started sshd@58-10.0.0.6:22-10.0.0.1:39900.service - OpenSSH per-connection server daemon (10.0.0.1:39900). Feb 13 20:44:28.325412 sshd[3586]: Accepted publickey for core from 10.0.0.1 port 39900 ssh2: RSA SHA256:ijJ+MGEzuaViQlbMobaPQLogVrdZNUxIS0COzLzMHAQ Feb 13 20:44:28.326603 sshd[3586]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 20:44:28.330041 systemd-logind[1423]: New session 59 of user core. Feb 13 20:44:28.334500 systemd[1]: Started session-59.scope - Session 59 of User core. Feb 13 20:44:28.435849 sshd[3586]: pam_unix(sshd:session): session closed for user core Feb 13 20:44:28.439035 systemd[1]: sshd@58-10.0.0.6:22-10.0.0.1:39900.service: Deactivated successfully. Feb 13 20:44:28.440700 systemd[1]: session-59.scope: Deactivated successfully. Feb 13 20:44:28.441819 systemd-logind[1423]: Session 59 logged out. Waiting for processes to exit. Feb 13 20:44:28.442557 systemd-logind[1423]: Removed session 59. Feb 13 20:44:32.774495 kubelet[2429]: E0213 20:44:32.774460 2429 kubelet.go:2901] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Feb 13 20:44:33.446785 systemd[1]: Started sshd@59-10.0.0.6:22-10.0.0.1:34402.service - OpenSSH per-connection server daemon (10.0.0.1:34402). 
Feb 13 20:44:33.483497 sshd[3600]: Accepted publickey for core from 10.0.0.1 port 34402 ssh2: RSA SHA256:ijJ+MGEzuaViQlbMobaPQLogVrdZNUxIS0COzLzMHAQ Feb 13 20:44:33.484717 sshd[3600]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 20:44:33.487959 systemd-logind[1423]: New session 60 of user core. Feb 13 20:44:33.496435 systemd[1]: Started session-60.scope - Session 60 of User core. Feb 13 20:44:33.598182 sshd[3600]: pam_unix(sshd:session): session closed for user core Feb 13 20:44:33.601243 systemd[1]: sshd@59-10.0.0.6:22-10.0.0.1:34402.service: Deactivated successfully. Feb 13 20:44:33.603005 systemd[1]: session-60.scope: Deactivated successfully. Feb 13 20:44:33.603613 systemd-logind[1423]: Session 60 logged out. Waiting for processes to exit. Feb 13 20:44:33.604453 systemd-logind[1423]: Removed session 60. Feb 13 20:44:37.775703 kubelet[2429]: E0213 20:44:37.775654 2429 kubelet.go:2901] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Feb 13 20:44:38.611780 systemd[1]: Started sshd@60-10.0.0.6:22-10.0.0.1:34416.service - OpenSSH per-connection server daemon (10.0.0.1:34416). Feb 13 20:44:38.647627 sshd[3618]: Accepted publickey for core from 10.0.0.1 port 34416 ssh2: RSA SHA256:ijJ+MGEzuaViQlbMobaPQLogVrdZNUxIS0COzLzMHAQ Feb 13 20:44:38.648902 sshd[3618]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 20:44:38.652928 systemd-logind[1423]: New session 61 of user core. Feb 13 20:44:38.663430 systemd[1]: Started session-61.scope - Session 61 of User core. Feb 13 20:44:38.767249 sshd[3618]: pam_unix(sshd:session): session closed for user core Feb 13 20:44:38.770454 systemd[1]: sshd@60-10.0.0.6:22-10.0.0.1:34416.service: Deactivated successfully. Feb 13 20:44:38.772140 systemd[1]: session-61.scope: Deactivated successfully. Feb 13 20:44:38.772830 systemd-logind[1423]: Session 61 logged out. Waiting for processes to exit. Feb 13 20:44:38.773676 systemd-logind[1423]: Removed session 61. Feb 13 20:44:39.670480 kubelet[2429]: E0213 20:44:39.670392 2429 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 20:44:42.670143 kubelet[2429]: E0213 20:44:42.670101 2429 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 20:44:42.670789 kubelet[2429]: E0213 20:44:42.670746 2429 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"install-cni-plugin\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/flannel/flannel-cni-plugin:v1.1.2\\\"\"" pod="kube-flannel/kube-flannel-ds-2bqj5" podUID="40748397-e6cc-4e80-aa93-47714f7f3a4c" Feb 13 20:44:42.776806 kubelet[2429]: E0213 20:44:42.776769 2429 kubelet.go:2901] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Feb 13 20:44:43.777819 systemd[1]: Started sshd@61-10.0.0.6:22-10.0.0.1:43564.service - OpenSSH per-connection server daemon (10.0.0.1:43564). 
Feb 13 20:44:43.814128 sshd[3632]: Accepted publickey for core from 10.0.0.1 port 43564 ssh2: RSA SHA256:ijJ+MGEzuaViQlbMobaPQLogVrdZNUxIS0COzLzMHAQ Feb 13 20:44:43.815278 sshd[3632]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 20:44:43.819344 systemd-logind[1423]: New session 62 of user core. Feb 13 20:44:43.829446 systemd[1]: Started session-62.scope - Session 62 of User core. Feb 13 20:44:43.933784 sshd[3632]: pam_unix(sshd:session): session closed for user core Feb 13 20:44:43.937037 systemd[1]: sshd@61-10.0.0.6:22-10.0.0.1:43564.service: Deactivated successfully. Feb 13 20:44:43.939414 systemd[1]: session-62.scope: Deactivated successfully. Feb 13 20:44:43.940121 systemd-logind[1423]: Session 62 logged out. Waiting for processes to exit. Feb 13 20:44:43.941153 systemd-logind[1423]: Removed session 62. Feb 13 20:44:47.777990 kubelet[2429]: E0213 20:44:47.777935 2429 kubelet.go:2901] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Feb 13 20:44:48.944848 systemd[1]: Started sshd@62-10.0.0.6:22-10.0.0.1:43566.service - OpenSSH per-connection server daemon (10.0.0.1:43566). Feb 13 20:44:48.980311 sshd[3649]: Accepted publickey for core from 10.0.0.1 port 43566 ssh2: RSA SHA256:ijJ+MGEzuaViQlbMobaPQLogVrdZNUxIS0COzLzMHAQ Feb 13 20:44:48.981444 sshd[3649]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 20:44:48.985291 systemd-logind[1423]: New session 63 of user core. Feb 13 20:44:48.995509 systemd[1]: Started session-63.scope - Session 63 of User core. Feb 13 20:44:49.096748 sshd[3649]: pam_unix(sshd:session): session closed for user core Feb 13 20:44:49.099841 systemd[1]: sshd@62-10.0.0.6:22-10.0.0.1:43566.service: Deactivated successfully. Feb 13 20:44:49.101482 systemd[1]: session-63.scope: Deactivated successfully. Feb 13 20:44:49.102777 systemd-logind[1423]: Session 63 logged out. Waiting for processes to exit. Feb 13 20:44:49.103598 systemd-logind[1423]: Removed session 63. Feb 13 20:44:51.670887 kubelet[2429]: E0213 20:44:51.670795 2429 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 20:44:52.779427 kubelet[2429]: E0213 20:44:52.779350 2429 kubelet.go:2901] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Feb 13 20:44:54.108776 systemd[1]: Started sshd@63-10.0.0.6:22-10.0.0.1:55782.service - OpenSSH per-connection server daemon (10.0.0.1:55782). Feb 13 20:44:54.144821 sshd[3666]: Accepted publickey for core from 10.0.0.1 port 55782 ssh2: RSA SHA256:ijJ+MGEzuaViQlbMobaPQLogVrdZNUxIS0COzLzMHAQ Feb 13 20:44:54.145981 sshd[3666]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 20:44:54.150685 systemd-logind[1423]: New session 64 of user core. Feb 13 20:44:54.161456 systemd[1]: Started session-64.scope - Session 64 of User core. Feb 13 20:44:54.266042 sshd[3666]: pam_unix(sshd:session): session closed for user core Feb 13 20:44:54.269287 systemd[1]: sshd@63-10.0.0.6:22-10.0.0.1:55782.service: Deactivated successfully. Feb 13 20:44:54.271126 systemd[1]: session-64.scope: Deactivated successfully. Feb 13 20:44:54.272740 systemd-logind[1423]: Session 64 logged out. Waiting for processes to exit. 
Feb 13 20:44:54.273676 systemd-logind[1423]: Removed session 64. Feb 13 20:44:57.670166 kubelet[2429]: E0213 20:44:57.670107 2429 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 20:44:57.671503 kubelet[2429]: E0213 20:44:57.671401 2429 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"install-cni-plugin\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/flannel/flannel-cni-plugin:v1.1.2\\\"\"" pod="kube-flannel/kube-flannel-ds-2bqj5" podUID="40748397-e6cc-4e80-aa93-47714f7f3a4c" Feb 13 20:44:57.780006 kubelet[2429]: E0213 20:44:57.779972 2429 kubelet.go:2901] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Feb 13 20:44:59.276900 systemd[1]: Started sshd@64-10.0.0.6:22-10.0.0.1:55788.service - OpenSSH per-connection server daemon (10.0.0.1:55788). Feb 13 20:44:59.313070 sshd[3681]: Accepted publickey for core from 10.0.0.1 port 55788 ssh2: RSA SHA256:ijJ+MGEzuaViQlbMobaPQLogVrdZNUxIS0COzLzMHAQ Feb 13 20:44:59.315047 sshd[3681]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 20:44:59.318353 systemd-logind[1423]: New session 65 of user core. Feb 13 20:44:59.329445 systemd[1]: Started session-65.scope - Session 65 of User core. Feb 13 20:44:59.432191 sshd[3681]: pam_unix(sshd:session): session closed for user core Feb 13 20:44:59.435286 systemd[1]: sshd@64-10.0.0.6:22-10.0.0.1:55788.service: Deactivated successfully. Feb 13 20:44:59.436999 systemd[1]: session-65.scope: Deactivated successfully. Feb 13 20:44:59.437565 systemd-logind[1423]: Session 65 logged out. Waiting for processes to exit. Feb 13 20:44:59.438352 systemd-logind[1423]: Removed session 65. Feb 13 20:45:02.781139 kubelet[2429]: E0213 20:45:02.781058 2429 kubelet.go:2901] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Feb 13 20:45:04.443894 systemd[1]: Started sshd@65-10.0.0.6:22-10.0.0.1:39502.service - OpenSSH per-connection server daemon (10.0.0.1:39502). Feb 13 20:45:04.480468 sshd[3695]: Accepted publickey for core from 10.0.0.1 port 39502 ssh2: RSA SHA256:ijJ+MGEzuaViQlbMobaPQLogVrdZNUxIS0COzLzMHAQ Feb 13 20:45:04.481589 sshd[3695]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 20:45:04.485147 systemd-logind[1423]: New session 66 of user core. Feb 13 20:45:04.493435 systemd[1]: Started session-66.scope - Session 66 of User core. Feb 13 20:45:04.595421 sshd[3695]: pam_unix(sshd:session): session closed for user core Feb 13 20:45:04.597880 systemd[1]: sshd@65-10.0.0.6:22-10.0.0.1:39502.service: Deactivated successfully. Feb 13 20:45:04.599422 systemd[1]: session-66.scope: Deactivated successfully. Feb 13 20:45:04.600669 systemd-logind[1423]: Session 66 logged out. Waiting for processes to exit. Feb 13 20:45:04.602054 systemd-logind[1423]: Removed session 66. 
Feb 13 20:45:07.781803 kubelet[2429]: E0213 20:45:07.781740 2429 kubelet.go:2901] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Feb 13 20:45:09.609764 systemd[1]: Started sshd@66-10.0.0.6:22-10.0.0.1:39504.service - OpenSSH per-connection server daemon (10.0.0.1:39504). Feb 13 20:45:09.645313 sshd[3709]: Accepted publickey for core from 10.0.0.1 port 39504 ssh2: RSA SHA256:ijJ+MGEzuaViQlbMobaPQLogVrdZNUxIS0COzLzMHAQ Feb 13 20:45:09.646441 sshd[3709]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 20:45:09.649979 systemd-logind[1423]: New session 67 of user core. Feb 13 20:45:09.657469 systemd[1]: Started session-67.scope - Session 67 of User core. Feb 13 20:45:09.670470 kubelet[2429]: E0213 20:45:09.670445 2429 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 20:45:09.760271 sshd[3709]: pam_unix(sshd:session): session closed for user core Feb 13 20:45:09.763402 systemd[1]: sshd@66-10.0.0.6:22-10.0.0.1:39504.service: Deactivated successfully. Feb 13 20:45:09.765039 systemd[1]: session-67.scope: Deactivated successfully. Feb 13 20:45:09.767542 systemd-logind[1423]: Session 67 logged out. Waiting for processes to exit. Feb 13 20:45:09.768432 systemd-logind[1423]: Removed session 67. Feb 13 20:45:11.671253 kubelet[2429]: E0213 20:45:11.671067 2429 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 20:45:11.671775 kubelet[2429]: E0213 20:45:11.671731 2429 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"install-cni-plugin\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/flannel/flannel-cni-plugin:v1.1.2\\\"\"" pod="kube-flannel/kube-flannel-ds-2bqj5" podUID="40748397-e6cc-4e80-aa93-47714f7f3a4c" Feb 13 20:45:12.782891 kubelet[2429]: E0213 20:45:12.782839 2429 kubelet.go:2901] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Feb 13 20:45:14.771093 systemd[1]: Started sshd@67-10.0.0.6:22-10.0.0.1:60034.service - OpenSSH per-connection server daemon (10.0.0.1:60034). Feb 13 20:45:14.806910 sshd[3724]: Accepted publickey for core from 10.0.0.1 port 60034 ssh2: RSA SHA256:ijJ+MGEzuaViQlbMobaPQLogVrdZNUxIS0COzLzMHAQ Feb 13 20:45:14.808032 sshd[3724]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 20:45:14.811432 systemd-logind[1423]: New session 68 of user core. Feb 13 20:45:14.817423 systemd[1]: Started session-68.scope - Session 68 of User core. Feb 13 20:45:14.920849 sshd[3724]: pam_unix(sshd:session): session closed for user core Feb 13 20:45:14.924125 systemd[1]: sshd@67-10.0.0.6:22-10.0.0.1:60034.service: Deactivated successfully. Feb 13 20:45:14.926676 systemd[1]: session-68.scope: Deactivated successfully. Feb 13 20:45:14.927319 systemd-logind[1423]: Session 68 logged out. Waiting for processes to exit. Feb 13 20:45:14.928083 systemd-logind[1423]: Removed session 68. 
Feb 13 20:45:17.783653 kubelet[2429]: E0213 20:45:17.783613 2429 kubelet.go:2901] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Feb 13 20:45:19.934822 systemd[1]: Started sshd@68-10.0.0.6:22-10.0.0.1:60038.service - OpenSSH per-connection server daemon (10.0.0.1:60038). Feb 13 20:45:19.970380 sshd[3740]: Accepted publickey for core from 10.0.0.1 port 60038 ssh2: RSA SHA256:ijJ+MGEzuaViQlbMobaPQLogVrdZNUxIS0COzLzMHAQ Feb 13 20:45:19.971543 sshd[3740]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 20:45:19.974840 systemd-logind[1423]: New session 69 of user core. Feb 13 20:45:19.985498 systemd[1]: Started session-69.scope - Session 69 of User core. Feb 13 20:45:20.087643 sshd[3740]: pam_unix(sshd:session): session closed for user core Feb 13 20:45:20.090811 systemd[1]: sshd@68-10.0.0.6:22-10.0.0.1:60038.service: Deactivated successfully. Feb 13 20:45:20.092777 systemd[1]: session-69.scope: Deactivated successfully. Feb 13 20:45:20.093339 systemd-logind[1423]: Session 69 logged out. Waiting for processes to exit. Feb 13 20:45:20.094388 systemd-logind[1423]: Removed session 69. Feb 13 20:45:22.670190 kubelet[2429]: E0213 20:45:22.670137 2429 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 20:45:22.784592 kubelet[2429]: E0213 20:45:22.784540 2429 kubelet.go:2901] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Feb 13 20:45:25.097701 systemd[1]: Started sshd@69-10.0.0.6:22-10.0.0.1:55174.service - OpenSSH per-connection server daemon (10.0.0.1:55174). Feb 13 20:45:25.133740 sshd[3755]: Accepted publickey for core from 10.0.0.1 port 55174 ssh2: RSA SHA256:ijJ+MGEzuaViQlbMobaPQLogVrdZNUxIS0COzLzMHAQ Feb 13 20:45:25.134852 sshd[3755]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 20:45:25.138402 systemd-logind[1423]: New session 70 of user core. Feb 13 20:45:25.144433 systemd[1]: Started session-70.scope - Session 70 of User core. Feb 13 20:45:25.245733 sshd[3755]: pam_unix(sshd:session): session closed for user core Feb 13 20:45:25.248883 systemd[1]: sshd@69-10.0.0.6:22-10.0.0.1:55174.service: Deactivated successfully. Feb 13 20:45:25.251534 systemd[1]: session-70.scope: Deactivated successfully. Feb 13 20:45:25.252460 systemd-logind[1423]: Session 70 logged out. Waiting for processes to exit. Feb 13 20:45:25.253507 systemd-logind[1423]: Removed session 70. 
Feb 13 20:45:26.670478 kubelet[2429]: E0213 20:45:26.670428 2429 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 20:45:26.671556 containerd[1441]: time="2025-02-13T20:45:26.671500296Z" level=info msg="PullImage \"docker.io/flannel/flannel-cni-plugin:v1.1.2\"" Feb 13 20:45:27.783794 containerd[1441]: time="2025-02-13T20:45:27.783694425Z" level=error msg="PullImage \"docker.io/flannel/flannel-cni-plugin:v1.1.2\" failed" error="failed to pull and unpack image \"docker.io/flannel/flannel-cni-plugin:v1.1.2\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/flannel/flannel-cni-plugin/manifests/sha256:bf4b62b131666d040f35a327d906ee5a3418280b68a88d9b9c7e828057210443: 429 Too Many Requests - Server message: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit" Feb 13 20:45:27.784178 containerd[1441]: time="2025-02-13T20:45:27.783787306Z" level=info msg="stop pulling image docker.io/flannel/flannel-cni-plugin:v1.1.2: active requests=0, bytes read=11110" Feb 13 20:45:27.784216 kubelet[2429]: E0213 20:45:27.783898 2429 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = failed to pull and unpack image \"docker.io/flannel/flannel-cni-plugin:v1.1.2\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/flannel/flannel-cni-plugin/manifests/sha256:bf4b62b131666d040f35a327d906ee5a3418280b68a88d9b9c7e828057210443: 429 Too Many Requests - Server message: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit" image="docker.io/flannel/flannel-cni-plugin:v1.1.2" Feb 13 20:45:27.784216 kubelet[2429]: E0213 20:45:27.783937 2429 kuberuntime_image.go:55] "Failed to pull image" err="failed to pull and unpack image \"docker.io/flannel/flannel-cni-plugin:v1.1.2\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/flannel/flannel-cni-plugin/manifests/sha256:bf4b62b131666d040f35a327d906ee5a3418280b68a88d9b9c7e828057210443: 429 Too Many Requests - Server message: toomanyrequests: You have reached your pull rate limit. 
You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit" image="docker.io/flannel/flannel-cni-plugin:v1.1.2" Feb 13 20:45:27.784461 kubelet[2429]: E0213 20:45:27.784014 2429 kuberuntime_manager.go:1272] "Unhandled Error" err="init container &Container{Name:install-cni-plugin,Image:docker.io/flannel/flannel-cni-plugin:v1.1.2,Command:[cp],Args:[-f /flannel /opt/cni/bin/flannel],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:cni-plugin,ReadOnly:false,MountPath:/opt/cni/bin,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-78wb5,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod kube-flannel-ds-2bqj5_kube-flannel(40748397-e6cc-4e80-aa93-47714f7f3a4c): ErrImagePull: failed to pull and unpack image \"docker.io/flannel/flannel-cni-plugin:v1.1.2\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/flannel/flannel-cni-plugin/manifests/sha256:bf4b62b131666d040f35a327d906ee5a3418280b68a88d9b9c7e828057210443: 429 Too Many Requests - Server message: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit" logger="UnhandledError" Feb 13 20:45:27.785113 kubelet[2429]: E0213 20:45:27.785082 2429 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"install-cni-plugin\" with ErrImagePull: \"failed to pull and unpack image \\\"docker.io/flannel/flannel-cni-plugin:v1.1.2\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/flannel/flannel-cni-plugin/manifests/sha256:bf4b62b131666d040f35a327d906ee5a3418280b68a88d9b9c7e828057210443: 429 Too Many Requests - Server message: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit\"" pod="kube-flannel/kube-flannel-ds-2bqj5" podUID="40748397-e6cc-4e80-aa93-47714f7f3a4c" Feb 13 20:45:27.785349 kubelet[2429]: E0213 20:45:27.785328 2429 kubelet.go:2901] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Feb 13 20:45:30.256772 systemd[1]: Started sshd@70-10.0.0.6:22-10.0.0.1:55178.service - OpenSSH per-connection server daemon (10.0.0.1:55178). Feb 13 20:45:30.292367 sshd[3770]: Accepted publickey for core from 10.0.0.1 port 55178 ssh2: RSA SHA256:ijJ+MGEzuaViQlbMobaPQLogVrdZNUxIS0COzLzMHAQ Feb 13 20:45:30.293435 sshd[3770]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 20:45:30.296534 systemd-logind[1423]: New session 71 of user core. Feb 13 20:45:30.308440 systemd[1]: Started session-71.scope - Session 71 of User core. 
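The identical 429 nearly three minutes later shows the back-off alone will not clear this: the limit applies per anonymous source, and Hub grants authenticated users a larger quota, which is the remedy the server message itself points at. A hedged sketch of the .dockerconfigjson payload that `kubectl create secret docker-registry` generates for that purpose (credentials are placeholders; the resulting kubernetes.io/dockerconfigjson Secret would be referenced from the flannel DaemonSet's pod spec via imagePullSecrets):

```python
# Minimal sketch: build the .dockerconfigjson payload used by registry
# pull secrets, so pulls run authenticated instead of anonymous.
# <user> / <token> are placeholders, not values from this log.
import base64
import json

def dockerconfigjson(registry, username, password):
    auth = base64.b64encode(f"{username}:{password}".encode()).decode()
    return json.dumps({
        "auths": {
            registry: {
                "username": username,
                "password": password,
                "auth": auth,
            }
        }
    }, indent=2)

print(dockerconfigjson("https://index.docker.io/v1/", "<user>", "<token>"))
```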
Feb 13 20:45:30.412165 sshd[3770]: pam_unix(sshd:session): session closed for user core Feb 13 20:45:30.415120 systemd[1]: sshd@70-10.0.0.6:22-10.0.0.1:55178.service: Deactivated successfully. Feb 13 20:45:30.416951 systemd[1]: session-71.scope: Deactivated successfully. Feb 13 20:45:30.417514 systemd-logind[1423]: Session 71 logged out. Waiting for processes to exit. Feb 13 20:45:30.418224 systemd-logind[1423]: Removed session 71. Feb 13 20:45:32.786940 kubelet[2429]: E0213 20:45:32.786901 2429 kubelet.go:2901] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Feb 13 20:45:35.425710 systemd[1]: Started sshd@71-10.0.0.6:22-10.0.0.1:60052.service - OpenSSH per-connection server daemon (10.0.0.1:60052). Feb 13 20:45:35.461538 sshd[3786]: Accepted publickey for core from 10.0.0.1 port 60052 ssh2: RSA SHA256:ijJ+MGEzuaViQlbMobaPQLogVrdZNUxIS0COzLzMHAQ Feb 13 20:45:35.462657 sshd[3786]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 20:45:35.465802 systemd-logind[1423]: New session 72 of user core. Feb 13 20:45:35.476425 systemd[1]: Started session-72.scope - Session 72 of User core. Feb 13 20:45:35.578761 sshd[3786]: pam_unix(sshd:session): session closed for user core Feb 13 20:45:35.581879 systemd[1]: sshd@71-10.0.0.6:22-10.0.0.1:60052.service: Deactivated successfully. Feb 13 20:45:35.583552 systemd[1]: session-72.scope: Deactivated successfully. Feb 13 20:45:35.584181 systemd-logind[1423]: Session 72 logged out. Waiting for processes to exit. Feb 13 20:45:35.585014 systemd-logind[1423]: Removed session 72. Feb 13 20:45:37.788222 kubelet[2429]: E0213 20:45:37.788165 2429 kubelet.go:2901] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Feb 13 20:45:38.669971 kubelet[2429]: E0213 20:45:38.669899 2429 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 20:45:38.671223 kubelet[2429]: E0213 20:45:38.671180 2429 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"install-cni-plugin\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/flannel/flannel-cni-plugin:v1.1.2\\\"\"" pod="kube-flannel/kube-flannel-ds-2bqj5" podUID="40748397-e6cc-4e80-aa93-47714f7f3a4c" Feb 13 20:45:40.588849 systemd[1]: Started sshd@72-10.0.0.6:22-10.0.0.1:60062.service - OpenSSH per-connection server daemon (10.0.0.1:60062). Feb 13 20:45:40.624564 sshd[3802]: Accepted publickey for core from 10.0.0.1 port 60062 ssh2: RSA SHA256:ijJ+MGEzuaViQlbMobaPQLogVrdZNUxIS0COzLzMHAQ Feb 13 20:45:40.625785 sshd[3802]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 20:45:40.629506 systemd-logind[1423]: New session 73 of user core. Feb 13 20:45:40.638445 systemd[1]: Started session-73.scope - Session 73 of User core. Feb 13 20:45:40.742227 sshd[3802]: pam_unix(sshd:session): session closed for user core Feb 13 20:45:40.745423 systemd[1]: sshd@72-10.0.0.6:22-10.0.0.1:60062.service: Deactivated successfully. Feb 13 20:45:40.747180 systemd[1]: session-73.scope: Deactivated successfully. Feb 13 20:45:40.748743 systemd-logind[1423]: Session 73 logged out. Waiting for processes to exit. Feb 13 20:45:40.749546 systemd-logind[1423]: Removed session 73. 
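Note: the recurring dns.go:153 "Nameserver limits exceeded" warning (20:45:26, 20:45:38, and periodically from here on) is the kubelet working around glibc's resolver, which honours only the first three nameserver lines in resolv.conf (MAXNS = 3); this host lists more than three, so the kubelet truncates what pods inherit to "1.1.1.1 1.0.0.1 8.8.8.8". A minimal sketch of the same check, assuming the stock /etc/resolv.conf path (the kubelet may instead be pointed at systemd-resolved's /run/systemd/resolve/resolv.conf):

    MAXNS = 3  # glibc's resolver uses at most three nameserver entries

    def effective_nameservers(path="/etc/resolv.conf"):
        servers = []
        with open(path) as f:
            for line in f:
                parts = line.split()
                if len(parts) >= 2 and parts[0] == "nameserver":
                    servers.append(parts[1])
        if len(servers) > MAXNS:
            print(f"{len(servers)} nameservers configured; only "
                  f"{servers[:MAXNS]} take effect, {servers[MAXNS:]} are dropped")
        return servers[:MAXNS]

    effective_nameservers()

The warning is cosmetic on its own; trimming the host's nameserver list to three silences it.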
Feb 13 20:45:42.789421 kubelet[2429]: E0213 20:45:42.789335 2429 kubelet.go:2901] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Feb 13 20:45:45.752679 systemd[1]: Started sshd@73-10.0.0.6:22-10.0.0.1:53884.service - OpenSSH per-connection server daemon (10.0.0.1:53884). Feb 13 20:45:45.788218 sshd[3816]: Accepted publickey for core from 10.0.0.1 port 53884 ssh2: RSA SHA256:ijJ+MGEzuaViQlbMobaPQLogVrdZNUxIS0COzLzMHAQ Feb 13 20:45:45.789362 sshd[3816]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 20:45:45.793063 systemd-logind[1423]: New session 74 of user core. Feb 13 20:45:45.806432 systemd[1]: Started session-74.scope - Session 74 of User core. Feb 13 20:45:45.912176 sshd[3816]: pam_unix(sshd:session): session closed for user core Feb 13 20:45:45.915540 systemd[1]: sshd@73-10.0.0.6:22-10.0.0.1:53884.service: Deactivated successfully. Feb 13 20:45:45.917948 systemd[1]: session-74.scope: Deactivated successfully. Feb 13 20:45:45.918867 systemd-logind[1423]: Session 74 logged out. Waiting for processes to exit. Feb 13 20:45:45.919820 systemd-logind[1423]: Removed session 74. Feb 13 20:45:47.790884 kubelet[2429]: E0213 20:45:47.790843 2429 kubelet.go:2901] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Feb 13 20:45:50.670157 kubelet[2429]: E0213 20:45:50.670102 2429 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 20:45:50.671351 kubelet[2429]: E0213 20:45:50.671194 2429 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"install-cni-plugin\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/flannel/flannel-cni-plugin:v1.1.2\\\"\"" pod="kube-flannel/kube-flannel-ds-2bqj5" podUID="40748397-e6cc-4e80-aa93-47714f7f3a4c" Feb 13 20:45:50.923802 systemd[1]: Started sshd@74-10.0.0.6:22-10.0.0.1:53894.service - OpenSSH per-connection server daemon (10.0.0.1:53894). Feb 13 20:45:50.959655 sshd[3833]: Accepted publickey for core from 10.0.0.1 port 53894 ssh2: RSA SHA256:ijJ+MGEzuaViQlbMobaPQLogVrdZNUxIS0COzLzMHAQ Feb 13 20:45:50.960771 sshd[3833]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 20:45:50.964368 systemd-logind[1423]: New session 75 of user core. Feb 13 20:45:50.975440 systemd[1]: Started session-75.scope - Session 75 of User core. Feb 13 20:45:51.079160 sshd[3833]: pam_unix(sshd:session): session closed for user core Feb 13 20:45:51.082542 systemd[1]: sshd@74-10.0.0.6:22-10.0.0.1:53894.service: Deactivated successfully. Feb 13 20:45:51.085195 systemd[1]: session-75.scope: Deactivated successfully. Feb 13 20:45:51.086180 systemd-logind[1423]: Session 75 logged out. Waiting for processes to exit. Feb 13 20:45:51.087008 systemd-logind[1423]: Removed session 75. 
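Note: after the single real pull attempt at 20:45:26-27 fails, the kubelet stops hammering the registry. The pod enters ImagePullBackOff, and the "Back-off pulling image" lines above are pod sync workers reporting that the image is still inside its back-off window, not new pull attempts. The kubelet's image pull back-off is exponential with a cap (commonly cited defaults are 10s doubling to 300s; treat those constants as an assumption here). The pattern, as a sketch:

    import time

    def pull_with_backoff(pull, base=10.0, cap=300.0):
        """Retry pull() with capped exponential backoff, mirroring the
        ErrImagePull -> ImagePullBackOff cycle in the journal."""
        delay = base
        while True:
            try:
                return pull()
            except Exception as err:
                print(f"pull failed ({err}); backing off {delay:.0f}s")
                time.sleep(delay)
                delay = min(delay * 2, cap)  # cap bounds steady-state retries

Nothing here can succeed until the rate limit resets or the pull is authenticated, so the back-off simply keeps the failure cheap.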
Feb 13 20:45:51.671105 kubelet[2429]: E0213 20:45:51.670770 2429 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 20:45:52.792004 kubelet[2429]: E0213 20:45:52.791961 2429 kubelet.go:2901] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Feb 13 20:45:56.089807 systemd[1]: Started sshd@75-10.0.0.6:22-10.0.0.1:50492.service - OpenSSH per-connection server daemon (10.0.0.1:50492). Feb 13 20:45:56.125527 sshd[3848]: Accepted publickey for core from 10.0.0.1 port 50492 ssh2: RSA SHA256:ijJ+MGEzuaViQlbMobaPQLogVrdZNUxIS0COzLzMHAQ Feb 13 20:45:56.126744 sshd[3848]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 20:45:56.131000 systemd-logind[1423]: New session 76 of user core. Feb 13 20:45:56.149433 systemd[1]: Started session-76.scope - Session 76 of User core. Feb 13 20:45:56.252429 sshd[3848]: pam_unix(sshd:session): session closed for user core Feb 13 20:45:56.255002 systemd[1]: sshd@75-10.0.0.6:22-10.0.0.1:50492.service: Deactivated successfully. Feb 13 20:45:56.256715 systemd[1]: session-76.scope: Deactivated successfully. Feb 13 20:45:56.258502 systemd-logind[1423]: Session 76 logged out. Waiting for processes to exit. Feb 13 20:45:56.259660 systemd-logind[1423]: Removed session 76. Feb 13 20:45:57.792995 kubelet[2429]: E0213 20:45:57.792937 2429 kubelet.go:2901] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Feb 13 20:46:01.266962 systemd[1]: Started sshd@76-10.0.0.6:22-10.0.0.1:50506.service - OpenSSH per-connection server daemon (10.0.0.1:50506). Feb 13 20:46:01.303475 sshd[3862]: Accepted publickey for core from 10.0.0.1 port 50506 ssh2: RSA SHA256:ijJ+MGEzuaViQlbMobaPQLogVrdZNUxIS0COzLzMHAQ Feb 13 20:46:01.304630 sshd[3862]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 20:46:01.307921 systemd-logind[1423]: New session 77 of user core. Feb 13 20:46:01.318545 systemd[1]: Started session-77.scope - Session 77 of User core. Feb 13 20:46:01.422258 sshd[3862]: pam_unix(sshd:session): session closed for user core Feb 13 20:46:01.425721 systemd[1]: sshd@76-10.0.0.6:22-10.0.0.1:50506.service: Deactivated successfully. Feb 13 20:46:01.427863 systemd[1]: session-77.scope: Deactivated successfully. Feb 13 20:46:01.428500 systemd-logind[1423]: Session 77 logged out. Waiting for processes to exit. Feb 13 20:46:01.429559 systemd-logind[1423]: Removed session 77. 
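Note: the kubelet.go:2901 "Container runtime network not ready" line repeating roughly every five seconds is the downstream symptom of the same failed pull: the install-cni-plugin init container (per the spec dumped at 20:45:27.784, it only runs cp -f /flannel /opt/cni/bin/flannel) never starts, no CNI binary lands in /opt/cni/bin, and the runtime keeps reporting NetworkReady=false, which in turn holds the node's Ready condition at False. Watching that condition from the API, as a minimal sketch assuming the kubernetes Python client and a working kubeconfig:

    from kubernetes import client, config

    config.load_kube_config()  # or config.load_incluster_config()
    v1 = client.CoreV1Api()

    for node in v1.list_node().items:
        for cond in node.status.conditions:
            if cond.type == "Ready":
                # While the CNI plugin is missing, Ready stays False with
                # a "network plugin is not ready" style message.
                print(node.metadata.name, cond.status, cond.reason, cond.message)

The condition flips to Ready=True on its own once the flannel images can actually be pulled.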
Feb 13 20:46:02.670321 kubelet[2429]: E0213 20:46:02.670282 2429 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 20:46:02.670984 kubelet[2429]: E0213 20:46:02.670941 2429 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"install-cni-plugin\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/flannel/flannel-cni-plugin:v1.1.2\\\"\"" pod="kube-flannel/kube-flannel-ds-2bqj5" podUID="40748397-e6cc-4e80-aa93-47714f7f3a4c" Feb 13 20:46:02.794217 kubelet[2429]: E0213 20:46:02.794175 2429 kubelet.go:2901] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Feb 13 20:46:06.433005 systemd[1]: Started sshd@77-10.0.0.6:22-10.0.0.1:50824.service - OpenSSH per-connection server daemon (10.0.0.1:50824). Feb 13 20:46:06.469151 sshd[3877]: Accepted publickey for core from 10.0.0.1 port 50824 ssh2: RSA SHA256:ijJ+MGEzuaViQlbMobaPQLogVrdZNUxIS0COzLzMHAQ Feb 13 20:46:06.470285 sshd[3877]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 20:46:06.473964 systemd-logind[1423]: New session 78 of user core. Feb 13 20:46:06.483519 systemd[1]: Started session-78.scope - Session 78 of User core. Feb 13 20:46:06.585960 sshd[3877]: pam_unix(sshd:session): session closed for user core Feb 13 20:46:06.597883 systemd[1]: sshd@77-10.0.0.6:22-10.0.0.1:50824.service: Deactivated successfully. Feb 13 20:46:06.600111 systemd[1]: session-78.scope: Deactivated successfully. Feb 13 20:46:06.602059 systemd-logind[1423]: Session 78 logged out. Waiting for processes to exit. Feb 13 20:46:06.603653 systemd[1]: Started sshd@78-10.0.0.6:22-10.0.0.1:50834.service - OpenSSH per-connection server daemon (10.0.0.1:50834). Feb 13 20:46:06.604550 systemd-logind[1423]: Removed session 78. Feb 13 20:46:06.639127 sshd[3891]: Accepted publickey for core from 10.0.0.1 port 50834 ssh2: RSA SHA256:ijJ+MGEzuaViQlbMobaPQLogVrdZNUxIS0COzLzMHAQ Feb 13 20:46:06.640259 sshd[3891]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 20:46:06.643745 systemd-logind[1423]: New session 79 of user core. Feb 13 20:46:06.651448 systemd[1]: Started session-79.scope - Session 79 of User core. Feb 13 20:46:06.852163 sshd[3891]: pam_unix(sshd:session): session closed for user core Feb 13 20:46:06.866763 systemd[1]: sshd@78-10.0.0.6:22-10.0.0.1:50834.service: Deactivated successfully. Feb 13 20:46:06.868220 systemd[1]: session-79.scope: Deactivated successfully. Feb 13 20:46:06.869927 systemd-logind[1423]: Session 79 logged out. Waiting for processes to exit. Feb 13 20:46:06.877548 systemd[1]: Started sshd@79-10.0.0.6:22-10.0.0.1:50846.service - OpenSSH per-connection server daemon (10.0.0.1:50846). Feb 13 20:46:06.878621 systemd-logind[1423]: Removed session 79. Feb 13 20:46:06.909992 sshd[3905]: Accepted publickey for core from 10.0.0.1 port 50846 ssh2: RSA SHA256:ijJ+MGEzuaViQlbMobaPQLogVrdZNUxIS0COzLzMHAQ Feb 13 20:46:06.912000 sshd[3905]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 20:46:06.916371 systemd-logind[1423]: New session 80 of user core. Feb 13 20:46:06.925455 systemd[1]: Started session-80.scope - Session 80 of User core. 
Feb 13 20:46:07.795148 kubelet[2429]: E0213 20:46:07.795108 2429 kubelet.go:2901] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Feb 13 20:46:08.076243 sshd[3905]: pam_unix(sshd:session): session closed for user core Feb 13 20:46:08.084885 systemd[1]: sshd@79-10.0.0.6:22-10.0.0.1:50846.service: Deactivated successfully. Feb 13 20:46:08.086515 systemd[1]: session-80.scope: Deactivated successfully. Feb 13 20:46:08.088226 systemd-logind[1423]: Session 80 logged out. Waiting for processes to exit. Feb 13 20:46:08.096623 systemd[1]: Started sshd@80-10.0.0.6:22-10.0.0.1:50852.service - OpenSSH per-connection server daemon (10.0.0.1:50852). Feb 13 20:46:08.098620 systemd-logind[1423]: Removed session 80. Feb 13 20:46:08.135096 sshd[3928]: Accepted publickey for core from 10.0.0.1 port 50852 ssh2: RSA SHA256:ijJ+MGEzuaViQlbMobaPQLogVrdZNUxIS0COzLzMHAQ Feb 13 20:46:08.137141 sshd[3928]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 20:46:08.141844 systemd-logind[1423]: New session 81 of user core. Feb 13 20:46:08.157479 systemd[1]: Started session-81.scope - Session 81 of User core. Feb 13 20:46:08.373384 sshd[3928]: pam_unix(sshd:session): session closed for user core Feb 13 20:46:08.383772 systemd[1]: sshd@80-10.0.0.6:22-10.0.0.1:50852.service: Deactivated successfully. Feb 13 20:46:08.388517 systemd[1]: session-81.scope: Deactivated successfully. Feb 13 20:46:08.392563 systemd-logind[1423]: Session 81 logged out. Waiting for processes to exit. Feb 13 20:46:08.402672 systemd[1]: Started sshd@81-10.0.0.6:22-10.0.0.1:50860.service - OpenSSH per-connection server daemon (10.0.0.1:50860). Feb 13 20:46:08.404017 systemd-logind[1423]: Removed session 81. Feb 13 20:46:08.435168 sshd[3940]: Accepted publickey for core from 10.0.0.1 port 50860 ssh2: RSA SHA256:ijJ+MGEzuaViQlbMobaPQLogVrdZNUxIS0COzLzMHAQ Feb 13 20:46:08.436451 sshd[3940]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 20:46:08.440367 systemd-logind[1423]: New session 82 of user core. Feb 13 20:46:08.446450 systemd[1]: Started session-82.scope - Session 82 of User core. Feb 13 20:46:08.551201 sshd[3940]: pam_unix(sshd:session): session closed for user core Feb 13 20:46:08.554473 systemd[1]: sshd@81-10.0.0.6:22-10.0.0.1:50860.service: Deactivated successfully. Feb 13 20:46:08.556195 systemd[1]: session-82.scope: Deactivated successfully. Feb 13 20:46:08.558276 systemd-logind[1423]: Session 82 logged out. Waiting for processes to exit. Feb 13 20:46:08.559237 systemd-logind[1423]: Removed session 82. Feb 13 20:46:10.670715 kubelet[2429]: E0213 20:46:10.670670 2429 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 20:46:12.796181 kubelet[2429]: E0213 20:46:12.796132 2429 kubelet.go:2901] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Feb 13 20:46:13.562891 systemd[1]: Started sshd@82-10.0.0.6:22-10.0.0.1:47688.service - OpenSSH per-connection server daemon (10.0.0.1:47688). 
Feb 13 20:46:13.599959 sshd[3956]: Accepted publickey for core from 10.0.0.1 port 47688 ssh2: RSA SHA256:ijJ+MGEzuaViQlbMobaPQLogVrdZNUxIS0COzLzMHAQ Feb 13 20:46:13.601157 sshd[3956]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 20:46:13.604632 systemd-logind[1423]: New session 83 of user core. Feb 13 20:46:13.614503 systemd[1]: Started session-83.scope - Session 83 of User core. Feb 13 20:46:13.716908 sshd[3956]: pam_unix(sshd:session): session closed for user core Feb 13 20:46:13.720238 systemd[1]: sshd@82-10.0.0.6:22-10.0.0.1:47688.service: Deactivated successfully. Feb 13 20:46:13.722847 systemd[1]: session-83.scope: Deactivated successfully. Feb 13 20:46:13.723568 systemd-logind[1423]: Session 83 logged out. Waiting for processes to exit. Feb 13 20:46:13.724408 systemd-logind[1423]: Removed session 83. Feb 13 20:46:14.670686 kubelet[2429]: E0213 20:46:14.670646 2429 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 20:46:16.670727 kubelet[2429]: E0213 20:46:16.670684 2429 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 20:46:16.671260 kubelet[2429]: E0213 20:46:16.671217 2429 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"install-cni-plugin\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/flannel/flannel-cni-plugin:v1.1.2\\\"\"" pod="kube-flannel/kube-flannel-ds-2bqj5" podUID="40748397-e6cc-4e80-aa93-47714f7f3a4c" Feb 13 20:46:17.797176 kubelet[2429]: E0213 20:46:17.797133 2429 kubelet.go:2901] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Feb 13 20:46:18.727800 systemd[1]: Started sshd@83-10.0.0.6:22-10.0.0.1:47696.service - OpenSSH per-connection server daemon (10.0.0.1:47696). Feb 13 20:46:18.763398 sshd[3973]: Accepted publickey for core from 10.0.0.1 port 47696 ssh2: RSA SHA256:ijJ+MGEzuaViQlbMobaPQLogVrdZNUxIS0COzLzMHAQ Feb 13 20:46:18.764608 sshd[3973]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 20:46:18.767939 systemd-logind[1423]: New session 84 of user core. Feb 13 20:46:18.777443 systemd[1]: Started session-84.scope - Session 84 of User core. Feb 13 20:46:18.881709 sshd[3973]: pam_unix(sshd:session): session closed for user core Feb 13 20:46:18.884760 systemd[1]: sshd@83-10.0.0.6:22-10.0.0.1:47696.service: Deactivated successfully. Feb 13 20:46:18.886941 systemd[1]: session-84.scope: Deactivated successfully. Feb 13 20:46:18.887558 systemd-logind[1423]: Session 84 logged out. Waiting for processes to exit. Feb 13 20:46:18.888361 systemd-logind[1423]: Removed session 84. Feb 13 20:46:22.798394 kubelet[2429]: E0213 20:46:22.798354 2429 kubelet.go:2901] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Feb 13 20:46:23.892860 systemd[1]: Started sshd@84-10.0.0.6:22-10.0.0.1:58996.service - OpenSSH per-connection server daemon (10.0.0.1:58996). 
Feb 13 20:46:23.929817 sshd[3987]: Accepted publickey for core from 10.0.0.1 port 58996 ssh2: RSA SHA256:ijJ+MGEzuaViQlbMobaPQLogVrdZNUxIS0COzLzMHAQ Feb 13 20:46:23.931018 sshd[3987]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 20:46:23.934471 systemd-logind[1423]: New session 85 of user core. Feb 13 20:46:23.946526 systemd[1]: Started session-85.scope - Session 85 of User core. Feb 13 20:46:24.053812 sshd[3987]: pam_unix(sshd:session): session closed for user core Feb 13 20:46:24.057100 systemd[1]: sshd@84-10.0.0.6:22-10.0.0.1:58996.service: Deactivated successfully. Feb 13 20:46:24.058727 systemd[1]: session-85.scope: Deactivated successfully. Feb 13 20:46:24.059323 systemd-logind[1423]: Session 85 logged out. Waiting for processes to exit. Feb 13 20:46:24.060178 systemd-logind[1423]: Removed session 85. Feb 13 20:46:27.670245 kubelet[2429]: E0213 20:46:27.670211 2429 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 20:46:27.671606 kubelet[2429]: E0213 20:46:27.671488 2429 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"install-cni-plugin\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/flannel/flannel-cni-plugin:v1.1.2\\\"\"" pod="kube-flannel/kube-flannel-ds-2bqj5" podUID="40748397-e6cc-4e80-aa93-47714f7f3a4c" Feb 13 20:46:27.799049 kubelet[2429]: E0213 20:46:27.798982 2429 kubelet.go:2901] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Feb 13 20:46:29.064394 systemd[1]: Started sshd@85-10.0.0.6:22-10.0.0.1:59010.service - OpenSSH per-connection server daemon (10.0.0.1:59010). Feb 13 20:46:29.100502 sshd[4001]: Accepted publickey for core from 10.0.0.1 port 59010 ssh2: RSA SHA256:ijJ+MGEzuaViQlbMobaPQLogVrdZNUxIS0COzLzMHAQ Feb 13 20:46:29.101661 sshd[4001]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 20:46:29.105464 systemd-logind[1423]: New session 86 of user core. Feb 13 20:46:29.118478 systemd[1]: Started session-86.scope - Session 86 of User core. Feb 13 20:46:29.222749 sshd[4001]: pam_unix(sshd:session): session closed for user core Feb 13 20:46:29.225786 systemd[1]: sshd@85-10.0.0.6:22-10.0.0.1:59010.service: Deactivated successfully. Feb 13 20:46:29.228023 systemd[1]: session-86.scope: Deactivated successfully. Feb 13 20:46:29.228645 systemd-logind[1423]: Session 86 logged out. Waiting for processes to exit. Feb 13 20:46:29.229373 systemd-logind[1423]: Removed session 86. Feb 13 20:46:32.800715 kubelet[2429]: E0213 20:46:32.800633 2429 kubelet.go:2901] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Feb 13 20:46:34.232959 systemd[1]: Started sshd@86-10.0.0.6:22-10.0.0.1:56762.service - OpenSSH per-connection server daemon (10.0.0.1:56762). Feb 13 20:46:34.268686 sshd[4015]: Accepted publickey for core from 10.0.0.1 port 56762 ssh2: RSA SHA256:ijJ+MGEzuaViQlbMobaPQLogVrdZNUxIS0COzLzMHAQ Feb 13 20:46:34.269828 sshd[4015]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 20:46:34.273562 systemd-logind[1423]: New session 87 of user core. Feb 13 20:46:34.280465 systemd[1]: Started session-87.scope - Session 87 of User core. 
Feb 13 20:46:34.386910 sshd[4015]: pam_unix(sshd:session): session closed for user core Feb 13 20:46:34.390113 systemd[1]: sshd@86-10.0.0.6:22-10.0.0.1:56762.service: Deactivated successfully. Feb 13 20:46:34.391881 systemd[1]: session-87.scope: Deactivated successfully. Feb 13 20:46:34.392508 systemd-logind[1423]: Session 87 logged out. Waiting for processes to exit. Feb 13 20:46:34.393434 systemd-logind[1423]: Removed session 87. Feb 13 20:46:37.801379 kubelet[2429]: E0213 20:46:37.801331 2429 kubelet.go:2901] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Feb 13 20:46:39.397893 systemd[1]: Started sshd@87-10.0.0.6:22-10.0.0.1:56778.service - OpenSSH per-connection server daemon (10.0.0.1:56778). Feb 13 20:46:39.433968 sshd[4031]: Accepted publickey for core from 10.0.0.1 port 56778 ssh2: RSA SHA256:ijJ+MGEzuaViQlbMobaPQLogVrdZNUxIS0COzLzMHAQ Feb 13 20:46:39.435131 sshd[4031]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 20:46:39.438920 systemd-logind[1423]: New session 88 of user core. Feb 13 20:46:39.451448 systemd[1]: Started session-88.scope - Session 88 of User core. Feb 13 20:46:39.556523 sshd[4031]: pam_unix(sshd:session): session closed for user core Feb 13 20:46:39.559710 systemd[1]: sshd@87-10.0.0.6:22-10.0.0.1:56778.service: Deactivated successfully. Feb 13 20:46:39.561507 systemd[1]: session-88.scope: Deactivated successfully. Feb 13 20:46:39.562087 systemd-logind[1423]: Session 88 logged out. Waiting for processes to exit. Feb 13 20:46:39.563135 systemd-logind[1423]: Removed session 88. Feb 13 20:46:41.670503 kubelet[2429]: E0213 20:46:41.670299 2429 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 20:46:41.671057 kubelet[2429]: E0213 20:46:41.671007 2429 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"install-cni-plugin\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/flannel/flannel-cni-plugin:v1.1.2\\\"\"" pod="kube-flannel/kube-flannel-ds-2bqj5" podUID="40748397-e6cc-4e80-aa93-47714f7f3a4c" Feb 13 20:46:42.802508 kubelet[2429]: E0213 20:46:42.802451 2429 kubelet.go:2901] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Feb 13 20:46:44.566814 systemd[1]: Started sshd@88-10.0.0.6:22-10.0.0.1:57334.service - OpenSSH per-connection server daemon (10.0.0.1:57334). Feb 13 20:46:44.603663 sshd[4045]: Accepted publickey for core from 10.0.0.1 port 57334 ssh2: RSA SHA256:ijJ+MGEzuaViQlbMobaPQLogVrdZNUxIS0COzLzMHAQ Feb 13 20:46:44.604904 sshd[4045]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 20:46:44.608502 systemd-logind[1423]: New session 89 of user core. Feb 13 20:46:44.619503 systemd[1]: Started session-89.scope - Session 89 of User core. Feb 13 20:46:44.669962 kubelet[2429]: E0213 20:46:44.669927 2429 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 20:46:44.725533 sshd[4045]: pam_unix(sshd:session): session closed for user core Feb 13 20:46:44.728779 systemd[1]: sshd@88-10.0.0.6:22-10.0.0.1:57334.service: Deactivated successfully. 
Feb 13 20:46:44.730476 systemd[1]: session-89.scope: Deactivated successfully. Feb 13 20:46:44.731030 systemd-logind[1423]: Session 89 logged out. Waiting for processes to exit. Feb 13 20:46:44.731831 systemd-logind[1423]: Removed session 89. Feb 13 20:46:47.803827 kubelet[2429]: E0213 20:46:47.803780 2429 kubelet.go:2901] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Feb 13 20:46:49.735896 systemd[1]: Started sshd@89-10.0.0.6:22-10.0.0.1:57344.service - OpenSSH per-connection server daemon (10.0.0.1:57344). Feb 13 20:46:49.771681 sshd[4061]: Accepted publickey for core from 10.0.0.1 port 57344 ssh2: RSA SHA256:ijJ+MGEzuaViQlbMobaPQLogVrdZNUxIS0COzLzMHAQ Feb 13 20:46:49.772858 sshd[4061]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 20:46:49.776162 systemd-logind[1423]: New session 90 of user core. Feb 13 20:46:49.787439 systemd[1]: Started session-90.scope - Session 90 of User core. Feb 13 20:46:49.890382 sshd[4061]: pam_unix(sshd:session): session closed for user core Feb 13 20:46:49.892799 systemd[1]: sshd@89-10.0.0.6:22-10.0.0.1:57344.service: Deactivated successfully. Feb 13 20:46:49.894494 systemd[1]: session-90.scope: Deactivated successfully. Feb 13 20:46:49.895847 systemd-logind[1423]: Session 90 logged out. Waiting for processes to exit. Feb 13 20:46:49.896780 systemd-logind[1423]: Removed session 90. Feb 13 20:46:52.671018 kubelet[2429]: E0213 20:46:52.670952 2429 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 20:46:52.671720 kubelet[2429]: E0213 20:46:52.671672 2429 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"install-cni-plugin\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/flannel/flannel-cni-plugin:v1.1.2\\\"\"" pod="kube-flannel/kube-flannel-ds-2bqj5" podUID="40748397-e6cc-4e80-aa93-47714f7f3a4c" Feb 13 20:46:52.804977 kubelet[2429]: E0213 20:46:52.804941 2429 kubelet.go:2901] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Feb 13 20:46:54.901106 systemd[1]: Started sshd@90-10.0.0.6:22-10.0.0.1:58966.service - OpenSSH per-connection server daemon (10.0.0.1:58966). Feb 13 20:46:54.937207 sshd[4076]: Accepted publickey for core from 10.0.0.1 port 58966 ssh2: RSA SHA256:ijJ+MGEzuaViQlbMobaPQLogVrdZNUxIS0COzLzMHAQ Feb 13 20:46:54.938427 sshd[4076]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 20:46:54.941855 systemd-logind[1423]: New session 91 of user core. Feb 13 20:46:54.953514 systemd[1]: Started session-91.scope - Session 91 of User core. Feb 13 20:46:55.056115 sshd[4076]: pam_unix(sshd:session): session closed for user core Feb 13 20:46:55.060092 systemd[1]: sshd@90-10.0.0.6:22-10.0.0.1:58966.service: Deactivated successfully. Feb 13 20:46:55.062045 systemd[1]: session-91.scope: Deactivated successfully. Feb 13 20:46:55.062794 systemd-logind[1423]: Session 91 logged out. Waiting for processes to exit. Feb 13 20:46:55.063626 systemd-logind[1423]: Removed session 91. 
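Note: most of this stretch of journal is SSH session churn from 10.0.0.1: a new connection roughly every five seconds (sessions 71 through 91 so far), each authenticating the core user by publickey and closing again within about a tenth of a second, a cadence consistent with a scripted probe or test harness rather than interactive use. Session lifetimes can be measured straight from the journal; a sketch that assumes one record per line (journalctl's default output, unlike the wrapped capture here):

    import re
    from datetime import datetime

    OPEN = re.compile(r"(\w+ +\d+ [\d:.]+) .*systemd\[1\]: Started session-(\d+)\.scope")
    CLOSE = re.compile(r"(\w+ +\d+ [\d:.]+) .*systemd-logind\[\d+\]: Removed session (\d+)\.")

    def session_durations(lines, year=2025):
        fmt = "%b %d %H:%M:%S.%f %Y"   # journal stamps carry no year
        opened = {}
        for line in lines:
            if m := OPEN.search(line):
                opened[m.group(2)] = m.group(1)
            elif (m := CLOSE.search(line)) and m.group(2) in opened:
                t0 = datetime.strptime(f"{opened.pop(m.group(2))} {year}", fmt)
                t1 = datetime.strptime(f"{m.group(1)} {year}", fmt)
                yield m.group(2), (t1 - t0).total_seconds()

    # for sid, secs in session_durations(open("journal.log")): print(sid, secs)

For session 71, for example, this yields about 0.11s (started 20:45:30.308, removed 20:45:30.418).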
Feb 13 20:46:57.805945 kubelet[2429]: E0213 20:46:57.805895 2429 kubelet.go:2901] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Feb 13 20:47:00.067716 systemd[1]: Started sshd@91-10.0.0.6:22-10.0.0.1:58982.service - OpenSSH per-connection server daemon (10.0.0.1:58982). Feb 13 20:47:00.103718 sshd[4090]: Accepted publickey for core from 10.0.0.1 port 58982 ssh2: RSA SHA256:ijJ+MGEzuaViQlbMobaPQLogVrdZNUxIS0COzLzMHAQ Feb 13 20:47:00.104880 sshd[4090]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 20:47:00.108431 systemd-logind[1423]: New session 92 of user core. Feb 13 20:47:00.120438 systemd[1]: Started session-92.scope - Session 92 of User core. Feb 13 20:47:00.223508 sshd[4090]: pam_unix(sshd:session): session closed for user core Feb 13 20:47:00.226632 systemd-logind[1423]: Session 92 logged out. Waiting for processes to exit. Feb 13 20:47:00.226954 systemd[1]: sshd@91-10.0.0.6:22-10.0.0.1:58982.service: Deactivated successfully. Feb 13 20:47:00.228625 systemd[1]: session-92.scope: Deactivated successfully. Feb 13 20:47:00.229751 systemd-logind[1423]: Removed session 92. Feb 13 20:47:02.807421 kubelet[2429]: E0213 20:47:02.807379 2429 kubelet.go:2901] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Feb 13 20:47:05.237147 systemd[1]: Started sshd@92-10.0.0.6:22-10.0.0.1:36470.service - OpenSSH per-connection server daemon (10.0.0.1:36470). Feb 13 20:47:05.272912 sshd[4104]: Accepted publickey for core from 10.0.0.1 port 36470 ssh2: RSA SHA256:ijJ+MGEzuaViQlbMobaPQLogVrdZNUxIS0COzLzMHAQ Feb 13 20:47:05.274127 sshd[4104]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 20:47:05.278009 systemd-logind[1423]: New session 93 of user core. Feb 13 20:47:05.284418 systemd[1]: Started session-93.scope - Session 93 of User core. Feb 13 20:47:05.387289 sshd[4104]: pam_unix(sshd:session): session closed for user core Feb 13 20:47:05.389873 systemd[1]: sshd@92-10.0.0.6:22-10.0.0.1:36470.service: Deactivated successfully. Feb 13 20:47:05.391486 systemd[1]: session-93.scope: Deactivated successfully. Feb 13 20:47:05.392691 systemd-logind[1423]: Session 93 logged out. Waiting for processes to exit. Feb 13 20:47:05.393490 systemd-logind[1423]: Removed session 93. Feb 13 20:47:06.644458 update_engine[1425]: I20250213 20:47:06.644398 1425 prefs.cc:52] certificate-report-to-send-update not present in /var/lib/update_engine/prefs Feb 13 20:47:06.644458 update_engine[1425]: I20250213 20:47:06.644448 1425 prefs.cc:52] certificate-report-to-send-download not present in /var/lib/update_engine/prefs Feb 13 20:47:06.644816 update_engine[1425]: I20250213 20:47:06.644691 1425 prefs.cc:52] aleph-version not present in /var/lib/update_engine/prefs Feb 13 20:47:06.645080 update_engine[1425]: I20250213 20:47:06.645044 1425 omaha_request_params.cc:62] Current group set to lts Feb 13 20:47:06.645170 update_engine[1425]: I20250213 20:47:06.645146 1425 update_attempter.cc:499] Already updated boot flags. Skipping. Feb 13 20:47:06.645170 update_engine[1425]: I20250213 20:47:06.645160 1425 update_attempter.cc:643] Scheduling an action processor start. 
Feb 13 20:47:06.645214 update_engine[1425]: I20250213 20:47:06.645176 1425 action_processor.cc:36] ActionProcessor::StartProcessing: OmahaRequestAction Feb 13 20:47:06.645214 update_engine[1425]: I20250213 20:47:06.645203 1425 prefs.cc:52] previous-version not present in /var/lib/update_engine/prefs Feb 13 20:47:06.645262 update_engine[1425]: I20250213 20:47:06.645247 1425 omaha_request_action.cc:271] Posting an Omaha request to disabled Feb 13 20:47:06.645282 update_engine[1425]: I20250213 20:47:06.645259 1425 omaha_request_action.cc:272] Request: Feb 13 20:47:06.645282 update_engine[1425]: Feb 13 20:47:06.645282 update_engine[1425]: Feb 13 20:47:06.645282 update_engine[1425]: Feb 13 20:47:06.645282 update_engine[1425]: Feb 13 20:47:06.645282 update_engine[1425]: Feb 13 20:47:06.645282 update_engine[1425]: Feb 13 20:47:06.645282 update_engine[1425]: Feb 13 20:47:06.645282 update_engine[1425]: Feb 13 20:47:06.645282 update_engine[1425]: I20250213 20:47:06.645267 1425 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Feb 13 20:47:06.645661 locksmithd[1453]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_CHECKING_FOR_UPDATE" NewVersion=0.0.0 NewSize=0 Feb 13 20:47:06.646301 update_engine[1425]: I20250213 20:47:06.646255 1425 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Feb 13 20:47:06.646566 update_engine[1425]: I20250213 20:47:06.646531 1425 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. Feb 13 20:47:06.657522 update_engine[1425]: E20250213 20:47:06.657486 1425 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Feb 13 20:47:06.657575 update_engine[1425]: I20250213 20:47:06.657552 1425 libcurl_http_fetcher.cc:283] No HTTP response, retry 1 Feb 13 20:47:06.670075 kubelet[2429]: E0213 20:47:06.670042 2429 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 20:47:06.670849 kubelet[2429]: E0213 20:47:06.670791 2429 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"install-cni-plugin\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/flannel/flannel-cni-plugin:v1.1.2\\\"\"" pod="kube-flannel/kube-flannel-ds-2bqj5" podUID="40748397-e6cc-4e80-aa93-47714f7f3a4c" Feb 13 20:47:07.808381 kubelet[2429]: E0213 20:47:07.808348 2429 kubelet.go:2901] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Feb 13 20:47:10.403129 systemd[1]: Started sshd@93-10.0.0.6:22-10.0.0.1:36486.service - OpenSSH per-connection server daemon (10.0.0.1:36486). Feb 13 20:47:10.439062 sshd[4120]: Accepted publickey for core from 10.0.0.1 port 36486 ssh2: RSA SHA256:ijJ+MGEzuaViQlbMobaPQLogVrdZNUxIS0COzLzMHAQ Feb 13 20:47:10.440210 sshd[4120]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 20:47:10.443557 systemd-logind[1423]: New session 94 of user core. Feb 13 20:47:10.454431 systemd[1]: Started session-94.scope - Session 94 of User core. Feb 13 20:47:10.557195 sshd[4120]: pam_unix(sshd:session): session closed for user core Feb 13 20:47:10.560401 systemd[1]: sshd@93-10.0.0.6:22-10.0.0.1:36486.service: Deactivated successfully. Feb 13 20:47:10.562115 systemd[1]: session-94.scope: Deactivated successfully. Feb 13 20:47:10.562828 systemd-logind[1423]: Session 94 logged out. Waiting for processes to exit. 
Feb 13 20:47:10.563844 systemd-logind[1423]: Removed session 94. Feb 13 20:47:12.670343 kubelet[2429]: E0213 20:47:12.670262 2429 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 20:47:12.809132 kubelet[2429]: E0213 20:47:12.809102 2429 kubelet.go:2901] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Feb 13 20:47:15.568407 systemd[1]: Started sshd@94-10.0.0.6:22-10.0.0.1:42758.service - OpenSSH per-connection server daemon (10.0.0.1:42758). Feb 13 20:47:15.604019 sshd[4134]: Accepted publickey for core from 10.0.0.1 port 42758 ssh2: RSA SHA256:ijJ+MGEzuaViQlbMobaPQLogVrdZNUxIS0COzLzMHAQ Feb 13 20:47:15.605143 sshd[4134]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 20:47:15.609028 systemd-logind[1423]: New session 95 of user core. Feb 13 20:47:15.619449 systemd[1]: Started session-95.scope - Session 95 of User core. Feb 13 20:47:15.725158 sshd[4134]: pam_unix(sshd:session): session closed for user core Feb 13 20:47:15.728269 systemd[1]: sshd@94-10.0.0.6:22-10.0.0.1:42758.service: Deactivated successfully. Feb 13 20:47:15.730597 systemd[1]: session-95.scope: Deactivated successfully. Feb 13 20:47:15.731316 systemd-logind[1423]: Session 95 logged out. Waiting for processes to exit. Feb 13 20:47:15.732116 systemd-logind[1423]: Removed session 95. Feb 13 20:47:16.644459 update_engine[1425]: I20250213 20:47:16.644367 1425 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Feb 13 20:47:16.644802 update_engine[1425]: I20250213 20:47:16.644658 1425 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Feb 13 20:47:16.644845 update_engine[1425]: I20250213 20:47:16.644820 1425 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. Feb 13 20:47:16.649657 update_engine[1425]: E20250213 20:47:16.649616 1425 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Feb 13 20:47:16.649725 update_engine[1425]: I20250213 20:47:16.649672 1425 libcurl_http_fetcher.cc:283] No HTTP response, retry 2 Feb 13 20:47:17.809772 kubelet[2429]: E0213 20:47:17.809723 2429 kubelet.go:2901] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Feb 13 20:47:20.670816 kubelet[2429]: E0213 20:47:20.670770 2429 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 20:47:20.735934 systemd[1]: Started sshd@95-10.0.0.6:22-10.0.0.1:42772.service - OpenSSH per-connection server daemon (10.0.0.1:42772). Feb 13 20:47:20.771586 sshd[4153]: Accepted publickey for core from 10.0.0.1 port 42772 ssh2: RSA SHA256:ijJ+MGEzuaViQlbMobaPQLogVrdZNUxIS0COzLzMHAQ Feb 13 20:47:20.772827 sshd[4153]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 20:47:20.776709 systemd-logind[1423]: New session 96 of user core. Feb 13 20:47:20.788462 systemd[1]: Started session-96.scope - Session 96 of User core. Feb 13 20:47:20.891699 sshd[4153]: pam_unix(sshd:session): session closed for user core Feb 13 20:47:20.894851 systemd[1]: sshd@95-10.0.0.6:22-10.0.0.1:42772.service: Deactivated successfully. 
Feb 13 20:47:20.896560 systemd[1]: session-96.scope: Deactivated successfully. Feb 13 20:47:20.897126 systemd-logind[1423]: Session 96 logged out. Waiting for processes to exit. Feb 13 20:47:20.898256 systemd-logind[1423]: Removed session 96. Feb 13 20:47:21.671258 kubelet[2429]: E0213 20:47:21.670964 2429 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 20:47:21.671795 kubelet[2429]: E0213 20:47:21.671747 2429 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"install-cni-plugin\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/flannel/flannel-cni-plugin:v1.1.2\\\"\"" pod="kube-flannel/kube-flannel-ds-2bqj5" podUID="40748397-e6cc-4e80-aa93-47714f7f3a4c" Feb 13 20:47:22.811076 kubelet[2429]: E0213 20:47:22.811025 2429 kubelet.go:2901] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Feb 13 20:47:25.671409 kubelet[2429]: E0213 20:47:25.671011 2429 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 20:47:25.901959 systemd[1]: Started sshd@96-10.0.0.6:22-10.0.0.1:53568.service - OpenSSH per-connection server daemon (10.0.0.1:53568). Feb 13 20:47:25.938350 sshd[4168]: Accepted publickey for core from 10.0.0.1 port 53568 ssh2: RSA SHA256:ijJ+MGEzuaViQlbMobaPQLogVrdZNUxIS0COzLzMHAQ Feb 13 20:47:25.939564 sshd[4168]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 20:47:25.943379 systemd-logind[1423]: New session 97 of user core. Feb 13 20:47:25.953443 systemd[1]: Started session-97.scope - Session 97 of User core. Feb 13 20:47:26.055471 sshd[4168]: pam_unix(sshd:session): session closed for user core Feb 13 20:47:26.059288 systemd[1]: sshd@96-10.0.0.6:22-10.0.0.1:53568.service: Deactivated successfully. Feb 13 20:47:26.060942 systemd[1]: session-97.scope: Deactivated successfully. Feb 13 20:47:26.061596 systemd-logind[1423]: Session 97 logged out. Waiting for processes to exit. Feb 13 20:47:26.062373 systemd-logind[1423]: Removed session 97. Feb 13 20:47:26.645042 update_engine[1425]: I20250213 20:47:26.644721 1425 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Feb 13 20:47:26.645042 update_engine[1425]: I20250213 20:47:26.644984 1425 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Feb 13 20:47:26.645857 update_engine[1425]: I20250213 20:47:26.645709 1425 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. Feb 13 20:47:26.649478 update_engine[1425]: E20250213 20:47:26.649394 1425 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Feb 13 20:47:26.649478 update_engine[1425]: I20250213 20:47:26.649454 1425 libcurl_http_fetcher.cc:283] No HTTP response, retry 3 Feb 13 20:47:27.812596 kubelet[2429]: E0213 20:47:27.812554 2429 kubelet.go:2901] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Feb 13 20:47:31.065885 systemd[1]: Started sshd@97-10.0.0.6:22-10.0.0.1:53582.service - OpenSSH per-connection server daemon (10.0.0.1:53582). 
Feb 13 20:47:31.101887 sshd[4182]: Accepted publickey for core from 10.0.0.1 port 53582 ssh2: RSA SHA256:ijJ+MGEzuaViQlbMobaPQLogVrdZNUxIS0COzLzMHAQ Feb 13 20:47:31.103072 sshd[4182]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 20:47:31.106285 systemd-logind[1423]: New session 98 of user core. Feb 13 20:47:31.122526 systemd[1]: Started session-98.scope - Session 98 of User core. Feb 13 20:47:31.225749 sshd[4182]: pam_unix(sshd:session): session closed for user core Feb 13 20:47:31.228978 systemd[1]: sshd@97-10.0.0.6:22-10.0.0.1:53582.service: Deactivated successfully. Feb 13 20:47:31.230632 systemd[1]: session-98.scope: Deactivated successfully. Feb 13 20:47:31.231235 systemd-logind[1423]: Session 98 logged out. Waiting for processes to exit. Feb 13 20:47:31.232258 systemd-logind[1423]: Removed session 98. Feb 13 20:47:32.813818 kubelet[2429]: E0213 20:47:32.813774 2429 kubelet.go:2901] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Feb 13 20:47:34.670046 kubelet[2429]: E0213 20:47:34.670010 2429 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 20:47:34.670957 kubelet[2429]: E0213 20:47:34.670761 2429 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"install-cni-plugin\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/flannel/flannel-cni-plugin:v1.1.2\\\"\"" pod="kube-flannel/kube-flannel-ds-2bqj5" podUID="40748397-e6cc-4e80-aa93-47714f7f3a4c" Feb 13 20:47:36.237002 systemd[1]: Started sshd@98-10.0.0.6:22-10.0.0.1:39446.service - OpenSSH per-connection server daemon (10.0.0.1:39446). Feb 13 20:47:36.273089 sshd[4197]: Accepted publickey for core from 10.0.0.1 port 39446 ssh2: RSA SHA256:ijJ+MGEzuaViQlbMobaPQLogVrdZNUxIS0COzLzMHAQ Feb 13 20:47:36.274294 sshd[4197]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 20:47:36.278188 systemd-logind[1423]: New session 99 of user core. Feb 13 20:47:36.285434 systemd[1]: Started session-99.scope - Session 99 of User core. Feb 13 20:47:36.388515 sshd[4197]: pam_unix(sshd:session): session closed for user core Feb 13 20:47:36.391796 systemd[1]: sshd@98-10.0.0.6:22-10.0.0.1:39446.service: Deactivated successfully. Feb 13 20:47:36.393504 systemd[1]: session-99.scope: Deactivated successfully. Feb 13 20:47:36.394077 systemd-logind[1423]: Session 99 logged out. Waiting for processes to exit. Feb 13 20:47:36.395190 systemd-logind[1423]: Removed session 99. Feb 13 20:47:36.643762 update_engine[1425]: I20250213 20:47:36.643668 1425 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Feb 13 20:47:36.644182 update_engine[1425]: I20250213 20:47:36.643939 1425 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Feb 13 20:47:36.644182 update_engine[1425]: I20250213 20:47:36.644094 1425 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. 
Feb 13 20:47:36.648596 update_engine[1425]: E20250213 20:47:36.648557 1425 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Feb 13 20:47:36.648644 update_engine[1425]: I20250213 20:47:36.648608 1425 libcurl_http_fetcher.cc:297] Transfer resulted in an error (0), 0 bytes downloaded Feb 13 20:47:36.648644 update_engine[1425]: I20250213 20:47:36.648617 1425 omaha_request_action.cc:617] Omaha request response: Feb 13 20:47:36.648731 update_engine[1425]: E20250213 20:47:36.648689 1425 omaha_request_action.cc:636] Omaha request network transfer failed. Feb 13 20:47:36.648731 update_engine[1425]: I20250213 20:47:36.648706 1425 action_processor.cc:68] ActionProcessor::ActionComplete: OmahaRequestAction action failed. Aborting processing. Feb 13 20:47:36.648731 update_engine[1425]: I20250213 20:47:36.648711 1425 action_processor.cc:73] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction Feb 13 20:47:36.648731 update_engine[1425]: I20250213 20:47:36.648715 1425 update_attempter.cc:306] Processing Done. Feb 13 20:47:36.648731 update_engine[1425]: E20250213 20:47:36.648728 1425 update_attempter.cc:619] Update failed. Feb 13 20:47:36.648819 update_engine[1425]: I20250213 20:47:36.648734 1425 utils.cc:600] Converting error code 2000 to kActionCodeOmahaErrorInHTTPResponse Feb 13 20:47:36.648819 update_engine[1425]: I20250213 20:47:36.648738 1425 payload_state.cc:97] Updating payload state for error code: 37 (kActionCodeOmahaErrorInHTTPResponse) Feb 13 20:47:36.648819 update_engine[1425]: I20250213 20:47:36.648743 1425 payload_state.cc:103] Ignoring failures until we get a valid Omaha response. Feb 13 20:47:36.648819 update_engine[1425]: I20250213 20:47:36.648806 1425 action_processor.cc:36] ActionProcessor::StartProcessing: OmahaRequestAction Feb 13 20:47:36.648899 update_engine[1425]: I20250213 20:47:36.648825 1425 omaha_request_action.cc:271] Posting an Omaha request to disabled Feb 13 20:47:36.648899 update_engine[1425]: I20250213 20:47:36.648830 1425 omaha_request_action.cc:272] Request: Feb 13 20:47:36.648899 update_engine[1425]: Feb 13 20:47:36.648899 update_engine[1425]: Feb 13 20:47:36.648899 update_engine[1425]: Feb 13 20:47:36.648899 update_engine[1425]: Feb 13 20:47:36.648899 update_engine[1425]: Feb 13 20:47:36.648899 update_engine[1425]: Feb 13 20:47:36.648899 update_engine[1425]: I20250213 20:47:36.648836 1425 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Feb 13 20:47:36.649052 update_engine[1425]: I20250213 20:47:36.648975 1425 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Feb 13 20:47:36.649218 update_engine[1425]: I20250213 20:47:36.649091 1425 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. 
Feb 13 20:47:36.649401 locksmithd[1453]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_REPORTING_ERROR_EVENT" NewVersion=0.0.0 NewSize=0 Feb 13 20:47:36.652545 update_engine[1425]: E20250213 20:47:36.652508 1425 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Feb 13 20:47:36.652585 update_engine[1425]: I20250213 20:47:36.652554 1425 libcurl_http_fetcher.cc:297] Transfer resulted in an error (0), 0 bytes downloaded Feb 13 20:47:36.652585 update_engine[1425]: I20250213 20:47:36.652561 1425 omaha_request_action.cc:617] Omaha request response: Feb 13 20:47:36.652585 update_engine[1425]: I20250213 20:47:36.652567 1425 action_processor.cc:65] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction Feb 13 20:47:36.652585 update_engine[1425]: I20250213 20:47:36.652571 1425 action_processor.cc:73] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction Feb 13 20:47:36.652585 update_engine[1425]: I20250213 20:47:36.652575 1425 update_attempter.cc:306] Processing Done. Feb 13 20:47:36.652585 update_engine[1425]: I20250213 20:47:36.652580 1425 update_attempter.cc:310] Error event sent. Feb 13 20:47:36.652710 update_engine[1425]: I20250213 20:47:36.652588 1425 update_check_scheduler.cc:74] Next update check in 46m12s Feb 13 20:47:36.652826 locksmithd[1453]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_IDLE" NewVersion=0.0.0 NewSize=0 Feb 13 20:47:37.815386 kubelet[2429]: E0213 20:47:37.815232 2429 kubelet.go:2901] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Feb 13 20:47:41.398808 systemd[1]: Started sshd@99-10.0.0.6:22-10.0.0.1:39458.service - OpenSSH per-connection server daemon (10.0.0.1:39458). Feb 13 20:47:41.435407 sshd[4213]: Accepted publickey for core from 10.0.0.1 port 39458 ssh2: RSA SHA256:ijJ+MGEzuaViQlbMobaPQLogVrdZNUxIS0COzLzMHAQ Feb 13 20:47:41.436643 sshd[4213]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 20:47:41.440018 systemd-logind[1423]: New session 100 of user core. Feb 13 20:47:41.449512 systemd[1]: Started session-100.scope - Session 100 of User core. Feb 13 20:47:41.552561 sshd[4213]: pam_unix(sshd:session): session closed for user core Feb 13 20:47:41.555697 systemd[1]: sshd@99-10.0.0.6:22-10.0.0.1:39458.service: Deactivated successfully. Feb 13 20:47:41.557246 systemd[1]: session-100.scope: Deactivated successfully. Feb 13 20:47:41.558780 systemd-logind[1423]: Session 100 logged out. Waiting for processes to exit. Feb 13 20:47:41.559761 systemd-logind[1423]: Removed session 100. Feb 13 20:47:42.816604 kubelet[2429]: E0213 20:47:42.816552 2429 kubelet.go:2901] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Feb 13 20:47:46.566972 systemd[1]: Started sshd@100-10.0.0.6:22-10.0.0.1:49646.service - OpenSSH per-connection server daemon (10.0.0.1:49646). Feb 13 20:47:46.602761 sshd[4230]: Accepted publickey for core from 10.0.0.1 port 49646 ssh2: RSA SHA256:ijJ+MGEzuaViQlbMobaPQLogVrdZNUxIS0COzLzMHAQ Feb 13 20:47:46.603878 sshd[4230]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 20:47:46.607948 systemd-logind[1423]: New session 101 of user core. Feb 13 20:47:46.613449 systemd[1]: Started session-101.scope - Session 101 of User core. 
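Note: the update_engine block above is Flatcar's Omaha update client checking against the literal hostname "disabled" ("Posting an Omaha request to disabled"; the blank update_engine lines are where the request's XML body sits in a raw journal). "Could not resolve host: disabled" is therefore expected behaviour when updates are switched off; on Flatcar that is conventionally SERVER=disabled in /etc/flatcar/update.conf, and the earlier "Current group set to lts" line matches a GROUP=lts entry in the same file (treat the exact paths as an assumption). The fetcher retries three times on a 1s timer, converts the failure to error 2000 (kActionCodeOmahaErrorInHTTPResponse, payload error 37), fails to send the error event for the same reason, and goes idle until the next check in 46m12s; the locksmithd lines just mirror those state transitions. Confirming the setting on a node, as a sketch:

    def update_server(paths=("/etc/flatcar/update.conf",
                             "/usr/share/flatcar/update.conf")):
        # The /etc file overrides the /usr/share default;
        # SERVER=disabled turns the Omaha check off.
        for path in paths:
            try:
                with open(path) as f:
                    for line in f:
                        if line.startswith("SERVER="):
                            return path, line.strip().split("=", 1)[1]
            except FileNotFoundError:
                continue
        return None, None

    print(update_server())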
Feb 13 20:47:46.670360 kubelet[2429]: E0213 20:47:46.670322 2429 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 20:47:46.671187 kubelet[2429]: E0213 20:47:46.671131 2429 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"install-cni-plugin\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/flannel/flannel-cni-plugin:v1.1.2\\\"\"" pod="kube-flannel/kube-flannel-ds-2bqj5" podUID="40748397-e6cc-4e80-aa93-47714f7f3a4c" Feb 13 20:47:46.715531 sshd[4230]: pam_unix(sshd:session): session closed for user core Feb 13 20:47:46.718745 systemd[1]: sshd@100-10.0.0.6:22-10.0.0.1:49646.service: Deactivated successfully. Feb 13 20:47:46.720943 systemd[1]: session-101.scope: Deactivated successfully. Feb 13 20:47:46.721520 systemd-logind[1423]: Session 101 logged out. Waiting for processes to exit. Feb 13 20:47:46.722286 systemd-logind[1423]: Removed session 101. Feb 13 20:47:47.817906 kubelet[2429]: E0213 20:47:47.817852 2429 kubelet.go:2901] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Feb 13 20:47:51.725801 systemd[1]: Started sshd@101-10.0.0.6:22-10.0.0.1:49656.service - OpenSSH per-connection server daemon (10.0.0.1:49656). Feb 13 20:47:51.761629 sshd[4244]: Accepted publickey for core from 10.0.0.1 port 49656 ssh2: RSA SHA256:ijJ+MGEzuaViQlbMobaPQLogVrdZNUxIS0COzLzMHAQ Feb 13 20:47:51.762748 sshd[4244]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 20:47:51.766363 systemd-logind[1423]: New session 102 of user core. Feb 13 20:47:51.772427 systemd[1]: Started session-102.scope - Session 102 of User core. Feb 13 20:47:51.874755 sshd[4244]: pam_unix(sshd:session): session closed for user core Feb 13 20:47:51.878059 systemd[1]: sshd@101-10.0.0.6:22-10.0.0.1:49656.service: Deactivated successfully. Feb 13 20:47:51.879628 systemd[1]: session-102.scope: Deactivated successfully. Feb 13 20:47:51.880208 systemd-logind[1423]: Session 102 logged out. Waiting for processes to exit. Feb 13 20:47:51.881063 systemd-logind[1423]: Removed session 102. Feb 13 20:47:52.819586 kubelet[2429]: E0213 20:47:52.819525 2429 kubelet.go:2901] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Feb 13 20:47:56.885836 systemd[1]: Started sshd@102-10.0.0.6:22-10.0.0.1:34204.service - OpenSSH per-connection server daemon (10.0.0.1:34204). Feb 13 20:47:56.921810 sshd[4258]: Accepted publickey for core from 10.0.0.1 port 34204 ssh2: RSA SHA256:ijJ+MGEzuaViQlbMobaPQLogVrdZNUxIS0COzLzMHAQ Feb 13 20:47:56.922925 sshd[4258]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 20:47:56.926434 systemd-logind[1423]: New session 103 of user core. Feb 13 20:47:56.932440 systemd[1]: Started session-103.scope - Session 103 of User core. Feb 13 20:47:57.032721 sshd[4258]: pam_unix(sshd:session): session closed for user core Feb 13 20:47:57.035785 systemd[1]: sshd@102-10.0.0.6:22-10.0.0.1:34204.service: Deactivated successfully. Feb 13 20:47:57.037372 systemd[1]: session-103.scope: Deactivated successfully. Feb 13 20:47:57.038862 systemd-logind[1423]: Session 103 logged out. Waiting for processes to exit. 
Feb 13 20:47:57.040050 systemd-logind[1423]: Removed session 103. Feb 13 20:47:57.671070 kubelet[2429]: E0213 20:47:57.670861 2429 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 20:47:57.671492 kubelet[2429]: E0213 20:47:57.671448 2429 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"install-cni-plugin\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/flannel/flannel-cni-plugin:v1.1.2\\\"\"" pod="kube-flannel/kube-flannel-ds-2bqj5" podUID="40748397-e6cc-4e80-aa93-47714f7f3a4c" Feb 13 20:47:57.820133 kubelet[2429]: E0213 20:47:57.820103 2429 kubelet.go:2901] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Feb 13 20:48:02.049953 systemd[1]: Started sshd@103-10.0.0.6:22-10.0.0.1:34220.service - OpenSSH per-connection server daemon (10.0.0.1:34220). Feb 13 20:48:02.085868 sshd[4273]: Accepted publickey for core from 10.0.0.1 port 34220 ssh2: RSA SHA256:ijJ+MGEzuaViQlbMobaPQLogVrdZNUxIS0COzLzMHAQ Feb 13 20:48:02.087104 sshd[4273]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 20:48:02.090849 systemd-logind[1423]: New session 104 of user core. Feb 13 20:48:02.101441 systemd[1]: Started session-104.scope - Session 104 of User core. Feb 13 20:48:02.202065 sshd[4273]: pam_unix(sshd:session): session closed for user core Feb 13 20:48:02.205063 systemd[1]: sshd@103-10.0.0.6:22-10.0.0.1:34220.service: Deactivated successfully. Feb 13 20:48:02.206687 systemd[1]: session-104.scope: Deactivated successfully. Feb 13 20:48:02.208551 systemd-logind[1423]: Session 104 logged out. Waiting for processes to exit. Feb 13 20:48:02.209421 systemd-logind[1423]: Removed session 104. Feb 13 20:48:02.821709 kubelet[2429]: E0213 20:48:02.821665 2429 kubelet.go:2901] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Feb 13 20:48:07.213048 systemd[1]: Started sshd@104-10.0.0.6:22-10.0.0.1:35298.service - OpenSSH per-connection server daemon (10.0.0.1:35298). Feb 13 20:48:07.248673 sshd[4287]: Accepted publickey for core from 10.0.0.1 port 35298 ssh2: RSA SHA256:ijJ+MGEzuaViQlbMobaPQLogVrdZNUxIS0COzLzMHAQ Feb 13 20:48:07.249779 sshd[4287]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 20:48:07.253058 systemd-logind[1423]: New session 105 of user core. Feb 13 20:48:07.264494 systemd[1]: Started session-105.scope - Session 105 of User core. Feb 13 20:48:07.366246 sshd[4287]: pam_unix(sshd:session): session closed for user core Feb 13 20:48:07.369631 systemd[1]: sshd@104-10.0.0.6:22-10.0.0.1:35298.service: Deactivated successfully. Feb 13 20:48:07.371301 systemd[1]: session-105.scope: Deactivated successfully. Feb 13 20:48:07.371865 systemd-logind[1423]: Session 105 logged out. Waiting for processes to exit. Feb 13 20:48:07.372765 systemd-logind[1423]: Removed session 105. 
Feb 13 20:48:07.671998 kubelet[2429]: E0213 20:48:07.671966 2429 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 13 20:48:07.823116 kubelet[2429]: E0213 20:48:07.823069 2429 kubelet.go:2901] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Feb 13 20:48:10.670190 kubelet[2429]: E0213 20:48:10.670149 2429 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 13 20:48:10.671030 kubelet[2429]: E0213 20:48:10.670863 2429 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"install-cni-plugin\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/flannel/flannel-cni-plugin:v1.1.2\\\"\"" pod="kube-flannel/kube-flannel-ds-2bqj5" podUID="40748397-e6cc-4e80-aa93-47714f7f3a4c"
Feb 13 20:48:12.376857 systemd[1]: Started sshd@105-10.0.0.6:22-10.0.0.1:35304.service - OpenSSH per-connection server daemon (10.0.0.1:35304).
Feb 13 20:48:12.412625 sshd[4302]: Accepted publickey for core from 10.0.0.1 port 35304 ssh2: RSA SHA256:ijJ+MGEzuaViQlbMobaPQLogVrdZNUxIS0COzLzMHAQ
Feb 13 20:48:12.413777 sshd[4302]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 20:48:12.417444 systemd-logind[1423]: New session 106 of user core.
Feb 13 20:48:12.424435 systemd[1]: Started session-106.scope - Session 106 of User core.
Feb 13 20:48:12.525384 sshd[4302]: pam_unix(sshd:session): session closed for user core
Feb 13 20:48:12.528748 systemd[1]: sshd@105-10.0.0.6:22-10.0.0.1:35304.service: Deactivated successfully.
Feb 13 20:48:12.530919 systemd[1]: session-106.scope: Deactivated successfully.
Feb 13 20:48:12.531718 systemd-logind[1423]: Session 106 logged out. Waiting for processes to exit.
Feb 13 20:48:12.532495 systemd-logind[1423]: Removed session 106.
Feb 13 20:48:12.823866 kubelet[2429]: E0213 20:48:12.823786 2429 kubelet.go:2901] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Feb 13 20:48:17.535893 systemd[1]: Started sshd@106-10.0.0.6:22-10.0.0.1:34896.service - OpenSSH per-connection server daemon (10.0.0.1:34896).
Feb 13 20:48:17.571761 sshd[4318]: Accepted publickey for core from 10.0.0.1 port 34896 ssh2: RSA SHA256:ijJ+MGEzuaViQlbMobaPQLogVrdZNUxIS0COzLzMHAQ
Feb 13 20:48:17.572981 sshd[4318]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 20:48:17.576703 systemd-logind[1423]: New session 107 of user core.
Feb 13 20:48:17.583458 systemd[1]: Started session-107.scope - Session 107 of User core.
Feb 13 20:48:17.686642 sshd[4318]: pam_unix(sshd:session): session closed for user core
Feb 13 20:48:17.689662 systemd[1]: sshd@106-10.0.0.6:22-10.0.0.1:34896.service: Deactivated successfully.
Feb 13 20:48:17.692290 systemd[1]: session-107.scope: Deactivated successfully.
Feb 13 20:48:17.693366 systemd-logind[1423]: Session 107 logged out. Waiting for processes to exit.
Feb 13 20:48:17.695413 systemd-logind[1423]: Removed session 107.
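The pod_workers.go:1301 entries report the same failure on every pod sync: the install-cni-plugin container cannot start because the pull of docker.io/flannel/flannel-cni-plugin:v1.1.2 is in ImagePullBackOff. Kubernetes documents the back-off as an exponentially growing delay capped at 300 seconds, which is why the identical message simply repeats for as long as the pull keeps failing. A sketch of that schedule, assuming the documented defaults of a 10-second initial delay doubling to the cap (the function name is illustrative):

    # Back-off schedule behind ImagePullBackOff (assumption: documented
    # defaults of a 10 s initial delay, doubling, capped at 300 s).
    from itertools import islice

    def pull_backoff(initial: float = 10.0, cap: float = 300.0):
        delay = initial
        while True:
            yield min(delay, cap)
            delay *= 2

    print(list(islice(pull_backoff(), 8)))
    # -> [10.0, 20.0, 40.0, 80.0, 160.0, 300.0, 300.0, 300.0]

Once the delay saturates at the cap, the log settles into the steady repetition visible throughout this section.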
Feb 13 20:48:17.824908 kubelet[2429]: E0213 20:48:17.824795 2429 kubelet.go:2901] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Feb 13 20:48:22.697214 systemd[1]: Started sshd@107-10.0.0.6:22-10.0.0.1:53974.service - OpenSSH per-connection server daemon (10.0.0.1:53974).
Feb 13 20:48:22.734027 sshd[4334]: Accepted publickey for core from 10.0.0.1 port 53974 ssh2: RSA SHA256:ijJ+MGEzuaViQlbMobaPQLogVrdZNUxIS0COzLzMHAQ
Feb 13 20:48:22.735286 sshd[4334]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 20:48:22.739733 systemd-logind[1423]: New session 108 of user core.
Feb 13 20:48:22.749449 systemd[1]: Started session-108.scope - Session 108 of User core.
Feb 13 20:48:22.825912 kubelet[2429]: E0213 20:48:22.825873 2429 kubelet.go:2901] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Feb 13 20:48:22.852335 sshd[4334]: pam_unix(sshd:session): session closed for user core
Feb 13 20:48:22.855644 systemd[1]: sshd@107-10.0.0.6:22-10.0.0.1:53974.service: Deactivated successfully.
Feb 13 20:48:22.857196 systemd[1]: session-108.scope: Deactivated successfully.
Feb 13 20:48:22.857820 systemd-logind[1423]: Session 108 logged out. Waiting for processes to exit.
Feb 13 20:48:22.858825 systemd-logind[1423]: Removed session 108.
Feb 13 20:48:24.670514 kubelet[2429]: E0213 20:48:24.670462 2429 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 13 20:48:24.671085 kubelet[2429]: E0213 20:48:24.671043 2429 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"install-cni-plugin\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/flannel/flannel-cni-plugin:v1.1.2\\\"\"" pod="kube-flannel/kube-flannel-ds-2bqj5" podUID="40748397-e6cc-4e80-aa93-47714f7f3a4c"
Feb 13 20:48:27.670102 kubelet[2429]: E0213 20:48:27.670048 2429 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 13 20:48:27.827139 kubelet[2429]: E0213 20:48:27.827101 2429 kubelet.go:2901] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Feb 13 20:48:27.861726 systemd[1]: Started sshd@108-10.0.0.6:22-10.0.0.1:53976.service - OpenSSH per-connection server daemon (10.0.0.1:53976).
Feb 13 20:48:27.897532 sshd[4348]: Accepted publickey for core from 10.0.0.1 port 53976 ssh2: RSA SHA256:ijJ+MGEzuaViQlbMobaPQLogVrdZNUxIS0COzLzMHAQ
Feb 13 20:48:27.898742 sshd[4348]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 20:48:27.902268 systemd-logind[1423]: New session 109 of user core.
Feb 13 20:48:27.912537 systemd[1]: Started session-109.scope - Session 109 of User core.
Feb 13 20:48:28.014429 sshd[4348]: pam_unix(sshd:session): session closed for user core
Feb 13 20:48:28.016884 systemd[1]: sshd@108-10.0.0.6:22-10.0.0.1:53976.service: Deactivated successfully.
Feb 13 20:48:28.019270 systemd-logind[1423]: Session 109 logged out. Waiting for processes to exit.
Feb 13 20:48:28.019330 systemd[1]: session-109.scope: Deactivated successfully.
Feb 13 20:48:28.020270 systemd-logind[1423]: Removed session 109.
Feb 13 20:48:32.828664 kubelet[2429]: E0213 20:48:32.828601 2429 kubelet.go:2901] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Feb 13 20:48:33.024767 systemd[1]: Started sshd@109-10.0.0.6:22-10.0.0.1:56856.service - OpenSSH per-connection server daemon (10.0.0.1:56856).
Feb 13 20:48:33.060623 sshd[4363]: Accepted publickey for core from 10.0.0.1 port 56856 ssh2: RSA SHA256:ijJ+MGEzuaViQlbMobaPQLogVrdZNUxIS0COzLzMHAQ
Feb 13 20:48:33.061746 sshd[4363]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 20:48:33.065058 systemd-logind[1423]: New session 110 of user core.
Feb 13 20:48:33.073522 systemd[1]: Started session-110.scope - Session 110 of User core.
Feb 13 20:48:33.175774 sshd[4363]: pam_unix(sshd:session): session closed for user core
Feb 13 20:48:33.179408 systemd[1]: sshd@109-10.0.0.6:22-10.0.0.1:56856.service: Deactivated successfully.
Feb 13 20:48:33.180980 systemd[1]: session-110.scope: Deactivated successfully.
Feb 13 20:48:33.182686 systemd-logind[1423]: Session 110 logged out. Waiting for processes to exit.
Feb 13 20:48:33.183662 systemd-logind[1423]: Removed session 110.
Feb 13 20:48:34.670026 kubelet[2429]: E0213 20:48:34.669984 2429 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 13 20:48:37.671570 kubelet[2429]: E0213 20:48:37.671073 2429 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 13 20:48:37.672088 kubelet[2429]: E0213 20:48:37.672052 2429 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"install-cni-plugin\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/flannel/flannel-cni-plugin:v1.1.2\\\"\"" pod="kube-flannel/kube-flannel-ds-2bqj5" podUID="40748397-e6cc-4e80-aa93-47714f7f3a4c"
Feb 13 20:48:37.829913 kubelet[2429]: E0213 20:48:37.829857 2429 kubelet.go:2901] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Feb 13 20:48:38.186862 systemd[1]: Started sshd@110-10.0.0.6:22-10.0.0.1:56872.service - OpenSSH per-connection server daemon (10.0.0.1:56872).
Feb 13 20:48:38.222757 sshd[4379]: Accepted publickey for core from 10.0.0.1 port 56872 ssh2: RSA SHA256:ijJ+MGEzuaViQlbMobaPQLogVrdZNUxIS0COzLzMHAQ
Feb 13 20:48:38.223915 sshd[4379]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 20:48:38.227179 systemd-logind[1423]: New session 111 of user core.
Feb 13 20:48:38.236484 systemd[1]: Started session-111.scope - Session 111 of User core.
Feb 13 20:48:38.338540 sshd[4379]: pam_unix(sshd:session): session closed for user core
Feb 13 20:48:38.341597 systemd[1]: sshd@110-10.0.0.6:22-10.0.0.1:56872.service: Deactivated successfully.
Feb 13 20:48:38.343215 systemd[1]: session-111.scope: Deactivated successfully.
Feb 13 20:48:38.345302 systemd-logind[1423]: Session 111 logged out. Waiting for processes to exit.
Feb 13 20:48:38.346258 systemd-logind[1423]: Removed session 111.
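The kubelet.go:2901 lines are a consequence of the same stuck pull: the container runtime reports NetworkPluginNotReady until a CNI network config exists, and the flannel install step that would write one never runs while its image cannot be pulled. A minimal check of that precondition, assuming the conventional /etc/cni/net.d configuration directory (a guess for this host, not taken from the log):

    # Why "cni plugin not initialized" persists: no CNI network config has
    # been written yet (assumption: the conventional default config paths).
    import glob

    def cni_initialized(conf_dir: str = "/etc/cni/net.d") -> bool:
        patterns = ("*.conf", "*.conflist", "*.json")
        return any(glob.glob(f"{conf_dir}/{p}") for p in patterns)

    print("network ready" if cni_initialized() else
          "cni plugin not initialized")

Until a config file appears there, the node stays NotReady and this entry recurs on each runtime status check.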
Feb 13 20:48:42.670496 kubelet[2429]: E0213 20:48:42.670377 2429 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 13 20:48:42.830805 kubelet[2429]: E0213 20:48:42.830760 2429 kubelet.go:2901] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Feb 13 20:48:43.349908 systemd[1]: Started sshd@111-10.0.0.6:22-10.0.0.1:34168.service - OpenSSH per-connection server daemon (10.0.0.1:34168).
Feb 13 20:48:43.386289 sshd[4394]: Accepted publickey for core from 10.0.0.1 port 34168 ssh2: RSA SHA256:ijJ+MGEzuaViQlbMobaPQLogVrdZNUxIS0COzLzMHAQ
Feb 13 20:48:43.387522 sshd[4394]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 20:48:43.391362 systemd-logind[1423]: New session 112 of user core.
Feb 13 20:48:43.405473 systemd[1]: Started session-112.scope - Session 112 of User core.
Feb 13 20:48:43.507863 sshd[4394]: pam_unix(sshd:session): session closed for user core
Feb 13 20:48:43.510988 systemd[1]: sshd@111-10.0.0.6:22-10.0.0.1:34168.service: Deactivated successfully.
Feb 13 20:48:43.513139 systemd[1]: session-112.scope: Deactivated successfully.
Feb 13 20:48:43.514072 systemd-logind[1423]: Session 112 logged out. Waiting for processes to exit.
Feb 13 20:48:43.514982 systemd-logind[1423]: Removed session 112.
Feb 13 20:48:47.831864 kubelet[2429]: E0213 20:48:47.831826 2429 kubelet.go:2901] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Feb 13 20:48:48.518844 systemd[1]: Started sshd@112-10.0.0.6:22-10.0.0.1:34172.service - OpenSSH per-connection server daemon (10.0.0.1:34172).
Feb 13 20:48:48.554727 sshd[4411]: Accepted publickey for core from 10.0.0.1 port 34172 ssh2: RSA SHA256:ijJ+MGEzuaViQlbMobaPQLogVrdZNUxIS0COzLzMHAQ
Feb 13 20:48:48.555938 sshd[4411]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 20:48:48.559895 systemd-logind[1423]: New session 113 of user core.
Feb 13 20:48:48.570492 systemd[1]: Started session-113.scope - Session 113 of User core.
Feb 13 20:48:48.671137 kubelet[2429]: E0213 20:48:48.671098 2429 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 13 20:48:48.671830 kubelet[2429]: E0213 20:48:48.671789 2429 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"install-cni-plugin\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/flannel/flannel-cni-plugin:v1.1.2\\\"\"" pod="kube-flannel/kube-flannel-ds-2bqj5" podUID="40748397-e6cc-4e80-aa93-47714f7f3a4c"
Feb 13 20:48:48.674521 sshd[4411]: pam_unix(sshd:session): session closed for user core
Feb 13 20:48:48.677695 systemd[1]: sshd@112-10.0.0.6:22-10.0.0.1:34172.service: Deactivated successfully.
Feb 13 20:48:48.679368 systemd[1]: session-113.scope: Deactivated successfully.
Feb 13 20:48:48.679929 systemd-logind[1423]: Session 113 logged out. Waiting for processes to exit.
Feb 13 20:48:48.680809 systemd-logind[1423]: Removed session 113.
Feb 13 20:48:52.833082 kubelet[2429]: E0213 20:48:52.832978 2429 kubelet.go:2901] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Feb 13 20:48:53.688755 systemd[1]: Started sshd@113-10.0.0.6:22-10.0.0.1:55170.service - OpenSSH per-connection server daemon (10.0.0.1:55170).
Feb 13 20:48:53.724812 sshd[4425]: Accepted publickey for core from 10.0.0.1 port 55170 ssh2: RSA SHA256:ijJ+MGEzuaViQlbMobaPQLogVrdZNUxIS0COzLzMHAQ
Feb 13 20:48:53.725946 sshd[4425]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 20:48:53.729368 systemd-logind[1423]: New session 114 of user core.
Feb 13 20:48:53.744437 systemd[1]: Started session-114.scope - Session 114 of User core.
Feb 13 20:48:53.845005 sshd[4425]: pam_unix(sshd:session): session closed for user core
Feb 13 20:48:53.848536 systemd[1]: sshd@113-10.0.0.6:22-10.0.0.1:55170.service: Deactivated successfully.
Feb 13 20:48:53.850174 systemd[1]: session-114.scope: Deactivated successfully.
Feb 13 20:48:53.850788 systemd-logind[1423]: Session 114 logged out. Waiting for processes to exit.
Feb 13 20:48:53.851657 systemd-logind[1423]: Removed session 114.
Feb 13 20:48:57.834428 kubelet[2429]: E0213 20:48:57.834381 2429 kubelet.go:2901] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Feb 13 20:48:58.855872 systemd[1]: Started sshd@114-10.0.0.6:22-10.0.0.1:55184.service - OpenSSH per-connection server daemon (10.0.0.1:55184).
Feb 13 20:48:58.891879 sshd[4439]: Accepted publickey for core from 10.0.0.1 port 55184 ssh2: RSA SHA256:ijJ+MGEzuaViQlbMobaPQLogVrdZNUxIS0COzLzMHAQ
Feb 13 20:48:58.893090 sshd[4439]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 20:48:58.896342 systemd-logind[1423]: New session 115 of user core.
Feb 13 20:48:58.908494 systemd[1]: Started session-115.scope - Session 115 of User core.
Feb 13 20:48:59.012501 sshd[4439]: pam_unix(sshd:session): session closed for user core
Feb 13 20:48:59.015568 systemd[1]: sshd@114-10.0.0.6:22-10.0.0.1:55184.service: Deactivated successfully.
Feb 13 20:48:59.017148 systemd[1]: session-115.scope: Deactivated successfully.
Feb 13 20:48:59.018363 systemd-logind[1423]: Session 115 logged out. Waiting for processes to exit.
Feb 13 20:48:59.019270 systemd-logind[1423]: Removed session 115.
Feb 13 20:49:02.835917 kubelet[2429]: E0213 20:49:02.835872 2429 kubelet.go:2901] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Feb 13 20:49:03.670887 kubelet[2429]: E0213 20:49:03.670705 2429 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 13 20:49:03.673658 kubelet[2429]: E0213 20:49:03.673561 2429 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"install-cni-plugin\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/flannel/flannel-cni-plugin:v1.1.2\\\"\"" pod="kube-flannel/kube-flannel-ds-2bqj5" podUID="40748397-e6cc-4e80-aa93-47714f7f3a4c"
Feb 13 20:49:04.021878 systemd[1]: Started sshd@115-10.0.0.6:22-10.0.0.1:38558.service - OpenSSH per-connection server daemon (10.0.0.1:38558).
Feb 13 20:49:04.057623 sshd[4454]: Accepted publickey for core from 10.0.0.1 port 38558 ssh2: RSA SHA256:ijJ+MGEzuaViQlbMobaPQLogVrdZNUxIS0COzLzMHAQ
Feb 13 20:49:04.058797 sshd[4454]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 20:49:04.062378 systemd-logind[1423]: New session 116 of user core.
Feb 13 20:49:04.068455 systemd[1]: Started session-116.scope - Session 116 of User core.
Feb 13 20:49:04.168553 sshd[4454]: pam_unix(sshd:session): session closed for user core
Feb 13 20:49:04.171843 systemd[1]: sshd@115-10.0.0.6:22-10.0.0.1:38558.service: Deactivated successfully.
Feb 13 20:49:04.173450 systemd[1]: session-116.scope: Deactivated successfully.
Feb 13 20:49:04.173958 systemd-logind[1423]: Session 116 logged out. Waiting for processes to exit.
Feb 13 20:49:04.174787 systemd-logind[1423]: Removed session 116.
Feb 13 20:49:07.837462 kubelet[2429]: E0213 20:49:07.837406 2429 kubelet.go:2901] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Feb 13 20:49:09.178924 systemd[1]: Started sshd@116-10.0.0.6:22-10.0.0.1:38562.service - OpenSSH per-connection server daemon (10.0.0.1:38562).
Feb 13 20:49:09.215926 sshd[4468]: Accepted publickey for core from 10.0.0.1 port 38562 ssh2: RSA SHA256:ijJ+MGEzuaViQlbMobaPQLogVrdZNUxIS0COzLzMHAQ
Feb 13 20:49:09.217063 sshd[4468]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 20:49:09.221029 systemd-logind[1423]: New session 117 of user core.
Feb 13 20:49:09.235440 systemd[1]: Started session-117.scope - Session 117 of User core.
Feb 13 20:49:09.337725 sshd[4468]: pam_unix(sshd:session): session closed for user core
Feb 13 20:49:09.340779 systemd[1]: sshd@116-10.0.0.6:22-10.0.0.1:38562.service: Deactivated successfully.
Feb 13 20:49:09.342909 systemd[1]: session-117.scope: Deactivated successfully.
Feb 13 20:49:09.343542 systemd-logind[1423]: Session 117 logged out. Waiting for processes to exit.
Feb 13 20:49:09.344380 systemd-logind[1423]: Removed session 117.