Feb 13 20:38:57.946066 kernel: Booting Linux on physical CPU 0x0000000000 [0x413fd0c1]
Feb 13 20:38:57.946088 kernel: Linux version 6.6.74-flatcar (build@pony-truck.infra.kinvolk.io) (aarch64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p17) 13.3.1 20240614, GNU ld (Gentoo 2.42 p3) 2.42.0) #1 SMP PREEMPT Thu Feb 13 18:13:29 -00 2025
Feb 13 20:38:57.946099 kernel: KASLR enabled
Feb 13 20:38:57.946105 kernel: efi: EFI v2.7 by EDK II
Feb 13 20:38:57.946111 kernel: efi: SMBIOS 3.0=0xdced0000 MEMATTR=0xdba86018 ACPI 2.0=0xd9710018 RNG=0xd971e498 MEMRESERVE=0xd9b43d18
Feb 13 20:38:57.946117 kernel: random: crng init done
Feb 13 20:38:57.946124 kernel: ACPI: Early table checksum verification disabled
Feb 13 20:38:57.946130 kernel: ACPI: RSDP 0x00000000D9710018 000024 (v02 BOCHS )
Feb 13 20:38:57.946136 kernel: ACPI: XSDT 0x00000000D971FE98 000064 (v01 BOCHS BXPC 00000001 01000013)
Feb 13 20:38:57.946144 kernel: ACPI: FACP 0x00000000D971FA98 000114 (v06 BOCHS BXPC 00000001 BXPC 00000001)
Feb 13 20:38:57.946150 kernel: ACPI: DSDT 0x00000000D9717518 0014A2 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Feb 13 20:38:57.946157 kernel: ACPI: APIC 0x00000000D971FC18 0001A8 (v04 BOCHS BXPC 00000001 BXPC 00000001)
Feb 13 20:38:57.946163 kernel: ACPI: PPTT 0x00000000D971D898 00009C (v02 BOCHS BXPC 00000001 BXPC 00000001)
Feb 13 20:38:57.946169 kernel: ACPI: GTDT 0x00000000D971E818 000060 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Feb 13 20:38:57.946176 kernel: ACPI: MCFG 0x00000000D971E918 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001)
Feb 13 20:38:57.946184 kernel: ACPI: SPCR 0x00000000D971FF98 000050 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Feb 13 20:38:57.946191 kernel: ACPI: DBG2 0x00000000D971E418 000057 (v00 BOCHS BXPC 00000001 BXPC 00000001)
Feb 13 20:38:57.946198 kernel: ACPI: IORT 0x00000000D971E718 000080 (v03 BOCHS BXPC 00000001 BXPC 00000001)
Feb 13 20:38:57.946204 kernel: ACPI: SPCR: console: pl011,mmio,0x9000000,9600
Feb 13 20:38:57.946210 kernel: NUMA: Failed to initialise from firmware
Feb 13 20:38:57.946217 kernel: NUMA: Faking a node at [mem 0x0000000040000000-0x00000000dcffffff]
Feb 13 20:38:57.946224 kernel: NUMA: NODE_DATA [mem 0xdc957800-0xdc95cfff]
Feb 13 20:38:57.946230 kernel: Zone ranges:
Feb 13 20:38:57.946237 kernel: DMA [mem 0x0000000040000000-0x00000000dcffffff]
Feb 13 20:38:57.946243 kernel: DMA32 empty
Feb 13 20:38:57.946251 kernel: Normal empty
Feb 13 20:38:57.946257 kernel: Movable zone start for each node
Feb 13 20:38:57.946264 kernel: Early memory node ranges
Feb 13 20:38:57.946271 kernel: node 0: [mem 0x0000000040000000-0x00000000d976ffff]
Feb 13 20:38:57.946277 kernel: node 0: [mem 0x00000000d9770000-0x00000000d9b3ffff]
Feb 13 20:38:57.946283 kernel: node 0: [mem 0x00000000d9b40000-0x00000000dce1ffff]
Feb 13 20:38:57.946290 kernel: node 0: [mem 0x00000000dce20000-0x00000000dceaffff]
Feb 13 20:38:57.946297 kernel: node 0: [mem 0x00000000dceb0000-0x00000000dcebffff]
Feb 13 20:38:57.946304 kernel: node 0: [mem 0x00000000dcec0000-0x00000000dcfdffff]
Feb 13 20:38:57.946310 kernel: node 0: [mem 0x00000000dcfe0000-0x00000000dcffffff]
Feb 13 20:38:57.946316 kernel: Initmem setup node 0 [mem 0x0000000040000000-0x00000000dcffffff]
Feb 13 20:38:57.946323 kernel: On node 0, zone DMA: 12288 pages in unavailable ranges
Feb 13 20:38:57.946331 kernel: psci: probing for conduit method from ACPI.
Feb 13 20:38:57.946337 kernel: psci: PSCIv1.1 detected in firmware.
Feb 13 20:38:57.946344 kernel: psci: Using standard PSCI v0.2 function IDs
Feb 13 20:38:57.946354 kernel: psci: Trusted OS migration not required
Feb 13 20:38:57.946361 kernel: psci: SMC Calling Convention v1.1
Feb 13 20:38:57.946368 kernel: smccc: KVM: hypervisor services detected (0x00000000 0x00000000 0x00000000 0x00000003)
Feb 13 20:38:57.946376 kernel: percpu: Embedded 31 pages/cpu s86696 r8192 d32088 u126976
Feb 13 20:38:57.946383 kernel: pcpu-alloc: s86696 r8192 d32088 u126976 alloc=31*4096
Feb 13 20:38:57.946390 kernel: pcpu-alloc: [0] 0 [0] 1 [0] 2 [0] 3
Feb 13 20:38:57.946397 kernel: Detected PIPT I-cache on CPU0
Feb 13 20:38:57.946404 kernel: CPU features: detected: GIC system register CPU interface
Feb 13 20:38:57.946411 kernel: CPU features: detected: Hardware dirty bit management
Feb 13 20:38:57.946418 kernel: CPU features: detected: Spectre-v4
Feb 13 20:38:57.946425 kernel: CPU features: detected: Spectre-BHB
Feb 13 20:38:57.946432 kernel: CPU features: kernel page table isolation forced ON by KASLR
Feb 13 20:38:57.946439 kernel: CPU features: detected: Kernel page table isolation (KPTI)
Feb 13 20:38:57.946447 kernel: CPU features: detected: ARM erratum 1418040
Feb 13 20:38:57.946454 kernel: CPU features: detected: SSBS not fully self-synchronizing
Feb 13 20:38:57.946461 kernel: alternatives: applying boot alternatives
Feb 13 20:38:57.946469 kernel: Kernel command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected acpi=force verity.usrhash=c15c751c06cfb933aa98417326b93d899c08a83ce060a940cd01082629c201a7
Feb 13 20:38:57.946477 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Feb 13 20:38:57.946484 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Feb 13 20:38:57.946491 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Feb 13 20:38:57.946498 kernel: Fallback order for Node 0: 0
Feb 13 20:38:57.946505 kernel: Built 1 zonelists, mobility grouping on. Total pages: 633024
Feb 13 20:38:57.946512 kernel: Policy zone: DMA
Feb 13 20:38:57.946519 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Feb 13 20:38:57.946527 kernel: software IO TLB: area num 4.
Feb 13 20:38:57.946534 kernel: software IO TLB: mapped [mem 0x00000000d2e00000-0x00000000d6e00000] (64MB)
Feb 13 20:38:57.946541 kernel: Memory: 2386528K/2572288K available (10240K kernel code, 2186K rwdata, 8096K rodata, 39360K init, 897K bss, 185760K reserved, 0K cma-reserved)
Feb 13 20:38:57.946548 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1
Feb 13 20:38:57.946555 kernel: rcu: Preemptible hierarchical RCU implementation.
Feb 13 20:38:57.946562 kernel: rcu: RCU event tracing is enabled.
Feb 13 20:38:57.946570 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4.
Feb 13 20:38:57.946577 kernel: Trampoline variant of Tasks RCU enabled.
Feb 13 20:38:57.946584 kernel: Tracing variant of Tasks RCU enabled.
Feb 13 20:38:57.946591 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Feb 13 20:38:57.946598 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4
Feb 13 20:38:57.946605 kernel: NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0
Feb 13 20:38:57.946613 kernel: GICv3: 256 SPIs implemented
Feb 13 20:38:57.946620 kernel: GICv3: 0 Extended SPIs implemented
Feb 13 20:38:57.946627 kernel: Root IRQ handler: gic_handle_irq
Feb 13 20:38:57.946634 kernel: GICv3: GICv3 features: 16 PPIs, DirectLPI
Feb 13 20:38:57.946641 kernel: GICv3: CPU0: found redistributor 0 region 0:0x00000000080a0000
Feb 13 20:38:57.946648 kernel: ITS [mem 0x08080000-0x0809ffff]
Feb 13 20:38:57.946654 kernel: ITS@0x0000000008080000: allocated 8192 Devices @400c0000 (indirect, esz 8, psz 64K, shr 1)
Feb 13 20:38:57.946661 kernel: ITS@0x0000000008080000: allocated 8192 Interrupt Collections @400d0000 (flat, esz 8, psz 64K, shr 1)
Feb 13 20:38:57.946668 kernel: GICv3: using LPI property table @0x00000000400f0000
Feb 13 20:38:57.946675 kernel: GICv3: CPU0: using allocated LPI pending table @0x0000000040100000
Feb 13 20:38:57.946686 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Feb 13 20:38:57.946697 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Feb 13 20:38:57.946704 kernel: arch_timer: cp15 timer(s) running at 25.00MHz (virt).
Feb 13 20:38:57.946711 kernel: clocksource: arch_sys_counter: mask: 0xffffffffffffff max_cycles: 0x5c40939b5, max_idle_ns: 440795202646 ns
Feb 13 20:38:57.946718 kernel: sched_clock: 56 bits at 25MHz, resolution 40ns, wraps every 4398046511100ns
Feb 13 20:38:57.946725 kernel: arm-pv: using stolen time PV
Feb 13 20:38:57.946733 kernel: Console: colour dummy device 80x25
Feb 13 20:38:57.946740 kernel: ACPI: Core revision 20230628
Feb 13 20:38:57.946747 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 50.00 BogoMIPS (lpj=25000)
Feb 13 20:38:57.946754 kernel: pid_max: default: 32768 minimum: 301
Feb 13 20:38:57.946761 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity
Feb 13 20:38:57.946770 kernel: landlock: Up and running.
Feb 13 20:38:57.946776 kernel: SELinux: Initializing.
Feb 13 20:38:57.946783 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Feb 13 20:38:57.946791 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Feb 13 20:38:57.946798 kernel: RCU Tasks: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Feb 13 20:38:57.946805 kernel: RCU Tasks Trace: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Feb 13 20:38:57.946812 kernel: rcu: Hierarchical SRCU implementation.
Feb 13 20:38:57.946820 kernel: rcu: Max phase no-delay instances is 400.
Feb 13 20:38:57.946827 kernel: Platform MSI: ITS@0x8080000 domain created
Feb 13 20:38:57.946835 kernel: PCI/MSI: ITS@0x8080000 domain created
Feb 13 20:38:57.946842 kernel: Remapping and enabling EFI services.
Feb 13 20:38:57.946849 kernel: smp: Bringing up secondary CPUs ...
Feb 13 20:38:57.946856 kernel: Detected PIPT I-cache on CPU1
Feb 13 20:38:57.946863 kernel: GICv3: CPU1: found redistributor 1 region 0:0x00000000080c0000
Feb 13 20:38:57.946871 kernel: GICv3: CPU1: using allocated LPI pending table @0x0000000040110000
Feb 13 20:38:57.946878 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Feb 13 20:38:57.946885 kernel: CPU1: Booted secondary processor 0x0000000001 [0x413fd0c1]
Feb 13 20:38:57.946892 kernel: Detected PIPT I-cache on CPU2
Feb 13 20:38:57.946899 kernel: GICv3: CPU2: found redistributor 2 region 0:0x00000000080e0000
Feb 13 20:38:57.946907 kernel: GICv3: CPU2: using allocated LPI pending table @0x0000000040120000
Feb 13 20:38:57.946915 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Feb 13 20:38:57.946926 kernel: CPU2: Booted secondary processor 0x0000000002 [0x413fd0c1]
Feb 13 20:38:57.946935 kernel: Detected PIPT I-cache on CPU3
Feb 13 20:38:57.946943 kernel: GICv3: CPU3: found redistributor 3 region 0:0x0000000008100000
Feb 13 20:38:57.946950 kernel: GICv3: CPU3: using allocated LPI pending table @0x0000000040130000
Feb 13 20:38:57.946958 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Feb 13 20:38:57.946965 kernel: CPU3: Booted secondary processor 0x0000000003 [0x413fd0c1]
Feb 13 20:38:57.946972 kernel: smp: Brought up 1 node, 4 CPUs
Feb 13 20:38:57.946981 kernel: SMP: Total of 4 processors activated.
Feb 13 20:38:57.946988 kernel: CPU features: detected: 32-bit EL0 Support
Feb 13 20:38:57.946996 kernel: CPU features: detected: Data cache clean to the PoU not required for I/D coherence
Feb 13 20:38:57.947003 kernel: CPU features: detected: Common not Private translations
Feb 13 20:38:57.947011 kernel: CPU features: detected: CRC32 instructions
Feb 13 20:38:57.947018 kernel: CPU features: detected: Enhanced Virtualization Traps
Feb 13 20:38:57.947026 kernel: CPU features: detected: RCpc load-acquire (LDAPR)
Feb 13 20:38:57.947034 kernel: CPU features: detected: LSE atomic instructions
Feb 13 20:38:57.947050 kernel: CPU features: detected: Privileged Access Never
Feb 13 20:38:57.947058 kernel: CPU features: detected: RAS Extension Support
Feb 13 20:38:57.947066 kernel: CPU features: detected: Speculative Store Bypassing Safe (SSBS)
Feb 13 20:38:57.947073 kernel: CPU: All CPU(s) started at EL1
Feb 13 20:38:57.947081 kernel: alternatives: applying system-wide alternatives
Feb 13 20:38:57.947088 kernel: devtmpfs: initialized
Feb 13 20:38:57.947096 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Feb 13 20:38:57.947104 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear)
Feb 13 20:38:57.947111 kernel: pinctrl core: initialized pinctrl subsystem
Feb 13 20:38:57.947121 kernel: SMBIOS 3.0.0 present.
Feb 13 20:38:57.947128 kernel: DMI: QEMU KVM Virtual Machine, BIOS edk2-20230524-3.fc38 05/24/2023
Feb 13 20:38:57.947136 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Feb 13 20:38:57.947144 kernel: DMA: preallocated 512 KiB GFP_KERNEL pool for atomic allocations
Feb 13 20:38:57.947151 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations
Feb 13 20:38:57.947159 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations
Feb 13 20:38:57.947166 kernel: audit: initializing netlink subsys (disabled)
Feb 13 20:38:57.947174 kernel: audit: type=2000 audit(0.023:1): state=initialized audit_enabled=0 res=1
Feb 13 20:38:57.947181 kernel: thermal_sys: Registered thermal governor 'step_wise'
Feb 13 20:38:57.947190 kernel: cpuidle: using governor menu
Feb 13 20:38:57.947198 kernel: hw-breakpoint: found 6 breakpoint and 4 watchpoint registers.
Feb 13 20:38:57.947205 kernel: ASID allocator initialised with 32768 entries
Feb 13 20:38:57.947212 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Feb 13 20:38:57.947220 kernel: Serial: AMBA PL011 UART driver
Feb 13 20:38:57.947227 kernel: Modules: 2G module region forced by RANDOMIZE_MODULE_REGION_FULL
Feb 13 20:38:57.947234 kernel: Modules: 0 pages in range for non-PLT usage
Feb 13 20:38:57.947242 kernel: Modules: 509040 pages in range for PLT usage
Feb 13 20:38:57.947249 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Feb 13 20:38:57.947258 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 1.00 GiB page
Feb 13 20:38:57.947266 kernel: HugeTLB: registered 32.0 MiB page size, pre-allocated 0 pages
Feb 13 20:38:57.947273 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 32.0 MiB page
Feb 13 20:38:57.947281 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Feb 13 20:38:57.947288 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 2.00 MiB page
Feb 13 20:38:57.947296 kernel: HugeTLB: registered 64.0 KiB page size, pre-allocated 0 pages
Feb 13 20:38:57.947304 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 64.0 KiB page
Feb 13 20:38:57.947312 kernel: ACPI: Added _OSI(Module Device)
Feb 13 20:38:57.947319 kernel: ACPI: Added _OSI(Processor Device)
Feb 13 20:38:57.947328 kernel: ACPI: Added _OSI(3.0 _SCP Extensions)
Feb 13 20:38:57.947336 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Feb 13 20:38:57.947343 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Feb 13 20:38:57.947350 kernel: ACPI: Interpreter enabled
Feb 13 20:38:57.947358 kernel: ACPI: Using GIC for interrupt routing
Feb 13 20:38:57.947365 kernel: ACPI: MCFG table detected, 1 entries
Feb 13 20:38:57.947373 kernel: ARMH0011:00: ttyAMA0 at MMIO 0x9000000 (irq = 12, base_baud = 0) is a SBSA
Feb 13 20:38:57.947380 kernel: printk: console [ttyAMA0] enabled
Feb 13 20:38:57.947387 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Feb 13 20:38:57.947519 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Feb 13 20:38:57.947597 kernel: acpi PNP0A08:00: _OSC: platform does not support [LTR]
Feb 13 20:38:57.947665 kernel: acpi PNP0A08:00: _OSC: OS now controls [PCIeHotplug PME AER PCIeCapability]
Feb 13 20:38:57.947742 kernel: acpi PNP0A08:00: ECAM area [mem 0x4010000000-0x401fffffff] reserved by PNP0C02:00
Feb 13 20:38:57.947810 kernel: acpi PNP0A08:00: ECAM at [mem 0x4010000000-0x401fffffff] for [bus 00-ff]
Feb 13 20:38:57.947820 kernel: ACPI: Remapped I/O 0x000000003eff0000 to [io 0x0000-0xffff window]
Feb 13 20:38:57.947828 kernel: PCI host bridge to bus 0000:00
Feb 13 20:38:57.947905 kernel: pci_bus 0000:00: root bus resource [mem 0x10000000-0x3efeffff window]
Feb 13 20:38:57.947967 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0xffff window]
Feb 13 20:38:57.948027 kernel: pci_bus 0000:00: root bus resource [mem 0x8000000000-0xffffffffff window]
Feb 13 20:38:57.948189 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Feb 13 20:38:57.948277 kernel: pci 0000:00:00.0: [1b36:0008] type 00 class 0x060000
Feb 13 20:38:57.948362 kernel: pci 0000:00:01.0: [1af4:1005] type 00 class 0x00ff00
Feb 13 20:38:57.948438 kernel: pci 0000:00:01.0: reg 0x10: [io 0x0000-0x001f]
Feb 13 20:38:57.948508 kernel: pci 0000:00:01.0: reg 0x14: [mem 0x10000000-0x10000fff]
Feb 13 20:38:57.948592 kernel: pci 0000:00:01.0: reg 0x20: [mem 0x8000000000-0x8000003fff 64bit pref]
Feb 13 20:38:57.948663 kernel: pci 0000:00:01.0: BAR 4: assigned [mem 0x8000000000-0x8000003fff 64bit pref]
Feb 13 20:38:57.948740 kernel: pci 0000:00:01.0: BAR 1: assigned [mem 0x10000000-0x10000fff]
Feb 13 20:38:57.948823 kernel: pci 0000:00:01.0: BAR 0: assigned [io 0x1000-0x101f]
Feb 13 20:38:57.948884 kernel: pci_bus 0000:00: resource 4 [mem 0x10000000-0x3efeffff window]
Feb 13 20:38:57.948944 kernel: pci_bus 0000:00: resource 5 [io 0x0000-0xffff window]
Feb 13 20:38:57.949008 kernel: pci_bus 0000:00: resource 6 [mem 0x8000000000-0xffffffffff window]
Feb 13 20:38:57.949018 kernel: ACPI: PCI: Interrupt link GSI0 configured for IRQ 35
Feb 13 20:38:57.949026 kernel: ACPI: PCI: Interrupt link GSI1 configured for IRQ 36
Feb 13 20:38:57.949034 kernel: ACPI: PCI: Interrupt link GSI2 configured for IRQ 37
Feb 13 20:38:57.949051 kernel: ACPI: PCI: Interrupt link GSI3 configured for IRQ 38
Feb 13 20:38:57.949060 kernel: iommu: Default domain type: Translated
Feb 13 20:38:57.949072 kernel: iommu: DMA domain TLB invalidation policy: strict mode
Feb 13 20:38:57.949080 kernel: efivars: Registered efivars operations
Feb 13 20:38:57.949091 kernel: vgaarb: loaded
Feb 13 20:38:57.949098 kernel: clocksource: Switched to clocksource arch_sys_counter
Feb 13 20:38:57.949105 kernel: VFS: Disk quotas dquot_6.6.0
Feb 13 20:38:57.949113 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Feb 13 20:38:57.949120 kernel: pnp: PnP ACPI init
Feb 13 20:38:57.949202 kernel: system 00:00: [mem 0x4010000000-0x401fffffff window] could not be reserved
Feb 13 20:38:57.949214 kernel: pnp: PnP ACPI: found 1 devices
Feb 13 20:38:57.949222 kernel: NET: Registered PF_INET protocol family
Feb 13 20:38:57.949233 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Feb 13 20:38:57.949241 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Feb 13 20:38:57.949248 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Feb 13 20:38:57.949256 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Feb 13 20:38:57.949276 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
Feb 13 20:38:57.949284 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Feb 13 20:38:57.949291 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Feb 13 20:38:57.949300 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Feb 13 20:38:57.949308 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Feb 13 20:38:57.949317 kernel: PCI: CLS 0 bytes, default 64
Feb 13 20:38:57.949324 kernel: kvm [1]: HYP mode not available
Feb 13 20:38:57.949332 kernel: Initialise system trusted keyrings
Feb 13 20:38:57.949339 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Feb 13 20:38:57.949347 kernel: Key type asymmetric registered
Feb 13 20:38:57.949355 kernel: Asymmetric key parser 'x509' registered
Feb 13 20:38:57.949362 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250)
Feb 13 20:38:57.949370 kernel: io scheduler mq-deadline registered
Feb 13 20:38:57.949378 kernel: io scheduler kyber registered
Feb 13 20:38:57.949387 kernel: io scheduler bfq registered
Feb 13 20:38:57.949395 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0C:00/input/input0
Feb 13 20:38:57.949402 kernel: ACPI: button: Power Button [PWRB]
Feb 13 20:38:57.949410 kernel: ACPI: \_SB_.PCI0.GSI1: Enabled at IRQ 36
Feb 13 20:38:57.949482 kernel: virtio-pci 0000:00:01.0: enabling device (0005 -> 0007)
Feb 13 20:38:57.949498 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Feb 13 20:38:57.949506 kernel: thunder_xcv, ver 1.0
Feb 13 20:38:57.949513 kernel: thunder_bgx, ver 1.0
Feb 13 20:38:57.949520 kernel: nicpf, ver 1.0
Feb 13 20:38:57.949530 kernel: nicvf, ver 1.0
Feb 13 20:38:57.949607 kernel: rtc-efi rtc-efi.0: registered as rtc0
Feb 13 20:38:57.949675 kernel: rtc-efi rtc-efi.0: setting system clock to 2025-02-13T20:38:57 UTC (1739479137)
Feb 13 20:38:57.949691 kernel: hid: raw HID events driver (C) Jiri Kosina
Feb 13 20:38:57.949699 kernel: hw perfevents: enabled with armv8_pmuv3_0 PMU driver, 7 counters available
Feb 13 20:38:57.949711 kernel: watchdog: Delayed init of the lockup detector failed: -19
Feb 13 20:38:57.949722 kernel: watchdog: Hard watchdog permanently disabled
Feb 13 20:38:57.949731 kernel: NET: Registered PF_INET6 protocol family
Feb 13 20:38:57.949742 kernel: Segment Routing with IPv6
Feb 13 20:38:57.949750 kernel: In-situ OAM (IOAM) with IPv6
Feb 13 20:38:57.949757 kernel: NET: Registered PF_PACKET protocol family
Feb 13 20:38:57.949768 kernel: Key type dns_resolver registered
Feb 13 20:38:57.949775 kernel: registered taskstats version 1
Feb 13 20:38:57.949783 kernel: Loading compiled-in X.509 certificates
Feb 13 20:38:57.949793 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.74-flatcar: 8bd805622262697b24b0fa7c407ae82c4289ceec'
Feb 13 20:38:57.949802 kernel: Key type .fscrypt registered
Feb 13 20:38:57.949811 kernel: Key type fscrypt-provisioning registered
Feb 13 20:38:57.949821 kernel: ima: No TPM chip found, activating TPM-bypass!
Feb 13 20:38:57.949829 kernel: ima: Allocated hash algorithm: sha1
Feb 13 20:38:57.949836 kernel: ima: No architecture policies found
Feb 13 20:38:57.949844 kernel: alg: No test for fips(ansi_cprng) (fips_ansi_cprng)
Feb 13 20:38:57.949852 kernel: clk: Disabling unused clocks
Feb 13 20:38:57.949859 kernel: Freeing unused kernel memory: 39360K
Feb 13 20:38:57.949869 kernel: Run /init as init process
Feb 13 20:38:57.949876 kernel: with arguments:
Feb 13 20:38:57.949884 kernel: /init
Feb 13 20:38:57.949894 kernel: with environment:
Feb 13 20:38:57.949901 kernel: HOME=/
Feb 13 20:38:57.949912 kernel: TERM=linux
Feb 13 20:38:57.949924 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
Feb 13 20:38:57.949934 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Feb 13 20:38:57.949943 systemd[1]: Detected virtualization kvm.
Feb 13 20:38:57.949951 systemd[1]: Detected architecture arm64.
Feb 13 20:38:57.949959 systemd[1]: Running in initrd.
Feb 13 20:38:57.949969 systemd[1]: No hostname configured, using default hostname.
Feb 13 20:38:57.949977 systemd[1]: Hostname set to <localhost>.
Feb 13 20:38:57.949985 systemd[1]: Initializing machine ID from VM UUID.
Feb 13 20:38:57.949993 systemd[1]: Queued start job for default target initrd.target.
Feb 13 20:38:57.950001 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Feb 13 20:38:57.950009 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Feb 13 20:38:57.950017 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Feb 13 20:38:57.950026 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Feb 13 20:38:57.950035 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Feb 13 20:38:57.950051 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Feb 13 20:38:57.950061 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Feb 13 20:38:57.950070 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Feb 13 20:38:57.950078 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Feb 13 20:38:57.950086 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Feb 13 20:38:57.950097 systemd[1]: Reached target paths.target - Path Units.
Feb 13 20:38:57.950110 systemd[1]: Reached target slices.target - Slice Units.
Feb 13 20:38:57.950118 systemd[1]: Reached target swap.target - Swaps.
Feb 13 20:38:57.950126 systemd[1]: Reached target timers.target - Timer Units.
Feb 13 20:38:57.950134 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Feb 13 20:38:57.950142 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Feb 13 20:38:57.950150 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Feb 13 20:38:57.950158 systemd[1]: Listening on systemd-journald.socket - Journal Socket.
Feb 13 20:38:57.950166 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Feb 13 20:38:57.950176 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Feb 13 20:38:57.950184 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Feb 13 20:38:57.950192 systemd[1]: Reached target sockets.target - Socket Units.
Feb 13 20:38:57.950200 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Feb 13 20:38:57.950208 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Feb 13 20:38:57.950216 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Feb 13 20:38:57.950225 systemd[1]: Starting systemd-fsck-usr.service...
Feb 13 20:38:57.950233 systemd[1]: Starting systemd-journald.service - Journal Service...
Feb 13 20:38:57.950241 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Feb 13 20:38:57.950251 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Feb 13 20:38:57.950259 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Feb 13 20:38:57.950267 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Feb 13 20:38:57.950275 systemd[1]: Finished systemd-fsck-usr.service.
Feb 13 20:38:57.950283 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Feb 13 20:38:57.950293 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Feb 13 20:38:57.950301 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Feb 13 20:38:57.950310 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Feb 13 20:38:57.950337 systemd-journald[239]: Collecting audit messages is disabled.
Feb 13 20:38:57.950358 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Feb 13 20:38:57.950367 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Feb 13 20:38:57.950376 systemd-journald[239]: Journal started
Feb 13 20:38:57.950394 systemd-journald[239]: Runtime Journal (/run/log/journal/f498f856d3f74f629d53ba2727f625c2) is 5.9M, max 47.3M, 41.4M free.
Feb 13 20:38:57.933150 systemd-modules-load[240]: Inserted module 'overlay'
Feb 13 20:38:57.955689 systemd[1]: Started systemd-journald.service - Journal Service.
Feb 13 20:38:57.955713 kernel: Bridge firewalling registered
Feb 13 20:38:57.954966 systemd-modules-load[240]: Inserted module 'br_netfilter'
Feb 13 20:38:57.957021 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Feb 13 20:38:57.958991 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Feb 13 20:38:57.963082 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Feb 13 20:38:57.964719 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Feb 13 20:38:57.968337 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Feb 13 20:38:57.974242 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Feb 13 20:38:57.977189 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Feb 13 20:38:57.984659 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Feb 13 20:38:57.991097 dracut-cmdline[273]: dracut-dracut-053
Feb 13 20:38:57.992231 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Feb 13 20:38:57.994580 dracut-cmdline[273]: Using kernel command line parameters: rd.driver.pre=btrfs BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected acpi=force verity.usrhash=c15c751c06cfb933aa98417326b93d899c08a83ce060a940cd01082629c201a7
Feb 13 20:38:58.021147 systemd-resolved[280]: Positive Trust Anchors:
Feb 13 20:38:58.021159 systemd-resolved[280]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Feb 13 20:38:58.021191 systemd-resolved[280]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Feb 13 20:38:58.028690 systemd-resolved[280]: Defaulting to hostname 'linux'.
Feb 13 20:38:58.029791 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Feb 13 20:38:58.030669 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Feb 13 20:38:58.059070 kernel: SCSI subsystem initialized
Feb 13 20:38:58.064065 kernel: Loading iSCSI transport class v2.0-870.
Feb 13 20:38:58.073083 kernel: iscsi: registered transport (tcp)
Feb 13 20:38:58.087071 kernel: iscsi: registered transport (qla4xxx)
Feb 13 20:38:58.087088 kernel: QLogic iSCSI HBA Driver
Feb 13 20:38:58.128725 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Feb 13 20:38:58.139184 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Feb 13 20:38:58.156675 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Feb 13 20:38:58.156733 kernel: device-mapper: uevent: version 1.0.3
Feb 13 20:38:58.156745 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com
Feb 13 20:38:58.202071 kernel: raid6: neonx8 gen() 15783 MB/s
Feb 13 20:38:58.219060 kernel: raid6: neonx4 gen() 15616 MB/s
Feb 13 20:38:58.236056 kernel: raid6: neonx2 gen() 13195 MB/s
Feb 13 20:38:58.253054 kernel: raid6: neonx1 gen() 10491 MB/s
Feb 13 20:38:58.270057 kernel: raid6: int64x8 gen() 6955 MB/s
Feb 13 20:38:58.287059 kernel: raid6: int64x4 gen() 7352 MB/s
Feb 13 20:38:58.304055 kernel: raid6: int64x2 gen() 6130 MB/s
Feb 13 20:38:58.321053 kernel: raid6: int64x1 gen() 5055 MB/s
Feb 13 20:38:58.321070 kernel: raid6: using algorithm neonx8 gen() 15783 MB/s
Feb 13 20:38:58.338054 kernel: raid6: .... xor() 11909 MB/s, rmw enabled
Feb 13 20:38:58.338073 kernel: raid6: using neon recovery algorithm
Feb 13 20:38:58.345243 kernel: xor: measuring software checksum speed
Feb 13 20:38:58.345261 kernel: 8regs : 19759 MB/sec
Feb 13 20:38:58.345271 kernel: 32regs : 19664 MB/sec
Feb 13 20:38:58.346179 kernel: arm64_neon : 26945 MB/sec
Feb 13 20:38:58.346194 kernel: xor: using function: arm64_neon (26945 MB/sec)
Feb 13 20:38:58.397073 kernel: Btrfs loaded, zoned=no, fsverity=no
Feb 13 20:38:58.408362 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Feb 13 20:38:58.416221 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Feb 13 20:38:58.428156 systemd-udevd[460]: Using default interface naming scheme 'v255'.
Feb 13 20:38:58.431349 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Feb 13 20:38:58.433762 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Feb 13 20:38:58.449089 dracut-pre-trigger[462]: rd.md=0: removing MD RAID activation
Feb 13 20:38:58.479090 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Feb 13 20:38:58.492226 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Feb 13 20:38:58.532736 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Feb 13 20:38:58.540238 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Feb 13 20:38:58.554306 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Feb 13 20:38:58.555734 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Feb 13 20:38:58.556786 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Feb 13 20:38:58.558730 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Feb 13 20:38:58.570244 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Feb 13 20:38:58.581792 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Feb 13 20:38:58.585197 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Feb 13 20:38:58.586600 kernel: virtio_blk virtio1: 1/0/0 default/read/poll queues
Feb 13 20:38:58.592602 kernel: virtio_blk virtio1: [vda] 19775488 512-byte logical blocks (10.1 GB/9.43 GiB)
Feb 13 20:38:58.592720 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Feb 13 20:38:58.592737 kernel: GPT:9289727 != 19775487
Feb 13 20:38:58.592748 kernel: GPT:Alternate GPT header not at the end of the disk.
Feb 13 20:38:58.592758 kernel: GPT:9289727 != 19775487
Feb 13 20:38:58.592775 kernel: GPT: Use GNU Parted to correct GPT errors.
Feb 13 20:38:58.592785 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Feb 13 20:38:58.585311 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Feb 13 20:38:58.588503 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Feb 13 20:38:58.589286 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Feb 13 20:38:58.589420 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Feb 13 20:38:58.592333 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Feb 13 20:38:58.600344 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Feb 13 20:38:58.610118 kernel: BTRFS: device fsid 4bb2b262-8ef2-48e3-80f4-24f9d7a85bf6 devid 1 transid 40 /dev/vda3 scanned by (udev-worker) (524)
Feb 13 20:38:58.610167 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/vda6 scanned by (udev-worker) (517)
Feb 13 20:38:58.611628 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Feb 13 20:38:58.619388 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT.
Feb 13 20:38:58.623685 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM.
Feb 13 20:38:58.630003 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A.
Feb 13 20:38:58.630997 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132.
Feb 13 20:38:58.636087 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Feb 13 20:38:58.649216 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Feb 13 20:38:58.651283 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Feb 13 20:38:58.655811 disk-uuid[550]: Primary Header is updated.
Feb 13 20:38:58.655811 disk-uuid[550]: Secondary Entries is updated.
Feb 13 20:38:58.655811 disk-uuid[550]: Secondary Header is updated.
Feb 13 20:38:58.661067 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Feb 13 20:38:58.672068 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Feb 13 20:38:58.674119 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Feb 13 20:38:59.673074 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Feb 13 20:38:59.673393 disk-uuid[551]: The operation has completed successfully.
Feb 13 20:38:59.695520 systemd[1]: disk-uuid.service: Deactivated successfully.
Feb 13 20:38:59.695624 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Feb 13 20:38:59.723238 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
Feb 13 20:38:59.726285 sh[573]: Success
Feb 13 20:38:59.740068 kernel: device-mapper: verity: sha256 using implementation "sha256-ce"
Feb 13 20:38:59.779545 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
Feb 13 20:38:59.781167 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
Feb 13 20:38:59.781924 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
Feb 13 20:38:59.793892 kernel: BTRFS info (device dm-0): first mount of filesystem 4bb2b262-8ef2-48e3-80f4-24f9d7a85bf6
Feb 13 20:38:59.793950 kernel: BTRFS info (device dm-0): using crc32c (crc32c-generic) checksum algorithm
Feb 13 20:38:59.793962 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead
Feb 13 20:38:59.795082 kernel: BTRFS info (device dm-0): disabling log replay at mount time
Feb 13 20:38:59.795102 kernel: BTRFS info (device dm-0): using free space tree
Feb 13 20:38:59.798573 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
Feb 13 20:38:59.799774 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Feb 13 20:38:59.809228 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Feb 13 20:38:59.810716 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Feb 13 20:38:59.818810 kernel: BTRFS info (device vda6): first mount of filesystem 896fb6d3-4143-43a6-a44b-ca1ce10817e1
Feb 13 20:38:59.818851 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm
Feb 13 20:38:59.818863 kernel: BTRFS info (device vda6): using free space tree
Feb 13 20:38:59.822085 kernel: BTRFS info (device vda6): auto enabling async discard
Feb 13 20:38:59.828968 systemd[1]: mnt-oem.mount: Deactivated successfully.
Feb 13 20:38:59.830571 kernel: BTRFS info (device vda6): last unmount of filesystem 896fb6d3-4143-43a6-a44b-ca1ce10817e1
Feb 13 20:38:59.835292 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Feb 13 20:38:59.846239 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Feb 13 20:38:59.919141 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Feb 13 20:38:59.934229 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Feb 13 20:38:59.964457 systemd-networkd[761]: lo: Link UP
Feb 13 20:38:59.964468 systemd-networkd[761]: lo: Gained carrier
Feb 13 20:38:59.965248 systemd-networkd[761]: Enumeration completed
Feb 13 20:38:59.965583 systemd[1]: Started systemd-networkd.service - Network Configuration.
Feb 13 20:38:59.965847 systemd-networkd[761]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Feb 13 20:38:59.965850 systemd-networkd[761]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Feb 13 20:38:59.966636 systemd[1]: Reached target network.target - Network.
Feb 13 20:38:59.968285 systemd-networkd[761]: eth0: Link UP
Feb 13 20:38:59.968288 systemd-networkd[761]: eth0: Gained carrier
Feb 13 20:38:59.968294 systemd-networkd[761]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Feb 13 20:38:59.994107 systemd-networkd[761]: eth0: DHCPv4 address 10.0.0.7/16, gateway 10.0.0.1 acquired from 10.0.0.1
Feb 13 20:39:00.003136 ignition[664]: Ignition 2.19.0
Feb 13 20:39:00.003146 ignition[664]: Stage: fetch-offline
Feb 13 20:39:00.003180 ignition[664]: no configs at "/usr/lib/ignition/base.d"
Feb 13 20:39:00.003188 ignition[664]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Feb 13 20:39:00.003367 ignition[664]: parsed url from cmdline: ""
Feb 13 20:39:00.003371 ignition[664]: no config URL provided
Feb 13 20:39:00.003376 ignition[664]: reading system config file "/usr/lib/ignition/user.ign"
Feb 13 20:39:00.003385 ignition[664]: no config at "/usr/lib/ignition/user.ign"
Feb 13 20:39:00.003409 ignition[664]: op(1): [started] loading QEMU firmware config module
Feb 13 20:39:00.003413 ignition[664]: op(1): executing: "modprobe" "qemu_fw_cfg"
Feb 13 20:39:00.009629 ignition[664]: op(1): [finished] loading QEMU firmware config module
Feb 13 20:39:00.030644 ignition[664]: parsing config with SHA512: b03da8892db4224d4297fd8d4977063ac7e9f237e8ccf7a2a02c67bae76c923b4566c58554d918b6fba3cf57a10d9c6bf19593c675a14b10fce70dd31babb395
Feb 13 20:39:00.035641 unknown[664]: fetched base config from "system"
Feb 13 20:39:00.035654 unknown[664]: fetched user config from "qemu"
Feb 13 20:39:00.037345 ignition[664]: fetch-offline: fetch-offline passed
Feb 13 20:39:00.037465 ignition[664]: Ignition finished successfully
Feb 13 20:39:00.040111 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Feb 13 20:39:00.041184 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json).
Feb 13 20:39:00.053232 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Feb 13 20:39:00.065080 ignition[772]: Ignition 2.19.0
Feb 13 20:39:00.065091 ignition[772]: Stage: kargs
Feb 13 20:39:00.065268 ignition[772]: no configs at "/usr/lib/ignition/base.d"
Feb 13 20:39:00.065278 ignition[772]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Feb 13 20:39:00.066291 ignition[772]: kargs: kargs passed
Feb 13 20:39:00.066338 ignition[772]: Ignition finished successfully
Feb 13 20:39:00.070161 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Feb 13 20:39:00.081178 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Feb 13 20:39:00.087659 systemd-resolved[280]: Detected conflict on linux IN A 10.0.0.7
Feb 13 20:39:00.087680 systemd-resolved[280]: Hostname conflict, changing published hostname from 'linux' to 'linux10'.
Feb 13 20:39:00.092019 ignition[780]: Ignition 2.19.0
Feb 13 20:39:00.092029 ignition[780]: Stage: disks
Feb 13 20:39:00.092222 ignition[780]: no configs at "/usr/lib/ignition/base.d"
Feb 13 20:39:00.092233 ignition[780]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Feb 13 20:39:00.093098 ignition[780]: disks: disks passed
Feb 13 20:39:00.094265 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Feb 13 20:39:00.093150 ignition[780]: Ignition finished successfully
Feb 13 20:39:00.095892 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Feb 13 20:39:00.097421 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Feb 13 20:39:00.098761 systemd[1]: Reached target local-fs.target - Local File Systems.
Feb 13 20:39:00.100202 systemd[1]: Reached target sysinit.target - System Initialization.
Feb 13 20:39:00.101756 systemd[1]: Reached target basic.target - Basic System.
Feb 13 20:39:00.112212 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Feb 13 20:39:00.122515 systemd-fsck[792]: ROOT: clean, 14/553520 files, 52654/553472 blocks
Feb 13 20:39:00.126500 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Feb 13 20:39:00.143141 systemd[1]: Mounting sysroot.mount - /sysroot...
Feb 13 20:39:00.182061 kernel: EXT4-fs (vda9): mounted filesystem 9957d679-c6c4-49f4-b1b2-c3c1f3ba5699 r/w with ordered data mode. Quota mode: none.
Feb 13 20:39:00.182386 systemd[1]: Mounted sysroot.mount - /sysroot.
Feb 13 20:39:00.183453 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Feb 13 20:39:00.200126 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Feb 13 20:39:00.201699 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Feb 13 20:39:00.202648 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met.
Feb 13 20:39:00.202735 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Feb 13 20:39:00.202785 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Feb 13 20:39:00.209099 kernel: BTRFS: device label OEM devid 1 transid 13 /dev/vda6 scanned by mount (800)
Feb 13 20:39:00.208778 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Feb 13 20:39:00.212517 kernel: BTRFS info (device vda6): first mount of filesystem 896fb6d3-4143-43a6-a44b-ca1ce10817e1
Feb 13 20:39:00.212534 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm
Feb 13 20:39:00.212544 kernel: BTRFS info (device vda6): using free space tree
Feb 13 20:39:00.211528 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Feb 13 20:39:00.215034 kernel: BTRFS info (device vda6): auto enabling async discard
Feb 13 20:39:00.216707 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Feb 13 20:39:00.256493 initrd-setup-root[824]: cut: /sysroot/etc/passwd: No such file or directory
Feb 13 20:39:00.260613 initrd-setup-root[831]: cut: /sysroot/etc/group: No such file or directory
Feb 13 20:39:00.264675 initrd-setup-root[838]: cut: /sysroot/etc/shadow: No such file or directory
Feb 13 20:39:00.268460 initrd-setup-root[845]: cut: /sysroot/etc/gshadow: No such file or directory
Feb 13 20:39:00.337344 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Feb 13 20:39:00.346157 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Feb 13 20:39:00.347558 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Feb 13 20:39:00.352066 kernel: BTRFS info (device vda6): last unmount of filesystem 896fb6d3-4143-43a6-a44b-ca1ce10817e1
Feb 13 20:39:00.368087 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Feb 13 20:39:00.369844 ignition[913]: INFO : Ignition 2.19.0
Feb 13 20:39:00.369844 ignition[913]: INFO : Stage: mount
Feb 13 20:39:00.369844 ignition[913]: INFO : no configs at "/usr/lib/ignition/base.d"
Feb 13 20:39:00.369844 ignition[913]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Feb 13 20:39:00.372674 ignition[913]: INFO : mount: mount passed
Feb 13 20:39:00.372674 ignition[913]: INFO : Ignition finished successfully
Feb 13 20:39:00.372199 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Feb 13 20:39:00.381170 systemd[1]: Starting ignition-files.service - Ignition (files)...
Feb 13 20:39:00.792598 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Feb 13 20:39:00.801211 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Feb 13 20:39:00.806055 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/vda6 scanned by mount (926)
Feb 13 20:39:00.807517 kernel: BTRFS info (device vda6): first mount of filesystem 896fb6d3-4143-43a6-a44b-ca1ce10817e1
Feb 13 20:39:00.807533 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm
Feb 13 20:39:00.807543 kernel: BTRFS info (device vda6): using free space tree
Feb 13 20:39:00.810054 kernel: BTRFS info (device vda6): auto enabling async discard
Feb 13 20:39:00.810960 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Feb 13 20:39:00.825593 ignition[943]: INFO : Ignition 2.19.0
Feb 13 20:39:00.825593 ignition[943]: INFO : Stage: files
Feb 13 20:39:00.826809 ignition[943]: INFO : no configs at "/usr/lib/ignition/base.d"
Feb 13 20:39:00.826809 ignition[943]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Feb 13 20:39:00.826809 ignition[943]: DEBUG : files: compiled without relabeling support, skipping
Feb 13 20:39:00.829295 ignition[943]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Feb 13 20:39:00.829295 ignition[943]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Feb 13 20:39:00.829295 ignition[943]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Feb 13 20:39:00.829295 ignition[943]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Feb 13 20:39:00.833145 ignition[943]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Feb 13 20:39:00.833145 ignition[943]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-arm64.tar.gz"
Feb 13 20:39:00.833145 ignition[943]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.13.2-linux-arm64.tar.gz: attempt #1
Feb 13 20:39:00.829607 unknown[943]: wrote ssh authorized keys file for user: core
Feb 13 20:39:00.874902 ignition[943]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
Feb 13 20:39:01.067488 ignition[943]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-arm64.tar.gz"
Feb 13 20:39:01.067488 ignition[943]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh"
Feb 13 20:39:01.070101 ignition[943]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh"
Feb 13 20:39:01.070101 ignition[943]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml"
Feb 13 20:39:01.070101 ignition[943]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml"
Feb 13 20:39:01.070101 ignition[943]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Feb 13 20:39:01.070101 ignition[943]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Feb 13 20:39:01.070101 ignition[943]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Feb 13 20:39:01.070101 ignition[943]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Feb 13 20:39:01.070101 ignition[943]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf"
Feb 13 20:39:01.079571 ignition[943]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Feb 13 20:39:01.079571 ignition[943]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.30.1-arm64.raw"
Feb 13 20:39:01.079571 ignition[943]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.30.1-arm64.raw"
Feb 13 20:39:01.079571 ignition[943]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.30.1-arm64.raw"
Feb 13 20:39:01.079571 ignition[943]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.30.1-arm64.raw: attempt #1
Feb 13 20:39:01.388621 ignition[943]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK
Feb 13 20:39:01.636936 ignition[943]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.30.1-arm64.raw"
Feb 13 20:39:01.636936 ignition[943]: INFO : files: op(b): [started] processing unit "prepare-helm.service"
Feb 13 20:39:01.639855 ignition[943]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Feb 13 20:39:01.639855 ignition[943]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Feb 13 20:39:01.639855 ignition[943]: INFO : files: op(b): [finished] processing unit "prepare-helm.service"
Feb 13 20:39:01.639855 ignition[943]: INFO : files: op(d): [started] processing unit "coreos-metadata.service"
Feb 13 20:39:01.639855 ignition[943]: INFO : files: op(d): op(e): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Feb 13 20:39:01.639855 ignition[943]: INFO : files: op(d): op(e): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Feb 13 20:39:01.639855 ignition[943]: INFO : files: op(d): [finished] processing unit "coreos-metadata.service"
Feb 13 20:39:01.639855 ignition[943]: INFO : files: op(f): [started] setting preset to disabled for "coreos-metadata.service"
Feb 13 20:39:01.662139 ignition[943]: INFO : files: op(f): op(10): [started] removing enablement symlink(s) for "coreos-metadata.service"
Feb 13 20:39:01.666452 ignition[943]: INFO : files: op(f): op(10): [finished] removing enablement symlink(s) for "coreos-metadata.service"
Feb 13 20:39:01.668741 ignition[943]: INFO : files: op(f): [finished] setting preset to disabled for "coreos-metadata.service"
Feb 13 20:39:01.668741 ignition[943]: INFO : files: op(11): [started] setting preset to enabled for "prepare-helm.service"
Feb 13 20:39:01.668741 ignition[943]: INFO : files: op(11): [finished] setting preset to enabled for "prepare-helm.service"
Feb 13 20:39:01.668741 ignition[943]: INFO : files: createResultFile: createFiles: op(12): [started] writing file "/sysroot/etc/.ignition-result.json"
Feb 13 20:39:01.668741 ignition[943]: INFO : files: createResultFile: createFiles: op(12): [finished] writing file "/sysroot/etc/.ignition-result.json"
Feb 13 20:39:01.668741 ignition[943]: INFO : files: files passed
Feb 13 20:39:01.668741 ignition[943]: INFO : Ignition finished successfully
Feb 13 20:39:01.669863 systemd[1]: Finished ignition-files.service - Ignition (files).
Feb 13 20:39:01.675209 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Feb 13 20:39:01.678011 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Feb 13 20:39:01.680592 systemd[1]: ignition-quench.service: Deactivated successfully.
Feb 13 20:39:01.681451 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Feb 13 20:39:01.684812 initrd-setup-root-after-ignition[971]: grep: /sysroot/oem/oem-release: No such file or directory
Feb 13 20:39:01.688326 initrd-setup-root-after-ignition[973]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Feb 13 20:39:01.688326 initrd-setup-root-after-ignition[973]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Feb 13 20:39:01.691279 initrd-setup-root-after-ignition[977]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Feb 13 20:39:01.691775 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Feb 13 20:39:01.693814 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Feb 13 20:39:01.704251 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Feb 13 20:39:01.723556 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Feb 13 20:39:01.723683 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Feb 13 20:39:01.725366 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Feb 13 20:39:01.726755 systemd[1]: Reached target initrd.target - Initrd Default Target.
Feb 13 20:39:01.728190 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Feb 13 20:39:01.728969 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Feb 13 20:39:01.745062 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Feb 13 20:39:01.747448 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Feb 13 20:39:01.758936 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Feb 13 20:39:01.760948 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Feb 13 20:39:01.762040 systemd[1]: Stopped target timers.target - Timer Units.
Feb 13 20:39:01.763548 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Feb 13 20:39:01.763678 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Feb 13 20:39:01.765840 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Feb 13 20:39:01.767429 systemd[1]: Stopped target basic.target - Basic System.
Feb 13 20:39:01.768740 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Feb 13 20:39:01.770072 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Feb 13 20:39:01.771718 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Feb 13 20:39:01.773303 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Feb 13 20:39:01.774764 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
Feb 13 20:39:01.776355 systemd[1]: Stopped target sysinit.target - System Initialization.
Feb 13 20:39:01.778033 systemd[1]: Stopped target local-fs.target - Local File Systems.
Feb 13 20:39:01.779477 systemd[1]: Stopped target swap.target - Swaps.
Feb 13 20:39:01.780725 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Feb 13 20:39:01.780859 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
Feb 13 20:39:01.782712 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
Feb 13 20:39:01.784217 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Feb 13 20:39:01.785730 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Feb 13 20:39:01.789101 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Feb 13 20:39:01.790026 systemd[1]: dracut-initqueue.service: Deactivated successfully. Feb 13 20:39:01.790218 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Feb 13 20:39:01.792247 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Feb 13 20:39:01.792366 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Feb 13 20:39:01.793946 systemd[1]: Stopped target paths.target - Path Units. Feb 13 20:39:01.795173 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Feb 13 20:39:01.795281 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Feb 13 20:39:01.796905 systemd[1]: Stopped target slices.target - Slice Units. Feb 13 20:39:01.798095 systemd[1]: Stopped target sockets.target - Socket Units. Feb 13 20:39:01.799506 systemd[1]: iscsid.socket: Deactivated successfully. Feb 13 20:39:01.799597 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Feb 13 20:39:01.801233 systemd[1]: iscsiuio.socket: Deactivated successfully. Feb 13 20:39:01.801315 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Feb 13 20:39:01.802638 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Feb 13 20:39:01.802756 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Feb 13 20:39:01.804018 systemd[1]: ignition-files.service: Deactivated successfully. Feb 13 20:39:01.804136 systemd[1]: Stopped ignition-files.service - Ignition (files). Feb 13 20:39:01.818256 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Feb 13 20:39:01.818939 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Feb 13 20:39:01.819100 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Feb 13 20:39:01.822235 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Feb 13 20:39:01.823706 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Feb 13 20:39:01.824626 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Feb 13 20:39:01.825626 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Feb 13 20:39:01.825732 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Feb 13 20:39:01.827229 systemd-networkd[761]: eth0: Gained IPv6LL Feb 13 20:39:01.833874 ignition[997]: INFO : Ignition 2.19.0 Feb 13 20:39:01.833874 ignition[997]: INFO : Stage: umount Feb 13 20:39:01.833874 ignition[997]: INFO : no configs at "/usr/lib/ignition/base.d" Feb 13 20:39:01.833874 ignition[997]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Feb 13 20:39:01.832720 systemd[1]: initrd-cleanup.service: Deactivated successfully. Feb 13 20:39:01.839781 ignition[997]: INFO : umount: umount passed Feb 13 20:39:01.839781 ignition[997]: INFO : Ignition finished successfully Feb 13 20:39:01.832808 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Feb 13 20:39:01.835844 systemd[1]: sysroot-boot.mount: Deactivated successfully. Feb 13 20:39:01.836298 systemd[1]: ignition-mount.service: Deactivated successfully. 
Feb 13 20:39:01.836374 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Feb 13 20:39:01.839337 systemd[1]: Stopped target network.target - Network. Feb 13 20:39:01.840464 systemd[1]: ignition-disks.service: Deactivated successfully. Feb 13 20:39:01.840526 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Feb 13 20:39:01.841772 systemd[1]: ignition-kargs.service: Deactivated successfully. Feb 13 20:39:01.841816 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Feb 13 20:39:01.843170 systemd[1]: ignition-setup.service: Deactivated successfully. Feb 13 20:39:01.843213 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Feb 13 20:39:01.844775 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Feb 13 20:39:01.844816 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Feb 13 20:39:01.845959 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Feb 13 20:39:01.847311 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Feb 13 20:39:01.852105 systemd-networkd[761]: eth0: DHCPv6 lease lost Feb 13 20:39:01.853671 systemd[1]: systemd-resolved.service: Deactivated successfully. Feb 13 20:39:01.853789 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Feb 13 20:39:01.856299 systemd[1]: systemd-networkd.service: Deactivated successfully. Feb 13 20:39:01.856419 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Feb 13 20:39:01.858367 systemd[1]: systemd-networkd.socket: Deactivated successfully. Feb 13 20:39:01.858427 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Feb 13 20:39:01.869146 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Feb 13 20:39:01.869887 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Feb 13 20:39:01.869952 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Feb 13 20:39:01.871586 systemd[1]: systemd-sysctl.service: Deactivated successfully. Feb 13 20:39:01.871636 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Feb 13 20:39:01.873099 systemd[1]: systemd-modules-load.service: Deactivated successfully. Feb 13 20:39:01.873142 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Feb 13 20:39:01.874832 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Feb 13 20:39:01.874872 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Feb 13 20:39:01.876639 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Feb 13 20:39:01.885192 systemd[1]: network-cleanup.service: Deactivated successfully. Feb 13 20:39:01.885305 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Feb 13 20:39:01.892957 systemd[1]: systemd-udevd.service: Deactivated successfully. Feb 13 20:39:01.893190 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Feb 13 20:39:01.895158 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Feb 13 20:39:01.895198 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Feb 13 20:39:01.896574 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Feb 13 20:39:01.896605 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Feb 13 20:39:01.898013 systemd[1]: dracut-pre-udev.service: Deactivated successfully. 
Feb 13 20:39:01.898066 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Feb 13 20:39:01.900236 systemd[1]: dracut-cmdline.service: Deactivated successfully. Feb 13 20:39:01.900279 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Feb 13 20:39:01.902423 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Feb 13 20:39:01.902465 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Feb 13 20:39:01.915204 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Feb 13 20:39:01.916019 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Feb 13 20:39:01.916086 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Feb 13 20:39:01.917922 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Feb 13 20:39:01.917970 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Feb 13 20:39:01.919823 systemd[1]: sysroot-boot.service: Deactivated successfully. Feb 13 20:39:01.919907 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Feb 13 20:39:01.921264 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Feb 13 20:39:01.921338 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Feb 13 20:39:01.924886 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Feb 13 20:39:01.926442 systemd[1]: initrd-setup-root.service: Deactivated successfully. Feb 13 20:39:01.926510 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Feb 13 20:39:01.928872 systemd[1]: Starting initrd-switch-root.service - Switch Root... Feb 13 20:39:01.939228 systemd[1]: Switching root. Feb 13 20:39:01.962690 systemd-journald[239]: Journal stopped Feb 13 20:39:02.663960 systemd-journald[239]: Received SIGTERM from PID 1 (systemd). Feb 13 20:39:02.664012 kernel: SELinux: policy capability network_peer_controls=1 Feb 13 20:39:02.664028 kernel: SELinux: policy capability open_perms=1 Feb 13 20:39:02.664038 kernel: SELinux: policy capability extended_socket_class=1 Feb 13 20:39:02.664060 kernel: SELinux: policy capability always_check_network=0 Feb 13 20:39:02.664071 kernel: SELinux: policy capability cgroup_seclabel=1 Feb 13 20:39:02.664082 kernel: SELinux: policy capability nnp_nosuid_transition=1 Feb 13 20:39:02.664094 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Feb 13 20:39:02.664105 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Feb 13 20:39:02.664114 kernel: audit: type=1403 audit(1739479142.118:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 Feb 13 20:39:02.664126 systemd[1]: Successfully loaded SELinux policy in 33.177ms. Feb 13 20:39:02.664142 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 9.165ms. Feb 13 20:39:02.664154 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Feb 13 20:39:02.664165 systemd[1]: Detected virtualization kvm. Feb 13 20:39:02.664175 systemd[1]: Detected architecture arm64. Feb 13 20:39:02.664187 systemd[1]: Detected first boot. Feb 13 20:39:02.664200 systemd[1]: Initializing machine ID from VM UUID. Feb 13 20:39:02.664210 zram_generator::config[1041]: No configuration found. 
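[Editor's note] After the switch root, systemd prints its usual banner listing compile-time features ("+PAM +AUDIT +SELINUX -APPARMOR ..."). Purely as an aside, a small Python sketch that splits such a banner into the features compiled in and compiled out; the string below is the +/- portion copied from the log line above (the trailing "default-hierarchy=unified" token is omitted).

# Illustrative helper: split a systemd feature banner into enabled ("+NAME")
# and disabled ("-NAME") feature sets.
banner = ("+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS "
          "+OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD "
          "+LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 "
          "+BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT")

enabled = {tok[1:] for tok in banner.split() if tok.startswith("+")}
disabled = {tok[1:] for tok in banner.split() if tok.startswith("-")}

print("enabled: ", sorted(enabled))
print("disabled:", sorted(disabled))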
Feb 13 20:39:02.664222 systemd[1]: Populated /etc with preset unit settings. Feb 13 20:39:02.664232 systemd[1]: initrd-switch-root.service: Deactivated successfully. Feb 13 20:39:02.664242 systemd[1]: Stopped initrd-switch-root.service - Switch Root. Feb 13 20:39:02.664253 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Feb 13 20:39:02.664264 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Feb 13 20:39:02.664274 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Feb 13 20:39:02.664286 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Feb 13 20:39:02.664296 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Feb 13 20:39:02.664307 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Feb 13 20:39:02.664318 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Feb 13 20:39:02.664328 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Feb 13 20:39:02.664339 systemd[1]: Created slice user.slice - User and Session Slice. Feb 13 20:39:02.664349 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Feb 13 20:39:02.664359 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Feb 13 20:39:02.664370 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Feb 13 20:39:02.664382 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Feb 13 20:39:02.664392 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. Feb 13 20:39:02.664403 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Feb 13 20:39:02.664413 systemd[1]: Expecting device dev-ttyAMA0.device - /dev/ttyAMA0... Feb 13 20:39:02.664428 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Feb 13 20:39:02.664439 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. Feb 13 20:39:02.664450 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. Feb 13 20:39:02.664460 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. Feb 13 20:39:02.664473 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Feb 13 20:39:02.664483 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Feb 13 20:39:02.664494 systemd[1]: Reached target remote-fs.target - Remote File Systems. Feb 13 20:39:02.664504 systemd[1]: Reached target slices.target - Slice Units. Feb 13 20:39:02.664515 systemd[1]: Reached target swap.target - Swaps. Feb 13 20:39:02.664526 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Feb 13 20:39:02.664536 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Feb 13 20:39:02.664546 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Feb 13 20:39:02.664558 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Feb 13 20:39:02.664568 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Feb 13 20:39:02.664579 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Feb 13 20:39:02.664589 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... 
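[Editor's note] Slice names in the lines above appear with escaped hyphens (system-addon\x2dconfig.slice, system-serial\x2dgetty.slice) because "-" separates levels of the slice hierarchy, so a hyphen inside a single component has to be encoded. Below is a rough Python sketch of that escaping rule for the simple case (single ASCII component, no "/" paths); it is an approximation, not the full systemd-escape algorithm.

def escape_component(name: str) -> str:
    """Approximate systemd unit-name escaping for one component:
    keep [A-Za-z0-9:_.], encode everything else (including '-') as \\xNN."""
    safe = set("abcdefghijklmnopqrstuvwxyz"
               "ABCDEFGHIJKLMNOPQRSTUVWXYZ"
               "0123456789:_.")
    return "".join(c if c in safe else "\\x%02x" % ord(c) for c in name)

# "addon-config" nested under the "system" slice yields the unit name
# seen in the log: system-addon\x2dconfig.slice
print("system-" + escape_component("addon-config") + ".slice")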
Feb 13 20:39:02.664600 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Feb 13 20:39:02.664610 systemd[1]: Mounting media.mount - External Media Directory... Feb 13 20:39:02.664620 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Feb 13 20:39:02.664630 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Feb 13 20:39:02.664648 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... Feb 13 20:39:02.664669 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Feb 13 20:39:02.664680 systemd[1]: Reached target machines.target - Containers. Feb 13 20:39:02.664691 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Feb 13 20:39:02.664701 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Feb 13 20:39:02.664713 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Feb 13 20:39:02.664723 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Feb 13 20:39:02.664734 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Feb 13 20:39:02.664744 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Feb 13 20:39:02.664756 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Feb 13 20:39:02.664767 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Feb 13 20:39:02.664777 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Feb 13 20:39:02.664788 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Feb 13 20:39:02.664798 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Feb 13 20:39:02.664809 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. Feb 13 20:39:02.664819 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Feb 13 20:39:02.664829 systemd[1]: Stopped systemd-fsck-usr.service. Feb 13 20:39:02.664839 kernel: loop: module loaded Feb 13 20:39:02.664851 systemd[1]: Starting systemd-journald.service - Journal Service... Feb 13 20:39:02.664862 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Feb 13 20:39:02.664872 kernel: fuse: init (API version 7.39) Feb 13 20:39:02.664882 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Feb 13 20:39:02.664894 kernel: ACPI: bus type drm_connector registered Feb 13 20:39:02.664903 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Feb 13 20:39:02.664914 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Feb 13 20:39:02.664924 systemd[1]: verity-setup.service: Deactivated successfully. Feb 13 20:39:02.664935 systemd[1]: Stopped verity-setup.service. Feb 13 20:39:02.664946 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Feb 13 20:39:02.664957 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Feb 13 20:39:02.664967 systemd[1]: Mounted media.mount - External Media Directory. Feb 13 20:39:02.664994 systemd-journald[1108]: Collecting audit messages is disabled. Feb 13 20:39:02.665020 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. 
Feb 13 20:39:02.665031 systemd-journald[1108]: Journal started Feb 13 20:39:02.665070 systemd-journald[1108]: Runtime Journal (/run/log/journal/f498f856d3f74f629d53ba2727f625c2) is 5.9M, max 47.3M, 41.4M free. Feb 13 20:39:02.665108 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Feb 13 20:39:02.475747 systemd[1]: Queued start job for default target multi-user.target. Feb 13 20:39:02.497770 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6. Feb 13 20:39:02.498144 systemd[1]: systemd-journald.service: Deactivated successfully. Feb 13 20:39:02.668181 systemd[1]: Started systemd-journald.service - Journal Service. Feb 13 20:39:02.668774 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Feb 13 20:39:02.669813 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Feb 13 20:39:02.672515 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Feb 13 20:39:02.674144 systemd[1]: modprobe@configfs.service: Deactivated successfully. Feb 13 20:39:02.674296 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Feb 13 20:39:02.675774 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Feb 13 20:39:02.675914 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Feb 13 20:39:02.677468 systemd[1]: modprobe@drm.service: Deactivated successfully. Feb 13 20:39:02.677614 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Feb 13 20:39:02.678996 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Feb 13 20:39:02.680281 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Feb 13 20:39:02.681784 systemd[1]: modprobe@fuse.service: Deactivated successfully. Feb 13 20:39:02.681926 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Feb 13 20:39:02.683314 systemd[1]: modprobe@loop.service: Deactivated successfully. Feb 13 20:39:02.683446 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Feb 13 20:39:02.684948 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Feb 13 20:39:02.687191 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Feb 13 20:39:02.688475 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Feb 13 20:39:02.699818 systemd[1]: Reached target network-pre.target - Preparation for Network. Feb 13 20:39:02.706137 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Feb 13 20:39:02.708017 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Feb 13 20:39:02.708905 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Feb 13 20:39:02.708947 systemd[1]: Reached target local-fs.target - Local File Systems. Feb 13 20:39:02.710715 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink). Feb 13 20:39:02.712834 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Feb 13 20:39:02.714754 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Feb 13 20:39:02.715724 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Feb 13 20:39:02.717377 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... 
Feb 13 20:39:02.719090 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Feb 13 20:39:02.720022 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Feb 13 20:39:02.723219 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Feb 13 20:39:02.724164 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Feb 13 20:39:02.728295 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Feb 13 20:39:02.728981 systemd-journald[1108]: Time spent on flushing to /var/log/journal/f498f856d3f74f629d53ba2727f625c2 is 20.031ms for 855 entries. Feb 13 20:39:02.728981 systemd-journald[1108]: System Journal (/var/log/journal/f498f856d3f74f629d53ba2727f625c2) is 8.0M, max 195.6M, 187.6M free. Feb 13 20:39:02.757664 systemd-journald[1108]: Received client request to flush runtime journal. Feb 13 20:39:02.757716 kernel: loop0: detected capacity change from 0 to 114328 Feb 13 20:39:02.731307 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Feb 13 20:39:02.733332 systemd[1]: Starting systemd-sysusers.service - Create System Users... Feb 13 20:39:02.738534 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Feb 13 20:39:02.739752 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Feb 13 20:39:02.740789 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Feb 13 20:39:02.742165 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Feb 13 20:39:02.743628 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Feb 13 20:39:02.747574 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Feb 13 20:39:02.762377 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk... Feb 13 20:39:02.769477 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization... Feb 13 20:39:02.771870 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Feb 13 20:39:02.775600 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Feb 13 20:39:02.776929 systemd[1]: Finished systemd-sysusers.service - Create System Users. Feb 13 20:39:02.782400 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Feb 13 20:39:02.789577 udevadm[1164]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation-early.service, lvm2-activation.service not to pull it in. Feb 13 20:39:02.792926 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Feb 13 20:39:02.793853 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk. Feb 13 20:39:02.799082 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Feb 13 20:39:02.814299 systemd-tmpfiles[1168]: ACLs are not supported, ignoring. Feb 13 20:39:02.814318 systemd-tmpfiles[1168]: ACLs are not supported, ignoring. Feb 13 20:39:02.819013 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. 
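[Editor's note] The journald messages above report the flush to persistent storage: 20.031 ms spent for 855 entries, with the runtime journal capped at 47.3M and the system journal at 195.6M. A quick back-of-the-envelope in Python for the per-entry flush cost implied by those two numbers; nothing here is measured beyond what the log already states.

# Figures copied from the systemd-journald messages above.
flush_ms = 20.031
entries = 855

per_entry_us = flush_ms * 1000.0 / entries
print(f"~{per_entry_us:.1f} microseconds per journal entry")  # ~23.4 us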
Feb 13 20:39:02.828498 kernel: loop1: detected capacity change from 0 to 194096 Feb 13 20:39:02.856073 kernel: loop2: detected capacity change from 0 to 114432 Feb 13 20:39:02.892081 kernel: loop3: detected capacity change from 0 to 114328 Feb 13 20:39:02.896075 kernel: loop4: detected capacity change from 0 to 194096 Feb 13 20:39:02.902066 kernel: loop5: detected capacity change from 0 to 114432 Feb 13 20:39:02.905541 (sd-merge)[1177]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes'. Feb 13 20:39:02.905934 (sd-merge)[1177]: Merged extensions into '/usr'. Feb 13 20:39:02.909750 systemd[1]: Reloading requested from client PID 1152 ('systemd-sysext') (unit systemd-sysext.service)... Feb 13 20:39:02.909801 systemd[1]: Reloading... Feb 13 20:39:02.958232 zram_generator::config[1201]: No configuration found. Feb 13 20:39:03.002657 ldconfig[1147]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Feb 13 20:39:03.066056 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Feb 13 20:39:03.101806 systemd[1]: Reloading finished in 191 ms. Feb 13 20:39:03.137373 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Feb 13 20:39:03.138508 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Feb 13 20:39:03.153224 systemd[1]: Starting ensure-sysext.service... Feb 13 20:39:03.154969 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Feb 13 20:39:03.168412 systemd[1]: Reloading requested from client PID 1238 ('systemctl') (unit ensure-sysext.service)... Feb 13 20:39:03.168427 systemd[1]: Reloading... Feb 13 20:39:03.176963 systemd-tmpfiles[1239]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Feb 13 20:39:03.177241 systemd-tmpfiles[1239]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Feb 13 20:39:03.177885 systemd-tmpfiles[1239]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Feb 13 20:39:03.178165 systemd-tmpfiles[1239]: ACLs are not supported, ignoring. Feb 13 20:39:03.178219 systemd-tmpfiles[1239]: ACLs are not supported, ignoring. Feb 13 20:39:03.180309 systemd-tmpfiles[1239]: Detected autofs mount point /boot during canonicalization of boot. Feb 13 20:39:03.180321 systemd-tmpfiles[1239]: Skipping /boot Feb 13 20:39:03.187379 systemd-tmpfiles[1239]: Detected autofs mount point /boot during canonicalization of boot. Feb 13 20:39:03.187393 systemd-tmpfiles[1239]: Skipping /boot Feb 13 20:39:03.214072 zram_generator::config[1272]: No configuration found. Feb 13 20:39:03.294003 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Feb 13 20:39:03.330075 systemd[1]: Reloading finished in 161 ms. Feb 13 20:39:03.347040 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Feb 13 20:39:03.366540 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Feb 13 20:39:03.372778 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... 
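[Editor's note] The loop0-loop5 capacity changes and the (sd-merge) lines above show systemd-sysext merging the containerd-flatcar, docker-flatcar and kubernetes extension images into /usr and /opt, followed by a service-manager reload. As a small hedged sketch, the Python below lists candidate extension images from the directories systemd-sysext is documented to scan (directory set assumed from the systemd-sysext documentation; on this host the log only shows /etc/extensions/kubernetes.raw, and plain extension directories rather than .raw images are not handled here).

from pathlib import Path

# Directories systemd-sysext normally scans for extension images (assumed).
SEARCH_DIRS = ["/etc/extensions", "/run/extensions", "/var/lib/extensions"]

def list_sysext_images():
    """Return the .raw extension images present in the standard search dirs."""
    images = []
    for d in SEARCH_DIRS:
        p = Path(d)
        if p.is_dir():
            images += sorted(str(f) for f in p.glob("*.raw"))
    return images

if __name__ == "__main__":
    for img in list_sysext_images():
        print(img)  # e.g. /etc/extensions/kubernetes.raw on this host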
Feb 13 20:39:03.375115 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Feb 13 20:39:03.377158 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Feb 13 20:39:03.382248 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Feb 13 20:39:03.390147 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Feb 13 20:39:03.392401 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Feb 13 20:39:03.398021 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Feb 13 20:39:03.403161 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Feb 13 20:39:03.406199 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Feb 13 20:39:03.412207 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Feb 13 20:39:03.414236 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Feb 13 20:39:03.421091 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Feb 13 20:39:03.423368 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Feb 13 20:39:03.425325 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Feb 13 20:39:03.425490 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Feb 13 20:39:03.426864 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Feb 13 20:39:03.427035 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Feb 13 20:39:03.427320 systemd-udevd[1313]: Using default interface naming scheme 'v255'. Feb 13 20:39:03.428842 systemd[1]: modprobe@loop.service: Deactivated successfully. Feb 13 20:39:03.428986 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Feb 13 20:39:03.438011 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Feb 13 20:39:03.445388 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Feb 13 20:39:03.450143 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Feb 13 20:39:03.455651 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Feb 13 20:39:03.455767 augenrules[1333]: No rules Feb 13 20:39:03.456614 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Feb 13 20:39:03.459296 systemd[1]: Starting systemd-update-done.service - Update is Completed... Feb 13 20:39:03.463883 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Feb 13 20:39:03.465582 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Feb 13 20:39:03.467027 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Feb 13 20:39:03.468801 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Feb 13 20:39:03.470796 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Feb 13 20:39:03.471039 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Feb 13 20:39:03.472320 systemd[1]: Started systemd-userdbd.service - User Database Manager. 
Feb 13 20:39:03.474846 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Feb 13 20:39:03.474988 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Feb 13 20:39:03.476795 systemd[1]: modprobe@loop.service: Deactivated successfully. Feb 13 20:39:03.476922 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Feb 13 20:39:03.497547 systemd[1]: Finished ensure-sysext.service. Feb 13 20:39:03.499590 systemd[1]: Finished systemd-update-done.service - Update is Completed. Feb 13 20:39:03.510724 systemd[1]: Condition check resulted in dev-ttyAMA0.device - /dev/ttyAMA0 being skipped. Feb 13 20:39:03.511835 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Feb 13 20:39:03.521296 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Feb 13 20:39:03.526273 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Feb 13 20:39:03.529588 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Feb 13 20:39:03.531791 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Feb 13 20:39:03.532916 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Feb 13 20:39:03.535553 systemd[1]: Starting systemd-networkd.service - Network Configuration... Feb 13 20:39:03.538333 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization... Feb 13 20:39:03.540130 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Feb 13 20:39:03.540596 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Feb 13 20:39:03.540775 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Feb 13 20:39:03.541910 systemd[1]: modprobe@drm.service: Deactivated successfully. Feb 13 20:39:03.542053 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Feb 13 20:39:03.543359 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Feb 13 20:39:03.543481 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Feb 13 20:39:03.547029 systemd[1]: modprobe@loop.service: Deactivated successfully. Feb 13 20:39:03.547284 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Feb 13 20:39:03.553015 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Feb 13 20:39:03.553219 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Feb 13 20:39:03.572867 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 40 scanned by (udev-worker) (1361) Feb 13 20:39:03.573131 systemd-resolved[1306]: Positive Trust Anchors: Feb 13 20:39:03.573149 systemd-resolved[1306]: . 
IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Feb 13 20:39:03.573182 systemd-resolved[1306]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Feb 13 20:39:03.591492 systemd-resolved[1306]: Defaulting to hostname 'linux'. Feb 13 20:39:03.612368 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Feb 13 20:39:03.625284 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Feb 13 20:39:03.626249 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Feb 13 20:39:03.627227 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Feb 13 20:39:03.680351 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Feb 13 20:39:03.681522 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization. Feb 13 20:39:03.684124 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Feb 13 20:39:03.687772 systemd[1]: Reached target time-set.target - System Time Set. Feb 13 20:39:03.696441 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization. Feb 13 20:39:03.698686 systemd-networkd[1377]: lo: Link UP Feb 13 20:39:03.698698 systemd-networkd[1377]: lo: Gained carrier Feb 13 20:39:03.699565 systemd-networkd[1377]: Enumeration completed Feb 13 20:39:03.705261 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes... Feb 13 20:39:03.705669 systemd-networkd[1377]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Feb 13 20:39:03.705679 systemd-networkd[1377]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Feb 13 20:39:03.706554 systemd[1]: Started systemd-networkd.service - Network Configuration. Feb 13 20:39:03.707869 systemd[1]: Reached target network.target - Network. Feb 13 20:39:03.708526 systemd-networkd[1377]: eth0: Link UP Feb 13 20:39:03.708534 systemd-networkd[1377]: eth0: Gained carrier Feb 13 20:39:03.708551 systemd-networkd[1377]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Feb 13 20:39:03.710137 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Feb 13 20:39:03.753140 systemd-networkd[1377]: eth0: DHCPv4 address 10.0.0.7/16, gateway 10.0.0.1 acquired from 10.0.0.1 Feb 13 20:39:03.753503 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Feb 13 20:39:03.754230 lvm[1398]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Feb 13 20:39:03.754544 systemd-timesyncd[1378]: Network configuration changed, trying to establish connection. Feb 13 20:39:03.308224 systemd-timesyncd[1378]: Contacted time server 10.0.0.1:123 (10.0.0.1). Feb 13 20:39:03.318242 systemd-journald[1108]: Time jumped backwards, rotating. 
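[Editor's note] The positive trust anchor systemd-resolved logs at the start of this segment is the DS record for the DNS root zone. A short Python sketch splitting such a record into its fields; the meanings noted in the comments (key tag 20326 is the 2017 root KSK, algorithm 8 is RSA/SHA-256, digest type 2 is SHA-256) come from the DNSSEC registries and are offered as background, not something the log itself states.

# The record as logged by systemd-resolved (root zone trust anchor).
record = ". IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d"

owner, _cls, _rtype, key_tag, algorithm, digest_type, digest = record.split()

print("owner:      ", owner)        # "." -> the DNS root
print("key tag:    ", key_tag)      # 20326, the 2017 root KSK
print("algorithm:  ", algorithm)    # 8 = RSA/SHA-256
print("digest type:", digest_type)  # 2 = SHA-256
print("digest:     ", digest)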
Feb 13 20:39:03.308287 systemd-timesyncd[1378]: Initial clock synchronization to Thu 2025-02-13 20:39:03.308113 UTC. Feb 13 20:39:03.308529 systemd-resolved[1306]: Clock change detected. Flushing caches. Feb 13 20:39:03.344469 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes. Feb 13 20:39:03.345660 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Feb 13 20:39:03.346548 systemd[1]: Reached target sysinit.target - System Initialization. Feb 13 20:39:03.347410 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Feb 13 20:39:03.348458 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Feb 13 20:39:03.350074 systemd[1]: Started logrotate.timer - Daily rotation of log files. Feb 13 20:39:03.350986 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Feb 13 20:39:03.351916 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Feb 13 20:39:03.352800 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Feb 13 20:39:03.352835 systemd[1]: Reached target paths.target - Path Units. Feb 13 20:39:03.353713 systemd[1]: Reached target timers.target - Timer Units. Feb 13 20:39:03.355356 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Feb 13 20:39:03.357721 systemd[1]: Starting docker.socket - Docker Socket for the API... Feb 13 20:39:03.373361 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Feb 13 20:39:03.376276 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes... Feb 13 20:39:03.377731 systemd[1]: Listening on docker.socket - Docker Socket for the API. Feb 13 20:39:03.378699 systemd[1]: Reached target sockets.target - Socket Units. Feb 13 20:39:03.379454 systemd[1]: Reached target basic.target - Basic System. Feb 13 20:39:03.380219 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Feb 13 20:39:03.380254 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Feb 13 20:39:03.381241 systemd[1]: Starting containerd.service - containerd container runtime... Feb 13 20:39:03.382947 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Feb 13 20:39:03.385124 lvm[1408]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Feb 13 20:39:03.387102 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Feb 13 20:39:03.390811 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Feb 13 20:39:03.392134 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Feb 13 20:39:03.395879 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Feb 13 20:39:03.398074 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Feb 13 20:39:03.399443 jq[1411]: false Feb 13 20:39:03.405389 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Feb 13 20:39:03.408033 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Feb 13 20:39:03.417138 systemd[1]: Starting systemd-logind.service - User Login Management... 
Feb 13 20:39:03.417299 extend-filesystems[1412]: Found loop3 Feb 13 20:39:03.417299 extend-filesystems[1412]: Found loop4 Feb 13 20:39:03.417299 extend-filesystems[1412]: Found loop5 Feb 13 20:39:03.417299 extend-filesystems[1412]: Found vda Feb 13 20:39:03.417299 extend-filesystems[1412]: Found vda1 Feb 13 20:39:03.417299 extend-filesystems[1412]: Found vda2 Feb 13 20:39:03.417299 extend-filesystems[1412]: Found vda3 Feb 13 20:39:03.417299 extend-filesystems[1412]: Found usr Feb 13 20:39:03.417299 extend-filesystems[1412]: Found vda4 Feb 13 20:39:03.417299 extend-filesystems[1412]: Found vda6 Feb 13 20:39:03.417299 extend-filesystems[1412]: Found vda7 Feb 13 20:39:03.417299 extend-filesystems[1412]: Found vda9 Feb 13 20:39:03.417299 extend-filesystems[1412]: Checking size of /dev/vda9 Feb 13 20:39:03.441170 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 1864699 blocks Feb 13 20:39:03.433798 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Feb 13 20:39:03.441287 extend-filesystems[1412]: Resized partition /dev/vda9 Feb 13 20:39:03.434340 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Feb 13 20:39:03.446471 extend-filesystems[1428]: resize2fs 1.47.1 (20-May-2024) Feb 13 20:39:03.438181 systemd[1]: Starting update-engine.service - Update Engine... Feb 13 20:39:03.441090 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Feb 13 20:39:03.443578 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes. Feb 13 20:39:03.448623 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Feb 13 20:39:03.448787 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Feb 13 20:39:03.450844 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Feb 13 20:39:03.451016 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Feb 13 20:39:03.470261 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 40 scanned by (udev-worker) (1351) Feb 13 20:39:03.460345 systemd[1]: motdgen.service: Deactivated successfully. Feb 13 20:39:03.462035 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Feb 13 20:39:03.475187 kernel: EXT4-fs (vda9): resized filesystem to 1864699 Feb 13 20:39:03.497644 jq[1433]: true Feb 13 20:39:03.512147 (ntainerd)[1447]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Feb 13 20:39:03.513435 extend-filesystems[1428]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required Feb 13 20:39:03.513435 extend-filesystems[1428]: old_desc_blocks = 1, new_desc_blocks = 1 Feb 13 20:39:03.513435 extend-filesystems[1428]: The filesystem on /dev/vda9 is now 1864699 (4k) blocks long. Feb 13 20:39:03.518489 extend-filesystems[1412]: Resized filesystem in /dev/vda9 Feb 13 20:39:03.519829 jq[1445]: true Feb 13 20:39:03.514719 systemd[1]: extend-filesystems.service: Deactivated successfully. Feb 13 20:39:03.514964 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Feb 13 20:39:03.528141 dbus-daemon[1410]: [system] SELinux support is enabled Feb 13 20:39:03.528409 systemd[1]: Started dbus.service - D-Bus System Message Bus. 
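[Editor's note] The extend-filesystems unit above grows /dev/vda9 online: the kernel and resize2fs report the filesystem going from 553472 to 1864699 blocks of 4k. A tiny Python check of what those block counts mean in GiB, using only the numbers already in the log.

BLOCK = 4096  # 4k blocks, as reported by resize2fs above

old_blocks, new_blocks = 553472, 1864699

old_gib = old_blocks * BLOCK / 2**30
new_gib = new_blocks * BLOCK / 2**30
print(f"before: {old_gib:.2f} GiB, after: {new_gib:.2f} GiB")  # ~2.11 -> ~7.11 GiB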
Feb 13 20:39:03.528884 update_engine[1429]: I20250213 20:39:03.527406 1429 main.cc:92] Flatcar Update Engine starting Feb 13 20:39:03.532864 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Feb 13 20:39:03.532918 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Feb 13 20:39:03.534443 update_engine[1429]: I20250213 20:39:03.534394 1429 update_check_scheduler.cc:74] Next update check in 8m10s Feb 13 20:39:03.535226 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Feb 13 20:39:03.535253 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Feb 13 20:39:03.536683 systemd[1]: Started update-engine.service - Update Engine. Feb 13 20:39:03.537995 tar[1435]: linux-arm64/helm Feb 13 20:39:03.545166 systemd[1]: Started locksmithd.service - Cluster reboot manager. Feb 13 20:39:03.547421 systemd-logind[1420]: Watching system buttons on /dev/input/event0 (Power Button) Feb 13 20:39:03.547648 systemd-logind[1420]: New seat seat0. Feb 13 20:39:03.551161 systemd[1]: Started systemd-logind.service - User Login Management. Feb 13 20:39:03.572232 bash[1466]: Updated "/home/core/.ssh/authorized_keys" Feb 13 20:39:03.576107 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Feb 13 20:39:03.577580 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met. Feb 13 20:39:03.617195 locksmithd[1459]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Feb 13 20:39:03.748628 containerd[1447]: time="2025-02-13T20:39:03.747213706Z" level=info msg="starting containerd" revision=174e0d1785eeda18dc2beba45e1d5a188771636b version=v1.7.21 Feb 13 20:39:03.779529 containerd[1447]: time="2025-02-13T20:39:03.779470586Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Feb 13 20:39:03.780930 containerd[1447]: time="2025-02-13T20:39:03.780896346Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.74-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Feb 13 20:39:03.780930 containerd[1447]: time="2025-02-13T20:39:03.780927266Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Feb 13 20:39:03.780994 containerd[1447]: time="2025-02-13T20:39:03.780955986Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Feb 13 20:39:03.781571 containerd[1447]: time="2025-02-13T20:39:03.781105386Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1 Feb 13 20:39:03.781571 containerd[1447]: time="2025-02-13T20:39:03.781129666Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1 Feb 13 20:39:03.781571 containerd[1447]: time="2025-02-13T20:39:03.781182546Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." 
error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1 Feb 13 20:39:03.781571 containerd[1447]: time="2025-02-13T20:39:03.781194906Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Feb 13 20:39:03.781571 containerd[1447]: time="2025-02-13T20:39:03.781347026Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Feb 13 20:39:03.781571 containerd[1447]: time="2025-02-13T20:39:03.781362186Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Feb 13 20:39:03.781571 containerd[1447]: time="2025-02-13T20:39:03.781374986Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1 Feb 13 20:39:03.781571 containerd[1447]: time="2025-02-13T20:39:03.781384946Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Feb 13 20:39:03.781571 containerd[1447]: time="2025-02-13T20:39:03.781454146Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Feb 13 20:39:03.781777 containerd[1447]: time="2025-02-13T20:39:03.781645906Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Feb 13 20:39:03.781777 containerd[1447]: time="2025-02-13T20:39:03.781737426Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Feb 13 20:39:03.781777 containerd[1447]: time="2025-02-13T20:39:03.781751186Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Feb 13 20:39:03.781902 containerd[1447]: time="2025-02-13T20:39:03.781833506Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 Feb 13 20:39:03.781902 containerd[1447]: time="2025-02-13T20:39:03.781886506Z" level=info msg="metadata content store policy set" policy=shared Feb 13 20:39:03.785228 containerd[1447]: time="2025-02-13T20:39:03.785200746Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Feb 13 20:39:03.785302 containerd[1447]: time="2025-02-13T20:39:03.785248146Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Feb 13 20:39:03.785302 containerd[1447]: time="2025-02-13T20:39:03.785263706Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1 Feb 13 20:39:03.785302 containerd[1447]: time="2025-02-13T20:39:03.785284066Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1 Feb 13 20:39:03.785302 containerd[1447]: time="2025-02-13T20:39:03.785298066Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Feb 13 20:39:03.785495 containerd[1447]: time="2025-02-13T20:39:03.785422266Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." 
type=io.containerd.monitor.v1 Feb 13 20:39:03.786970 containerd[1447]: time="2025-02-13T20:39:03.786079106Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Feb 13 20:39:03.786970 containerd[1447]: time="2025-02-13T20:39:03.786240426Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2 Feb 13 20:39:03.786970 containerd[1447]: time="2025-02-13T20:39:03.786265946Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1 Feb 13 20:39:03.786970 containerd[1447]: time="2025-02-13T20:39:03.786285506Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1 Feb 13 20:39:03.786970 containerd[1447]: time="2025-02-13T20:39:03.786313346Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Feb 13 20:39:03.786970 containerd[1447]: time="2025-02-13T20:39:03.786333146Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Feb 13 20:39:03.786970 containerd[1447]: time="2025-02-13T20:39:03.786351066Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Feb 13 20:39:03.786970 containerd[1447]: time="2025-02-13T20:39:03.786366746Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Feb 13 20:39:03.786970 containerd[1447]: time="2025-02-13T20:39:03.786386026Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Feb 13 20:39:03.786970 containerd[1447]: time="2025-02-13T20:39:03.786404186Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Feb 13 20:39:03.786970 containerd[1447]: time="2025-02-13T20:39:03.786421106Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Feb 13 20:39:03.786970 containerd[1447]: time="2025-02-13T20:39:03.786438026Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Feb 13 20:39:03.786970 containerd[1447]: time="2025-02-13T20:39:03.786463426Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Feb 13 20:39:03.786970 containerd[1447]: time="2025-02-13T20:39:03.786483146Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Feb 13 20:39:03.787242 containerd[1447]: time="2025-02-13T20:39:03.786501186Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Feb 13 20:39:03.787242 containerd[1447]: time="2025-02-13T20:39:03.786531826Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Feb 13 20:39:03.787242 containerd[1447]: time="2025-02-13T20:39:03.786550026Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Feb 13 20:39:03.787242 containerd[1447]: time="2025-02-13T20:39:03.786568826Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Feb 13 20:39:03.787242 containerd[1447]: time="2025-02-13T20:39:03.786586946Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." 
type=io.containerd.grpc.v1 Feb 13 20:39:03.787242 containerd[1447]: time="2025-02-13T20:39:03.786606346Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Feb 13 20:39:03.787242 containerd[1447]: time="2025-02-13T20:39:03.786626426Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1 Feb 13 20:39:03.787242 containerd[1447]: time="2025-02-13T20:39:03.786648586Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1 Feb 13 20:39:03.787242 containerd[1447]: time="2025-02-13T20:39:03.786667746Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Feb 13 20:39:03.787242 containerd[1447]: time="2025-02-13T20:39:03.786684586Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1 Feb 13 20:39:03.787242 containerd[1447]: time="2025-02-13T20:39:03.786700746Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Feb 13 20:39:03.787242 containerd[1447]: time="2025-02-13T20:39:03.786721586Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1 Feb 13 20:39:03.787242 containerd[1447]: time="2025-02-13T20:39:03.786746826Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1 Feb 13 20:39:03.787242 containerd[1447]: time="2025-02-13T20:39:03.786763186Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Feb 13 20:39:03.787242 containerd[1447]: time="2025-02-13T20:39:03.786777666Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Feb 13 20:39:03.787468 containerd[1447]: time="2025-02-13T20:39:03.786907946Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Feb 13 20:39:03.787468 containerd[1447]: time="2025-02-13T20:39:03.786945306Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1 Feb 13 20:39:03.787468 containerd[1447]: time="2025-02-13T20:39:03.786962346Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Feb 13 20:39:03.787468 containerd[1447]: time="2025-02-13T20:39:03.786980386Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1 Feb 13 20:39:03.787468 containerd[1447]: time="2025-02-13T20:39:03.786995746Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Feb 13 20:39:03.787468 containerd[1447]: time="2025-02-13T20:39:03.787012786Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1 Feb 13 20:39:03.787468 containerd[1447]: time="2025-02-13T20:39:03.787024946Z" level=info msg="NRI interface is disabled by configuration." Feb 13 20:39:03.787468 containerd[1447]: time="2025-02-13T20:39:03.787039186Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." 
type=io.containerd.grpc.v1 Feb 13 20:39:03.787656 containerd[1447]: time="2025-02-13T20:39:03.787347946Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Feb 13 20:39:03.787656 containerd[1447]: time="2025-02-13T20:39:03.787414226Z" level=info msg="Connect containerd service" Feb 13 20:39:03.787656 containerd[1447]: time="2025-02-13T20:39:03.787451866Z" level=info msg="using legacy CRI server" Feb 13 20:39:03.787656 containerd[1447]: time="2025-02-13T20:39:03.787459386Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Feb 13 20:39:03.787656 containerd[1447]: time="2025-02-13T20:39:03.787560546Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Feb 13 20:39:03.788301 containerd[1447]: time="2025-02-13T20:39:03.788268826Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Feb 13 20:39:03.788507 
containerd[1447]: time="2025-02-13T20:39:03.788472066Z" level=info msg="Start subscribing containerd event" Feb 13 20:39:03.788759 containerd[1447]: time="2025-02-13T20:39:03.788538426Z" level=info msg="Start recovering state" Feb 13 20:39:03.788759 containerd[1447]: time="2025-02-13T20:39:03.788611386Z" level=info msg="Start event monitor" Feb 13 20:39:03.788759 containerd[1447]: time="2025-02-13T20:39:03.788624386Z" level=info msg="Start snapshots syncer" Feb 13 20:39:03.788759 containerd[1447]: time="2025-02-13T20:39:03.788634906Z" level=info msg="Start cni network conf syncer for default" Feb 13 20:39:03.788759 containerd[1447]: time="2025-02-13T20:39:03.788654306Z" level=info msg="Start streaming server" Feb 13 20:39:03.790250 containerd[1447]: time="2025-02-13T20:39:03.789108866Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Feb 13 20:39:03.790250 containerd[1447]: time="2025-02-13T20:39:03.789288746Z" level=info msg=serving... address=/run/containerd/containerd.sock Feb 13 20:39:03.790250 containerd[1447]: time="2025-02-13T20:39:03.789437306Z" level=info msg="containerd successfully booted in 0.044937s" Feb 13 20:39:03.789584 systemd[1]: Started containerd.service - containerd container runtime. Feb 13 20:39:03.892810 tar[1435]: linux-arm64/LICENSE Feb 13 20:39:03.892924 tar[1435]: linux-arm64/README.md Feb 13 20:39:03.905005 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Feb 13 20:39:04.037385 sshd_keygen[1425]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Feb 13 20:39:04.056430 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Feb 13 20:39:04.068176 systemd[1]: Starting issuegen.service - Generate /run/issue... Feb 13 20:39:04.073418 systemd[1]: issuegen.service: Deactivated successfully. Feb 13 20:39:04.073617 systemd[1]: Finished issuegen.service - Generate /run/issue. Feb 13 20:39:04.078061 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Feb 13 20:39:04.091856 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Feb 13 20:39:04.094910 systemd[1]: Started getty@tty1.service - Getty on tty1. Feb 13 20:39:04.097186 systemd[1]: Started serial-getty@ttyAMA0.service - Serial Getty on ttyAMA0. Feb 13 20:39:04.098140 systemd[1]: Reached target getty.target - Login Prompts. Feb 13 20:39:04.836091 systemd-networkd[1377]: eth0: Gained IPv6LL Feb 13 20:39:04.838345 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Feb 13 20:39:04.840208 systemd[1]: Reached target network-online.target - Network is Online. Feb 13 20:39:04.855320 systemd[1]: Starting coreos-metadata.service - QEMU metadata agent... Feb 13 20:39:04.857410 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Feb 13 20:39:04.861179 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Feb 13 20:39:04.874465 systemd[1]: coreos-metadata.service: Deactivated successfully. Feb 13 20:39:04.874690 systemd[1]: Finished coreos-metadata.service - QEMU metadata agent. Feb 13 20:39:04.876318 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Feb 13 20:39:04.879264 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Feb 13 20:39:05.344513 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Feb 13 20:39:05.345787 systemd[1]: Reached target multi-user.target - Multi-User System. 
Feb 13 20:39:05.348334 (kubelet)[1525]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Feb 13 20:39:05.348517 systemd[1]: Startup finished in 572ms (kernel) + 4.391s (initrd) + 3.709s (userspace) = 8.672s. Feb 13 20:39:05.823817 kubelet[1525]: E0213 20:39:05.823772 1525 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Feb 13 20:39:05.826480 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Feb 13 20:39:05.826636 systemd[1]: kubelet.service: Failed with result 'exit-code'. Feb 13 20:39:09.603211 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Feb 13 20:39:09.604553 systemd[1]: Started sshd@0-10.0.0.7:22-10.0.0.1:38670.service - OpenSSH per-connection server daemon (10.0.0.1:38670). Feb 13 20:39:09.657980 sshd[1539]: Accepted publickey for core from 10.0.0.1 port 38670 ssh2: RSA SHA256:ijJ+MGEzuaViQlbMobaPQLogVrdZNUxIS0COzLzMHAQ Feb 13 20:39:09.659797 sshd[1539]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 20:39:09.667490 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Feb 13 20:39:09.679162 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Feb 13 20:39:09.680623 systemd-logind[1420]: New session 1 of user core. Feb 13 20:39:09.687927 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Feb 13 20:39:09.690061 systemd[1]: Starting user@500.service - User Manager for UID 500... Feb 13 20:39:09.696299 (systemd)[1543]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Feb 13 20:39:09.771229 systemd[1543]: Queued start job for default target default.target. Feb 13 20:39:09.783852 systemd[1543]: Created slice app.slice - User Application Slice. Feb 13 20:39:09.783895 systemd[1543]: Reached target paths.target - Paths. Feb 13 20:39:09.783907 systemd[1543]: Reached target timers.target - Timers. Feb 13 20:39:09.785140 systemd[1543]: Starting dbus.socket - D-Bus User Message Bus Socket... Feb 13 20:39:09.794990 systemd[1543]: Listening on dbus.socket - D-Bus User Message Bus Socket. Feb 13 20:39:09.795042 systemd[1543]: Reached target sockets.target - Sockets. Feb 13 20:39:09.795054 systemd[1543]: Reached target basic.target - Basic System. Feb 13 20:39:09.795090 systemd[1543]: Reached target default.target - Main User Target. Feb 13 20:39:09.795116 systemd[1543]: Startup finished in 94ms. Feb 13 20:39:09.795420 systemd[1]: Started user@500.service - User Manager for UID 500. Feb 13 20:39:09.796694 systemd[1]: Started session-1.scope - Session 1 of User core. Feb 13 20:39:09.858330 systemd[1]: Started sshd@1-10.0.0.7:22-10.0.0.1:38676.service - OpenSSH per-connection server daemon (10.0.0.1:38676). Feb 13 20:39:09.892578 sshd[1554]: Accepted publickey for core from 10.0.0.1 port 38676 ssh2: RSA SHA256:ijJ+MGEzuaViQlbMobaPQLogVrdZNUxIS0COzLzMHAQ Feb 13 20:39:09.893794 sshd[1554]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 20:39:09.897834 systemd-logind[1420]: New session 2 of user core. Feb 13 20:39:09.913094 systemd[1]: Started session-2.scope - Session 2 of User core. 
Feb 13 20:39:09.965877 sshd[1554]: pam_unix(sshd:session): session closed for user core Feb 13 20:39:09.979239 systemd[1]: sshd@1-10.0.0.7:22-10.0.0.1:38676.service: Deactivated successfully. Feb 13 20:39:09.982204 systemd[1]: session-2.scope: Deactivated successfully. Feb 13 20:39:09.983630 systemd-logind[1420]: Session 2 logged out. Waiting for processes to exit. Feb 13 20:39:09.984721 systemd[1]: Started sshd@2-10.0.0.7:22-10.0.0.1:38692.service - OpenSSH per-connection server daemon (10.0.0.1:38692). Feb 13 20:39:09.985413 systemd-logind[1420]: Removed session 2. Feb 13 20:39:10.018967 sshd[1561]: Accepted publickey for core from 10.0.0.1 port 38692 ssh2: RSA SHA256:ijJ+MGEzuaViQlbMobaPQLogVrdZNUxIS0COzLzMHAQ Feb 13 20:39:10.020155 sshd[1561]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 20:39:10.024958 systemd-logind[1420]: New session 3 of user core. Feb 13 20:39:10.034124 systemd[1]: Started session-3.scope - Session 3 of User core. Feb 13 20:39:10.081427 sshd[1561]: pam_unix(sshd:session): session closed for user core Feb 13 20:39:10.096214 systemd[1]: sshd@2-10.0.0.7:22-10.0.0.1:38692.service: Deactivated successfully. Feb 13 20:39:10.097557 systemd[1]: session-3.scope: Deactivated successfully. Feb 13 20:39:10.099973 systemd-logind[1420]: Session 3 logged out. Waiting for processes to exit. Feb 13 20:39:10.100986 systemd[1]: Started sshd@3-10.0.0.7:22-10.0.0.1:38694.service - OpenSSH per-connection server daemon (10.0.0.1:38694). Feb 13 20:39:10.101674 systemd-logind[1420]: Removed session 3. Feb 13 20:39:10.135633 sshd[1568]: Accepted publickey for core from 10.0.0.1 port 38694 ssh2: RSA SHA256:ijJ+MGEzuaViQlbMobaPQLogVrdZNUxIS0COzLzMHAQ Feb 13 20:39:10.136818 sshd[1568]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 20:39:10.140632 systemd-logind[1420]: New session 4 of user core. Feb 13 20:39:10.152091 systemd[1]: Started session-4.scope - Session 4 of User core. Feb 13 20:39:10.205049 sshd[1568]: pam_unix(sshd:session): session closed for user core Feb 13 20:39:10.213189 systemd[1]: sshd@3-10.0.0.7:22-10.0.0.1:38694.service: Deactivated successfully. Feb 13 20:39:10.214624 systemd[1]: session-4.scope: Deactivated successfully. Feb 13 20:39:10.217054 systemd-logind[1420]: Session 4 logged out. Waiting for processes to exit. Feb 13 20:39:10.218125 systemd[1]: Started sshd@4-10.0.0.7:22-10.0.0.1:38702.service - OpenSSH per-connection server daemon (10.0.0.1:38702). Feb 13 20:39:10.218842 systemd-logind[1420]: Removed session 4. Feb 13 20:39:10.251478 sshd[1575]: Accepted publickey for core from 10.0.0.1 port 38702 ssh2: RSA SHA256:ijJ+MGEzuaViQlbMobaPQLogVrdZNUxIS0COzLzMHAQ Feb 13 20:39:10.252638 sshd[1575]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 20:39:10.256453 systemd-logind[1420]: New session 5 of user core. Feb 13 20:39:10.265077 systemd[1]: Started session-5.scope - Session 5 of User core. Feb 13 20:39:10.321413 sudo[1578]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Feb 13 20:39:10.321711 sudo[1578]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Feb 13 20:39:10.660174 systemd[1]: Starting docker.service - Docker Application Container Engine... 
Feb 13 20:39:10.660320 (dockerd)[1596]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Feb 13 20:39:10.981286 dockerd[1596]: time="2025-02-13T20:39:10.981148386Z" level=info msg="Starting up" Feb 13 20:39:11.202023 dockerd[1596]: time="2025-02-13T20:39:11.201972546Z" level=info msg="Loading containers: start." Feb 13 20:39:11.358010 kernel: Initializing XFRM netlink socket Feb 13 20:39:11.428298 systemd-networkd[1377]: docker0: Link UP Feb 13 20:39:11.454444 dockerd[1596]: time="2025-02-13T20:39:11.454377906Z" level=info msg="Loading containers: done." Feb 13 20:39:11.467517 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck3207923777-merged.mount: Deactivated successfully. Feb 13 20:39:11.469648 dockerd[1596]: time="2025-02-13T20:39:11.469594386Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Feb 13 20:39:11.469732 dockerd[1596]: time="2025-02-13T20:39:11.469707266Z" level=info msg="Docker daemon" commit=061aa95809be396a6b5542618d8a34b02a21ff77 containerd-snapshotter=false storage-driver=overlay2 version=26.1.0 Feb 13 20:39:11.469840 dockerd[1596]: time="2025-02-13T20:39:11.469810866Z" level=info msg="Daemon has completed initialization" Feb 13 20:39:11.496600 systemd[1]: Started docker.service - Docker Application Container Engine. Feb 13 20:39:11.496821 dockerd[1596]: time="2025-02-13T20:39:11.496518666Z" level=info msg="API listen on /run/docker.sock" Feb 13 20:39:12.345704 containerd[1447]: time="2025-02-13T20:39:12.345651986Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.30.10\"" Feb 13 20:39:13.025782 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1229926911.mount: Deactivated successfully. 
Feb 13 20:39:15.043216 containerd[1447]: time="2025-02-13T20:39:15.043165266Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.30.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 20:39:15.044275 containerd[1447]: time="2025-02-13T20:39:15.044050626Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.30.10: active requests=0, bytes read=29865209" Feb 13 20:39:15.045479 containerd[1447]: time="2025-02-13T20:39:15.045437426Z" level=info msg="ImageCreate event name:\"sha256:deaeae5e8513d8c5921aee5b515f0fc2ac63b71dfe965318f71eb49468e74a4f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 20:39:15.048044 containerd[1447]: time="2025-02-13T20:39:15.048009746Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:63b2b4b4e9b5dcb5b1b6cec9d5f5f538791a40cd8cb273ef530e6d6535aa0b43\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 20:39:15.049493 containerd[1447]: time="2025-02-13T20:39:15.049250586Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.30.10\" with image id \"sha256:deaeae5e8513d8c5921aee5b515f0fc2ac63b71dfe965318f71eb49468e74a4f\", repo tag \"registry.k8s.io/kube-apiserver:v1.30.10\", repo digest \"registry.k8s.io/kube-apiserver@sha256:63b2b4b4e9b5dcb5b1b6cec9d5f5f538791a40cd8cb273ef530e6d6535aa0b43\", size \"29862007\" in 2.70354912s" Feb 13 20:39:15.049493 containerd[1447]: time="2025-02-13T20:39:15.049290346Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.30.10\" returns image reference \"sha256:deaeae5e8513d8c5921aee5b515f0fc2ac63b71dfe965318f71eb49468e74a4f\"" Feb 13 20:39:15.068569 containerd[1447]: time="2025-02-13T20:39:15.068531346Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.30.10\"" Feb 13 20:39:15.864730 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Feb 13 20:39:15.876168 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Feb 13 20:39:15.966506 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Feb 13 20:39:15.970086 (kubelet)[1819]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Feb 13 20:39:16.010505 kubelet[1819]: E0213 20:39:16.010455 1819 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Feb 13 20:39:16.013417 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Feb 13 20:39:16.013560 systemd[1]: kubelet.service: Failed with result 'exit-code'. 
Feb 13 20:39:17.601820 containerd[1447]: time="2025-02-13T20:39:17.601750906Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.30.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 20:39:17.602288 containerd[1447]: time="2025-02-13T20:39:17.602254986Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.30.10: active requests=0, bytes read=26898596" Feb 13 20:39:17.603075 containerd[1447]: time="2025-02-13T20:39:17.603048746Z" level=info msg="ImageCreate event name:\"sha256:e31753dd49b05da8fcb7deb26f2a5942a6747a0e6d4492f3dc8544123b97a3a2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 20:39:17.606269 containerd[1447]: time="2025-02-13T20:39:17.606224986Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:99b3336343ea48be24f1e64774825e9f8d5170bd2ed482ff336548eb824f5f58\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 20:39:17.607239 containerd[1447]: time="2025-02-13T20:39:17.607215346Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.30.10\" with image id \"sha256:e31753dd49b05da8fcb7deb26f2a5942a6747a0e6d4492f3dc8544123b97a3a2\", repo tag \"registry.k8s.io/kube-controller-manager:v1.30.10\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:99b3336343ea48be24f1e64774825e9f8d5170bd2ed482ff336548eb824f5f58\", size \"28302323\" in 2.53864756s" Feb 13 20:39:17.607282 containerd[1447]: time="2025-02-13T20:39:17.607245186Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.30.10\" returns image reference \"sha256:e31753dd49b05da8fcb7deb26f2a5942a6747a0e6d4492f3dc8544123b97a3a2\"" Feb 13 20:39:17.627901 containerd[1447]: time="2025-02-13T20:39:17.627873226Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.30.10\"" Feb 13 20:39:18.802451 containerd[1447]: time="2025-02-13T20:39:18.802392186Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.30.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 20:39:18.815538 containerd[1447]: time="2025-02-13T20:39:18.815489266Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.30.10: active requests=0, bytes read=16164936" Feb 13 20:39:18.906278 containerd[1447]: time="2025-02-13T20:39:18.906191706Z" level=info msg="ImageCreate event name:\"sha256:ea60c047fad7c01bf50f1f0259a4aeea2cc4401850d5a95802cc1d07d9021eb4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 20:39:18.913918 containerd[1447]: time="2025-02-13T20:39:18.913854746Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:cf7eb256192f1f51093fe278c209a9368f0675eb61ed01b148af47d2f21c002d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 20:39:18.914911 containerd[1447]: time="2025-02-13T20:39:18.914879706Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.30.10\" with image id \"sha256:ea60c047fad7c01bf50f1f0259a4aeea2cc4401850d5a95802cc1d07d9021eb4\", repo tag \"registry.k8s.io/kube-scheduler:v1.30.10\", repo digest \"registry.k8s.io/kube-scheduler@sha256:cf7eb256192f1f51093fe278c209a9368f0675eb61ed01b148af47d2f21c002d\", size \"17568681\" in 1.286972s" Feb 13 20:39:18.914991 containerd[1447]: time="2025-02-13T20:39:18.914915106Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.30.10\" returns image reference \"sha256:ea60c047fad7c01bf50f1f0259a4aeea2cc4401850d5a95802cc1d07d9021eb4\"" Feb 13 20:39:18.933537 
containerd[1447]: time="2025-02-13T20:39:18.933434266Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.30.10\"" Feb 13 20:39:20.116044 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount87328753.mount: Deactivated successfully. Feb 13 20:39:20.321685 containerd[1447]: time="2025-02-13T20:39:20.321634786Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.30.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 20:39:20.322228 containerd[1447]: time="2025-02-13T20:39:20.322191466Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.30.10: active requests=0, bytes read=25663372" Feb 13 20:39:20.322742 containerd[1447]: time="2025-02-13T20:39:20.322719506Z" level=info msg="ImageCreate event name:\"sha256:fa8af75a6512774cc93242474a9841ace82a7d0646001149fc65d92a8bb0c00a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 20:39:20.324683 containerd[1447]: time="2025-02-13T20:39:20.324649706Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:d112e804e548fce28d9f1e3282c9ce54e374451e6a2c41b1ca9d7fca5d1fcc48\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 20:39:20.326052 containerd[1447]: time="2025-02-13T20:39:20.326021386Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.30.10\" with image id \"sha256:fa8af75a6512774cc93242474a9841ace82a7d0646001149fc65d92a8bb0c00a\", repo tag \"registry.k8s.io/kube-proxy:v1.30.10\", repo digest \"registry.k8s.io/kube-proxy@sha256:d112e804e548fce28d9f1e3282c9ce54e374451e6a2c41b1ca9d7fca5d1fcc48\", size \"25662389\" in 1.39254196s" Feb 13 20:39:20.326052 containerd[1447]: time="2025-02-13T20:39:20.326052826Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.30.10\" returns image reference \"sha256:fa8af75a6512774cc93242474a9841ace82a7d0646001149fc65d92a8bb0c00a\"" Feb 13 20:39:20.344809 containerd[1447]: time="2025-02-13T20:39:20.344773506Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\"" Feb 13 20:39:20.950744 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3114988182.mount: Deactivated successfully. 
Feb 13 20:39:22.071779 containerd[1447]: time="2025-02-13T20:39:22.071728386Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 20:39:22.073291 containerd[1447]: time="2025-02-13T20:39:22.073239386Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.1: active requests=0, bytes read=16485383" Feb 13 20:39:22.073962 containerd[1447]: time="2025-02-13T20:39:22.073882826Z" level=info msg="ImageCreate event name:\"sha256:2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 20:39:22.077028 containerd[1447]: time="2025-02-13T20:39:22.076982066Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 20:39:22.078158 containerd[1447]: time="2025-02-13T20:39:22.078053546Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.1\" with image id \"sha256:2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.1\", repo digest \"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\", size \"16482581\" in 1.73323992s" Feb 13 20:39:22.078158 containerd[1447]: time="2025-02-13T20:39:22.078084306Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\" returns image reference \"sha256:2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93\"" Feb 13 20:39:22.095521 containerd[1447]: time="2025-02-13T20:39:22.095490666Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\"" Feb 13 20:39:22.510010 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1820050480.mount: Deactivated successfully. 
Feb 13 20:39:22.514227 containerd[1447]: time="2025-02-13T20:39:22.514176226Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 20:39:22.514700 containerd[1447]: time="2025-02-13T20:39:22.514662346Z" level=info msg="stop pulling image registry.k8s.io/pause:3.9: active requests=0, bytes read=268823" Feb 13 20:39:22.516069 containerd[1447]: time="2025-02-13T20:39:22.516032386Z" level=info msg="ImageCreate event name:\"sha256:829e9de338bd5fdd3f16f68f83a9fb288fbc8453e881e5d5cfd0f6f2ff72b43e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 20:39:22.518152 containerd[1447]: time="2025-02-13T20:39:22.518125186Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 20:39:22.519176 containerd[1447]: time="2025-02-13T20:39:22.519145706Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.9\" with image id \"sha256:829e9de338bd5fdd3f16f68f83a9fb288fbc8453e881e5d5cfd0f6f2ff72b43e\", repo tag \"registry.k8s.io/pause:3.9\", repo digest \"registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097\", size \"268051\" in 423.62136ms" Feb 13 20:39:22.519230 containerd[1447]: time="2025-02-13T20:39:22.519176706Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\" returns image reference \"sha256:829e9de338bd5fdd3f16f68f83a9fb288fbc8453e881e5d5cfd0f6f2ff72b43e\"" Feb 13 20:39:22.537312 containerd[1447]: time="2025-02-13T20:39:22.537275186Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.12-0\"" Feb 13 20:39:23.048503 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3404901923.mount: Deactivated successfully. Feb 13 20:39:26.114689 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Feb 13 20:39:26.120159 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Feb 13 20:39:26.207979 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Feb 13 20:39:26.212236 (kubelet)[1974]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Feb 13 20:39:26.256869 kubelet[1974]: E0213 20:39:26.256798 1974 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Feb 13 20:39:26.259523 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Feb 13 20:39:26.259654 systemd[1]: kubelet.service: Failed with result 'exit-code'. 
Feb 13 20:39:26.721903 containerd[1447]: time="2025-02-13T20:39:26.721848666Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.12-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 20:39:26.722831 containerd[1447]: time="2025-02-13T20:39:26.722602826Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.12-0: active requests=0, bytes read=66191474" Feb 13 20:39:26.723604 containerd[1447]: time="2025-02-13T20:39:26.723362826Z" level=info msg="ImageCreate event name:\"sha256:014faa467e29798aeef733fe6d1a3b5e382688217b053ad23410e6cccd5d22fd\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 20:39:26.726361 containerd[1447]: time="2025-02-13T20:39:26.726328266Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:44a8e24dcbba3470ee1fee21d5e88d128c936e9b55d4bc51fbef8086f8ed123b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 20:39:26.727832 containerd[1447]: time="2025-02-13T20:39:26.727741706Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.12-0\" with image id \"sha256:014faa467e29798aeef733fe6d1a3b5e382688217b053ad23410e6cccd5d22fd\", repo tag \"registry.k8s.io/etcd:3.5.12-0\", repo digest \"registry.k8s.io/etcd@sha256:44a8e24dcbba3470ee1fee21d5e88d128c936e9b55d4bc51fbef8086f8ed123b\", size \"66189079\" in 4.19042972s" Feb 13 20:39:26.727832 containerd[1447]: time="2025-02-13T20:39:26.727777106Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.12-0\" returns image reference \"sha256:014faa467e29798aeef733fe6d1a3b5e382688217b053ad23410e6cccd5d22fd\"" Feb 13 20:39:31.716082 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Feb 13 20:39:31.730153 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Feb 13 20:39:31.743266 systemd[1]: Reloading requested from client PID 2065 ('systemctl') (unit session-5.scope)... Feb 13 20:39:31.743278 systemd[1]: Reloading... Feb 13 20:39:31.800045 zram_generator::config[2101]: No configuration found. Feb 13 20:39:31.915915 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Feb 13 20:39:31.966790 systemd[1]: Reloading finished in 223 ms. Feb 13 20:39:32.006021 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Feb 13 20:39:32.009228 systemd[1]: kubelet.service: Deactivated successfully. Feb 13 20:39:32.009410 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Feb 13 20:39:32.010679 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Feb 13 20:39:32.104154 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Feb 13 20:39:32.108586 (kubelet)[2151]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Feb 13 20:39:32.151716 kubelet[2151]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Feb 13 20:39:32.151716 kubelet[2151]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. 
Feb 13 20:39:32.151716 kubelet[2151]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Feb 13 20:39:32.152060 kubelet[2151]: I0213 20:39:32.151815 2151 server.go:205] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Feb 13 20:39:32.854883 kubelet[2151]: I0213 20:39:32.854841 2151 server.go:484] "Kubelet version" kubeletVersion="v1.30.1" Feb 13 20:39:32.854883 kubelet[2151]: I0213 20:39:32.854869 2151 server.go:486] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Feb 13 20:39:32.855084 kubelet[2151]: I0213 20:39:32.855067 2151 server.go:927] "Client rotation is on, will bootstrap in background" Feb 13 20:39:32.909752 kubelet[2151]: E0213 20:39:32.909713 2151 certificate_manager.go:562] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post "https://10.0.0.7:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 10.0.0.7:6443: connect: connection refused Feb 13 20:39:32.909752 kubelet[2151]: I0213 20:39:32.909757 2151 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Feb 13 20:39:32.927120 kubelet[2151]: I0213 20:39:32.927088 2151 server.go:742] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Feb 13 20:39:32.927511 kubelet[2151]: I0213 20:39:32.927477 2151 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Feb 13 20:39:32.927660 kubelet[2151]: I0213 20:39:32.927505 2151 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null} Feb 13 20:39:32.927745 kubelet[2151]: I0213 20:39:32.927736 2151 topology_manager.go:138] "Creating topology manager with 
none policy" Feb 13 20:39:32.927775 kubelet[2151]: I0213 20:39:32.927747 2151 container_manager_linux.go:301] "Creating device plugin manager" Feb 13 20:39:32.928064 kubelet[2151]: I0213 20:39:32.928051 2151 state_mem.go:36] "Initialized new in-memory state store" Feb 13 20:39:32.929606 kubelet[2151]: I0213 20:39:32.929587 2151 kubelet.go:400] "Attempting to sync node with API server" Feb 13 20:39:32.929654 kubelet[2151]: I0213 20:39:32.929623 2151 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests" Feb 13 20:39:32.931062 kubelet[2151]: I0213 20:39:32.930438 2151 kubelet.go:312] "Adding apiserver pod source" Feb 13 20:39:32.931062 kubelet[2151]: I0213 20:39:32.930709 2151 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Feb 13 20:39:32.931421 kubelet[2151]: W0213 20:39:32.931147 2151 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.7:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.7:6443: connect: connection refused Feb 13 20:39:32.931421 kubelet[2151]: E0213 20:39:32.931203 2151 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://10.0.0.7:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.7:6443: connect: connection refused Feb 13 20:39:32.931629 kubelet[2151]: W0213 20:39:32.931592 2151 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.7:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.0.0.7:6443: connect: connection refused Feb 13 20:39:32.931725 kubelet[2151]: E0213 20:39:32.931713 2151 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.0.0.7:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.0.0.7:6443: connect: connection refused Feb 13 20:39:32.933215 kubelet[2151]: I0213 20:39:32.933152 2151 kuberuntime_manager.go:261] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Feb 13 20:39:32.934281 kubelet[2151]: I0213 20:39:32.934261 2151 kubelet.go:815] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Feb 13 20:39:32.934566 kubelet[2151]: W0213 20:39:32.934546 2151 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. 
Feb 13 20:39:32.935513 kubelet[2151]: I0213 20:39:32.935487 2151 server.go:1264] "Started kubelet" Feb 13 20:39:32.936161 kubelet[2151]: I0213 20:39:32.935606 2151 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Feb 13 20:39:32.936544 kubelet[2151]: I0213 20:39:32.936502 2151 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Feb 13 20:39:32.936786 kubelet[2151]: I0213 20:39:32.936771 2151 server.go:227] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Feb 13 20:39:32.936849 kubelet[2151]: I0213 20:39:32.936832 2151 server.go:455] "Adding debug handlers to kubelet server" Feb 13 20:39:32.938654 kubelet[2151]: I0213 20:39:32.938568 2151 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Feb 13 20:39:32.939026 kubelet[2151]: I0213 20:39:32.939011 2151 volume_manager.go:291] "Starting Kubelet Volume Manager" Feb 13 20:39:32.940089 kubelet[2151]: I0213 20:39:32.940064 2151 desired_state_of_world_populator.go:149] "Desired state populator starts to run" Feb 13 20:39:32.943264 kubelet[2151]: I0213 20:39:32.943003 2151 reconciler.go:26] "Reconciler: start to sync state" Feb 13 20:39:32.943329 kubelet[2151]: W0213 20:39:32.943279 2151 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.7:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.7:6443: connect: connection refused Feb 13 20:39:32.943329 kubelet[2151]: E0213 20:39:32.943155 2151 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.7:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.7:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.1823df1775024bca default 0 0001-01-01 00:00:00 +0000 UTC <nil> <nil> map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-02-13 20:39:32.935470026 +0000 UTC m=+0.823673161,LastTimestamp:2025-02-13 20:39:32.935470026 +0000 UTC m=+0.823673161,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Feb 13 20:39:32.943329 kubelet[2151]: E0213 20:39:32.943317 2151 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://10.0.0.7:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.7:6443: connect: connection refused Feb 13 20:39:32.943497 kubelet[2151]: E0213 20:39:32.943335 2151 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" Feb 13 20:39:32.944346 kubelet[2151]: E0213 20:39:32.944304 2151 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.7:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.7:6443: connect: connection refused" interval="200ms" Feb 13 20:39:32.944851 kubelet[2151]: I0213 20:39:32.944797 2151 factory.go:221] Registration of the systemd container factory successfully Feb 13 20:39:32.945169 kubelet[2151]: I0213 20:39:32.945106 2151 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix 
/var/run/crio/crio.sock: connect: no such file or directory Feb 13 20:39:32.946286 kubelet[2151]: I0213 20:39:32.946248 2151 factory.go:221] Registration of the containerd container factory successfully Feb 13 20:39:32.946651 kubelet[2151]: E0213 20:39:32.946627 2151 kubelet.go:1467] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Feb 13 20:39:32.952142 kubelet[2151]: I0213 20:39:32.952106 2151 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Feb 13 20:39:32.953147 kubelet[2151]: I0213 20:39:32.953105 2151 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Feb 13 20:39:32.953278 kubelet[2151]: I0213 20:39:32.953259 2151 status_manager.go:217] "Starting to sync pod status with apiserver" Feb 13 20:39:32.953305 kubelet[2151]: I0213 20:39:32.953281 2151 kubelet.go:2337] "Starting kubelet main sync loop" Feb 13 20:39:32.953335 kubelet[2151]: E0213 20:39:32.953321 2151 kubelet.go:2361] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Feb 13 20:39:32.957856 kubelet[2151]: W0213 20:39:32.957789 2151 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.7:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.7:6443: connect: connection refused Feb 13 20:39:32.957856 kubelet[2151]: E0213 20:39:32.957831 2151 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://10.0.0.7:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.7:6443: connect: connection refused Feb 13 20:39:32.958252 kubelet[2151]: I0213 20:39:32.958230 2151 cpu_manager.go:214] "Starting CPU manager" policy="none" Feb 13 20:39:32.958252 kubelet[2151]: I0213 20:39:32.958249 2151 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Feb 13 20:39:32.958305 kubelet[2151]: I0213 20:39:32.958272 2151 state_mem.go:36] "Initialized new in-memory state store" Feb 13 20:39:33.019693 kubelet[2151]: I0213 20:39:33.019639 2151 policy_none.go:49] "None policy: Start" Feb 13 20:39:33.020448 kubelet[2151]: I0213 20:39:33.020406 2151 memory_manager.go:170] "Starting memorymanager" policy="None" Feb 13 20:39:33.020448 kubelet[2151]: I0213 20:39:33.020435 2151 state_mem.go:35] "Initializing new in-memory state store" Feb 13 20:39:33.027841 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Feb 13 20:39:33.040309 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Feb 13 20:39:33.043273 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. 
Feb 13 20:39:33.044320 kubelet[2151]: I0213 20:39:33.044287 2151 kubelet_node_status.go:73] "Attempting to register node" node="localhost" Feb 13 20:39:33.044906 kubelet[2151]: E0213 20:39:33.044874 2151 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.0.0.7:6443/api/v1/nodes\": dial tcp 10.0.0.7:6443: connect: connection refused" node="localhost" Feb 13 20:39:33.054053 kubelet[2151]: E0213 20:39:33.054026 2151 kubelet.go:2361] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Feb 13 20:39:33.055773 kubelet[2151]: I0213 20:39:33.055733 2151 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Feb 13 20:39:33.056192 kubelet[2151]: I0213 20:39:33.056144 2151 container_log_manager.go:186] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Feb 13 20:39:33.056568 kubelet[2151]: I0213 20:39:33.056286 2151 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Feb 13 20:39:33.060481 kubelet[2151]: E0213 20:39:33.060446 2151 eviction_manager.go:282] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Feb 13 20:39:33.145721 kubelet[2151]: E0213 20:39:33.145671 2151 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.7:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.7:6443: connect: connection refused" interval="400ms" Feb 13 20:39:33.246228 kubelet[2151]: I0213 20:39:33.246186 2151 kubelet_node_status.go:73] "Attempting to register node" node="localhost" Feb 13 20:39:33.246893 kubelet[2151]: E0213 20:39:33.246861 2151 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.0.0.7:6443/api/v1/nodes\": dial tcp 10.0.0.7:6443: connect: connection refused" node="localhost" Feb 13 20:39:33.255214 kubelet[2151]: I0213 20:39:33.254984 2151 topology_manager.go:215] "Topology Admit Handler" podUID="8d610d6c43052dbc8df47eb68906a982" podNamespace="kube-system" podName="kube-scheduler-localhost" Feb 13 20:39:33.256106 kubelet[2151]: I0213 20:39:33.256004 2151 topology_manager.go:215] "Topology Admit Handler" podUID="2e7e4b5f5930bf1bb943a9e09aa7d492" podNamespace="kube-system" podName="kube-apiserver-localhost" Feb 13 20:39:33.257211 kubelet[2151]: I0213 20:39:33.257139 2151 topology_manager.go:215] "Topology Admit Handler" podUID="dd3721fb1a67092819e35b40473f4063" podNamespace="kube-system" podName="kube-controller-manager-localhost" Feb 13 20:39:33.263576 systemd[1]: Created slice kubepods-burstable-pod8d610d6c43052dbc8df47eb68906a982.slice - libcontainer container kubepods-burstable-pod8d610d6c43052dbc8df47eb68906a982.slice. Feb 13 20:39:33.290465 systemd[1]: Created slice kubepods-burstable-pod2e7e4b5f5930bf1bb943a9e09aa7d492.slice - libcontainer container kubepods-burstable-pod2e7e4b5f5930bf1bb943a9e09aa7d492.slice. Feb 13 20:39:33.294853 systemd[1]: Created slice kubepods-burstable-poddd3721fb1a67092819e35b40473f4063.slice - libcontainer container kubepods-burstable-poddd3721fb1a67092819e35b40473f4063.slice. 
Feb 13 20:39:33.344619 kubelet[2151]: I0213 20:39:33.344582 2151 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/dd3721fb1a67092819e35b40473f4063-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"dd3721fb1a67092819e35b40473f4063\") " pod="kube-system/kube-controller-manager-localhost" Feb 13 20:39:33.344619 kubelet[2151]: I0213 20:39:33.344625 2151 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/dd3721fb1a67092819e35b40473f4063-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"dd3721fb1a67092819e35b40473f4063\") " pod="kube-system/kube-controller-manager-localhost" Feb 13 20:39:33.344619 kubelet[2151]: I0213 20:39:33.344654 2151 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/2e7e4b5f5930bf1bb943a9e09aa7d492-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"2e7e4b5f5930bf1bb943a9e09aa7d492\") " pod="kube-system/kube-apiserver-localhost" Feb 13 20:39:33.344619 kubelet[2151]: I0213 20:39:33.344672 2151 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/dd3721fb1a67092819e35b40473f4063-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"dd3721fb1a67092819e35b40473f4063\") " pod="kube-system/kube-controller-manager-localhost" Feb 13 20:39:33.344619 kubelet[2151]: I0213 20:39:33.344697 2151 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/dd3721fb1a67092819e35b40473f4063-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"dd3721fb1a67092819e35b40473f4063\") " pod="kube-system/kube-controller-manager-localhost" Feb 13 20:39:33.344981 kubelet[2151]: I0213 20:39:33.344715 2151 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/dd3721fb1a67092819e35b40473f4063-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"dd3721fb1a67092819e35b40473f4063\") " pod="kube-system/kube-controller-manager-localhost" Feb 13 20:39:33.344981 kubelet[2151]: I0213 20:39:33.344733 2151 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/8d610d6c43052dbc8df47eb68906a982-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"8d610d6c43052dbc8df47eb68906a982\") " pod="kube-system/kube-scheduler-localhost" Feb 13 20:39:33.344981 kubelet[2151]: I0213 20:39:33.344748 2151 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/2e7e4b5f5930bf1bb943a9e09aa7d492-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"2e7e4b5f5930bf1bb943a9e09aa7d492\") " pod="kube-system/kube-apiserver-localhost" Feb 13 20:39:33.344981 kubelet[2151]: I0213 20:39:33.344764 2151 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/2e7e4b5f5930bf1bb943a9e09aa7d492-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"2e7e4b5f5930bf1bb943a9e09aa7d492\") " 
pod="kube-system/kube-apiserver-localhost" Feb 13 20:39:33.546821 kubelet[2151]: E0213 20:39:33.546687 2151 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.7:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.7:6443: connect: connection refused" interval="800ms" Feb 13 20:39:33.588561 kubelet[2151]: E0213 20:39:33.588472 2151 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 20:39:33.589159 containerd[1447]: time="2025-02-13T20:39:33.589122306Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:8d610d6c43052dbc8df47eb68906a982,Namespace:kube-system,Attempt:0,}" Feb 13 20:39:33.593394 kubelet[2151]: E0213 20:39:33.593356 2151 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 20:39:33.593701 containerd[1447]: time="2025-02-13T20:39:33.593674586Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:2e7e4b5f5930bf1bb943a9e09aa7d492,Namespace:kube-system,Attempt:0,}" Feb 13 20:39:33.597443 kubelet[2151]: E0213 20:39:33.597415 2151 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 20:39:33.601310 containerd[1447]: time="2025-02-13T20:39:33.601283946Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:dd3721fb1a67092819e35b40473f4063,Namespace:kube-system,Attempt:0,}" Feb 13 20:39:33.648472 kubelet[2151]: I0213 20:39:33.648431 2151 kubelet_node_status.go:73] "Attempting to register node" node="localhost" Feb 13 20:39:33.648746 kubelet[2151]: E0213 20:39:33.648723 2151 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.0.0.7:6443/api/v1/nodes\": dial tcp 10.0.0.7:6443: connect: connection refused" node="localhost" Feb 13 20:39:33.898358 kubelet[2151]: W0213 20:39:33.898275 2151 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.7:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.0.0.7:6443: connect: connection refused Feb 13 20:39:33.898358 kubelet[2151]: E0213 20:39:33.898347 2151 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.0.0.7:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.0.0.7:6443: connect: connection refused Feb 13 20:39:33.914087 kubelet[2151]: W0213 20:39:33.914035 2151 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.7:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.7:6443: connect: connection refused Feb 13 20:39:33.914087 kubelet[2151]: E0213 20:39:33.914086 2151 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://10.0.0.7:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.7:6443: connect: connection refused Feb 13 20:39:34.065675 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2248901517.mount: Deactivated successfully. 
Feb 13 20:39:34.072438 containerd[1447]: time="2025-02-13T20:39:34.072370066Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Feb 13 20:39:34.074101 containerd[1447]: time="2025-02-13T20:39:34.074054226Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Feb 13 20:39:34.074741 containerd[1447]: time="2025-02-13T20:39:34.074707626Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Feb 13 20:39:34.075714 containerd[1447]: time="2025-02-13T20:39:34.075678866Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Feb 13 20:39:34.076200 containerd[1447]: time="2025-02-13T20:39:34.076167226Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Feb 13 20:39:34.077060 containerd[1447]: time="2025-02-13T20:39:34.077008506Z" level=info msg="ImageCreate event name:\"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Feb 13 20:39:34.077584 containerd[1447]: time="2025-02-13T20:39:34.077550586Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=269175" Feb 13 20:39:34.079468 containerd[1447]: time="2025-02-13T20:39:34.079437546Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Feb 13 20:39:34.082519 containerd[1447]: time="2025-02-13T20:39:34.082116986Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 488.38444ms" Feb 13 20:39:34.083668 containerd[1447]: time="2025-02-13T20:39:34.083636946Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 482.1552ms" Feb 13 20:39:34.086074 containerd[1447]: time="2025-02-13T20:39:34.086043066Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 496.83956ms" Feb 13 20:39:34.263311 containerd[1447]: time="2025-02-13T20:39:34.262493586Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 20:39:34.263311 containerd[1447]: time="2025-02-13T20:39:34.262553786Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 20:39:34.263311 containerd[1447]: time="2025-02-13T20:39:34.262568786Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 20:39:34.263311 containerd[1447]: time="2025-02-13T20:39:34.262646346Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 20:39:34.263311 containerd[1447]: time="2025-02-13T20:39:34.262720106Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 20:39:34.263311 containerd[1447]: time="2025-02-13T20:39:34.262767426Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 20:39:34.263311 containerd[1447]: time="2025-02-13T20:39:34.262784666Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 20:39:34.263311 containerd[1447]: time="2025-02-13T20:39:34.262850426Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 20:39:34.263649 containerd[1447]: time="2025-02-13T20:39:34.263383626Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 20:39:34.263649 containerd[1447]: time="2025-02-13T20:39:34.263439466Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 20:39:34.263649 containerd[1447]: time="2025-02-13T20:39:34.263450786Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 20:39:34.263649 containerd[1447]: time="2025-02-13T20:39:34.263524666Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 20:39:34.279125 systemd[1]: Started cri-containerd-8a200b0abf17e5ba889f410a4cc405cdfb03c542119c97dbdc03544de71eb394.scope - libcontainer container 8a200b0abf17e5ba889f410a4cc405cdfb03c542119c97dbdc03544de71eb394. Feb 13 20:39:34.283429 systemd[1]: Started cri-containerd-2824c7b8d4e15885a3cc1082912ad714777cb77cdd2a7a107c4c930bad1f854a.scope - libcontainer container 2824c7b8d4e15885a3cc1082912ad714777cb77cdd2a7a107c4c930bad1f854a. Feb 13 20:39:34.284446 systemd[1]: Started cri-containerd-3662a1faba58767594e5c72912c1c98166b41c0fb8cb615f1d3815a6a4b5c82b.scope - libcontainer container 3662a1faba58767594e5c72912c1c98166b41c0fb8cb615f1d3815a6a4b5c82b. 
Feb 13 20:39:34.309103 containerd[1447]: time="2025-02-13T20:39:34.308803306Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:dd3721fb1a67092819e35b40473f4063,Namespace:kube-system,Attempt:0,} returns sandbox id \"8a200b0abf17e5ba889f410a4cc405cdfb03c542119c97dbdc03544de71eb394\"" Feb 13 20:39:34.309755 kubelet[2151]: E0213 20:39:34.309731 2151 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 20:39:34.313379 containerd[1447]: time="2025-02-13T20:39:34.313344786Z" level=info msg="CreateContainer within sandbox \"8a200b0abf17e5ba889f410a4cc405cdfb03c542119c97dbdc03544de71eb394\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Feb 13 20:39:34.320546 containerd[1447]: time="2025-02-13T20:39:34.320500386Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:2e7e4b5f5930bf1bb943a9e09aa7d492,Namespace:kube-system,Attempt:0,} returns sandbox id \"3662a1faba58767594e5c72912c1c98166b41c0fb8cb615f1d3815a6a4b5c82b\"" Feb 13 20:39:34.321121 kubelet[2151]: E0213 20:39:34.321103 2151 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 20:39:34.321469 containerd[1447]: time="2025-02-13T20:39:34.321398066Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:8d610d6c43052dbc8df47eb68906a982,Namespace:kube-system,Attempt:0,} returns sandbox id \"2824c7b8d4e15885a3cc1082912ad714777cb77cdd2a7a107c4c930bad1f854a\"" Feb 13 20:39:34.321979 kubelet[2151]: E0213 20:39:34.321918 2151 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 20:39:34.323781 containerd[1447]: time="2025-02-13T20:39:34.323657906Z" level=info msg="CreateContainer within sandbox \"2824c7b8d4e15885a3cc1082912ad714777cb77cdd2a7a107c4c930bad1f854a\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Feb 13 20:39:34.323781 containerd[1447]: time="2025-02-13T20:39:34.323686906Z" level=info msg="CreateContainer within sandbox \"3662a1faba58767594e5c72912c1c98166b41c0fb8cb615f1d3815a6a4b5c82b\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Feb 13 20:39:34.325475 kubelet[2151]: W0213 20:39:34.325419 2151 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.7:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.7:6443: connect: connection refused Feb 13 20:39:34.325475 kubelet[2151]: E0213 20:39:34.325480 2151 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://10.0.0.7:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.7:6443: connect: connection refused Feb 13 20:39:34.332629 containerd[1447]: time="2025-02-13T20:39:34.332591986Z" level=info msg="CreateContainer within sandbox \"8a200b0abf17e5ba889f410a4cc405cdfb03c542119c97dbdc03544de71eb394\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"2e80db774c639824ec7bce7b1be4663878c63c79f1b1315fee0f317ec9143c5a\"" Feb 13 20:39:34.333545 containerd[1447]: time="2025-02-13T20:39:34.333515866Z" 
level=info msg="StartContainer for \"2e80db774c639824ec7bce7b1be4663878c63c79f1b1315fee0f317ec9143c5a\"" Feb 13 20:39:34.341001 containerd[1447]: time="2025-02-13T20:39:34.340967146Z" level=info msg="CreateContainer within sandbox \"3662a1faba58767594e5c72912c1c98166b41c0fb8cb615f1d3815a6a4b5c82b\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"6f75505265dffdb33c47f2608011629f8be8e4fb1258548d7ed32ba860295309\"" Feb 13 20:39:34.341600 containerd[1447]: time="2025-02-13T20:39:34.341576266Z" level=info msg="StartContainer for \"6f75505265dffdb33c47f2608011629f8be8e4fb1258548d7ed32ba860295309\"" Feb 13 20:39:34.341951 containerd[1447]: time="2025-02-13T20:39:34.341901706Z" level=info msg="CreateContainer within sandbox \"2824c7b8d4e15885a3cc1082912ad714777cb77cdd2a7a107c4c930bad1f854a\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"ef9ebd426516ad21ef77cbd440a3cfad3134328fbfebc2c6c7f6db4dece0239d\"" Feb 13 20:39:34.342315 containerd[1447]: time="2025-02-13T20:39:34.342292106Z" level=info msg="StartContainer for \"ef9ebd426516ad21ef77cbd440a3cfad3134328fbfebc2c6c7f6db4dece0239d\"" Feb 13 20:39:34.347878 kubelet[2151]: E0213 20:39:34.347825 2151 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.7:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.7:6443: connect: connection refused" interval="1.6s" Feb 13 20:39:34.359105 systemd[1]: Started cri-containerd-2e80db774c639824ec7bce7b1be4663878c63c79f1b1315fee0f317ec9143c5a.scope - libcontainer container 2e80db774c639824ec7bce7b1be4663878c63c79f1b1315fee0f317ec9143c5a. Feb 13 20:39:34.371079 systemd[1]: Started cri-containerd-6f75505265dffdb33c47f2608011629f8be8e4fb1258548d7ed32ba860295309.scope - libcontainer container 6f75505265dffdb33c47f2608011629f8be8e4fb1258548d7ed32ba860295309. Feb 13 20:39:34.372413 systemd[1]: Started cri-containerd-ef9ebd426516ad21ef77cbd440a3cfad3134328fbfebc2c6c7f6db4dece0239d.scope - libcontainer container ef9ebd426516ad21ef77cbd440a3cfad3134328fbfebc2c6c7f6db4dece0239d. 
Feb 13 20:39:34.410916 containerd[1447]: time="2025-02-13T20:39:34.410391266Z" level=info msg="StartContainer for \"2e80db774c639824ec7bce7b1be4663878c63c79f1b1315fee0f317ec9143c5a\" returns successfully" Feb 13 20:39:34.427414 containerd[1447]: time="2025-02-13T20:39:34.427094306Z" level=info msg="StartContainer for \"ef9ebd426516ad21ef77cbd440a3cfad3134328fbfebc2c6c7f6db4dece0239d\" returns successfully" Feb 13 20:39:34.427414 containerd[1447]: time="2025-02-13T20:39:34.427143426Z" level=info msg="StartContainer for \"6f75505265dffdb33c47f2608011629f8be8e4fb1258548d7ed32ba860295309\" returns successfully" Feb 13 20:39:34.450954 kubelet[2151]: I0213 20:39:34.450909 2151 kubelet_node_status.go:73] "Attempting to register node" node="localhost" Feb 13 20:39:34.451371 kubelet[2151]: E0213 20:39:34.451266 2151 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.0.0.7:6443/api/v1/nodes\": dial tcp 10.0.0.7:6443: connect: connection refused" node="localhost" Feb 13 20:39:34.483815 kubelet[2151]: W0213 20:39:34.483745 2151 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.7:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.7:6443: connect: connection refused Feb 13 20:39:34.483815 kubelet[2151]: E0213 20:39:34.483816 2151 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://10.0.0.7:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.7:6443: connect: connection refused Feb 13 20:39:34.964559 kubelet[2151]: E0213 20:39:34.964455 2151 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 20:39:34.967768 kubelet[2151]: E0213 20:39:34.967633 2151 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 20:39:34.971314 kubelet[2151]: E0213 20:39:34.971174 2151 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 20:39:35.971825 kubelet[2151]: E0213 20:39:35.971774 2151 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 20:39:35.973954 kubelet[2151]: E0213 20:39:35.973138 2151 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 20:39:36.045334 kubelet[2151]: E0213 20:39:36.045282 2151 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"localhost\" not found" node="localhost" Feb 13 20:39:36.053267 kubelet[2151]: I0213 20:39:36.053062 2151 kubelet_node_status.go:73] "Attempting to register node" node="localhost" Feb 13 20:39:36.161949 kubelet[2151]: E0213 20:39:36.160401 2151 event.go:359] "Server rejected event (will not retry!)" err="namespaces \"default\" not found" event="&Event{ObjectMeta:{localhost.1823df1775024bca default 0 0001-01-01 00:00:00 +0000 UTC <nil> <nil> map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-02-13 20:39:32.935470026 +0000 UTC m=+0.823673161,LastTimestamp:2025-02-13 20:39:32.935470026 +0000 UTC m=+0.823673161,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Feb 13 20:39:36.166314 kubelet[2151]: I0213 20:39:36.166144 2151 kubelet_node_status.go:76] "Successfully registered node" node="localhost" Feb 13 20:39:36.172773 kubelet[2151]: E0213 20:39:36.172648 2151 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" Feb 13 20:39:36.238469 kubelet[2151]: E0213 20:39:36.238189 2151 event.go:359] "Server rejected event (will not retry!)" err="namespaces \"default\" not found" event="&Event{ObjectMeta:{localhost.1823df1775ac6592 default 0 0001-01-01 00:00:00 +0000 UTC <nil> <nil> map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:InvalidDiskCapacity,Message:invalid capacity 0 on image filesystem,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-02-13 20:39:32.946617746 +0000 UTC m=+0.834820881,LastTimestamp:2025-02-13 20:39:32.946617746 +0000 UTC m=+0.834820881,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Feb 13 20:39:36.273714 kubelet[2151]: E0213 20:39:36.273658 2151 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" Feb 13 20:39:36.374692 kubelet[2151]: E0213 20:39:36.374642 2151 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" Feb 13 20:39:36.475481 kubelet[2151]: E0213 20:39:36.475423 2151 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" Feb 13 20:39:36.576141 kubelet[2151]: E0213 20:39:36.576034 2151 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" Feb 13 20:39:36.676581 kubelet[2151]: E0213 20:39:36.676520 2151 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" Feb 13 20:39:36.777143 kubelet[2151]: E0213 20:39:36.777100 2151 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" Feb 13 20:39:36.877995 kubelet[2151]: E0213 20:39:36.877943 2151 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" Feb 13 20:39:36.979050 kubelet[2151]: E0213 20:39:36.978945 2151 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" Feb 13 20:39:37.079710 kubelet[2151]: E0213 20:39:37.079647 2151 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" Feb 13 20:39:37.180672 kubelet[2151]: E0213 20:39:37.180568 2151 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" Feb 13 20:39:37.281233 kubelet[2151]: E0213 20:39:37.281186 2151 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not 
found" Feb 13 20:39:37.935650 kubelet[2151]: I0213 20:39:37.935603 2151 apiserver.go:52] "Watching apiserver" Feb 13 20:39:37.941116 kubelet[2151]: I0213 20:39:37.941086 2151 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world" Feb 13 20:39:38.055457 systemd[1]: Reloading requested from client PID 2427 ('systemctl') (unit session-5.scope)... Feb 13 20:39:38.055471 systemd[1]: Reloading... Feb 13 20:39:38.120972 zram_generator::config[2469]: No configuration found. Feb 13 20:39:38.200549 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Feb 13 20:39:38.264677 systemd[1]: Reloading finished in 208 ms. Feb 13 20:39:38.308369 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Feb 13 20:39:38.318757 systemd[1]: kubelet.service: Deactivated successfully. Feb 13 20:39:38.319020 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Feb 13 20:39:38.319074 systemd[1]: kubelet.service: Consumed 1.210s CPU time, 116.5M memory peak, 0B memory swap peak. Feb 13 20:39:38.326236 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Feb 13 20:39:38.415978 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Feb 13 20:39:38.419868 (kubelet)[2508]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Feb 13 20:39:38.464554 kubelet[2508]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Feb 13 20:39:38.464554 kubelet[2508]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Feb 13 20:39:38.464554 kubelet[2508]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Feb 13 20:39:38.464554 kubelet[2508]: I0213 20:39:38.463062 2508 server.go:205] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Feb 13 20:39:38.469810 kubelet[2508]: I0213 20:39:38.469778 2508 server.go:484] "Kubelet version" kubeletVersion="v1.30.1" Feb 13 20:39:38.469810 kubelet[2508]: I0213 20:39:38.469805 2508 server.go:486] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Feb 13 20:39:38.470014 kubelet[2508]: I0213 20:39:38.469995 2508 server.go:927] "Client rotation is on, will bootstrap in background" Feb 13 20:39:38.471317 kubelet[2508]: I0213 20:39:38.471299 2508 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Feb 13 20:39:38.472428 kubelet[2508]: I0213 20:39:38.472408 2508 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Feb 13 20:39:38.478446 kubelet[2508]: I0213 20:39:38.478425 2508 server.go:742] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Feb 13 20:39:38.478621 kubelet[2508]: I0213 20:39:38.478600 2508 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Feb 13 20:39:38.478772 kubelet[2508]: I0213 20:39:38.478623 2508 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null} Feb 13 20:39:38.478842 kubelet[2508]: I0213 20:39:38.478778 2508 topology_manager.go:138] "Creating topology manager with none policy" Feb 13 20:39:38.478842 kubelet[2508]: I0213 20:39:38.478788 2508 container_manager_linux.go:301] "Creating device plugin manager" Feb 13 20:39:38.478842 kubelet[2508]: I0213 20:39:38.478817 2508 state_mem.go:36] "Initialized new in-memory state store" Feb 13 20:39:38.478938 kubelet[2508]: I0213 20:39:38.478924 2508 kubelet.go:400] "Attempting to sync node with API server" Feb 13 20:39:38.478971 kubelet[2508]: I0213 20:39:38.478955 2508 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests" Feb 13 20:39:38.478991 kubelet[2508]: I0213 20:39:38.478981 2508 kubelet.go:312] "Adding apiserver pod source" Feb 13 20:39:38.479015 kubelet[2508]: I0213 20:39:38.478997 2508 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Feb 13 20:39:38.479767 kubelet[2508]: I0213 20:39:38.479746 2508 kuberuntime_manager.go:261] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Feb 13 20:39:38.482938 kubelet[2508]: I0213 20:39:38.479909 2508 kubelet.go:815] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Feb 13 20:39:38.482938 kubelet[2508]: I0213 20:39:38.480313 2508 server.go:1264] "Started kubelet" Feb 13 20:39:38.482938 kubelet[2508]: I0213 20:39:38.480661 2508 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Feb 13 20:39:38.482938 kubelet[2508]: I0213 20:39:38.481023 2508 server.go:227] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Feb 13 
20:39:38.482938 kubelet[2508]: I0213 20:39:38.481060 2508 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Feb 13 20:39:38.482938 kubelet[2508]: I0213 20:39:38.481278 2508 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Feb 13 20:39:38.482938 kubelet[2508]: I0213 20:39:38.481922 2508 server.go:455] "Adding debug handlers to kubelet server" Feb 13 20:39:38.483863 kubelet[2508]: E0213 20:39:38.483844 2508 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" Feb 13 20:39:38.483909 kubelet[2508]: I0213 20:39:38.483878 2508 volume_manager.go:291] "Starting Kubelet Volume Manager" Feb 13 20:39:38.483993 kubelet[2508]: I0213 20:39:38.483977 2508 desired_state_of_world_populator.go:149] "Desired state populator starts to run" Feb 13 20:39:38.484128 kubelet[2508]: I0213 20:39:38.484114 2508 reconciler.go:26] "Reconciler: start to sync state" Feb 13 20:39:38.487187 kubelet[2508]: E0213 20:39:38.487158 2508 kubelet.go:1467] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Feb 13 20:39:38.487672 kubelet[2508]: I0213 20:39:38.487642 2508 factory.go:221] Registration of the systemd container factory successfully Feb 13 20:39:38.487941 kubelet[2508]: I0213 20:39:38.487743 2508 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Feb 13 20:39:38.489555 kubelet[2508]: I0213 20:39:38.489533 2508 factory.go:221] Registration of the containerd container factory successfully Feb 13 20:39:38.502127 kubelet[2508]: I0213 20:39:38.501802 2508 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Feb 13 20:39:38.511771 kubelet[2508]: I0213 20:39:38.511457 2508 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Feb 13 20:39:38.511771 kubelet[2508]: I0213 20:39:38.511502 2508 status_manager.go:217] "Starting to sync pod status with apiserver" Feb 13 20:39:38.511771 kubelet[2508]: I0213 20:39:38.511518 2508 kubelet.go:2337] "Starting kubelet main sync loop" Feb 13 20:39:38.511771 kubelet[2508]: E0213 20:39:38.511560 2508 kubelet.go:2361] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Feb 13 20:39:38.538632 kubelet[2508]: I0213 20:39:38.538591 2508 cpu_manager.go:214] "Starting CPU manager" policy="none" Feb 13 20:39:38.538632 kubelet[2508]: I0213 20:39:38.538610 2508 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Feb 13 20:39:38.538632 kubelet[2508]: I0213 20:39:38.538629 2508 state_mem.go:36] "Initialized new in-memory state store" Feb 13 20:39:38.538785 kubelet[2508]: I0213 20:39:38.538772 2508 state_mem.go:88] "Updated default CPUSet" cpuSet="" Feb 13 20:39:38.538809 kubelet[2508]: I0213 20:39:38.538783 2508 state_mem.go:96] "Updated CPUSet assignments" assignments={} Feb 13 20:39:38.538809 kubelet[2508]: I0213 20:39:38.538801 2508 policy_none.go:49] "None policy: Start" Feb 13 20:39:38.539354 kubelet[2508]: I0213 20:39:38.539337 2508 memory_manager.go:170] "Starting memorymanager" policy="None" Feb 13 20:39:38.539415 kubelet[2508]: I0213 20:39:38.539360 2508 state_mem.go:35] "Initializing new in-memory state store" Feb 13 20:39:38.539491 kubelet[2508]: I0213 20:39:38.539476 2508 state_mem.go:75] "Updated machine memory state" Feb 13 20:39:38.545096 kubelet[2508]: I0213 20:39:38.545064 2508 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Feb 13 20:39:38.545541 kubelet[2508]: I0213 20:39:38.545259 2508 container_log_manager.go:186] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Feb 13 20:39:38.545541 kubelet[2508]: I0213 20:39:38.545362 2508 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Feb 13 20:39:38.588123 kubelet[2508]: I0213 20:39:38.588086 2508 kubelet_node_status.go:73] "Attempting to register node" node="localhost" Feb 13 20:39:38.594245 kubelet[2508]: I0213 20:39:38.594167 2508 kubelet_node_status.go:112] "Node was previously registered" node="localhost" Feb 13 20:39:38.594952 kubelet[2508]: I0213 20:39:38.594790 2508 kubelet_node_status.go:76] "Successfully registered node" node="localhost" Feb 13 20:39:38.612156 kubelet[2508]: I0213 20:39:38.612120 2508 topology_manager.go:215] "Topology Admit Handler" podUID="2e7e4b5f5930bf1bb943a9e09aa7d492" podNamespace="kube-system" podName="kube-apiserver-localhost" Feb 13 20:39:38.612522 kubelet[2508]: I0213 20:39:38.612384 2508 topology_manager.go:215] "Topology Admit Handler" podUID="dd3721fb1a67092819e35b40473f4063" podNamespace="kube-system" podName="kube-controller-manager-localhost" Feb 13 20:39:38.612522 kubelet[2508]: I0213 20:39:38.612442 2508 topology_manager.go:215] "Topology Admit Handler" podUID="8d610d6c43052dbc8df47eb68906a982" podNamespace="kube-system" podName="kube-scheduler-localhost" Feb 13 20:39:38.786067 kubelet[2508]: I0213 20:39:38.785657 2508 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/2e7e4b5f5930bf1bb943a9e09aa7d492-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"2e7e4b5f5930bf1bb943a9e09aa7d492\") " pod="kube-system/kube-apiserver-localhost" Feb 13 20:39:38.786067 
kubelet[2508]: I0213 20:39:38.785707 2508 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/2e7e4b5f5930bf1bb943a9e09aa7d492-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"2e7e4b5f5930bf1bb943a9e09aa7d492\") " pod="kube-system/kube-apiserver-localhost" Feb 13 20:39:38.786067 kubelet[2508]: I0213 20:39:38.785731 2508 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/dd3721fb1a67092819e35b40473f4063-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"dd3721fb1a67092819e35b40473f4063\") " pod="kube-system/kube-controller-manager-localhost" Feb 13 20:39:38.786067 kubelet[2508]: I0213 20:39:38.785749 2508 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/dd3721fb1a67092819e35b40473f4063-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"dd3721fb1a67092819e35b40473f4063\") " pod="kube-system/kube-controller-manager-localhost" Feb 13 20:39:38.786067 kubelet[2508]: I0213 20:39:38.785767 2508 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/dd3721fb1a67092819e35b40473f4063-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"dd3721fb1a67092819e35b40473f4063\") " pod="kube-system/kube-controller-manager-localhost" Feb 13 20:39:38.786320 kubelet[2508]: I0213 20:39:38.785783 2508 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/2e7e4b5f5930bf1bb943a9e09aa7d492-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"2e7e4b5f5930bf1bb943a9e09aa7d492\") " pod="kube-system/kube-apiserver-localhost" Feb 13 20:39:38.786320 kubelet[2508]: I0213 20:39:38.785798 2508 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/dd3721fb1a67092819e35b40473f4063-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"dd3721fb1a67092819e35b40473f4063\") " pod="kube-system/kube-controller-manager-localhost" Feb 13 20:39:38.786320 kubelet[2508]: I0213 20:39:38.785816 2508 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/dd3721fb1a67092819e35b40473f4063-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"dd3721fb1a67092819e35b40473f4063\") " pod="kube-system/kube-controller-manager-localhost" Feb 13 20:39:38.786320 kubelet[2508]: I0213 20:39:38.785835 2508 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/8d610d6c43052dbc8df47eb68906a982-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"8d610d6c43052dbc8df47eb68906a982\") " pod="kube-system/kube-scheduler-localhost" Feb 13 20:39:38.949626 kubelet[2508]: E0213 20:39:38.949522 2508 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 20:39:38.950039 kubelet[2508]: E0213 20:39:38.949863 2508 dns.go:153] "Nameserver limits exceeded" err="Nameserver 
limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 20:39:38.950039 kubelet[2508]: E0213 20:39:38.949866 2508 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 20:39:39.480440 kubelet[2508]: I0213 20:39:39.480129 2508 apiserver.go:52] "Watching apiserver" Feb 13 20:39:39.484962 kubelet[2508]: I0213 20:39:39.484584 2508 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world" Feb 13 20:39:39.526980 kubelet[2508]: E0213 20:39:39.526697 2508 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 20:39:39.527417 kubelet[2508]: E0213 20:39:39.527352 2508 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 20:39:39.534866 kubelet[2508]: E0213 20:39:39.534825 2508 kubelet.go:1928] "Failed creating a mirror pod for" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost" Feb 13 20:39:39.535951 kubelet[2508]: E0213 20:39:39.535313 2508 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 20:39:39.553002 kubelet[2508]: I0213 20:39:39.552816 2508 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=1.5527852119999999 podStartE2EDuration="1.552785212s" podCreationTimestamp="2025-02-13 20:39:38 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-02-13 20:39:39.546013399 +0000 UTC m=+1.122971710" watchObservedRunningTime="2025-02-13 20:39:39.552785212 +0000 UTC m=+1.129743523" Feb 13 20:39:39.563865 kubelet[2508]: I0213 20:39:39.563813 2508 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=1.563800964 podStartE2EDuration="1.563800964s" podCreationTimestamp="2025-02-13 20:39:38 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-02-13 20:39:39.553303939 +0000 UTC m=+1.130262331" watchObservedRunningTime="2025-02-13 20:39:39.563800964 +0000 UTC m=+1.140759275" Feb 13 20:39:39.564160 kubelet[2508]: I0213 20:39:39.564071 2508 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=1.564063688 podStartE2EDuration="1.564063688s" podCreationTimestamp="2025-02-13 20:39:38 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-02-13 20:39:39.563375518 +0000 UTC m=+1.140333829" watchObservedRunningTime="2025-02-13 20:39:39.564063688 +0000 UTC m=+1.141021999" Feb 13 20:39:39.890754 sudo[1578]: pam_unix(sudo:session): session closed for user root Feb 13 20:39:39.892463 sshd[1575]: pam_unix(sshd:session): session closed for user core Feb 13 20:39:39.896444 systemd[1]: sshd@4-10.0.0.7:22-10.0.0.1:38702.service: Deactivated successfully. 
Feb 13 20:39:39.897922 systemd[1]: session-5.scope: Deactivated successfully. Feb 13 20:39:39.898173 systemd[1]: session-5.scope: Consumed 6.583s CPU time, 193.0M memory peak, 0B memory swap peak. Feb 13 20:39:39.899189 systemd-logind[1420]: Session 5 logged out. Waiting for processes to exit. Feb 13 20:39:39.900547 systemd-logind[1420]: Removed session 5. Feb 13 20:39:40.527697 kubelet[2508]: E0213 20:39:40.527360 2508 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 20:39:41.826520 kubelet[2508]: E0213 20:39:41.826411 2508 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 20:39:44.876675 kubelet[2508]: E0213 20:39:44.876636 2508 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 20:39:45.534996 kubelet[2508]: E0213 20:39:45.534951 2508 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 20:39:46.750209 kubelet[2508]: E0213 20:39:46.750112 2508 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 20:39:47.538603 kubelet[2508]: E0213 20:39:47.538563 2508 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 20:39:48.677359 update_engine[1429]: I20250213 20:39:48.677269 1429 update_attempter.cc:509] Updating boot flags... Feb 13 20:39:48.694969 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 40 scanned by (udev-worker) (2581) Feb 13 20:39:48.732082 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 40 scanned by (udev-worker) (2582) Feb 13 20:39:48.760030 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 40 scanned by (udev-worker) (2582) Feb 13 20:39:51.833400 kubelet[2508]: E0213 20:39:51.833307 2508 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 20:39:53.321710 kubelet[2508]: I0213 20:39:53.321656 2508 kuberuntime_manager.go:1523] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Feb 13 20:39:53.322632 containerd[1447]: time="2025-02-13T20:39:53.322596671Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." 
Feb 13 20:39:53.322880 kubelet[2508]: I0213 20:39:53.322789 2508 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Feb 13 20:39:54.180306 kubelet[2508]: I0213 20:39:54.180255 2508 topology_manager.go:215] "Topology Admit Handler" podUID="a62b8180-80e4-46ae-9e26-7b72852445bd" podNamespace="kube-system" podName="kube-proxy-rgzlc" Feb 13 20:39:54.187079 kubelet[2508]: I0213 20:39:54.187014 2508 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/a62b8180-80e4-46ae-9e26-7b72852445bd-xtables-lock\") pod \"kube-proxy-rgzlc\" (UID: \"a62b8180-80e4-46ae-9e26-7b72852445bd\") " pod="kube-system/kube-proxy-rgzlc" Feb 13 20:39:54.187079 kubelet[2508]: I0213 20:39:54.187062 2508 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q9dr5\" (UniqueName: \"kubernetes.io/projected/a62b8180-80e4-46ae-9e26-7b72852445bd-kube-api-access-q9dr5\") pod \"kube-proxy-rgzlc\" (UID: \"a62b8180-80e4-46ae-9e26-7b72852445bd\") " pod="kube-system/kube-proxy-rgzlc" Feb 13 20:39:54.187079 kubelet[2508]: I0213 20:39:54.187089 2508 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/a62b8180-80e4-46ae-9e26-7b72852445bd-kube-proxy\") pod \"kube-proxy-rgzlc\" (UID: \"a62b8180-80e4-46ae-9e26-7b72852445bd\") " pod="kube-system/kube-proxy-rgzlc" Feb 13 20:39:54.187310 kubelet[2508]: I0213 20:39:54.187110 2508 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/a62b8180-80e4-46ae-9e26-7b72852445bd-lib-modules\") pod \"kube-proxy-rgzlc\" (UID: \"a62b8180-80e4-46ae-9e26-7b72852445bd\") " pod="kube-system/kube-proxy-rgzlc" Feb 13 20:39:54.191969 kubelet[2508]: I0213 20:39:54.190368 2508 topology_manager.go:215] "Topology Admit Handler" podUID="fba936b1-e9bf-4b9d-8ced-8a880a98539b" podNamespace="kube-flannel" podName="kube-flannel-ds-q7x5h" Feb 13 20:39:54.195827 systemd[1]: Created slice kubepods-besteffort-poda62b8180_80e4_46ae_9e26_7b72852445bd.slice - libcontainer container kubepods-besteffort-poda62b8180_80e4_46ae_9e26_7b72852445bd.slice. Feb 13 20:39:54.211585 systemd[1]: Created slice kubepods-burstable-podfba936b1_e9bf_4b9d_8ced_8a880a98539b.slice - libcontainer container kubepods-burstable-podfba936b1_e9bf_4b9d_8ced_8a880a98539b.slice. 
Feb 13 20:39:54.288239 kubelet[2508]: I0213 20:39:54.288111 2508 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flannel-cfg\" (UniqueName: \"kubernetes.io/configmap/fba936b1-e9bf-4b9d-8ced-8a880a98539b-flannel-cfg\") pod \"kube-flannel-ds-q7x5h\" (UID: \"fba936b1-e9bf-4b9d-8ced-8a880a98539b\") " pod="kube-flannel/kube-flannel-ds-q7x5h" Feb 13 20:39:54.288239 kubelet[2508]: I0213 20:39:54.288153 2508 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-64nxr\" (UniqueName: \"kubernetes.io/projected/fba936b1-e9bf-4b9d-8ced-8a880a98539b-kube-api-access-64nxr\") pod \"kube-flannel-ds-q7x5h\" (UID: \"fba936b1-e9bf-4b9d-8ced-8a880a98539b\") " pod="kube-flannel/kube-flannel-ds-q7x5h" Feb 13 20:39:54.288239 kubelet[2508]: I0213 20:39:54.288174 2508 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni\" (UniqueName: \"kubernetes.io/host-path/fba936b1-e9bf-4b9d-8ced-8a880a98539b-cni\") pod \"kube-flannel-ds-q7x5h\" (UID: \"fba936b1-e9bf-4b9d-8ced-8a880a98539b\") " pod="kube-flannel/kube-flannel-ds-q7x5h" Feb 13 20:39:54.288239 kubelet[2508]: I0213 20:39:54.288190 2508 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run\" (UniqueName: \"kubernetes.io/host-path/fba936b1-e9bf-4b9d-8ced-8a880a98539b-run\") pod \"kube-flannel-ds-q7x5h\" (UID: \"fba936b1-e9bf-4b9d-8ced-8a880a98539b\") " pod="kube-flannel/kube-flannel-ds-q7x5h" Feb 13 20:39:54.288239 kubelet[2508]: I0213 20:39:54.288216 2508 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-plugin\" (UniqueName: \"kubernetes.io/host-path/fba936b1-e9bf-4b9d-8ced-8a880a98539b-cni-plugin\") pod \"kube-flannel-ds-q7x5h\" (UID: \"fba936b1-e9bf-4b9d-8ced-8a880a98539b\") " pod="kube-flannel/kube-flannel-ds-q7x5h" Feb 13 20:39:54.288884 kubelet[2508]: I0213 20:39:54.288324 2508 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/fba936b1-e9bf-4b9d-8ced-8a880a98539b-xtables-lock\") pod \"kube-flannel-ds-q7x5h\" (UID: \"fba936b1-e9bf-4b9d-8ced-8a880a98539b\") " pod="kube-flannel/kube-flannel-ds-q7x5h" Feb 13 20:39:54.506038 kubelet[2508]: E0213 20:39:54.505910 2508 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 20:39:54.506755 containerd[1447]: time="2025-02-13T20:39:54.506584714Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-rgzlc,Uid:a62b8180-80e4-46ae-9e26-7b72852445bd,Namespace:kube-system,Attempt:0,}" Feb 13 20:39:54.513617 kubelet[2508]: E0213 20:39:54.513586 2508 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 20:39:54.514489 containerd[1447]: time="2025-02-13T20:39:54.514445555Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-flannel-ds-q7x5h,Uid:fba936b1-e9bf-4b9d-8ced-8a880a98539b,Namespace:kube-flannel,Attempt:0,}" Feb 13 20:39:54.528699 containerd[1447]: time="2025-02-13T20:39:54.528591269Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 20:39:54.528699 containerd[1447]: time="2025-02-13T20:39:54.528657870Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 20:39:54.528699 containerd[1447]: time="2025-02-13T20:39:54.528673870Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 20:39:54.529828 containerd[1447]: time="2025-02-13T20:39:54.529033512Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 20:39:54.537777 containerd[1447]: time="2025-02-13T20:39:54.537587956Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 20:39:54.537777 containerd[1447]: time="2025-02-13T20:39:54.537643597Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 20:39:54.537777 containerd[1447]: time="2025-02-13T20:39:54.537666637Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 20:39:54.537777 containerd[1447]: time="2025-02-13T20:39:54.537741677Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 20:39:54.555134 systemd[1]: Started cri-containerd-b4a4e246f17bd55e73b47499c826f929ac2c8ac370e5d0d3c524bd4784a1e0fe.scope - libcontainer container b4a4e246f17bd55e73b47499c826f929ac2c8ac370e5d0d3c524bd4784a1e0fe. Feb 13 20:39:54.558284 systemd[1]: Started cri-containerd-b6c5a189d4c990f3f5a3ea91eaf0456ab5585744a5eab576219f579cfa11cc94.scope - libcontainer container b6c5a189d4c990f3f5a3ea91eaf0456ab5585744a5eab576219f579cfa11cc94. 
Feb 13 20:39:54.581032 containerd[1447]: time="2025-02-13T20:39:54.580894743Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-rgzlc,Uid:a62b8180-80e4-46ae-9e26-7b72852445bd,Namespace:kube-system,Attempt:0,} returns sandbox id \"b4a4e246f17bd55e73b47499c826f929ac2c8ac370e5d0d3c524bd4784a1e0fe\"" Feb 13 20:39:54.583956 kubelet[2508]: E0213 20:39:54.583687 2508 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 20:39:54.587472 containerd[1447]: time="2025-02-13T20:39:54.587428658Z" level=info msg="CreateContainer within sandbox \"b4a4e246f17bd55e73b47499c826f929ac2c8ac370e5d0d3c524bd4784a1e0fe\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Feb 13 20:39:54.593784 containerd[1447]: time="2025-02-13T20:39:54.593485369Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-flannel-ds-q7x5h,Uid:fba936b1-e9bf-4b9d-8ced-8a880a98539b,Namespace:kube-flannel,Attempt:0,} returns sandbox id \"b6c5a189d4c990f3f5a3ea91eaf0456ab5585744a5eab576219f579cfa11cc94\"" Feb 13 20:39:54.594407 kubelet[2508]: E0213 20:39:54.594381 2508 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 20:39:54.596236 containerd[1447]: time="2025-02-13T20:39:54.596197584Z" level=info msg="PullImage \"docker.io/flannel/flannel-cni-plugin:v1.1.2\"" Feb 13 20:39:54.609636 containerd[1447]: time="2025-02-13T20:39:54.609590294Z" level=info msg="CreateContainer within sandbox \"b4a4e246f17bd55e73b47499c826f929ac2c8ac370e5d0d3c524bd4784a1e0fe\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"d37f5829caa1dd700e1bb6f15296d860b6e9814e05b768e2ce1024fcc4bf4104\"" Feb 13 20:39:54.610435 containerd[1447]: time="2025-02-13T20:39:54.610411298Z" level=info msg="StartContainer for \"d37f5829caa1dd700e1bb6f15296d860b6e9814e05b768e2ce1024fcc4bf4104\"" Feb 13 20:39:54.638132 systemd[1]: Started cri-containerd-d37f5829caa1dd700e1bb6f15296d860b6e9814e05b768e2ce1024fcc4bf4104.scope - libcontainer container d37f5829caa1dd700e1bb6f15296d860b6e9814e05b768e2ce1024fcc4bf4104. 
Feb 13 20:39:54.661096 containerd[1447]: time="2025-02-13T20:39:54.661035083Z" level=info msg="StartContainer for \"d37f5829caa1dd700e1bb6f15296d860b6e9814e05b768e2ce1024fcc4bf4104\" returns successfully" Feb 13 20:39:55.555543 kubelet[2508]: E0213 20:39:55.555511 2508 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 20:39:55.565982 kubelet[2508]: I0213 20:39:55.564511 2508 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-rgzlc" podStartSLOduration=1.564484394 podStartE2EDuration="1.564484394s" podCreationTimestamp="2025-02-13 20:39:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-02-13 20:39:55.564484354 +0000 UTC m=+17.141442665" watchObservedRunningTime="2025-02-13 20:39:55.564484394 +0000 UTC m=+17.141442705" Feb 13 20:39:55.718683 containerd[1447]: time="2025-02-13T20:39:55.718567352Z" level=error msg="PullImage \"docker.io/flannel/flannel-cni-plugin:v1.1.2\" failed" error="failed to pull and unpack image \"docker.io/flannel/flannel-cni-plugin:v1.1.2\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/flannel/flannel-cni-plugin/manifests/sha256:bf4b62b131666d040f35a327d906ee5a3418280b68a88d9b9c7e828057210443: 429 Too Many Requests - Server message: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit" Feb 13 20:39:55.718683 containerd[1447]: time="2025-02-13T20:39:55.718595832Z" level=info msg="stop pulling image docker.io/flannel/flannel-cni-plugin:v1.1.2: active requests=0, bytes read=11110" Feb 13 20:39:55.719110 kubelet[2508]: E0213 20:39:55.718817 2508 remote_image.go:180] "PullImage from image service failed" err="rpc error: code = Unknown desc = failed to pull and unpack image \"docker.io/flannel/flannel-cni-plugin:v1.1.2\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/flannel/flannel-cni-plugin/manifests/sha256:bf4b62b131666d040f35a327d906ee5a3418280b68a88d9b9c7e828057210443: 429 Too Many Requests - Server message: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit" image="docker.io/flannel/flannel-cni-plugin:v1.1.2" Feb 13 20:39:55.719110 kubelet[2508]: E0213 20:39:55.718882 2508 kuberuntime_image.go:55] "Failed to pull image" err="failed to pull and unpack image \"docker.io/flannel/flannel-cni-plugin:v1.1.2\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/flannel/flannel-cni-plugin/manifests/sha256:bf4b62b131666d040f35a327d906ee5a3418280b68a88d9b9c7e828057210443: 429 Too Many Requests - Server message: toomanyrequests: You have reached your pull rate limit. 
You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit" image="docker.io/flannel/flannel-cni-plugin:v1.1.2" Feb 13 20:39:55.719211 kubelet[2508]: E0213 20:39:55.719095 2508 kuberuntime_manager.go:1256] init container &Container{Name:install-cni-plugin,Image:docker.io/flannel/flannel-cni-plugin:v1.1.2,Command:[cp],Args:[-f /flannel /opt/cni/bin/flannel],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:cni-plugin,ReadOnly:false,MountPath:/opt/cni/bin,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-64nxr,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod kube-flannel-ds-q7x5h_kube-flannel(fba936b1-e9bf-4b9d-8ced-8a880a98539b): ErrImagePull: failed to pull and unpack image "docker.io/flannel/flannel-cni-plugin:v1.1.2": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/flannel/flannel-cni-plugin/manifests/sha256:bf4b62b131666d040f35a327d906ee5a3418280b68a88d9b9c7e828057210443: 429 Too Many Requests - Server message: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit Feb 13 20:39:55.719329 kubelet[2508]: E0213 20:39:55.719126 2508 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"install-cni-plugin\" with ErrImagePull: \"failed to pull and unpack image \\\"docker.io/flannel/flannel-cni-plugin:v1.1.2\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/flannel/flannel-cni-plugin/manifests/sha256:bf4b62b131666d040f35a327d906ee5a3418280b68a88d9b9c7e828057210443: 429 Too Many Requests - Server message: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit\"" pod="kube-flannel/kube-flannel-ds-q7x5h" podUID="fba936b1-e9bf-4b9d-8ced-8a880a98539b" Feb 13 20:39:56.559173 kubelet[2508]: E0213 20:39:56.559060 2508 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 20:39:56.560716 kubelet[2508]: E0213 20:39:56.560560 2508 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"install-cni-plugin\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/flannel/flannel-cni-plugin:v1.1.2\\\"\"" pod="kube-flannel/kube-flannel-ds-q7x5h" podUID="fba936b1-e9bf-4b9d-8ced-8a880a98539b" Feb 13 20:40:04.812321 systemd[1]: Started sshd@5-10.0.0.7:22-10.0.0.1:34680.service - OpenSSH per-connection server daemon (10.0.0.1:34680). 
Feb 13 20:40:04.846742 sshd[2832]: Accepted publickey for core from 10.0.0.1 port 34680 ssh2: RSA SHA256:ijJ+MGEzuaViQlbMobaPQLogVrdZNUxIS0COzLzMHAQ Feb 13 20:40:04.848230 sshd[2832]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 20:40:04.851757 systemd-logind[1420]: New session 6 of user core. Feb 13 20:40:04.859127 systemd[1]: Started session-6.scope - Session 6 of User core. Feb 13 20:40:04.973984 sshd[2832]: pam_unix(sshd:session): session closed for user core Feb 13 20:40:04.977328 systemd[1]: sshd@5-10.0.0.7:22-10.0.0.1:34680.service: Deactivated successfully. Feb 13 20:40:04.978818 systemd[1]: session-6.scope: Deactivated successfully. Feb 13 20:40:04.980630 systemd-logind[1420]: Session 6 logged out. Waiting for processes to exit. Feb 13 20:40:04.981829 systemd-logind[1420]: Removed session 6. Feb 13 20:40:09.984588 systemd[1]: Started sshd@6-10.0.0.7:22-10.0.0.1:34686.service - OpenSSH per-connection server daemon (10.0.0.1:34686). Feb 13 20:40:10.021803 sshd[2848]: Accepted publickey for core from 10.0.0.1 port 34686 ssh2: RSA SHA256:ijJ+MGEzuaViQlbMobaPQLogVrdZNUxIS0COzLzMHAQ Feb 13 20:40:10.023071 sshd[2848]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 20:40:10.027018 systemd-logind[1420]: New session 7 of user core. Feb 13 20:40:10.035080 systemd[1]: Started session-7.scope - Session 7 of User core. Feb 13 20:40:10.142181 sshd[2848]: pam_unix(sshd:session): session closed for user core Feb 13 20:40:10.145764 systemd[1]: sshd@6-10.0.0.7:22-10.0.0.1:34686.service: Deactivated successfully. Feb 13 20:40:10.147345 systemd[1]: session-7.scope: Deactivated successfully. Feb 13 20:40:10.147902 systemd-logind[1420]: Session 7 logged out. Waiting for processes to exit. Feb 13 20:40:10.148706 systemd-logind[1420]: Removed session 7. Feb 13 20:40:10.512842 kubelet[2508]: E0213 20:40:10.512794 2508 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 20:40:10.514469 containerd[1447]: time="2025-02-13T20:40:10.514361991Z" level=info msg="PullImage \"docker.io/flannel/flannel-cni-plugin:v1.1.2\"" Feb 13 20:40:11.629704 containerd[1447]: time="2025-02-13T20:40:11.629626919Z" level=error msg="PullImage \"docker.io/flannel/flannel-cni-plugin:v1.1.2\" failed" error="failed to pull and unpack image \"docker.io/flannel/flannel-cni-plugin:v1.1.2\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/flannel/flannel-cni-plugin/manifests/sha256:bf4b62b131666d040f35a327d906ee5a3418280b68a88d9b9c7e828057210443: 429 Too Many Requests - Server message: toomanyrequests: You have reached your pull rate limit. 
You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit" Feb 13 20:40:11.629704 containerd[1447]: time="2025-02-13T20:40:11.629686640Z" level=info msg="stop pulling image docker.io/flannel/flannel-cni-plugin:v1.1.2: active requests=0, bytes read=11110" Feb 13 20:40:11.630139 kubelet[2508]: E0213 20:40:11.629803 2508 remote_image.go:180] "PullImage from image service failed" err="rpc error: code = Unknown desc = failed to pull and unpack image \"docker.io/flannel/flannel-cni-plugin:v1.1.2\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/flannel/flannel-cni-plugin/manifests/sha256:bf4b62b131666d040f35a327d906ee5a3418280b68a88d9b9c7e828057210443: 429 Too Many Requests - Server message: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit" image="docker.io/flannel/flannel-cni-plugin:v1.1.2" Feb 13 20:40:11.630139 kubelet[2508]: E0213 20:40:11.629849 2508 kuberuntime_image.go:55] "Failed to pull image" err="failed to pull and unpack image \"docker.io/flannel/flannel-cni-plugin:v1.1.2\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/flannel/flannel-cni-plugin/manifests/sha256:bf4b62b131666d040f35a327d906ee5a3418280b68a88d9b9c7e828057210443: 429 Too Many Requests - Server message: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit" image="docker.io/flannel/flannel-cni-plugin:v1.1.2" Feb 13 20:40:11.631171 kubelet[2508]: E0213 20:40:11.629927 2508 kuberuntime_manager.go:1256] init container &Container{Name:install-cni-plugin,Image:docker.io/flannel/flannel-cni-plugin:v1.1.2,Command:[cp],Args:[-f /flannel /opt/cni/bin/flannel],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:cni-plugin,ReadOnly:false,MountPath:/opt/cni/bin,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-64nxr,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod kube-flannel-ds-q7x5h_kube-flannel(fba936b1-e9bf-4b9d-8ced-8a880a98539b): ErrImagePull: failed to pull and unpack image "docker.io/flannel/flannel-cni-plugin:v1.1.2": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/flannel/flannel-cni-plugin/manifests/sha256:bf4b62b131666d040f35a327d906ee5a3418280b68a88d9b9c7e828057210443: 429 Too Many Requests - Server message: toomanyrequests: You have reached your pull rate limit. 
You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit Feb 13 20:40:11.631229 kubelet[2508]: E0213 20:40:11.629966 2508 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"install-cni-plugin\" with ErrImagePull: \"failed to pull and unpack image \\\"docker.io/flannel/flannel-cni-plugin:v1.1.2\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/flannel/flannel-cni-plugin/manifests/sha256:bf4b62b131666d040f35a327d906ee5a3418280b68a88d9b9c7e828057210443: 429 Too Many Requests - Server message: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit\"" pod="kube-flannel/kube-flannel-ds-q7x5h" podUID="fba936b1-e9bf-4b9d-8ced-8a880a98539b" Feb 13 20:40:15.152278 systemd[1]: Started sshd@7-10.0.0.7:22-10.0.0.1:49304.service - OpenSSH per-connection server daemon (10.0.0.1:49304). Feb 13 20:40:15.186567 sshd[2863]: Accepted publickey for core from 10.0.0.1 port 49304 ssh2: RSA SHA256:ijJ+MGEzuaViQlbMobaPQLogVrdZNUxIS0COzLzMHAQ Feb 13 20:40:15.187849 sshd[2863]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 20:40:15.191295 systemd-logind[1420]: New session 8 of user core. Feb 13 20:40:15.208128 systemd[1]: Started session-8.scope - Session 8 of User core. Feb 13 20:40:15.315088 sshd[2863]: pam_unix(sshd:session): session closed for user core Feb 13 20:40:15.318230 systemd[1]: sshd@7-10.0.0.7:22-10.0.0.1:49304.service: Deactivated successfully. Feb 13 20:40:15.319779 systemd[1]: session-8.scope: Deactivated successfully. Feb 13 20:40:15.321388 systemd-logind[1420]: Session 8 logged out. Waiting for processes to exit. Feb 13 20:40:15.322165 systemd-logind[1420]: Removed session 8. Feb 13 20:40:20.325259 systemd[1]: Started sshd@8-10.0.0.7:22-10.0.0.1:49318.service - OpenSSH per-connection server daemon (10.0.0.1:49318). Feb 13 20:40:20.359307 sshd[2878]: Accepted publickey for core from 10.0.0.1 port 49318 ssh2: RSA SHA256:ijJ+MGEzuaViQlbMobaPQLogVrdZNUxIS0COzLzMHAQ Feb 13 20:40:20.360487 sshd[2878]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 20:40:20.363799 systemd-logind[1420]: New session 9 of user core. Feb 13 20:40:20.370080 systemd[1]: Started session-9.scope - Session 9 of User core. Feb 13 20:40:20.476779 sshd[2878]: pam_unix(sshd:session): session closed for user core Feb 13 20:40:20.479446 systemd[1]: sshd@8-10.0.0.7:22-10.0.0.1:49318.service: Deactivated successfully. Feb 13 20:40:20.481692 systemd-logind[1420]: Session 9 logged out. Waiting for processes to exit. Feb 13 20:40:20.481795 systemd[1]: session-9.scope: Deactivated successfully. Feb 13 20:40:20.482618 systemd-logind[1420]: Removed session 9. Feb 13 20:40:25.487344 systemd[1]: Started sshd@9-10.0.0.7:22-10.0.0.1:51194.service - OpenSSH per-connection server daemon (10.0.0.1:51194). Feb 13 20:40:25.521665 sshd[2896]: Accepted publickey for core from 10.0.0.1 port 51194 ssh2: RSA SHA256:ijJ+MGEzuaViQlbMobaPQLogVrdZNUxIS0COzLzMHAQ Feb 13 20:40:25.522804 sshd[2896]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 20:40:25.526786 systemd-logind[1420]: New session 10 of user core. Feb 13 20:40:25.542065 systemd[1]: Started session-10.scope - Session 10 of User core. 
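[Note] A side pattern worth decoding: each SSH connection in this log is handled by a socket-activated instance such as sshd@7-10.0.0.7:22-10.0.0.1:49304.service ("per-connection server daemon"), where systemd encodes a connection counter plus the local and remote endpoints in the instance name. A small sketch that unpacks those names, covering only the IPv4 form this journal shows:

    # Sketch: decode the instance name of a socket-activated, per-connection
    # sshd unit, e.g. "sshd@7-10.0.0.7:22-10.0.0.1:49304.service".
    # IPv4-only, matching the naming seen in this journal.
    import re

    UNIT = re.compile(
        r"sshd@(?P<seq>\d+)"
        r"-(?P<local_addr>[\d.]+):(?P<local_port>\d+)"
        r"-(?P<remote_addr>[\d.]+):(?P<remote_port>\d+)\.service")

    def parse_sshd_unit(name):
        m = UNIT.fullmatch(name)
        if m is None:
            raise ValueError(f"not a per-connection sshd unit: {name}")
        return m.groupdict()

    print(parse_sshd_unit("sshd@7-10.0.0.7:22-10.0.0.1:49304.service"))
    # {'seq': '7', 'local_addr': '10.0.0.7', 'local_port': '22',
    #  'remote_addr': '10.0.0.1', 'remote_port': '49304'}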
Feb 13 20:40:25.649148 sshd[2896]: pam_unix(sshd:session): session closed for user core Feb 13 20:40:25.652556 systemd[1]: sshd@9-10.0.0.7:22-10.0.0.1:51194.service: Deactivated successfully. Feb 13 20:40:25.654101 systemd[1]: session-10.scope: Deactivated successfully. Feb 13 20:40:25.654660 systemd-logind[1420]: Session 10 logged out. Waiting for processes to exit. Feb 13 20:40:25.655547 systemd-logind[1420]: Removed session 10. Feb 13 20:40:26.512093 kubelet[2508]: E0213 20:40:26.512052 2508 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 20:40:26.512915 kubelet[2508]: E0213 20:40:26.512678 2508 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"install-cni-plugin\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/flannel/flannel-cni-plugin:v1.1.2\\\"\"" pod="kube-flannel/kube-flannel-ds-q7x5h" podUID="fba936b1-e9bf-4b9d-8ced-8a880a98539b" Feb 13 20:40:30.662562 systemd[1]: Started sshd@10-10.0.0.7:22-10.0.0.1:51210.service - OpenSSH per-connection server daemon (10.0.0.1:51210). Feb 13 20:40:30.697629 sshd[2912]: Accepted publickey for core from 10.0.0.1 port 51210 ssh2: RSA SHA256:ijJ+MGEzuaViQlbMobaPQLogVrdZNUxIS0COzLzMHAQ Feb 13 20:40:30.698778 sshd[2912]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 20:40:30.702060 systemd-logind[1420]: New session 11 of user core. Feb 13 20:40:30.709065 systemd[1]: Started session-11.scope - Session 11 of User core. Feb 13 20:40:30.813143 sshd[2912]: pam_unix(sshd:session): session closed for user core Feb 13 20:40:30.815735 systemd[1]: sshd@10-10.0.0.7:22-10.0.0.1:51210.service: Deactivated successfully. Feb 13 20:40:30.817631 systemd[1]: session-11.scope: Deactivated successfully. Feb 13 20:40:30.819074 systemd-logind[1420]: Session 11 logged out. Waiting for processes to exit. Feb 13 20:40:30.820243 systemd-logind[1420]: Removed session 11. Feb 13 20:40:35.824647 systemd[1]: Started sshd@11-10.0.0.7:22-10.0.0.1:39490.service - OpenSSH per-connection server daemon (10.0.0.1:39490). Feb 13 20:40:35.858488 sshd[2928]: Accepted publickey for core from 10.0.0.1 port 39490 ssh2: RSA SHA256:ijJ+MGEzuaViQlbMobaPQLogVrdZNUxIS0COzLzMHAQ Feb 13 20:40:35.859608 sshd[2928]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 20:40:35.863596 systemd-logind[1420]: New session 12 of user core. Feb 13 20:40:35.872083 systemd[1]: Started session-12.scope - Session 12 of User core. Feb 13 20:40:35.974518 sshd[2928]: pam_unix(sshd:session): session closed for user core Feb 13 20:40:35.978439 systemd[1]: sshd@11-10.0.0.7:22-10.0.0.1:39490.service: Deactivated successfully. Feb 13 20:40:35.980450 systemd[1]: session-12.scope: Deactivated successfully. Feb 13 20:40:35.981244 systemd-logind[1420]: Session 12 logged out. Waiting for processes to exit. Feb 13 20:40:35.982144 systemd-logind[1420]: Removed session 12. Feb 13 20:40:40.984268 systemd[1]: Started sshd@12-10.0.0.7:22-10.0.0.1:39496.service - OpenSSH per-connection server daemon (10.0.0.1:39496). Feb 13 20:40:41.018215 sshd[2945]: Accepted publickey for core from 10.0.0.1 port 39496 ssh2: RSA SHA256:ijJ+MGEzuaViQlbMobaPQLogVrdZNUxIS0COzLzMHAQ Feb 13 20:40:41.019464 sshd[2945]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 20:40:41.022942 systemd-logind[1420]: New session 13 of user core. 
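[Note] The recurring "Nameserver limits exceeded" warning is the kubelet noting that the node's resolv.conf lists more nameservers than the classic resolver limit of three (glibc's MAXNS, which the kubelet also applies when building pod resolv.conf files); everything past the applied line 1.1.1.1 1.0.0.1 8.8.8.8 is silently dropped. A small sketch, assuming the conventional /etc/resolv.conf path, that flags the condition:

    # Sketch: warn when resolv.conf lists more nameservers than the resolver
    # limit of three (glibc MAXNS), the condition behind kubelet's
    # "Nameserver limits exceeded" message.
    MAXNS = 3  # glibc resolver limit; the kubelet applies the same cap

    def check_resolv_conf(path="/etc/resolv.conf"):
        with open(path) as f:
            servers = [parts[1] for line in f
                       if (parts := line.split())
                       and parts[0] == "nameserver" and len(parts) > 1]
        if len(servers) > MAXNS:
            print(f"{len(servers)} nameservers listed; only the first {MAXNS} "
                  f"are applied: {' '.join(servers[:MAXNS])}")
        else:
            print(f"ok: {len(servers)} nameserver(s), within the limit")

    if __name__ == "__main__":
        check_resolv_conf()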
Feb 13 20:40:41.035072 systemd[1]: Started session-13.scope - Session 13 of User core. Feb 13 20:40:41.139352 sshd[2945]: pam_unix(sshd:session): session closed for user core Feb 13 20:40:41.143217 systemd[1]: sshd@12-10.0.0.7:22-10.0.0.1:39496.service: Deactivated successfully. Feb 13 20:40:41.145366 systemd[1]: session-13.scope: Deactivated successfully. Feb 13 20:40:41.145949 systemd-logind[1420]: Session 13 logged out. Waiting for processes to exit. Feb 13 20:40:41.146790 systemd-logind[1420]: Removed session 13. Feb 13 20:40:41.512999 kubelet[2508]: E0213 20:40:41.512533 2508 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 20:40:41.514928 containerd[1447]: time="2025-02-13T20:40:41.514270227Z" level=info msg="PullImage \"docker.io/flannel/flannel-cni-plugin:v1.1.2\"" Feb 13 20:40:42.669938 containerd[1447]: time="2025-02-13T20:40:42.669847943Z" level=error msg="PullImage \"docker.io/flannel/flannel-cni-plugin:v1.1.2\" failed" error="failed to pull and unpack image \"docker.io/flannel/flannel-cni-plugin:v1.1.2\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/flannel/flannel-cni-plugin/manifests/sha256:bf4b62b131666d040f35a327d906ee5a3418280b68a88d9b9c7e828057210443: 429 Too Many Requests - Server message: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit" Feb 13 20:40:42.670281 containerd[1447]: time="2025-02-13T20:40:42.669924944Z" level=info msg="stop pulling image docker.io/flannel/flannel-cni-plugin:v1.1.2: active requests=0, bytes read=11110" Feb 13 20:40:42.670311 kubelet[2508]: E0213 20:40:42.670098 2508 remote_image.go:180] "PullImage from image service failed" err="rpc error: code = Unknown desc = failed to pull and unpack image \"docker.io/flannel/flannel-cni-plugin:v1.1.2\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/flannel/flannel-cni-plugin/manifests/sha256:bf4b62b131666d040f35a327d906ee5a3418280b68a88d9b9c7e828057210443: 429 Too Many Requests - Server message: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit" image="docker.io/flannel/flannel-cni-plugin:v1.1.2" Feb 13 20:40:42.670311 kubelet[2508]: E0213 20:40:42.670175 2508 kuberuntime_image.go:55] "Failed to pull image" err="failed to pull and unpack image \"docker.io/flannel/flannel-cni-plugin:v1.1.2\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/flannel/flannel-cni-plugin/manifests/sha256:bf4b62b131666d040f35a327d906ee5a3418280b68a88d9b9c7e828057210443: 429 Too Many Requests - Server message: toomanyrequests: You have reached your pull rate limit. 
You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit" image="docker.io/flannel/flannel-cni-plugin:v1.1.2" Feb 13 20:40:42.670553 kubelet[2508]: E0213 20:40:42.670285 2508 kuberuntime_manager.go:1256] init container &Container{Name:install-cni-plugin,Image:docker.io/flannel/flannel-cni-plugin:v1.1.2,Command:[cp],Args:[-f /flannel /opt/cni/bin/flannel],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:cni-plugin,ReadOnly:false,MountPath:/opt/cni/bin,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-64nxr,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod kube-flannel-ds-q7x5h_kube-flannel(fba936b1-e9bf-4b9d-8ced-8a880a98539b): ErrImagePull: failed to pull and unpack image "docker.io/flannel/flannel-cni-plugin:v1.1.2": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/flannel/flannel-cni-plugin/manifests/sha256:bf4b62b131666d040f35a327d906ee5a3418280b68a88d9b9c7e828057210443: 429 Too Many Requests - Server message: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit Feb 13 20:40:42.670606 kubelet[2508]: E0213 20:40:42.670317 2508 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"install-cni-plugin\" with ErrImagePull: \"failed to pull and unpack image \\\"docker.io/flannel/flannel-cni-plugin:v1.1.2\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/flannel/flannel-cni-plugin/manifests/sha256:bf4b62b131666d040f35a327d906ee5a3418280b68a88d9b9c7e828057210443: 429 Too Many Requests - Server message: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit\"" pod="kube-flannel/kube-flannel-ds-q7x5h" podUID="fba936b1-e9bf-4b9d-8ced-8a880a98539b" Feb 13 20:40:46.150345 systemd[1]: Started sshd@13-10.0.0.7:22-10.0.0.1:55174.service - OpenSSH per-connection server daemon (10.0.0.1:55174). Feb 13 20:40:46.184170 sshd[2962]: Accepted publickey for core from 10.0.0.1 port 55174 ssh2: RSA SHA256:ijJ+MGEzuaViQlbMobaPQLogVrdZNUxIS0COzLzMHAQ Feb 13 20:40:46.185409 sshd[2962]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 20:40:46.189165 systemd-logind[1420]: New session 14 of user core. Feb 13 20:40:46.209102 systemd[1]: Started session-14.scope - Session 14 of User core. Feb 13 20:40:46.319707 sshd[2962]: pam_unix(sshd:session): session closed for user core Feb 13 20:40:46.323160 systemd[1]: sshd@13-10.0.0.7:22-10.0.0.1:55174.service: Deactivated successfully. Feb 13 20:40:46.325372 systemd[1]: session-14.scope: Deactivated successfully. Feb 13 20:40:46.325925 systemd-logind[1420]: Session 14 logged out. Waiting for processes to exit. 
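[Note] Note the spacing of the pull attempts: 20:39:55, 20:40:11, 20:40:42, and (further below) 20:41:35 and 20:42:58, with ImagePullBackOff reported in between. The widening gaps are consistent with an exponential back-off; the kubelet's image-pull back-off by default starts at 10 seconds, doubles on each failure, and caps at 5 minutes. The constants in this sketch are those assumed defaults, not values read from this node's configuration:

    # Sketch: kubelet's default image-pull back-off schedule (assumed
    # defaults: 10s initial delay, doubling, 5-minute cap). Once the cap
    # is reached, every further retry waits the full 5 minutes.
    INITIAL = 10.0   # seconds
    FACTOR = 2.0
    CAP = 300.0      # 5 minutes

    def backoff_schedule(failures):
        delay, schedule = INITIAL, []
        for _ in range(failures):
            schedule.append(delay)
            delay = min(delay * FACTOR, CAP)
        return schedule

    print(backoff_schedule(7))
    # [10.0, 20.0, 40.0, 80.0, 160.0, 300.0, 300.0]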
Feb 13 20:40:46.327050 systemd-logind[1420]: Removed session 14. Feb 13 20:40:51.330323 systemd[1]: Started sshd@14-10.0.0.7:22-10.0.0.1:55182.service - OpenSSH per-connection server daemon (10.0.0.1:55182). Feb 13 20:40:51.365504 sshd[2978]: Accepted publickey for core from 10.0.0.1 port 55182 ssh2: RSA SHA256:ijJ+MGEzuaViQlbMobaPQLogVrdZNUxIS0COzLzMHAQ Feb 13 20:40:51.366715 sshd[2978]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 20:40:51.370251 systemd-logind[1420]: New session 15 of user core. Feb 13 20:40:51.377158 systemd[1]: Started session-15.scope - Session 15 of User core. Feb 13 20:40:51.492823 sshd[2978]: pam_unix(sshd:session): session closed for user core Feb 13 20:40:51.499028 systemd[1]: sshd@14-10.0.0.7:22-10.0.0.1:55182.service: Deactivated successfully. Feb 13 20:40:51.502783 systemd[1]: session-15.scope: Deactivated successfully. Feb 13 20:40:51.504099 systemd-logind[1420]: Session 15 logged out. Waiting for processes to exit. Feb 13 20:40:51.505050 systemd-logind[1420]: Removed session 15. Feb 13 20:40:55.512182 kubelet[2508]: E0213 20:40:55.512151 2508 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 20:40:56.503545 systemd[1]: Started sshd@15-10.0.0.7:22-10.0.0.1:45328.service - OpenSSH per-connection server daemon (10.0.0.1:45328). Feb 13 20:40:56.513362 kubelet[2508]: E0213 20:40:56.513304 2508 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 20:40:56.513874 kubelet[2508]: E0213 20:40:56.513848 2508 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"install-cni-plugin\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/flannel/flannel-cni-plugin:v1.1.2\\\"\"" pod="kube-flannel/kube-flannel-ds-q7x5h" podUID="fba936b1-e9bf-4b9d-8ced-8a880a98539b" Feb 13 20:40:56.542421 sshd[2996]: Accepted publickey for core from 10.0.0.1 port 45328 ssh2: RSA SHA256:ijJ+MGEzuaViQlbMobaPQLogVrdZNUxIS0COzLzMHAQ Feb 13 20:40:56.543710 sshd[2996]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 20:40:56.547389 systemd-logind[1420]: New session 16 of user core. Feb 13 20:40:56.554077 systemd[1]: Started session-16.scope - Session 16 of User core. Feb 13 20:40:56.658454 sshd[2996]: pam_unix(sshd:session): session closed for user core Feb 13 20:40:56.662297 systemd[1]: sshd@15-10.0.0.7:22-10.0.0.1:45328.service: Deactivated successfully. Feb 13 20:40:56.663804 systemd[1]: session-16.scope: Deactivated successfully. Feb 13 20:40:56.664384 systemd-logind[1420]: Session 16 logged out. Waiting for processes to exit. Feb 13 20:40:56.666226 systemd-logind[1420]: Removed session 16. Feb 13 20:41:01.669258 systemd[1]: Started sshd@16-10.0.0.7:22-10.0.0.1:45336.service - OpenSSH per-connection server daemon (10.0.0.1:45336). Feb 13 20:41:01.703417 sshd[3012]: Accepted publickey for core from 10.0.0.1 port 45336 ssh2: RSA SHA256:ijJ+MGEzuaViQlbMobaPQLogVrdZNUxIS0COzLzMHAQ Feb 13 20:41:01.704624 sshd[3012]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 20:41:01.708213 systemd-logind[1420]: New session 17 of user core. Feb 13 20:41:01.721145 systemd[1]: Started session-17.scope - Session 17 of User core. 
Feb 13 20:41:01.826823 sshd[3012]: pam_unix(sshd:session): session closed for user core Feb 13 20:41:01.830731 systemd[1]: sshd@16-10.0.0.7:22-10.0.0.1:45336.service: Deactivated successfully. Feb 13 20:41:01.832292 systemd[1]: session-17.scope: Deactivated successfully. Feb 13 20:41:01.833553 systemd-logind[1420]: Session 17 logged out. Waiting for processes to exit. Feb 13 20:41:01.834468 systemd-logind[1420]: Removed session 17. Feb 13 20:41:03.512848 kubelet[2508]: E0213 20:41:03.512799 2508 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 20:41:06.837518 systemd[1]: Started sshd@17-10.0.0.7:22-10.0.0.1:57392.service - OpenSSH per-connection server daemon (10.0.0.1:57392). Feb 13 20:41:06.871918 sshd[3027]: Accepted publickey for core from 10.0.0.1 port 57392 ssh2: RSA SHA256:ijJ+MGEzuaViQlbMobaPQLogVrdZNUxIS0COzLzMHAQ Feb 13 20:41:06.873155 sshd[3027]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 20:41:06.876411 systemd-logind[1420]: New session 18 of user core. Feb 13 20:41:06.885148 systemd[1]: Started session-18.scope - Session 18 of User core. Feb 13 20:41:06.991813 sshd[3027]: pam_unix(sshd:session): session closed for user core Feb 13 20:41:06.995028 systemd[1]: sshd@17-10.0.0.7:22-10.0.0.1:57392.service: Deactivated successfully. Feb 13 20:41:06.997356 systemd[1]: session-18.scope: Deactivated successfully. Feb 13 20:41:06.997901 systemd-logind[1420]: Session 18 logged out. Waiting for processes to exit. Feb 13 20:41:06.998693 systemd-logind[1420]: Removed session 18. Feb 13 20:41:07.512114 kubelet[2508]: E0213 20:41:07.512077 2508 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 20:41:07.513657 kubelet[2508]: E0213 20:41:07.513569 2508 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"install-cni-plugin\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/flannel/flannel-cni-plugin:v1.1.2\\\"\"" pod="kube-flannel/kube-flannel-ds-q7x5h" podUID="fba936b1-e9bf-4b9d-8ced-8a880a98539b" Feb 13 20:41:12.002346 systemd[1]: Started sshd@18-10.0.0.7:22-10.0.0.1:57402.service - OpenSSH per-connection server daemon (10.0.0.1:57402). Feb 13 20:41:12.036543 sshd[3042]: Accepted publickey for core from 10.0.0.1 port 57402 ssh2: RSA SHA256:ijJ+MGEzuaViQlbMobaPQLogVrdZNUxIS0COzLzMHAQ Feb 13 20:41:12.037771 sshd[3042]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 20:41:12.041315 systemd-logind[1420]: New session 19 of user core. Feb 13 20:41:12.051079 systemd[1]: Started session-19.scope - Session 19 of User core. Feb 13 20:41:12.154362 sshd[3042]: pam_unix(sshd:session): session closed for user core Feb 13 20:41:12.157417 systemd[1]: sshd@18-10.0.0.7:22-10.0.0.1:57402.service: Deactivated successfully. Feb 13 20:41:12.158895 systemd[1]: session-19.scope: Deactivated successfully. Feb 13 20:41:12.159436 systemd-logind[1420]: Session 19 logged out. Waiting for processes to exit. Feb 13 20:41:12.160188 systemd-logind[1420]: Removed session 19. 
Feb 13 20:41:12.512500 kubelet[2508]: E0213 20:41:12.512470 2508 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 20:41:12.512500 kubelet[2508]: E0213 20:41:12.512606 2508 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 20:41:17.169182 systemd[1]: Started sshd@19-10.0.0.7:22-10.0.0.1:37164.service - OpenSSH per-connection server daemon (10.0.0.1:37164). Feb 13 20:41:17.210561 sshd[3058]: Accepted publickey for core from 10.0.0.1 port 37164 ssh2: RSA SHA256:ijJ+MGEzuaViQlbMobaPQLogVrdZNUxIS0COzLzMHAQ Feb 13 20:41:17.211831 sshd[3058]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 20:41:17.215437 systemd-logind[1420]: New session 20 of user core. Feb 13 20:41:17.228076 systemd[1]: Started session-20.scope - Session 20 of User core. Feb 13 20:41:17.331164 sshd[3058]: pam_unix(sshd:session): session closed for user core Feb 13 20:41:17.333433 systemd[1]: session-20.scope: Deactivated successfully. Feb 13 20:41:17.335076 systemd[1]: sshd@19-10.0.0.7:22-10.0.0.1:37164.service: Deactivated successfully. Feb 13 20:41:17.335355 systemd-logind[1420]: Session 20 logged out. Waiting for processes to exit. Feb 13 20:41:17.336989 systemd-logind[1420]: Removed session 20. Feb 13 20:41:21.512406 kubelet[2508]: E0213 20:41:21.512341 2508 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 20:41:21.513319 kubelet[2508]: E0213 20:41:21.513029 2508 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"install-cni-plugin\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/flannel/flannel-cni-plugin:v1.1.2\\\"\"" pod="kube-flannel/kube-flannel-ds-q7x5h" podUID="fba936b1-e9bf-4b9d-8ced-8a880a98539b" Feb 13 20:41:22.341591 systemd[1]: Started sshd@20-10.0.0.7:22-10.0.0.1:37178.service - OpenSSH per-connection server daemon (10.0.0.1:37178). Feb 13 20:41:22.375884 sshd[3073]: Accepted publickey for core from 10.0.0.1 port 37178 ssh2: RSA SHA256:ijJ+MGEzuaViQlbMobaPQLogVrdZNUxIS0COzLzMHAQ Feb 13 20:41:22.377091 sshd[3073]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 20:41:22.380441 systemd-logind[1420]: New session 21 of user core. Feb 13 20:41:22.395072 systemd[1]: Started session-21.scope - Session 21 of User core. Feb 13 20:41:22.502337 sshd[3073]: pam_unix(sshd:session): session closed for user core Feb 13 20:41:22.505567 systemd[1]: sshd@20-10.0.0.7:22-10.0.0.1:37178.service: Deactivated successfully. Feb 13 20:41:22.508194 systemd[1]: session-21.scope: Deactivated successfully. Feb 13 20:41:22.508808 systemd-logind[1420]: Session 21 logged out. Waiting for processes to exit. Feb 13 20:41:22.509586 systemd-logind[1420]: Removed session 21. Feb 13 20:41:27.513368 systemd[1]: Started sshd@21-10.0.0.7:22-10.0.0.1:56902.service - OpenSSH per-connection server daemon (10.0.0.1:56902). 
Feb 13 20:41:27.549896 sshd[3090]: Accepted publickey for core from 10.0.0.1 port 56902 ssh2: RSA SHA256:ijJ+MGEzuaViQlbMobaPQLogVrdZNUxIS0COzLzMHAQ Feb 13 20:41:27.551209 sshd[3090]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 20:41:27.554814 systemd-logind[1420]: New session 22 of user core. Feb 13 20:41:27.563084 systemd[1]: Started session-22.scope - Session 22 of User core. Feb 13 20:41:27.671730 sshd[3090]: pam_unix(sshd:session): session closed for user core Feb 13 20:41:27.674875 systemd[1]: sshd@21-10.0.0.7:22-10.0.0.1:56902.service: Deactivated successfully. Feb 13 20:41:27.676501 systemd[1]: session-22.scope: Deactivated successfully. Feb 13 20:41:27.678396 systemd-logind[1420]: Session 22 logged out. Waiting for processes to exit. Feb 13 20:41:27.679342 systemd-logind[1420]: Removed session 22. Feb 13 20:41:32.689464 systemd[1]: Started sshd@22-10.0.0.7:22-10.0.0.1:39598.service - OpenSSH per-connection server daemon (10.0.0.1:39598). Feb 13 20:41:32.724694 sshd[3106]: Accepted publickey for core from 10.0.0.1 port 39598 ssh2: RSA SHA256:ijJ+MGEzuaViQlbMobaPQLogVrdZNUxIS0COzLzMHAQ Feb 13 20:41:32.725869 sshd[3106]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 20:41:32.729595 systemd-logind[1420]: New session 23 of user core. Feb 13 20:41:32.739080 systemd[1]: Started session-23.scope - Session 23 of User core. Feb 13 20:41:32.848268 sshd[3106]: pam_unix(sshd:session): session closed for user core Feb 13 20:41:32.851376 systemd[1]: sshd@22-10.0.0.7:22-10.0.0.1:39598.service: Deactivated successfully. Feb 13 20:41:32.853809 systemd[1]: session-23.scope: Deactivated successfully. Feb 13 20:41:32.854515 systemd-logind[1420]: Session 23 logged out. Waiting for processes to exit. Feb 13 20:41:32.855615 systemd-logind[1420]: Removed session 23. Feb 13 20:41:34.512820 kubelet[2508]: E0213 20:41:34.512786 2508 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 20:41:34.513888 containerd[1447]: time="2025-02-13T20:41:34.513854434Z" level=info msg="PullImage \"docker.io/flannel/flannel-cni-plugin:v1.1.2\"" Feb 13 20:41:35.613153 containerd[1447]: time="2025-02-13T20:41:35.613103321Z" level=error msg="PullImage \"docker.io/flannel/flannel-cni-plugin:v1.1.2\" failed" error="failed to pull and unpack image \"docker.io/flannel/flannel-cni-plugin:v1.1.2\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/flannel/flannel-cni-plugin/manifests/sha256:bf4b62b131666d040f35a327d906ee5a3418280b68a88d9b9c7e828057210443: 429 Too Many Requests - Server message: toomanyrequests: You have reached your pull rate limit. 
You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit" Feb 13 20:41:35.613794 containerd[1447]: time="2025-02-13T20:41:35.613197081Z" level=info msg="stop pulling image docker.io/flannel/flannel-cni-plugin:v1.1.2: active requests=0, bytes read=11110" Feb 13 20:41:35.613840 kubelet[2508]: E0213 20:41:35.613287 2508 remote_image.go:180] "PullImage from image service failed" err="rpc error: code = Unknown desc = failed to pull and unpack image \"docker.io/flannel/flannel-cni-plugin:v1.1.2\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/flannel/flannel-cni-plugin/manifests/sha256:bf4b62b131666d040f35a327d906ee5a3418280b68a88d9b9c7e828057210443: 429 Too Many Requests - Server message: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit" image="docker.io/flannel/flannel-cni-plugin:v1.1.2" Feb 13 20:41:35.613840 kubelet[2508]: E0213 20:41:35.613339 2508 kuberuntime_image.go:55] "Failed to pull image" err="failed to pull and unpack image \"docker.io/flannel/flannel-cni-plugin:v1.1.2\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/flannel/flannel-cni-plugin/manifests/sha256:bf4b62b131666d040f35a327d906ee5a3418280b68a88d9b9c7e828057210443: 429 Too Many Requests - Server message: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit" image="docker.io/flannel/flannel-cni-plugin:v1.1.2" Feb 13 20:41:35.614131 kubelet[2508]: E0213 20:41:35.613424 2508 kuberuntime_manager.go:1256] init container &Container{Name:install-cni-plugin,Image:docker.io/flannel/flannel-cni-plugin:v1.1.2,Command:[cp],Args:[-f /flannel /opt/cni/bin/flannel],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:cni-plugin,ReadOnly:false,MountPath:/opt/cni/bin,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-64nxr,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod kube-flannel-ds-q7x5h_kube-flannel(fba936b1-e9bf-4b9d-8ced-8a880a98539b): ErrImagePull: failed to pull and unpack image "docker.io/flannel/flannel-cni-plugin:v1.1.2": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/flannel/flannel-cni-plugin/manifests/sha256:bf4b62b131666d040f35a327d906ee5a3418280b68a88d9b9c7e828057210443: 429 Too Many Requests - Server message: toomanyrequests: You have reached your pull rate limit. 
You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit Feb 13 20:41:35.614185 kubelet[2508]: E0213 20:41:35.613454 2508 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"install-cni-plugin\" with ErrImagePull: \"failed to pull and unpack image \\\"docker.io/flannel/flannel-cni-plugin:v1.1.2\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/flannel/flannel-cni-plugin/manifests/sha256:bf4b62b131666d040f35a327d906ee5a3418280b68a88d9b9c7e828057210443: 429 Too Many Requests - Server message: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit\"" pod="kube-flannel/kube-flannel-ds-q7x5h" podUID="fba936b1-e9bf-4b9d-8ced-8a880a98539b" Feb 13 20:41:37.858376 systemd[1]: Started sshd@23-10.0.0.7:22-10.0.0.1:39610.service - OpenSSH per-connection server daemon (10.0.0.1:39610). Feb 13 20:41:37.892290 sshd[3121]: Accepted publickey for core from 10.0.0.1 port 39610 ssh2: RSA SHA256:ijJ+MGEzuaViQlbMobaPQLogVrdZNUxIS0COzLzMHAQ Feb 13 20:41:37.893447 sshd[3121]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 20:41:37.896759 systemd-logind[1420]: New session 24 of user core. Feb 13 20:41:37.907076 systemd[1]: Started session-24.scope - Session 24 of User core. Feb 13 20:41:38.012728 sshd[3121]: pam_unix(sshd:session): session closed for user core Feb 13 20:41:38.015265 systemd[1]: sshd@23-10.0.0.7:22-10.0.0.1:39610.service: Deactivated successfully. Feb 13 20:41:38.016745 systemd[1]: session-24.scope: Deactivated successfully. Feb 13 20:41:38.018593 systemd-logind[1420]: Session 24 logged out. Waiting for processes to exit. Feb 13 20:41:38.019380 systemd-logind[1420]: Removed session 24. Feb 13 20:41:38.526077 kubelet[2508]: E0213 20:41:38.526042 2508 kubelet_node_status.go:456] "Node not becoming ready in time after startup" Feb 13 20:41:38.566315 kubelet[2508]: E0213 20:41:38.566271 2508 kubelet.go:2900] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Feb 13 20:41:43.024407 systemd[1]: Started sshd@24-10.0.0.7:22-10.0.0.1:38650.service - OpenSSH per-connection server daemon (10.0.0.1:38650). Feb 13 20:41:43.058472 sshd[3138]: Accepted publickey for core from 10.0.0.1 port 38650 ssh2: RSA SHA256:ijJ+MGEzuaViQlbMobaPQLogVrdZNUxIS0COzLzMHAQ Feb 13 20:41:43.059574 sshd[3138]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 20:41:43.062950 systemd-logind[1420]: New session 25 of user core. Feb 13 20:41:43.071063 systemd[1]: Started session-25.scope - Session 25 of User core. Feb 13 20:41:43.175406 sshd[3138]: pam_unix(sshd:session): session closed for user core Feb 13 20:41:43.178539 systemd[1]: sshd@24-10.0.0.7:22-10.0.0.1:38650.service: Deactivated successfully. Feb 13 20:41:43.180219 systemd[1]: session-25.scope: Deactivated successfully. Feb 13 20:41:43.180974 systemd-logind[1420]: Session 25 logged out. Waiting for processes to exit. Feb 13 20:41:43.181726 systemd-logind[1420]: Removed session 25. 
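[Note] At 20:41:38 the failure surfaces at node level: "Node not becoming ready in time after startup" and "Container runtime network not ready ... cni plugin not initialized". That is the direct consequence of the init container above never starting; its entire job is cp -f /flannel /opt/cni/bin/flannel, so while the image pull keeps failing, no CNI binary or network config ever appears and the node stays NotReady. A sketch, assuming the conventional CNI paths, of the check an operator might run:

    # Sketch: verify the pieces the flannel init container should provide,
    # using the conventional CNI paths (/opt/cni/bin, /etc/cni/net.d).
    # On the node logged here, both checks would fail until a pull succeeds.
    import glob
    import os

    CNI_BIN = "/opt/cni/bin/flannel"   # target of the init container's cp
    CNI_CONF_DIR = "/etc/cni/net.d"    # where the kubelet looks for configs

    def cni_status():
        have_bin = os.path.exists(CNI_BIN)
        confs = glob.glob(os.path.join(CNI_CONF_DIR, "*.conf*"))
        print("flannel binary present:", have_bin)
        print("CNI network configs:", confs if confs else "none")
        return have_bin and bool(confs)

    if __name__ == "__main__":
        if not cni_status():
            print("cni plugin not initialized (matches the kubelet errors)")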
Feb 13 20:41:43.567861 kubelet[2508]: E0213 20:41:43.567810 2508 kubelet.go:2900] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Feb 13 20:41:48.193270 systemd[1]: Started sshd@25-10.0.0.7:22-10.0.0.1:38654.service - OpenSSH per-connection server daemon (10.0.0.1:38654). Feb 13 20:41:48.227233 sshd[3154]: Accepted publickey for core from 10.0.0.1 port 38654 ssh2: RSA SHA256:ijJ+MGEzuaViQlbMobaPQLogVrdZNUxIS0COzLzMHAQ Feb 13 20:41:48.228362 sshd[3154]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 20:41:48.232019 systemd-logind[1420]: New session 26 of user core. Feb 13 20:41:48.241156 systemd[1]: Started session-26.scope - Session 26 of User core. Feb 13 20:41:48.347636 sshd[3154]: pam_unix(sshd:session): session closed for user core Feb 13 20:41:48.350699 systemd[1]: sshd@25-10.0.0.7:22-10.0.0.1:38654.service: Deactivated successfully. Feb 13 20:41:48.352970 systemd[1]: session-26.scope: Deactivated successfully. Feb 13 20:41:48.353746 systemd-logind[1420]: Session 26 logged out. Waiting for processes to exit. Feb 13 20:41:48.354610 systemd-logind[1420]: Removed session 26. Feb 13 20:41:48.569141 kubelet[2508]: E0213 20:41:48.568632 2508 kubelet.go:2900] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Feb 13 20:41:50.512781 kubelet[2508]: E0213 20:41:50.512708 2508 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 20:41:50.514254 kubelet[2508]: E0213 20:41:50.514187 2508 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"install-cni-plugin\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/flannel/flannel-cni-plugin:v1.1.2\\\"\"" pod="kube-flannel/kube-flannel-ds-q7x5h" podUID="fba936b1-e9bf-4b9d-8ced-8a880a98539b" Feb 13 20:41:53.358349 systemd[1]: Started sshd@26-10.0.0.7:22-10.0.0.1:43036.service - OpenSSH per-connection server daemon (10.0.0.1:43036). Feb 13 20:41:53.392725 sshd[3170]: Accepted publickey for core from 10.0.0.1 port 43036 ssh2: RSA SHA256:ijJ+MGEzuaViQlbMobaPQLogVrdZNUxIS0COzLzMHAQ Feb 13 20:41:53.393880 sshd[3170]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 20:41:53.397135 systemd-logind[1420]: New session 27 of user core. Feb 13 20:41:53.406069 systemd[1]: Started session-27.scope - Session 27 of User core. Feb 13 20:41:53.510966 sshd[3170]: pam_unix(sshd:session): session closed for user core Feb 13 20:41:53.514315 systemd[1]: sshd@26-10.0.0.7:22-10.0.0.1:43036.service: Deactivated successfully. Feb 13 20:41:53.516523 systemd[1]: session-27.scope: Deactivated successfully. Feb 13 20:41:53.517145 systemd-logind[1420]: Session 27 logged out. Waiting for processes to exit. Feb 13 20:41:53.518140 systemd-logind[1420]: Removed session 27. Feb 13 20:41:53.569821 kubelet[2508]: E0213 20:41:53.569787 2508 kubelet.go:2900] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Feb 13 20:41:58.524290 systemd[1]: Started sshd@27-10.0.0.7:22-10.0.0.1:43042.service - OpenSSH per-connection server daemon (10.0.0.1:43042). 
Feb 13 20:41:58.558349 sshd[3188]: Accepted publickey for core from 10.0.0.1 port 43042 ssh2: RSA SHA256:ijJ+MGEzuaViQlbMobaPQLogVrdZNUxIS0COzLzMHAQ Feb 13 20:41:58.559495 sshd[3188]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 20:41:58.563045 systemd-logind[1420]: New session 28 of user core. Feb 13 20:41:58.570723 kubelet[2508]: E0213 20:41:58.570678 2508 kubelet.go:2900] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Feb 13 20:41:58.571070 systemd[1]: Started session-28.scope - Session 28 of User core. Feb 13 20:41:58.675859 sshd[3188]: pam_unix(sshd:session): session closed for user core Feb 13 20:41:58.678487 systemd-logind[1420]: Session 28 logged out. Waiting for processes to exit. Feb 13 20:41:58.678735 systemd[1]: sshd@27-10.0.0.7:22-10.0.0.1:43042.service: Deactivated successfully. Feb 13 20:41:58.680342 systemd[1]: session-28.scope: Deactivated successfully. Feb 13 20:41:58.682028 systemd-logind[1420]: Removed session 28. Feb 13 20:42:02.512983 kubelet[2508]: E0213 20:42:02.512751 2508 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 20:42:02.513652 kubelet[2508]: E0213 20:42:02.513454 2508 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"install-cni-plugin\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/flannel/flannel-cni-plugin:v1.1.2\\\"\"" pod="kube-flannel/kube-flannel-ds-q7x5h" podUID="fba936b1-e9bf-4b9d-8ced-8a880a98539b" Feb 13 20:42:03.572288 kubelet[2508]: E0213 20:42:03.572248 2508 kubelet.go:2900] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Feb 13 20:42:03.690465 systemd[1]: Started sshd@28-10.0.0.7:22-10.0.0.1:46424.service - OpenSSH per-connection server daemon (10.0.0.1:46424). Feb 13 20:42:03.724518 sshd[3204]: Accepted publickey for core from 10.0.0.1 port 46424 ssh2: RSA SHA256:ijJ+MGEzuaViQlbMobaPQLogVrdZNUxIS0COzLzMHAQ Feb 13 20:42:03.725739 sshd[3204]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 20:42:03.729565 systemd-logind[1420]: New session 29 of user core. Feb 13 20:42:03.741075 systemd[1]: Started session-29.scope - Session 29 of User core. Feb 13 20:42:03.846499 sshd[3204]: pam_unix(sshd:session): session closed for user core Feb 13 20:42:03.849800 systemd[1]: sshd@28-10.0.0.7:22-10.0.0.1:46424.service: Deactivated successfully. Feb 13 20:42:03.852511 systemd[1]: session-29.scope: Deactivated successfully. Feb 13 20:42:03.853394 systemd-logind[1420]: Session 29 logged out. Waiting for processes to exit. Feb 13 20:42:03.854313 systemd-logind[1420]: Removed session 29. Feb 13 20:42:08.573622 kubelet[2508]: E0213 20:42:08.573574 2508 kubelet.go:2900] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Feb 13 20:42:08.858256 systemd[1]: Started sshd@29-10.0.0.7:22-10.0.0.1:46438.service - OpenSSH per-connection server daemon (10.0.0.1:46438). 
Feb 13 20:42:08.893044 sshd[3222]: Accepted publickey for core from 10.0.0.1 port 46438 ssh2: RSA SHA256:ijJ+MGEzuaViQlbMobaPQLogVrdZNUxIS0COzLzMHAQ Feb 13 20:42:08.894263 sshd[3222]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 20:42:08.897993 systemd-logind[1420]: New session 30 of user core. Feb 13 20:42:08.902058 systemd[1]: Started session-30.scope - Session 30 of User core. Feb 13 20:42:09.008159 sshd[3222]: pam_unix(sshd:session): session closed for user core Feb 13 20:42:09.011527 systemd[1]: sshd@29-10.0.0.7:22-10.0.0.1:46438.service: Deactivated successfully. Feb 13 20:42:09.013104 systemd[1]: session-30.scope: Deactivated successfully. Feb 13 20:42:09.014294 systemd-logind[1420]: Session 30 logged out. Waiting for processes to exit. Feb 13 20:42:09.015148 systemd-logind[1420]: Removed session 30. Feb 13 20:42:13.574661 kubelet[2508]: E0213 20:42:13.574622 2508 kubelet.go:2900] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Feb 13 20:42:14.019359 systemd[1]: Started sshd@30-10.0.0.7:22-10.0.0.1:54464.service - OpenSSH per-connection server daemon (10.0.0.1:54464). Feb 13 20:42:14.054047 sshd[3239]: Accepted publickey for core from 10.0.0.1 port 54464 ssh2: RSA SHA256:ijJ+MGEzuaViQlbMobaPQLogVrdZNUxIS0COzLzMHAQ Feb 13 20:42:14.055209 sshd[3239]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 20:42:14.058924 systemd-logind[1420]: New session 31 of user core. Feb 13 20:42:14.073071 systemd[1]: Started session-31.scope - Session 31 of User core. Feb 13 20:42:14.178667 sshd[3239]: pam_unix(sshd:session): session closed for user core Feb 13 20:42:14.181765 systemd[1]: sshd@30-10.0.0.7:22-10.0.0.1:54464.service: Deactivated successfully. Feb 13 20:42:14.183245 systemd[1]: session-31.scope: Deactivated successfully. Feb 13 20:42:14.183769 systemd-logind[1420]: Session 31 logged out. Waiting for processes to exit. Feb 13 20:42:14.184606 systemd-logind[1420]: Removed session 31. Feb 13 20:42:15.512638 kubelet[2508]: E0213 20:42:15.512596 2508 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 20:42:15.513365 kubelet[2508]: E0213 20:42:15.513322 2508 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"install-cni-plugin\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/flannel/flannel-cni-plugin:v1.1.2\\\"\"" pod="kube-flannel/kube-flannel-ds-q7x5h" podUID="fba936b1-e9bf-4b9d-8ced-8a880a98539b" Feb 13 20:42:18.575299 kubelet[2508]: E0213 20:42:18.575264 2508 kubelet.go:2900] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Feb 13 20:42:19.193422 systemd[1]: Started sshd@31-10.0.0.7:22-10.0.0.1:54472.service - OpenSSH per-connection server daemon (10.0.0.1:54472). Feb 13 20:42:19.227275 sshd[3254]: Accepted publickey for core from 10.0.0.1 port 54472 ssh2: RSA SHA256:ijJ+MGEzuaViQlbMobaPQLogVrdZNUxIS0COzLzMHAQ Feb 13 20:42:19.228387 sshd[3254]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 20:42:19.231568 systemd-logind[1420]: New session 32 of user core. Feb 13 20:42:19.238071 systemd[1]: Started session-32.scope - Session 32 of User core. 
Feb 13 20:42:19.345136 sshd[3254]: pam_unix(sshd:session): session closed for user core Feb 13 20:42:19.348252 systemd[1]: sshd@31-10.0.0.7:22-10.0.0.1:54472.service: Deactivated successfully. Feb 13 20:42:19.350208 systemd[1]: session-32.scope: Deactivated successfully. Feb 13 20:42:19.350772 systemd-logind[1420]: Session 32 logged out. Waiting for processes to exit. Feb 13 20:42:19.351782 systemd-logind[1420]: Removed session 32. Feb 13 20:42:23.512701 kubelet[2508]: E0213 20:42:23.512609 2508 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 20:42:23.576555 kubelet[2508]: E0213 20:42:23.576499 2508 kubelet.go:2900] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Feb 13 20:42:24.355337 systemd[1]: Started sshd@32-10.0.0.7:22-10.0.0.1:60474.service - OpenSSH per-connection server daemon (10.0.0.1:60474). Feb 13 20:42:24.393578 sshd[3269]: Accepted publickey for core from 10.0.0.1 port 60474 ssh2: RSA SHA256:ijJ+MGEzuaViQlbMobaPQLogVrdZNUxIS0COzLzMHAQ Feb 13 20:42:24.394740 sshd[3269]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 20:42:24.397939 systemd-logind[1420]: New session 33 of user core. Feb 13 20:42:24.404060 systemd[1]: Started session-33.scope - Session 33 of User core. Feb 13 20:42:24.507891 sshd[3269]: pam_unix(sshd:session): session closed for user core Feb 13 20:42:24.511189 systemd[1]: sshd@32-10.0.0.7:22-10.0.0.1:60474.service: Deactivated successfully. Feb 13 20:42:24.513474 systemd[1]: session-33.scope: Deactivated successfully. Feb 13 20:42:24.514085 systemd-logind[1420]: Session 33 logged out. Waiting for processes to exit. Feb 13 20:42:24.514790 systemd-logind[1420]: Removed session 33. Feb 13 20:42:25.512656 kubelet[2508]: E0213 20:42:25.512620 2508 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 20:42:28.577294 kubelet[2508]: E0213 20:42:28.577227 2508 kubelet.go:2900] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Feb 13 20:42:29.518517 systemd[1]: Started sshd@33-10.0.0.7:22-10.0.0.1:60486.service - OpenSSH per-connection server daemon (10.0.0.1:60486). Feb 13 20:42:29.552838 sshd[3286]: Accepted publickey for core from 10.0.0.1 port 60486 ssh2: RSA SHA256:ijJ+MGEzuaViQlbMobaPQLogVrdZNUxIS0COzLzMHAQ Feb 13 20:42:29.554051 sshd[3286]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 20:42:29.557819 systemd-logind[1420]: New session 34 of user core. Feb 13 20:42:29.568068 systemd[1]: Started session-34.scope - Session 34 of User core. Feb 13 20:42:29.675789 sshd[3286]: pam_unix(sshd:session): session closed for user core Feb 13 20:42:29.678300 systemd[1]: sshd@33-10.0.0.7:22-10.0.0.1:60486.service: Deactivated successfully. Feb 13 20:42:29.679833 systemd[1]: session-34.scope: Deactivated successfully. Feb 13 20:42:29.681114 systemd-logind[1420]: Session 34 logged out. Waiting for processes to exit. Feb 13 20:42:29.681911 systemd-logind[1420]: Removed session 34. 
Feb 13 20:42:30.512989 kubelet[2508]: E0213 20:42:30.512908 2508 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 20:42:30.513897 kubelet[2508]: E0213 20:42:30.513857 2508 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 20:42:30.514395 kubelet[2508]: E0213 20:42:30.514340 2508 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"install-cni-plugin\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/flannel/flannel-cni-plugin:v1.1.2\\\"\"" pod="kube-flannel/kube-flannel-ds-q7x5h" podUID="fba936b1-e9bf-4b9d-8ced-8a880a98539b" Feb 13 20:42:32.512907 kubelet[2508]: E0213 20:42:32.512875 2508 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 20:42:33.578325 kubelet[2508]: E0213 20:42:33.578288 2508 kubelet.go:2900] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Feb 13 20:42:34.686288 systemd[1]: Started sshd@34-10.0.0.7:22-10.0.0.1:33558.service - OpenSSH per-connection server daemon (10.0.0.1:33558). Feb 13 20:42:34.720598 sshd[3302]: Accepted publickey for core from 10.0.0.1 port 33558 ssh2: RSA SHA256:ijJ+MGEzuaViQlbMobaPQLogVrdZNUxIS0COzLzMHAQ Feb 13 20:42:34.721715 sshd[3302]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 20:42:34.724990 systemd-logind[1420]: New session 35 of user core. Feb 13 20:42:34.730065 systemd[1]: Started session-35.scope - Session 35 of User core. Feb 13 20:42:34.835440 sshd[3302]: pam_unix(sshd:session): session closed for user core Feb 13 20:42:34.838489 systemd[1]: sshd@34-10.0.0.7:22-10.0.0.1:33558.service: Deactivated successfully. Feb 13 20:42:34.840025 systemd[1]: session-35.scope: Deactivated successfully. Feb 13 20:42:34.840593 systemd-logind[1420]: Session 35 logged out. Waiting for processes to exit. Feb 13 20:42:34.841357 systemd-logind[1420]: Removed session 35. Feb 13 20:42:38.579714 kubelet[2508]: E0213 20:42:38.579667 2508 kubelet.go:2900] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Feb 13 20:42:39.846326 systemd[1]: Started sshd@35-10.0.0.7:22-10.0.0.1:33562.service - OpenSSH per-connection server daemon (10.0.0.1:33562). Feb 13 20:42:39.880713 sshd[3319]: Accepted publickey for core from 10.0.0.1 port 33562 ssh2: RSA SHA256:ijJ+MGEzuaViQlbMobaPQLogVrdZNUxIS0COzLzMHAQ Feb 13 20:42:39.881855 sshd[3319]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 20:42:39.885319 systemd-logind[1420]: New session 36 of user core. Feb 13 20:42:39.904063 systemd[1]: Started session-36.scope - Session 36 of User core. Feb 13 20:42:40.009158 sshd[3319]: pam_unix(sshd:session): session closed for user core Feb 13 20:42:40.012373 systemd[1]: sshd@35-10.0.0.7:22-10.0.0.1:33562.service: Deactivated successfully. Feb 13 20:42:40.014556 systemd[1]: session-36.scope: Deactivated successfully. Feb 13 20:42:40.015127 systemd-logind[1420]: Session 36 logged out. Waiting for processes to exit. 
Feb 13 20:42:40.015888 systemd-logind[1420]: Removed session 36. Feb 13 20:42:43.580967 kubelet[2508]: E0213 20:42:43.580911 2508 kubelet.go:2900] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Feb 13 20:42:44.512700 kubelet[2508]: E0213 20:42:44.512470 2508 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 20:42:44.513090 kubelet[2508]: E0213 20:42:44.513056 2508 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"install-cni-plugin\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/flannel/flannel-cni-plugin:v1.1.2\\\"\"" pod="kube-flannel/kube-flannel-ds-q7x5h" podUID="fba936b1-e9bf-4b9d-8ced-8a880a98539b" Feb 13 20:42:45.019202 systemd[1]: Started sshd@36-10.0.0.7:22-10.0.0.1:54654.service - OpenSSH per-connection server daemon (10.0.0.1:54654). Feb 13 20:42:45.052972 sshd[3334]: Accepted publickey for core from 10.0.0.1 port 54654 ssh2: RSA SHA256:ijJ+MGEzuaViQlbMobaPQLogVrdZNUxIS0COzLzMHAQ Feb 13 20:42:45.054088 sshd[3334]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 20:42:45.057537 systemd-logind[1420]: New session 37 of user core. Feb 13 20:42:45.068125 systemd[1]: Started session-37.scope - Session 37 of User core. Feb 13 20:42:45.173545 sshd[3334]: pam_unix(sshd:session): session closed for user core Feb 13 20:42:45.176667 systemd[1]: sshd@36-10.0.0.7:22-10.0.0.1:54654.service: Deactivated successfully. Feb 13 20:42:45.178821 systemd[1]: session-37.scope: Deactivated successfully. Feb 13 20:42:45.179524 systemd-logind[1420]: Session 37 logged out. Waiting for processes to exit. Feb 13 20:42:45.180305 systemd-logind[1420]: Removed session 37. Feb 13 20:42:48.582275 kubelet[2508]: E0213 20:42:48.582240 2508 kubelet.go:2900] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Feb 13 20:42:50.183497 systemd[1]: Started sshd@37-10.0.0.7:22-10.0.0.1:54660.service - OpenSSH per-connection server daemon (10.0.0.1:54660). Feb 13 20:42:50.217349 sshd[3349]: Accepted publickey for core from 10.0.0.1 port 54660 ssh2: RSA SHA256:ijJ+MGEzuaViQlbMobaPQLogVrdZNUxIS0COzLzMHAQ Feb 13 20:42:50.218573 sshd[3349]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 20:42:50.222081 systemd-logind[1420]: New session 38 of user core. Feb 13 20:42:50.240075 systemd[1]: Started session-38.scope - Session 38 of User core. Feb 13 20:42:50.344715 sshd[3349]: pam_unix(sshd:session): session closed for user core Feb 13 20:42:50.347763 systemd[1]: sshd@37-10.0.0.7:22-10.0.0.1:54660.service: Deactivated successfully. Feb 13 20:42:50.349332 systemd[1]: session-38.scope: Deactivated successfully. Feb 13 20:42:50.350543 systemd-logind[1420]: Session 38 logged out. Waiting for processes to exit. Feb 13 20:42:50.351335 systemd-logind[1420]: Removed session 38. 
Feb 13 20:42:53.583984 kubelet[2508]: E0213 20:42:53.583913 2508 kubelet.go:2900] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Feb 13 20:42:55.355985 systemd[1]: Started sshd@38-10.0.0.7:22-10.0.0.1:50622.service - OpenSSH per-connection server daemon (10.0.0.1:50622). Feb 13 20:42:55.389517 sshd[3366]: Accepted publickey for core from 10.0.0.1 port 50622 ssh2: RSA SHA256:ijJ+MGEzuaViQlbMobaPQLogVrdZNUxIS0COzLzMHAQ Feb 13 20:42:55.390690 sshd[3366]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 20:42:55.393866 systemd-logind[1420]: New session 39 of user core. Feb 13 20:42:55.400090 systemd[1]: Started session-39.scope - Session 39 of User core. Feb 13 20:42:55.505462 sshd[3366]: pam_unix(sshd:session): session closed for user core Feb 13 20:42:55.508712 systemd[1]: sshd@38-10.0.0.7:22-10.0.0.1:50622.service: Deactivated successfully. Feb 13 20:42:55.510332 systemd[1]: session-39.scope: Deactivated successfully. Feb 13 20:42:55.511498 systemd-logind[1420]: Session 39 logged out. Waiting for processes to exit. Feb 13 20:42:55.512714 systemd-logind[1420]: Removed session 39. Feb 13 20:42:57.512621 kubelet[2508]: E0213 20:42:57.512580 2508 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 20:42:57.513704 containerd[1447]: time="2025-02-13T20:42:57.513669417Z" level=info msg="PullImage \"docker.io/flannel/flannel-cni-plugin:v1.1.2\"" Feb 13 20:42:58.584732 kubelet[2508]: E0213 20:42:58.584641 2508 kubelet.go:2900] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Feb 13 20:42:58.618847 containerd[1447]: time="2025-02-13T20:42:58.618753087Z" level=error msg="PullImage \"docker.io/flannel/flannel-cni-plugin:v1.1.2\" failed" error="failed to pull and unpack image \"docker.io/flannel/flannel-cni-plugin:v1.1.2\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/flannel/flannel-cni-plugin/manifests/sha256:bf4b62b131666d040f35a327d906ee5a3418280b68a88d9b9c7e828057210443: 429 Too Many Requests - Server message: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit" Feb 13 20:42:58.618847 containerd[1447]: time="2025-02-13T20:42:58.618778447Z" level=info msg="stop pulling image docker.io/flannel/flannel-cni-plugin:v1.1.2: active requests=0, bytes read=11110" Feb 13 20:42:58.619235 kubelet[2508]: E0213 20:42:58.618951 2508 remote_image.go:180] "PullImage from image service failed" err="rpc error: code = Unknown desc = failed to pull and unpack image \"docker.io/flannel/flannel-cni-plugin:v1.1.2\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/flannel/flannel-cni-plugin/manifests/sha256:bf4b62b131666d040f35a327d906ee5a3418280b68a88d9b9c7e828057210443: 429 Too Many Requests - Server message: toomanyrequests: You have reached your pull rate limit. 
You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit" image="docker.io/flannel/flannel-cni-plugin:v1.1.2" Feb 13 20:42:58.619235 kubelet[2508]: E0213 20:42:58.618990 2508 kuberuntime_image.go:55] "Failed to pull image" err="failed to pull and unpack image \"docker.io/flannel/flannel-cni-plugin:v1.1.2\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/flannel/flannel-cni-plugin/manifests/sha256:bf4b62b131666d040f35a327d906ee5a3418280b68a88d9b9c7e828057210443: 429 Too Many Requests - Server message: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit" image="docker.io/flannel/flannel-cni-plugin:v1.1.2" Feb 13 20:42:58.619335 kubelet[2508]: E0213 20:42:58.619066 2508 kuberuntime_manager.go:1256] init container &Container{Name:install-cni-plugin,Image:docker.io/flannel/flannel-cni-plugin:v1.1.2,Command:[cp],Args:[-f /flannel /opt/cni/bin/flannel],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:cni-plugin,ReadOnly:false,MountPath:/opt/cni/bin,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-64nxr,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod kube-flannel-ds-q7x5h_kube-flannel(fba936b1-e9bf-4b9d-8ced-8a880a98539b): ErrImagePull: failed to pull and unpack image "docker.io/flannel/flannel-cni-plugin:v1.1.2": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/flannel/flannel-cni-plugin/manifests/sha256:bf4b62b131666d040f35a327d906ee5a3418280b68a88d9b9c7e828057210443: 429 Too Many Requests - Server message: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit Feb 13 20:42:58.619406 kubelet[2508]: E0213 20:42:58.619096 2508 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"install-cni-plugin\" with ErrImagePull: \"failed to pull and unpack image \\\"docker.io/flannel/flannel-cni-plugin:v1.1.2\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/flannel/flannel-cni-plugin/manifests/sha256:bf4b62b131666d040f35a327d906ee5a3418280b68a88d9b9c7e828057210443: 429 Too Many Requests - Server message: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit\"" pod="kube-flannel/kube-flannel-ds-q7x5h" podUID="fba936b1-e9bf-4b9d-8ced-8a880a98539b" Feb 13 20:43:00.516343 systemd[1]: Started sshd@39-10.0.0.7:22-10.0.0.1:50628.service - OpenSSH per-connection server daemon (10.0.0.1:50628). 
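The failed pull above is Docker Hub's anonymous rate limit (HTTP 429, toomanyrequests), not a connectivity fault: containerd reaches registry-1.docker.io and is refused the manifest. The kuberuntime_manager dump also shows why the node stays broken: the flannel init container does nothing but cp -f /flannel /opt/cni/bin/flannel, so while the image cannot be pulled, no flannel CNI binary ever lands on the node and the "cni plugin not initialized" errors persist alongside the pull errors. Two common workarounds, sketched with placeholder credentials and mirror host; Option B assumes containerd's certs.d registry config_path is enabled in config.toml:

    # Sketch only -- credentials and mirror host are placeholders.
    # Option A: a one-off authenticated pull, which raises the Docker Hub rate limit.
    crictl pull --creds 'hub-user:hub-token' docker.io/flannel/flannel-cni-plugin:v1.1.2

    # Option B: route docker.io pulls through an internal mirror.
    $ cat /etc/containerd/certs.d/docker.io/hosts.toml
    server = "https://registry-1.docker.io"

    [host."https://mirror.example.internal"]
      capabilities = ["pull", "resolve"]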
Feb 13 20:43:00.550300 sshd[3382]: Accepted publickey for core from 10.0.0.1 port 50628 ssh2: RSA SHA256:ijJ+MGEzuaViQlbMobaPQLogVrdZNUxIS0COzLzMHAQ Feb 13 20:43:00.551464 sshd[3382]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 20:43:00.555440 systemd-logind[1420]: New session 40 of user core. Feb 13 20:43:00.562135 systemd[1]: Started session-40.scope - Session 40 of User core. Feb 13 20:43:00.665671 sshd[3382]: pam_unix(sshd:session): session closed for user core Feb 13 20:43:00.668811 systemd[1]: sshd@39-10.0.0.7:22-10.0.0.1:50628.service: Deactivated successfully. Feb 13 20:43:00.670603 systemd[1]: session-40.scope: Deactivated successfully. Feb 13 20:43:00.671228 systemd-logind[1420]: Session 40 logged out. Waiting for processes to exit. Feb 13 20:43:00.672106 systemd-logind[1420]: Removed session 40. Feb 13 20:43:03.586306 kubelet[2508]: E0213 20:43:03.586259 2508 kubelet.go:2900] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Feb 13 20:43:05.675513 systemd[1]: Started sshd@40-10.0.0.7:22-10.0.0.1:43718.service - OpenSSH per-connection server daemon (10.0.0.1:43718). Feb 13 20:43:05.710174 sshd[3397]: Accepted publickey for core from 10.0.0.1 port 43718 ssh2: RSA SHA256:ijJ+MGEzuaViQlbMobaPQLogVrdZNUxIS0COzLzMHAQ Feb 13 20:43:05.711450 sshd[3397]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 20:43:05.715655 systemd-logind[1420]: New session 41 of user core. Feb 13 20:43:05.725098 systemd[1]: Started session-41.scope - Session 41 of User core. Feb 13 20:43:05.833228 sshd[3397]: pam_unix(sshd:session): session closed for user core Feb 13 20:43:05.848688 systemd[1]: sshd@40-10.0.0.7:22-10.0.0.1:43718.service: Deactivated successfully. Feb 13 20:43:05.850106 systemd[1]: session-41.scope: Deactivated successfully. Feb 13 20:43:05.851358 systemd-logind[1420]: Session 41 logged out. Waiting for processes to exit. Feb 13 20:43:05.865299 systemd[1]: Started sshd@41-10.0.0.7:22-10.0.0.1:43722.service - OpenSSH per-connection server daemon (10.0.0.1:43722). Feb 13 20:43:05.866112 systemd-logind[1420]: Removed session 41. Feb 13 20:43:05.896309 sshd[3413]: Accepted publickey for core from 10.0.0.1 port 43722 ssh2: RSA SHA256:ijJ+MGEzuaViQlbMobaPQLogVrdZNUxIS0COzLzMHAQ Feb 13 20:43:05.897492 sshd[3413]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 20:43:05.901416 systemd-logind[1420]: New session 42 of user core. Feb 13 20:43:05.912089 systemd[1]: Started session-42.scope - Session 42 of User core. Feb 13 20:43:06.052290 sshd[3413]: pam_unix(sshd:session): session closed for user core Feb 13 20:43:06.060914 systemd[1]: sshd@41-10.0.0.7:22-10.0.0.1:43722.service: Deactivated successfully. Feb 13 20:43:06.063146 systemd[1]: session-42.scope: Deactivated successfully. Feb 13 20:43:06.064867 systemd-logind[1420]: Session 42 logged out. Waiting for processes to exit. Feb 13 20:43:06.074241 systemd[1]: Started sshd@42-10.0.0.7:22-10.0.0.1:43730.service - OpenSSH per-connection server daemon (10.0.0.1:43730). Feb 13 20:43:06.076270 systemd-logind[1420]: Removed session 42. 
Feb 13 20:43:06.118371 sshd[3425]: Accepted publickey for core from 10.0.0.1 port 43730 ssh2: RSA SHA256:ijJ+MGEzuaViQlbMobaPQLogVrdZNUxIS0COzLzMHAQ Feb 13 20:43:06.119563 sshd[3425]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 20:43:06.123583 systemd-logind[1420]: New session 43 of user core. Feb 13 20:43:06.130081 systemd[1]: Started session-43.scope - Session 43 of User core. Feb 13 20:43:06.235224 sshd[3425]: pam_unix(sshd:session): session closed for user core Feb 13 20:43:06.238543 systemd[1]: sshd@42-10.0.0.7:22-10.0.0.1:43730.service: Deactivated successfully. Feb 13 20:43:06.241271 systemd[1]: session-43.scope: Deactivated successfully. Feb 13 20:43:06.242305 systemd-logind[1420]: Session 43 logged out. Waiting for processes to exit. Feb 13 20:43:06.243202 systemd-logind[1420]: Removed session 43. Feb 13 20:43:08.587750 kubelet[2508]: E0213 20:43:08.587679 2508 kubelet.go:2900] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Feb 13 20:43:09.512049 kubelet[2508]: E0213 20:43:09.512007 2508 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 20:43:09.512834 kubelet[2508]: E0213 20:43:09.512638 2508 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"install-cni-plugin\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/flannel/flannel-cni-plugin:v1.1.2\\\"\"" pod="kube-flannel/kube-flannel-ds-q7x5h" podUID="fba936b1-e9bf-4b9d-8ced-8a880a98539b" Feb 13 20:43:11.249369 systemd[1]: Started sshd@43-10.0.0.7:22-10.0.0.1:43742.service - OpenSSH per-connection server daemon (10.0.0.1:43742). Feb 13 20:43:11.283704 sshd[3439]: Accepted publickey for core from 10.0.0.1 port 43742 ssh2: RSA SHA256:ijJ+MGEzuaViQlbMobaPQLogVrdZNUxIS0COzLzMHAQ Feb 13 20:43:11.284861 sshd[3439]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 20:43:11.288887 systemd-logind[1420]: New session 44 of user core. Feb 13 20:43:11.298095 systemd[1]: Started session-44.scope - Session 44 of User core. Feb 13 20:43:11.403871 sshd[3439]: pam_unix(sshd:session): session closed for user core Feb 13 20:43:11.406963 systemd[1]: sshd@43-10.0.0.7:22-10.0.0.1:43742.service: Deactivated successfully. Feb 13 20:43:11.408577 systemd[1]: session-44.scope: Deactivated successfully. Feb 13 20:43:11.409866 systemd-logind[1420]: Session 44 logged out. Waiting for processes to exit. Feb 13 20:43:11.410712 systemd-logind[1420]: Removed session 44. Feb 13 20:43:13.589112 kubelet[2508]: E0213 20:43:13.589078 2508 kubelet.go:2900] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Feb 13 20:43:16.414334 systemd[1]: Started sshd@44-10.0.0.7:22-10.0.0.1:50226.service - OpenSSH per-connection server daemon (10.0.0.1:50226). Feb 13 20:43:16.449033 sshd[3453]: Accepted publickey for core from 10.0.0.1 port 50226 ssh2: RSA SHA256:ijJ+MGEzuaViQlbMobaPQLogVrdZNUxIS0COzLzMHAQ Feb 13 20:43:16.450356 sshd[3453]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 20:43:16.453714 systemd-logind[1420]: New session 45 of user core. Feb 13 20:43:16.466058 systemd[1]: Started session-45.scope - Session 45 of User core. 
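The steady churn of numbered sessions above (one sshd@N-10.0.0.7:22-10.0.0.1:PORT.service unit per connection, deactivated seconds later) reflects systemd socket activation rather than a single long-running sshd: an Accept=yes socket spawns one templated service instance per incoming TCP connection, and the instance name encodes the local and remote endpoints. An illustrative sketch of the pattern, not Flatcar's shipped unit files:

    # Illustrative only -- pairs with an sshd@.service template that runs sshd -i.
    $ cat /etc/systemd/system/sshd.socket
    [Socket]
    ListenStream=22
    Accept=yes

    [Install]
    WantedBy=sockets.target

With this layout each closed connection leaves exactly the trail seen here: the session-N.scope and the per-connection sshd@... unit both deactivate, and systemd-logind removes the session.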
Feb 13 20:43:16.573095 sshd[3453]: pam_unix(sshd:session): session closed for user core Feb 13 20:43:16.576414 systemd[1]: sshd@44-10.0.0.7:22-10.0.0.1:50226.service: Deactivated successfully. Feb 13 20:43:16.578649 systemd[1]: session-45.scope: Deactivated successfully. Feb 13 20:43:16.579667 systemd-logind[1420]: Session 45 logged out. Waiting for processes to exit. Feb 13 20:43:16.580923 systemd-logind[1420]: Removed session 45. Feb 13 20:43:18.590027 kubelet[2508]: E0213 20:43:18.589988 2508 kubelet.go:2900] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Feb 13 20:43:21.583692 systemd[1]: Started sshd@45-10.0.0.7:22-10.0.0.1:50236.service - OpenSSH per-connection server daemon (10.0.0.1:50236). Feb 13 20:43:21.617651 sshd[3467]: Accepted publickey for core from 10.0.0.1 port 50236 ssh2: RSA SHA256:ijJ+MGEzuaViQlbMobaPQLogVrdZNUxIS0COzLzMHAQ Feb 13 20:43:21.618807 sshd[3467]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 20:43:21.622483 systemd-logind[1420]: New session 46 of user core. Feb 13 20:43:21.638088 systemd[1]: Started session-46.scope - Session 46 of User core. Feb 13 20:43:21.743430 sshd[3467]: pam_unix(sshd:session): session closed for user core Feb 13 20:43:21.746534 systemd[1]: sshd@45-10.0.0.7:22-10.0.0.1:50236.service: Deactivated successfully. Feb 13 20:43:21.749386 systemd[1]: session-46.scope: Deactivated successfully. Feb 13 20:43:21.750033 systemd-logind[1420]: Session 46 logged out. Waiting for processes to exit. Feb 13 20:43:21.750770 systemd-logind[1420]: Removed session 46. Feb 13 20:43:22.512230 kubelet[2508]: E0213 20:43:22.512180 2508 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 20:43:22.513302 kubelet[2508]: E0213 20:43:22.512969 2508 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"install-cni-plugin\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/flannel/flannel-cni-plugin:v1.1.2\\\"\"" pod="kube-flannel/kube-flannel-ds-q7x5h" podUID="fba936b1-e9bf-4b9d-8ced-8a880a98539b" Feb 13 20:43:23.591278 kubelet[2508]: E0213 20:43:23.591231 2508 kubelet.go:2900] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Feb 13 20:43:26.753415 systemd[1]: Started sshd@46-10.0.0.7:22-10.0.0.1:55124.service - OpenSSH per-connection server daemon (10.0.0.1:55124). Feb 13 20:43:26.788483 sshd[3484]: Accepted publickey for core from 10.0.0.1 port 55124 ssh2: RSA SHA256:ijJ+MGEzuaViQlbMobaPQLogVrdZNUxIS0COzLzMHAQ Feb 13 20:43:26.789654 sshd[3484]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 20:43:26.792918 systemd-logind[1420]: New session 47 of user core. Feb 13 20:43:26.799070 systemd[1]: Started session-47.scope - Session 47 of User core. Feb 13 20:43:26.904725 sshd[3484]: pam_unix(sshd:session): session closed for user core Feb 13 20:43:26.907851 systemd[1]: sshd@46-10.0.0.7:22-10.0.0.1:55124.service: Deactivated successfully. Feb 13 20:43:26.910611 systemd[1]: session-47.scope: Deactivated successfully. Feb 13 20:43:26.911298 systemd-logind[1420]: Session 47 logged out. Waiting for processes to exit. Feb 13 20:43:26.912167 systemd-logind[1420]: Removed session 47. 
Feb 13 20:43:28.592389 kubelet[2508]: E0213 20:43:28.592321 2508 kubelet.go:2900] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Feb 13 20:43:31.916327 systemd[1]: Started sshd@47-10.0.0.7:22-10.0.0.1:55128.service - OpenSSH per-connection server daemon (10.0.0.1:55128). Feb 13 20:43:31.950205 sshd[3498]: Accepted publickey for core from 10.0.0.1 port 55128 ssh2: RSA SHA256:ijJ+MGEzuaViQlbMobaPQLogVrdZNUxIS0COzLzMHAQ Feb 13 20:43:31.951346 sshd[3498]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 20:43:31.954547 systemd-logind[1420]: New session 48 of user core. Feb 13 20:43:31.965138 systemd[1]: Started session-48.scope - Session 48 of User core. Feb 13 20:43:32.070279 sshd[3498]: pam_unix(sshd:session): session closed for user core Feb 13 20:43:32.073312 systemd[1]: sshd@47-10.0.0.7:22-10.0.0.1:55128.service: Deactivated successfully. Feb 13 20:43:32.075684 systemd[1]: session-48.scope: Deactivated successfully. Feb 13 20:43:32.076251 systemd-logind[1420]: Session 48 logged out. Waiting for processes to exit. Feb 13 20:43:32.077240 systemd-logind[1420]: Removed session 48. Feb 13 20:43:33.512866 kubelet[2508]: E0213 20:43:33.512821 2508 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 20:43:33.513630 kubelet[2508]: E0213 20:43:33.513593 2508 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"install-cni-plugin\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/flannel/flannel-cni-plugin:v1.1.2\\\"\"" pod="kube-flannel/kube-flannel-ds-q7x5h" podUID="fba936b1-e9bf-4b9d-8ced-8a880a98539b" Feb 13 20:43:33.593546 kubelet[2508]: E0213 20:43:33.593505 2508 kubelet.go:2900] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Feb 13 20:43:37.081397 systemd[1]: Started sshd@48-10.0.0.7:22-10.0.0.1:48544.service - OpenSSH per-connection server daemon (10.0.0.1:48544). Feb 13 20:43:37.115132 sshd[3513]: Accepted publickey for core from 10.0.0.1 port 48544 ssh2: RSA SHA256:ijJ+MGEzuaViQlbMobaPQLogVrdZNUxIS0COzLzMHAQ Feb 13 20:43:37.116275 sshd[3513]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 20:43:37.120037 systemd-logind[1420]: New session 49 of user core. Feb 13 20:43:37.131105 systemd[1]: Started session-49.scope - Session 49 of User core. Feb 13 20:43:37.237191 sshd[3513]: pam_unix(sshd:session): session closed for user core Feb 13 20:43:37.240246 systemd[1]: sshd@48-10.0.0.7:22-10.0.0.1:48544.service: Deactivated successfully. Feb 13 20:43:37.242514 systemd[1]: session-49.scope: Deactivated successfully. Feb 13 20:43:37.243622 systemd-logind[1420]: Session 49 logged out. Waiting for processes to exit. Feb 13 20:43:37.244478 systemd-logind[1420]: Removed session 49. 
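The kubelet.go:2900 "Container runtime network not ready" line repeats every ~5 s throughout this journal and will keep doing so until both a CNI binary and a network config exist on the node. A diagnostic sketch for the node itself; the paths are flannel's conventional locations, and the conflist name is typical of flannel's manifest rather than confirmed by this log:

    # Diagnostic sketch -- run on the affected node.
    ls -l /opt/cni/bin/flannel            # delivered by the init container's cp -f (still absent here)
    ls -l /etc/cni/net.d/                 # flannel typically writes 10-flannel.conflist once running
    crictl pods --namespace kube-flannel  # is the DaemonSet pod stuck before its init containers?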
Feb 13 20:43:38.512273 kubelet[2508]: E0213 20:43:38.512244 2508 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 20:43:38.594842 kubelet[2508]: E0213 20:43:38.594807 2508 kubelet.go:2900] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Feb 13 20:43:42.247494 systemd[1]: Started sshd@49-10.0.0.7:22-10.0.0.1:48560.service - OpenSSH per-connection server daemon (10.0.0.1:48560). Feb 13 20:43:42.281993 sshd[3529]: Accepted publickey for core from 10.0.0.1 port 48560 ssh2: RSA SHA256:ijJ+MGEzuaViQlbMobaPQLogVrdZNUxIS0COzLzMHAQ Feb 13 20:43:42.283166 sshd[3529]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 20:43:42.286803 systemd-logind[1420]: New session 50 of user core. Feb 13 20:43:42.304069 systemd[1]: Started session-50.scope - Session 50 of User core. Feb 13 20:43:42.409316 sshd[3529]: pam_unix(sshd:session): session closed for user core Feb 13 20:43:42.412424 systemd[1]: sshd@49-10.0.0.7:22-10.0.0.1:48560.service: Deactivated successfully. Feb 13 20:43:42.413996 systemd[1]: session-50.scope: Deactivated successfully. Feb 13 20:43:42.414624 systemd-logind[1420]: Session 50 logged out. Waiting for processes to exit. Feb 13 20:43:42.415887 systemd-logind[1420]: Removed session 50. Feb 13 20:43:43.595912 kubelet[2508]: E0213 20:43:43.595867 2508 kubelet.go:2900] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Feb 13 20:43:44.512656 kubelet[2508]: E0213 20:43:44.512579 2508 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 20:43:46.513506 kubelet[2508]: E0213 20:43:46.513473 2508 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 20:43:47.419277 systemd[1]: Started sshd@50-10.0.0.7:22-10.0.0.1:33942.service - OpenSSH per-connection server daemon (10.0.0.1:33942). Feb 13 20:43:47.453302 sshd[3543]: Accepted publickey for core from 10.0.0.1 port 33942 ssh2: RSA SHA256:ijJ+MGEzuaViQlbMobaPQLogVrdZNUxIS0COzLzMHAQ Feb 13 20:43:47.454500 sshd[3543]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 20:43:47.457878 systemd-logind[1420]: New session 51 of user core. Feb 13 20:43:47.470212 systemd[1]: Started session-51.scope - Session 51 of User core. 
Feb 13 20:43:47.512590 kubelet[2508]: E0213 20:43:47.512549 2508 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 20:43:47.513112 kubelet[2508]: E0213 20:43:47.513084 2508 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 20:43:47.514101 kubelet[2508]: E0213 20:43:47.514071 2508 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"install-cni-plugin\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/flannel/flannel-cni-plugin:v1.1.2\\\"\"" pod="kube-flannel/kube-flannel-ds-q7x5h" podUID="fba936b1-e9bf-4b9d-8ced-8a880a98539b" Feb 13 20:43:47.575643 sshd[3543]: pam_unix(sshd:session): session closed for user core Feb 13 20:43:47.579437 systemd[1]: sshd@50-10.0.0.7:22-10.0.0.1:33942.service: Deactivated successfully. Feb 13 20:43:47.581059 systemd[1]: session-51.scope: Deactivated successfully. Feb 13 20:43:47.581668 systemd-logind[1420]: Session 51 logged out. Waiting for processes to exit. Feb 13 20:43:47.582583 systemd-logind[1420]: Removed session 51. Feb 13 20:43:48.597061 kubelet[2508]: E0213 20:43:48.597026 2508 kubelet.go:2900] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Feb 13 20:43:52.586314 systemd[1]: Started sshd@51-10.0.0.7:22-10.0.0.1:42056.service - OpenSSH per-connection server daemon (10.0.0.1:42056). Feb 13 20:43:52.620067 sshd[3557]: Accepted publickey for core from 10.0.0.1 port 42056 ssh2: RSA SHA256:ijJ+MGEzuaViQlbMobaPQLogVrdZNUxIS0COzLzMHAQ Feb 13 20:43:52.621321 sshd[3557]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 20:43:52.624978 systemd-logind[1420]: New session 52 of user core. Feb 13 20:43:52.632075 systemd[1]: Started session-52.scope - Session 52 of User core. Feb 13 20:43:52.736572 sshd[3557]: pam_unix(sshd:session): session closed for user core Feb 13 20:43:52.739557 systemd[1]: sshd@51-10.0.0.7:22-10.0.0.1:42056.service: Deactivated successfully. Feb 13 20:43:52.741061 systemd[1]: session-52.scope: Deactivated successfully. Feb 13 20:43:52.741562 systemd-logind[1420]: Session 52 logged out. Waiting for processes to exit. Feb 13 20:43:52.742294 systemd-logind[1420]: Removed session 52. Feb 13 20:43:53.598173 kubelet[2508]: E0213 20:43:53.598130 2508 kubelet.go:2900] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Feb 13 20:43:57.749122 systemd[1]: Started sshd@52-10.0.0.7:22-10.0.0.1:42064.service - OpenSSH per-connection server daemon (10.0.0.1:42064). Feb 13 20:43:57.782951 sshd[3573]: Accepted publickey for core from 10.0.0.1 port 42064 ssh2: RSA SHA256:ijJ+MGEzuaViQlbMobaPQLogVrdZNUxIS0COzLzMHAQ Feb 13 20:43:57.784077 sshd[3573]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 20:43:57.787614 systemd-logind[1420]: New session 53 of user core. Feb 13 20:43:57.797078 systemd[1]: Started session-53.scope - Session 53 of User core. Feb 13 20:43:57.903985 sshd[3573]: pam_unix(sshd:session): session closed for user core Feb 13 20:43:57.906966 systemd[1]: sshd@52-10.0.0.7:22-10.0.0.1:42064.service: Deactivated successfully. 
Feb 13 20:43:57.909389 systemd[1]: session-53.scope: Deactivated successfully. Feb 13 20:43:57.910049 systemd-logind[1420]: Session 53 logged out. Waiting for processes to exit. Feb 13 20:43:57.911108 systemd-logind[1420]: Removed session 53. Feb 13 20:43:58.599339 kubelet[2508]: E0213 20:43:58.599280 2508 kubelet.go:2900] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Feb 13 20:43:59.512328 kubelet[2508]: E0213 20:43:59.512283 2508 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 20:43:59.513084 kubelet[2508]: E0213 20:43:59.513060 2508 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"install-cni-plugin\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/flannel/flannel-cni-plugin:v1.1.2\\\"\"" pod="kube-flannel/kube-flannel-ds-q7x5h" podUID="fba936b1-e9bf-4b9d-8ced-8a880a98539b" Feb 13 20:44:02.918230 systemd[1]: Started sshd@53-10.0.0.7:22-10.0.0.1:54082.service - OpenSSH per-connection server daemon (10.0.0.1:54082). Feb 13 20:44:02.951705 sshd[3588]: Accepted publickey for core from 10.0.0.1 port 54082 ssh2: RSA SHA256:ijJ+MGEzuaViQlbMobaPQLogVrdZNUxIS0COzLzMHAQ Feb 13 20:44:02.952877 sshd[3588]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 20:44:02.956115 systemd-logind[1420]: New session 54 of user core. Feb 13 20:44:02.965114 systemd[1]: Started session-54.scope - Session 54 of User core. Feb 13 20:44:03.070681 sshd[3588]: pam_unix(sshd:session): session closed for user core Feb 13 20:44:03.073638 systemd[1]: sshd@53-10.0.0.7:22-10.0.0.1:54082.service: Deactivated successfully. Feb 13 20:44:03.075287 systemd[1]: session-54.scope: Deactivated successfully. Feb 13 20:44:03.076402 systemd-logind[1420]: Session 54 logged out. Waiting for processes to exit. Feb 13 20:44:03.077182 systemd-logind[1420]: Removed session 54. Feb 13 20:44:03.600882 kubelet[2508]: E0213 20:44:03.600841 2508 kubelet.go:2900] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Feb 13 20:44:08.081384 systemd[1]: Started sshd@54-10.0.0.7:22-10.0.0.1:54090.service - OpenSSH per-connection server daemon (10.0.0.1:54090). Feb 13 20:44:08.115533 sshd[3602]: Accepted publickey for core from 10.0.0.1 port 54090 ssh2: RSA SHA256:ijJ+MGEzuaViQlbMobaPQLogVrdZNUxIS0COzLzMHAQ Feb 13 20:44:08.116663 sshd[3602]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 20:44:08.120278 systemd-logind[1420]: New session 55 of user core. Feb 13 20:44:08.127077 systemd[1]: Started session-55.scope - Session 55 of User core. Feb 13 20:44:08.230678 sshd[3602]: pam_unix(sshd:session): session closed for user core Feb 13 20:44:08.233745 systemd[1]: sshd@54-10.0.0.7:22-10.0.0.1:54090.service: Deactivated successfully. Feb 13 20:44:08.235224 systemd[1]: session-55.scope: Deactivated successfully. Feb 13 20:44:08.236648 systemd-logind[1420]: Session 55 logged out. Waiting for processes to exit. Feb 13 20:44:08.237847 systemd-logind[1420]: Removed session 55. 
Feb 13 20:44:08.601626 kubelet[2508]: E0213 20:44:08.601588 2508 kubelet.go:2900] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Feb 13 20:44:12.513313 kubelet[2508]: E0213 20:44:12.513281 2508 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 20:44:12.514318 kubelet[2508]: E0213 20:44:12.514212 2508 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"install-cni-plugin\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/flannel/flannel-cni-plugin:v1.1.2\\\"\"" pod="kube-flannel/kube-flannel-ds-q7x5h" podUID="fba936b1-e9bf-4b9d-8ced-8a880a98539b" Feb 13 20:44:13.241542 systemd[1]: Started sshd@55-10.0.0.7:22-10.0.0.1:54542.service - OpenSSH per-connection server daemon (10.0.0.1:54542). Feb 13 20:44:13.275459 sshd[3616]: Accepted publickey for core from 10.0.0.1 port 54542 ssh2: RSA SHA256:ijJ+MGEzuaViQlbMobaPQLogVrdZNUxIS0COzLzMHAQ Feb 13 20:44:13.276711 sshd[3616]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 20:44:13.280257 systemd-logind[1420]: New session 56 of user core. Feb 13 20:44:13.295066 systemd[1]: Started session-56.scope - Session 56 of User core. Feb 13 20:44:13.401626 sshd[3616]: pam_unix(sshd:session): session closed for user core Feb 13 20:44:13.404565 systemd[1]: sshd@55-10.0.0.7:22-10.0.0.1:54542.service: Deactivated successfully. Feb 13 20:44:13.406057 systemd[1]: session-56.scope: Deactivated successfully. Feb 13 20:44:13.408084 systemd-logind[1420]: Session 56 logged out. Waiting for processes to exit. Feb 13 20:44:13.408803 systemd-logind[1420]: Removed session 56. Feb 13 20:44:13.602411 kubelet[2508]: E0213 20:44:13.602283 2508 kubelet.go:2900] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Feb 13 20:44:18.411377 systemd[1]: Started sshd@56-10.0.0.7:22-10.0.0.1:54546.service - OpenSSH per-connection server daemon (10.0.0.1:54546). Feb 13 20:44:18.445242 sshd[3631]: Accepted publickey for core from 10.0.0.1 port 54546 ssh2: RSA SHA256:ijJ+MGEzuaViQlbMobaPQLogVrdZNUxIS0COzLzMHAQ Feb 13 20:44:18.446359 sshd[3631]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 20:44:18.449728 systemd-logind[1420]: New session 57 of user core. Feb 13 20:44:18.456116 systemd[1]: Started session-57.scope - Session 57 of User core. Feb 13 20:44:18.563486 sshd[3631]: pam_unix(sshd:session): session closed for user core Feb 13 20:44:18.566001 systemd[1]: sshd@56-10.0.0.7:22-10.0.0.1:54546.service: Deactivated successfully. Feb 13 20:44:18.567463 systemd[1]: session-57.scope: Deactivated successfully. Feb 13 20:44:18.568606 systemd-logind[1420]: Session 57 logged out. Waiting for processes to exit. Feb 13 20:44:18.569375 systemd-logind[1420]: Removed session 57. Feb 13 20:44:18.603079 kubelet[2508]: E0213 20:44:18.603042 2508 kubelet.go:2900] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Feb 13 20:44:23.574281 systemd[1]: Started sshd@57-10.0.0.7:22-10.0.0.1:49280.service - OpenSSH per-connection server daemon (10.0.0.1:49280). 
Feb 13 20:44:23.604470 kubelet[2508]: E0213 20:44:23.604424 2508 kubelet.go:2900] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Feb 13 20:44:23.608507 sshd[3645]: Accepted publickey for core from 10.0.0.1 port 49280 ssh2: RSA SHA256:ijJ+MGEzuaViQlbMobaPQLogVrdZNUxIS0COzLzMHAQ Feb 13 20:44:23.609641 sshd[3645]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 20:44:23.612830 systemd-logind[1420]: New session 58 of user core. Feb 13 20:44:23.622052 systemd[1]: Started session-58.scope - Session 58 of User core. Feb 13 20:44:23.727468 sshd[3645]: pam_unix(sshd:session): session closed for user core Feb 13 20:44:23.729857 systemd[1]: session-58.scope: Deactivated successfully. Feb 13 20:44:23.731099 systemd[1]: sshd@57-10.0.0.7:22-10.0.0.1:49280.service: Deactivated successfully. Feb 13 20:44:23.732852 systemd-logind[1420]: Session 58 logged out. Waiting for processes to exit. Feb 13 20:44:23.733517 systemd-logind[1420]: Removed session 58. Feb 13 20:44:24.512760 kubelet[2508]: E0213 20:44:24.512690 2508 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 20:44:24.513405 kubelet[2508]: E0213 20:44:24.513378 2508 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"install-cni-plugin\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/flannel/flannel-cni-plugin:v1.1.2\\\"\"" pod="kube-flannel/kube-flannel-ds-q7x5h" podUID="fba936b1-e9bf-4b9d-8ced-8a880a98539b" Feb 13 20:44:28.605877 kubelet[2508]: E0213 20:44:28.605831 2508 kubelet.go:2900] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Feb 13 20:44:28.741302 systemd[1]: Started sshd@58-10.0.0.7:22-10.0.0.1:49292.service - OpenSSH per-connection server daemon (10.0.0.1:49292). Feb 13 20:44:28.775324 sshd[3661]: Accepted publickey for core from 10.0.0.1 port 49292 ssh2: RSA SHA256:ijJ+MGEzuaViQlbMobaPQLogVrdZNUxIS0COzLzMHAQ Feb 13 20:44:28.776510 sshd[3661]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 20:44:28.780008 systemd-logind[1420]: New session 59 of user core. Feb 13 20:44:28.790070 systemd[1]: Started session-59.scope - Session 59 of User core. Feb 13 20:44:28.894630 sshd[3661]: pam_unix(sshd:session): session closed for user core Feb 13 20:44:28.897918 systemd[1]: sshd@58-10.0.0.7:22-10.0.0.1:49292.service: Deactivated successfully. Feb 13 20:44:28.900497 systemd[1]: session-59.scope: Deactivated successfully. Feb 13 20:44:28.901376 systemd-logind[1420]: Session 59 logged out. Waiting for processes to exit. Feb 13 20:44:28.902156 systemd-logind[1420]: Removed session 59. Feb 13 20:44:33.607510 kubelet[2508]: E0213 20:44:33.607460 2508 kubelet.go:2900] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Feb 13 20:44:33.905358 systemd[1]: Started sshd@59-10.0.0.7:22-10.0.0.1:57272.service - OpenSSH per-connection server daemon (10.0.0.1:57272). 
Feb 13 20:44:33.939753 sshd[3675]: Accepted publickey for core from 10.0.0.1 port 57272 ssh2: RSA SHA256:ijJ+MGEzuaViQlbMobaPQLogVrdZNUxIS0COzLzMHAQ Feb 13 20:44:33.940957 sshd[3675]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 20:44:33.944176 systemd-logind[1420]: New session 60 of user core. Feb 13 20:44:33.964083 systemd[1]: Started session-60.scope - Session 60 of User core. Feb 13 20:44:34.071800 sshd[3675]: pam_unix(sshd:session): session closed for user core Feb 13 20:44:34.075193 systemd[1]: sshd@59-10.0.0.7:22-10.0.0.1:57272.service: Deactivated successfully. Feb 13 20:44:34.077839 systemd[1]: session-60.scope: Deactivated successfully. Feb 13 20:44:34.078643 systemd-logind[1420]: Session 60 logged out. Waiting for processes to exit. Feb 13 20:44:34.079461 systemd-logind[1420]: Removed session 60. Feb 13 20:44:37.512454 kubelet[2508]: E0213 20:44:37.512414 2508 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 20:44:37.513263 kubelet[2508]: E0213 20:44:37.513005 2508 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"install-cni-plugin\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/flannel/flannel-cni-plugin:v1.1.2\\\"\"" pod="kube-flannel/kube-flannel-ds-q7x5h" podUID="fba936b1-e9bf-4b9d-8ced-8a880a98539b" Feb 13 20:44:38.608279 kubelet[2508]: E0213 20:44:38.608214 2508 kubelet.go:2900] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Feb 13 20:44:39.084483 systemd[1]: Started sshd@60-10.0.0.7:22-10.0.0.1:57282.service - OpenSSH per-connection server daemon (10.0.0.1:57282). Feb 13 20:44:39.118569 sshd[3692]: Accepted publickey for core from 10.0.0.1 port 57282 ssh2: RSA SHA256:ijJ+MGEzuaViQlbMobaPQLogVrdZNUxIS0COzLzMHAQ Feb 13 20:44:39.119732 sshd[3692]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 20:44:39.123361 systemd-logind[1420]: New session 61 of user core. Feb 13 20:44:39.131048 systemd[1]: Started session-61.scope - Session 61 of User core. Feb 13 20:44:39.236335 sshd[3692]: pam_unix(sshd:session): session closed for user core Feb 13 20:44:39.240123 systemd[1]: sshd@60-10.0.0.7:22-10.0.0.1:57282.service: Deactivated successfully. Feb 13 20:44:39.241768 systemd[1]: session-61.scope: Deactivated successfully. Feb 13 20:44:39.242361 systemd-logind[1420]: Session 61 logged out. Waiting for processes to exit. Feb 13 20:44:39.243076 systemd-logind[1420]: Removed session 61. Feb 13 20:44:43.609213 kubelet[2508]: E0213 20:44:43.609168 2508 kubelet.go:2900] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Feb 13 20:44:44.246291 systemd[1]: Started sshd@61-10.0.0.7:22-10.0.0.1:40804.service - OpenSSH per-connection server daemon (10.0.0.1:40804). Feb 13 20:44:44.280554 sshd[3707]: Accepted publickey for core from 10.0.0.1 port 40804 ssh2: RSA SHA256:ijJ+MGEzuaViQlbMobaPQLogVrdZNUxIS0COzLzMHAQ Feb 13 20:44:44.281737 sshd[3707]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 20:44:44.285170 systemd-logind[1420]: New session 62 of user core. Feb 13 20:44:44.299068 systemd[1]: Started session-62.scope - Session 62 of User core. 
Feb 13 20:44:44.403341 sshd[3707]: pam_unix(sshd:session): session closed for user core Feb 13 20:44:44.406679 systemd[1]: sshd@61-10.0.0.7:22-10.0.0.1:40804.service: Deactivated successfully. Feb 13 20:44:44.408215 systemd[1]: session-62.scope: Deactivated successfully. Feb 13 20:44:44.409530 systemd-logind[1420]: Session 62 logged out. Waiting for processes to exit. Feb 13 20:44:44.410592 systemd-logind[1420]: Removed session 62. Feb 13 20:44:48.513223 kubelet[2508]: E0213 20:44:48.513121 2508 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 20:44:48.609740 kubelet[2508]: E0213 20:44:48.609694 2508 kubelet.go:2900] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Feb 13 20:44:49.413363 systemd[1]: Started sshd@62-10.0.0.7:22-10.0.0.1:40818.service - OpenSSH per-connection server daemon (10.0.0.1:40818). Feb 13 20:44:49.447356 sshd[3724]: Accepted publickey for core from 10.0.0.1 port 40818 ssh2: RSA SHA256:ijJ+MGEzuaViQlbMobaPQLogVrdZNUxIS0COzLzMHAQ Feb 13 20:44:49.448476 sshd[3724]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 20:44:49.452177 systemd-logind[1420]: New session 63 of user core. Feb 13 20:44:49.463071 systemd[1]: Started session-63.scope - Session 63 of User core. Feb 13 20:44:49.512599 kubelet[2508]: E0213 20:44:49.512565 2508 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 20:44:49.567179 sshd[3724]: pam_unix(sshd:session): session closed for user core Feb 13 20:44:49.570252 systemd[1]: sshd@62-10.0.0.7:22-10.0.0.1:40818.service: Deactivated successfully. Feb 13 20:44:49.571870 systemd[1]: session-63.scope: Deactivated successfully. Feb 13 20:44:49.572444 systemd-logind[1420]: Session 63 logged out. Waiting for processes to exit. Feb 13 20:44:49.573418 systemd-logind[1420]: Removed session 63. Feb 13 20:44:51.512154 kubelet[2508]: E0213 20:44:51.512116 2508 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 20:44:51.513119 kubelet[2508]: E0213 20:44:51.512898 2508 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"install-cni-plugin\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/flannel/flannel-cni-plugin:v1.1.2\\\"\"" pod="kube-flannel/kube-flannel-ds-q7x5h" podUID="fba936b1-e9bf-4b9d-8ced-8a880a98539b" Feb 13 20:44:53.610452 kubelet[2508]: E0213 20:44:53.610416 2508 kubelet.go:2900] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Feb 13 20:44:54.577258 systemd[1]: Started sshd@63-10.0.0.7:22-10.0.0.1:57046.service - OpenSSH per-connection server daemon (10.0.0.1:57046). Feb 13 20:44:54.611588 sshd[3740]: Accepted publickey for core from 10.0.0.1 port 57046 ssh2: RSA SHA256:ijJ+MGEzuaViQlbMobaPQLogVrdZNUxIS0COzLzMHAQ Feb 13 20:44:54.612824 sshd[3740]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 20:44:54.616125 systemd-logind[1420]: New session 64 of user core. 
Feb 13 20:44:54.627057 systemd[1]: Started session-64.scope - Session 64 of User core. Feb 13 20:44:54.730614 sshd[3740]: pam_unix(sshd:session): session closed for user core Feb 13 20:44:54.734384 systemd[1]: sshd@63-10.0.0.7:22-10.0.0.1:57046.service: Deactivated successfully. Feb 13 20:44:54.736608 systemd[1]: session-64.scope: Deactivated successfully. Feb 13 20:44:54.737233 systemd-logind[1420]: Session 64 logged out. Waiting for processes to exit. Feb 13 20:44:54.738173 systemd-logind[1420]: Removed session 64. Feb 13 20:44:56.512648 kubelet[2508]: E0213 20:44:56.512388 2508 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 20:44:58.611684 kubelet[2508]: E0213 20:44:58.611623 2508 kubelet.go:2900] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Feb 13 20:44:59.512403 kubelet[2508]: E0213 20:44:59.512357 2508 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 20:44:59.740434 systemd[1]: Started sshd@64-10.0.0.7:22-10.0.0.1:57050.service - OpenSSH per-connection server daemon (10.0.0.1:57050). Feb 13 20:44:59.774473 sshd[3756]: Accepted publickey for core from 10.0.0.1 port 57050 ssh2: RSA SHA256:ijJ+MGEzuaViQlbMobaPQLogVrdZNUxIS0COzLzMHAQ Feb 13 20:44:59.775396 sshd[3756]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 20:44:59.778698 systemd-logind[1420]: New session 65 of user core. Feb 13 20:44:59.791070 systemd[1]: Started session-65.scope - Session 65 of User core. Feb 13 20:44:59.897021 sshd[3756]: pam_unix(sshd:session): session closed for user core Feb 13 20:44:59.900122 systemd[1]: sshd@64-10.0.0.7:22-10.0.0.1:57050.service: Deactivated successfully. Feb 13 20:44:59.902503 systemd[1]: session-65.scope: Deactivated successfully. Feb 13 20:44:59.903189 systemd-logind[1420]: Session 65 logged out. Waiting for processes to exit. Feb 13 20:44:59.903892 systemd-logind[1420]: Removed session 65. Feb 13 20:45:03.512686 kubelet[2508]: E0213 20:45:03.512549 2508 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 20:45:03.513267 kubelet[2508]: E0213 20:45:03.513234 2508 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"install-cni-plugin\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/flannel/flannel-cni-plugin:v1.1.2\\\"\"" pod="kube-flannel/kube-flannel-ds-q7x5h" podUID="fba936b1-e9bf-4b9d-8ced-8a880a98539b" Feb 13 20:45:03.613293 kubelet[2508]: E0213 20:45:03.613216 2508 kubelet.go:2900] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Feb 13 20:45:04.907414 systemd[1]: Started sshd@65-10.0.0.7:22-10.0.0.1:33636.service - OpenSSH per-connection server daemon (10.0.0.1:33636). 
Feb 13 20:45:04.941343 sshd[3771]: Accepted publickey for core from 10.0.0.1 port 33636 ssh2: RSA SHA256:ijJ+MGEzuaViQlbMobaPQLogVrdZNUxIS0COzLzMHAQ Feb 13 20:45:04.942513 sshd[3771]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 20:45:04.946162 systemd-logind[1420]: New session 66 of user core. Feb 13 20:45:04.953087 systemd[1]: Started session-66.scope - Session 66 of User core. Feb 13 20:45:05.057725 sshd[3771]: pam_unix(sshd:session): session closed for user core Feb 13 20:45:05.060186 systemd[1]: sshd@65-10.0.0.7:22-10.0.0.1:33636.service: Deactivated successfully. Feb 13 20:45:05.061713 systemd[1]: session-66.scope: Deactivated successfully. Feb 13 20:45:05.062927 systemd-logind[1420]: Session 66 logged out. Waiting for processes to exit. Feb 13 20:45:05.064575 systemd-logind[1420]: Removed session 66. Feb 13 20:45:08.613890 kubelet[2508]: E0213 20:45:08.613854 2508 kubelet.go:2900] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Feb 13 20:45:10.068273 systemd[1]: Started sshd@66-10.0.0.7:22-10.0.0.1:33650.service - OpenSSH per-connection server daemon (10.0.0.1:33650). Feb 13 20:45:10.102747 sshd[3785]: Accepted publickey for core from 10.0.0.1 port 33650 ssh2: RSA SHA256:ijJ+MGEzuaViQlbMobaPQLogVrdZNUxIS0COzLzMHAQ Feb 13 20:45:10.103956 sshd[3785]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 20:45:10.107227 systemd-logind[1420]: New session 67 of user core. Feb 13 20:45:10.117069 systemd[1]: Started session-67.scope - Session 67 of User core. Feb 13 20:45:10.223738 sshd[3785]: pam_unix(sshd:session): session closed for user core Feb 13 20:45:10.226852 systemd[1]: sshd@66-10.0.0.7:22-10.0.0.1:33650.service: Deactivated successfully. Feb 13 20:45:10.229454 systemd[1]: session-67.scope: Deactivated successfully. Feb 13 20:45:10.230267 systemd-logind[1420]: Session 67 logged out. Waiting for processes to exit. Feb 13 20:45:10.231086 systemd-logind[1420]: Removed session 67. Feb 13 20:45:13.614783 kubelet[2508]: E0213 20:45:13.614736 2508 kubelet.go:2900] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Feb 13 20:45:14.512212 kubelet[2508]: E0213 20:45:14.512161 2508 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 20:45:14.512738 kubelet[2508]: E0213 20:45:14.512697 2508 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"install-cni-plugin\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/flannel/flannel-cni-plugin:v1.1.2\\\"\"" pod="kube-flannel/kube-flannel-ds-q7x5h" podUID="fba936b1-e9bf-4b9d-8ced-8a880a98539b" Feb 13 20:45:15.235243 systemd[1]: Started sshd@67-10.0.0.7:22-10.0.0.1:55264.service - OpenSSH per-connection server daemon (10.0.0.1:55264). Feb 13 20:45:15.269656 sshd[3799]: Accepted publickey for core from 10.0.0.1 port 55264 ssh2: RSA SHA256:ijJ+MGEzuaViQlbMobaPQLogVrdZNUxIS0COzLzMHAQ Feb 13 20:45:15.270825 sshd[3799]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 20:45:15.274085 systemd-logind[1420]: New session 68 of user core. Feb 13 20:45:15.284099 systemd[1]: Started session-68.scope - Session 68 of User core. 
Feb 13 20:45:15.389276 sshd[3799]: pam_unix(sshd:session): session closed for user core Feb 13 20:45:15.392555 systemd[1]: sshd@67-10.0.0.7:22-10.0.0.1:55264.service: Deactivated successfully. Feb 13 20:45:15.394760 systemd[1]: session-68.scope: Deactivated successfully. Feb 13 20:45:15.395709 systemd-logind[1420]: Session 68 logged out. Waiting for processes to exit. Feb 13 20:45:15.396559 systemd-logind[1420]: Removed session 68. Feb 13 20:45:18.616305 kubelet[2508]: E0213 20:45:18.616268 2508 kubelet.go:2900] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Feb 13 20:45:20.399356 systemd[1]: Started sshd@68-10.0.0.7:22-10.0.0.1:55280.service - OpenSSH per-connection server daemon (10.0.0.1:55280). Feb 13 20:45:20.433707 sshd[3814]: Accepted publickey for core from 10.0.0.1 port 55280 ssh2: RSA SHA256:ijJ+MGEzuaViQlbMobaPQLogVrdZNUxIS0COzLzMHAQ Feb 13 20:45:20.434909 sshd[3814]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 20:45:20.438495 systemd-logind[1420]: New session 69 of user core. Feb 13 20:45:20.454066 systemd[1]: Started session-69.scope - Session 69 of User core. Feb 13 20:45:20.561512 sshd[3814]: pam_unix(sshd:session): session closed for user core Feb 13 20:45:20.564838 systemd[1]: sshd@68-10.0.0.7:22-10.0.0.1:55280.service: Deactivated successfully. Feb 13 20:45:20.566786 systemd[1]: session-69.scope: Deactivated successfully. Feb 13 20:45:20.567478 systemd-logind[1420]: Session 69 logged out. Waiting for processes to exit. Feb 13 20:45:20.568504 systemd-logind[1420]: Removed session 69. Feb 13 20:45:23.617644 kubelet[2508]: E0213 20:45:23.617596 2508 kubelet.go:2900] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Feb 13 20:45:25.572385 systemd[1]: Started sshd@69-10.0.0.7:22-10.0.0.1:55094.service - OpenSSH per-connection server daemon (10.0.0.1:55094). Feb 13 20:45:25.607118 sshd[3830]: Accepted publickey for core from 10.0.0.1 port 55094 ssh2: RSA SHA256:ijJ+MGEzuaViQlbMobaPQLogVrdZNUxIS0COzLzMHAQ Feb 13 20:45:25.608240 sshd[3830]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 20:45:25.611516 systemd-logind[1420]: New session 70 of user core. Feb 13 20:45:25.618080 systemd[1]: Started session-70.scope - Session 70 of User core. Feb 13 20:45:25.723153 sshd[3830]: pam_unix(sshd:session): session closed for user core Feb 13 20:45:25.726227 systemd[1]: sshd@69-10.0.0.7:22-10.0.0.1:55094.service: Deactivated successfully. Feb 13 20:45:25.728400 systemd[1]: session-70.scope: Deactivated successfully. Feb 13 20:45:25.729038 systemd-logind[1420]: Session 70 logged out. Waiting for processes to exit. Feb 13 20:45:25.729793 systemd-logind[1420]: Removed session 70. 
Feb 13 20:45:27.512825 kubelet[2508]: E0213 20:45:27.512788 2508 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 20:45:27.513611 kubelet[2508]: E0213 20:45:27.513571 2508 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"install-cni-plugin\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/flannel/flannel-cni-plugin:v1.1.2\\\"\"" pod="kube-flannel/kube-flannel-ds-q7x5h" podUID="fba936b1-e9bf-4b9d-8ced-8a880a98539b" Feb 13 20:45:28.618746 kubelet[2508]: E0213 20:45:28.618701 2508 kubelet.go:2900] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Feb 13 20:45:30.733351 systemd[1]: Started sshd@70-10.0.0.7:22-10.0.0.1:55096.service - OpenSSH per-connection server daemon (10.0.0.1:55096). Feb 13 20:45:30.767525 sshd[3844]: Accepted publickey for core from 10.0.0.1 port 55096 ssh2: RSA SHA256:ijJ+MGEzuaViQlbMobaPQLogVrdZNUxIS0COzLzMHAQ Feb 13 20:45:30.768702 sshd[3844]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 20:45:30.772076 systemd-logind[1420]: New session 71 of user core. Feb 13 20:45:30.779071 systemd[1]: Started session-71.scope - Session 71 of User core. Feb 13 20:45:30.883279 sshd[3844]: pam_unix(sshd:session): session closed for user core Feb 13 20:45:30.886447 systemd[1]: sshd@70-10.0.0.7:22-10.0.0.1:55096.service: Deactivated successfully. Feb 13 20:45:30.888015 systemd[1]: session-71.scope: Deactivated successfully. Feb 13 20:45:30.888604 systemd-logind[1420]: Session 71 logged out. Waiting for processes to exit. Feb 13 20:45:30.889728 systemd-logind[1420]: Removed session 71. Feb 13 20:45:33.620287 kubelet[2508]: E0213 20:45:33.620237 2508 kubelet.go:2900] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Feb 13 20:45:35.894312 systemd[1]: Started sshd@71-10.0.0.7:22-10.0.0.1:49996.service - OpenSSH per-connection server daemon (10.0.0.1:49996). Feb 13 20:45:35.928417 sshd[3859]: Accepted publickey for core from 10.0.0.1 port 49996 ssh2: RSA SHA256:ijJ+MGEzuaViQlbMobaPQLogVrdZNUxIS0COzLzMHAQ Feb 13 20:45:35.929576 sshd[3859]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 20:45:35.933552 systemd-logind[1420]: New session 72 of user core. Feb 13 20:45:35.947065 systemd[1]: Started session-72.scope - Session 72 of User core. Feb 13 20:45:36.055257 sshd[3859]: pam_unix(sshd:session): session closed for user core Feb 13 20:45:36.058443 systemd[1]: sshd@71-10.0.0.7:22-10.0.0.1:49996.service: Deactivated successfully. Feb 13 20:45:36.060103 systemd[1]: session-72.scope: Deactivated successfully. Feb 13 20:45:36.060787 systemd-logind[1420]: Session 72 logged out. Waiting for processes to exit. Feb 13 20:45:36.061783 systemd-logind[1420]: Removed session 72. 
Feb 13 20:45:38.620755 kubelet[2508]: E0213 20:45:38.620725 2508 kubelet.go:2900] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Feb 13 20:45:39.512915 kubelet[2508]: E0213 20:45:39.512687 2508 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 20:45:39.513732 containerd[1447]: time="2025-02-13T20:45:39.513676981Z" level=info msg="PullImage \"docker.io/flannel/flannel-cni-plugin:v1.1.2\"" Feb 13 20:45:40.623000 containerd[1447]: time="2025-02-13T20:45:40.622929704Z" level=error msg="PullImage \"docker.io/flannel/flannel-cni-plugin:v1.1.2\" failed" error="failed to pull and unpack image \"docker.io/flannel/flannel-cni-plugin:v1.1.2\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/flannel/flannel-cni-plugin/manifests/sha256:bf4b62b131666d040f35a327d906ee5a3418280b68a88d9b9c7e828057210443: 429 Too Many Requests - Server message: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit" Feb 13 20:45:40.623335 containerd[1447]: time="2025-02-13T20:45:40.623007384Z" level=info msg="stop pulling image docker.io/flannel/flannel-cni-plugin:v1.1.2: active requests=0, bytes read=11110" Feb 13 20:45:40.623390 kubelet[2508]: E0213 20:45:40.623152 2508 remote_image.go:180] "PullImage from image service failed" err="rpc error: code = Unknown desc = failed to pull and unpack image \"docker.io/flannel/flannel-cni-plugin:v1.1.2\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/flannel/flannel-cni-plugin/manifests/sha256:bf4b62b131666d040f35a327d906ee5a3418280b68a88d9b9c7e828057210443: 429 Too Many Requests - Server message: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit" image="docker.io/flannel/flannel-cni-plugin:v1.1.2" Feb 13 20:45:40.623390 kubelet[2508]: E0213 20:45:40.623201 2508 kuberuntime_image.go:55] "Failed to pull image" err="failed to pull and unpack image \"docker.io/flannel/flannel-cni-plugin:v1.1.2\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/flannel/flannel-cni-plugin/manifests/sha256:bf4b62b131666d040f35a327d906ee5a3418280b68a88d9b9c7e828057210443: 429 Too Many Requests - Server message: toomanyrequests: You have reached your pull rate limit. 
You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit" image="docker.io/flannel/flannel-cni-plugin:v1.1.2" Feb 13 20:45:40.623631 kubelet[2508]: E0213 20:45:40.623296 2508 kuberuntime_manager.go:1256] init container &Container{Name:install-cni-plugin,Image:docker.io/flannel/flannel-cni-plugin:v1.1.2,Command:[cp],Args:[-f /flannel /opt/cni/bin/flannel],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:cni-plugin,ReadOnly:false,MountPath:/opt/cni/bin,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-64nxr,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod kube-flannel-ds-q7x5h_kube-flannel(fba936b1-e9bf-4b9d-8ced-8a880a98539b): ErrImagePull: failed to pull and unpack image "docker.io/flannel/flannel-cni-plugin:v1.1.2": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/flannel/flannel-cni-plugin/manifests/sha256:bf4b62b131666d040f35a327d906ee5a3418280b68a88d9b9c7e828057210443: 429 Too Many Requests - Server message: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit Feb 13 20:45:40.623691 kubelet[2508]: E0213 20:45:40.623331 2508 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"install-cni-plugin\" with ErrImagePull: \"failed to pull and unpack image \\\"docker.io/flannel/flannel-cni-plugin:v1.1.2\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/flannel/flannel-cni-plugin/manifests/sha256:bf4b62b131666d040f35a327d906ee5a3418280b68a88d9b9c7e828057210443: 429 Too Many Requests - Server message: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit\"" pod="kube-flannel/kube-flannel-ds-q7x5h" podUID="fba936b1-e9bf-4b9d-8ced-8a880a98539b" Feb 13 20:45:41.067322 systemd[1]: Started sshd@72-10.0.0.7:22-10.0.0.1:50002.service - OpenSSH per-connection server daemon (10.0.0.1:50002). Feb 13 20:45:41.101600 sshd[3875]: Accepted publickey for core from 10.0.0.1 port 50002 ssh2: RSA SHA256:ijJ+MGEzuaViQlbMobaPQLogVrdZNUxIS0COzLzMHAQ Feb 13 20:45:41.102788 sshd[3875]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 20:45:41.106481 systemd-logind[1420]: New session 73 of user core. Feb 13 20:45:41.112059 systemd[1]: Started session-73.scope - Session 73 of User core. Feb 13 20:45:41.218108 sshd[3875]: pam_unix(sshd:session): session closed for user core Feb 13 20:45:41.221261 systemd[1]: sshd@72-10.0.0.7:22-10.0.0.1:50002.service: Deactivated successfully. Feb 13 20:45:41.222832 systemd[1]: session-73.scope: Deactivated successfully. Feb 13 20:45:41.223410 systemd-logind[1420]: Session 73 logged out. Waiting for processes to exit. 
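This is the second full pull attempt in the journal (20:45:39, against 20:42:57 earlier) and it fails identically; the intervening "Back-off pulling image" entries are the kubelet declining to retry until its image-pull backoff window expires. The delay doubles per failure from 10 s up to a five-minute cap, and the roughly 160 s gap between these two attempts is consistent with that progression. A quick way to watch the ErrImagePull/ImagePullBackOff cycle from the control plane, using the pod name from the log:

    # Sketch -- the Events section shows each pull attempt and backoff.
    kubectl -n kube-flannel describe pod kube-flannel-ds-q7x5h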
Feb 13 20:45:41.224271 systemd-logind[1420]: Removed session 73. Feb 13 20:45:43.621881 kubelet[2508]: E0213 20:45:43.621834 2508 kubelet.go:2900] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Feb 13 20:45:46.228238 systemd[1]: Started sshd@73-10.0.0.7:22-10.0.0.1:34786.service - OpenSSH per-connection server daemon (10.0.0.1:34786). Feb 13 20:45:46.262560 sshd[3889]: Accepted publickey for core from 10.0.0.1 port 34786 ssh2: RSA SHA256:ijJ+MGEzuaViQlbMobaPQLogVrdZNUxIS0COzLzMHAQ Feb 13 20:45:46.263707 sshd[3889]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 20:45:46.267740 systemd-logind[1420]: New session 74 of user core. Feb 13 20:45:46.283077 systemd[1]: Started session-74.scope - Session 74 of User core. Feb 13 20:45:46.388211 sshd[3889]: pam_unix(sshd:session): session closed for user core Feb 13 20:45:46.391384 systemd[1]: sshd@73-10.0.0.7:22-10.0.0.1:34786.service: Deactivated successfully. Feb 13 20:45:46.393240 systemd[1]: session-74.scope: Deactivated successfully. Feb 13 20:45:46.393907 systemd-logind[1420]: Session 74 logged out. Waiting for processes to exit. Feb 13 20:45:46.394662 systemd-logind[1420]: Removed session 74. Feb 13 20:45:48.623008 kubelet[2508]: E0213 20:45:48.622974 2508 kubelet.go:2900] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Feb 13 20:45:51.402278 systemd[1]: Started sshd@74-10.0.0.7:22-10.0.0.1:34794.service - OpenSSH per-connection server daemon (10.0.0.1:34794). Feb 13 20:45:51.436261 sshd[3904]: Accepted publickey for core from 10.0.0.1 port 34794 ssh2: RSA SHA256:ijJ+MGEzuaViQlbMobaPQLogVrdZNUxIS0COzLzMHAQ Feb 13 20:45:51.437415 sshd[3904]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 20:45:51.440807 systemd-logind[1420]: New session 75 of user core. Feb 13 20:45:51.450056 systemd[1]: Started session-75.scope - Session 75 of User core. Feb 13 20:45:51.512882 kubelet[2508]: E0213 20:45:51.512845 2508 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 20:45:51.557488 sshd[3904]: pam_unix(sshd:session): session closed for user core Feb 13 20:45:51.560664 systemd[1]: sshd@74-10.0.0.7:22-10.0.0.1:34794.service: Deactivated successfully. Feb 13 20:45:51.562191 systemd[1]: session-75.scope: Deactivated successfully. Feb 13 20:45:51.562793 systemd-logind[1420]: Session 75 logged out. Waiting for processes to exit. Feb 13 20:45:51.563905 systemd-logind[1420]: Removed session 75. 
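The recurring kubelet.go:2900 "cni plugin not initialized" entries will continue until the init container shown above succeeds in copying the flannel binary into /opt/cni/bin and a CNI network config appears under /etc/cni/net.d. A minimal host-side check of those two preconditions follows; the paths are the conventional CNI defaults and are assumed here rather than read from this node's kubelet or containerd configuration.

# Sketch: check the two things the "cni plugin not initialized" message hinges
# on -- the flannel CNI binary and at least one CNI network config. Paths are
# the conventional defaults and are assumptions for this illustration.
from pathlib import Path

CNI_BIN = Path("/opt/cni/bin/flannel")     # target of the init container's cp
CNI_CONF_DIR = Path("/etc/cni/net.d")      # where the runtime looks for configs

conf_files = sorted(CNI_CONF_DIR.glob("*.conf*")) if CNI_CONF_DIR.is_dir() else []

print(f"flannel binary present: {CNI_BIN.is_file()}")
print(f"CNI config files:       {[p.name for p in conf_files] or 'none'}")
if not CNI_BIN.is_file() or not conf_files:
    print("network will stay NotReady until both exist")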
Feb 13 20:45:53.624306 kubelet[2508]: E0213 20:45:53.624269 2508 kubelet.go:2900] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Feb 13 20:45:55.512332 kubelet[2508]: E0213 20:45:55.512290 2508 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 20:45:55.513470 kubelet[2508]: E0213 20:45:55.513438 2508 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"install-cni-plugin\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/flannel/flannel-cni-plugin:v1.1.2\\\"\"" pod="kube-flannel/kube-flannel-ds-q7x5h" podUID="fba936b1-e9bf-4b9d-8ced-8a880a98539b" Feb 13 20:45:56.568304 systemd[1]: Started sshd@75-10.0.0.7:22-10.0.0.1:51496.service - OpenSSH per-connection server daemon (10.0.0.1:51496). Feb 13 20:45:56.604285 sshd[3920]: Accepted publickey for core from 10.0.0.1 port 51496 ssh2: RSA SHA256:ijJ+MGEzuaViQlbMobaPQLogVrdZNUxIS0COzLzMHAQ Feb 13 20:45:56.605478 sshd[3920]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 20:45:56.608759 systemd-logind[1420]: New session 76 of user core. Feb 13 20:45:56.624091 systemd[1]: Started session-76.scope - Session 76 of User core. Feb 13 20:45:56.733961 sshd[3920]: pam_unix(sshd:session): session closed for user core Feb 13 20:45:56.737232 systemd[1]: sshd@75-10.0.0.7:22-10.0.0.1:51496.service: Deactivated successfully. Feb 13 20:45:56.739626 systemd[1]: session-76.scope: Deactivated successfully. Feb 13 20:45:56.740394 systemd-logind[1420]: Session 76 logged out. Waiting for processes to exit. Feb 13 20:45:56.741180 systemd-logind[1420]: Removed session 76. Feb 13 20:45:58.625063 kubelet[2508]: E0213 20:45:58.625019 2508 kubelet.go:2900] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Feb 13 20:46:00.513247 kubelet[2508]: E0213 20:46:00.513208 2508 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 20:46:01.744297 systemd[1]: Started sshd@76-10.0.0.7:22-10.0.0.1:51510.service - OpenSSH per-connection server daemon (10.0.0.1:51510). Feb 13 20:46:01.778514 sshd[3934]: Accepted publickey for core from 10.0.0.1 port 51510 ssh2: RSA SHA256:ijJ+MGEzuaViQlbMobaPQLogVrdZNUxIS0COzLzMHAQ Feb 13 20:46:01.779729 sshd[3934]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 20:46:01.782964 systemd-logind[1420]: New session 77 of user core. Feb 13 20:46:01.794084 systemd[1]: Started session-77.scope - Session 77 of User core. Feb 13 20:46:01.899790 sshd[3934]: pam_unix(sshd:session): session closed for user core Feb 13 20:46:01.902929 systemd[1]: sshd@76-10.0.0.7:22-10.0.0.1:51510.service: Deactivated successfully. Feb 13 20:46:01.905397 systemd[1]: session-77.scope: Deactivated successfully. Feb 13 20:46:01.906373 systemd-logind[1420]: Session 77 logged out. Waiting for processes to exit. Feb 13 20:46:01.907297 systemd-logind[1420]: Removed session 77. 
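The dns.go:153 warning is informational rather than a failure: glibc-based resolvers only honour the first three nameserver entries in resolv.conf, so kubelet trims the list and logs the line it actually applies (here 1.1.1.1 1.0.0.1 8.8.8.8). A rough stand-alone re-implementation of that check, with the path and the limit of three stated as assumptions:

# Sketch: reproduce the check behind kubelet's "Nameserver limits exceeded"
# warning. glibc resolvers honour at most three nameserver entries, so any
# extras are dropped. Path and limit are assumptions for this illustration.
from pathlib import Path

MAX_NAMESERVERS = 3
RESOLV_CONF = Path("/etc/resolv.conf")

nameservers = []
for line in RESOLV_CONF.read_text().splitlines():
    parts = line.split()
    if len(parts) >= 2 and parts[0] == "nameserver":
        nameservers.append(parts[1])

if len(nameservers) > MAX_NAMESERVERS:
    applied = " ".join(nameservers[:MAX_NAMESERVERS])
    print("Nameserver limits exceeded; applied nameserver line is:", applied)
else:
    print("nameserver count within limit:", nameservers)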
Feb 13 20:46:03.625875 kubelet[2508]: E0213 20:46:03.625838 2508 kubelet.go:2900] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Feb 13 20:46:06.910495 systemd[1]: Started sshd@77-10.0.0.7:22-10.0.0.1:41020.service - OpenSSH per-connection server daemon (10.0.0.1:41020). Feb 13 20:46:06.946264 sshd[3948]: Accepted publickey for core from 10.0.0.1 port 41020 ssh2: RSA SHA256:ijJ+MGEzuaViQlbMobaPQLogVrdZNUxIS0COzLzMHAQ Feb 13 20:46:06.947494 sshd[3948]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 20:46:06.950869 systemd-logind[1420]: New session 78 of user core. Feb 13 20:46:06.960101 systemd[1]: Started session-78.scope - Session 78 of User core. Feb 13 20:46:07.066676 sshd[3948]: pam_unix(sshd:session): session closed for user core Feb 13 20:46:07.079338 systemd[1]: sshd@77-10.0.0.7:22-10.0.0.1:41020.service: Deactivated successfully. Feb 13 20:46:07.080695 systemd[1]: session-78.scope: Deactivated successfully. Feb 13 20:46:07.082466 systemd-logind[1420]: Session 78 logged out. Waiting for processes to exit. Feb 13 20:46:07.090352 systemd[1]: Started sshd@78-10.0.0.7:22-10.0.0.1:41022.service - OpenSSH per-connection server daemon (10.0.0.1:41022). Feb 13 20:46:07.091235 systemd-logind[1420]: Removed session 78. Feb 13 20:46:07.120269 sshd[3962]: Accepted publickey for core from 10.0.0.1 port 41022 ssh2: RSA SHA256:ijJ+MGEzuaViQlbMobaPQLogVrdZNUxIS0COzLzMHAQ Feb 13 20:46:07.121502 sshd[3962]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 20:46:07.125584 systemd-logind[1420]: New session 79 of user core. Feb 13 20:46:07.134080 systemd[1]: Started session-79.scope - Session 79 of User core. Feb 13 20:46:07.313053 sshd[3962]: pam_unix(sshd:session): session closed for user core Feb 13 20:46:07.319102 systemd[1]: sshd@78-10.0.0.7:22-10.0.0.1:41022.service: Deactivated successfully. Feb 13 20:46:07.320360 systemd[1]: session-79.scope: Deactivated successfully. Feb 13 20:46:07.322096 systemd-logind[1420]: Session 79 logged out. Waiting for processes to exit. Feb 13 20:46:07.322679 systemd[1]: Started sshd@79-10.0.0.7:22-10.0.0.1:41034.service - OpenSSH per-connection server daemon (10.0.0.1:41034). Feb 13 20:46:07.323467 systemd-logind[1420]: Removed session 79. Feb 13 20:46:07.356489 sshd[3975]: Accepted publickey for core from 10.0.0.1 port 41034 ssh2: RSA SHA256:ijJ+MGEzuaViQlbMobaPQLogVrdZNUxIS0COzLzMHAQ Feb 13 20:46:07.357618 sshd[3975]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 20:46:07.361108 systemd-logind[1420]: New session 80 of user core. Feb 13 20:46:07.372066 systemd[1]: Started session-80.scope - Session 80 of User core. Feb 13 20:46:08.359387 sshd[3975]: pam_unix(sshd:session): session closed for user core Feb 13 20:46:08.369208 systemd[1]: sshd@79-10.0.0.7:22-10.0.0.1:41034.service: Deactivated successfully. Feb 13 20:46:08.372661 systemd[1]: session-80.scope: Deactivated successfully. Feb 13 20:46:08.374362 systemd-logind[1420]: Session 80 logged out. Waiting for processes to exit. Feb 13 20:46:08.385275 systemd[1]: Started sshd@80-10.0.0.7:22-10.0.0.1:41038.service - OpenSSH per-connection server daemon (10.0.0.1:41038). Feb 13 20:46:08.386271 systemd-logind[1420]: Removed session 80. 
Feb 13 20:46:08.417356 sshd[3999]: Accepted publickey for core from 10.0.0.1 port 41038 ssh2: RSA SHA256:ijJ+MGEzuaViQlbMobaPQLogVrdZNUxIS0COzLzMHAQ Feb 13 20:46:08.418707 sshd[3999]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 20:46:08.422305 systemd-logind[1420]: New session 81 of user core. Feb 13 20:46:08.433084 systemd[1]: Started session-81.scope - Session 81 of User core. Feb 13 20:46:08.626838 kubelet[2508]: E0213 20:46:08.626791 2508 kubelet.go:2900] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Feb 13 20:46:08.641312 sshd[3999]: pam_unix(sshd:session): session closed for user core Feb 13 20:46:08.653480 systemd[1]: sshd@80-10.0.0.7:22-10.0.0.1:41038.service: Deactivated successfully. Feb 13 20:46:08.656233 systemd[1]: session-81.scope: Deactivated successfully. Feb 13 20:46:08.657942 systemd-logind[1420]: Session 81 logged out. Waiting for processes to exit. Feb 13 20:46:08.658557 systemd[1]: Started sshd@81-10.0.0.7:22-10.0.0.1:41040.service - OpenSSH per-connection server daemon (10.0.0.1:41040). Feb 13 20:46:08.660064 systemd-logind[1420]: Removed session 81. Feb 13 20:46:08.692825 sshd[4012]: Accepted publickey for core from 10.0.0.1 port 41040 ssh2: RSA SHA256:ijJ+MGEzuaViQlbMobaPQLogVrdZNUxIS0COzLzMHAQ Feb 13 20:46:08.694101 sshd[4012]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 20:46:08.698154 systemd-logind[1420]: New session 82 of user core. Feb 13 20:46:08.708096 systemd[1]: Started session-82.scope - Session 82 of User core. Feb 13 20:46:08.811676 sshd[4012]: pam_unix(sshd:session): session closed for user core Feb 13 20:46:08.814992 systemd[1]: sshd@81-10.0.0.7:22-10.0.0.1:41040.service: Deactivated successfully. Feb 13 20:46:08.818011 systemd[1]: session-82.scope: Deactivated successfully. Feb 13 20:46:08.819064 systemd-logind[1420]: Session 82 logged out. Waiting for processes to exit. Feb 13 20:46:08.820018 systemd-logind[1420]: Removed session 82. Feb 13 20:46:10.512450 kubelet[2508]: E0213 20:46:10.512408 2508 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 20:46:10.513072 kubelet[2508]: E0213 20:46:10.513034 2508 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"install-cni-plugin\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/flannel/flannel-cni-plugin:v1.1.2\\\"\"" pod="kube-flannel/kube-flannel-ds-q7x5h" podUID="fba936b1-e9bf-4b9d-8ced-8a880a98539b" Feb 13 20:46:13.628352 kubelet[2508]: E0213 20:46:13.628296 2508 kubelet.go:2900] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Feb 13 20:46:13.822694 systemd[1]: Started sshd@82-10.0.0.7:22-10.0.0.1:39108.service - OpenSSH per-connection server daemon (10.0.0.1:39108). Feb 13 20:46:13.857165 sshd[4026]: Accepted publickey for core from 10.0.0.1 port 39108 ssh2: RSA SHA256:ijJ+MGEzuaViQlbMobaPQLogVrdZNUxIS0COzLzMHAQ Feb 13 20:46:13.858398 sshd[4026]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 20:46:13.861990 systemd-logind[1420]: New session 83 of user core. Feb 13 20:46:13.872092 systemd[1]: Started session-83.scope - Session 83 of User core. 
Feb 13 20:46:13.976511 sshd[4026]: pam_unix(sshd:session): session closed for user core Feb 13 20:46:13.980761 systemd[1]: sshd@82-10.0.0.7:22-10.0.0.1:39108.service: Deactivated successfully. Feb 13 20:46:13.982383 systemd[1]: session-83.scope: Deactivated successfully. Feb 13 20:46:13.982953 systemd-logind[1420]: Session 83 logged out. Waiting for processes to exit. Feb 13 20:46:13.983693 systemd-logind[1420]: Removed session 83. Feb 13 20:46:15.512748 kubelet[2508]: E0213 20:46:15.512710 2508 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 20:46:18.629050 kubelet[2508]: E0213 20:46:18.629005 2508 kubelet.go:2900] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Feb 13 20:46:18.987321 systemd[1]: Started sshd@83-10.0.0.7:22-10.0.0.1:39112.service - OpenSSH per-connection server daemon (10.0.0.1:39112). Feb 13 20:46:19.021150 sshd[4041]: Accepted publickey for core from 10.0.0.1 port 39112 ssh2: RSA SHA256:ijJ+MGEzuaViQlbMobaPQLogVrdZNUxIS0COzLzMHAQ Feb 13 20:46:19.022319 sshd[4041]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 20:46:19.025550 systemd-logind[1420]: New session 84 of user core. Feb 13 20:46:19.034070 systemd[1]: Started session-84.scope - Session 84 of User core. Feb 13 20:46:19.136855 sshd[4041]: pam_unix(sshd:session): session closed for user core Feb 13 20:46:19.139823 systemd[1]: sshd@83-10.0.0.7:22-10.0.0.1:39112.service: Deactivated successfully. Feb 13 20:46:19.142080 systemd[1]: session-84.scope: Deactivated successfully. Feb 13 20:46:19.143545 systemd-logind[1420]: Session 84 logged out. Waiting for processes to exit. Feb 13 20:46:19.144565 systemd-logind[1420]: Removed session 84. Feb 13 20:46:21.512655 kubelet[2508]: E0213 20:46:21.512607 2508 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 20:46:23.630140 kubelet[2508]: E0213 20:46:23.630085 2508 kubelet.go:2900] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Feb 13 20:46:24.147362 systemd[1]: Started sshd@84-10.0.0.7:22-10.0.0.1:53154.service - OpenSSH per-connection server daemon (10.0.0.1:53154). Feb 13 20:46:24.181781 sshd[4055]: Accepted publickey for core from 10.0.0.1 port 53154 ssh2: RSA SHA256:ijJ+MGEzuaViQlbMobaPQLogVrdZNUxIS0COzLzMHAQ Feb 13 20:46:24.182952 sshd[4055]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 20:46:24.186814 systemd-logind[1420]: New session 85 of user core. Feb 13 20:46:24.193071 systemd[1]: Started session-85.scope - Session 85 of User core. Feb 13 20:46:24.296952 sshd[4055]: pam_unix(sshd:session): session closed for user core Feb 13 20:46:24.300124 systemd[1]: sshd@84-10.0.0.7:22-10.0.0.1:53154.service: Deactivated successfully. Feb 13 20:46:24.302090 systemd[1]: session-85.scope: Deactivated successfully. Feb 13 20:46:24.302776 systemd-logind[1420]: Session 85 logged out. Waiting for processes to exit. Feb 13 20:46:24.303752 systemd-logind[1420]: Removed session 85. 
Feb 13 20:46:24.512350 kubelet[2508]: E0213 20:46:24.512244 2508 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 20:46:24.513140 kubelet[2508]: E0213 20:46:24.513098 2508 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"install-cni-plugin\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/flannel/flannel-cni-plugin:v1.1.2\\\"\"" pod="kube-flannel/kube-flannel-ds-q7x5h" podUID="fba936b1-e9bf-4b9d-8ced-8a880a98539b" Feb 13 20:46:28.630942 kubelet[2508]: E0213 20:46:28.630906 2508 kubelet.go:2900] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Feb 13 20:46:29.307361 systemd[1]: Started sshd@85-10.0.0.7:22-10.0.0.1:53170.service - OpenSSH per-connection server daemon (10.0.0.1:53170). Feb 13 20:46:29.341844 sshd[4072]: Accepted publickey for core from 10.0.0.1 port 53170 ssh2: RSA SHA256:ijJ+MGEzuaViQlbMobaPQLogVrdZNUxIS0COzLzMHAQ Feb 13 20:46:29.343025 sshd[4072]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 20:46:29.346206 systemd-logind[1420]: New session 86 of user core. Feb 13 20:46:29.359060 systemd[1]: Started session-86.scope - Session 86 of User core. Feb 13 20:46:29.463394 sshd[4072]: pam_unix(sshd:session): session closed for user core Feb 13 20:46:29.466583 systemd[1]: sshd@85-10.0.0.7:22-10.0.0.1:53170.service: Deactivated successfully. Feb 13 20:46:29.468183 systemd[1]: session-86.scope: Deactivated successfully. Feb 13 20:46:29.468799 systemd-logind[1420]: Session 86 logged out. Waiting for processes to exit. Feb 13 20:46:29.469576 systemd-logind[1420]: Removed session 86. Feb 13 20:46:33.632144 kubelet[2508]: E0213 20:46:33.632044 2508 kubelet.go:2900] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Feb 13 20:46:34.474367 systemd[1]: Started sshd@86-10.0.0.7:22-10.0.0.1:38572.service - OpenSSH per-connection server daemon (10.0.0.1:38572). Feb 13 20:46:34.508424 sshd[4086]: Accepted publickey for core from 10.0.0.1 port 38572 ssh2: RSA SHA256:ijJ+MGEzuaViQlbMobaPQLogVrdZNUxIS0COzLzMHAQ Feb 13 20:46:34.509633 sshd[4086]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 20:46:34.513833 systemd-logind[1420]: New session 87 of user core. Feb 13 20:46:34.524144 systemd[1]: Started session-87.scope - Session 87 of User core. Feb 13 20:46:34.628490 sshd[4086]: pam_unix(sshd:session): session closed for user core Feb 13 20:46:34.631452 systemd-logind[1420]: Session 87 logged out. Waiting for processes to exit. Feb 13 20:46:34.631611 systemd[1]: sshd@86-10.0.0.7:22-10.0.0.1:38572.service: Deactivated successfully. Feb 13 20:46:34.632980 systemd[1]: session-87.scope: Deactivated successfully. Feb 13 20:46:34.634458 systemd-logind[1420]: Removed session 87. 
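The ImagePullBackOff entries keep appearing because the pod workers report the back-off state on every sync; the back-off itself only controls how often a new pull is actually attempted. The sketch below shows the shape of such an exponential back-off. The 10-second initial delay, doubling factor, and 5-minute cap are commonly cited kubelet defaults and are assumptions here, not values read from this node.

# Sketch: the shape of an exponential image-pull back-off like the one behind
# the ImagePullBackOff messages above. Initial delay, factor, and cap are
# assumed defaults, used only to illustrate the retry spacing.
def backoff_delays(initial=10, factor=2, cap=300, attempts=8):
    delay = initial
    for _ in range(attempts):
        yield delay
        delay = min(delay * factor, cap)

print(list(backoff_delays()))   # [10, 20, 40, 80, 160, 300, 300, 300]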
Feb 13 20:46:38.513650 kubelet[2508]: E0213 20:46:38.513464 2508 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 20:46:38.514002 kubelet[2508]: E0213 20:46:38.513965 2508 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"install-cni-plugin\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/flannel/flannel-cni-plugin:v1.1.2\\\"\"" pod="kube-flannel/kube-flannel-ds-q7x5h" podUID="fba936b1-e9bf-4b9d-8ced-8a880a98539b" Feb 13 20:46:38.633573 kubelet[2508]: E0213 20:46:38.633520 2508 kubelet.go:2900] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Feb 13 20:46:39.641243 systemd[1]: Started sshd@87-10.0.0.7:22-10.0.0.1:38578.service - OpenSSH per-connection server daemon (10.0.0.1:38578). Feb 13 20:46:39.677550 sshd[4102]: Accepted publickey for core from 10.0.0.1 port 38578 ssh2: RSA SHA256:ijJ+MGEzuaViQlbMobaPQLogVrdZNUxIS0COzLzMHAQ Feb 13 20:46:39.678706 sshd[4102]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 20:46:39.683980 systemd-logind[1420]: New session 88 of user core. Feb 13 20:46:39.694068 systemd[1]: Started session-88.scope - Session 88 of User core. Feb 13 20:46:39.798679 sshd[4102]: pam_unix(sshd:session): session closed for user core Feb 13 20:46:39.802828 systemd[1]: sshd@87-10.0.0.7:22-10.0.0.1:38578.service: Deactivated successfully. Feb 13 20:46:39.804436 systemd[1]: session-88.scope: Deactivated successfully. Feb 13 20:46:39.805646 systemd-logind[1420]: Session 88 logged out. Waiting for processes to exit. Feb 13 20:46:39.806555 systemd-logind[1420]: Removed session 88. Feb 13 20:46:43.634897 kubelet[2508]: E0213 20:46:43.634845 2508 kubelet.go:2900] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Feb 13 20:46:44.810408 systemd[1]: Started sshd@88-10.0.0.7:22-10.0.0.1:48926.service - OpenSSH per-connection server daemon (10.0.0.1:48926). Feb 13 20:46:44.844141 sshd[4116]: Accepted publickey for core from 10.0.0.1 port 48926 ssh2: RSA SHA256:ijJ+MGEzuaViQlbMobaPQLogVrdZNUxIS0COzLzMHAQ Feb 13 20:46:44.845257 sshd[4116]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 20:46:44.848982 systemd-logind[1420]: New session 89 of user core. Feb 13 20:46:44.855061 systemd[1]: Started session-89.scope - Session 89 of User core. Feb 13 20:46:44.961150 sshd[4116]: pam_unix(sshd:session): session closed for user core Feb 13 20:46:44.964170 systemd[1]: sshd@88-10.0.0.7:22-10.0.0.1:48926.service: Deactivated successfully. Feb 13 20:46:44.966240 systemd[1]: session-89.scope: Deactivated successfully. Feb 13 20:46:44.966980 systemd-logind[1420]: Session 89 logged out. Waiting for processes to exit. Feb 13 20:46:44.968039 systemd-logind[1420]: Removed session 89. Feb 13 20:46:48.636130 kubelet[2508]: E0213 20:46:48.636091 2508 kubelet.go:2900] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Feb 13 20:46:49.970438 systemd[1]: Started sshd@89-10.0.0.7:22-10.0.0.1:48936.service - OpenSSH per-connection server daemon (10.0.0.1:48936). 
Feb 13 20:46:50.004821 sshd[4131]: Accepted publickey for core from 10.0.0.1 port 48936 ssh2: RSA SHA256:ijJ+MGEzuaViQlbMobaPQLogVrdZNUxIS0COzLzMHAQ Feb 13 20:46:50.006019 sshd[4131]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 20:46:50.009733 systemd-logind[1420]: New session 90 of user core. Feb 13 20:46:50.017073 systemd[1]: Started session-90.scope - Session 90 of User core. Feb 13 20:46:50.120064 sshd[4131]: pam_unix(sshd:session): session closed for user core Feb 13 20:46:50.123176 systemd[1]: sshd@89-10.0.0.7:22-10.0.0.1:48936.service: Deactivated successfully. Feb 13 20:46:50.124876 systemd[1]: session-90.scope: Deactivated successfully. Feb 13 20:46:50.125506 systemd-logind[1420]: Session 90 logged out. Waiting for processes to exit. Feb 13 20:46:50.126792 systemd-logind[1420]: Removed session 90. Feb 13 20:46:53.512990 kubelet[2508]: E0213 20:46:53.512871 2508 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 20:46:53.513866 kubelet[2508]: E0213 20:46:53.513497 2508 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"install-cni-plugin\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/flannel/flannel-cni-plugin:v1.1.2\\\"\"" pod="kube-flannel/kube-flannel-ds-q7x5h" podUID="fba936b1-e9bf-4b9d-8ced-8a880a98539b" Feb 13 20:46:53.637514 kubelet[2508]: E0213 20:46:53.637486 2508 kubelet.go:2900] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Feb 13 20:46:55.130664 systemd[1]: Started sshd@90-10.0.0.7:22-10.0.0.1:37390.service - OpenSSH per-connection server daemon (10.0.0.1:37390). Feb 13 20:46:55.165649 sshd[4147]: Accepted publickey for core from 10.0.0.1 port 37390 ssh2: RSA SHA256:ijJ+MGEzuaViQlbMobaPQLogVrdZNUxIS0COzLzMHAQ Feb 13 20:46:55.166859 sshd[4147]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 20:46:55.170495 systemd-logind[1420]: New session 91 of user core. Feb 13 20:46:55.184074 systemd[1]: Started session-91.scope - Session 91 of User core. Feb 13 20:46:55.288369 sshd[4147]: pam_unix(sshd:session): session closed for user core Feb 13 20:46:55.291639 systemd[1]: sshd@90-10.0.0.7:22-10.0.0.1:37390.service: Deactivated successfully. Feb 13 20:46:55.293327 systemd[1]: session-91.scope: Deactivated successfully. Feb 13 20:46:55.293891 systemd-logind[1420]: Session 91 logged out. Waiting for processes to exit. Feb 13 20:46:55.294728 systemd-logind[1420]: Removed session 91. Feb 13 20:46:58.513320 kubelet[2508]: E0213 20:46:58.513225 2508 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 20:46:58.638467 kubelet[2508]: E0213 20:46:58.638436 2508 kubelet.go:2900] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Feb 13 20:47:00.299180 systemd[1]: Started sshd@91-10.0.0.7:22-10.0.0.1:37400.service - OpenSSH per-connection server daemon (10.0.0.1:37400). 
Feb 13 20:47:00.333080 sshd[4161]: Accepted publickey for core from 10.0.0.1 port 37400 ssh2: RSA SHA256:ijJ+MGEzuaViQlbMobaPQLogVrdZNUxIS0COzLzMHAQ Feb 13 20:47:00.334258 sshd[4161]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 20:47:00.337522 systemd-logind[1420]: New session 92 of user core. Feb 13 20:47:00.348081 systemd[1]: Started session-92.scope - Session 92 of User core. Feb 13 20:47:00.451266 sshd[4161]: pam_unix(sshd:session): session closed for user core Feb 13 20:47:00.454481 systemd[1]: sshd@91-10.0.0.7:22-10.0.0.1:37400.service: Deactivated successfully. Feb 13 20:47:00.456796 systemd[1]: session-92.scope: Deactivated successfully. Feb 13 20:47:00.458464 systemd-logind[1420]: Session 92 logged out. Waiting for processes to exit. Feb 13 20:47:00.460117 systemd-logind[1420]: Removed session 92. Feb 13 20:47:03.639215 kubelet[2508]: E0213 20:47:03.639167 2508 kubelet.go:2900] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Feb 13 20:47:05.461400 systemd[1]: Started sshd@92-10.0.0.7:22-10.0.0.1:60596.service - OpenSSH per-connection server daemon (10.0.0.1:60596). Feb 13 20:47:05.496030 sshd[4175]: Accepted publickey for core from 10.0.0.1 port 60596 ssh2: RSA SHA256:ijJ+MGEzuaViQlbMobaPQLogVrdZNUxIS0COzLzMHAQ Feb 13 20:47:05.497270 sshd[4175]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 20:47:05.500547 systemd-logind[1420]: New session 93 of user core. Feb 13 20:47:05.507142 systemd[1]: Started session-93.scope - Session 93 of User core. Feb 13 20:47:05.614415 sshd[4175]: pam_unix(sshd:session): session closed for user core Feb 13 20:47:05.617794 systemd[1]: sshd@92-10.0.0.7:22-10.0.0.1:60596.service: Deactivated successfully. Feb 13 20:47:05.619307 systemd[1]: session-93.scope: Deactivated successfully. Feb 13 20:47:05.620486 systemd-logind[1420]: Session 93 logged out. Waiting for processes to exit. Feb 13 20:47:05.621349 systemd-logind[1420]: Removed session 93. Feb 13 20:47:08.512514 kubelet[2508]: E0213 20:47:08.512482 2508 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 20:47:08.513441 kubelet[2508]: E0213 20:47:08.513210 2508 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"install-cni-plugin\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/flannel/flannel-cni-plugin:v1.1.2\\\"\"" pod="kube-flannel/kube-flannel-ds-q7x5h" podUID="fba936b1-e9bf-4b9d-8ced-8a880a98539b" Feb 13 20:47:08.640242 kubelet[2508]: E0213 20:47:08.640209 2508 kubelet.go:2900] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Feb 13 20:47:10.624398 systemd[1]: Started sshd@93-10.0.0.7:22-10.0.0.1:60600.service - OpenSSH per-connection server daemon (10.0.0.1:60600). Feb 13 20:47:10.658465 sshd[4191]: Accepted publickey for core from 10.0.0.1 port 60600 ssh2: RSA SHA256:ijJ+MGEzuaViQlbMobaPQLogVrdZNUxIS0COzLzMHAQ Feb 13 20:47:10.659691 sshd[4191]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 20:47:10.663242 systemd-logind[1420]: New session 94 of user core. Feb 13 20:47:10.675065 systemd[1]: Started session-94.scope - Session 94 of User core. 
Feb 13 20:47:10.783192 sshd[4191]: pam_unix(sshd:session): session closed for user core Feb 13 20:47:10.786687 systemd[1]: sshd@93-10.0.0.7:22-10.0.0.1:60600.service: Deactivated successfully. Feb 13 20:47:10.788207 systemd[1]: session-94.scope: Deactivated successfully. Feb 13 20:47:10.788766 systemd-logind[1420]: Session 94 logged out. Waiting for processes to exit. Feb 13 20:47:10.789603 systemd-logind[1420]: Removed session 94. Feb 13 20:47:13.640904 kubelet[2508]: E0213 20:47:13.640867 2508 kubelet.go:2900] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Feb 13 20:47:13.680015 update_engine[1429]: I20250213 20:47:13.678133 1429 prefs.cc:52] certificate-report-to-send-update not present in /var/lib/update_engine/prefs Feb 13 20:47:13.680015 update_engine[1429]: I20250213 20:47:13.680013 1429 prefs.cc:52] certificate-report-to-send-download not present in /var/lib/update_engine/prefs Feb 13 20:47:13.680270 update_engine[1429]: I20250213 20:47:13.680250 1429 prefs.cc:52] aleph-version not present in /var/lib/update_engine/prefs Feb 13 20:47:13.681365 update_engine[1429]: I20250213 20:47:13.681315 1429 omaha_request_params.cc:62] Current group set to lts Feb 13 20:47:13.681430 update_engine[1429]: I20250213 20:47:13.681418 1429 update_attempter.cc:499] Already updated boot flags. Skipping. Feb 13 20:47:13.681458 update_engine[1429]: I20250213 20:47:13.681428 1429 update_attempter.cc:643] Scheduling an action processor start. Feb 13 20:47:13.681458 update_engine[1429]: I20250213 20:47:13.681444 1429 action_processor.cc:36] ActionProcessor::StartProcessing: OmahaRequestAction Feb 13 20:47:13.681500 update_engine[1429]: I20250213 20:47:13.681471 1429 prefs.cc:52] previous-version not present in /var/lib/update_engine/prefs Feb 13 20:47:13.681584 update_engine[1429]: I20250213 20:47:13.681531 1429 omaha_request_action.cc:271] Posting an Omaha request to disabled Feb 13 20:47:13.681584 update_engine[1429]: I20250213 20:47:13.681572 1429 omaha_request_action.cc:272] Request: <?xml version="1.0" encoding="UTF-8"?> Feb 13 20:47:13.681584 update_engine[1429]: <request protocol="3.0" version="update_engine-0.4.10" updaterversion="update_engine-0.4.10" installsource="scheduler" ismachine="1"> Feb 13 20:47:13.681584 update_engine[1429]: <os version="Chateau" platform="CoreOS" sp="4081.3.1_aarch64"></os> Feb 13 20:47:13.681584 update_engine[1429]: <app appid="{e96281a6-d1af-4bde-9a0a-97b76e56dc57}" version="4081.3.1" track="lts" bootid="{071702f8-a9c6-41ba-a189-e67ff9f11a4d}" oem="" oemversion="" alephversion="4081.3.1" machineid="f498f856d3f74f629d53ba2727f625c2" machinealias="" lang="en-US" board="arm64-usr" hardware_class="" delta_okay="false" > Feb 13 20:47:13.681584 update_engine[1429]: <ping active="1"></ping> Feb 13 20:47:13.681584 update_engine[1429]: <updatecheck></updatecheck> Feb 13 20:47:13.681584 update_engine[1429]: <event eventtype="3" eventresult="2" previousversion="0.0.0.0"></event> Feb 13 20:47:13.681584 update_engine[1429]: </app> Feb 13 20:47:13.681584 update_engine[1429]: </request> Feb 13 20:47:13.681584 update_engine[1429]: I20250213 20:47:13.681581 1429 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Feb 13 20:47:13.681777 locksmithd[1459]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_CHECKING_FOR_UPDATE" NewVersion=0.0.0 NewSize=0 Feb 13 20:47:13.682617 update_engine[1429]: I20250213 20:47:13.682579 1429 libcurl_http_fetcher.cc:151] 
Setting up curl options for HTTP Feb 13 20:47:13.682852 update_engine[1429]: I20250213 20:47:13.682823 1429 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. Feb 13 20:47:13.686228 update_engine[1429]: E20250213 20:47:13.686194 1429 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Feb 13 20:47:13.686271 update_engine[1429]: I20250213 20:47:13.686260 1429 libcurl_http_fetcher.cc:283] No HTTP response, retry 1 Feb 13 20:47:15.793864 systemd[1]: Started sshd@94-10.0.0.7:22-10.0.0.1:40266.service - OpenSSH per-connection server daemon (10.0.0.1:40266). Feb 13 20:47:15.828428 sshd[4205]: Accepted publickey for core from 10.0.0.1 port 40266 ssh2: RSA SHA256:ijJ+MGEzuaViQlbMobaPQLogVrdZNUxIS0COzLzMHAQ Feb 13 20:47:15.829595 sshd[4205]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 20:47:15.834109 systemd-logind[1420]: New session 95 of user core. Feb 13 20:47:15.843072 systemd[1]: Started session-95.scope - Session 95 of User core. Feb 13 20:47:15.950173 sshd[4205]: pam_unix(sshd:session): session closed for user core Feb 13 20:47:15.952746 systemd-logind[1420]: Session 95 logged out. Waiting for processes to exit. Feb 13 20:47:15.953620 systemd[1]: sshd@94-10.0.0.7:22-10.0.0.1:40266.service: Deactivated successfully. Feb 13 20:47:15.955195 systemd[1]: session-95.scope: Deactivated successfully. Feb 13 20:47:15.956475 systemd-logind[1420]: Removed session 95. Feb 13 20:47:18.642490 kubelet[2508]: E0213 20:47:18.642443 2508 kubelet.go:2900] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Feb 13 20:47:19.512868 kubelet[2508]: E0213 20:47:19.512826 2508 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 20:47:19.513558 kubelet[2508]: E0213 20:47:19.513528 2508 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"install-cni-plugin\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/flannel/flannel-cni-plugin:v1.1.2\\\"\"" pod="kube-flannel/kube-flannel-ds-q7x5h" podUID="fba936b1-e9bf-4b9d-8ced-8a880a98539b" Feb 13 20:47:20.960370 systemd[1]: Started sshd@95-10.0.0.7:22-10.0.0.1:40282.service - OpenSSH per-connection server daemon (10.0.0.1:40282). Feb 13 20:47:20.994472 sshd[4221]: Accepted publickey for core from 10.0.0.1 port 40282 ssh2: RSA SHA256:ijJ+MGEzuaViQlbMobaPQLogVrdZNUxIS0COzLzMHAQ Feb 13 20:47:20.995599 sshd[4221]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 20:47:20.999319 systemd-logind[1420]: New session 96 of user core. Feb 13 20:47:21.007066 systemd[1]: Started session-96.scope - Session 96 of User core. Feb 13 20:47:21.110162 sshd[4221]: pam_unix(sshd:session): session closed for user core Feb 13 20:47:21.113532 systemd[1]: sshd@95-10.0.0.7:22-10.0.0.1:40282.service: Deactivated successfully. Feb 13 20:47:21.115274 systemd[1]: session-96.scope: Deactivated successfully. Feb 13 20:47:21.115874 systemd-logind[1420]: Session 96 logged out. Waiting for processes to exit. Feb 13 20:47:21.116653 systemd-logind[1420]: Removed session 96. 
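The update_engine entries above show a full Omaha update check being posted to the configured server, which on this machine is literally the string "disabled", hence curl's "Could not resolve host: disabled" and the retries. The request body is plain XML with request/os/app/ping/updatecheck/event elements. The sketch below rebuilds a request of the same shape from the values visible in the log, purely to illustrate the wire format; it is not Flatcar's implementation and does not contact any server.

# Sketch: build an Omaha request of the same shape as the one update_engine
# logs above. Field values are copied from the log; this only shows the wire
# format and never talks to a real update server.
import xml.etree.ElementTree as ET

request = ET.Element("request", {
    "protocol": "3.0",
    "version": "update_engine-0.4.10",
    "updaterversion": "update_engine-0.4.10",
    "installsource": "scheduler",
    "ismachine": "1",
})
ET.SubElement(request, "os", {
    "version": "Chateau", "platform": "CoreOS", "sp": "4081.3.1_aarch64"})
app = ET.SubElement(request, "app", {
    "appid": "{e96281a6-d1af-4bde-9a0a-97b76e56dc57}",
    "version": "4081.3.1",
    "track": "lts",
    "board": "arm64-usr",
    "delta_okay": "false",
})
ET.SubElement(app, "ping", {"active": "1"})
ET.SubElement(app, "updatecheck")
ET.SubElement(app, "event", {
    "eventtype": "3", "eventresult": "2", "previousversion": "0.0.0.0"})

print(ET.tostring(request, encoding="unicode"))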
Feb 13 20:47:23.644252 kubelet[2508]: E0213 20:47:23.644219 2508 kubelet.go:2900] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Feb 13 20:47:23.678963 update_engine[1429]: I20250213 20:47:23.678876 1429 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Feb 13 20:47:23.679263 update_engine[1429]: I20250213 20:47:23.679219 1429 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Feb 13 20:47:23.679429 update_engine[1429]: I20250213 20:47:23.679384 1429 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. Feb 13 20:47:23.683568 update_engine[1429]: E20250213 20:47:23.683535 1429 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Feb 13 20:47:23.683608 update_engine[1429]: I20250213 20:47:23.683591 1429 libcurl_http_fetcher.cc:283] No HTTP response, retry 2 Feb 13 20:47:26.125277 systemd[1]: Started sshd@96-10.0.0.7:22-10.0.0.1:53514.service - OpenSSH per-connection server daemon (10.0.0.1:53514). Feb 13 20:47:26.159231 sshd[4237]: Accepted publickey for core from 10.0.0.1 port 53514 ssh2: RSA SHA256:ijJ+MGEzuaViQlbMobaPQLogVrdZNUxIS0COzLzMHAQ Feb 13 20:47:26.160439 sshd[4237]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 20:47:26.163990 systemd-logind[1420]: New session 97 of user core. Feb 13 20:47:26.177101 systemd[1]: Started session-97.scope - Session 97 of User core. Feb 13 20:47:26.280041 sshd[4237]: pam_unix(sshd:session): session closed for user core Feb 13 20:47:26.283393 systemd[1]: sshd@96-10.0.0.7:22-10.0.0.1:53514.service: Deactivated successfully. Feb 13 20:47:26.285685 systemd[1]: session-97.scope: Deactivated successfully. Feb 13 20:47:26.286658 systemd-logind[1420]: Session 97 logged out. Waiting for processes to exit. Feb 13 20:47:26.287548 systemd-logind[1420]: Removed session 97. Feb 13 20:47:28.645787 kubelet[2508]: E0213 20:47:28.645743 2508 kubelet.go:2900] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Feb 13 20:47:30.512966 kubelet[2508]: E0213 20:47:30.512725 2508 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 20:47:30.512966 kubelet[2508]: E0213 20:47:30.512839 2508 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 20:47:30.514146 kubelet[2508]: E0213 20:47:30.514102 2508 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"install-cni-plugin\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/flannel/flannel-cni-plugin:v1.1.2\\\"\"" pod="kube-flannel/kube-flannel-ds-q7x5h" podUID="fba936b1-e9bf-4b9d-8ced-8a880a98539b" Feb 13 20:47:31.294667 systemd[1]: Started sshd@97-10.0.0.7:22-10.0.0.1:53526.service - OpenSSH per-connection server daemon (10.0.0.1:53526). Feb 13 20:47:31.328747 sshd[4252]: Accepted publickey for core from 10.0.0.1 port 53526 ssh2: RSA SHA256:ijJ+MGEzuaViQlbMobaPQLogVrdZNUxIS0COzLzMHAQ Feb 13 20:47:31.329918 sshd[4252]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 20:47:31.333724 systemd-logind[1420]: New session 98 of user core. 
Feb 13 20:47:31.343061 systemd[1]: Started session-98.scope - Session 98 of User core. Feb 13 20:47:31.449540 sshd[4252]: pam_unix(sshd:session): session closed for user core Feb 13 20:47:31.452615 systemd[1]: sshd@97-10.0.0.7:22-10.0.0.1:53526.service: Deactivated successfully. Feb 13 20:47:31.454228 systemd[1]: session-98.scope: Deactivated successfully. Feb 13 20:47:31.455601 systemd-logind[1420]: Session 98 logged out. Waiting for processes to exit. Feb 13 20:47:31.456437 systemd-logind[1420]: Removed session 98. Feb 13 20:47:33.647431 kubelet[2508]: E0213 20:47:33.647383 2508 kubelet.go:2900] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Feb 13 20:47:33.677778 update_engine[1429]: I20250213 20:47:33.677284 1429 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Feb 13 20:47:33.677778 update_engine[1429]: I20250213 20:47:33.677573 1429 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Feb 13 20:47:33.677778 update_engine[1429]: I20250213 20:47:33.677736 1429 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. Feb 13 20:47:33.681256 update_engine[1429]: E20250213 20:47:33.681177 1429 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Feb 13 20:47:33.681256 update_engine[1429]: I20250213 20:47:33.681233 1429 libcurl_http_fetcher.cc:283] No HTTP response, retry 3 Feb 13 20:47:34.513432 kubelet[2508]: E0213 20:47:34.513383 2508 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 20:47:36.459683 systemd[1]: Started sshd@98-10.0.0.7:22-10.0.0.1:51664.service - OpenSSH per-connection server daemon (10.0.0.1:51664). Feb 13 20:47:36.493560 sshd[4266]: Accepted publickey for core from 10.0.0.1 port 51664 ssh2: RSA SHA256:ijJ+MGEzuaViQlbMobaPQLogVrdZNUxIS0COzLzMHAQ Feb 13 20:47:36.494751 sshd[4266]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 20:47:36.498131 systemd-logind[1420]: New session 99 of user core. Feb 13 20:47:36.505063 systemd[1]: Started session-99.scope - Session 99 of User core. Feb 13 20:47:36.610263 sshd[4266]: pam_unix(sshd:session): session closed for user core Feb 13 20:47:36.613337 systemd[1]: sshd@98-10.0.0.7:22-10.0.0.1:51664.service: Deactivated successfully. Feb 13 20:47:36.616702 systemd[1]: session-99.scope: Deactivated successfully. Feb 13 20:47:36.617452 systemd-logind[1420]: Session 99 logged out. Waiting for processes to exit. Feb 13 20:47:36.618250 systemd-logind[1420]: Removed session 99. 
Feb 13 20:47:38.648473 kubelet[2508]: E0213 20:47:38.648422 2508 kubelet.go:2900] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Feb 13 20:47:41.512526 kubelet[2508]: E0213 20:47:41.512492 2508 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 20:47:41.513328 kubelet[2508]: E0213 20:47:41.513304 2508 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"install-cni-plugin\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/flannel/flannel-cni-plugin:v1.1.2\\\"\"" pod="kube-flannel/kube-flannel-ds-q7x5h" podUID="fba936b1-e9bf-4b9d-8ced-8a880a98539b" Feb 13 20:47:41.620434 systemd[1]: Started sshd@99-10.0.0.7:22-10.0.0.1:51668.service - OpenSSH per-connection server daemon (10.0.0.1:51668). Feb 13 20:47:41.654783 sshd[4282]: Accepted publickey for core from 10.0.0.1 port 51668 ssh2: RSA SHA256:ijJ+MGEzuaViQlbMobaPQLogVrdZNUxIS0COzLzMHAQ Feb 13 20:47:41.656020 sshd[4282]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 20:47:41.659627 systemd-logind[1420]: New session 100 of user core. Feb 13 20:47:41.667066 systemd[1]: Started session-100.scope - Session 100 of User core. Feb 13 20:47:41.773745 sshd[4282]: pam_unix(sshd:session): session closed for user core Feb 13 20:47:41.777156 systemd[1]: sshd@99-10.0.0.7:22-10.0.0.1:51668.service: Deactivated successfully. Feb 13 20:47:41.778871 systemd[1]: session-100.scope: Deactivated successfully. Feb 13 20:47:41.781913 systemd-logind[1420]: Session 100 logged out. Waiting for processes to exit. Feb 13 20:47:41.782803 systemd-logind[1420]: Removed session 100. Feb 13 20:47:42.513035 kubelet[2508]: E0213 20:47:42.512958 2508 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 20:47:43.650084 kubelet[2508]: E0213 20:47:43.650052 2508 kubelet.go:2900] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Feb 13 20:47:43.676483 update_engine[1429]: I20250213 20:47:43.676393 1429 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Feb 13 20:47:43.676774 update_engine[1429]: I20250213 20:47:43.676717 1429 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Feb 13 20:47:43.676915 update_engine[1429]: I20250213 20:47:43.676875 1429 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. Feb 13 20:47:43.681037 update_engine[1429]: E20250213 20:47:43.681000 1429 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Feb 13 20:47:43.681154 update_engine[1429]: I20250213 20:47:43.681054 1429 libcurl_http_fetcher.cc:297] Transfer resulted in an error (0), 0 bytes downloaded Feb 13 20:47:43.681154 update_engine[1429]: I20250213 20:47:43.681064 1429 omaha_request_action.cc:617] Omaha request response: Feb 13 20:47:43.681154 update_engine[1429]: E20250213 20:47:43.681136 1429 omaha_request_action.cc:636] Omaha request network transfer failed. Feb 13 20:47:43.681154 update_engine[1429]: I20250213 20:47:43.681152 1429 action_processor.cc:68] ActionProcessor::ActionComplete: OmahaRequestAction action failed. Aborting processing. 
Feb 13 20:47:43.681261 update_engine[1429]: I20250213 20:47:43.681157 1429 action_processor.cc:73] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction Feb 13 20:47:43.681261 update_engine[1429]: I20250213 20:47:43.681162 1429 update_attempter.cc:306] Processing Done. Feb 13 20:47:43.681261 update_engine[1429]: E20250213 20:47:43.681175 1429 update_attempter.cc:619] Update failed. Feb 13 20:47:43.681261 update_engine[1429]: I20250213 20:47:43.681180 1429 utils.cc:600] Converting error code 2000 to kActionCodeOmahaErrorInHTTPResponse Feb 13 20:47:43.681261 update_engine[1429]: I20250213 20:47:43.681185 1429 payload_state.cc:97] Updating payload state for error code: 37 (kActionCodeOmahaErrorInHTTPResponse) Feb 13 20:47:43.681261 update_engine[1429]: I20250213 20:47:43.681190 1429 payload_state.cc:103] Ignoring failures until we get a valid Omaha response. Feb 13 20:47:43.681261 update_engine[1429]: I20250213 20:47:43.681252 1429 action_processor.cc:36] ActionProcessor::StartProcessing: OmahaRequestAction Feb 13 20:47:43.681393 update_engine[1429]: I20250213 20:47:43.681272 1429 omaha_request_action.cc:271] Posting an Omaha request to disabled Feb 13 20:47:43.681393 update_engine[1429]: I20250213 20:47:43.681278 1429 omaha_request_action.cc:272] Request: <?xml version="1.0" encoding="UTF-8"?> Feb 13 20:47:43.681393 update_engine[1429]: <request protocol="3.0" version="update_engine-0.4.10" updaterversion="update_engine-0.4.10" installsource="scheduler" ismachine="1"> Feb 13 20:47:43.681393 update_engine[1429]: <os version="Chateau" platform="CoreOS" sp="4081.3.1_aarch64"></os> Feb 13 20:47:43.681393 update_engine[1429]: <app appid="{e96281a6-d1af-4bde-9a0a-97b76e56dc57}" version="4081.3.1" track="lts" bootid="{071702f8-a9c6-41ba-a189-e67ff9f11a4d}" oem="" oemversion="" alephversion="4081.3.1" machineid="f498f856d3f74f629d53ba2727f625c2" machinealias="" lang="en-US" board="arm64-usr" hardware_class="" delta_okay="false" > Feb 13 20:47:43.681393 update_engine[1429]: <event eventtype="3" eventresult="0" errorcode="268437456"></event> Feb 13 20:47:43.681393 update_engine[1429]: </app> Feb 13 20:47:43.681393 update_engine[1429]: </request> Feb 13 20:47:43.681393 update_engine[1429]: I20250213 20:47:43.681283 1429 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Feb 13 20:47:43.681555 update_engine[1429]: I20250213 20:47:43.681416 1429 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Feb 13 20:47:43.681725 update_engine[1429]: I20250213 20:47:43.681536 1429 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. 
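After the retries are exhausted, update_engine reports the failure back as an Omaha error event (eventtype 3, eventresult 0). Its errorcode, 268437456, appears to be the internal code 2000 from the "Converting error code 2000 ..." line with a high marker bit OR'd in; the flag's exact meaning is not stated in the log and is left unnamed here, but the arithmetic is easy to verify:

# Sketch: decompose the errorcode reported in the Omaha error event above.
# The interpretation as base code + marker flag is an inference; only the
# numbers themselves are taken from the log.
reported = 268437456              # errorcode="268437456" in the <event> element
flag = 0x10000000                 # high marker bit
base = reported & ~flag

print(hex(reported))              # 0x100007d0
print(base)                       # 2000 -> matches "Converting error code 2000"
print(reported == (base | flag))  # True

This lines up with the payload-state line reporting code 37 (kActionCodeOmahaErrorInHTTPResponse) for the same failed check.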
Feb 13 20:47:43.681884 locksmithd[1459]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_REPORTING_ERROR_EVENT" NewVersion=0.0.0 NewSize=0 Feb 13 20:47:43.686855 update_engine[1429]: E20250213 20:47:43.686808 1429 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Feb 13 20:47:43.686855 update_engine[1429]: I20250213 20:47:43.686857 1429 libcurl_http_fetcher.cc:297] Transfer resulted in an error (0), 0 bytes downloaded Feb 13 20:47:43.687132 update_engine[1429]: I20250213 20:47:43.686866 1429 omaha_request_action.cc:617] Omaha request response: Feb 13 20:47:43.687132 update_engine[1429]: I20250213 20:47:43.686872 1429 action_processor.cc:65] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction Feb 13 20:47:43.687132 update_engine[1429]: I20250213 20:47:43.686877 1429 action_processor.cc:73] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction Feb 13 20:47:43.687132 update_engine[1429]: I20250213 20:47:43.686882 1429 update_attempter.cc:306] Processing Done. Feb 13 20:47:43.687132 update_engine[1429]: I20250213 20:47:43.686885 1429 update_attempter.cc:310] Error event sent. Feb 13 20:47:43.687132 update_engine[1429]: I20250213 20:47:43.686893 1429 update_check_scheduler.cc:74] Next update check in 48m24s Feb 13 20:47:43.687254 locksmithd[1459]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_IDLE" NewVersion=0.0.0 NewSize=0 Feb 13 20:47:46.784588 systemd[1]: Started sshd@100-10.0.0.7:22-10.0.0.1:56872.service - OpenSSH per-connection server daemon (10.0.0.1:56872). Feb 13 20:47:46.818874 sshd[4297]: Accepted publickey for core from 10.0.0.1 port 56872 ssh2: RSA SHA256:ijJ+MGEzuaViQlbMobaPQLogVrdZNUxIS0COzLzMHAQ Feb 13 20:47:46.820079 sshd[4297]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 20:47:46.823323 systemd-logind[1420]: New session 101 of user core. Feb 13 20:47:46.844079 systemd[1]: Started session-101.scope - Session 101 of User core. Feb 13 20:47:46.960626 sshd[4297]: pam_unix(sshd:session): session closed for user core Feb 13 20:47:46.963183 systemd[1]: sshd@100-10.0.0.7:22-10.0.0.1:56872.service: Deactivated successfully. Feb 13 20:47:46.965132 systemd[1]: session-101.scope: Deactivated successfully. Feb 13 20:47:46.966638 systemd-logind[1420]: Session 101 logged out. Waiting for processes to exit. Feb 13 20:47:46.967633 systemd-logind[1420]: Removed session 101. Feb 13 20:47:48.651303 kubelet[2508]: E0213 20:47:48.651258 2508 kubelet.go:2900] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Feb 13 20:47:51.971543 systemd[1]: Started sshd@101-10.0.0.7:22-10.0.0.1:56884.service - OpenSSH per-connection server daemon (10.0.0.1:56884). Feb 13 20:47:52.006015 sshd[4311]: Accepted publickey for core from 10.0.0.1 port 56884 ssh2: RSA SHA256:ijJ+MGEzuaViQlbMobaPQLogVrdZNUxIS0COzLzMHAQ Feb 13 20:47:52.007389 sshd[4311]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 20:47:52.011061 systemd-logind[1420]: New session 102 of user core. Feb 13 20:47:52.022074 systemd[1]: Started session-102.scope - Session 102 of User core. Feb 13 20:47:52.124980 sshd[4311]: pam_unix(sshd:session): session closed for user core Feb 13 20:47:52.128208 systemd[1]: sshd@101-10.0.0.7:22-10.0.0.1:56884.service: Deactivated successfully. 
Feb 13 20:47:52.130301 systemd[1]: session-102.scope: Deactivated successfully. Feb 13 20:47:52.131061 systemd-logind[1420]: Session 102 logged out. Waiting for processes to exit. Feb 13 20:47:52.131967 systemd-logind[1420]: Removed session 102. Feb 13 20:47:53.652401 kubelet[2508]: E0213 20:47:53.652343 2508 kubelet.go:2900] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Feb 13 20:47:54.513081 kubelet[2508]: E0213 20:47:54.512824 2508 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 20:47:54.513454 kubelet[2508]: E0213 20:47:54.513375 2508 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"install-cni-plugin\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/flannel/flannel-cni-plugin:v1.1.2\\\"\"" pod="kube-flannel/kube-flannel-ds-q7x5h" podUID="fba936b1-e9bf-4b9d-8ced-8a880a98539b" Feb 13 20:47:57.135349 systemd[1]: Started sshd@102-10.0.0.7:22-10.0.0.1:38822.service - OpenSSH per-connection server daemon (10.0.0.1:38822). Feb 13 20:47:57.169561 sshd[4328]: Accepted publickey for core from 10.0.0.1 port 38822 ssh2: RSA SHA256:ijJ+MGEzuaViQlbMobaPQLogVrdZNUxIS0COzLzMHAQ Feb 13 20:47:57.171057 sshd[4328]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 20:47:57.174775 systemd-logind[1420]: New session 103 of user core. Feb 13 20:47:57.184063 systemd[1]: Started session-103.scope - Session 103 of User core. Feb 13 20:47:57.286749 sshd[4328]: pam_unix(sshd:session): session closed for user core Feb 13 20:47:57.290052 systemd[1]: sshd@102-10.0.0.7:22-10.0.0.1:38822.service: Deactivated successfully. Feb 13 20:47:57.292841 systemd[1]: session-103.scope: Deactivated successfully. Feb 13 20:47:57.293874 systemd-logind[1420]: Session 103 logged out. Waiting for processes to exit. Feb 13 20:47:57.295270 systemd-logind[1420]: Removed session 103. Feb 13 20:47:58.653295 kubelet[2508]: E0213 20:47:58.653247 2508 kubelet.go:2900] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Feb 13 20:48:02.296661 systemd[1]: Started sshd@103-10.0.0.7:22-10.0.0.1:38824.service - OpenSSH per-connection server daemon (10.0.0.1:38824). Feb 13 20:48:02.330720 sshd[4343]: Accepted publickey for core from 10.0.0.1 port 38824 ssh2: RSA SHA256:ijJ+MGEzuaViQlbMobaPQLogVrdZNUxIS0COzLzMHAQ Feb 13 20:48:02.331884 sshd[4343]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 20:48:02.335187 systemd-logind[1420]: New session 104 of user core. Feb 13 20:48:02.344116 systemd[1]: Started session-104.scope - Session 104 of User core. Feb 13 20:48:02.449925 sshd[4343]: pam_unix(sshd:session): session closed for user core Feb 13 20:48:02.453120 systemd[1]: sshd@103-10.0.0.7:22-10.0.0.1:38824.service: Deactivated successfully. Feb 13 20:48:02.455591 systemd[1]: session-104.scope: Deactivated successfully. Feb 13 20:48:02.456447 systemd-logind[1420]: Session 104 logged out. Waiting for processes to exit. Feb 13 20:48:02.457416 systemd-logind[1420]: Removed session 104. 
Feb 13 20:48:03.513227 kubelet[2508]: E0213 20:48:03.513187 2508 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 20:48:03.654469 kubelet[2508]: E0213 20:48:03.654425 2508 kubelet.go:2900] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Feb 13 20:48:07.460693 systemd[1]: Started sshd@104-10.0.0.7:22-10.0.0.1:33024.service - OpenSSH per-connection server daemon (10.0.0.1:33024). Feb 13 20:48:07.494842 sshd[4357]: Accepted publickey for core from 10.0.0.1 port 33024 ssh2: RSA SHA256:ijJ+MGEzuaViQlbMobaPQLogVrdZNUxIS0COzLzMHAQ Feb 13 20:48:07.496045 sshd[4357]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 20:48:07.499782 systemd-logind[1420]: New session 105 of user core. Feb 13 20:48:07.511082 systemd[1]: Started session-105.scope - Session 105 of User core. Feb 13 20:48:07.512221 kubelet[2508]: E0213 20:48:07.512091 2508 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 20:48:07.513107 kubelet[2508]: E0213 20:48:07.512718 2508 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"install-cni-plugin\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/flannel/flannel-cni-plugin:v1.1.2\\\"\"" pod="kube-flannel/kube-flannel-ds-q7x5h" podUID="fba936b1-e9bf-4b9d-8ced-8a880a98539b" Feb 13 20:48:07.615720 sshd[4357]: pam_unix(sshd:session): session closed for user core Feb 13 20:48:07.619010 systemd[1]: sshd@104-10.0.0.7:22-10.0.0.1:33024.service: Deactivated successfully. Feb 13 20:48:07.620706 systemd[1]: session-105.scope: Deactivated successfully. Feb 13 20:48:07.622092 systemd-logind[1420]: Session 105 logged out. Waiting for processes to exit. Feb 13 20:48:07.622916 systemd-logind[1420]: Removed session 105. Feb 13 20:48:08.655624 kubelet[2508]: E0213 20:48:08.655579 2508 kubelet.go:2900] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Feb 13 20:48:12.630475 systemd[1]: Started sshd@105-10.0.0.7:22-10.0.0.1:36264.service - OpenSSH per-connection server daemon (10.0.0.1:36264). Feb 13 20:48:12.664690 sshd[4371]: Accepted publickey for core from 10.0.0.1 port 36264 ssh2: RSA SHA256:ijJ+MGEzuaViQlbMobaPQLogVrdZNUxIS0COzLzMHAQ Feb 13 20:48:12.665976 sshd[4371]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 20:48:12.669344 systemd-logind[1420]: New session 106 of user core. Feb 13 20:48:12.680211 systemd[1]: Started session-106.scope - Session 106 of User core. Feb 13 20:48:12.784245 sshd[4371]: pam_unix(sshd:session): session closed for user core Feb 13 20:48:12.787459 systemd[1]: sshd@105-10.0.0.7:22-10.0.0.1:36264.service: Deactivated successfully. Feb 13 20:48:12.789818 systemd[1]: session-106.scope: Deactivated successfully. Feb 13 20:48:12.790898 systemd-logind[1420]: Session 106 logged out. Waiting for processes to exit. Feb 13 20:48:12.792215 systemd-logind[1420]: Removed session 106. 
Feb 13 20:48:13.657129 kubelet[2508]: E0213 20:48:13.657075 2508 kubelet.go:2900] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Feb 13 20:48:17.795719 systemd[1]: Started sshd@106-10.0.0.7:22-10.0.0.1:36278.service - OpenSSH per-connection server daemon (10.0.0.1:36278).
Feb 13 20:48:17.830547 sshd[4385]: Accepted publickey for core from 10.0.0.1 port 36278 ssh2: RSA SHA256:ijJ+MGEzuaViQlbMobaPQLogVrdZNUxIS0COzLzMHAQ
Feb 13 20:48:17.831743 sshd[4385]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 20:48:17.835635 systemd-logind[1420]: New session 107 of user core.
Feb 13 20:48:17.844059 systemd[1]: Started session-107.scope - Session 107 of User core.
Feb 13 20:48:17.947826 sshd[4385]: pam_unix(sshd:session): session closed for user core
Feb 13 20:48:17.950985 systemd[1]: sshd@106-10.0.0.7:22-10.0.0.1:36278.service: Deactivated successfully.
Feb 13 20:48:17.952525 systemd[1]: session-107.scope: Deactivated successfully.
Feb 13 20:48:17.954382 systemd-logind[1420]: Session 107 logged out. Waiting for processes to exit.
Feb 13 20:48:17.955451 systemd-logind[1420]: Removed session 107.
Feb 13 20:48:18.512418 kubelet[2508]: E0213 20:48:18.512379 2508 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 13 20:48:18.512418 kubelet[2508]: E0213 20:48:18.512999 2508 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"install-cni-plugin\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/flannel/flannel-cni-plugin:v1.1.2\\\"\"" pod="kube-flannel/kube-flannel-ds-q7x5h" podUID="fba936b1-e9bf-4b9d-8ced-8a880a98539b"
Feb 13 20:48:18.658694 kubelet[2508]: E0213 20:48:18.658650 2508 kubelet.go:2900] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Feb 13 20:48:22.958391 systemd[1]: Started sshd@107-10.0.0.7:22-10.0.0.1:50562.service - OpenSSH per-connection server daemon (10.0.0.1:50562).
Feb 13 20:48:22.992625 sshd[4399]: Accepted publickey for core from 10.0.0.1 port 50562 ssh2: RSA SHA256:ijJ+MGEzuaViQlbMobaPQLogVrdZNUxIS0COzLzMHAQ
Feb 13 20:48:22.993811 sshd[4399]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 20:48:22.997562 systemd-logind[1420]: New session 108 of user core.
Feb 13 20:48:23.008053 systemd[1]: Started session-108.scope - Session 108 of User core.
Feb 13 20:48:23.108589 sshd[4399]: pam_unix(sshd:session): session closed for user core
Feb 13 20:48:23.111622 systemd[1]: sshd@107-10.0.0.7:22-10.0.0.1:50562.service: Deactivated successfully.
Feb 13 20:48:23.113181 systemd[1]: session-108.scope: Deactivated successfully.
Feb 13 20:48:23.114628 systemd-logind[1420]: Session 108 logged out. Waiting for processes to exit.
Feb 13 20:48:23.115456 systemd-logind[1420]: Removed session 108.
Feb 13 20:48:23.660032 kubelet[2508]: E0213 20:48:23.659995 2508 kubelet.go:2900] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Feb 13 20:48:28.122499 systemd[1]: Started sshd@108-10.0.0.7:22-10.0.0.1:50568.service - OpenSSH per-connection server daemon (10.0.0.1:50568).
Feb 13 20:48:28.156595 sshd[4415]: Accepted publickey for core from 10.0.0.1 port 50568 ssh2: RSA SHA256:ijJ+MGEzuaViQlbMobaPQLogVrdZNUxIS0COzLzMHAQ
Feb 13 20:48:28.157833 sshd[4415]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 20:48:28.161414 systemd-logind[1420]: New session 109 of user core.
Feb 13 20:48:28.173125 systemd[1]: Started session-109.scope - Session 109 of User core.
Feb 13 20:48:28.274532 sshd[4415]: pam_unix(sshd:session): session closed for user core
Feb 13 20:48:28.277675 systemd[1]: sshd@108-10.0.0.7:22-10.0.0.1:50568.service: Deactivated successfully.
Feb 13 20:48:28.279311 systemd[1]: session-109.scope: Deactivated successfully.
Feb 13 20:48:28.280375 systemd-logind[1420]: Session 109 logged out. Waiting for processes to exit.
Feb 13 20:48:28.281141 systemd-logind[1420]: Removed session 109.
Feb 13 20:48:28.661061 kubelet[2508]: E0213 20:48:28.661019 2508 kubelet.go:2900] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Feb 13 20:48:31.512186 kubelet[2508]: E0213 20:48:31.512145 2508 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 13 20:48:31.513066 kubelet[2508]: E0213 20:48:31.512788 2508 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"install-cni-plugin\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/flannel/flannel-cni-plugin:v1.1.2\\\"\"" pod="kube-flannel/kube-flannel-ds-q7x5h" podUID="fba936b1-e9bf-4b9d-8ced-8a880a98539b"
Feb 13 20:48:33.284464 systemd[1]: Started sshd@109-10.0.0.7:22-10.0.0.1:43624.service - OpenSSH per-connection server daemon (10.0.0.1:43624).
Feb 13 20:48:33.318683 sshd[4429]: Accepted publickey for core from 10.0.0.1 port 43624 ssh2: RSA SHA256:ijJ+MGEzuaViQlbMobaPQLogVrdZNUxIS0COzLzMHAQ
Feb 13 20:48:33.319815 sshd[4429]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 20:48:33.323021 systemd-logind[1420]: New session 110 of user core.
Feb 13 20:48:33.328065 systemd[1]: Started session-110.scope - Session 110 of User core.
Feb 13 20:48:33.428750 sshd[4429]: pam_unix(sshd:session): session closed for user core
Feb 13 20:48:33.432237 systemd[1]: sshd@109-10.0.0.7:22-10.0.0.1:43624.service: Deactivated successfully.
Feb 13 20:48:33.433773 systemd[1]: session-110.scope: Deactivated successfully.
Feb 13 20:48:33.434981 systemd-logind[1420]: Session 110 logged out. Waiting for processes to exit.
Feb 13 20:48:33.435953 systemd-logind[1420]: Removed session 110.
Feb 13 20:48:33.661697 kubelet[2508]: E0213 20:48:33.661664 2508 kubelet.go:2900] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Feb 13 20:48:38.441485 systemd[1]: Started sshd@110-10.0.0.7:22-10.0.0.1:43634.service - OpenSSH per-connection server daemon (10.0.0.1:43634).
Feb 13 20:48:38.476949 sshd[4443]: Accepted publickey for core from 10.0.0.1 port 43634 ssh2: RSA SHA256:ijJ+MGEzuaViQlbMobaPQLogVrdZNUxIS0COzLzMHAQ
Feb 13 20:48:38.478132 sshd[4443]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 20:48:38.482447 systemd-logind[1420]: New session 111 of user core.
Feb 13 20:48:38.494063 systemd[1]: Started session-111.scope - Session 111 of User core.
Feb 13 20:48:38.597948 sshd[4443]: pam_unix(sshd:session): session closed for user core
Feb 13 20:48:38.601738 systemd[1]: sshd@110-10.0.0.7:22-10.0.0.1:43634.service: Deactivated successfully.
Feb 13 20:48:38.603145 systemd[1]: session-111.scope: Deactivated successfully.
Feb 13 20:48:38.604651 systemd-logind[1420]: Session 111 logged out. Waiting for processes to exit.
Feb 13 20:48:38.605685 systemd-logind[1420]: Removed session 111.
Feb 13 20:48:38.662573 kubelet[2508]: E0213 20:48:38.662543 2508 kubelet.go:2900] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Feb 13 20:48:43.513022 kubelet[2508]: E0213 20:48:43.512987 2508 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 13 20:48:43.513387 kubelet[2508]: E0213 20:48:43.512992 2508 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 13 20:48:43.513886 kubelet[2508]: E0213 20:48:43.513861 2508 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"install-cni-plugin\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/flannel/flannel-cni-plugin:v1.1.2\\\"\"" pod="kube-flannel/kube-flannel-ds-q7x5h" podUID="fba936b1-e9bf-4b9d-8ced-8a880a98539b"
Feb 13 20:48:43.608740 systemd[1]: Started sshd@111-10.0.0.7:22-10.0.0.1:44422.service - OpenSSH per-connection server daemon (10.0.0.1:44422).
Feb 13 20:48:43.642746 sshd[4459]: Accepted publickey for core from 10.0.0.1 port 44422 ssh2: RSA SHA256:ijJ+MGEzuaViQlbMobaPQLogVrdZNUxIS0COzLzMHAQ
Feb 13 20:48:43.643965 sshd[4459]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 20:48:43.647866 systemd-logind[1420]: New session 112 of user core.
Feb 13 20:48:43.663359 kubelet[2508]: E0213 20:48:43.663330 2508 kubelet.go:2900] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Feb 13 20:48:43.665074 systemd[1]: Started session-112.scope - Session 112 of User core.
Feb 13 20:48:43.765047 sshd[4459]: pam_unix(sshd:session): session closed for user core
Feb 13 20:48:43.768844 systemd[1]: sshd@111-10.0.0.7:22-10.0.0.1:44422.service: Deactivated successfully.
Feb 13 20:48:43.771090 systemd[1]: session-112.scope: Deactivated successfully.
Feb 13 20:48:43.771871 systemd-logind[1420]: Session 112 logged out. Waiting for processes to exit.
Feb 13 20:48:43.772796 systemd-logind[1420]: Removed session 112.
Feb 13 20:48:48.664603 kubelet[2508]: E0213 20:48:48.664538 2508 kubelet.go:2900] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Feb 13 20:48:48.775452 systemd[1]: Started sshd@112-10.0.0.7:22-10.0.0.1:44430.service - OpenSSH per-connection server daemon (10.0.0.1:44430).
Feb 13 20:48:48.809549 sshd[4473]: Accepted publickey for core from 10.0.0.1 port 44430 ssh2: RSA SHA256:ijJ+MGEzuaViQlbMobaPQLogVrdZNUxIS0COzLzMHAQ
Feb 13 20:48:48.810682 sshd[4473]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 20:48:48.814589 systemd-logind[1420]: New session 113 of user core.
Feb 13 20:48:48.823071 systemd[1]: Started session-113.scope - Session 113 of User core.
Feb 13 20:48:48.925886 sshd[4473]: pam_unix(sshd:session): session closed for user core
Feb 13 20:48:48.929276 systemd[1]: sshd@112-10.0.0.7:22-10.0.0.1:44430.service: Deactivated successfully.
Feb 13 20:48:48.931443 systemd[1]: session-113.scope: Deactivated successfully.
Feb 13 20:48:48.932231 systemd-logind[1420]: Session 113 logged out. Waiting for processes to exit.
Feb 13 20:48:48.933047 systemd-logind[1420]: Removed session 113.
Feb 13 20:48:50.513442 kubelet[2508]: E0213 20:48:50.513351 2508 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 13 20:48:53.666228 kubelet[2508]: E0213 20:48:53.666169 2508 kubelet.go:2900] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Feb 13 20:48:53.936521 systemd[1]: Started sshd@113-10.0.0.7:22-10.0.0.1:46902.service - OpenSSH per-connection server daemon (10.0.0.1:46902).
Feb 13 20:48:53.970904 sshd[4488]: Accepted publickey for core from 10.0.0.1 port 46902 ssh2: RSA SHA256:ijJ+MGEzuaViQlbMobaPQLogVrdZNUxIS0COzLzMHAQ
Feb 13 20:48:53.972113 sshd[4488]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 20:48:53.975503 systemd-logind[1420]: New session 114 of user core.
Feb 13 20:48:53.985069 systemd[1]: Started session-114.scope - Session 114 of User core.
Feb 13 20:48:54.087862 sshd[4488]: pam_unix(sshd:session): session closed for user core
Feb 13 20:48:54.091340 systemd[1]: sshd@113-10.0.0.7:22-10.0.0.1:46902.service: Deactivated successfully.
Feb 13 20:48:54.093888 systemd[1]: session-114.scope: Deactivated successfully.
Feb 13 20:48:54.094990 systemd-logind[1420]: Session 114 logged out. Waiting for processes to exit.
Feb 13 20:48:54.095818 systemd-logind[1420]: Removed session 114.
Feb 13 20:48:54.512236 kubelet[2508]: E0213 20:48:54.512198 2508 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 13 20:48:55.512427 kubelet[2508]: E0213 20:48:55.512387 2508 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 13 20:48:55.513000 kubelet[2508]: E0213 20:48:55.512961 2508 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"install-cni-plugin\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/flannel/flannel-cni-plugin:v1.1.2\\\"\"" pod="kube-flannel/kube-flannel-ds-q7x5h" podUID="fba936b1-e9bf-4b9d-8ced-8a880a98539b"
Feb 13 20:48:58.667748 kubelet[2508]: E0213 20:48:58.667693 2508 kubelet.go:2900] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Feb 13 20:48:59.099795 systemd[1]: Started sshd@114-10.0.0.7:22-10.0.0.1:46912.service - OpenSSH per-connection server daemon (10.0.0.1:46912).
Feb 13 20:48:59.133754 sshd[4504]: Accepted publickey for core from 10.0.0.1 port 46912 ssh2: RSA SHA256:ijJ+MGEzuaViQlbMobaPQLogVrdZNUxIS0COzLzMHAQ
Feb 13 20:48:59.134910 sshd[4504]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 20:48:59.138109 systemd-logind[1420]: New session 115 of user core.
Feb 13 20:48:59.145075 systemd[1]: Started session-115.scope - Session 115 of User core.
Feb 13 20:48:59.246028 sshd[4504]: pam_unix(sshd:session): session closed for user core
Feb 13 20:48:59.249186 systemd[1]: sshd@114-10.0.0.7:22-10.0.0.1:46912.service: Deactivated successfully.
Feb 13 20:48:59.250823 systemd[1]: session-115.scope: Deactivated successfully.
Feb 13 20:48:59.252357 systemd-logind[1420]: Session 115 logged out. Waiting for processes to exit.
Feb 13 20:48:59.253651 systemd-logind[1420]: Removed session 115.
Feb 13 20:49:03.669330 kubelet[2508]: E0213 20:49:03.669243 2508 kubelet.go:2900] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Feb 13 20:49:04.256648 systemd[1]: Started sshd@115-10.0.0.7:22-10.0.0.1:37904.service - OpenSSH per-connection server daemon (10.0.0.1:37904).
Feb 13 20:49:04.291824 sshd[4519]: Accepted publickey for core from 10.0.0.1 port 37904 ssh2: RSA SHA256:ijJ+MGEzuaViQlbMobaPQLogVrdZNUxIS0COzLzMHAQ
Feb 13 20:49:04.293041 sshd[4519]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 20:49:04.296495 systemd-logind[1420]: New session 116 of user core.
Feb 13 20:49:04.313075 systemd[1]: Started session-116.scope - Session 116 of User core.
Feb 13 20:49:04.417633 sshd[4519]: pam_unix(sshd:session): session closed for user core
Feb 13 20:49:04.420978 systemd[1]: sshd@115-10.0.0.7:22-10.0.0.1:37904.service: Deactivated successfully.
Feb 13 20:49:04.422567 systemd[1]: session-116.scope: Deactivated successfully.
Feb 13 20:49:04.423655 systemd-logind[1420]: Session 116 logged out. Waiting for processes to exit.
Feb 13 20:49:04.424524 systemd-logind[1420]: Removed session 116.
Feb 13 20:49:08.670395 kubelet[2508]: E0213 20:49:08.670343 2508 kubelet.go:2900] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Feb 13 20:49:09.428494 systemd[1]: Started sshd@116-10.0.0.7:22-10.0.0.1:37910.service - OpenSSH per-connection server daemon (10.0.0.1:37910).
Feb 13 20:49:09.463671 sshd[4533]: Accepted publickey for core from 10.0.0.1 port 37910 ssh2: RSA SHA256:ijJ+MGEzuaViQlbMobaPQLogVrdZNUxIS0COzLzMHAQ
Feb 13 20:49:09.464865 sshd[4533]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 20:49:09.468345 systemd-logind[1420]: New session 117 of user core.
Feb 13 20:49:09.482065 systemd[1]: Started session-117.scope - Session 117 of User core.
Feb 13 20:49:09.512049 kubelet[2508]: E0213 20:49:09.512025 2508 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 13 20:49:09.512609 kubelet[2508]: E0213 20:49:09.512575 2508 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"install-cni-plugin\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/flannel/flannel-cni-plugin:v1.1.2\\\"\"" pod="kube-flannel/kube-flannel-ds-q7x5h" podUID="fba936b1-e9bf-4b9d-8ced-8a880a98539b"
Feb 13 20:49:09.583679 sshd[4533]: pam_unix(sshd:session): session closed for user core
Feb 13 20:49:09.586829 systemd[1]: sshd@116-10.0.0.7:22-10.0.0.1:37910.service: Deactivated successfully.
Feb 13 20:49:09.588766 systemd[1]: session-117.scope: Deactivated successfully.
Feb 13 20:49:09.589369 systemd-logind[1420]: Session 117 logged out. Waiting for processes to exit.
Feb 13 20:49:09.590509 systemd-logind[1420]: Removed session 117.